# Conditional Mutual Information Constrained Deep Learning for Classification En-Hui Yang, Shayan Mohajer Hamidi∗, Linfeng Ye∗, Renhao Tan, and Beverly Yang ∗ Authors contributed equally. En-Hui Yang, Shayan Mohajer Hamidi, Linfeng Ye and Renhao Tan are with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada (e-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>[email protected]). Beverly Yang is with the NBK Institute of Mining Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada (e-mail: [email protected]). ###### Abstract The concepts of conditional mutual information (CMI) and normalized conditional mutual information (NCMI) are introduced to measure the concentration and separation performance of a classification deep neural network (DNN) in the output probability distribution space of the DNN, where CMI and the ratio between CMI and NCMI represent the intra-class concentration and inter-class separation of the DNN, respectively. By using NCMI to evaluate popular DNNs pretrained over ImageNet in the literature, it is shown that their validation accuracies over ImageNet validation data set are more or less inversely proportional to their NCMI values. Based on this observation, the standard deep learning (DL) framework is further modified to minimize the standard cross entropy function subject to an NCMI constraint, yielding CMI constrained deep learning (CMIC-DL). A novel alternating learning algorithm is proposed to solve such a constrained optimization problem. Extensive experiment results show that DNNs trained within CMIC-DL outperform the state- of-the-art models trained within the standard DL and other loss functions in the literature in terms of both accuracy and robustness against adversarial attacks. In addition, visualizing the evolution of learning process through the lens of CMI and NCMI is also advocated. ###### Index Terms: Alternating minimization, concentration and separation, conditional mutual information, cross entropy, deep learning. ## I Introduction In recent years, deep neural networks (DNNs) have been applied in a wide range of applications, revolutionizing fields like computer vision, natural language processing, and speech recognition [1, 2]. Typically, a DNN consists of cascaded non-linear layers that progressively produce multi-layers of representations with increasing levels of abstraction, starting from raw input data and ending with a predicted output label. The success of DNNs is largely attributable to their ability to learn these multi-layers of representations as features from the raw data through a deep learning (DL) process. Putting its neural architecture aside, a classification DNN is, mathematically, a mapping from raw data $x\in\mathbb{R}^{d}$ to a probability distribution $P_{x}$ over the set of class labels, predicting an output label $\hat{y}$ with probability $P_{x}(\hat{y})$. Given a pair of random variables $(X,Y)$, the distribution of which governs either a training set or testing set, where $X\in\mathbb{R}^{d}$ represents the raw data and $Y$ is the ground truth label of $X$, the prediction performance of the DNN is often measured by its error rate $\epsilon=\Pr\\{\hat{Y}\not=Y\\},$ where $\hat{Y}$ is the label predicted by the DNN with probability $P_{X}(\hat{Y})$ in response to the input $X$. The accuracy of the DNN is equal to $1-\epsilon$. 
The error rate is further upper bounded by the average of the cross entropy between the conditional distribution of $Y$ given $X$ and $P_{X}$ (see Section II). To have better prediction performance, a DL process is then applied to minimize the error rate $\epsilon$ or its cross entropy upper bound [1, 2].

Although the error rate of a DNN is its most important performance metric as far as its prediction is concerned, focusing entirely on the error rate is not enough, and can actually lead to several problems. First, the error rate of a DNN depends not only on the DNN itself, but also on the governing joint distribution of $(X,Y)$. When a DNN has a small error rate for one governing joint distribution of $(X,Y)$, it does not necessarily imply that it would have a small error rate for another governing joint distribution of $(X,Y)$, especially when the two distributions are quite different. This is essentially related to the well-known overfitting and robustness problems [2, 3, 4, 5]. Second, even when a DNN works well across different governing distributions of $(X,Y)$, it remains a black box to us, especially when its architecture is huge. We do not know why it works or how it works. Its error rate does not reveal any useful information about the intrinsic mapping structure, such as the intra-class concentration and inter-class separation, of the DNN in its output probability distribution space.

To gain deep insights into the intrinsic mapping structure of a DNN as a mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$, in this paper we introduce information quantities from information theory [6] to measure the intra-class concentration and inter-class separation of the DNN. Specifically, we propose to use the conditional mutual information (CMI) $I(X;\hat{Y}|Y)$ between $X$ and $\hat{Y}$ given $Y$ as the measure for the intra-class concentration of the DNN as a mapping $x\in\mathbb{R}^{d}\to P_{x}$. For each class label $y$, the conditional mutual information $I(X;\hat{Y}|Y=y)$ between $X$ and $\hat{Y}$ given $Y=y$ tells how all output probability distributions $P_{X}$ given $Y=y$ are concentrated around their “centroid”, the conditional probability distribution $P_{\hat{Y}|Y=y}$. The smaller $I(X;\hat{Y}|Y=y)$ is, the more concentrated all output probability distributions $P_{X}$ given $Y=y$ are around their centroid. We further introduce another information quantity (see Section II) to measure the inter-class separation of the DNN as a mapping $x\in\mathbb{R}^{d}\to P_{x}$. Define the ratio between $I(X;\hat{Y}|Y)$ and the inter-class separation as the normalized conditional mutual information (NCMI) between $X$ and $\hat{Y}$ given $Y$. One may interpret CMI and NCMI as certain mapping structure traits of the DNN. Then in addition to its error rate, the DNN can also be evaluated in terms of its CMI and NCMI.

Equipped with our new concepts of CMI and NCMI, we further evaluate popular DNNs pretrained in the literature over ImageNet in terms of their respective CMI and NCMI. It turns out that their validation accuracies over the ImageNet validation data set are more or less inversely proportional to their NCMI values. In other words, even though these DNNs have different architectures and different sizes, their error rates and NCMI values have more or less a positive linear relationship. Indeed, the correlation between the error rate and NCMI is above $0.99$. 
This implies that given a DNN architecture, one may be able to further improve the effectiveness of DL by simultaneously minimizing the error rate (or cross entropy upper bound) and NCMI of the DNN during the learning process, where the error rate and NCMI represent the prediction performance and the concentration/separation mapping structure performance of the DNN, respectively. This in turn motivates us to modify the standard DL framework to minimize the standard cross entropy function subject to an NCMI constraint, yielding CMI constrained deep learning (CMIC-DL). To this end, a novel alternating learning algorithm is further proposed to solve such a constrained optimization problem. Extensive experiment results show that DNNs trained within CMIC-DL outperform the state-of-the-art models trained within the standard DL and other loss functions in the literature in terms of both accuracy and robustness against adversarial attacks. The remainder of this paper is organized as follows. In Section II, we formally introduce the concepts of CMI and NCMI to measure intra-class concentration and inter-class separation structure performance of a DNN when it is viewed as a mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$. In Section III, we use NCMI to evaluate and compare popular DNNs pretrained in the literature over ImageNet. These DNNs have different architectures and different sizes. Section IV is devoted to the full development of CMIC-DL. In Section V, extensive experiment results are presented and compared with the prior art in the literature; visualizing the evolution of learning process through the lens of CMI and NCMI is also advocated. Finally, conclusions are drawn along with some open problems in Section VI. ## II Performance of DNNs: Concentration and Separation A DNN can be described either by its neural architecture along with its connection weights, the number of which can be in billions, or by its mathematical mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$. Both perspectives are useful. In this and next sections, we will take the second perspective and regard a DNN simply as a mapping $x\in\mathbb{R}^{d}\to P_{x}$. Before formally introducing CMI and NCMI, we set up notation to be used throughout the paper. ### II-A Notation For a positive integer $K$, let $[K]\triangleq\\{1,\dots,K\\}$. Assume that there are $C$ class labels with $[C]$ as the set of class labels. Let ${\cal P}([C])$ denote the set of all probability distributions over $[C]$. For any two probability distributions $P_{1},P_{2}\in{\cal P}([C])$, the cross entropy of $P_{1}$ and $P_{2}$ is defined as $H(P_{1},P_{2})=\sum_{i=1}^{C}-P_{1}(i)\ln P_{2}(i),$ (1) where $\ln$ denotes the logarithm with base $e$; the Kullback–Leibler (KL) divergence (or relative entropy) between $P_{1}$ and $P_{2}$ is defined as $D(P_{1}||P_{2})=\sum_{i=1}^{C}P_{1}(i)\ln{P_{1}(i)\over P_{2}(i)}.$ (2) For any $y\in[C]$ and $P\in{\cal P}([C])$, write the cross entropy of the one- hot probability distribution corresponding to $y$ and $P$ as $H(y,P)=-\ln P(y).$ (3) Given a DNN: $x\in\mathbb{R}^{d}\to P_{x}$, let $\mathbf{\theta}$ denote its weight vector consisting of all its connection weights; whenever there is no ambiguity, we also write $P_{x}$ as $P_{x,\mathbf{\theta}}$, and $P_{x}(y)$ as $P(y|x,\mathbf{\theta})$ for any $y\in[C]$. ### II-B Error Rate Fix a DNN: $x\in\mathbb{R}^{d}\to P_{x}$. 
As before, let $(X,Y)$ be a pair of random variables representing the raw input data and the corresponding ground truth label; let $\hat{Y}$ be the label predicted by the DNN with probability $P_{X}(\hat{Y})$ in response to the input $X$, that is, for any input $x\in\mathbb{R}^{d}$ and any $\hat{y}\in[C]$ $P(\hat{Y}=\hat{y}|X=x)=P_{x}(\hat{y})=P(\hat{y}|x,\mathbf{\theta}).$ (4) Note that $Y\to X\to\hat{Y}$ forms a Markov chain in the indicated order. Therefore, given $X=x$, $Y$ and $\hat{Y}$ are conditionally independent. The error rate of the DNN for $(X,Y)$ is equal to $\epsilon=\Pr\\{\hat{Y}\not=Y\\}$ which can be upper bounded by the average of the cross entropy of the conditional probability distribution of $Y$ given $X$, $P_{Y|X}=P_{Y|X}(\cdot|X)$, and $P_{X}$, as shown in the following theorem. ###### Theorem 1. For any DNN: $x\in\mathbb{R}^{d}\to P_{x}$ and any $(X,Y)$, $\epsilon\leq\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right]$ (5) where $\mbox{$\bf E$}_{X}$ denotes the expectation with respect to $X$. ###### Proof: Let $I_{\\{\hat{Y}\not=Y\\}}$ denote the indicator function of the event $\\{\hat{Y}\not=Y\\}$. Then $\displaystyle\epsilon$ $\displaystyle=$ $\displaystyle\Pr\\{\hat{Y}\not=Y\\}$ $\displaystyle=$ $\displaystyle\mbox{$\bf E$}[I_{\\{\hat{Y}\not=Y\\}}]$ $\displaystyle=$ $\displaystyle\mbox{$\bf E$}_{X}\left[\mbox{$\bf E$}[I_{\\{\hat{Y}\not=Y\\}}|X]\right]$ $\displaystyle=$ $\displaystyle\mbox{$\bf E$}_{X}\left[1-\sum_{i=1}^{C}P_{Y|X}(i|X)P_{X}(i)\right]$ $\displaystyle=$ $\displaystyle\mbox{$\bf E$}_{X}\left[\sum_{i=1}^{C}P_{Y|X}(i|X)(1-P_{X}(i))\right]$ $\displaystyle\leq$ $\displaystyle\mbox{$\bf E$}_{X}\left[\sum_{i=1}^{C}-P_{Y|X}(i|X)\ln P_{X}(i)\right]$ (7) $\displaystyle=$ $\displaystyle\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right]$ (8) where (II-B) follows from the fact that $Y$ and $\hat{Y}$ are conditionally independent given $X$, and (7) is due to the inequality $\ln z\leq z-1$ for any $z>0$. This completes the proof of Theorem 1. ∎ Given $X=x$, what happens if the DNN outputs instead the top one label $\hat{Y}^{*}$ $\hat{Y}^{*}=\operatornamewithlimits{arg\,max}_{i\in[C]}P_{x}(i)?$ In this case, the error rate of the DNN for $(X,Y)$ is equal to $\epsilon^{*}=\Pr\\{\hat{Y}^{*}\not=Y\\}$ which can also be upper bounded in terms of $\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right]$. ###### Corollary 1. For any DNN: $x\in\mathbb{R}^{d}\to P_{x}$ and any $(X,Y)$, $\epsilon^{*}\leq C\epsilon\leq C\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right].$ (9) ###### Proof: $\displaystyle\epsilon^{*}$ $\displaystyle=$ $\displaystyle\Pr\\{\hat{Y}^{*}\not=Y\\}$ $\displaystyle=$ $\displaystyle\mbox{$\bf E$}_{X}\left[1-P_{Y|X}(\hat{Y}^{*}|X)\right]$ $\displaystyle\leq$ $\displaystyle C\mbox{$\bf E$}_{X}\left[P_{X}(\hat{Y}^{*})(1-P_{Y|X}(\hat{Y}^{*}|X))\right]$ $\displaystyle\leq$ $\displaystyle C\mbox{$\bf E$}_{X}\left[\sum_{i=1}^{C}P_{X}(i)(1-P_{Y|X}(i|X))\right]$ $\displaystyle=$ $\displaystyle C\mbox{$\bf E$}_{X}\left[1-\sum_{i=1}^{C}P_{Y|X}(i|X)P_{X}(i)\right]$ $\displaystyle=$ $\displaystyle C\epsilon\leq C\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right],$ (11) where (II-B) follows from the fact that $P_{X}(\hat{Y}^{*})\geq 1/C$, and (11) is due to (II-B) and (8). ∎ In view of Theorem 1 and Corollary 1, no matter which form of error rate $\epsilon$ or $\epsilon^{*}$ is used, minimizing the average of the cross entropy $\mbox{$\bf E$}_{X}[H(P_{Y|X},P_{X})]$ would have an effect to reduce $\epsilon$ and $\epsilon^{*}$. 
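To make Theorem 1 and Corollary 1 concrete, the short numerical sketch below (our own illustration, not part of the paper's experiments) draws a toy joint distribution of $(X,Y)$ and a toy mapping $x\to P_{x}$, and checks that $\epsilon\leq\mbox{$\bf E$}_{X}[H(P_{Y|X},P_{X})]$ and $\epsilon^{*}\leq C\epsilon$; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
C, n = 4, 6                                  # C class labels, n discrete inputs

# Toy joint distribution P(x, y) and the induced conditionals P(y | x).
P_xy = rng.dirichlet(np.ones(C * n)).reshape(n, C)
P_x_marg = P_xy.sum(axis=1)                  # P(x)
P_y_given_x = P_xy / P_x_marg[:, None]       # P(y | x)

# Toy DNN outputs: one probability distribution P_x over the C labels per input x.
P_out = rng.dirichlet(np.ones(C), size=n)

# Error rate eps = Pr{Yhat != Y} with Yhat drawn according to P_x.
eps = np.sum(P_x_marg * (1.0 - np.sum(P_y_given_x * P_out, axis=1)))

# Error rate eps* when the top-one label Yhat* is output instead (Corollary 1).
top1 = P_out.argmax(axis=1)
eps_star = np.sum(P_x_marg * (1.0 - P_y_given_x[np.arange(n), top1]))

# Cross-entropy upper bound E_X[H(P_{Y|X}, P_X)] from Theorem 1.
ce_bound = np.sum(P_x_marg * np.sum(-P_y_given_x * np.log(P_out), axis=1))

print(f"eps  = {eps:.4f}  <=  CE bound = {ce_bound:.4f}")    # Theorem 1
print(f"eps* = {eps_star:.4f}  <=  C*eps = {C * eps:.4f}")   # Corollary 1
```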
This provides mathematical justifications for the use of the average of the cross entropy $\mbox{$\bf E$}_{X}[H(P_{Y|X},P_{X})]$ as an objective function or a major component thereof in DL and knowledge distillation, where $P_{Y|X}$ is approximated by the one-hot probability vector corresponding to $Y$ in DL[1, 2], and by the output probability distribution of the teacher in knowledge distillation [7, 8, 9]. Figure 1: The mappings from the label space to the input space, and from the input space to the output space of a DNN. Here caricatures are used to depict label and input spaces, where each of the three instances in the label space are mapped to two instances in input space according to $P_{X|Y}(\cdot|Y=y_{i})$, for $i\in\\{1,2,3\\}$. On the other hand, the figure for the output space is obtained from a real example, where for the ResNet56 model trained on CIFAR-100 dataset, the output probability vectors corresponding to all validation sample instances from three randomly-picked classes are projected over the two-dimensional probability simplex. ### II-C Concentration The error rates $\epsilon$ and $\epsilon^{*}$ of the DNN: $x\in\mathbb{R}^{d}\to P_{x}$ for $(X,Y)$ do not provide any useful information on the intrinsic mapping structure of the DNN in the probability distribution space ${\cal P}([C])$. Two important mapping structure properties the DNN: $x\in\mathbb{R}^{d}\to P_{x}$ possesses, are its intra-class concentration and inter-class separation in the space ${\cal P}([C])$. In this and next subsections, we formally introduce information quantities to quantify these two mapping structure properties, respectively. Visualize the DNN: $x\in\mathbb{R}^{d}\to P_{x}$ according to Fig. 1. Given $Y=y$, $y\in[C]$, the input data $X$ is conditionally distributed according to the conditional distribution $P_{X|Y}(\cdot|y)$ and then mapped into $P_{X}$, a random point in the space ${\cal P}([C])$. The instances (or realizations) of this random point $P_{X}$ form a cluster in the space ${\cal P}([C])$. The centroid of this cluster is the average of $P_{X}$ with respect to the conditional distribution $P_{X|Y}(\cdot|y)$, which is exactly the conditional distribution of $\hat{Y}$ given $Y=y$ $P_{\hat{Y}|y}=\mbox{$\bf E$}[P_{X}|Y=y].$ (12) Measure the “distance” between each $P_{X}$ and the centroid $P_{\hat{Y}|y}$ by their KL divergence $D(P_{X}||P_{\hat{Y}|y})$. Then the average of KL divergence $D(P_{X}||P_{\hat{Y}|y})$ with respect to the conditional distribution $P_{X|Y}(\cdot|y)$ is equal to $\displaystyle\mbox{$\bf E$}\left[D(P_{X}||P_{\hat{Y}|y})|Y=y\right]$ (13) $\displaystyle=$ $\displaystyle\mbox{$\bf E$}\left[\left(\sum_{i=1}^{C}P_{X}(i)\ln{P_{X}(i)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right)\left|Y=y\right.\right]$ $\displaystyle=$ $\displaystyle\sum_{x}P_{X|Y}(x|y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x)\times\right.$ $\displaystyle\left.\ln{P(\hat{Y}=i|x)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]$ $\displaystyle=$ $\displaystyle I(X;\hat{Y}|y),$ (14) where $I(X;\hat{Y}|y)$ is the conditional mutual information between $X$ and $\hat{Y}$ given $Y=y$. (Please refer to [6] for the notions of mutual information and conditional mutual information.) In (13), $X$ is assumed to be discrete; if $X$ is continuous, then the average $\sum_{x}P_{X|Y}(x|y)$ should be replaced by the integral $\int_{x}dP_{X|Y}(x|y).$ Note that (14) is due to the fact that $Y\to X\to\hat{Y}$ forms a Markov chain. 
The information quantity $I(X;\hat{Y}|y)$ quantifies the concentration of the cluster formed by the instances of the random point $P_{X}$ given $Y=y$ around its centroid $P_{\hat{Y}|y}$. Averaging $I(X;\hat{Y}|y)$ with respect to the distribution $P_{Y}(y)$ of $Y$, we get the conditional mutual information $I(X;\hat{Y}|Y)$ between $X$ and $\hat{Y}$ given $Y$: $\displaystyle I(X;\hat{Y}|Y)$ $\displaystyle=$ $\displaystyle\sum_{y\in[C]}P_{Y}(y)I(X;\hat{Y}|y)$ (15) $\displaystyle=$ $\displaystyle\mbox{$\bf E$}\left[D(P_{X}||P_{\hat{Y}|Y})\right]$ $\displaystyle=$ $\displaystyle\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x)\times\right.$ $\displaystyle\left.\ln{P(\hat{Y}=i|x)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right].$ The CMI $I(X;\hat{Y}|Y)$ can then be regarded as a measure for the intra-class concentration of the DNN: $x\in\mathbb{R}^{d}\to P_{x}$ for $(X,Y)$. In practice, the joint distribution $P(x,y)$ of $(X,Y)$ may be unknown. To compute the CMI $I(X;\hat{Y}|Y)$ in this case, one may approximate $P(x,y)$ by the empirical distribution of a data sample $\\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\\}$. For any $y\in[C]$, let $n_{y}=|\\{(x_{j},y_{j}):y_{j}=y,1\leq j\leq n\\}|,$ (16) where $|S|$ denotes the cardinality of a set $S$, and $P_{y}={1\over n_{y}}\sum_{(x_{j},y_{j}):y_{j}=y}P_{x_{j}}.$ (17) Then $I(X;\hat{Y}|Y)$ can be computed as follows $\displaystyle I(X;\hat{Y}|Y)$ $\displaystyle=\sum_{y\in[C]}\sum_{(x_{j},y_{j}):y_{j}=y}{1\over n}D(P_{x_{j}}||P_{y})$ $\displaystyle={1\over n}\sum_{j=1}^{n}D(P_{x_{j}}||P_{y_{j}}).$ (18) ### II-D Separation and NCMI Let $(U,V)$ be a pair of random variables independent of $(X,Y)$, and having the same joint distribution as that of $(X,Y)$. With reference to Fig. 1, we define the following information quantity111Other information quantities can also be defined and used as a measure for the inter-class separation of the DNN: $x\in\mathbb{R}^{d}\to P_{x}$, which will be explored in Appendix B. Although they are more or less equivalent, the information quantity $\Gamma$ defined here is more convenient for the selection of hyper parameters in our proposed CMIC deep learning. $\Gamma=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}H(P_{X},P_{U})\right],$ (19) and use $\Gamma$ as a measure for the inter-class separation of the DNN: $x\in\mathbb{R}^{d}\to P_{x}$. It is clear that the larger $\Gamma$ is, the further apart different clusters are from each other on average. Ideally, we want $I(X;\hat{Y}|Y)$ to be small while keeping $\Gamma$ large. This leads us to consider the ratio between $I(X;\hat{Y}|Y)$ and $\Gamma$: $\hat{I}(X;\hat{Y}|Y)\mbox{$\ \stackrel{{\scriptstyle\Delta}}{{=}}$}{I(X;\hat{Y}|Y)\over\Gamma}.$ (20) We call $\hat{I}(X;\hat{Y}|Y)$ the normalized conditional mutual information between $X$ and $\hat{Y}$ given $Y$. In case where the joint distribution $p(x,y)$ of $(X,Y)$ is unknown, it can be approximated by the empirical distribution of a data sample $\\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\\}$. In parallel with (18), $\Gamma$ can be computed in this case as follows: $\Gamma={1\over n^{2}}\sum_{j=1}^{n}\sum_{k=1}^{n}I_{\\{y_{j}\not=y_{k}\\}}H(P_{x_{j}},P_{x_{k}}),$ (21) from which and (18), $\hat{I}(X;\hat{Y}|Y)$ can be computed accordingly. 
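The estimators (18), (21) and (20) are straightforward to compute once the DNN's output distributions over a data sample have been collected. A minimal NumPy sketch is given below; the array names `probs` (an $n\times C$ matrix whose $j$-th row is $P_{x_{j}}$) and `labels` are our own, and the toy usage at the end only illustrates the interface.

```python
import numpy as np

def cmi_gamma_ncmi(probs, labels, eps=1e-12):
    """Empirical CMI (18), separation Gamma (21), and NCMI (20)."""
    n, C = probs.shape
    # Class centroids P_y in (17): average output distribution per class.
    centroids = np.stack([probs[labels == c].mean(axis=0) for c in range(C)])

    # CMI (18): average KL divergence D(P_{x_j} || P_{y_j}) to the own-class centroid.
    P, Q = probs, centroids[labels]
    cmi = np.mean(np.sum(P * (np.log(P + eps) - np.log(Q + eps)), axis=1))

    # Gamma (21): average pairwise cross entropy H(P_{x_j}, P_{x_k}) over pairs
    # with different ground truth labels.
    pair_ce = -probs @ np.log(probs + eps).T
    diff_label = labels[:, None] != labels[None, :]
    gamma = np.sum(pair_ce * diff_label) / n**2

    return cmi, gamma, cmi / gamma            # NCMI (20) is the ratio

# Toy usage; in practice probs would hold validation-set softmax outputs.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=200)
labels = rng.integers(0, 5, size=200)
print(cmi_gamma_ncmi(probs, labels))
```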
### II-E Related Works In the literature, intra-class concentration and inter-class separation of a DNN have been mainly investigated in the feature space corresponding to the penultimate layer of the DNN, and largely treated in an ad-hoc manner in a deep learning process or algorithm. Specifically, it was observed numerically in [10, 11, 12] that DNNs concentrate features of each class around their separated mean. This observation was further analyzed in [13] under the Gaussian mixture model assumption about features. In [14, 15, 16, 17, 18] and references therein, different loss functions including the so-called center loss, contrastive center loss, orthogonal project loss, constrained center loss, and their variants, all of which are defined in the feature space, were proposed and used in the respective learning processes to improve the intra- class concentration and inter-class separation of such trained DNNs. In contrast, in this paper we investigate the intra-class concentration and inter-class separation of a DNN in its output probability distribution space ${\cal P}([C])$, where the DNN is viewed as a mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$. This perspective allows us to introduce information quantities, CMI, $\Gamma$, and NCMI, to quantify the intra-class concentration and inter-class separation of each DNN. In addition, our introduced CMI and NCMI can also be regarded as additional performance metrics for any DNN, which are in parallel with the error rate performance metric, are independent of any learning process, and represent mapping structure properties of a DNN. As additional performance metrics, they can be used to evaluate and compare different DNNs regardless of the architectures and sizes of DNNs. Another related work in the sense of introducing information theoretic ideas into DL is the so-called coded deep learning (CDL) [19], where information theoretic coding ideas are embedded into the inner workings of DL. The purposes of CDL are to eliminate essentially floating operations of a coded DNN during its inference time and efficiently compress the coded DNN while maintaining or even improving the error rate of the coded DNN. In the next section, CMI and NCMI $\hat{I}(X;\hat{Y}|Y)$ will be used to evaluate and compare popular DNNs pre-trained over ImageNet in the literature. ## III NCMI Vs. Accuracy The popular DNNs we selected for evaluation according to their respective CMI and NCMI are ResNet-$\\{18,34,50,101,152\\}$ [20], VGG-$\\{11,13,16,19\\}$ [21], EfficientNet-$\\{\text{B0},\text{B1},\text{B2},\text{B3}\\}$ [22], Wide- ResNet-$\\{50,101\\}$ [23], MobileNet-V3-$\\{\text{small},\text{large}\\}$ [24], and AlexNet [25]. They are all pre-trained on ImageNet dataset and obtained from the Pytorch official website222https://pytorch.org/vision/stable/models.html.. Table I lists the values of CMI, $\Gamma$, and NCMI of the selected DNNs, which are calculated, according to (18), (21), and (20), over the ImageNet validation set, along with their respective error rate $\epsilon^{*}$. From Table I, it is clear that within the same family, as the model size increases, the CMI value decreases. This shows that larger models have more compact clusters in the output probability space ${\cal P}([C])$. For the $\Gamma$ value, although the general trend is that within the same family, the $\Gamma$ value increases as the model size gets larger, there does exist an exception. Note that for the EfficientNet family, the smallest model EfficientNet-B0 has the largest $\Gamma$ value. 
Now turn our attention to the NCMI value. From Table I, it follows that as the model size within the same family increases, the NCMI value decreases as well. Even more interesting is the relationship between the NCMI and error rate $\epsilon^{*}$. Across all models evaluated, as the NCMI value decreases, so does the error rate $\epsilon^{*}$. To make the relationship between the NCMI and error rate $\epsilon^{*}$ more transparent, Figure 2 illustrates the relationship graphically. From Figure 2, it seems that the NCMI and error rate $\epsilon^{*}$ have a positive linear relationship; indeed, the Pearson correlation coefficient $\rho$ [26] between them is $\rho=0.9929$, strongly supporting the former statement. As such, the NCMI value of a DNN can be used to gauge the prediction performance of the DNN. To conclude this section, let us draw some analogies. If a DNN is analogized with a student, then the error rate and NCMI of the DNN can be analogized with the testing score of the student in an exam and certain trait of the student, respectively. In a way similar to using the trait of the student to predict the student’s testing performance, one can also use the NCMI value of the DNN to predict the DNN’s testing performance. TABLE I: CMI, $\Gamma$, and NCMI values over the validation set of some pre- trained models on ImageNet dataset along with their error rate $\epsilon^{*}$, where the DNNs from the same family are highlighted by the same color. Models | CMI | $\Gamma$ | NCMI | Error rate $\epsilon^{*}$ | Models | CMI | $\Gamma$ | NCMI | Error rate $\epsilon^{*}$ ---|---|---|---|---|---|---|---|---|--- ResNet18 | 0.999 | 9.891 | 0.101 | 0.302 | AlexNet | 1.331 | 9.830 | 0.135 | 0.434 ResNet34 | 0.902 | 9.919 | 0.090 | 0.266 | EfficientNet-B0 | 0.692 | 9.433 | 0.073 | 0.220 ResNet50 | 0.815 | 9.929 | 0.082 | 0.238 | EfficientNet-B1 | 0.661 | 9.114 | 0.072 | 0.213 ResNet101 | 0.779 | 9.948 | 0.078 | 0.226 | EfficientNet-B2 | 0.639 | 9.224 | 0.069 | 0.193 ResNet152 | 0.749 | 9.953 | 0.075 | 0.216 | EfficientNet-B3 | 0.627 | 9.365 | 0.067 | 0.180 VGG11 | 0.959 | 9.899 | 0.096 | 0.296 | Wide-ResNet50 | 0.749 | 9.935 | 0.075 | 0.215 VGG13 | 0.930 | 9.909 | 0.094 | 0.284 | Wide-ResNet101 | 0.734 | 9.937 | 0.073 | 0.211 VGG16 | 0.878 | 9.925 | 0.088 | 0.266 | MobileNet-V3-Small | 1.088 | 9.898 | 0.110 | 0.323 VGG19 | 0.860 | 9.930 | 0.086 | 0.257 | MobileNet-V3-Large | 0.922 | 9.956 | 0.092 | 0.259 $\displaystyle J_{\mathcal{B}}\left(\lambda,\beta,\theta,\\{Q_{c}\\}_{c\in[C]}\right)$ $\displaystyle=\frac{1}{|\mathcal{B}|}\sum_{(x,y)\in\mathcal{B}}H(y,P_{x,\mathbf{\theta}})+\lambda{1\over|\mathcal{B}|}\sum_{(x,y)\in\mathcal{B}}D(P_{x,\mathbf{\theta}}||Q_{y})-\beta\frac{1}{|\mathcal{B}|^{2}}\sum_{(x,y),(u,v)\in\mathcal{B}}I_{\\{y\not=v\\}}H(P_{x,\mathbf{\theta}},P_{u,\mathbf{\theta}}).$ (22) Figure 2: The error rate vs NCMI value over the validation set of popular pre-trained models on ImageNet dataset. The sizes of the circles represent the sizes of respective models in terms of the number of model parameters; the larger the circle, the larger the model. ## IV CMIC Deep Learning The discussions in the above section suggest a new way of learning. In the learning process, instead of minimizing the average of cross entropy $\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X})\right]$ alone, one also needs to look after the NCMI $\hat{I}(X;\hat{Y}|Y)$. This leads to a new form of learning framework dubbed CMI constrained deep learning (CMIC-DL), which is described next. 
### IV-A Optimization Problem Formulation

In CMIC-DL, the optimization problem to be solved is as follows: $\displaystyle\min_{\mathbf{\theta}}~{}$ $\displaystyle\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]$ s.t. $\displaystyle\hat{I}(X;\hat{Y}|Y)=r,$ (23) where $r$ is a positive constant. By interpreting $\hat{I}(X;\hat{Y}|Y)$ as a rate, and $\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]$ as a distortion, the above optimization problem resembles the rate distortion problem in information theory [6, 27, 28]. By rewriting the constraint in (23), and using the Lagrange multiplier method, the constrained optimization problem in (23) can be formulated as the following unconstrained one $\displaystyle\min_{\mathbf{\theta}}~{}$ $\displaystyle\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]$ $\displaystyle+\lambda I(X;\hat{Y}|Y)-\beta\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}H(P_{X,\mathbf{\theta}},P_{U,\mathbf{\theta}})\right],$ (24) where $\lambda>0$ is a scalar, and $\beta=\lambda r$.

Note that in view of (15), the CMI $I(X;\hat{Y}|Y)$ in (24) depends on $P_{\hat{Y}|Y}$, which, for $Y=y$, is the average of $P_{X,\mathbf{\theta}}$ with respect to the conditional distribution $P_{X|Y}(\cdot|y)$ (see (12)). As such, the unconstrained optimization problem in its form (24) is not amenable to numerical solutions. To overcome this, we first convert it into a double unconstrained minimization problem by introducing a dummy distribution $Q_{y}\in{\cal P}([C])$ for each $y\in[C]$, as shown in the following theorem, which will be proved in Appendix A.

###### Theorem 2.

For any $\lambda>0$ and $\beta>0$, $\displaystyle\min_{\mathbf{\theta}}~{}$ $\displaystyle\left\\{\mbox{$\bf E$}_{X}\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]\right.$ $\displaystyle\left.+\lambda I(X;\hat{Y}|Y)-\beta\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}H(P_{X,\mathbf{\theta}},P_{U,\mathbf{\theta}})\right]\right\\}$ $\displaystyle=$ $\displaystyle\min_{\mathbf{\theta}}~{}\min_{\\{Q_{c}\\}_{c\in[C]}}~{}\left\\{\mbox{$\bf E$}[H(P_{Y|X},P_{X,\mathbf{\theta}})+\lambda D(P_{X,\mathbf{\theta}}||Q_{Y})]\right.$ $\displaystyle\left.-\beta\mbox{$\bf E$}[I_{\\{Y\not=V\\}}H(P_{X,\mathbf{\theta}},P_{U,\mathbf{\theta}})]\right\\}.$ (25)

In practice, the joint distribution $P(x,y)$ of $(X,Y)$ may be unknown. In this case, to solve (25) numerically, one may approximate $P(x,y)$ by the empirical distribution of a data sample (such as a mini-batch in the DL process) $\mathcal{B}=\\{(x_{i_{1}},y_{i_{1}}),(x_{i_{2}},y_{i_{2}}),\cdots,(x_{i_{m}},y_{i_{m}})\\}$, and $P_{Y|X}$ by the one-hot probability distribution corresponding to $Y$. Accordingly, the objective function in the double minimization (25) can be approximated by $J_{\mathcal{B}}\left(\lambda,\beta,\theta,\\{Q_{c}\\}_{c\in[C]}\right)$ shown in (22) (on the top of the page).

### IV-B Algorithm for Solving the Optimization in (25)

Having addressed how to approximate the objective function in the double minimization (25), we are now ready to present an algorithm for solving (25). In fact, by reformulating the single minimization problem as a double minimization problem, Theorem 2 lends us an alternating algorithm that optimizes $\mathbf{\theta}$ and $\\{Q_{c}\\}_{c\in[C]}$ alternately to minimize the objective function in (25), given that the other is fixed. 
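Before detailing the two alternating steps, we give a minimal PyTorch-style sketch of the mini-batch objective $J_{\mathcal{B}}$ in (22); the function and tensor names (`cmic_loss`, `logits`, `targets`, `Q`) are our own placeholders, and the sketch is meant only as an illustration of (22), not as the exact implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def cmic_loss(logits, targets, Q, lam, beta, eps=1e-12):
    """Mini-batch objective J_B in (22).

    logits : (m, C) raw DNN outputs for the mini-batch B.
    targets: (m,)   ground truth labels y.
    Q      : (C, C) current dummy distributions, row c holding Q_c.
    """
    P = F.softmax(logits, dim=1)                     # P_{x, theta} for each x in B
    m = P.shape[0]

    # First term: (1/|B|) * sum of H(y, P_{x,theta}).
    ce = F.cross_entropy(logits, targets)

    # Second term: lambda * (1/|B|) * sum of D(P_{x,theta} || Q_y).
    Qy = Q[targets]
    kl = (P * (torch.log(P + eps) - torch.log(Qy + eps))).sum(dim=1).mean()

    # Third term: -beta * (1/|B|^2) * sum over pairs (x,y),(u,v) with y != v of
    # the cross entropy H(P_{x,theta}, P_{u,theta}).
    pair_ce = -(P @ torch.log(P + eps).T)
    diff = (targets[:, None] != targets[None, :]).float()
    sep = (pair_ce * diff).sum() / (m * m)

    return ce + lam * kl - beta * sep
```

During the $\theta$-update step described next, `Q` is held fixed while this loss is minimized by stochastic gradient descent; `Q` is then refreshed from class-conditional mini-batches.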
Given $\\{Q_{c}\\}_{c\in[C]}$, $\mathbf{\theta}$ can be updated using the same strategy as in the conventional DL through stochastic gradient descent iterations over mini-batches, where the training set is divided into $B$ mini-batches $\\{\mathcal{B}_{b}\\}_{b\in[B]}$ with each batch of size $|\mathcal{B}|$. Given $\mathbf{\theta}$, how is $\\{Q_{c}\\}_{c\in[C]}$ updated? This is where differences arise. In view of (12) and (32), the optimal $\\{Q_{c}\\}_{c\in[C]}$ given $\mathbf{\theta}$ is equal to $Q_{c}=P_{\hat{Y}|y=c}=\sum_{x}P(x|y=c)P_{x,\mathbf{\theta}},$ (26) for any $c\in[C]$. Therefore, to update $\\{Q_{c}\\}_{c\in[C]}$ given $\mathbf{\theta}$, we construct, at each iteration, $C$ mini-batches $\\{\mathfrak{B}_{c}\\}_{c\in[C]}$ in the following manner: to make $\mathfrak{B}_{c}$, $\forall c\in[C]$, we randomly sample $|\mathfrak{B}_{c}|$ instances from the training samples whose ground truth labels are $c$. It then follows from (26) that for any $c\in[C]$, $Q_{c}$ is updated as $Q_{c}=\frac{\sum_{x\in\mathfrak{B}_{c}}P_{x,\mathbf{\theta}}}{|\mathfrak{B}_{c}|}.$ (27) (To update $\\{Q_{c}\\}_{c\in[C]}$, we may use momentum to make the update more stable and less noisy.)

The procedure for solving the optimization problem (25) is now summarized in Algorithm 1, where we use $(\cdot)^{t}_{c,b}$ to indicate class $c$ at the $b$-th batch update during the $t$-th epoch. We also use $(\cdot)^{t}_{c,B}$ as $(\cdot)^{t}_{c}$ whenever necessary, and set $(\cdot)^{t}_{c,0}=(\cdot)^{t-1}_{c}$.

Algorithm 1 The proposed alternating algorithm for solving the optimization problem in (25)

0: The training set $\mathcal{T}$, all mini-batches $\\{\mathcal{B}_{b}\\}_{b\in[B]}$, number of epochs $\mathit{T}$, $\lambda$, and $\beta$.
1: Initialization: Initialize $\theta^{0}$ and $\\{Q_{c}^{0}\\}_{c\in[C]}$.
2: for $t=1$ to $T$ do
3: for $b=1$ to $B$ do
4: _[Updating $\theta$]_: Fix $\\{Q_{c,b-1}^{t}\\}_{c\in[C]}$. Update $\theta^{t}_{b-1}$ to $\theta^{t}_{b}$ by using (stochastic) batch gradient descent over the loss function $J_{\mathcal{B}_{b}}\left(\lambda,\beta,\theta^{t}_{b-1},\\{Q_{c,b-1}^{t}\\}_{c\in[C]}\right)$.
5: _[Updating $\\{Q_{c}\\}_{c\in[C]}$]_: Fix $\theta^{t}_{b}$. Construct mini-batches $\\{\mathfrak{B}_{c}\\}_{c\in[C]}$ from $\mathcal{T}$. Update $Q_{c,b-1}^{t}$ to $Q_{c,b}^{t}$, $\forall c\in[C]$, according to (27), i.e., $\displaystyle Q_{c,b}^{t}=\frac{\sum_{x\in\mathfrak{B}_{c}}P_{x,\mathbf{\theta}^{t}_{b}}}{|\mathfrak{B}_{c}|}.$ (28)
6: end for
7: end for
8: return model parameters $\theta^{T}$.

## V Experiment Results

To demonstrate the effectiveness of CMIC-DL and compare it with some state-of-the-art alternatives, we have conducted a series of experiments. Specifically, we have performed experiments on two popular image classification datasets, namely CIFAR-100 [29] and ImageNet [25]. In Subsections V-A and V-B, we present their respective accuracy results. In Subsection V-C, we explore how to visualize the concentration and separation of a DNN, which is made possible by viewing the DNN as a mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$; using such a visualization method, the concentration and separation of ResNet-56 trained within our CMIC-DL framework are then compared with those of ResNet-56 trained within the standard DL framework. In the literature, a deep learning process is typically analyzed experimentally through the evolution curve of its error rate. 
With our newly introduced performance metrics, CMI, $\Gamma$ (separation), and NCMI, the learning process can also be analyzed through the evolution curves of CMI, $\Gamma$, and NCMI, which show interestingly how the mapping structure in terms of CMI, $\Gamma$, and NCMI evolves over the course of learning process. In Subsection V-D, we use ResNet-56 as an example, and illustrate and compare the evolution curves of CMI, $\Gamma$, NCMI, and error rate within our CMIC-DL framework vs within the standard DL framework. Lastly, in Subsection V-E, we evaluate the robustness of models trained within our CMIC-DL framework against two different adversarial attacks, and show that in comparison with the standard DL, CMIC-DL improves the robustness of DNNs as well. ### V-A Experiments on CIFAR-100 CIFAR-100 dataset contains 50K training and 10K test colour images of size $32\times 32$, which are labeled for 100 classes. $\bullet$ Models: To show the effectiveness of CMIC-DL, we have conducted experiments on three different model architectural families. Specifically, we have selected (i) three models from ResNet family [20], namely ResNet-$\\{32,56,110\\}$; (ii) VGG-13 from VGG family [21]; and (iii) Wide- ResNet-28-10 from Wide-ResNet family [23]. $\bullet$ Benchmarks: We evaluate the performance of the DNNs trained via CMIC-DL against those trained by conventional cross entropy loss (CE), center loss (CL) [16] which promotes clustering the features, focal loss (FL) [30] which uses regularization, large-margin Gaussian Mixture (L-GM) loss [31] which imposes margin constraints, and orthogonal projection loss (OPL) [18] which imposes orthogonality in the feature space. $\bullet$ Training settings: We have deployed an SGD optimizer with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 64. We have trained the models for 200 epochs, and adopted an initial learning rate of 0.1, which is further divided by 10 at the 60-th, 120-th and 160-th epochs. To have a fair comparison, we have reproduced the results of all the benchmark methods using their respective best hyper-parameters reported in their original papers. In addition, in Algorithm 1, we set $\\{Q_{c}^{0}(i)\\}_{c\in[C]}=\frac{1}{C}$, for $i\in[C]$, use $|\mathfrak{B}_{c}|=8$, $\forall c\in[C]$, and also update $Q_{c,b}^{t}$ using the momentum of 0.9999. The results are summarized in Table II. As seen, the models trained within our CMIC-DL framework outperform those trained by the benchmark methods. Importantly, the improvement is consistent across the models from different architectural families, showing that CMIC-DL can effectively train DNNs from different families. As a rule of thumb, compared to the CE method, CMIC-DL yields DNNs with almost 1.3% higher validation accuracy for the ResNet models. Furthermore, in Table III we report the NCMI values $\hat{I}(X;\hat{Y}|Y)$, over the validation set, for the models we trained in Table II, where we use the notation $\hat{I}_{\text{Loss}}$ to denote the NCMI value when the underlying DNN is trained using “Loss” method. As observed, $\hat{I}_{\text{CMIC}}$ has the smallest value compared to the other counterparts. In addition, in Table IV, we report the $\lambda^{*}$ and $\beta^{*}$ values for which we obtained the best validation accuracies. As observed, the $\lambda^{*}$ and $\beta^{*}$ values are almost the same for all the models. 
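The training settings above mention updating $Q_{c,b}^{t}$ with a momentum of 0.9999. A possible realization of this smoothed version of the centroid update (28) is sketched below; the function name, the `class_loaders` sampler, and the in-place update are our own choices, given only as a hedged illustration.

```python
import torch

@torch.no_grad()
def update_centroids(model, Q, class_loaders, momentum=0.9999):
    """One pass of the dummy-distribution update (28), smoothed with momentum.

    Q            : (C, C) tensor whose row c holds the current Q_c.
    class_loaders: for each class c, an iterator yielding mini-batches B_c of
                   training inputs whose ground truth label is c (|B_c| = 8 here).
    """
    for c, loader in enumerate(class_loaders):
        x_c = next(loader)                        # a class-conditional mini-batch B_c
        P = torch.softmax(model(x_c), dim=1)      # P_{x, theta} for x in B_c
        Q_new = P.mean(dim=0)                     # right-hand side of (28)
        Q[c] = momentum * Q[c] + (1.0 - momentum) * Q_new
    return Q
```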
TABLE II: The validation accuracies (%) of different models trained by CMIC-DL and different benchmark methods over the CIFAR-100 dataset, which are averaged over three different random seeds, and where bold and underlined values denote the best and second best results, respectively.

Loss | Res32 | Res56 | Res110 | VGG13 | WRN-28-10
---|---|---|---|---|---
CL | 70.23 | 72.70 | 74.20 | 74.50 | 80.97
FL | 71.62 | 73.20 | 74.35 | 74.53 | 81.24
LGM | 71.50 | 73.06 | 74.39 | 74.57 | 81.29
OPL | 71.03 | 72.60 | 73.98 | 74.11 | 81.12
CE | 70.90 | 72.40 | 73.79 | 73.77 | 80.93
CMIC | 72.24 | 73.66 | 75.08 | 74.62 | 81.63

TABLE III: The respective NCMI values, measured over the validation set, of the models trained in Table II via different benchmark methods. The values are averaged over three different runs.

Loss | Res32 | Res56 | Res110 | VGG13 | WRN-28-10
---|---|---|---|---|---
$\hat{I}_{\text{CL}}$ | 0.057 | 0.045 | 0.0395 | 0.0395 | 0.0309
$\hat{I}_{\text{FL}}$ | 0.053 | 0.046 | 0.0393 | 0.0399 | 0.0312
$\hat{I}_{\text{LGM}}$ | 0.054 | 0.047 | 0.0390 | 0.0398 | 0.0310
$\hat{I}_{\text{OPL}}$ | 0.056 | 0.050 | 0.0397 | 0.0402 | 0.0314
$\hat{I}_{\text{CE}}$ | 0.057 | 0.053 | 0.0402 | 0.0408 | 0.0317
$\hat{I}_{\text{CMIC}}$ | 0.051 | 0.042 | 0.0382 | 0.0392 | 0.0303

TABLE IV: Hyper-parameters, $\lambda^{*}$ and $\beta^{*}$, that were used in CMIC-DL in Table II.

Params. | Res32 | Res56 | Res110 | VGG13 | WRN-28-10
---|---|---|---|---|---
($\lambda^{*}$,$\beta^{*}$) | (0.7,0.4) | (0.7,0.4) | (0.7,0.2) | (0.8,0.3) | (0.7,0.4)

### V-B Experiments on ImageNet

ImageNet is a large-scale dataset used in visual recognition tasks, containing around 1.2 million training samples and 50,000 validation images.

$\bullet$ Models: We have conducted experiments on two models from the ResNet family, namely ResNet-18 and ResNet-50.

$\bullet$ Benchmarks: We evaluate the performance of CMIC-DL against CE and OPL.

$\bullet$ Training settings: We have deployed an SGD optimizer with a momentum of 0.9, a weight decay of 0.0001, and a batch size of 256. We have trained the models for 90 epochs, and adopted an initial learning rate of 0.1, which is further divided by 10 at the 30-th and 60-th epochs. In Algorithm 1, we set $\\{Q_{c}^{0}(i)\\}_{c\in[C]}=\frac{1}{C}$, for $i\in[C]$, use $|\mathfrak{B}_{c}|=8$, $\forall c\in[C]$, and also update $Q_{c,b}^{t}$ using the momentum of 0.9999.

The top-$\\{1,5\\}$ accuracies are reported in Table V. As seen, in comparison with the CE method, CMIC-DL increases the top-1 validation accuracy for ResNet-18 and ResNet-50 by 0.56% and 0.37%, respectively. The improvement is also consistent for the top-5 validation accuracy. The hyper parameters $(\lambda^{*},\beta^{*})$ used in CMIC-DL for ResNet-18 and ResNet-50 are $(0.6,0.1)$ and $(0.6,0.2)$, respectively. The corresponding NCMI values are $\hat{I}_{\text{CE}}=0.110$ and $\hat{I}_{\text{CMIC}}=0.102$ for ResNet-18, and $\hat{I}_{\text{CE}}=0.091$ and $\hat{I}_{\text{CMIC}}=0.088$ for ResNet-50.

TABLE V: The validation accuracies (%) of different models trained by CMIC-DL and different benchmark methods on the ImageNet dataset.

Method | ResNet-18 top-1 | ResNet-18 top-5 | ResNet-50 top-1 | ResNet-50 top-5
---|---|---|---|---
CE (Baseline) | 69.91 | 89.08 | 76.15 | 92.87
OPL | 70.27 | 89.60 | 76.32 | 93.09
CMIC | 70.47 | 89.96 | 76.52 | 93.44

### V-C Concentration and Separation Visualization

In this subsection, we explore how to visualize the concentration and separation of a DNN. Consider the data set CIFAR-100. 
To visualize the concentration and separation of a DNN in a dimension-reduced probability space, we randomly select three class labels. Restrict ourselves to a subset consisting of all validation sample instances with labels from the three selected labels. Given a DNN, feed each validation sample instance from the subset into the DNN, keep only the three logits corresponding to the three selected labels, and then convert these three logits into a three-dimensional probability vector through the softmax operation. Following these steps in the indicated order, the DNN then maps each validation sample instance from the subset into a three-dimensional probability vector. Further project the three-dimensional probability vector onto the two-dimensional simplex. Then the concentration and separation properties of the DNN for the three selected classes can be more or less visualized through the projected two-dimensional simplex.

Using the above visualization method, Fig. 3 compares the concentration and separation properties of ResNet-56 trained within our CMIC-DL framework with those of ResNet-56 trained within the standard CE framework. From Fig. 3, it is clear that the three clusters in the case of CMIC-DL are more concentrated than their counterparts in the case of CE, and also further apart from each other than their counterparts in the case of CE. Again, this is consistent with the NCMI values reported in Table III.

Figure 3: Visualization and comparison of concentration and separation: ResNet56 trained via CE vs ResNet56 trained via CMIC, where different shapes indicate different classes.
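A possible realization of the visualization procedure just described is sketched below; the selected class indices, the variable names, and the barycentric projection used to draw the two-dimensional simplex are our own choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def project_to_simplex_2d(probs3):
    """Map three-dimensional probability vectors to 2-D barycentric coordinates."""
    # Vertices of an equilateral triangle representing the two-dimensional simplex.
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    return probs3 @ corners

def plot_three_class_clusters(logits, labels, classes=(0, 1, 2)):
    """Scatter the three selected classes on the projected two-dimensional simplex.

    logits: (n, C) validation-set logits of the DNN; labels: (n,) ground truth labels.
    """
    mask = np.isin(labels, classes)
    sub = logits[mask][:, list(classes)]            # keep only the three selected logits
    sub = sub - sub.max(axis=1, keepdims=True)      # for numerical stability
    probs3 = np.exp(sub) / np.exp(sub).sum(axis=1, keepdims=True)   # softmax
    pts = project_to_simplex_2d(probs3)
    for marker, c in zip("o^s", classes):
        sel = labels[mask] == c
        plt.scatter(pts[sel, 0], pts[sel, 1], s=8, marker=marker, label=f"class {c}")
    plt.legend()
    plt.gca().set_aspect("equal")
    plt.show()
```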
### V-D Evolution of CMI, $\Gamma$, NCMI, and error rate

In this subsection, we analyze and visualize a learning process within either our CMIC-DL framework or the conventional CE-based DL framework through the lens of CMI, $\Gamma$, NCMI, and error rate. Fig. 4 shows the evolution curves of CMI, $\Gamma$, NCMI, and error rate over the validation set during the course of training ResNet-56 on the CIFAR-100 dataset in each case, where the training setup is the same as that used in Subsection V-A, and we use $\lambda=0.7$ and $\beta=0.4$ in the case of CMIC-DL.

As seen in Fig. 4a, the CMI value in both the CE and CMIC-DL cases is small at the beginning of the training (epoch zero). This is because at the beginning, all clusters in the output probability distribution space $\cal{P}([C])$ stay close together, as shown by the separation distance curve (see Fig. 4b), and probability distributions within each cluster are not separated at all. After the training starts and for the first few epochs, the clusters move away from each other; during the course of movement, probability distributions within each cluster move at different speeds, and become separated. As such, both the values of CMI and $\Gamma$ increase. Indeed, this is shown in Fig. 4a and Fig. 4b. Hereafter, the clusters continue to move away from each other, while at the same time, probability distributions within each cluster tend to move together. Thus the $\Gamma$ value continues to increase, while the CMI value decreases, as shown again in Fig. 4a and Fig. 4b.

The above summarizes the general behaviour of the CMI and $\Gamma$ evolution curves in both the CE and CMIC-DL cases. Let us now examine the differences between them. From Fig. 4a, it is clear that the CMI evolution curve in the case of CMIC-DL always remains below its counterpart in the CE case. On the other hand, as shown in Fig. 4b, although initially the $\Gamma$ value increases faster in the CE case than in the CMIC-DL case, after the first few epochs, the rate of increase in the $\Gamma$ value is consistently higher in the CMIC-DL case than in the CE case, to the extent that the $\Gamma$ value in the CMIC-DL case surpasses its counterpart in the CE case in the late stage of the learning process. From Fig. 4c and Fig. 4d, we can see that once the learning process is more or less stabilized, both the NCMI value and error rate in the CMIC-DL case are consistently smaller than their counterparts in the CE case. Once again, this is consistent with our observation in Fig. 2: the smaller the NCMI value, the lower the error rate.

In conjunction with the visualization method discussed in Subsection V-C, we have created a video available at https://youtu.be/G0fDwv6o9Ek to illustrate the learning process during the course of training ResNet-56 on the CIFAR-100 dataset in each of the CE and CMIC-DL cases through the lens of CMI and $\Gamma$, where concentration and separation are shown for three randomly selected classes, and the evolution curves of CMI and $\Gamma$ are shown for all classes.

Figure 4: The evolution curves of (a) CMI ($I$), (b) separation ($\Gamma$), (c) NCMI ($\hat{I}$), and (d) error rate ($\epsilon^{*}$) over the course of training ResNet-56 over the CIFAR-100 dataset using the CE and CMIC frameworks.

### V-E Robustness against adversarial attacks

As a by-product, we would expect that DNNs trained within the CMIC-DL framework are more robust against adversarial attacks, in comparison with their counterparts trained within the standard CE-based DL framework. This is because when a DNN is trained within our CMIC-DL framework, its clusters in its output probability distribution space are more compact, and also further separated from each other, in comparison with its counterpart trained within the standard CE-based DL framework. As such, it is harder for an adversary to craft a perturbation which, when added to a clean sample, would result in an attacked sample falling into a cluster with a different label. Our purpose in this subsection is to confirm this by-product. To this end, we have performed the following experiments.

$\bullet$ Dataset: We have used the MNIST dataset [32] comprising 10 classes of handwritten digits.

$\bullet$ Model: We have selected a simple DNN with three convolutional layers and one fully connected layer.

$\bullet$ Attacks: Two white-box attacks have been selected, where the adversary has access to the gradients of the underlying model. Specifically, FGSM [3] and the PGD attack [5] with 5 iterations were employed with attack perturbation budgets $\|\epsilon\|_{\infty}=\\{0.05,0.10,0.15,0.20,0.25,0.30,0.35\\}$.

$\bullet$ Training settings: We have deployed an SGD optimizer with a batch size of 64. We have trained the models for 15 epochs and adopted a step learning rate annealing schedule with a decay factor of 0.7. The hyper parameters were selected to be $\lambda^{*}=2$ and $\beta^{*}=9$ in our CMIC-DL framework due to the fact that the classification task over the MNIST dataset is far simpler than those over the CIFAR-100 and ImageNet datasets.

Fig. 5 illustrates the resulting trade-offs between robust accuracy and perturbation budget. From Fig. 5, it is clear that the DNN trained within the CMIC-DL framework is more robust against both FGSM and PGD attacks, in comparison with its counterpart trained within the standard CE-based DL framework, thus confirming the by-product. 
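For completeness, the fragment below sketches how the FGSM robust accuracy in Fig. 5a could be measured; it is a generic FGSM evaluation in the spirit of [3], written from scratch rather than taken from the paper, and `model` and `loader` are assumed to be the trained MNIST classifier and its test loader with pixel values in $[0,1]$.

```python
import torch
import torch.nn.functional as F

def fgsm_robust_accuracy(model, loader, eps):
    """Robust accuracy under an FGSM attack with L_inf budget eps."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # FGSM: one signed-gradient step, clipped back to the assumed [0, 1] pixel range.
        x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

# e.g., sweep the budgets used in Fig. 5: 0.05, 0.10, ..., 0.35.
```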
In addition, the clean accuracies for the models trained within the CE-based DL and CMIC-DL frameworks are 99.14% and 99.21%, respectively, showcasing that the accuracy over the benign samples is not sacrificed for a higher robust accuracy.

Figure 5: The robustness of a simple DNN over the MNIST dataset trained within the conventional CE-based DL and CMIC-DL frameworks against (a) the FGSM attack and (b) the PGD attack with 5 iterations, respectively.

We conclude this subsection by pointing out that although CMIC-DL can improve the robustness of DNNs trained therein against adversarial attacks, CMIC-DL itself is not a framework for adversarial training. In our future work, we will fully address CMIC adversarial training by extending the performance metrics of CMI, $\Gamma$ (separation), and NCMI to the new concepts of robust CMI, robust separation, and robust NCMI.

## VI Conclusion

Viewing a DNN as a mapping from $x\in\mathbb{R}^{d}$ to $P_{x}$, in this paper we have introduced conditional mutual information (CMI) and normalized conditional mutual information (NCMI) as new performance metrics of the DNN to measure the intra-class concentration and inter-class separation of the DNN. As new performance metrics, CMI and NCMI are in parallel with the error rate. We have then used CMI and NCMI to evaluate and compare DNNs of different architectures and sizes. It turns out that NCMI and error rate have essentially a positive linear relationship with their correlation $\geq 0.99$. As such, the NCMI value of a DNN can be used to gauge the prediction performance of the DNN. Based on NCMI, we have then developed a learning framework called CMI constrained deep learning (CMIC-DL) within which the conventional cross entropy function is minimized subject to an NCMI constraint. A novel alternating learning algorithm has been further proposed to solve such a constrained optimization problem. Extensive experiment results consistently show that DNNs trained within the CMIC-DL framework outperform those trained using the other DL benchmark methods discussed in the paper. In addition, with CMI and NCMI as performance metrics for measuring the concentration and separation of a DNN, the learning process of the DNN can also be analyzed and visualized through the evolution of CMI and NCMI.

Open problems include (1) how to extend CMI and NCMI to define concepts of robust CMI, robust separation, and robust NCMI; (2) how to extend CMIC-DL to robust CMIC-DL to fully address adversarial training; (3) how to use CMI to help estimate the conditional probability distribution of $Y$ given $X$; and (4) the investigation of minimizing NCMI alone without using the standard cross entropy objective function by modifying a predictor. These problems will be addressed in the future. 
## Appendix A Proof of Theorem 2 Since $\lambda>0$ and $\beta>0$, it suffices to show that $I(X;\hat{Y}|Y)=\min_{\\{Q_{c}\\}_{c\in[C]}}\mbox{$\bf E$}[D(P_{X,\mathbf{\theta}}||Q_{Y})].$ (29) To this end, we apply (15) to get the following: $\displaystyle I(X;\hat{Y}|Y)=\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\mathbf{\theta})\times\right.$ $\displaystyle\left.\ln{P(\hat{Y}=i|x,\mathbf{\theta})\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]$ $\displaystyle=\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\mathbf{\theta})\times\left[\ln{P(\hat{Y}=i|x,\mathbf{\theta})\over Q_{y}(i)}\right.\right.$ $\displaystyle\left.\left.+\ln{Q_{y}(i)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]\right]$ $\displaystyle=\sum_{y}\sum_{x}P(x,y)D(P_{x,\mathbf{\theta}}||Q_{y})+\sum_{y}\sum_{x}P(x,y)\times$ $\displaystyle\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\mathbf{\theta})\ln{Q_{y}(i)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]$ $\displaystyle=\mbox{$\bf E$}[D(P_{X,\mathbf{\theta}}||Q_{Y})]$ $\displaystyle+\sum_{y}P(y)\left[\sum_{i=1}^{C}P_{\hat{Y}|y}(\hat{Y}=i|y)\ln{Q_{y}(i)\over P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]$ $\displaystyle=\mbox{$\bf E$}[D(P_{X,\mathbf{\theta}}||Q_{Y})]-\mbox{$\bf E$}[D(P_{\hat{Y}|Y}||Q_{Y})]$ $\displaystyle\leq\mbox{$\bf E$}[D(P_{X,\mathbf{\theta}}||Q_{Y})],$ (30) for any $Q_{y}\in{\cal P}([C]),y\in[C]$, where the inequality above is due to the nonnegativity of KL divergence. Thus $I(X;\hat{Y}|Y)\leq\min_{\\{Q_{c}\\}_{c\in[C]}}\mbox{$\bf E$}[D(P_{X,\mathbf{\theta}}||Q_{Y})].$ (31) On the other hand, (30) becomes an equality whenever $Q_{c}=P_{\hat{Y}|y=c},\forall c\in[C].$ (32) This, together with (30), implies (29), and hence completes the proof of Theorem 2. ## Appendix B Other Information Quantities for Separation In this Appendix, we explore other information quantities which can also be defined and used as a measure for the inter-class separation of the DNN: $x\in\mathbb{R}^{d}\to P_{x}$. Specifically, two more information quantities $\Gamma^{\prime}$ and $\Gamma^{\prime\prime}$ are introduced and compared with $\Gamma$ defined in (19). Although they are more or less equivalent, $\Gamma$ is more convenient for selecting hyper parameters in our CMIC-DL framework. ### B-A Information Quantity $\Gamma^{\prime}$ A possible information quantity for measuring inter-class separation can be defined as follows $\displaystyle\Gamma^{\prime}=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{X}||P_{U})\right],$ (33) where the cross entropy function $H(P_{X},P_{U})$ in (19) is replaced by the KL divergence $D(P_{X}||P_{U})$. 
To connect $\Gamma^{\prime}$ with CMI and $\Gamma$, we simplify $\Gamma^{\prime}$ as follows: $\displaystyle\Gamma^{\prime}$ $\displaystyle=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln{\frac{P(\hat{Y}=i|X)}{P(\hat{Y}=i|U)}}\right]$ $\displaystyle=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\left(\ln{\frac{P(\hat{Y}=i|X)}{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}}\right.\right.$ $\displaystyle\left.\left.+\ln{\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}}\right)\right]$ $\displaystyle=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{X}||P_{\hat{Y}|Y})\right]$ $\displaystyle+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln{\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}}\right]$ (34) $\displaystyle=\mbox{$\bf E$}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]$ (35) $\displaystyle+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P_{\hat{Y}|Y}(\hat{Y}=i|Y)\ln{\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}}\right]$ (36) $\displaystyle=\mbox{$\bf E$}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]$ $\displaystyle+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{\hat{Y}|Y}||P_{U})\right],$ (37) where (35) is due to the fact that $V$ is independent of $(X,Y)$, and (36) follows from the independence of $(X,Y)$ and $(U,V)$ and the Markov chain $Y\to X\to\hat{Y}$.

Note that the first expectation in (37) is related to the CMI $I(X;\hat{Y}|Y)$. Indeed, when $P(Y)$ is equal to a constant, i.e., $1/C$, which is true in most empirical cases, it follows from (15) that $\mbox{$\bf E$}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]=(1-{1\over C})I(X;\hat{Y}|Y),$ which, together with (37), implies that $\Gamma^{\prime}=(1-{1\over C})I(X;\hat{Y}|Y)+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{\hat{Y}|Y}||P_{U})\right].$ (38) Plugging (38) into the optimization problem in (24), we get the following optimization problem $\displaystyle\min_{\mathbf{\theta}}~{}\mbox{$\bf E$}_{X}$ $\displaystyle\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]+\left(\lambda-\left(\beta-\frac{\beta}{C}\right)\right)I(X;\hat{Y}|Y)$ $\displaystyle-\beta\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{\hat{Y}|Y}||P_{U,\mathbf{\theta}})\right].$ (39) Thus, if $\Gamma^{\prime}$ were used as a measure for inter-class separation, then it would cancel out part of the CMI, making the selection of hyper parameters $\lambda$ and $\beta$ harder.

### B-B Information Quantity $\Gamma^{\prime\prime}$

Equations (38) and (39) suggest that one might use the following information quantity as a measure for inter-class separation instead $\displaystyle\Gamma^{\prime\prime}=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}D(P_{\hat{Y}|Y}||P_{U})\right].$ (40) In fact, $\Gamma^{\prime\prime}$ has a decent physical meaning in the sense that it measures the average of distances between the output distributions of the DNN in response to input sample instances and the centroids of the clusters with different ground truth labels. 
To connect $\Gamma^{\prime\prime}$ with CMI and $\Gamma$, we further simplify $\Gamma^{\prime\prime}$ as follows $\displaystyle\Gamma^{\prime\prime}$ $\displaystyle=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln{\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}}\right]$ (41) $\displaystyle=\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}H(P_{X},P_{U})\right]$ $\displaystyle+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln P_{\hat{Y}|Y}(\hat{Y}=i|Y)\right]$ $\displaystyle=\Gamma+\mbox{$\bf E$}\left[I_{\\{Y\not=V\\}}\sum_{i=1}^{C}P_{\hat{Y}|Y}(\hat{Y}=i|Y)\ln P_{\hat{Y}|Y}(\hat{Y}=i|Y)\right]$ (42) $\displaystyle=\Gamma-\mbox{$\bf E$}\left[(1-P(Y))H(P_{\hat{Y}|Y},P_{\hat{Y}|Y})\right].$ (43) In the above, (41) follows from (34) and (37), (42) is due to the fact that $X$ is independent of $V$, and $Y\to X\to\hat{Y}$ forms a Markov chain, and (43) is attributable to the independence of $V$ and $Y$. Note again that the second term in (43) is related to the CMI $I(X;\hat{Y}|Y)$. Indeed, when $P(Y)$ is equal to a constant, i.e., $1/C$, which is true in most empirical cases, it follows that $\displaystyle\mbox{$\bf E$}\left[(1-P(Y))H(P_{\hat{Y}|Y},P_{\hat{Y}|Y})\right]$ (44) $\displaystyle=$ $\displaystyle(1-{1\over C})H(\hat{Y}|Y)$ $\displaystyle=$ $\displaystyle(1-{1\over C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X,Y)\right]$ $\displaystyle=$ $\displaystyle(1-{1\over C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X)\right],$ where $H(W|Z)$ denotes the Shannon conditional entropy of the random variable $W$ given the random variable $Z$, and (44) is due to the Markov chain $Y\to X\to\hat{Y}$. Combining (44) with (43) yields $\Gamma^{\prime\prime}=\Gamma-(1-{1\over C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X)\right].$ (45) Plugging (45) into the optimization problem in (IV-A), we get the following optimization problem $\displaystyle\min_{\mathbf{\theta}}~{}\mbox{$\bf E$}_{X}$ $\displaystyle\left[H(P_{Y|X},P_{X,\mathbf{\theta}})\right]+\left(\lambda+\left(\beta-\frac{\beta}{C}\right)\right)I(X;\hat{Y}|Y)$ $\displaystyle+\beta(1-{1\over C})H(\hat{Y}|X)-\beta\Gamma.$ (46) Thus, if $\Gamma^{\prime\prime}$ was used as a measure for inter-class separation, then it would further enhance the effect of the CMI, making the selection of hyper parameters $\lambda$ and $\beta$ become harder as well. ## Acknowledgments This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under Grant RGPIN203035-22, and in part by the Canada Research Chairs Program. ## References * [1] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015. * [2] I. Goodfellow, Y. Bengio, and A. Courville, _Deep learning_. MIT press, 2016. * [3] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” _arXiv preprint arXiv:1412.6572_ , 2014. * [4] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in _2017 ieee symposium on security and privacy (sp)_. Ieee, 2017, pp. 39–57. * [5] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in _International Conference on Learning Representations_ , 2018. * [6] T. M. Cover, _Elements of information theory_. John Wiley & Sons, 1999. * [7] G. Hinton, O. Vinyals, J. Dean _et al._ , “Distilling the knowledge in a neural network.” * [8] K. Zheng and E.-H. 
Yang, “Knowledge distillation based on transformed teacher matching,” in _To be submitted to International Conference on Learning Representations_. International Conference on Learning Representations, ICLR, 2024. * [9] A. K. Menon, A. S. Rawat, S. Reddi, S. Kim, and S. Kumar, “A statistical perspective on distillation,” in _International Conference on Machine Learning_. PMLR, 2021, pp. 7632–7642. * [10] E. Oyallon, “Building a regular decision boundary with deep networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 5106–5114. * [11] V. Papyan, “Traces of class/cross-class structure pervade deep learning spectra,” _The Journal of Machine Learning Research_ , vol. 21, no. 1, pp. 10 197–10 260, 2020. * [12] V. Papyan, X. Han, and D. L. Donoho, “Prevalence of neural collapse during the terminal phase of deep learning training,” _Proceedings of the National Academy of Sciences_ , vol. 117, no. 40, pp. 24 652–24 663, 2020. * [13] J. Zarka, F. Guth, and S. Mallat, “Separation and concentration in deep networks,” in _International Conference on Learning Representations_ , 2020\. * [14] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A comprehensive study on center loss for deep face recognition,” _International Journal of Computer Vision_ , vol. 127, pp. 668–683, 2019. * [15] Z. Shi, H. Wang, and C.-S. Leung, “Constrained center loss for convolutional neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2021. * [16] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in _Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VII 14_. Springer, 2016, pp. 499–515. * [17] C. Qi and F. Su, “Contrastive-center loss for deep neural networks,” in _2017 IEEE international conference on image processing (ICIP)_. IEEE, 2017, pp. 2851–2855. * [18] K. Ranasinghe, M. Naseer, M. Hayat, S. Khan, and F. S. Khan, “Orthogonal projection loss,” in _Proceedings of the IEEE/CVF international conference on computer vision_ , 2021, pp. 12 333–12 343. * [19] S. M. Hamidi and E.-H. Yang, “Coded deep learning: framework and algorithms,” _Submitted to IEEE transactions on pattern analysis and machine intelligence_. * [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778. * [21] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014. * [22] M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in _International conference on machine learning_. PMLR, 2019, pp. 6105–6114. * [23] S. Zagoruyko and N. Komodakis, “Wide residual networks,” _arXiv preprint arXiv:1605.07146_ , 2016. * [24] L. Zhao and L. Wang, “A new lightweight network based on mobilenetv3.” _KSII Transactions on Internet & Information Systems_, vol. 16, no. 1, 2022\. * [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” _Advances in neural information processing systems_ , vol. 25, 2012. * [26] I. Cohen, Y. Huang, J. Chen, J. Benesty, J. Benesty, J. Chen, Y. Huang, and I. Cohen, “Pearson correlation coefficient,” _Noise reduction in speech processing_ , pp. 1–4, 2009. * [27] T. Berger, “Rate distortion theory. englewood clis,” 1971. * [28] E.-H. 
Yang, Z. Zhang, and T. Berger, “Fixed-slope universal lossy data compression,” _IEEE Trans. Inf. Theory_ , vol. 43, no. 5, pp. 1465–1476, 1997. * [29] A. Krizhevsky, G. Hinton _et al._ , “Learning multiple layers of features from tiny images,” 2009. * [30] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2980–2988. * [31] W. Wan, Y. Zhong, T. Li, and J. Chen, “Rethinking feature distribution for loss functions in image classification,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 9117–9126. * [32] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” _Proceedings of the IEEE_ , vol. 86, no. 11, pp. 2278–2324, 1998. | En-Hui Yang (M’97-SM’00-F’08) received the B.S. degree in applied mathematics from Huaqiao University, China, Ph.D. degree in mathematics from Nankai University, China, and Ph.D. degree in electrical engineering from the University of Southern California, USA, in 1986, 1991, and 1996, respectively. Since June 1997, he has been with the Department of Electrical and Computer Engineering, University of Waterloo, ON, Canada, where he is currently a Professor and Canada Research Chair, and the founding Director of the Leitch- University of Waterloo multimedia communications lab. A co-founder of SlipStream Data Inc. (now a subsidiary of BlackBerry) and the founder of BicDroid Inc., he currently also serves as an Executive Council Member of China Overseas Friendship Association, an Expert Advisor for the Overseas Chinese Affairs Office of the State Council of China, a Board Governor of the University of Waterloo, a Board Trustee of Huaqiao University, a member of IEEE Founders Medal Committee, and advisors for other national and provincial bodies. His current research interests are: multimedia compression, digital communications, information theory, source and channel coding, image and video coding, deep learning, big data analytics, and information security. Dr. Yang is a recipient of several awards and honors, a partial list of which includes the 2021 IEEE Eric E. Sumner Award, the prestigious Inaugural Premier’s Catalyst Award in 2007 for the Innovator of the Year; the 2007 Ernest C. Manning Award of Distinction, one of the Canada’s most prestigious innovation prizes; the 2013 CPAC Professional Achievement Award; the 2014 IEEE Information Theory Society Padovani Lecture; and the 2014 FCCP Education Foundation Award of Merit. With over 230 papers and more than 230 patents/patent applications worldwide, his research work has benefited people over 170 countries through commercialized products, video coding open sources, and video coding standards. He is a Fellow of the Canadian Academy of Engineering and a Fellow of the Royal Society of Canada: the Academies of Arts, Humanities and Sciences of Canada. 
He served, inter alia, as a review panel member for the International Council for Science; a general co-chair of the 2008 IEEE International Symposium on Information Theory; an Associate Editor for IEEE Transactions on Information Theory; a Technical Program Vice- Chair of the 2006 IEEE International Conference on Multimedia and Expo (ICME); the Chair of the award committee for the 2004 Canadian Award in Telecommunications; a Co-Editor of the 2004 Special Issue of the IEEE Transactions on Information Theory; and a Co-Chair of the 2003 Canadian Workshop on Information Theory. ---|--- | Shayan Mohajer Hamidi received the B.Sc. degree from Sharif University, Tehran, Iran, in 2016, and the MASc. degree from the University of Waterloo, Waterloo, ON, Canada, in 2018, both in electrical engineering. He was a research assistant with the CST lab at the University of Waterloo from 2018 to 2020. He is currently working toward his Ph.D. degree in Electrical Engineering at the University of Waterloo. His current research interests include machine learning, optimization, information theory. ---|--- | Linfeng Ye received the B.Sc. degree from Xi’an University of Science and Technology, Xi’an, China in 2020, in microelectronics, and the MEng. degree from the University of Waterloo, Waterloo, ON, Canada, in 2021, in electrical engineering. He is currently working towards his MASc. degree in electrical engineering at University of Waterloo. His current research interests include machine learning, and information theory. ---|--- | Renhao Tan received the Bachelor of Advanced Computing (Honours) degree from the Australian National University, Canberra, ACT, Australia, in 2021. He is currently pursuing a MASc. degree in Electrical & Computer Engineering at University of Waterloo, Waterloo, ON, Canada. His research interests include machine learning, computer vision and data mining. ---|--- | Beverly Yang received the BASc degree in Geological Engineering from the University of Waterloo in 2020. She is currently pursuing a PhD in mining engineering at the University of British Columbia. Her research interests include rock mechanics and machine learning. ---|---
# Unified theory of the nonlinear Schrödinger equation David B. Reinhardt<EMAIL_ADDRESS><EMAIL_ADDRESS>German Aerospace Center (DLR), Institute of Quantum Technologies, Wilhelm-Runge- Straße 10, 89081 Ulm, Germany Dean Lee Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, MI 48824, USA Wolfgang P. Schleich Institut für Quantenphysik and Center for Integrated Quantum Science and Technology (IQST), Universität Ulm, D-89069 Ulm, Germany Hagler Institute for Advanced Study at Texas A$\&$M University, Texas A$\&$M AgriLife Research, Institute for Quantum Science and Engineering (IQSE), and Department of Physics and Astronomy, Texas A$\&$M University, College Station, Texas 77843-4242, USA Matthias Meister<EMAIL_ADDRESS><EMAIL_ADDRESS>German Aerospace Center (DLR), Institute of Quantum Technologies, Wilhelm-Runge-Straße 10, 89081 Ulm, Germany (July 14, 2023) ###### Abstract The nonlinear Schrödinger equation (NLSE) is a rich and versatile model, which in one spatial dimension has stationary solutions similar to those of the linear Schrödinger equation as well as more exotic solutions such as solitary waves and quantum droplets. We present a unified theory of the NLSE, showing that all stationary solutions of the cubic-quintic NLSE can be classified according to a single number called the cross-ratio. Any two solutions with the same cross-ratio can be converted into one another using a conformal transformation, and the same also holds true for traveling wave solutions. In this way we demonstrate a conformal duality between solutions of cubic-quintic NLSEs and lower-order NLSEs. The same analysis can be applied to the Newtonian dynamics of classical particles with polynomial potentials. Our framework provides a deeper understanding of the connections between the physics of the NLSE and the mathematics of algebraic curves and conformal symmetry. Introduction – The nonlinear Schrödinger equation (NLSE) is ubiquitous in physics, where it plays a key role in plasma physics [1, 2, 3], hydrodynamics [4, 5, 6], degenerate quantum gases [7, 8] and light propagation in nonlinear fiber optics [9, 10, 11, 12]. Understanding the possible solutions of the NLSE is therefore of great importance for a large variety of purposes whether they are application-oriented or fundamental. In this Letter we point out a conformal duality between different classes of solutions and even different orders of the NLSE. This conformal mapping provides a unified picture of the cubic- and the cubic-quintic NLSE and even establishes a direct link to the linear Schrödinger equation. Moreover, our method allows us to systematically classify the complete solution spaces of these equations. The linear Schrödinger equation typically features oscillating and constant- amplitude solutions which have their counterparts in the NLSE. However, there also exist solutions which are uniquely nonlinear such as solitary waves [13, 14, 15, 16, 17, 18, 19] which are of versatile interest in physics [20, 21, 22, 23, 24]. Considering (multiple) higher-order self-modulating terms like in the cubic-quintic NLSE drastically expands the solution space allowing for instance for bright and dark soliton pairs [25] and solitons with power law tail decay [26]. Although the different polynomial NLSEs have been studied in great detail [27, 28, 29, 30, 31, 32, 33, 34, 35, 36] there exists so far no unified theory linking their solution spaces. 
In this work, we identify a large family of conformal dualities for the one-dimensional time-independent cubic-quintic NLSE. These dualities allow us to establish conformal maps between solutions of the cubic-quintic NLSE and even to conformally reduce the cubic-quintic to the cubic NLSE and the linear Schrödinger equation, highlighting that the lower-order equations are essentially conformal limiting cases of the cubic-quintic NLSE. Conformal dualities are of particular interest in physics [37, 38, 39], with famous instances being the Kramers–Wannier duality in statistical mechanics [40], relating high- and low-temperature limits of the free energy in certain Ising models, and the Montonen–Olive duality [41], a generalization of the electro-magnetic symmetry of Maxwell’s equations in quantum field theory. The new and useful insights into the conformal duality of NLSEs presented in this Letter can directly be transferred to Newtonian mechanics, relating the motion of classical particles in various harmonic and anharmonic potentials. In fact, there is a remarkable similarity between the solutions of the NLSE and Newtonian dynamics. Finally, our theory provides a fundamental understanding of two- and three-body contact interactions being inherently related in one-dimensional Bose-Einstein condensates [42, 43, 44, 45] described by higher-order Gross-Pitaevskii equations [46, 47, 48, 14, 49, 17]. Nonlinear Schrödinger equation – We consider the dimensionless time-independent cubic-quintic NLSE $\displaystyle\left(-\frac{1}{2}\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+a_{3}\absolutevalue{\psi}^{2}+\frac{a_{4}}{2}\absolutevalue{\psi}^{4}\right)\psi=a_{2}\psi$ (1) in one spatial dimension of coordinate $x$, where $\psi=\psi\left(x\right)$ is the complex-valued wave function and $a_{2}$, $a_{3}$, and $a_{4}$ are constants [50]. By omitting position-dependent potentials, we focus on the homogeneous case with either box or periodic boundary conditions. The amplitude-phase representation $\psi\equiv\sqrt{\sigma}\exp\left(i\phi\right)$ casts Eq. (1) into the differential equations [28, 30, 25] $\displaystyle\left(\frac{\mathrm{d}\sigma}{\mathrm{d}x}\right)^{2}=P\left(\sigma\right)$ (2) for the density $\sigma=\sigma(x)$ with the quartic polynomial $\displaystyle P\left(\sigma\right)\equiv\frac{4}{3}a_{4}\,\sigma^{4}+4a_{3}\,\sigma^{3}-8a_{2}\,\sigma^{2}-16a_{1}\,\sigma-4a_{0}$ (3) and $\displaystyle\frac{\mathrm{d}\phi}{\mathrm{d}x}$ $\displaystyle=\pm\frac{\sqrt{a_{0}}}{\sigma}$ (4) for the phase $\phi=\phi(x)$. Here $a_{0},a_{1}$ are constants of integration and the different signs in Eq. (4) refer to the two possible directions of the flow induced by the phase gradient. Obviously, the order of the polynomial $P=P(\sigma)$ directly depends on the leading nonlinearity in Eq. (1) yielding a cubic or quartic polynomial in the case of the cubic ($a_{4}=0$) or cubic-quintic NLSE, while the polynomial is quadratic for the linear Schrödinger equation ($a_{3}=a_{4}=0$). Classification of solutions – The stationary solutions of Eq. (1) are in general determined [30] by the polynomial $P$, defined by Eq. (3), and its discriminant $\Delta=a_{4}^{6}\prod_{j\neq k}(\sigma_{j}-\sigma_{k})$ given by the roots $\sigma_{j}$ of $P$.
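Although not part of the original Letter, a short numerical sketch may help make this classification tangible. The Python snippet below (using arbitrary, made-up coefficients) assembles the quartic polynomial of Eq. (3), finds its roots, and evaluates the sign of $\Delta$, which determines the solution classes discussed next.

```python
import numpy as np

def P_coefficients(a0, a1, a2, a3, a4):
    """Coefficients of the quartic P(sigma) of Eq. (3), highest power first."""
    return [4.0 * a4 / 3.0, 4.0 * a3, -8.0 * a2, -16.0 * a1, -4.0 * a0]

def classify(a0, a1, a2, a3, a4, tol=1e-8):
    roots = np.roots(P_coefficients(a0, a1, a2, a3, a4))
    # Delta = a4^6 * prod_{j != k} (sigma_j - sigma_k), computed from the roots.
    delta = a4**6 * np.prod([roots[j] - roots[k]
                             for j in range(4) for k in range(4) if j != k]).real
    if abs(delta) < tol:
        label = "Delta = 0: multiple roots"
    elif delta > 0:
        label = "Delta > 0: only simple roots"
    else:
        label = "Delta < 0: simple complex-conjugate roots"
    return roots, delta, label

# Two made-up parameter sets (a0, a1, a2, a3, a4); the values are purely illustrative.
for params in [(0.02, 0.01, -0.4, 1.0, 0.6),   # generic constants of integration
               (0.0, 0.0, -0.4, 1.0, 0.6)]:    # a0 = a1 = 0 forces a double root at 0
    roots, delta, label = classify(*params)
    print(np.round(np.sort_complex(roots), 3), "->", label)
```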
Depending on the sign of $\Delta$, three classes of solutions can be identified: (i) simple complex conjugated roots ($\Delta<0$), (ii) multiple roots ($\Delta=0$), or (iii) only simple roots ($\Delta>0$) of $P$. In order to discuss the roots $\sigma_{j}$ of $P$ it is convenient to introduce the tuple notation ($r_{4}$, $r_{3}$, $r_{2}$, $r_{1}$), where every entry $r_{m}$ denotes the number of roots at order $m$. For instance $(0,0,0,4)$ labels a polynomial with four simple real roots as displayed in Fig. 1a, while a polynomial with two simple real roots and two simple complex-conjugated roots as shown in Fig. 1d is labeled by $(0,0,0,2+2_{{\mathbb{C}}})$. Figure 1: Conformal mapping between two realizations of the cubic-quintic NLSE with discriminant $\Delta>0$ (a–c) and $\Delta<0$ (d–f). (a,d) Polynomials $P(\sigma)$ and $\tilde{P}(\tilde{\sigma})$, defined by Eq. (2), with four simple roots $\sigma_{j}$ (a) or two simple and two complex roots $\tilde{\sigma}_{j}$ (d), respectively. (b,e) Oscillating phase-space trajectories corresponding to real (blue) and complex (red) solutions determined by $P$ in (a,d). The two cases (a–c) and (d–f) are related by a conformal transformation, Eq. (5), which maps the positions of the roots, the polynomials, and the corresponding phase-space orbits into each other. (c,f) Complex density plane of $P$ and $\tilde{P}$ shown in (a,d) illustrating the positions of the roots $\sigma_{j}$ and $\tilde{\sigma}_{j}$ (black dots) as well as the argument of the phase of the density $\sigma$ (color map). The lines of constant real and imaginary part of the density $\sigma$ form a square grid (c) which is mapped into a grid of circles (f) by the conformal transformation due to changing the sign of $\Delta$. The roots $\sigma_{j}$ are thus mapped from a straight line (c) to a circle (f) with counter-clockwise orientation starting from the first real root. Likewise, the cloverleaf-shaped boundary $Q$ shown in (c) is the inverse image of the square boundary shown in (f) while the center of the angle-shaped region in (f) corresponds to the point at infinity in (c). The explicit solutions of the stationary NLSE are obtained by direct integration of Eqs. (2) and (4) in the region between two neighboring real roots. Consequently, oscillatory solutions of Eq. (2) occur between two neighboring simple real roots which define the minimum and maximum density of the oscillation as displayed by the three closed phase-space orbits in Fig. 1b which originate from the polynomial in Fig. 1a. Complex conjugate roots with finite imaginary parts can therefore not be the turning points of such solutions, but instead deform the resulting orbits spanned between other real roots as illustrated in Fig. 1e. Due to the finite order of $P$, there is thus only one oscillatory orbit possible for the polynomial shown in Fig. 1d. For polynomials with a multiple root (shown in Fig. 2b for the case $(0,0,1,2)$), solitonic and other more exotic solutions emerge [51]. In fact, the multiple root acts as a bifurcation point in phase space constituting a separatrix for the phase-space trajectories separating the two other solution classes [25].
Moreover, there always exists a constant amplitude solution at the density value of the multiple root. Finally, the outer density regions which are restricted by only one real root typically lead to unbounded solutions with poles. For instance the light orange shaded region of the polynomial displayed in Fig. 2c yields such an unbounded solution [51]. Consequently, the sign of the discriminant $\Delta$ and thus the nature of the roots of $P$ not only determine the character and shape of the resulting solutions, but also the total number of different solutions for a given set of parameters. Indeed, according to Eq. (2) physically meaningful real solutions require $P(\sigma)>0$ between the roots considered, in addition to any restrictions set by the boundary conditions of the system under study, while for $P(\sigma)<0$ complex density solutions emerge. Hence, this approach enables a straightforward and systematic classification of all possible stationary solutions of higher-order NLSEs. Conformal duality – In the phase space $(\sigma,\sigma^{\prime})$ with $\sigma^{\prime}\equiv\mathrm{d}\sigma/\mathrm{d}x$, the differential equation Eq. (2) constitutes an elliptic curve. A key characteristic of elliptic curves is the possibility to transform their underlying algebraic equation by rational transformations [52]. Strikingly, in the case of the NLSE the Möbius transformation can be adapted to the differential equation Eq. (2) leading to the conformal map $\displaystyle\sigma\left(x\right)=\frac{A\,\tilde{\sigma}\left(\tilde{x}\right)+B}{C\,\tilde{\sigma}\left(\tilde{x}\right)+D}$ (5) of the densities $\sigma$ and $\tilde{\sigma}$ with the generally complex- valued coefficients $A,B,C,D$. In contrast to the mapping of elliptic curves, here the spatial coordinate $x$ also needs to be transformed according to the affine transformation $x=x_{0}+\left(AD-BC\right)\tilde{x}$, where $x$ can become complex-valued and $x_{0}$ is a constant. The duality, Eq. (5), relates any two physical systems with the same real- valued cross-ratio $k^{2}$ which is an invariant of the transformation determined by the roots $\sigma_{j}$ of $P$ [51]. Note that the conformal character of the Möbius transformation will preserve the angles in the complex density plane by mapping every straight line of constant density into another line or circle of constant density, and vice versa. The gradient of the phase, determined by Eq. (4), enjoys a similar transformation [51] $\displaystyle\frac{\mathrm{d}\phi}{\mathrm{d}x}=\pm\sqrt{a_{0}}\,\dfrac{D\,\frac{\mathrm{d}\tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,C}{B\,\frac{\mathrm{d}\tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,A}$ (6) with the very same coefficients $A,B,C,D$ as in Eq. (5). As a result, the combination of Eqs. (5) and (6) provides the complete conformal mapping of the differential equations under study. Hence, different realizations of the cubic-quintic NLSE are conformally related establishing a fundamental connection between their solution spaces. In particular, these transformations also apply to the solutions of density $\sigma$ and phase $\phi$ themselves such that the complete complex wave function $\psi$ can be conformally mapped. We emphasize that the conformal duality remains intact for traveling-wave solutions of the NLSE such as solitary waves subjected to a velocity boost. 
This effect is a direct consequence of the Galilean covariance of the NLSE allowing one to obtain arbitrarily many additional solutions through the application of Galilean transformations [51]. Conformal mapping and reduction of the NLSE – Depending on the choice of the transformation coefficients, different scenarios can be realized. Indeed, the conformal map, Eq. (5), directly relates different quartic polynomials with each other and therefore their solution spaces. In this case the ratio $A/C$ must not match the value of any of the roots of the involved polynomials to preserve their quartic order. Real-valued coefficients $A,B,C,D$ connect polynomials within a given solution class, while complex coefficients make it possible to change the solution class, corresponding to a change of sign of $\Delta$. Figure 1 shows an intriguing example of the latter case, where a $(0,0,0,4)$-polynomial is mapped to one classified by $(0,0,0,2+2_{{\mathbb{C}}})$. Despite the fact that the graphs of the polynomials $P$ and $\tilde{P}$ (a,d) and their phase-space orbits (b,e) appear quite distinct in Fig. 1, they are intimately connected as visualized by the density maps (c,f). Here, the conformal character of the transformation manifests itself by transforming the straight line connecting all four simple roots in Fig. 1c to a circle which passes again through all (now partly complex) roots in Fig. 1f. In the same way, the rectangular boundary of the density plot in Fig. 1f is mapped into the cloverleaf-shaped boundary displayed in Fig. 1c. Figure 2: Conformal reduction from the cubic-quintic (b) to the cubic NLSE (c) and the linear Schrödinger equation (a). Exemplary polynomials $P$ (green) and $-P$ (orange), Eq. (2), with (a) two simple roots (black dots), (b) two simple roots $\sigma_{3}$, $\sigma_{4}$ and one double root $\sigma_{1,2}$ (encircled black dot), or (c) one simple and one double root. By moving the roots $\sigma_{4}$ (or $\sigma_{1,2}$) to plus (minus) infinity the cubic-quintic NLSE can be reduced to the cubic NLSE or the linear Schrödinger equation, respectively. The green (soliton solutions) and orange (oscillating solutions) fillings show which of the density regions (and corresponding solutions) are mapped into each other featuring similar characteristics. The light shaded green and orange areas in (a) and (c) illustrate unbounded solutions. Moreover, as depicted in Fig. 2, it is possible to conformally reduce the cubic-quintic NLSE to either the cubic NLSE or the linear Schrödinger equation by mapping either an outer simple root or an outer double root to plus or minus infinity, respectively. In these cases the ratio $A/C$ must match the value of the roots to be moved. As a consequence, the overall degree of the polynomial is reduced by one (simple root moved) or two (double root moved). Analogously, the linear Schrödinger equation with an energy eigenvalue of zero is obtained by removing a triple root. By reducing the degree of the polynomial the solution space changes based on Eq. (5) and as illustrated in Fig. 2: (i) one unbounded solution vanishes because the root constituting its minimum or maximum density has been removed, (ii) a bound solution becomes unbound since it is now only restricted by one root, and (iii) the remaining solutions get transformed, but keep their main characteristics as their roots retain their order. The case shown in Fig.
2 highlights two prominent solitonic solutions, namely the flat-top soliton [14] (green shaded area in b) and the elementary bright soliton [53] (green shaded area in c) which are both governed by a hyperbolic cosine in the denominator of their density profile. By the transformation from the cubic-quintic to the cubic NLSE only the prefactor in front of the hyperbolic cosine gets changed such that both solutions are quite similar [51]. Likewise, the oscillatory solution in the cubic-quintic case (orange area in Fig. 2b) governed by a cosine in the denominator as well changes its prefactor when transformed. However, in this case the corresponding solution of the cubic case (light orange area in Fig. 2c) becomes unbound due to the now different prefactor and has thus completely changed its character by the transformation. Fascinatingly, the solutions of the green and orange regions in Fig. 2 are also interconnected by a transformation that maps a real position coordinate $x$ to a purely imaginary position $\tilde{x}$, changing the functional dependency from a hyperbolic sine (green) to a trigonometric sine (orange). Effectively, this transformation thus flips the overall sign of the polynomial from $P$ to $-P$. In this way all the solutions of the cubic and cubic-quintic NLSE as well as the linear Schrödinger equation are fundamentally connected. Connection to Newtonian mechanics – Besides the importance of the unified theory of the NLSE, the conformal duality discussed in this Letter also has strong implications for the dynamics of classical particles subjected to anharmonic conservative potentials in Newtonian mechanics. Indeed, it is well-known that the NLSE formally constitutes a classical Hamiltonian system for the density $\sigma$ with the Hamiltonian function [54] $\displaystyle\mathcal{H}(\sigma^{\prime},\sigma)\equiv\frac{1}{2}{\sigma^{\prime\,}}^{2}+U\left(\sigma\right)$ (7) with the potential $U=U\left(\sigma\right)$. Here, the density $\sigma$ will be analogous to the classical position, while the spatial coordinate $x$ corresponds to time in classical mechanics. By constraining the energy of $\mathcal{H}$ to the value of $-4a_{0}$ and considering the potential $U\left(\sigma\right)\equiv 2a_{0}-P\left(\sigma\right)/2$ one can recover [25] Eq. (2). Hence, in this analogy the nonlinearities in Eq. (1) directly correspond to anharmonic contributions in $U$ with the cubic or cubic-quintic NLSE giving rise to a cubic or quartic potential, respectively, while the linear Schrödinger equation yields a harmonic potential as usual. The conformal map, Eq. (5), now allows us to transform the Hamiltonian, Eq. (7), of a classical particle and consequently its underlying equations of motion. Thus, we can map a double-well problem to another double-well problem, or carry out the conformal reduction from a quartic to a cubic, quadratic, linear or constant potential by mapping a simple, double, triple or quadruple root of the potential to plus or minus infinity. As a result, soliton solutions, as shown in Fig. 2, in our classical mechanics analogy correspond to an oscillation with an infinitely long period where the particle in phase space approaches a bifurcation point similar to a mathematical pendulum, where the angular coordinate approaches the unstable fixed point at $\pi$ radians. Analogously, unbounded solutions are the counterpart of scattering states of the corresponding classical potentials.
Hence, the ideas and concepts for treating physics problems of classical particles in anharmonic potentials have a direct correspondence to those required for the NLSE. Remarkably, both systems enable the conformal mapping of their solutions within and between different solution classes. Conclusion – In summary we have provided a unified picture of the NLSE by establishing a conformal duality between the solution spaces of the cubic- quintic and cubic NLSE as well as the linear Schrödinger equation. This connection gives rise to novel and elementary understanding of the physics of nonlinear systems, in particular, when comparing the effect of nonlinearities of different degrees. Our results apply to stationary and travelling-wave solutions of the NLSE and remain valid even under Galilean transformations. We therefore expect our findings to have a wide variety of applications that include the dynamics of solitons and their dual counterparts, mode structures in nonlinear fiber optics, hydrodynamic wave-dynamics, and the interplay of two- and three-body interactions in quasi-1D Bose-Einstein condensates as utilized for atomtronics devices [55, 56]. In addition, the conformal duality can be employed for Newtonian mechanics allowing us to classify and relate numerous different dynamical systems caused by anharmonic conservative potentials. Finally, our algebraic-geometric classification scheme can straightforwardly be extended to even higher order NLSEs such as the cubic-quintic-septic NLSE [57] to search for new physics in the form of exotic solutions that require strong nonlinearities of this kind. Acknowledgments – We thank M.A. Efremov for fruitful discussions and helpful suggestions. D.L. acknowledges financial support from the U.S. Department of Energy (DE-SC0021152, DE-SC0013365, DE-SC0023658, SciDAC-5 NUCLEI Collaboration). W.P.S. is grateful to Texas A$\&$M University for a Faculty Fellowship at the Hagler Institute for Advanced Study at the Texas A$\&$M University as well as to the Texas A$\&$M AgriLife Research. ## References * Bohm and Gross [1949a] D. Bohm and E. P. Gross, Phys. Rev. 75, 1851 (1949a). * Bohm and Gross [1949b] D. Bohm and E. P. Gross, Phys. Rev. 75, 1864 (1949b). * Hasegawa [1975] A. Hasegawa, _Plasma Instabilities and Nonlinear Effects_ (Springer, Berlin, Heidelberg, 1975). * Zakharov [1968] V. E. Zakharov, J. Appl. Mech. Tech. Phys. 9, 190 (1968). * Peregrine [1983] D. H. Peregrine, The ANZIAM Journal 25, 16 (1983). * Kuznetsov _et al._ [1986] E. A. Kuznetsov, A. M. Rubenchik, and V. E. Zakharov, Phys. Rep. 142, 103 (1986). * Dalfovo _et al._ [1999] F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 71, 463 (1999). * Giorgini _et al._ [2008] S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. 80, 1215 (2008). * Haus and Wong [1996] H. A. Haus and W. S. Wong, Rev. Mod. Phys. 68, 423 (1996). * Kivshar and Luther-Davies [1998] Y. S. Kivshar and B. Luther-Davies, Phys. Rep. 298, 81 (1998). * Lederer _et al._ [2008] F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, Phys. Rep. 463, 1 (2008). * Copie _et al._ [2020] F. Copie, S. Randoux, and P. Suret, Rev. Phys. 5, 100037 (2020). * Zakharov and Shabat [1971] V. E. Zakharov and A. B. Shabat, Zh. Eksp. Teor. Fiz. 61, 118 (1971), [Sov. Phys. JETP 34, 62 (1972)]. * Bulgac [2002] A. Bulgac, Phys. Rev. Lett. 89, 050402 (2002). * Muryshev _et al._ [2002] A. Muryshev, G. V. Shlyapnikov, W. Ertmer, K. Sengstock, and M. Lewenstein, Phys. Rev. Lett. 
89, 110401 (2002). * Konotop and Pitaevskii [2004] V. V. Konotop and L. Pitaevskii, Phys. Rev. Lett. 93, 240403 (2004). * Petrov and Astrakharchik [2016] D. S. Petrov and G. E. Astrakharchik, Phys. Rev. Lett. 117, 100401 (2016). * Zhou _et al._ [2021] Y. Zhou, H. Meng, J. Zhang, X. Li, X. Ren, X. Wan, Z. Zhou, J. Wang, X. Fan, and Y. Shi, Sci. Rep. 11, 11382 (2021). * Seidel _et al._ [2022] T. G. Seidel, S. V. Gurevich, and J. Javaloyes, Phys. Rev. Lett. 128, 083901 (2022). * Burger _et al._ [1999] S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. 83, 5198 (1999). * Denschlag _et al._ [2000] J. Denschlag, J. E. Simsarian, D. L. Feder, C. W. Clark, L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science 287, 97 (2000). * Strecker _et al._ [2002] K. E. Strecker, G. B. Partridge, A. G. Truscott, and R. G. Hulet, Nature 417, 150 (2002). * Becker _et al._ [2008] C. Becker, S. Stellmer, P. Soltan-Panahi, S. Dörscher, M. Baumert, E.-M. Richter, J. Kronjäger, K. Bongs, and K. Sengstock, Nat. Phys. 4, 496 (2008). * Kibler _et al._ [2015] B. Kibler, A. Chabchoub, A. Gelash, N. Akhmediev, and V. E. Zakharov, Phys. Rev. X 5, 041026 (2015). * Crosta _et al._ [2011] M. Crosta, A. Fratalocchi, and S. Trillo, Phys. Rev. A 84, 063809 (2011). * Hayata and Koshiba [1995] K. Hayata and M. Koshiba, Phys. Rev. E 51, 1499 (1995). * Akhmediev _et al._ [1987] N. N. Akhmediev, V. M. Eleonskii, and N. E. Kulagin, Theor. Math. Phys. 72, 809 (1987). * Gagnon [1989] L. Gagnon, J. Opt. Soc. Am. A 6, 1477 (1989). * Pushkarov and Tanev [1996] D. Pushkarov and S. Tanev, Opt. Commun. 124, 354 (1996). * Schürmann [1996] H. W. Schürmann, Phys. Rev. E 54, 4312 (1996). * Serkin and Hasegawa [2000] V. N. Serkin and A. Hasegawa, Phys. Rev. Lett. 85, 4502 (2000). * Carr _et al._ [2000a] L. D. Carr, C. W. Clark, and W. P. Reinhardt, Phys. Rev. A 62, 063610 (2000a). * Carr _et al._ [2000b] L. D. Carr, C. W. Clark, and W. P. Reinhardt, Phys. Rev. A 62, 063611 (2000b). * Wamba _et al._ [2016] E. Wamba, A. Pelster, and J. R. Anglin, Phys. Rev. A 94, 043628 (2016). * Khawaja and Sakkaf [2019] U. A. Khawaja and L. A. Sakkaf, _Handbook of Exact Solutions to the Nonlinear Schrödinger Equations_ (IOP Publishing, Bristol, 2019). * Liu _et al._ [2021] Y.-Y. Liu, W.-D. Li, and W.-S. Dai, J. Phys. Commun. 5, 015011 (2021). * Burkhardt and Choi [1992] T. Burkhardt and J.-Y. Choi, Nucl. Phys. B 376, 447 (1992). * Nussinov _et al._ [2015] Z. Nussinov, G. Ortiz, and M.-S. Vaezi, Nucl. Phys. B 892, 132 (2015). * Ares _et al._ [2016] F. Ares, J. G. Esteve, F. Falceto, and A. R. de Queiroz, J. Stat. Mech. 2016, 043106 (2016). * Kramers and Wannier [1941] H. A. Kramers and G. H. Wannier, Phys. Rev. 60, 252 (1941). * Montonen and Olive [1977] C. Montonen and D. Olive, Phys. Lett., B 72, 117 (1977). * Schreck _et al._ [2001] F. Schreck, L. Khaykovich, K. L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, Phys. Rev. Lett. 87, 080403 (2001). * Görlitz _et al._ [2001] A. Görlitz, J. M. Vogels, A. E. Leanhardt, C. Raman, T. L. Gustavson, J. R. Abo-Shaeer, A. P. Chikkatur, S. Gupta, S. Inouye, T. Rosenband, and W. Ketterle, Phys. Rev. Lett. 87, 130402 (2001). * Greiner _et al._ [2001] M. Greiner, I. Bloch, O. Mandel, T. W. Hänsch, and T. Esslinger, Phys. Rev. Lett. 87, 160405 (2001). * Meyrath _et al._ [2005] T. P. Meyrath, F. Schreck, J. L. Hanssen, C.-S. Chuu, and M. G. 
Raizen, Phys. Rev. A 71, 041604 (2005). * Kolomeisky _et al._ [2000] E. B. Kolomeisky, T. J. Newman, J. P. Straley, and X. Qi, Phys. Rev. Lett. 85, 1146 (2000). * Abdullaev _et al._ [2001] F. K. Abdullaev, A. Gammal, L. Tomio, and T. Frederico, Phys. Rev. A 63, 043604 (2001). * Salasnich _et al._ [2002] L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A 65, 043614 (2002). * Cardoso _et al._ [2011] W. B. Cardoso, A. T. Avelar, and D. Bazeia, Phys. Rev. E 83, 036604 (2011). * Note [1] The indices $j$ of the coefficients $a_{j}$ are chosen such that they correspond to the exponents of the different powers of $\sigma$ of the polynomial P defined in Eq. (3). * Note [2] See Supplemental Material at the end of the document for details on the conformal transformation of the NLSE and explicit expressions of the solutions discussed in context of the conformal reduction shown in Fig. 2. The Supplemental Material cites Refs. [52, 59, 58]. * Bateman [1953] H. Bateman, _Higher Transcendental Functions [Volumes I-III]_ , Vol. 2 (McGraw-Hill Book Company, New York, 1953). * Pethick and Smith [2002] C. Pethick and H. Smith, _Bose-Einstein Condensation in Dilute Gases_ (Cambridge University Press, Cambridge, 2002). * Dauxois and Peyrard [2006] T. Dauxois and M. Peyrard, _Physics of Solitons_ (Cambridge University Press, Cambridge, 2006). * Amico _et al._ [2021] L. Amico _et al._ , AVS Quantum Science 3, 039201 (2021). * Amico _et al._ [2022] L. Amico, D. Anderson, M. Boshier, J.-P. Brantut, L.-C. Kwek, A. Minguzzi, and W. von Klitzing, Rev. Mod. Phys. 94, 041001 (2022). * Reyna and de Araújo [2017] A. S. Reyna and C. B. de Araújo, Adv. Opt. Photon. 9, 720 (2017). * Byrd and Friedman [1971] P. F. Byrd and M. D. Friedman, _Handbook of Elliptic Integrals for Engineers and Scientists_ (Springer, Berlin, Heidelberg, 1971). * Milne [1911] J. J. Milne, _An Elementary Treatise on Cross-Ratio Geometry, with Historical Notes_ (Cambridge University Press, Cambridge, 1911). Supplemental material for Unified theory of the nonlinear Schrödinger equation David B. Reinhardt1, Dean Lee2, Wolfgang P. Schleich3,4 and Matthias Meister1 1German Aerospace Center (DLR), Institute of Quantum Technologies, Wilhelm-Runge-Straße 10, 89081 Ulm, Germany 2Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, MI 48824, USA 3Institut für Quantenphysik and Center for Integrated Quantum Science and Technology (IQST), Universität Ulm, D-89069 Ulm, Germany 4Hagler Institute for Advanced Study at Texas A$\&$M University, Texas A$\&$M AgriLife Research, Institute for Quantum Science and Engineering (IQSE), and Department of Physics and Astronomy, Texas A$\&$M University, College Station, Texas 77843-4242, USA (Dated: July 14, 2023) ## I Conformal duality of the nonlinear Schrödinger equation In this supplemental material we derive the transformation of the gradient of the phase from the Möbius transformation of the density and further provide more insights into the transformation of the differential equations with quartic, cubic, and quadratic polynomial dependency in the density. In addition, the cross-ratio of the transformation is discussed in more detail. We conclude this supplement by providing explicit expressions for the conformal reduction of typical oscillating solutions of the NLSE and of the solitonic solutions shown in Fig. 2. In this context we also present the involved analytical solutions of the NLSE. 
### Basics of elliptic curves A common way to define elliptic- and hyper-elliptic curves is to introduce the relation $\displaystyle Y^{2}=P(X)$ (8) between the algebraic variables $X$ and $Y$ with the polynomial $\displaystyle P(X)\equiv\sum_{n=0}^{N}\alpha_{n}X^{n}$ (9) of order $N$ with coefficients $\alpha_{n}$. Note that for elliptic curves $N=3,4$, while for hyper-elliptic curves $N>4$ [58]. A key feature of elliptic or hyper-elliptic curves is the possibility to transform them into other elliptic or hyper-elliptic curves by rational transformations [52]. For instance the bi-rational transformations $\displaystyle X=\frac{A\tilde{X}+B}{C\tilde{X}+D}$ (10) and $\displaystyle Y=\frac{\tilde{Y}}{\left(CX+D\right)^{\frac{N}{2}}}$ (11) connect both algebraic variables $X$ and $Y$ with the corresponding new variables $\tilde{X}$ and $\tilde{Y}$ through the coefficients $A,B,C$ and $D$. ### I.1 Conformal transformation of the density and the phase gradient In the case of the nonlinear Schrödinger equation (NLSE) discussed here we identify the variables $X$ and $Y$ with $\sigma$ and $\sigma^{\prime}\equiv\mathrm{d}\sigma/\mathrm{d}x$ such that the transformation of the density is given by the relation $\displaystyle\sigma\left(x\right)=\frac{A\,\tilde{\sigma}\left(\tilde{x}\right)+B}{C\,\tilde{\sigma}\left(\tilde{x}\right)+D}$ (12) being of linear fractional type. In contrast, the transformation of the derivative of the density $\sigma^{\prime}$ is of nonlinear fractional type as is the mapping between the algebraic variables $Y$ and $\tilde{Y}$, Eq. (11). Nevertheless, in case of the NLSE the transformation of $\sigma^{\prime}$ is self-consistently obtained by differentiation of Eq. (12) $\displaystyle\frac{\mathrm{d}\sigma\left(x\right)}{\mathrm{d}x}$ $\displaystyle=\frac{\mathrm{d}\tilde{x}}{\mathrm{d}x}\frac{AD- BC}{\left(C\,\tilde{\sigma}\left(\tilde{x}\right)+D\right)^{2}}\frac{\mathrm{d}\tilde{\sigma}\left(\tilde{x}\right)}{\mathrm{d}\tilde{x}}$ (13) $\displaystyle=\frac{1}{\left(C\,\tilde{\sigma}\left(\tilde{x}\right)+D\right)^{2}}\frac{\mathrm{d}\tilde{\sigma}\left(\tilde{x}\right)}{\mathrm{d}\tilde{x}}\,,$ (14) where we have used the linear coordinate transformation $x=x_{0}+\left(AD- BC\right)\tilde{x}$ between $x$ and $\tilde{x}$ in the second step. The origin of the Möbius transformation of the phase gradient $\displaystyle\frac{\mathrm{d}\phi}{\mathrm{d}x}$ $\displaystyle=\pm\frac{\sqrt{a_{0}}}{\sigma}$ (15) is rooted in its reciprocal dependence on the density $\sigma$. Indeed, by inserting the conformal map of the density, Eq. (12), into Eq. (15) and dividing both the numerator and denominator by the density $\tilde{\sigma}$ we immediately obtain the transformation $\displaystyle\frac{\mathrm{d}\phi}{\mathrm{d}x}=\pm\sqrt{a_{0}}\,\dfrac{D\,\frac{\mathrm{d}\tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,C}{B\,\frac{\mathrm{d}\tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,A}\,,$ (16) of the phase gradient, where $\sqrt{\tilde{a}_{0}}$ is the flux in the transformed system. Thus, the phase gradient also transforms according to a Möbius transform with the same coefficients $A,B,C,D$ used for the density. Consequently, the transformations Eq. (12) and Eq. (16) of the density and gradient of the phase together enable a complete conformal mapping of the complex field $\psi=\sqrt{\sigma}\,\mathrm{e}^{\mathrm{i}\phi}$ of the NLSE. 
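Both transformation laws are easy to check numerically. The short Python sketch below, with arbitrarily chosen coefficients $A,B,C,D$ and constants $a_{0}$, $\tilde{a}_{0}$, verifies that the cross-ratio of four density values (cf. Eq. (27) below) is left unchanged by the Möbius map of Eq. (12), and that the phase-gradient relation of Eq. (16) follows from Eqs. (12) and (15). It is an illustrative check, not part of the original derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up Möbius coefficients and integration constants (illustrative values only).
A, B, C, D = 1.3, -0.4, 0.7, 2.1           # real here; complex values work the same way
a0, a0_t = 0.25, 0.81                       # a_0 and tilde{a}_0

def moebius(s):
    """Density map of Eq. (12): sigma = (A*s + B) / (C*s + D)."""
    return (A * s + B) / (C * s + D)

def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio Lambda as defined in Eq. (27)."""
    return (z3 - z1) / (z3 - z2) * (z4 - z2) / (z4 - z1)

# 1) The cross-ratio of four density values is invariant under the Möbius map.
s = rng.uniform(0.1, 3.0, size=4)           # four sample density values ("roots")
print(cross_ratio(*s), cross_ratio(*moebius(s)))    # identical up to round-off

# 2) The phase-gradient transformation, Eq. (16), follows from Eqs. (12) and (15).
sig_t = 1.7                                 # a sample value of tilde{sigma}
dphi_t = np.sqrt(a0_t) / sig_t              # Eq. (15) in the tilded system
lhs = np.sqrt(a0) / moebius(sig_t)          # d(phi)/dx computed directly via Eq. (15)
rhs = np.sqrt(a0) * (D * dphi_t + np.sqrt(a0_t) * C) / (B * dphi_t + np.sqrt(a0_t) * A)
print(lhs, rhs)                             # agree, confirming Eq. (16) (upper signs)
```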
### Conformal transformation of the polynomial equation The stationary solutions of the NLSE are governed by the differential equation (2), which features a polynomial $P(\sigma)$ of fourth order on the right-hand side. In order to analyze this equation it is convenient to consider the factored form $\displaystyle\left(\frac{\mathrm{d}\sigma}{\mathrm{d}x}\right)^{2}$ $\displaystyle=a_{4}\,\left(\sigma-\sigma_{1}\right)\left(\sigma-\sigma_{2}\right)\left(\sigma-\sigma_{3}\ \right)\left(\sigma-\sigma_{4}\ \right)$ (17) where $\sigma_{j}$ are the roots of $P$. By applying the transformations Eqs. (12) and (14), the transformed differential equation is given by $\displaystyle\left(\frac{\mathrm{d}\tilde{\sigma}}{\mathrm{d}\tilde{x}}\right)^{2}$ $\displaystyle=a_{4}\Big{(}\left(A-\sigma_{1}C\right)\sigma+B-\sigma_{1}D\Big{)}\cdot\Big{(}\left(A-\sigma_{2}C\right)\sigma+B-\sigma_{2}D\Big{)}\cdot\Big{(}\left(A-\sigma_{3}C\right)\sigma+B-\sigma_{3}D\Big{)}\cdot\Big{(}\left(A-\sigma_{4}C\right)\sigma+B-\sigma_{4}D\Big{)}$ $\displaystyle=\tilde{a}_{4}\,\Big{(}\tilde{\sigma}-\tilde{\sigma}_{1}\Big{)}\Big{(}\tilde{\sigma}-\tilde{\sigma}_{2}\Big{)}\Big{(}\tilde{\sigma}-\tilde{\sigma}_{3}\Big{)}\Big{(}\tilde{\sigma}-\tilde{\sigma}_{4}\Big{)}$ (18) with $\displaystyle\tilde{a_{4}}\equiv a_{4}\,C^{4}\left(\frac{A}{C}-\sigma_{1}\right)\left(\frac{A}{C}-\sigma_{2}\right)\left(\frac{A}{C}-\sigma_{3}\right)\left(\frac{A}{C}-\sigma_{4}\right)\,,$ (19) the new roots $\displaystyle\tilde{\sigma}_{j}\equiv\frac{D\sigma_{j}-B}{-C\sigma_{j}+A}$ (20) and $A/C\neq\sigma_{j}$. In general the conformal character of the transformation maps straight lines to circles and vice versa such that the new and original roots will always lie on a straight line or a circle. By choosing a specific set of coefficients $A,B,C$ and $D$, Eq. (18) can be used to relate two different solutions of the cubic-quintic NLSE as illustrated in Fig. 1. Moreover, the quartic polynomial can conformally be reduced to a cubic polynomial, describing the cubic NLSE, by mapping one of the outer roots to plus or minus infinity as shown in Fig. 2. In this case $A/C$ needs to be equal to the value of the root to be mapped. If, for instance, the first root $\sigma_{1}$ is mapped to minus infinity ($A/C=\sigma_{1}$), the resulting differential equation can be written as $\displaystyle\left(\frac{\mathrm{d}\rho}{\mathrm{d}\tilde{x}}\right)^{2}$ $\displaystyle=\tilde{a}_{3}\left(\rho-\rho_{1}\right)\left(\rho-\rho_{2}\right)\left(\rho-\rho_{3}\right)$ (21) $\displaystyle=\tilde{a}_{3}\rho^{3}+\tilde{a}_{2}\rho^{2}+\tilde{a}_{1}\rho+\tilde{a}_{0}$ (22) with $\displaystyle\tilde{a}_{3}\equiv a_{4}C^{2}\left(\frac{A}{C}-\sigma_{2}\right)\left(\frac{A}{C}-\sigma_{3}\right)\left(\frac{A}{C}-\sigma_{4}\right)\left(BC- AD\right)\,.$ (23) Here we have introduced the density $\rho$ of the cubic NLSE so as to distinguish between the various cases. In the case that the quartic polynomial, Eq. (2), exhibits a vanishing discriminant $\Delta$, multiple roots might emerge. If one of the outer roots is a double root, the degree of the polynomial can be reduced by two, giving rise to a simple parabola corresponding to the linear Schrödinger equation. By choosing e.g.
$A/C=\sigma_{1,2}$ we obtain a conic curve $\displaystyle\left(\frac{\mathrm{d}\lambda}{\mathrm{d}\tilde{x}}\right)^{2}$ $\displaystyle=e_{2}\,\left(\lambda-\lambda_{1}\right)\left(\lambda-\lambda_{2}\right)$ (24) $\displaystyle=e_{2}\lambda^{2}+e_{1}\lambda+e_{0}$ (25) describing the stationary states of the linear Schrödinger equation, where $\displaystyle e_{2}\equiv a_{4}\left(\frac{A}{C}-\sigma_{3}\right)\left(\frac{A}{C}-\sigma_{4}\right)\left(BC- AD\right)^{2}$ (26) and the density is labeled by $\lambda$. ### I.2 Details of the cross-ratio In the mathematics literature [59], the cross-ratio is sometimes also called the anharmonic ratio of four co-linear or concyclic complex numbers $z_{1},z_{2},z_{3}$ and $z_{4}$ and is typically defined as $\displaystyle\left(z_{1},z_{2};z_{3},z_{4}\right)=\frac{z_{3}-z_{1}}{z_{3}-z_{2}}\frac{z_{4}-z_{2}}{z_{4}-z_{1}}\equiv\Lambda\,.$ (27) Actually, this definition is not unique as the value will be unaltered for pairwise interchange of all points . Furthermore, all other possibilities of partitioning the four points can eventually be related to the cross-ratio $\Lambda$ defined in Eq. (27). For instance, the solutions of the “W”-shaped graph of the polynomial $P$ shown in Fig. 1a employ a cross-ratio or elliptic modulus given by the relation $\displaystyle k^{2}$ $\displaystyle\equiv\frac{\sigma_{4}-\sigma_{3}}{\sigma_{4}-\sigma_{2}}\frac{\sigma_{2}-\sigma_{1}}{\sigma_{3}-\sigma_{1}}=\frac{\Lambda-1}{\Lambda}\,.$ (28) Similarly, the solutions of a “M”-shaped graph use the relation $\displaystyle k^{{\prime}^{2}}=\frac{\sigma_{3}-\sigma_{2}}{\sigma_{4}-\sigma_{2}}\frac{\sigma_{4}-\sigma_{1}}{\sigma_{3}-\sigma_{1}}=1-k^{2}=\frac{1}{\Lambda}$ (29) often referred to as the complementary elliptic modulus [58]. In case of a multiple root of the polynomial $P$, the cross-ratio employed in solutions given in terms of Jacobi elliptic functions, will either correspond to the trigonometric limit where $k^{2}=0$ or the hyperbolic limit with $k^{2}=1$. ### Galilean invariance of the conformal duality #### I.2.1 Velocity boost for the time-dependent nonlinear Schrödinger equation Here, we discuss the Galilean covariance of the cubic-quintic NLSE by showing that any Galilean transformation leaves the underlying equation of motion invariant. We consider a solution $\psi\left(x,t\right)$ of the time-dependent cubic-quintic NLSE $\displaystyle i\frac{\partial}{\partial t}\psi\left(x,t\right)=\left(-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}+a_{3}\absolutevalue{\psi\left(x,t\right)}^{2}+a_{4}\absolutevalue{\psi\left(x,t\right)}^{4}\right)\psi\left(x,t\right)$ (30) in the rest frame $\mathcal{F}$ described by the coordinates $x$ and $t$. Applying a boost with the dimensionless velocity $u$ introduces the moving frame $\mathcal{F}^{\prime}$ with coordinates $x^{\prime}$ and $t^{\prime}$ which are related to the rest frame via $\displaystyle x$ $\displaystyle=x^{\prime}+ut^{\prime}$ (31) $\displaystyle t$ $\displaystyle=t^{\prime}\,.$ (32) To obtain the wave function $\psi^{\prime}\left(x^{\prime},t^{\prime}\right)$ in the boosted frame $\mathcal{F}^{\prime}$ we consider the transformation $\displaystyle\psi\left(x,t\right)=\psi^{\prime}\left(x^{\prime},t^{\prime}\right)\exp\left(iux^{\prime}+i\frac{u^{2}}{2}t^{\prime}\right)$ (33) of the wavefunction $\psi\left(x,t\right)$ in the rest frame with a phase containing the boost velocity and the additional kinetic energy due to the boost in the moving frame $\mathcal{F}^{\prime}$. 
Next, we will show that the Galilean transformation of the wavefunction and coordinates leaves the NLSE in Eq. (30) formally unchanged. For this purpose, we consider the transformations of the derivatives in Eq. (30) which are given by $\displaystyle\frac{\partial}{\partial t}=\frac{\partial}{\partial t^{\prime}}\cdot\frac{\partial t^{\prime}}{\partial t}+\frac{\partial}{\partial x^{\prime}}\cdot\frac{\partial x^{\prime}}{\partial t}=\frac{\partial}{\partial t^{\prime}}-u\frac{\partial}{\partial x^{\prime}}$ (34) and $\displaystyle\frac{\partial}{\partial x}=\frac{\partial}{\partial x^{\prime}}\frac{\partial x^{\prime}}{\partial x}+\frac{\partial}{\partial t^{\prime}}\frac{\partial t^{\prime}}{\partial x}=\frac{\partial}{\partial x^{\prime}}.$ (35) By substituting these derivatives into Eq. (30) and also inserting the transformation law for the wave function, Eq. (33), we obtain the differential equation $\displaystyle i$ $\displaystyle\left(\frac{\partial}{\partial t^{\prime}}-u\frac{\partial}{\partial x^{\prime}}\right)\psi^{\prime}\left(x^{\prime},t^{\prime}\right)\exp\left(iux^{\prime}+i\frac{u^{2}}{2}t^{\prime}\right)$ (36) $\displaystyle=\left(-\frac{1}{2}\frac{\partial^{2}}{\partial{x^{\prime}}^{2}}+a_{3}\absolutevalue{\psi^{\prime}\left(x^{\prime},t^{\prime}\right)}^{2}+a_{4}\absolutevalue{\psi^{\prime}\left(x^{\prime},t^{\prime}\right)}^{4}\ \right)\psi^{\prime}\left(x^{\prime},t^{\prime}\right)\exp\left(iux^{\prime}+i\frac{u^{2}}{2}t^{\prime}\right)$ describing the evolution of the boosted wave function $\psi^{\prime}\left(x^{\prime},t^{\prime}\right)$ in the moving frame $\mathcal{F}^{\prime}$. By performing the partial derivatives on both sides of the equation and factoring out the global phase factor $\exp\left(iux^{\prime}+i\frac{u^{2}}{2}t^{\prime}\right)$ the differential equation in the moving frame reduces to $\displaystyle i\frac{\partial}{\partial t^{\prime}}\psi^{\prime}\left(x^{\prime},t^{\prime}\right)=\left(-\frac{1}{2}\frac{\partial^{2}}{\partial{x^{\prime}}^{2}}+a_{3}\absolutevalue{\psi^{\prime}\left(x^{\prime},t^{\prime}\right)}^{2}+a_{4}\absolutevalue{\psi^{\prime}\left(x^{\prime},t^{\prime}\right)}^{4}\right)\psi^{\prime}\left(x^{\prime},t^{\prime}\right)\,.$ (37) The fact that Eqs. (30) and (37) have the same form proves the Galilean invariance of the cubic-quintic NLSE. #### I.2.2 Impact on stationary solutions In order to derive the impact of the velocity boost on stationary solutions, we start with the usual ansatz $\displaystyle\psi\left(x,t\right)=\psi\left(x,0\right)\exp\left(-ia_{2}t\right)$ (38) for the time-independent wave function $\psi(x,0)$ in the rest frame $\mathcal{F}$ with eigenvalue $a_{2}$ determined by Eq. (1). Applying the transformation, Eq. (33), to both the time-dependent and time- independent wave functions in Eq. (38) yields the relation $\displaystyle\psi^{\prime}\left(x^{\prime},t^{\prime}\right)=\psi^{\prime}\left(x^{\prime},0\right)\exp\left(-i\left(a_{2}+\frac{u^{2}}{2}\right)t^{\prime}\right)\,,$ (39) where we have also used the relation $t=t^{\prime}$, Eq. (32). Inserting this result into the time-dependent NLSE in the boosted frame, Eq. 
(37) allows us to obtain the time-independent NLSE in the boosted frame $\displaystyle\left(-\frac{1}{2}\frac{\partial^{2}}{\partial{x^{\prime}}^{2}}+a_{3}\absolutevalue{\psi^{\prime}\left(x^{\prime},0\right)}^{2}+a_{4}\absolutevalue{\psi^{\prime}\left(x^{\prime},0\right)}^{4}\right)\psi^{\prime}\left(x^{\prime},0\right)$ $\displaystyle=a_{2}^{\prime}\psi^{\prime}\left(x^{\prime},0\right)\,,$ (40) where $\displaystyle a_{2}^{\prime}\equiv a_{2}+\frac{u^{2}}{2}.$ (41) Consequently, the boost to the moving frame adds a velocity-dependent term to the eigenvalue $a_{2}^{\prime}$ of the stationary solution. This is just the kinetic energy associated with the Galilean boost. In the main text, we have shown that any two stationary solutions with the same cross-ratio can be converted into each other using conformal transformations. This fundamental result can now be extended to traveling wave solutions. We simply take the dual stationary solutions and perform Galilean boosts to produce the desired traveling waves. ### Explicit solutions for the conformal duality and reduction of the nonlinear Schrödinger equation #### I.2.3 Oscillating solutions As a first application of the conformal duality of the NLSE we show how typical oscillating solutions of the cubic and the cubic-quintic NLSE are connected via the transformation, Eq. (12). For this purpose we consider the real-valued solution $\displaystyle\rho\left(x_{2}\right)$ $\displaystyle=\left(\rho_{2}-\rho_{1}\right)\operatorname{sn}^{2}\left(\kappa_{2}x_{2},k\right)+\rho_{1},$ (42) which oscillates between the density values $\rho_{1}$ and $\rho_{2}$ corresponding to the roots of a cubic polynomial $P(\rho)$ with an “N”-shaped graph. Here $\operatorname{sn}$ refers to the Jacobi elliptic sine function with $\kappa_{2}=a_{3}\sqrt{\rho_{3}-\rho_{1}}/2$ and $k^{2}=(\rho_{2}-\rho_{1})/(\rho_{3}-\rho_{1})$. We now demonstrate the conformal duality between this solution and another solution for a quartic polynomial $P(\sigma)$. By applying the solution of the cubic NLSE, Eq. (42), to the transformation Eq. (12), and choosing $A/C=\sigma_{1}<0$, to pull a root from minus infinity, we obtain the solution $\displaystyle\sigma\left(x_{3}\right)$ $\displaystyle=\frac{\sigma_{2}\eta-\sigma_{1}\operatorname{sn}^{2}\left(\kappa_{3}\,x_{3},k\right)}{\eta-\operatorname{sn}^{2}\left(\kappa_{3}\,x_{3},k\right)},$ (43) of the cubic-quintic NLSE. Here, $\eta=\frac{\sigma_{3}-\sigma_{1}}{\sigma_{3}-\sigma_{2}}>1$ and $\kappa_{3}=\kappa_{2}/(AD-BC)$ such that the solution given by Eq. (43) oscillates between the densities $\sigma_{2}$ and $\sigma_{3}$ corresponding to the roots of a quartic polynomial $P(\sigma)$ with a “W”-shaped graph. Direct integration of the quartic polynomial of the cubic-quintic NLSE yields exactly the same solution as given by Eq. (43) proving the validity of the conformal transformation. The mapping coefficients of this conformal reduction are given by $\displaystyle A$ $\displaystyle\equiv\frac{\sigma_{1}}{\rho_{2}-\rho_{1}}\qquad$ $\displaystyle B\equiv-\sigma_{2}\eta-\sigma_{1}\frac{\rho_{1}}{\rho_{2}-\rho_{1}}$ (44) $\displaystyle C$ $\displaystyle\equiv\frac{1}{\rho_{2}-\rho_{1}}\qquad$ $\displaystyle D\equiv\eta-\frac{\rho_{1}}{\rho_{2}-\rho_{1}}.$ (45) The coordinates are related via the linear map $x_{3}=\left(AD- BC\right)x_{2}+\text{const.}$, where in this example the constant is set to zero for simplicity. #### I.2.4 Solitonic solutions Here, we provide the explicit transformations of the conformal reduction shown in Fig. 2. 
For the bright soliton solutions we naturally need a focusing or attractive nonlinearity with $\tilde{a}_{3}<0$, while the flat-top soliton likewise requires a focusing cubic nonlinearity $a_{3}<0$ and a defocusing quintic nonlinearity $a_{4}>0$. The analytical solutions for all orange and green shaded areas in Fig. 2, sorted according to their appearance in the figure from left to right, are given by $\displaystyle\lambda\left(x_{1}\right)$ $\displaystyle=\left(\lambda_{2}-\lambda_{1}\right)\sin^{2}\left(\kappa_{1}x_{1}\right)+\lambda_{1}\qquad$ $\displaystyle\sigma\left(x_{3}\right)=\sigma_{0}\eta\frac{1}{1+\sqrt{1-\eta}\cos\left(2\kappa_{3}x_{3}\right)}\qquad$ $\displaystyle\rho\left(x_{2}\right)=\rho_{0}\frac{1}{1+\cos\left(2\kappa_{2}x_{2}\right)}$ (46) $\displaystyle\tilde{\lambda}\left(\tilde{x}_{1}\right)$ $\displaystyle=\left(\lambda_{2}-\lambda_{1}\right)\sinh^{2}\left(\kappa_{1}\tilde{x}_{1}\right)+\lambda_{1}\qquad$ $\displaystyle\tilde{\sigma}\left(\tilde{x}_{3}\right)=\sigma_{0}\eta\frac{1}{1+\sqrt{1-\eta}\operatorname{cosh}\left(2\kappa_{3}\tilde{x}_{3}\right)}\qquad$ $\displaystyle\tilde{\rho}\left(\tilde{x}_{2}\right)=\rho_{0}\frac{1}{1+\cosh\left(2\kappa_{2}\tilde{x}_{2}\right)}\,,$ (47) where $\kappa_{1}\equiv\sqrt{\absolutevalue{e_{2}}}$, $\rho_{0}\equiv 4\tilde{a}_{2}/\tilde{a}_{3}$, $\kappa_{2}\equiv\sqrt{\absolutevalue{\tilde{a}_{2}}}$, $\sigma_{0}\equiv-3a_{3}/2a_{4}$, $\kappa_{3}\equiv\sqrt{\absolutevalue{a_{2}}}$, and $\eta\equiv-8a_{2}a_{4}/3a_{3}^{2}<1$. Again $\lambda$ refers to the density of the linear Schrödinger equation, $\rho$ corresponds to the density of the cubic NLSE, and $\sigma$ gives the density of the cubic-quintic NLSE. Now the transformation coefficients $A,B,C,D$, Eq. (12), of the conformal reduction from the cubic-quintic to the cubic NLSE, connecting the solutions $\sigma\left(x_{3}\right)$ and $\rho\left(x_{2}\right)$ or $\tilde{\sigma}\left(\tilde{x}_{3}\right)$ and $\tilde{\rho}\left(\tilde{x}_{2}\right)$, can be determined. For the particular set of solutions given by Eqs. (46) and (47) the phase is independent of position as they require a root at the origin such that the coefficient $a_{0}=0$ in Eq. (2). As a consequence, the transformation coefficient $B$ needs to be zero by construction in order to conformally relate these solutions. In addition, this transformation requires $AD=\frac{\kappa_{2}}{\kappa_{3}}>0$ and $A/C=\sigma_{0}\left(1+\sqrt{1-\eta}\right)=\sigma_{4}$. Ultimately, the transformation coefficients are thus given by $\displaystyle A$ $\displaystyle\equiv\sqrt{\frac{\kappa_{2}}{\kappa_{3}}}\sqrt{\frac{\rho_{0}\sigma_{0}\eta}{\sqrt{1-\eta}}}\qquad$ $\displaystyle B\equiv 0$ (48) $\displaystyle C$ $\displaystyle\equiv\sqrt{\frac{\kappa_{2}}{\kappa_{3}}}\sqrt{\frac{\rho_{0}}{\sigma_{0}}}\frac{1}{1+\sqrt{1-\eta}}\sqrt{\frac{\eta}{\sqrt{1-\eta}}}\qquad$ $\displaystyle D\equiv\sqrt{\frac{\kappa_{2}}{\kappa_{3}}}\sqrt{\frac{\sqrt{1-\eta}}{\rho_{0}\sigma_{0}\eta}}.$ (49) As the roots of the polynomial $P$ and $-P$ are identical, the transformation coefficients for the conformal reduction from $\sigma\left(x_{3}\right)$ to $\rho\left(x_{2}\right)$ (orange shaded areas in Fig. 2) are formally identical to those required for the reduction from $\tilde{\sigma}\left(\tilde{x}_{3}\right)$ to $\tilde{\rho}\left(\tilde{x}_{2}\right)$ (green shaded areas in Fig. 2). Hence, the coefficients given by Eqs. (48) and (49) apply to both cases.
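For a visual impression of the two soliton families, the closed-form densities of Eq. (47) can simply be evaluated and plotted. The sketch below uses made-up parameter values satisfying the stated constraints ($a_{3}<0$, $a_{4}>0$, $\eta<1$, $\tilde{a}_{3}<0$); note that the cubic and cubic-quintic parameters here are chosen independently for illustration only and are not linked through the mapping coefficients of Eqs. (48) and (49).

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative cubic-quintic parameters (made-up values with a3 < 0, a4 > 0).
a2, a3, a4 = -0.37, -1.0, 0.9
sigma0 = -3.0 * a3 / (2.0 * a4)          # sigma_0 = -3 a3 / (2 a4)
eta = -8.0 * a2 * a4 / (3.0 * a3**2)     # eta = -8 a2 a4 / (3 a3^2), here < 1
kappa3 = np.sqrt(abs(a2))

x = np.linspace(-8, 8, 801)

# Flat-top soliton of the cubic-quintic NLSE, Eq. (47).
flat_top = sigma0 * eta / (1.0 + np.sqrt(1.0 - eta) * np.cosh(2.0 * kappa3 * x))

# Elementary bright soliton of the cubic NLSE, Eq. (47), with made-up a2~, a3~ < 0.
a2_t, a3_t = -0.15, -1.0
rho0 = 4.0 * a2_t / a3_t                 # rho_0 = 4 a2~ / a3~  (> 0 here)
kappa2 = np.sqrt(abs(a2_t))
bright = rho0 / (1.0 + np.cosh(2.0 * kappa2 * x))

plt.plot(x, flat_top, label="flat-top soliton (cubic-quintic)")
plt.plot(x, bright, "--", label="bright soliton (cubic)")
plt.xlabel("position x")
plt.ylabel("density")
plt.legend()
plt.show()
```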
Finally, in order to connect the solutions of a polynomial $P$ to those of the inverse polynomial $-P$, one can for instance employ the coefficients $A=D=(1+i)/\sqrt{2},\,B=C=0$, which formally do not change the densities according to Eq. (12), but instead yield the coordinate transformation $x=\mathrm{i}\tilde{x}$. By transforming back to a real coordinate $\tilde{x}$, the imaginary unit is absorbed into the functional dependence of the density, inducing the change from a trigonometric to a hyperbolic function. In this way the conformal partnering of the trigonometric oscillating solution ${\sigma}\left({x}_{3}\right)$ and the resting droplet solution $\tilde{\sigma}\left(\tilde{x}_{3}\right)$ given by Eqs. (46) and (47) can be realized.
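As a brief numerical aside (not part of the original derivation), the Jacobi elliptic solutions used above can be evaluated directly with standard scientific libraries, and the limiting values of the elliptic modulus make the trigonometric and hyperbolic cases of Eqs. (46) and (47) explicit: $\operatorname{sn}(u,k)$ reduces to $\sin(u)$ for $k\to 0$ and to $\tanh(u)$ for $k\to 1$. In the sketch below, the roots $\rho_{1},\rho_{2},\rho_{3}$ and the coefficient $a_{3}$ are arbitrary placeholder values.

```python
import numpy as np
from scipy.special import ellipj

# Hypothetical roots of the cubic polynomial P(rho), with rho1 < rho2 < rho3, and a placeholder a3.
rho1, rho2, rho3 = 0.2, 0.8, 1.5
a3 = 1.0
kappa2 = a3 * np.sqrt(rho3 - rho1) / 2      # as defined after Eq. (42)
m = (rho2 - rho1) / (rho3 - rho1)           # elliptic parameter m = k^2

x2 = np.linspace(0.0, 20.0, 2001)
sn, cn, dn, ph = ellipj(kappa2 * x2, m)
rho = (rho2 - rho1) * sn**2 + rho1          # oscillating density of Eq. (42)
print(f"rho oscillates in [{rho.min():.3f}, {rho.max():.3f}], expected [rho1, rho2] = [{rho1}, {rho2}]")

# Limiting cases of the elliptic sine: trigonometric (m -> 0) and hyperbolic (m -> 1).
u = np.linspace(0.0, 3.0, 50)
print("max |sn(u,0) - sin(u)|  =", np.abs(ellipj(u, 0.0)[0] - np.sin(u)).max())
print("max |sn(u,1) - tanh(u)| =", np.abs(ellipj(u, 1.0)[0] - np.tanh(u)).max())
```

The printed range confirms that the density of Eq. (42) oscillates between $\rho_{1}$ and $\rho_{2}$, while the two limiting checks mirror the change from trigonometric to hyperbolic solutions discussed above.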
$e^{k}(j)$ is a canonical vector (see Sec. II.1). Since $s_{k}=W^{i}_{k}u_{i}$ is the outgoing strength and $s_{k}e^{k}(j)x_{j}(t)=s_{k}e^{k}(j)\delta^{i}_{j}x_{i}(t)$, the diffusion equation can be written in a more compact form as: $\displaystyle\frac{dx_{j}(t)}{dt}=-\mathcal{D}L^{i}_{j}x_{i}(t)\,,$ (66) where $L^{i}_{j}=W^{l}_{k}u_{l}e^{k}(j)\delta^{i}_{j}-W^{i}_{j}$ is the combinatorial Laplacian tensor [chung1997]. It is worth remarking here that this tensor differs from the normalized random walk Laplacian introduced in the previous section, although in some cases – as for a classical random walk – they are related by the simple relationship $\tilde{L}_{j}^{i}=(D^{-1})^{i}_{k}L^{k}_{j}$. The solution of Eq. (66) is given by $x_{j}(t)=x_{i}(0)e^{-\mathcal{D}L^{i}_{j}t}$, similarly to what we have seen for continuous-time random walks. However, at variance with random walks, it can be shown that in the stationary regime the solution takes the form $x_{j}(\infty)\propto u_{j}$, i.e., one will find exactly the same fraction of the quantity in each node, uniformly distributed. We might wonder how fast diffusion happens. Since the Laplacian matrix is positive semi-definite, it is possible to show that it can be decomposed as $\displaystyle L^{i}_{j}=Q^{i}_{h}\Lambda^{h}_{k}(Q^{-1})^{k}_{j},$ (67) where $Q^{i}_{h}$ is a matrix whose columns are the eigenvectors of the Laplacian and $\Lambda^{h}_{k}$ is a diagonal matrix whose entries are the Laplacian’s eigenvalues. This algebraic feature allows us to prove that $e^{-\mathcal{D}L^{i}_{j}t}=Q^{i}_{h}(e^{-\mathcal{D}\Lambda t})^{h}_{k}(Q^{-1})^{k}_{j}$, highlighting that the exponential decay of $x_{j}(t)$ is dominated by the smallest positive eigenvalue, which is usually denoted by $\Lambda_{2}$. Therefore, the diffusion temporal scale is given by $\tau\approx 1/\Lambda_{2}$.

Figure 35: Continuous-time diffusion on a duplex network, i.e., a multiplex consisting of two layers, obtained from two Erdős–Rényi networks with independent wiring probabilities $p_{1},p_{2}\in[0,1]$, used to connect pairs of nodes within the same layer. Diffusion speed is quantified by the second smallest eigenvalue of the Laplacian tensor, $\Lambda_{2}$: depending on the wiring probabilities, diffusion in the duplex can be faster (left-hand side panel) or slower (right-hand side panel) than in each layer separately. The condition for enhanced diffusion is $\Lambda_{2}^{\text{multiplex}}\geq\max\\{\Lambda_{2}^{\text{layer 1}},\Lambda_{2}^{\text{layer 2}}\\}$: a sharp change in the behavior of the characteristic temporal scale can be observed for varying weight of the inter-layer connections (left and right panels), with a clear transition between two distinct regimes above a certain critical value of the inter-layer coupling. The middle panel shows when the enhanced diffusion condition holds (encoded by colors), while varying the wiring probabilities. Figure from [de2016physics].

In the case of interconnected multilayer networks, the diffusion equation has been generalized by means of the supra-adjacency matrix [gomez2013diffusion] and the tensorial formulation [de2013mathematical]. In this new setup, a quantity can diffuse through inter-layer connections as well.
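To make the role of $\Lambda_{2}$ concrete, the following minimal sketch (with arbitrary network sizes, wiring probabilities and coupling weights, not taken from the cited works) builds the supra-Laplacian of a duplex with identity inter-layer coupling of weight $D_{x}$ and compares its second smallest eigenvalue with those of the isolated layers and of their superposition, which is the comparison summarized in Fig. 35.

```python
import numpy as np
import networkx as nx

def lambda2(L):
    """Second smallest eigenvalue of a symmetric Laplacian matrix."""
    return np.sort(np.linalg.eigvalsh(L))[1]

N = 200  # nodes per layer (placeholder value)
L1 = nx.laplacian_matrix(nx.erdos_renyi_graph(N, 0.05, seed=1)).toarray().astype(float)
L2 = nx.laplacian_matrix(nx.erdos_renyi_graph(N, 0.08, seed=2)).toarray().astype(float)

print("layer 1       :", lambda2(L1))
print("layer 2       :", lambda2(L2))
print("superposition :", lambda2((L1 + L2) / 2))   # Laplacian of the aggregated network

# Supra-Laplacian of the duplex: intra-layer Laplacians on the diagonal blocks and
# an inter-layer coupling of weight Dx between each node and its replica.
I = np.eye(N)
for Dx in (0.1, 1.0, 10.0):
    L_supra = np.block([[L1 + Dx * I, -Dx * I],
                        [-Dx * I, L2 + Dx * I]])
    print(f"Dx = {Dx:5.1f}  Lambda_2(multiplex) = {lambda2(L_supra):.4f}")
# For weak coupling, Lambda_2 of the multiplex grows as ~2*Dx; for strong coupling it
# approaches Lambda_2 of the superposition (L1 + L2)/2, which, depending on the wiring
# probabilities, can exceed that of each isolated layer (enhanced diffusion, cf. Fig. 35).
```

Since the diffusion time scale is $\tau\approx 1/\Lambda_{2}$, the printed values show directly how the inter-layer coupling can speed up or slow down relaxation on the duplex with respect to the isolated layers.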
If we indicate by $X_{i\alpha}(t)$ the rank-2 state tensor at time $t$, then the multilayer diffusion equation can be written as $\displaystyle\frac{dX_{j\beta}(t)}{dt}=M^{i\alpha}_{j\beta}X_{i\alpha}(t)-M^{i\alpha}_{k\gamma}U_{i\alpha}E^{k\gamma}(i\beta)X_{i\beta}(t)\,,$ (68) where $U_{i\alpha}=u_{i}u_{\alpha}$ and $E^{k\gamma}(i\beta)=e^{k}(i)e^{\gamma}(\beta)$. If we define the multilayer combinatorial Laplacian tensor as $L^{i\alpha}_{j\beta}=M^{l\epsilon}_{k\gamma}U_{l\epsilon}E^{k\gamma}(j\beta)\delta^{i\alpha}_{j\beta}-M^{i\alpha}_{j\beta},$ (69) the diffusion equation can be written more compactly as $\displaystyle\frac{dX_{j\beta}(t)}{dt}=-L^{i\alpha}_{j\beta}X_{i\alpha}(t)\,,$ (70) whose solution is given by $X_{j\beta}(t)=X_{i\alpha}(0)e^{-L^{i\alpha}_{j\beta}t}$, a clear generalization of the result obtained in the case of single-layer networks. Also in this case, the second smallest eigenvalue $\Lambda_{2}$ – calculated from the supra-adjacency matrix representation – governs the speed of diffusion [de2013mathematical, sole2013spectral, gomez2013diffusion], leading to interesting phenomena (see Fig. 35 for details).

##### Topological transition with diffusive processes

Figure 36: Algebraic phase transition. In (a) and (e), the intra-layer connectivity of a duplex (i.e., a multiplex consisting of 2 layers) is shown, where the color of each node refers to the value of the corresponding component in the Fiedler vector, i.e., the eigenvector associated with the second smallest eigenvalue $\lambda_{2}$. In (a) $p=p^{*}=0.602$ and in (e) $p=0.603$. The middle column displays the eigenvalue $\lambda_{2}$ (b), the inner product $\langle v_{A}|v_{B}\rangle$ (c), and, in (d), the inner products of $|v_{A}\rangle$ and $|v_{B}\rangle$ with the unit vector, i.e., the sum of their elements (see the text for details). Figure reproduced from [radicchi2013abrupt].

An important question concerns the emergence of such effects in spreading processes, among others. One can adapt the dynamical rules of models defined in monolayers to encompass the fact that the topology is more complex, and this, as we will see, can lead to substantially different behaviors than those observed in isolated networks. Another point of view, though, is to ask why and when a certain model could effectively behave as in an isolated network even though it runs on a layered structure, or, conversely, why and when the multilayer dimension dominates, thus disregarding any role of the individual networks of the layers. In some special cases, such as interconnected multiplex networks with identical coupling, it is possible to show the existence of two distinct regimes as a function of the inter-layer coupling strength [radicchi2013abrupt], highlighting how the multilayer structure can influence the outcome of several physical processes. By considering a duplex network – i.e., a multiplex with 2 layers, $A$ and $B$ – and by using spectral properties of multilayer systems, an abrupt transition between the two aforementioned regimes was found as a function of the coupling strength $p$. The two regimes are inferred by analyzing the behavior of eigenvectors and eigenvalues of the supra-Laplacian matrix and are clearly distinguishable, separated by a critical point $p^{*}$. For $p\leq p^{*}$, the second smallest eigenvalue
$\lambda_{2}$ (note that here we use $\lambda_{2}$ instead of $\Lambda_{2}$, as in the previous section, to keep the same notation as the original paper by [radicchi2013abrupt]), which is associated with a plethora of network properties, is independent of the structure of the layers, and hence the dynamical processes can be studied separately; in the regime $p>p^{*}$, $\lambda_{2}$ tends instead to a value independent of $p$, i.e., depending only on the details of the intra-layer networks. The eigenvector $|v\rangle$ associated with $\lambda_{2}(p)$ can be split into $|v_{A}\rangle$ and $|v_{B}\rangle$, which correspond to the elements of $|v\rangle$ associated with nodes of networks $A$ and $B$, respectively. It can be proved [radicchi2013abrupt] that, for $p\leq p^{*}$, $|v_{A}\rangle=-|v_{B}\rangle$ holds, with $|v_{A}\rangle=\pm\frac{1}{\sqrt{2N}}|1\rangle$, while for $p>p^{*}$ we have $\langle v_{A}|1\rangle=\langle v_{B}|1\rangle=0$. Physically, this means that in the subcritical regime the layers are structurally independent, whereas in the supercritical regime the interlayer connection dominates, imposing the same sign in the eigenvector for nodes across networks and alternating the sign for nodes in the same layer. Therefore, the algebraic phase transition can be visualized in several ways. The panels in Fig. 36 show the behavior of the eigenvectors in the subcritical and supercritical regions. Here, $\lambda_{2}(p)$ displays a singular point $p^{*}$ at which the first derivative is not continuous, a sign of an abrupt transition. For $p\leq p^{*}$, $\lambda_{2}$ grows as $2p$, while in the other regime it tends to the value it would take for a weighted superposition of the two layers $A$ and $B$, whose Laplacian is $(1/2)(\mathcal{L}_{A}+\mathcal{L}_{B})$. This abruptness can be observed more directly in the middle and lower panels, which show the behavior of $\langle v_{A}|v_{B}\rangle$, $\langle v_{A}|1\rangle$ and $\langle v_{B}|1\rangle$ as a function of $p$.

### IV.2 Synchronization processes

Synchronization is an emergent phenomenon in which a population of dynamically interacting units starts operating in a collective, coherent way, usually via a second-order phase transition [nelson1977recent, stanley1999scaling, dorogovtsev2008critical]. Synchronization phenomena may be found in biology, sociology, and ecology and include birds flocking, fireflies flashing, people singing, and neurons spiking, just to mention a few examples. Once a fully synchronized state is reached, the (linear) stability of such a state is tested by studying the effects of a small perturbation of the system state. This approach was introduced in [Pecora1998a] for simple network configurations and has been widely used and extended to complex network topologies. Here we briefly introduce the Master Stability Function (MSF) framework for a network of oscillators and then extend the formalism to multilayer networks. For a more exhaustive description see [Boccaletti2018, Arenas2008]. It is worth remarking that, in the following, we will use the more traditional vector notation, where $\mathbf{x}(t)$ indicates the state of the system at time $t$. In some cases, we will also use the Kronecker product operator $\otimes$, as is usual in equations governing synchronization dynamics. This choice helps the reader link these concepts to the original studies which introduced them. Nevertheless, the whole section could be recast in the tensorial formalism, where the system state is indicated by the rank-1 tensor $x_{\ell}(t)$ and where products such as $\mathbf{A}\otimes\mathbf{B}$ are indicated as $A^{\alpha}_{\beta}B^{\gamma}_{\delta}$.
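As a quick sanity check of this notational correspondence (a toy example, not taken from the original references), the Kronecker product acts block-wise: the $(i,j)$ block of $\mathbf{A}\otimes\mathbf{B}$ is $A_{ij}\mathbf{B}$, which is exactly the tensor product $A^{\alpha}_{\beta}B^{\gamma}_{\delta}$ written as a single matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # e.g., a small Laplacian-like matrix
B = rng.standard_normal((3, 3))   # e.g., a Jacobian such as JH(s)
K = np.kron(A, B)                 # the Kronecker product A (x) B

# K is made of 2x2 blocks of size 3x3: block (i, j) equals A[i, j] * B,
# i.e., K[3*i + k, 3*j + l] = A[i, j] * B[k, l], the entry-wise tensor product.
i, j, k, l = 1, 0, 2, 1
assert np.isclose(K[3 * i + k, 3 * j + l], A[i, j] * B[k, l])
print("np.kron reproduces the tensor-index product entry-wise.")
```

The same correspondence underlies the variational equations written below in terms of $\textbf{I}_{N}\otimes J\textbf{F}$ and $\textbf{L}\otimes J\textbf{H}$.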
Let us consider a network of $N$ identical oscillators in an $m$-dimensional space, where, in the absence of any interaction, the dynamics of each node $i$ is described by: $\dot{\textbf{x}_{i}}=\textbf{F}(\textbf{x}_{i}),\qquad i=1,2,...,N;\quad\textbf{x}_{i}\in\mathbb{R}^{m}.$ (71) We introduce an interaction between oscillators due to the fact that they are coupled in an unweighted network specified by the adjacency matrix $\textbf{A}=\\{A_{ij}\\}$, and we define the output function $\textbf{H}(\textbf{x})$ as the function that governs the interaction between nodes. We also assume that the coupling between the oscillators is diffusive, that is, the effect that node $j$ has on node $i$ is proportional to the difference between $\textbf{H}(\textbf{x}_{j})$ and $\textbf{H}(\textbf{x}_{i})$. Then, the evolution of the state of node $i$ is given by: $\dot{\textbf{x}_{i}}=\textbf{F}(\textbf{x}_{i})+\sigma\sum_{j=1}^{N}A_{ij}[\textbf{H}(\textbf{x}_{j})-\textbf{H}(\textbf{x}_{i})]=\textbf{F}(\textbf{x}_{i})-\sigma\sum_{j=1}^{N}L_{ij}\textbf{H}(\textbf{x}_{j}),$ (72) where L is the Laplacian matrix and $\sigma$ is the coupling strength.

##### Stability of synchronized states

The MSF approach to test the stability of a fully synchronized state, described in Appendix A, was extended to multilayer complex systems in [DelGenio2016]. In this work, a network with $M$ different layers is considered, each layer representing a different kind of interaction between nodes. Eq. (72) is thus extended to describe the dynamics of the whole system: $\dot{\textbf{x}_{i}}=\textbf{F}(\textbf{x}_{i})-\sum_{\alpha=1}^{M}\sigma_{\alpha}\sum_{j=1}^{N}L^{(\alpha)}_{ij}\textbf{H}^{(\alpha)}(\textbf{x}_{j}),$ (73) where $\alpha$ is the index accounting for layers. Linearizing Eq. (73) around the synchronized state $\textbf{s}$, we obtain the $M$-parameter equation describing the time evolution of the perturbation error: $\dot{\boldsymbol{\xi}_{i}}=\left[J\textbf{F}(\textbf{s})-\sum_{\alpha=1}^{M}\sigma_{\alpha}\lambda^{(\alpha)}_{i}J\textbf{H}^{(\alpha)}(\textbf{s})\right]\boldsymbol{\xi}_{i},$ (74) where $\boldsymbol{\xi}_{i}$ is the eigenmode associated with the eigenvalues $\lambda^{(\alpha)}_{i}$ of the Laplacians $\textbf{L}^{(\alpha)}$. As in the case of a dynamics evolving on top of a single layer, in a multilayer system the stability of the synchronized state is completely specified by the sign of the maximum conditional Lyapunov exponent $\Lambda_{max}$. In particular, it is also found that stability of the complete synchronization state may be reached even if each layer, taken individually, is unstable: a very interesting feature for practical applications. It is worth noting that, together with complete synchronization, a network may exhibit other forms of synchronization where clusters of nodes have a synchronized dynamics but different clusters follow distinct time evolutions. This type of synchronization is called cluster synchronization (CS) and it has been well studied in terms of cluster formation, stability and the role of network symmetries [Nicosia2013, Pecora2014, Sorrentino2016]. CS has also been studied in multiplex [Jalan2016] and, more recently, in multilayer [DellaRossa2020] networks, and cluster stability has been tested as a function of intra- and inter-layer symmetries. To describe the evolution of the perturbation error for complex synchronization patterns, such as in cluster synchronization, it is useful to rewrite Eq. (105) with a more compact formalism.
To this end, we write the state vector as a vector of vectors, $\textbf{X}=(\textbf{x}_{1};\textbf{x}_{2};...;\textbf{x}_{N})$, and the variational equation assumes the form: $\delta\dot{\textbf{X}}=\left[\textbf{I}_{N}\otimes J\textbf{F}(\textbf{s})-\sigma\textbf{L}\otimes J\textbf{H}(\textbf{s})\right]\delta\textbf{X},$ (75) where $\textbf{I}_{N}$ is the identity matrix and $\otimes$ is the Kronecker product. Eq. (75) can be decoupled into $N$ independent equations by diagonalizing L. However, to deal with cluster synchronization and multilayer interactions we have to further generalize Eq. (75). The variational equations for complex synchronization patterns on generalized networks have the form [zhang2020]: $\delta\dot{\textbf{X}}=\left[\sum_{l=1}^{L}\textbf{D}^{(l)}\otimes J\textbf{F}(\textbf{s}^{l})-\sum_{l=1}^{L}\sum_{\alpha=1}^{M}\sigma_{\alpha}\textbf{L}^{(\alpha)}\textbf{D}^{(l)}\otimes J\textbf{H}^{(\alpha)}(\textbf{s}^{l})\right]\delta\textbf{X}$ (76) where the identity matrix has been replaced by the diagonal matrix $\textbf{D}^{(l)}$, whose generic element $D_{ii}^{(l)}=1$ if node $i$ belongs to the $l$-th dynamical cluster and $D_{ii}^{(l)}=0$ otherwise. In a recent paper [zhang2020], it was established that, to optimally decouple Eq. (76), the matrices encoding the synchronization pattern and the interaction pattern, that is $\\{\textbf{D}^{(l)}\\}$ and $\\{\textbf{L}^{(\alpha)}\\}$, should be simultaneously block diagonalized. In this work, an algorithm was also developed to find the finest simultaneous block diagonalization.

##### Synchronization in a network of phase oscillators

One of the first approaches to describe phase synchronization in an ensemble of oscillators on multiplex networks was proposed in 2015 [Gambuzza2015] to investigate the synchronization of indirectly coupled units, using a system composed of two layers, where the top layer was made of disconnected oscillators and the bottom one, modeling the medium, consisted of oscillators coupled according to a given topology and with a characteristic natural frequency. Each node of the multiplex was modeled as a Stuart-Landau (SL) oscillator, that is, an oscillator with amplitude as well as phase dynamics. Therefore, the Kuramoto model (KM, see Appendix B) can be retrieved as a limiting case when the amplitude dynamics vanishes. The Kuramoto order parameter can be generalized to the multiplex framework as: $r^{\alpha\beta}_{ij}=\big{|}\big{\langle}e^{i[\theta^{\alpha}_{i}(t)-\theta^{\beta}_{j}(t)]}\big{\rangle}_{t}\big{|},$ (77) while the intra- and inter-layer coherence, respectively, can also be defined by $r^{\alpha}=\frac{1}{N(N-1)}\sum^{N}_{i,j=1}r^{\alpha\alpha}_{ij},$ (78) and $r^{\alpha\beta}=\frac{1}{N}\sum^{N}_{j=1}r^{\alpha\beta}_{jj}.$ (79) By studying a population of $N$ disconnected oscillators, indirectly coupled through an inhomogeneous medium, the authors have shown the onset of intra-layer synchronization without inter-layer coherence, i.e., a state in which the nodes of a layer are synchronized among themselves without being synchronized with those of the other layer (see Fig. 37, left panel). Synchronization of units that are not connected requires the presence of an amplitude dynamics, as the regime of intra-layer synchronization is not observed in purely phase oscillators, such as those in the KM.

Figure 37: Left: Two-layer multiplex network with one-to-one coupling between the layers.
In the top layer ($\alpha$) the nodes only interact with those in the bottom one, whereas in the bottom layer ($\beta$) the nodes also interact with other members of the same layer. Middle: Kuramoto order parameters $r^{\alpha}$ and $r^{\alpha\beta}$ vs. coupling coefficients $\lambda=\lambda_{\alpha\beta}=\lambda_{\beta}/5$. Continuous lines refer to a multilayer network of Stuart-Landau oscillators with $a=1$, whereas the dashed ones refer to purely phase oscillators ($a\to\infty$). Right: Synchronization transitions in two-layer networks with a fraction $f=1$ of the nodes adaptively controlled by a local order parameter. Squares and circles (triangles and stars) refer to the values of $R_{1}$ ($R_{2}$), and the insets show the corresponding dependence of the width $d$ of the hysteretic loop on $f$. Left and middle panels readapted from [Gambuzza2015], right panel from [Zhang2015].

As previously mentioned, not only were continuous second-order transitions observed in an ensemble of networked phase oscillators, but examples of an abrupt first-order transition, named explosive synchronization (ES), were also observed [Gomez-Gardenes2011, Leyva2013]. It was first observed in a network of oscillators presenting a positive correlation between natural frequencies and the degree of the nodes. Moreover, ES was also studied in systems where a local order parameter for the $i$-th oscillator is defined [Zhang2015]: $r_{i}(t)e^{i\Phi(t)}=\frac{1}{k_{i}}\sum_{j=1}^{k_{i}}e^{i\theta_{j}},$ (80) and where the phase dynamics is expressed as: $\dot{\theta_{i}}=\omega_{i}+\sigma\alpha_{i}\sum_{j=1}^{N}A_{ij}\sin(\theta_{j}-\theta_{i}).$ (81) The overall amount of phase coherence in the network is measured by means of the global order parameter $R$: $Re^{i\Psi}=\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}},$ (82) where $0\leq R\leq 1$ and $\Psi$ denotes the average phase. Explosive synchronization onset has been reported in a system of two interdependent networks of the same size (see Fig. 37, right panel), where nodes on the two layers are coupled in a one-to-one correspondence, so that a group of oscillators in the first layer is controlled by the local order parameters of the corresponding nodes on the second layer, and vice versa [Zhang2015]. Nevertheless, it was shown that ES is a property of a generic multilayer network as long as some microscopic suppressive rule can prevent the formation of the giant synchronization cluster which characterizes second-order transitions.

### IV.3 Game dynamics: cooperation processes

Human cooperation may be understood as a collective behavior that emerges as the result of the interactions among individuals. In the past few years cooperation has been studied in the social sciences with methods of statistical physics [Perc2017], in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase-transition points. That approach has proven very valuable for understanding cooperation and its spatio-temporal dynamics. The mathematical framework used to study human cooperation is usually evolutionary game theory, which quantitatively describes social interactions using example games and formalizes the concept of the social dilemma, intended as the conflict that an individual faces between doing what is best for society and doing what is best for themselves.

Figure 38: a) Mean final cooperation as a function of the temptation to defect $T$, for several multiplex networks with different numbers of layers $M$.
b) Multiplex with $M=2$ layers, where cooperation is studied as a function of the synergy factor $r$ and the edge overlap $\omega$. Dashed lines separate regions with either full defection $c=0$ (IA) or full cooperation $c=1$ (IB) from the region of continuously evolving coexistence of cooperators and defectors $0<c<1$ (II). Panel a) readapted from [gomez2012evolution] and panel b) readapted from [battiston2017determinants]. c) Mean final cooperation (color coded) for the Prisoner’s Dilemma as a function of $M$ and the strength of degree correlations $\nu$. d) Mean final cooperation as a function of the initial density of cooperators $c_{0}$ and the strength of correlations $\nu$. Panels c) and d) readapted from [Kleineberg2018].

Evolution of cooperation strongly depends on the population interaction network: individuals who have the same strategy are more likely to interact [Nowak1992, Nowak2010], effectively creating resilient cooperative clusters in a structured population, a phenomenon named network reciprocity. When the edges that determine the interactions among individuals are fixed in time, it was demonstrated [Santos2006] that in heterogeneous populations – modeled by networks with degree distributions exhibiting a power-law behavior – the sustainability of cooperation is easier to achieve than in homogeneously structured populations. Here we address the problem of how human cooperation emerges in multiplex networks, with different interaction layers that can account for the different kinds of social ties an individual may be involved in. In particular, we consider a Prisoner’s Dilemma game implemented in a set of $M$ interdependent networks, characterized by an average degree $\langle k\rangle$, each of them containing the same number $N$ of nodes. Each individual is represented by one node in each of the layers and we define a set of adjacency matrices $\\{A^{\alpha}\\}$ so that $A_{ij}^{\alpha}=1$ when two nodes are connected in layer $\alpha$ and $A_{ij}^{\alpha}=0$ otherwise. At each time step, for each of the $k_{i}^{\alpha}$ games played, each individual $i$ facing a cooperator collects a payoff $\pi_{ij}=R$ or $\pi_{ij}=T$ when playing as a cooperator or as a defector, respectively. Conversely, if $i$ faces a defector, $i$ collects a payoff $\pi_{ij}=S$ or $\pi_{ij}=P$ when playing as a cooperator or as a defector, respectively. The game parameters satisfy $S<P<R<T$. The aggregated payoff of node $i$ is given by the sum of the payoffs $\pi_{i}$ over all layers. Furthermore, after each round of the game, individuals update their strategies with a rule that can use global knowledge about the payoffs of their neighbors or be random [Gomez-Gardenes2011b]. Finally, it is worth noting that cooperation is also studied in public goods games, where the Prisoner’s Dilemma is played in overlapping groups of individuals: cooperators contribute a “cost” $d$ to the public good, while defectors do not contribute, and the total amount in the common pool of each group is multiplied by a factor $r$ and distributed equally among all members of the group [Santos2008a]. Previous research [gomez2012evolution] demonstrated that the resilience of cooperative behavior can be enhanced by the multiplex structure through the simultaneous formation of correlated clusters of cooperators across different layers, i.e., through multiplex network reciprocity.
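To make the setup above concrete, the following minimal sketch simulates a few rounds of the multiplex Prisoner’s Dilemma just described. All parameter values are hypothetical, and the Fermi-like imitation rule is a common choice in this literature rather than necessarily the one adopted in the cited studies.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, M = 200, 2
S, P, R, T = -0.2, 0.0, 1.0, 1.3           # placeholder payoffs satisfying S < P < R < T
layers = [nx.to_numpy_array(nx.erdos_renyi_graph(N, 0.03, seed=s)) for s in range(M)]
coop = rng.random(N) < 0.5                  # True = cooperator, False = defector

def aggregated_payoff(coop):
    """Payoff of each player summed over the games played on every layer."""
    c = coop.astype(float)
    pi = np.zeros(N)
    for A in layers:
        n_coop = A @ c                      # cooperating neighbors on this layer
        n_def = A.sum(axis=1) - n_coop      # defecting neighbors on this layer
        pi += np.where(coop, R * n_coop + S * n_def,   # focal player cooperates
                             T * n_coop + P * n_def)   # focal player defects
    return pi

for _ in range(200):                        # Monte Carlo rounds
    pi = aggregated_payoff(coop)
    new_coop = coop.copy()
    A0 = layers[0]                          # imitation carried out on a reference layer
    for i in range(N):
        neigh = np.flatnonzero(A0[i])
        if neigh.size == 0:
            continue
        j = rng.choice(neigh)
        # Fermi-like rule: copy j's strategy with a probability increasing in pi_j - pi_i
        if rng.random() < 1.0 / (1.0 + np.exp(-(pi[j] - pi[i]))):
            new_coop[i] = coop[j]
    coop = new_coop

print("final fraction of cooperators:", coop.mean())
```

Scanning the number of layers $M$, the degree correlations across layers, or the edge overlap in simulations of this kind is what underlies the results summarized in Fig. 38.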
However, it was also proven [battiston2017determinants] that, to gain benefits from the multiplex structure, a high topological overlap is needed or, in other words, individuals have to be similarly linked across the different layers (see Fig. 38, panels a) and b)). Other studies [Kleineberg2018] investigated the role of the degree correlation $\nu$ between nodes in different layers for the emergence of cooperation. They found that, in the absence of degree correlations, increasing the number of layers only leads to mild changes. However, if degree correlations are present, one observes a mean final cooperation of $c=0.5$, and this value is nearly independent of the game payoff parameters. This mechanism is called _topological enslavement_ and can be understood by considering that, if degree correlations are strong, hubs dominate the game dynamics, since they have the potential to earn higher payoffs (because they play more games) and they are more likely to be selected by other nodes as imitation candidates. Furthermore, topological enslavement implies that the outcome of the evolution of the system is determined by the initial conditions (see Fig. 38, panels c) and d)). Finally, as anticipated in Sec. III.5, layer-layer correlations might have a deep impact on the dynamics on top of multilayer systems. This is the case for game dynamics, where cooperation might be hampered, rather than enhanced, by specific correlation patterns combining assortative and disassortative degree mixing across layers [wang_pre14a, duh_njp19] (see Fig. 39 for details).

Figure 39: Fraction of cooperators as a function of the payoff of a cooperator when playing with a defector ($S$) and the payoff of a defector when facing a cooperator ($T$). The interaction layer is subject to disassortative mixing (assortativity coefficient $A_{I}<0$) while the updating network is subject to assortative mixing (assortativity coefficient $A_{U}>0$). Results show the disruptive effect of symmetry breaking for the evolution of cooperation. Parameter values are $A_{I}=-0.1$ and $A_{U}=0.1$ (Left) and $A_{I}=-0.2$ and $A_{U}=0.2$ (Right). Figure from [wang_pre14a].

### IV.4 Interdependent processes

The second class of dynamical processes on multilayer networks is that of “interdependent” or “coupled” dynamics, consisting of systems characterized by different processes on each layer. The inter-layer interactions couple such processes and are responsible for emerging phenomena which could not be detected in a single-layer network framework (see Tab. 3). As in the case of the structure, it is possible to introduce a unifying framework in terms of a dynamical SNXI decomposition, outlined in Eq. (8), to describe dynamics on multilayer networks. Let $x^{[l]}_{i\alpha}$ (where $l\in\\{1,2,\ldots,C\\}$) denote the $l$-th component of a $C$-dimensional vector $\mathbf{x}_{i\alpha}$ that represents the state of node $i$ in layer $\alpha$.
The most general (and possibly nonlinear) dynamics governing the evolution of each state is given by the system of equations $\displaystyle\dot{\mathbf{x}}_{i\alpha}(t)$ $\displaystyle=F_{i\alpha}(\mathbf{X}(t))=\sum_{\beta=1}^{L}\sum_{j=1}^{N}f_{i\alpha}^{j\beta}(\mathbf{X}(t))$ (83) $\displaystyle=\underbrace{\sum_{\beta=1}^{L}\sum_{j=1}^{N}f_{i\alpha}^{j\beta}(\mathbf{X}(t))\delta_{\alpha}^{\beta}\delta_{i}^{j}+\sum_{\beta=1}^{L}\sum_{j=1}^{N}f_{i\alpha}^{j\beta}(\mathbf{X}(t))\delta_{\alpha}^{\beta}\left(1-\delta_{i}^{j}\right)}_{\text{intra-layer dynamics}}$ $\displaystyle+\underbrace{\sum_{\beta=1}^{L}\sum_{j=1}^{N}f_{i\alpha}^{j\beta}(\mathbf{X}(t))\left(1-\delta_{\alpha}^{\beta}\right)\delta_{i}^{j}+\sum_{\beta=1}^{L}\sum_{j=1}^{N}f_{i\alpha}^{j\beta}(\mathbf{X}(t))\left(1-\delta_{\alpha}^{\beta}\right)\left(1-\delta_{i}^{j}\right)}_{\text{inter-layer dynamics}}$ $\displaystyle=\underbrace{f_{i\alpha}^{i\alpha}(\mathbf{X}(t))}_{\text{self-interaction}}+\underbrace{\sum_{j\neq i}f_{i\alpha}^{j\alpha}(\mathbf{X}(t))}_{\text{endogenous interaction}}+\underbrace{\sum_{\beta\neq\alpha}\sum_{j\neq i}f_{i\alpha}^{j\beta}(\mathbf{X}(t))}_{\text{exogenous interaction}}+\underbrace{\sum_{\beta\neq\alpha}f_{i\alpha}^{i\beta}(\mathbf{X}(t))}_{\text{intertwining}}$ $\displaystyle=\mathbb{S}_{i\alpha}(\mathbf{X}(t))+\mathbb{N}_{i\alpha}(\mathbf{X}(t))+\mathbb{X}_{i\alpha}(\mathbf{X}(t))+\mathbb{I}_{i\alpha}(\mathbf{X}(t))$ where $\mathbf{X}(t)\equiv\left(\mathbf{x}_{11},\mathbf{x}_{21},\ldots,\mathbf{x}_{N1},\mathbf{x}_{12},\mathbf{x}_{22},\ldots,\mathbf{x}_{N2},\ldots,\mathbf{x}_{1L},\mathbf{x}_{2L},\ldots,\mathbf{x}_{NL}\right)$ and we did not use the tensorial formalism to make explicit the contributions of each term in the governing equation. In fact, this equation could also be compactly written as $\displaystyle\dot{x}_{j\beta}(t)=F[x_{j\beta}(t)]=\mathbb{S}[x_{j\beta}(t)]+\mathbb{N}[x_{j\beta}(t)]+\mathbb{X}[x_{j\beta}(t)]+\mathbb{I}[x_{j\beta}(t)].$ (84) We call this equation the “dynamical $\mathbb{SNXI}$ decomposition”. Similarly to the structural decomposition in Eq. (8), we have decoupled the different contributions of intra-layer and inter-layer dynamics, allowing us to classify different dynamical processes in terms of the corresponding dynamical $\mathbb{SNXI}$ components. The peculiar behavior of the interdependent processes is ascribed to the exogenous and intertwining components. Although the most-studied examples come from mixed spreading processes, which are crucial for understanding phenomena such as the spreading dynamics of two concurrent diseases in two-layer multiplex networks [Sanz2014, salehi2015spreading, Dickison2012epidemics, Cozzo2013social, DeArruda2017], and the spread of diseases coupled with the spread of information or behavior [wang2015coupled, funk2015, Granell2013, lima2015disease, Funk2009, granell2014competing], other types of dynamical interdependence are attracting a growing interest.

##### Coupling diffusion with synchronization

An illustrative and pedagogical example has been proposed by [nicosia2017collective]. In this work, the authors examine the interdependent dynamics of the two processes presented in the two previous sections, namely, diffusion and synchronization. They propose a model that mimics the interplay between the neural activity and energy transport in brain regions, from which a rich collection of behaviors emerges.
These processes evolve in the two layers of a multilayer network and are related to each other by the correspondence between layers, which in this case is realized through the functional relation between the parameters governing the two processes and the state variables. In particular, the dynamics of the entire system is governed by the following equations: $\left\\{\begin{aligned} \dot{x}_{i}&=F_{\omega_{i}}\left(\mathbf{x},A^{[1]}\right)\\\ \dot{y}_{i}&=G_{\chi_{i}}\left(\mathbf{y},A^{[2]}\right)\end{aligned}\quad i=1,2,\ldots,N\right.$ (85) where $\mathbf{x}=\left\\{x_{1},x_{2},\ldots,x_{N}\right\\}\in\mathbb{R}^{N}$ and $\mathbf{y}=\left\\{y_{1},y_{2},\ldots,y_{N}\right\\}\in\mathbb{R}^{N}$ denote the states of the two dynamical processes, while the topologies of the two layers are encoded in the adjacency matrices $A^{[1]}=\left\\{a_{ij}^{[1]}\right\\}$ and $A^{[2]}=\left\\{a_{ij}^{[2]}\right\\},$ respectively, such that $a_{ij}^{[1]}=1\left(a_{ij}^{[2]}=1\right)$ if a link exists between nodes $i$ and $j$ in the first (second) layer, and $a_{ij}^{[1]}=0$ $\left(a_{ij}^{[2]}=0\right)$ otherwise. The evolution of the system depends on a set of parameters $\omega$ and $\chi$, which in turn depend on the state of the nodes in the adjacent layer: $\begin{array}[]{l}\dot{\omega}_{i}=f\left(\omega_{i},y_{i}\right)\\\ \dot{\chi}_{i}=g\left(\chi_{i},x_{i}\right)\end{array}\quad i=1,2,\ldots,N$ (86) The authors assign to the functions $F_{\omega_{i}}$ and $G_{\chi_{i}}$ a Kuramoto dynamics and a continuous-time random walk, respectively. Subsequently, they choose the functions $f$ and $g$ in Eqs. (86) so as to relate the frequency $\omega_{i}$ of the oscillator $i$ at layer 1 to the state $y_{i}$ at layer 2, and the bias property $\chi_{i}$ of the random walkers at layer 2 to the oscillator phase $x_{i}$ at layer 1. The natural frequency $\omega_{i}$ of the oscillator $i$ evolves by relaxing to values proportional to the fraction of random walkers at the replica node $i$ in the other layer. Analogously, the random walks are biased toward (away from) strongly synchronized nodes. Note that the coupling between the two layers is tunable through two parameters $\lambda$ and $\alpha$ that represent, respectively, the intensity of the influence of the random walk on the oscillators and vice versa. This setup completely defines an interdependent process on a multilayer network: depending on the coupling strengths $\lambda$ and $\alpha$, the collective behavior exhibits special dynamics (see Fig. 40).

Figure 40: Coupling diffusion with synchronization dynamics. a) Distribution $P(y_{i})$ of steady-state random walker fractions $y_{i}$ at layer $2$ for $\alpha=1.0$, when the oscillators at layer $1$ are incoherent ($\lambda=0.1$, top, blue) and synchronized ($\lambda=0.4$, bottom, red). b) Synchronization phase diagram showing the level of synchronization as a function of the coupling $\lambda$ and the bias exponent $\alpha$. The bistable region is colored in white. Figure adapted from [nicosia2017collective].

The random walkers are homogeneously distributed in the incoherent state, while in the synchronized state the distribution is heterogeneous. Moreover, at certain values of the tuning parameters the system encounters an explosive synchronization, or a bistability region characterized by a hysteretic behavior. Similar phase transitions can also be observed in single-layer networks under certain conditions, as reported in [acebron2005kuramoto, martens2009exact, buendia2021broad].
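For readers who wish to experiment with this kind of interdependent dynamics, the following is a deliberately simplified toy version: it is not the exact model of [nicosia2017collective], and the functional forms of the couplings as well as all parameter values are placeholders. Oscillator frequencies relax toward the local walker occupations, while the walkers are biased toward locally synchronized nodes.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N = 100
A1 = nx.to_numpy_array(nx.erdos_renyi_graph(N, 0.08, seed=1))   # layer 1: oscillators
A2 = nx.to_numpy_array(nx.erdos_renyi_graph(N, 0.08, seed=2))   # layer 2: random walkers

K = 0.3        # Kuramoto coupling on layer 1 (placeholder)
lam = 1.0      # influence of the walk on the oscillator frequencies (placeholder)
alpha = 1.0    # bias exponent of the walk toward synchronized nodes (placeholder)
dt, steps = 0.05, 4000

theta = rng.uniform(0, 2 * np.pi, N)     # oscillator phases
omega = rng.normal(0.0, 0.1, N)          # natural frequencies
y = np.full(N, 1.0 / N)                  # walker occupation probabilities

for _ in range(steps):
    # Layer 1: Kuramoto dynamics, with frequencies tied to the walker occupations below.
    coupling = (A1 * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    omega += dt * lam * (N * y - omega)  # omega_i relaxes toward a value set by y_i
    # Layer 2: one step of a random walk biased toward locally synchronized nodes.
    r_local = np.abs(A1 @ np.exp(1j * theta)) / np.maximum(A1.sum(axis=1), 1.0)
    w = A2 * (r_local[None, :] ** alpha)                 # bias acts on the target node
    P = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    y = y @ P

R = np.abs(np.exp(1j * theta).mean())
print(f"global order parameter R = {R:.3f}, spread of walker occupations = {y.std():.4f}")
```

Scanning the two coupling parameters in a toy model of this kind is the sort of numerical exploration that, for the full model, produces the phase diagram of Fig. 40.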
Importantly, the multilayer network model offers a parsimonious explanation of the emergence of these collective phenomena, considering explicitly the intertwined nature of the dynamics.

##### Coupling epidemics spreading with awareness diffusion

Many different phenomena in nature can be described, in their essence, as the result of constructive or destructive relations between two or more parts. For instance, the mutualistic or competitive relation between dynamical systems gives rise to a wealth of fascinating behaviors. The dynamics occurring in one layer can have positive or negative feedbacks on another: for example, human behavior (e.g., information awareness) can inhibit the spread of disease; social mixing between classes and mobility may produce abrupt changes in the critical properties of the epidemic onset; cooperation emerges where the classical expectation was defection. A generalization of these results can be found in [danziger2019dynamic], in which the authors describe some universal features of interdependent systems with coupled dynamics. There is an interesting relation between the collective behavior emerging from percolation processes and the one arising in interdependent systems. In such processes, abrupt transitions and critical dynamics may arise in certain circumstances and, in dynamically coupled systems, other interesting phenomena may also be observed, such as hysteresis and multistability, with the functionality of nodes in one layer influencing the functionality of their replicas in the other layers. In [danziger2019dynamic] the functionality is quantified by an order parameter, which plays a role in the coupling strength between the nodes in different layers, with the order of a node's dynamics affecting the order of its neighbors. This fact is the dynamical counterpart of interdependent percolation, where the functionality of a node is related to its belonging to the mutual giant connected component (see Sec. IV.5 for details). It turns out that the dynamic interdependence increases the vulnerability of the system [danziger2016vulnerability], as in the case of percolation processes.

Figure 41: Two (left) reciprocally enhanced and (right) reciprocally inhibited disease-spreading processes of susceptible–infected–susceptible type. The colors in the figure represent the prevalence levels of the diseases at a steady state of Monte Carlo simulations. Note the emergence of a curve of critical points (at a “metacritical point”) in which the spreading in one layer depends on the spreading in the other. Figure from [de2016physics].

Epidemic models are another important example of systems in which interdependencies play a crucial role (see [wang2015coupled, de2016physics] for a review) and, also in this case, hysteretic behavior with abrupt transitions may occur. This means that, for instance, explosive pandemics with no early warning can suddenly appear and, in a similarly implosive way, can disappear. Unfortunately, the value of the infection rate which can eradicate the disease must be much lower than the value that triggered it [danziger2019dynamic]. Other interesting behaviors arise in the case of competitive diseases, where mutual exclusion or endemic coexistence may spontaneously occur. Figure 41 shows two phase diagrams of the disease incidence of reciprocally enhanced (right) and inhibited (left) disease-spreading processes. The figure highlights the existence of a curve of critical points that separates the endemic and non-endemic phases of the disease.
Moreover, the curve that separates the endemic and non-endemic phases is in turn divided into two parts: one in which the critical properties of one spreading process are independent of the other (straight dashed line), and one in which the critical properties of one spreading process do depend on those of the other layer (solid curve). The point at which this crossover occurs is called a metacritical point.

Figure 42: SIS-SIS interacting diseases model. Left: multiplex representation of the spreading of the diseases. Each individual is present in both layers and can be infected by one (or both) of the diseases, as indicated in the central panel. Right: possible transitions between the different states of the SIS model. The variables represent the densities of individuals of each type having degree equal to $k$ in the first layer and degree $l$ in the second. Figure from [Sanz2014].

In fact, traditional epidemic models can describe in detail the spreading of a single disease in different realistic situations, whereas the multilayer representation can extend the possibilities to spreading processes with interacting diseases. In [Sanz2014] the authors propose a framework to describe the spreading dynamics of two concurrent diseases (see Fig. 42). Using SIS-SIS and SIR-SIR models with appropriate interlayer couplings describing the influence of one disease on the other, they derive the epidemic threshold for the two interacting diseases, showing that the onset of a disease's outbreak is conditioned on the prevalence levels of the other disease. Epidemic thresholds can be influenced not only by the presence of a second disease, but also by the individuals' change of awareness about an ongoing epidemic. In [Granell2013], a multilayer network couples the dynamics of disease spreading in a social network and the diffusion of awareness among actors. The two layers are coupled in a multiplex as illustrated in Fig. 43. The process of spreading of awareness (UAU) is akin to an SIS process, where in place of susceptible (S) there are unaware (U) and in place of infected (I) there are aware (A) actors. The probability of being infected is influenced by the state of awareness of the individuals. The authors found the existence of a metacritical point at which the state of awareness of the individuals can control the onset of an epidemic.

Figure 43: Multiplex representation of awareness-disease spreading. The spreading of awareness occurs in the upper layer, whereas the spreading of the disease takes place in the lower layer. Figure from [Granell2013].

##### Coupling game dynamics with opinion dynamics

Figure 44: Polarization of opinions and strategies in a multiplex with 5,000 nodes, in the presence of angular correlations between the layers. Top: comparison between the two layers; the color is the time average of the state of each node. Bottom: evolution of the density of cooperators in the angular bins used to compute the interlayer correlations. Numeric labels indicate the clusters of nodes that adopt the same strategy. Figure from [amato2017interplay].

The multilayer representation of the dynamics of complex systems can also be used to shed light on real-world social dilemmas. The influence of factors acting on different layers might explain the emergence of particular patterns of cooperation between social agents. In [amato2017interplay] the authors present a model coupling evolutionary game dynamics and opinion dynamics, regarded as two processes evolving on distinct layers of a multiplex network.
To model the game dynamics on the first layer the authors adopt a replicator-like rule, in which individuals (nodes) copy the strategy of one of their neighbors with a probability that is higher the higher the neighbor's payoff is compared to their own. The possible states are “cooperate” and “defect”. The opinion dynamics on the second layer is modeled using the voter model, where individuals (nodes) adopt the opinion of a randomly selected neighbor with a certain probability. The opinions of the nodes can be “cooperate” or “defect”, as in the game layer. The authors assume that imitating a cooperative opinion is more likely than imitating a defection. This can be interpreted as the influence of media campaigns or broadcasting agents. Depending on the specific parametrization of both the game and the opinion dynamics, this model gives rise to fascinating dynamical behaviors in which equilibria of different types exist for the game dynamics. The impact of social influence on the decision of individuals is conveyed by the interlayer coupling, which is encoded in the parameter $\gamma$, representing the tendency of individuals to act in agreement with their proclamations: the nodes in one layer copy their own state from the other layer with probability $\gamma$. The main result is that this model is sufficient to explain the emergence of cooperation in scenarios where the pure game dynamics predicts defection. This is due both to the intertwined dynamics and to the multilayer structure itself. In fact, the authors proved that the geometric correlation between layers has a significant impact on the stability of the system. Importantly, under certain conditions of correlation between layers, the system can reach a polarized metastable state (see Fig. 44). This result can explain the observed polarization in real-world social systems. Ultimately, the emergence of cooperation in unexpected conditions is due to the interplay between the coupled dynamics of strategies and opinions, the complex topologies of the networks upon which the dynamics evolve, and the structural relations between the layers. Missing one of these elements could hinder the right interpretation of the complex behavior of such systems.

##### Coupling epidemics spreading with social integration

A multilayer perspective on epidemic spreading can be a valuable support to design more educated strategies to reduce disease risk. A recent example comes from [bosetti2020heterogeneity], which integrates social dynamics, human mobility and epidemic spreading to assess the risk of a measles outbreak in Turkey. During the past decade, Turkey has received more than 3.5 million refugees coming from Syria. The levels of immunization of the two populations are considerably different. The outbreak risk is analysed through a multilayer transmission model, which takes into account the different levels of immunization in the two populations, along with the mobility pattern and the level of social integration. The main result of the study is that, in the case of heterogeneous immunization, high levels of social interaction can drastically reduce the spatial spread and incidence of a disease. This apparently counter-intuitive result is due to the fact that the high immunization coverage of one population (Turkish citizens) can shield the other (Syrian refugees) from getting exposed to the infection, as an effect of herd immunity.
The network structure is defined by dividing Turkey into patches corresponding to administrative areas that represent the nodes. The layers of the network are encoded by the Turkish $(T)$ and Syrian refugee $(R)$ populations. On this network, social dynamics, mobility and epidemic spreading happen simultaneously.

Figure 45: (A) Model scheme. Prefectures of Turkey are the nodes of a network of geographic patches. Turkish and Syrian populations are encoded by two colors and move between patches following the inferred mobility pathways. The two populations encode two layers, where social dynamics and epidemic spreading happen simultaneously. (B) Mobility of Syrian refugees (upper) and Turkish citizens (lower) inferred from mobile phone data. Figure from [bosetti2020heterogeneity].

The authors denote by $c_{ki}^{(p)}$ ($p\in\\{T,R\\}$) the elements of a matrix $\mathbf{C}^{(p)}$ encoding the number of people belonging to population $p$ traveling from patch $k$ to patch $i$, and by $\alpha$ the fraction of Syrian contacts with Turkish citizens. The force of infection for each population in the $i$-th patch depends on the contribution of all patches in the country: $\displaystyle\begin{array}[]{l}\lambda_{i}^{(T)}\left(\alpha,\mathbf{C}^{(T)},\mathbf{C}^{(R)}\right)=\beta_{T}\sum\limits_{k=1}^{L}\left[\underbrace{c_{ki}^{(T)}\frac{I_{k}^{(T)}}{N_{k}^{(T)}}}_{\text{Endogenous }}+\underbrace{\alpha c_{ki}^{(R)}\frac{I_{k}^{(R)}}{N_{k}^{(R)}}}_{\text{Exogenous }}\right]\\\ \lambda_{i}^{(R)}\left(\alpha,\mathbf{C}^{(T)},\mathbf{C}^{(R)}\right)=\beta_{R}\sum\limits_{k=1}^{L}\left[\underbrace{\alpha c_{ki}^{(T)}\frac{I_{k}^{(T)}}{N_{k}^{(T)}}}_{\text{Exogenous }}+\underbrace{c_{ki}^{(R)}\frac{I_{k}^{(R)}}{N_{k}^{(R)}}}_{\text{Endogenous }}\right]\end{array}$ where $\beta_{p}=\beta/P_{i}^{(p)}(\alpha,c)$ is the transmission rate for population $p$ and $P_{i}^{(p)}(\alpha,c)$ is an appropriate normalization factor. From this equation we can easily recognise the structure of Eq. (83), where the contributions to the force of infection come from an endogenous term, accounting for the infectivity due to individuals from the same population, and an exogenous term, accounting for the infectivity due to the other population. The parameter $\alpha$ is the level of social mixing and can be changed according to different social integration scenarios. This is the parameter which plays the role of the coupling between the layers. An illustrative representation of the model is shown in Fig. 45. The epidemic transmission dynamics is eventually regulated by the following SIR dynamical model $\displaystyle\left\\{\begin{aligned} \dot{S}_{i}^{1}&=-\lambda_{i}^{1}(\alpha,c)S_{i}^{1}\\\ \dot{S}_{i}^{2}&=-\lambda_{i}^{2}(\alpha,c)S_{i}^{2}\\\ \dot{I}_{i}^{1}&=\lambda_{i}^{1}(\alpha,c)S_{i}^{1}-\gamma I_{i}^{1}\\\ \dot{I}_{i}^{2}&=\lambda_{i}^{2}(\alpha,c)S_{i}^{2}-\gamma I_{i}^{2}\\\ \dot{R}_{i}^{1}&=\gamma I_{i}^{1}\\\ \dot{R}_{i}^{2}&=\gamma I_{i}^{2}\end{aligned}\right.,$ whose analysis suggests that the incidence of measles can be reduced by up to 90% in the case of very high levels of integration [bosetti2020heterogeneity]. Despite the different contexts, the multilayer dynamics can have positive or negative feedbacks, leading to interdependence between the corresponding critical points of the dynamics. As a consequence, two different regimes exist: (i) one in which the critical properties of one process depend on those of the other, and (ii) one in which the critical properties are independent of the other.
The two regimes are separated by a metacritical point, where a crossover occurs (see a recent review from [de2016physics] and Fig. 41). This regime shift is analogous to the one occurring in percolation processes, which is presented in the next section.

### IV.5 Percolation

In this section we switch our attention to percolation [stauffer2018introduction], where the goal is to analyse how different properties of the network change as we remove some of its nodes or links. The properties we are interested in are typically topological, such as the size of the largest connected component or the distribution of small connected components, and they are used as a first proxy to assess the functionality of a system exposed to failures or attacks. Failures are modelled as random removals, whereas attacks assume some a priori knowledge of the network, where the elements are ranked according to some criterion (frequently topological, e.g., degree or betweenness [albert2000error, cohen2001breakdown], albeit not strictly necessarily so [artime2020effectiveness, artime2021percolation]) and removed accordingly. We aim at presenting the basic mathematical framework to address the problem of multilayer percolation and showcase some applications. Good reviews to expand on what we present here can be found in [bianconi2018multilayer, lee2015towards]. The first step towards a description of multilayer percolation processes is to consider the generalization of the generating function methodology, commonly used in single-layer networks [newman2001random]. In [leicht2009percolation], random percolation is studied in general multilayer networks described by the set of degree distributions $\\{p_{k_{1}k_{2}\ldots k_{L}}^{\alpha}\\}$, where $\alpha$ is the layer label and $k_{\beta}$ denotes the number of links toward nodes in layer $\beta$. Expressions for the size of the giant component of each layer $S^{\alpha}$ can be written as a function of the generating functions. This framework allows us to consider percolation on correlated multilayer networks, leading to non-trivial results. For example, in [lee2012correlated] correlated duplexes of Erdős–Rényi networks are studied. When degrees across layers are maximally anti-correlated – i.e., hubs in one layer are low-degree nodes in the other layer – the giant component appears at a link density considerably higher than the value for which it appears in uncorrelated duplexes. The giant component exists, though, for any nonzero link density if interlayer degrees are maximally positively correlated. In other words, the more correlated the degrees are, the fewer edges are needed to make a macroscopic structure emerge. When it comes to attacks on the most connected nodes, the latter statement is true up to a certain intra-layer mean degree (assuming it is the same for both layers), after which the behavior is reversed and maximally negatively correlated networks become more robust [min2014network]. This framework disregards any functional characteristics of the layers and, from a phenomenological point of view, continuous phase transitions are always found. This is no longer true if the nature of the layers is taken into account, for example via the interdependence of the nodes or antagonistic interactions. These more realistic scenarios define new conditions for a node to remain in the network, and need alternative, more function-oriented metrics to describe the state of the system.
A widely accepted choice is the mutually connected component [buldyrev2010catastrophic, son2012percolation] already defined in Sec. III.2, which is the set of nodes that are connected within each and every layer simultaneously. The most striking result is that when the giant mutual component (GMC) is computed in interdependent networks, the percolation phase transition changes its nature, becoming a discontinuous one [son2012percolation]. This has serious implications for the robustness of the system, since the disintegration occurs abruptly, i.e., it is hard to anticipate. Mathematically, the idea is as follows. Let us assume an edge-colored multigraph, let $p_{k_{1}\ldots k_{L}}$ be the probability that a node has degree $k_{\alpha}$ toward other nodes within layer $\alpha$, for every $\alpha$, and let $q_{k_{1}\ldots k_{L}}$ be the corresponding excess degree distribution. We indicate by $w_{\alpha}$ the probability that a node does not belong to the GMC via a link in layer $\alpha$. Hence, $w_{\alpha}^{k_{\alpha}}$ gives the probability that the node does not belong to the GMC via any of its neighbors in layer $\alpha$. The condition to belong to the GMC is that the node has to be connected to it in all the layers, i.e., the size of the GMC is proportional to $\prod_{\alpha=1}^{L}(1-w_{\alpha}^{k_{\alpha}})$. We just need to rescale by the occupation probability $\phi$ and average over the degree distribution, yielding $\displaystyle M$ $\displaystyle=\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}p_{k_{1}\ldots k_{L}}\prod_{\alpha=1}^{L}(1-w_{\alpha}^{k_{\alpha}}).$ (88) To compute $w_{\alpha}$, we first note that $1-w_{\alpha}$ is the probability that a node at the end of a link in layer $\alpha$ belongs to the GMC. For this to happen, at least one of its remaining $k_{\alpha}-1$ neighbors in layer $\alpha$ must be in the GMC as well. Moreover, due to the condition of mutual connectivity, in every other layer the node needs to belong to the GMC via at least one of its neighbors. Rescaling by $\phi$ because the node needs to be present in the network, and averaging, we obtain $1-w_{\alpha}=\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}q_{k_{1}\ldots k_{L}}\left(1-w_{\alpha}^{k_{\alpha}-1}\right)\prod_{\begin{subarray}{c}\beta=1\\\ \beta\neq\alpha\end{subarray}}^{L}(1-w_{\beta}^{k_{\beta}}).$ (89) By inserting the solutions $\\{w_{\alpha}\\}$ of this system of equations into Eq. (88) we readily obtain the size of the giant mutual component. See Fig. 46 to appreciate the emergence of the abrupt transition of the GMC for multiplex systems with Erdős–Rényi networks in the layers. The discontinuity is also present when considering multiplexes of scale-free networks but, at odds with single-layer scale-free percolation, the critical occupation probability $\phi_{c}$ is finite, making them more vulnerable to random failures than the single-layer network [baxter2012avalanche]. Targeted interventions in the network can be easily modeled as well by including a degree-dependent occupation probability $\phi_{k_{1}\ldots k_{L}}$ inside the sums [min2014network, zhao2016robustness]. The dependence of the mutual component $M$ shown in Fig. 46 for Erdős–Rényi multiplexes is shared by other intra-layer topologies as well. That is, when the number of layers increases, the value of the mean degree at which the discontinuity appears becomes larger, thus broadening the parameter region where the network is not functional.
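A direct way to appreciate the discontinuity is to solve the self-consistency condition numerically. The sketch below is a minimal illustration based on the Erdős–Rényi closed form quoted in the caption of Fig. 46, $M=\phi\left(1-e^{-\langle k\rangle M}\right)^{L}$ with $\phi=1$: it iterates the equation from $M=1$ and locates the jump in the solution as the mean degree varies.

```python
import numpy as np

def gmc_size(k_mean, L, phi=1.0, tol=1e-10, max_iter=100000):
    """Largest fixed point of M = phi * (1 - exp(-<k> M))^L, obtained by iterating from M = 1."""
    M = 1.0
    for _ in range(max_iter):
        M_new = phi * (1.0 - np.exp(-k_mean * M)) ** L
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    return M

k_values = np.linspace(0.5, 5.0, 451)
for L in (1, 2, 3):
    M = np.array([gmc_size(k, L) for k in k_values])
    jumps = np.diff(M)
    j = int(np.argmax(jumps))
    print(f"L = {L}: largest increase {jumps[j]:.3f} between <k> = {k_values[j]:.2f} and {k_values[j+1]:.2f}")
```

For $L=1$ this reproduces the continuous emergence of the ordinary giant component at $\langle k\rangle=1$, whereas for $L\geq 2$ the mutual component appears abruptly, and at a mean degree that grows with the number of layers (around $\langle k\rangle\approx 2.455$ for a duplex).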
Moreover, as the number of layers increases, the height of the discontinuity jump becomes larger too, thus making the transition harder to anticipate when going from the supercritical to the subcritical region. In light of these results, the more layers, the more fragile the system is, which might seem paradoxical from an evolutionary point of view: why would a system organize itself in a layered structure if that reduces its robustness? In [radicchi2017redundant], Radicchi and Bianconi provided a potential answer to this conundrum by proposing a model of multilayer percolation that relaxes the condition for the functionality of the nodes. They argue that a node can be functional, i.e., it is not removed, as long as it is functioning in at least a pair of layers. This new condition for functionality allows them to conclude that the addition of extra layers boosts the robustness of the system.

Figure 46: Percolation phase transition in a multiplex formed by intralayer Erdős–Rényi networks, with the same mean degree $\langle k\rangle$. On the left, we display $M$, which is the solution of $M=\phi(1-e^{-\langle k\rangle M})^{L}$, as a function of the mean degree, for different numbers of layers, with the occupation probability fixed to $\phi=1$. On the right, we study the emergence of the giant mutual component for partial multiplexes, where $q$ is the fraction of interdependent nodes. In this case $M$ is the solution of $M=\phi(1-e^{-\langle k\rangle M})(1-qe^{-\langle k\rangle M})$. In the inset we show the height of the jump of the order parameter at the transition point, along with the theoretical value of the tricritical point (vertical dashed line).

The abrupt nature of the transition induced by Eq. (89) holds as long as $L>1$, see Fig. 46. When $L=1$ the mutual component coincides with the standard giant component, so the transition is continuous. We cannot interpolate continuously from $L=1$ to $L=2$ to understand how the nature of the phase transition changes. However, there are other variables that we can tune to go from an effective single-layer system to a multiplex. The first of these variables is the multiplexity parameter. It might occur that in real multiplex networks only a fraction $q$ of all nodes shares the functional dependency across layers, hence a natural question is what the nature of the transition is as a function of $q$. Does it suffice to have a non-zero fraction of dependent nodes to observe the discontinuity, or, on the contrary, is there a finite threshold, only above which we observe an abrupt transition? To answer this question, let us focus on duplexes. If a node in one of the layers has a dependency link, which occurs with probability $q$, then the condition to belong to the GMC is the same as discussed earlier. Instead, if the node does not have a dependency link, which occurs with probability $1-q$, then it belongs to the GMC as long as any of its neighbors in its very same layer belong to the component.
Therefore, we can write $M_{\alpha}=q\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}p_{k_{1}\ldots k_{L}}\prod_{\beta=1}^{L}(1-w_{\beta}^{k_{\beta}})+(1-q)\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}p_{k_{1}\ldots k_{L}}(1-w_{\alpha}^{k_{\alpha}}).$ (90) The set of probabilities $\{w_{\alpha}\}$ obeys the equations $1-w_{\alpha}=q\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}q_{k_{1}\ldots k_{L}}\left(1-w_{\alpha}^{k_{\alpha}-1}\right)\prod_{\begin{subarray}{c}\beta=1\\ \beta\neq\alpha\end{subarray}}^{L}(1-w_{\beta}^{k_{\beta}})+(1-q)\phi\sum_{k_{1}=0}^{\infty}\ldots\sum_{k_{L}=0}^{\infty}q_{k_{1}\ldots k_{L}}(1-w_{\alpha}^{k_{\alpha}-1}).$ (91) Focusing again on a duplex of Erdős–Rényi networks, we see in Fig. 46 that there is a finite tricritical point $q_{c}$ at which the transition changes its order, confirming that, in general, a multiplex needs a certain level of interdependency between layers to experience the discontinuous transition. Since many infrastructural networks are embedded in a two-dimensional space, in a similar fashion one might ask what type of transition we encounter when coupling low-dimensional networks, which in isolation show continuous transitions, in a multiplex. It turns out that in this case the transition does not change its order [son2011percolation, berezin2013comment]. The second variable that allows us to interpolate between multiplexes and monoplexes is the link overlap across layers. For complete overlap all layers are equal and the problem reduces to single-layer percolation, showing a continuous transition. How much overlap do we need to observe the abrupt transition? Different approaches and techniques have been used to answer this question, and accordingly slightly different phenomenology has been discovered. Although the details of the phase diagram depend on the number of layers and the degree distributions, Hu and coauthors have found that the transition stays abrupt as long as the overlap is not complete [hu2013percolation]. In [cellai2013percolation] it has been reported that a critical value of the edge overlap exists that changes the nature of the phase transition from a hybrid first-order to a continuous one. See [cellai2016message] for the generalization of the theory to an arbitrary number of layers. If the edge overlap is combined with other topological correlations, the phase diagram displays multiple and recursive hybrid phase transitions [baxter2016correlated]. In [min2015link], the problem of link overlap has been formulated in a way that the discontinuous transitions display hysteresis. The role of the edge overlap, together with other topological correlations, has also been explored in the problem of multilayer optimal percolation [osat2017optimal], that is, in the identification of the smallest set of nodes that, when removed, cause the largest damage in the network [santoro2020optimal]. This problem is known to be NP-hard in single-layer networks [kempe2003maximizing], and although there is no equivalent proof in multilayer architectures, the intuition indicates that this is the case as well. On this basis, heuristic approaches to identify the critical subset of nodes predominate. In [santoro2020optimal], the efficiency of $20$ of these strategies is evaluated.
The authors find that when no structural correlations exist, a family of Pareto-efficient strategies based on both structural descriptors and multiobjective optimization is the best at dismantling the network. However, when evaluated in real multilayer networks that present non- trivial correlations, the variability in performance changes from one dismantling strategy to another, suggesting that a fair assessment of multilayer robustness requires a comparison between strategies. Beyond node interdependency as a condition for functioning, other interesting mechanisms have been considered in the literature. One of them is antagonistic relations, where a node in one layer is functional — that is, it has not been removed — only if its replicas are not [zhao2013percolation]. Examples of this kind of competitive or non-cooperative relations might be relevant, for example, in biological networks. Interestingly, it has been found that when percolation is considered in this scenario, the abrupt transition shown in what follows persists, but displays as well, at variance with the transition of the mutual component introduced earlier, the typical hysteresis and bi- stability behaviors of equilibrium thermal first-order phase transitions [goldenfeld2018lectures]. Other mechanisms have followed to include these behaviors too, e.g., in [min2014multiple, min2015link]. Note that all of these results derived with the machinery of generating functions come with some underlying assumptions that are very rarely met when studying real networks: (i) the network is tree-like, (ii) it has infinite size and (iii) the quantities of interest are computed as an average over the ensemble of networks with given degree distribution, although, in reality, we only have access to one supra-adjacency matrix, among all the possible ones that a graph model could generate. As a consequence, the analytical predictions might deviate from the actual process of percolation, obtained for example via simulations. Analytical approaches have been proposed to overcome points (ii) and (iii) in [radicchi2015percolation, bianconi2016percolation], where a percolation theory of multiplex and interdependent networks is introduced that takes as input the adjacency matrix instead of the degree distribution. Furthermore, in a real network we may need to assess the robustness under perturbation scenarios related to node metadata. For example, in [baggio2016multiplex] it is addressed, among other things, how a multiplex network capturing the flow of subsistence-related goods and services among households in several Alaskan Natives communities responds to perturbations involving targeted removals of specific resources by category, e.g., terrestrial, marine or riverine, as a representation of natural disasters. An analytical framework to take into account non-topological features in the robustness of a network has been recently developed for single layers [artime2021percolation], but, at present, there is not a generalization for multilayer networks. ### IV.6 Cascade failures The percolation model has the limitation that failures and attacks are treated statically. A more complete description of multilayer robustness and resilience would include their time evolution, in order to better understand under which conditions small perturbations can trigger global network-wide effects, the so-called _cascades_. In this section we review some of the most emblematic models for multilayer cascade failures. 
The seminal work of Buldyrev and colleagues was one of the first to deal with the propagation of failures across interdependent multiplexes [buldyrev2010catastrophic]. The motivation for such a study was the 2003 Italy blackout, for which it was hypothesized that the power grid and the Internet network, the latter acting as a supervisory control and data acquisition system, were interdependent, and that failures in the power stations hampered the Internet communication and further propagated the malfunction across the system (see top panels of Fig. 47). However, the interest in the amplification of small perturbations throughout the network is much more general, finding potential applications in areas such as biochemistry, where metabolic pathways interact in complex ways with key tissues, or finance, where different banks might share the same asset in their balance sheets [huang2013cascading], among many other examples. In [buldyrev2010catastrophic], the concept of a mutually connected component was introduced from a dynamical perspective, at odds with its static definition presented in the previous section, see bottom panels of Fig. 47. It was shown that these cascades evolving in multiplex networks yield a discontinuous phase transition in the size of the giant mutual component [buldyrev2010catastrophic], meaning that the failure of a single node in an apparently healthy, functional infrastructure can trigger a global cascade that collapses the entire system. It was later shown that these cascading failures can be mapped to static percolation in multiplexes [son2012percolation]. In fact, many of the results presented in Sec. IV.5 are recovered at the final state of the cascade propagation of this type of model.

Figure 47: Top: evolution of a cascade of failures in a real interdependent system composed of a power network (on the map) and an Internet network (shifted above) and involved in the Italian blackout of September 2003. In a, the failure of a power station makes the Internet nodes stop functioning, indicated in red. Shown in green are the nodes that disconnect from the largest connected component in the next cascade step. These nodes will fail immediately after ($b$), introducing a feedback of failures back in the power network. Those nodes that have been isolated (green nodes in $b$) are the ones that will fail in the next event $(c)$, inducing further interdependent failures in the Internet network. Bottom: we sketch this process for further clarification. In d, an initial attack on one node in network $\mathsf{A}$ occurs. In e, the attacked node is removed, along with its links, and the corresponding interdependent node in network $\mathsf{B}$, with its links, is also removed. In f, the actual cascade starts. All the $\mathsf{B}$-links between $\mathsf{B}$-nodes connected interdependently to different $\mathsf{A}$-clusters are removed. In g, the same rule applies but for links in network $\mathsf{A}$: all $\mathsf{A}$-links between $\mathsf{A}$-nodes are deleted if the interdependent nodes do not belong to the same $\mathsf{B}$-cluster. Steps f and g are repeated, propagating the failures back and forth, until the cascade cannot further evolve. The remaining connected components at the end of the propagation of failures coincide with the mutually connected components introduced in Sec. IV.5. Figures from [buldyrev2010catastrophic].
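A minimal numerical version of this back-and-forth process can be written with networkx. The sketch below (network size and mean degree are illustrative, not those of the original study) builds two Erdős–Rényi layers over the same node set, applies an initial random failure, and then alternately keeps only the nodes belonging to the largest cluster of each layer until the cascade stops; the surviving set approximates the giant mutual component reached at the end of the cascade.

```python
import networkx as nx
import random

def mutual_cascade(G_a, G_b, phi, seed=0):
    """Iteratively prune nodes until every survivor lies in the largest
    cluster of both layers; returns the surviving (mutual) node set."""
    random.seed(seed)
    alive = {n for n in G_a.nodes() if random.random() < phi}   # initial random failure
    while True:
        pruned = False
        for G in (G_a, G_b):
            H = G.subgraph(alive)
            if H.number_of_nodes() == 0:
                return set()
            giant = max(nx.connected_components(H), key=len)
            if len(giant) < len(alive):
                alive = set(giant)
                pruned = True
        if not pruned:
            return alive

N, k_mean = 10000, 2.8
G_a = nx.fast_gnp_random_graph(N, k_mean / (N - 1), seed=1)
G_b = nx.fast_gnp_random_graph(N, k_mean / (N - 1), seed=2)
for phi in (0.6, 0.7, 0.8, 0.9, 1.0):
    survivors = mutual_cascade(G_a, G_b, phi)
    print(f"phi = {phi:.1f}  ->  mutual component fraction = {len(survivors) / N:.3f}")
```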
Despite the fact that the dynamical propagation of cascades needs a more convoluted mathematical treatment than percolation, these techniques have been very flexible at adapting to variations of the original work of Buldyrev, so that the propagation of cascades is modeled in more realistic scenarios. We briefly review some of them in the following, and refer the interested reader to the more complete reviews [kenett2014network, shekhtman2016recent, valdez2020cascading]. After [buldyrev2010catastrophic], the problem of dependency-based cascades has been generalized to $L$ layers in [gao2012networks], allowing analytical, closed-form solutions in certain interdependent setups. The problem of failure propagation in multiplexes with partial interdependency, where not all nodes have dependency connections, has been addressed in [Parshani2010reducing]. If the fraction of interdependent nodes is decreased enough, the transition is no longer abrupt but becomes second-order. In order to narrow the range of parameters for which the abrupt transition occurs, different strategies to choose the autonomous nodes, those without interdependencies, have been proposed [schneider2013towards]. It turns out that selecting those with the highest degree or highest betweenness significantly reduces the likelihood of an abrupt collapse [schneider2013towards]. Intra-layer correlations can be included in the analysis as well. In [huang2013robustness], interdependent multiplexes with a tunable average number of single links and an average number of triangles per node are analyzed, finding that, for fixed average degree, more highly clustered networks are less robust than those with lower clustering. Intra-layer correlations might be coupled as well to inter-layer ones: in [reis2014avoiding] a flexible model including both types of correlations is studied, and the maximization of robustness is addressed as a function of the tunable strength of the correlations and the failure propagation dynamics, showing that the theoretical results match well with the experimental results of coupled functional brain modules. Beyond cascades driven by topological failures, there are other mechanisms via which small perturbations can drive a multilayer network to collapse. In many problems related to social sciences, such as the adoption of fads, the diffusion of norms and innovations, and the changes in the collective attention in a population, models of behavioral contagion are used to describe the decisions of the agents. Agents need to decide between two alternative options/actions and the influence of their neighbors is crucial in the final choice. A simple way to encompass this influence is by setting an activation threshold. Initially all nodes are inactive except for a small, controlled fraction. An inactive agent changes her state and becomes active only when the fraction of her active neighbors is larger than a threshold. This process might require several activation steps before reaching a frozen state. In [watts2002simple], the range of parameters was given for which such global cascades, measured as the number of active users when the process stops, emerge in monoplexes; it turns out that they are only possible when the network is neither too sparse nor too dense. The generalization of this threshold model in multiplexes was addressed in [brummitt2012multiplexity], where a propagation rule is proposed such that an agent activates as soon as the fraction of active neighbors in at least one layer exceeds the threshold.
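A sketch of this multiplex threshold rule is given below; all parameter values (mean degrees, threshold, seed fraction) are illustrative. Starting from a small seed of active nodes, an inactive node activates whenever the fraction of its active neighbors exceeds the threshold in at least one layer, and the sweep is repeated until no node changes state.

```python
import networkx as nx
import random

def multiplex_threshold_cascade(layers, theta=0.18, seed_frac=0.01, rng_seed=0):
    """Watts-style threshold dynamics with the 'at least one layer' rule of
    [brummitt2012multiplexity]. Returns the final fraction of active nodes."""
    random.seed(rng_seed)
    nodes = list(layers[0].nodes())
    active = set(random.sample(nodes, max(1, int(seed_frac * len(nodes)))))
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in active:
                continue
            for G in layers:
                deg = G.degree(n)
                if deg == 0:
                    continue
                frac = sum(1 for v in G.neighbors(n) if v in active) / deg
                if frac >= theta:          # activation in a single layer suffices
                    active.add(n)
                    changed = True
                    break
    return len(active) / len(nodes)

N = 5000
layer1 = nx.fast_gnp_random_graph(N, 6.0 / (N - 1), seed=1)
layer2 = nx.fast_gnp_random_graph(N, 6.0 / (N - 1), seed=2)
print("single layer:", multiplex_threshold_cascade([layer1]))
print("duplex      :", multiplex_threshold_cascade([layer1, layer2]))
```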
Following this rule, multiplexity widens the ranges of parameters in which global cascades can occur, therefore increasing the network’s vulnerability. Interestingly, isolated layers in which, owing to their topological properties, global cascades would not be observed can, when multiplex-coupled, cooperate in a way that facilitates agent activation and therefore leads to cascades.

Figure 48: The power grid is a paradigmatic example of a network susceptible to overload cascades of failures due to the energy transported along the links. We see on the top an aggregated representation of the multilayer power grid of the USA, where each layer (link color) corresponds to a different voltage range. Depicted in red are the planned interconnections to transport wind power. On the bottom, the Bak-Tang-Wiesenfeld sandpile model is simulated in a duplex of random regular graphs. For a cascade starting in one of the networks (network $a$), we show the probability of observing a final cascade size in $a$, $T_{aa}$, and in network $b$, $T_{ab}$, as a function of the probability of interconnections $p$ between nodes in $a$ and in $b$. Both $T_{aa}$ and $T_{a}=1/2(T_{aa}+T_{ab})$ display a minimum, indicating that an isolated network can reduce the damage caused by a cascade by interconnecting to another network, but only up to a certain level of interconnectedness $p^{*}$. Figures readapted from [brummitt2012suppressing].

Figure 49: Robustness of multilayer networks exposed to overload failures due to nonlocal load redistribution. We show the behavior of $\langle S\rangle$, the size of the largest connected component of the network at the end of the cascade, as a function of the tolerance parameter $\alpha$ (see text). The faster $\langle S\rangle$ grows, the more robust the system is, because the range of tolerance values for which it disintegrates is smaller. Three mechanisms to increase the robustness are presented. In $(a)$, the comparison between the multilayer network and its aggregated counterpart. In $(b)$, dependence on the number of layers. In $(c)$, $\langle S\rangle$ for different values of the multiplexity parameter (fraction of nodes participating simultaneously in a duplex), with the value indicated in the legend. Figure readapted from [artime2020abrupt].

Another mechanism that can induce cascades is load redistribution due to overloads. Arguably, the most famous stylized model accounting for this type of dynamics is the Bak-Tang-Wiesenfeld sandpile model [bak1988self], a paradigmatic example of self-organized criticality used in a variety of contexts [jensen1998self]. Each node has an internal, discrete variable, the load. At each unit of time the load of a node selected uniformly at random increases by one unit. When the load reaches a threshold, assumed to be the node's degree, the node redistributes its load to its neighbors. The neighbors might, in turn, exceed their capacity and reallocate their load to their own neighbors, hence propagating the cascade. Once all nodes have their load below the threshold, the random addition of load is restarted. To avoid inundation of the system, a frequently used strategy is to dissipate load at a certain rate when it is reallocated. In [brummitt2012suppressing], the BTW model is used in the context of multilayer power grids, shedding light on the benefits and dangers of the level of interconnectivity between the layers.
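The toppling rules just described can be sketched in a few lines. For simplicity, the duplex is treated here as an aggregated graph over the same node set (rather than reproducing the interconnection probability $p$ of [brummitt2012suppressing]), the threshold of a node equals its total degree, and each transferred grain is dissipated with a small probability; all numerical values are illustrative.

```python
import networkx as nx
import random

def btw_sandpile(G, n_drops=20000, f_diss=0.05, seed=0):
    """Bak-Tang-Wiesenfeld dynamics: a node topples when its load reaches its
    degree, sending one grain to each neighbor; every transferred grain is lost
    with probability f_diss. Returns the cascade size (topplings) per drop."""
    random.seed(seed)
    nodes = [n for n in G.nodes() if G.degree(n) > 0]
    load = {n: 0 for n in G.nodes()}
    sizes = []
    for _ in range(n_drops):
        start = random.choice(nodes)
        load[start] += 1
        unstable = [start] if load[start] >= G.degree(start) else []
        topplings = 0
        while unstable:
            n = unstable.pop()
            if load[n] < G.degree(n):      # stale entry, already relaxed
                continue
            load[n] -= G.degree(n)
            topplings += 1
            for v in G.neighbors(n):
                if random.random() > f_diss:        # grain survives dissipation
                    load[v] += 1
                    if load[v] >= G.degree(v):
                        unstable.append(v)
            if load[n] >= G.degree(n):     # node may still be overloaded
                unstable.append(n)
        sizes.append(topplings)
    return sizes

# Duplex of random regular layers, aggregated over the same node set.
N = 2000
G = nx.compose(nx.random_regular_graph(4, N, seed=1),
               nx.random_regular_graph(4, N, seed=2))
sizes = btw_sandpile(G)
print("largest cascade:", max(sizes), "  mean cascade size:", sum(sizes) / len(sizes))
```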
Surprisingly, it is found that there is an optimal value of the interconnectivity for which the chance of observing large cascades is minimal (see Fig. 48). This is because the addition of interlayer links between highly isolated power grids helps mitigate the cascades by creating a reservoir that absorbs excess load, a sort of alternative dissipative mechanism. This behavior, though, is valid only up to a certain point, beyond which the further addition of interlayer links is no longer beneficial: it creates more heavily loaded systems, due to larger thresholds and therefore chances of larger cascades, and it also creates a positive feedback due to the new paths through which load can re-enter a layer. Regarding the critical properties of the BTW model, it has been found that multiplexity does not alter the mean-field scaling behavior observed in monoplexes [lee2012sandpiles]. To close this section, we discuss the relevant, but still quite unexplored, case of nonlocal cascade propagation. One might argue that nonlocality can be realized in spatially extended systems by allowing interlayer dependencies between different locations of the space [li2012cascading]. Yet, under this description the failures propagate via first neighbors, in the interdependent sense. In fact, all types of cascades discussed earlier evolve locally via first-neighbors, something that need not be true in real systems, as happened in the 1996 disturbance of the Western Systems Coordinating Council (WSCC) system [NERC2002system], in the 2003 blackout in the northeastern USA [nerc2004technical] or in the air-traffic disruption due to the eruption of the Icelandic volcano Eyjafjallajökull [eurocontrol2010ashcloud]. A plausible description of this phenomenon is based on the load-capacity model of Motter and Lai [motter2002cascade], where a load, defined as the number of shortest paths crossing a node, and a constant capacity, a factor $1+\alpha$ larger than the initial load, are assigned to every node. When the load of a node exceeds its capacity, the node fails. An initial perturbation in the form of node removal is applied to the network, which globally modifies the loads and allows subsequent failures to occur not necessarily close to the prior failure. If during this process other nodes overload, they also fail, propagating the cascade. This model has been investigated recently in [artime2020abrupt], with the finding that the size of the largest connected component at the end of the cascade suffers an abrupt jump when the tolerance parameter $\alpha$ is increased. Moreover, since the load redistribution depends significantly on the topology of the network, the average path length is identified as a metric that correlates well with robustness. Based on this, the article proposes different strategies to increase the robustness of the network, such as adding new layers or reducing the level of multiplexity (see Fig. 49). Unlike the other cascade phenomena introduced in this section, which can be analytically treated with generating functions or multi-type branching processes, nonlocal cascades lack a solid mathematical machinery for their description, and this certainly represents a challenge for the future.

## V Frontiers

### V.1 Kinematic geometry

The geometric approach [boguna2020network] to network analysis has garnered growing interest in the past two decades.
Here we focus on the geometry of network-driven processes on multilayer networks, a family of kinematic geometries generalizing the diffusion geometry introduced in [de2017diffusion]. The structure of a wide variety of real-world complex systems is modular and hierarchical [guimera2005functional] and the effect of these large scale properties on the dynamics of such systems has been studied, during the past decade. It has been shown that complex systems with such a mesoscale organization [fortunato2010community] are characterized by topological scales [arenas2006synchronization], exhibiting the emergence of functional clusters which might be different from topological ones. In [de2017diffusion] the authors investigate the multiscale functional geometry of monoplexes to characterize functional clusters. This approach defines the diffusion distance between any pair of units in a networked system, based on random walk dynamics, shown to correspond, among others, to the phase deviation of coupled oscillators close to metastable synchronization state and consensus dynamics. The diffusion distance for single-layer networks is a key concept to define a kinematic geometry based on network-driven processes and it can be calculated as the $L^{2}-$norm of the difference between rows of the propagator $e^{-t\tilde{\mathbf{L}}}$: $D_{t}(i,j)=\left\lVert\mathbf{p}(t|i)-\mathbf{p}(t|j)\right\rVert_{2}.$ (92) Exploiting the fact that diffusion geometry is based on random walk dynamics, it is possible to extend it to the realm of multilayer networks. At variance with walks on edge-colored networks presented in Sec. IV.1, where one can obtain the transition rules for the multigraph as a weighted average of the transition probabilities in each distinct layer, see Eq. (61), on a multilayer network the walk type determines the probability of a random walker jumping across and within layers (see Eq. 62 and Tab. 1). Consequently, in the edge- colored case diffusion distances between nodes can be obtained directly from Eq. (92), while for multilayers we have to introduce a diffusion distance between state nodes. Regardless of the random walk type, let us indicate the probability of finding a random walker at a given node and layer at time $t$ by $p_{j\beta}(t)$. Similarly to Eq. (92) we define the diffusion distance between state nodes $(i,\alpha)$ and $(j,\beta)$ as $D_{t}^{2}((i,\alpha),(j,\beta))=\sum\limits_{k,\gamma}(p_{k\gamma}(t|(i,\alpha))-p_{k\gamma}(t|(j,\beta)))^{2},$ (93) where probabilities are conditional to using the corresponding state nodes as the origins of random walkers at time $t=0$. One can summarize this supra- distance matrix $\mathbf{D}_{t}=\left(D_{t}((i,\alpha),(j,\beta))\right)$ into an $N\times N$ matrix by encoding the diffusion distance among the physical nodes, across all the layers. Intuitively, resembling the parallel sum of resistances in electrical circuits leading to the equivalent resistance, the _equivalent diffusion distance_ can be written as $D_{t}^{\text{eq}}(i,j)=\left(\sum_{\alpha=1}^{L}\frac{1}{D_{t}((i,\alpha),(j,\alpha))}\right)^{-1}.$ (94) Usually, the functional distance (supra-)matrices are rescaled in $[0,1]$, normalizing each by its maximum value to allow for comparisons. If, additionally, one is interested in the most persistent patterns, the diffusion distance is averaged over time and called the average diffusion distance. The corresponding supra-distance matrix is indicated by $\bar{\mathbf{D}}_{t}$. 
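Eqs. (92)-(94) translate directly into a few lines of linear algebra. The sketch below builds, for a toy node-aligned duplex, the supra-transition matrix of a classical random walk (with an assumed interlayer weight $D_x$), computes the propagator, and evaluates the distances between state nodes as well as the equivalent distance between physical nodes; the other walk types of Tab. 1 only change how the supra-transition matrix is constructed.

```python
import numpy as np
from scipy.linalg import expm

def supra_laplacian(layers, D_x=1.0):
    """Normalized supra-Laplacian I - T of a classical walk on a node-aligned
    multiplex; D_x is the (assumed) weight of the links between replicas."""
    L_num, N = len(layers), layers[0].shape[0]
    A = np.zeros((L_num * N, L_num * N))
    for a, W in enumerate(layers):
        A[a * N:(a + 1) * N, a * N:(a + 1) * N] = W
        for b in range(L_num):
            if b != a:
                A[a * N:(a + 1) * N, b * N:(b + 1) * N] = D_x * np.eye(N)
    T = A / A.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
    return np.eye(L_num * N) - T

def diffusion_distances(L_supra, t=1.0):
    """Matrix of distances between state nodes, Eq. (93)."""
    P = expm(-t * L_supra)                           # row i = p(. | state node i)
    G = P @ P.T
    sq = np.diag(G)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * G, 0.0))

# Toy duplex: a 5-node ring and a 5-node star over the same physical nodes.
ring = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]], float)
star = np.zeros((5, 5)); star[0, 1:] = star[1:, 0] = 1.0
N, layers = 5, [ring, star]
D = diffusion_distances(supra_laplacian(layers), t=1.0)
# Equivalent distance between physical nodes, Eq. (94) (parallel-resistance rule).
with np.errstate(divide="ignore"):
    inv_sum = sum(1.0 / D[a*N:(a+1)*N, a*N:(a+1)*N] for a in range(len(layers)))
D_eq = np.where(np.isinf(inv_sum), 0.0, 1.0 / inv_sum)
print(np.round(D_eq, 3))
```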
The diffusion distance between nodes is highly dependent on the type of random walk dynamics, its propagation time, the topology of layers and the layer-layer correlations. To better understand the effects of dynamics and topological variations, let us consider three classes of two-layer networks, namely:

* • Barabási-Albert layers with preferential attachment of 4 links;

* • Watts-Strogatz layers, with rewiring probability 0.2;

* • Girvan-Newman model layers, where the intra-community connectivity probability is 1, whereas cross-group connections exist with probability 0.05.

Additionally, we consider the five random walks of Tab. 1. For the three cases, the link overlap across layers – i.e., the fraction of links present in both layers among the same pairs of nodes [de2015structural] – is fixed to 10%. As shown in Fig. 50, the diffusive walk on a scale-free topology leads to a high level of mixed pathways across layers, while in small-world systems, nodes tend to form more distinct functional clusters.

Figure 50: Average diffusion distance supra-matrices $\bar{\mathbf{D}}_{t}$ for different combinations of multilayer topologies and random walk dynamics (see the text for details). Figure reprinted with permission from [bertagnolli2020diffusion]. Copyright (2021) by the American Physical Society.

The functional geometry framework has also been used to analyze empirical systems in [bertagnolli2020diffusion], where the authors highlighted dissimilarities in the diffusion spaces of the public transportation of London and of the social network of Noordin terrorists, by looking at the projections of the spaces and at the results of a Mantel test [mantel1967detection] applied to supra-distance matrices. Finally, it is worth mentioning that one can use other walks instead of random ones: this is the case of walks that result from powers of the adjacency matrix and provide the basis for _communicability_, a measure recently used to introduce a geometric framework with applications to single and multiplex networks [Estrada2019].
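Returning to the construction used for Fig. 50, one simple (assumed, not necessarily the original) way to build a duplex with a prescribed link overlap is to let the second layer keep a given fraction of the first layer's links and fill the rest with random links, as sketched below with illustrative parameters.

```python
import networkx as nx
import random

def duplex_with_overlap(G1, overlap=0.10, seed=0):
    """Return a second layer over the same node set sharing ~`overlap` of G1's links."""
    random.seed(seed)
    nodes = list(G1.nodes())
    edges = list(G1.edges())
    shared = random.sample(edges, int(round(overlap * len(edges))))
    G2 = nx.Graph(); G2.add_nodes_from(nodes); G2.add_edges_from(shared)
    while G2.number_of_edges() < len(edges):          # keep the same number of links
        u, v = random.sample(nodes, 2)
        if not G2.has_edge(u, v) and not G1.has_edge(u, v):
            G2.add_edge(u, v)
    return G2

G1 = nx.barabasi_albert_graph(1000, 4, seed=1)
G2 = duplex_with_overlap(G1, overlap=0.10)
common = sum(1 for e in G1.edges() if G2.has_edge(*e))
print("measured link overlap:", common / G1.number_of_edges())
```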
For instance, by fixing the average degree instead of the full degree sequence, we obtain the grand-canonical ensemble of random graphs. Furthermore, if our knowledge is limited to the degree distribution from which the degree sequence is sampled, one obtains the hypersoft configuration model [Krioukov2020MaxEntropy] which is a hypercanonical ensemble of random networks all drawn from the fixed distribution (see Fig. 51).

Figure 51: Schematic of sampling the network, represented by $G$, from the hypercanonical, grand-canonical, and microcanonical ensembles discussed in the text. Figure from [Krioukov2020MaxEntropy].

Similarly, maximum entropy approaches have been used to study ensembles of non-interconnected multiplex networks. Assume an edge-colored multigraph $M$ with $L$ layers, each forming a network $G^{(\ell)}$ ($\ell=1,2,\ldots,L$). Remarkably, when there is no correlation between the layers, the probability of observing $M$ can be obtained as the product of the probabilities of the layers: $\displaystyle P(M)=\prod\limits_{\ell=1}^{L}P(G^{(\ell)})$ (96) and the corresponding Shannon entropy becomes the sum of the Shannon entropies of the layers [bianconi2013statistical]: $\displaystyle S=\sum\limits_{\ell=1}^{L}S^{(\ell)}.$ (97) By imposing the soft constraints – e.g., fixing the average degree instead of the degree sequence – and using the method of Lagrange multipliers, one can obtain the canonical multiplex ensemble with probability $P_{C}(M)$ maximizing the Shannon entropy: $\displaystyle P_{C}(M)=\frac{e^{-\sum\limits_{\mu}\lambda_{\mu}F_{\mu}(M)}}{Z_{C}}=\prod\limits_{\ell=1}^{L}P_{C}(G^{(\ell)}),$ (98) where the $\lambda_{\mu}$ are the Lagrange multipliers and the $F_{\mu}(M)$ determine the constraints on the network (e.g., average degree). Note that, in the presence of layer-layer correlations, we have $P(M)\neq\prod\limits_{\ell=1}^{L}P(G^{(\ell)})$, leading to other probability distributions extensively studied in [bianconi2013statistical].

#### V.2.2 Quantum-like ensembles

Complex systems include a wide variety of physical attributes and dynamics. The interactions between their units vary, from the electrochemical signals traveling among neurons in the human brain to the transport of goods between different areas of an urban ecosystem and the spreading of an infectious pathogen between individuals of a society. Regardless of the nature of these interactions, they can be described as information exchange. In fact, complex systems resemble one another in the way they handle information. Therefore, to understand how these systems operate, from a physical point of view, investigating their information dynamics is crucial. It is important to note that the information flow between the units is regulated by the underlying structure, depending on the local neighborhood in certain classes of networks and on long-range communication between distant components in other topologies. At the same time, it is essential to consider the coupling between the structure and the dynamical processes governing the flow of information, such as diffusion, random walks, synchronization, and consensus. Since it is often difficult to understand a system in terms of microscopic features, a framework providing a statistical description of the system might be relevant for applications. Recently, a statistical field theory of complex information dynamics has been introduced to unify a range of dynamical processes governing the evolution of information on top of static or time-varying structures [de2020SFT].
This framework describes the interactions among the units in terms of _information streams_ , a set of operators that determines the direction of information flow and provides a statistical ensemble to construct the statistical physics of complex information dynamics. In fact, the formalism allows for defining density matrices— i.e., statistical average of streams— from which a variety of descriptors can be derived. Of course, density matrix formalism comes from quantum mechanics, where vectors were found insufficient to represent the mixed states and encode the pairwise coherence between quantum states. Similarly, properties of networked systems cannot be fully described by vectors or distribution functions, without information loss. For instance, a system’s structure is often encoded in two dimensional data structures, being the adjacency matrix, except for trivial symmetries like chains or paths. Therefore, the counterpart of the quantum density matrix has been introduced for complex networks, where off-diagonal elements provide a proxy for interactions between the nodes, instead of the coherence between the states. So far, this framework has only been used to analyze classical networks. In the following, we provide a direct application of the theoretical framework to multilayer systems. Figure 52: Schematic illustration of the spectral entropy for different classes of networks has been presented. Remarkably, the entropic distances provided by the framework can be used to compare network similarity and characterize the network topology with high accuracy. Figure from [de2016spectral]. ##### Redundancy and reducibility of multilayer systems As discussed previously, the constituents of complex systems must exchange information efficiently in order to function properly. However, a deep understanding of the structure’s role in enhancing or hindering the transport properties – such as navigability [boguna2009navigability] – continues to elude us, especially in the case of multilayer systems [gao2012networks, de2013mathematical, gomez2013diffusion, radicchi2013abrupt, kivela2014multilayer, boccaletti2014structure, de2016physics] where an efficient flow might be hindered by the lack of synchronization between different layers [gallotti2014anatomy] or the redundancy of pathways across the layers. There are a number of methods to reduce multiplex networks by structurally merging their most similar layers, based on information-theoretic frameworks [de2015structural]. Despite their success, these methods are mostly based on heuristics and have been proved inaccurate under specific circumstances [de2020EnhanceTransport]. Enhancing the flow distribution in multilayer systems is challenging, since adding links to the layers (e.g., highways, tubes, flights, synapses, etc.) comes with a cost. Interestingly, for multilayer networks with interlinks, changing the weights of interlinks can enhance the diffusion on top of the networks [gomez2013diffusion]. When acting on the structure is not an option, it has been shown that one can still enhance the transport properties (see Fig. 53) using the statistical physics of complex information dynamics by coupling layers dynamically, in a way that a dynamical process can not distinguish the functionally coupled layers – e.g., airlines proposing shared flights to their customers – while evolving. 
The functional reduction consists of coupling the layers with high similarity, which are responsible for the redundant diffusion pathways in the system, in order to identify the (sub)set of maximally diverse layers [de2020EnhanceTransport].

Figure 53: Schematic illustration comparing structural against functional reduction of a multilayer network consisting of $L=4$ layers. The procedure is similar for both approaches, but the relevant difference is that while structural reducibility alters the topology of the system, functional reducibility allows us to functionally couple layers without altering their structure. Figure from [de2020EnhanceTransport].

Diffusive processes, such as random walks, have been used extensively to model the information transport within complex structures [masuda2017random]. Here, we consider random walk dynamics governed by the normalized Laplacian $\hat{\tilde{\mathbf{L}}}=\langle\hat{\tilde{\mathbf{L}}}^{(\ell)}\rangle$ given by $\hat{\tilde{\mathbf{L}}}=\hat{\mathbf{I}}-\langle\hat{\mathbf{T}}^{(\ell)}\rangle$ playing the role of the quasi-Hamiltonian (see Eq. (61)). The density matrix can be obtained for random walks on multiplex networks as $\hat{\boldsymbol{\rho}}(t)=\frac{e^{-t\hat{\tilde{\mathbf{L}}}}}{Z(t)}$. Interestingly, it has been shown that the partition function, which encodes the amount of trapped field, is proportional to the average return probability of the random walk dynamics: $Z(t)=N\mathcal{R}(t)$, where $\mathcal{R}(t)=N^{-1}\sum\limits_{i=1}^{N}e^{-t\lambda_{i}}$ is the average return probability and $\lambda_{i}$ is the $i$–th eigenvalue of $\hat{\tilde{\mathbf{L}}}$. Intuitively, the average return probability is high when the structural symmetries and abundance of redundant diffusion pathways slow down the information propagation between the units. Thus, it is expected that breaking the structural regularities by adding long-range interactions [watts1998collective] or increasing the diversity of diffusion pathways across layers can lead to faster information flow and lower $Z(t)$. Multiplexity of interactions among units generates non-trivial dynamical correlations between layers that have no counterpart when layers are considered in isolation. This important difference can be characterized in terms of the average entropy distance between the multiplex and its layers. This measure, named intertwining, is defined by $\displaystyle\mathcal{I}=\langle\mathcal{D}_{KL}(\hat{\boldsymbol{\rho}}||\hat{\boldsymbol{\rho}}^{(\ell)})\rangle=\frac{1}{L}\sum\limits_{\ell=1}^{L}\mathcal{D}_{KL}(\hat{\boldsymbol{\rho}}||\hat{\boldsymbol{\rho}}^{(\ell)})$ (99) where $\mathcal{D}_{KL}(\hat{\boldsymbol{\rho}}||\hat{\boldsymbol{\rho}}^{(\ell)})=Tr[\hat{\boldsymbol{\rho}}(\log_{2}\hat{\boldsymbol{\rho}}-\log_{2}\hat{\boldsymbol{\rho}}^{(\ell)})]$ is the quantum-like Kullback-Leibler (KL) divergence between the multiplex system as a whole and layer $\ell$. Directly from intertwining, a fundamental inequality between the partition function of the multiplex system as a whole and the partition functions of its layers can be derived. This inequality is important, as it relates the transport phenomena of the multiplex system and its layers, through the average dynamical trapping (i.e., the partition function): $\displaystyle Z(t)\leq\prod\limits_{\ell=1}^{L}Z^{(\ell)}(t)^{1/L},$ (100) where equality holds if and only if all the layers are the same. Using dynamical trapping as a measure of transport, Eq.
(100) shows that a multiplex network has better transport properties than the geometric average of layers, adding an advantage to multilayer structures. Furthermore, being reminiscent of statistical physics of particles, the equality defines the non-interacting scenario where layer-layer correlations do not alter the underlying dynamics: the entropy $S^{(\ell)}(t)$ of each layer is calculated separately and the overall entropy is given by their average $\mathcal{S}_{nint}(t)=\langle S^{(\ell)}(t)\rangle$. Conversely, any topological alteration of the non-interacting scenario introduces a dynamical correlation between layers, requiring the exploration of layers to gather more information about the system: in this case the network consists of interacting layers, where the diffusion dynamics on the whole multiplex network is considered to measure the entropy $\mathcal{S}_{int}(t)$. To obtain another form of Eq. (99), a mean-field approximation of the Von Neumann entropy is given by $\displaystyle\mathcal{S}^{MF}(t)=\frac{1}{\log 2}\left(t\frac{Z(t)-1}{Z(t)}+\log Z(t)\right),$ (101) and can be used to prove that layer-layer interactions lower the system’s entropy ($\mathcal{S}_{int}(t)\leq\mathcal{S}_{nint}(t)$) and partition function $Z(t)$. Normalizing the intertwining by its upper bound, for values of time $t$ sufficiently large and in absence of isolated state nodes, Eq. (99) reduces to the relative intertwining: $\displaystyle\mathcal{I}^{*}(t)=1-\frac{\mathcal{S}_{int}(t)}{\mathcal{S}_{nint}(t)},$ (102) which is bounded between 0 (i.e., the layers are redundant) and 1 (i.e., the layers are diverse and the system is irreducible). We can show how intertwining is proportional to the functional diversity of layers. The Laplacian matrix of the multiplex network is given by $\hat{\tilde{\mathbf{L}}}=\langle\hat{\tilde{\mathbf{L}}}^{(\ell)}\rangle$. Therefore, the Laplacian matrix of each layer $\hat{\tilde{\mathbf{L}}}^{(\ell)}$ can be written as a perturbation of multiplex Laplacian $\hat{\tilde{\mathbf{L}}}^{(\ell)}=\hat{\tilde{\mathbf{L}}}+\Delta\hat{\tilde{\mathbf{L}}}^{(\ell)}$, reflected in its eigenvalues as $\lambda_{i}^{(\ell)}=\lambda_{i}+\Delta\lambda_{i}^{(\ell)}$ $(i=0,1,...N)$. It is straightforward to show that $\frac{1}{N}\sum\limits_{i=1}^{N}\Delta\lambda_{i}^{(\ell)}=\overline{\Delta\lambda^{(\ell)}}=0$ and that $\overline{\Delta\lambda^{(\ell)^{2}}}\geq 0$, the latter quantifying the influence of the perturbation to each layer. The average of the variance across all layers $\overline{\langle(\Delta\lambda^{(\ell)})^{2}\rangle}$ provides a measure of the overall spectral diversity of layers which, interestingly, is demonstrated to be proportional to the relative intertwining: $\displaystyle\mathcal{I}^{\star}(t)$ $\displaystyle\approx$ $\displaystyle\frac{t^{2}}{2}\overline{\left\langle(\Delta\lambda^{(\ell)})^{2}\right\rangle}.$ (103) This proves the sensitivity of intertwining to the spectral diversity of layers. 
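These quantities are easy to evaluate numerically. The sketch below uses a toy duplex and the symmetric normalized Laplacian (which shares its spectrum with $\hat{\mathbf{I}}-\hat{\mathbf{T}}$ and keeps all matrices Hermitian, an assumption made here purely for numerical convenience) to compute the density matrices, the intertwining of Eq. (99), the relative intertwining of Eq. (102), and to check the partition-function inequality of Eq. (100); the value of $t$ and the layer topologies are illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm

def sym_normalized_laplacian(A):
    """Symmetric normalized Laplacian (same spectrum as I - T used in the text)."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def density_matrix(L, t):
    K = expm(-t * L)
    return K / np.trace(K), np.trace(K)

def von_neumann_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def kl_divergence(rho, sigma):
    """Quantum-like KL divergence Tr[rho (log2 rho - log2 sigma)]."""
    return float(np.trace(rho @ (logm(rho) - logm(sigma))).real) / np.log(2)

# Toy duplex: ring layer and star layer on the same 5 nodes.
ring = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]], float)
star = np.zeros((5, 5)); star[0, 1:] = star[1:, 0] = 1.0
L_layers = [sym_normalized_laplacian(ring), sym_normalized_laplacian(star)]
L_mux = sum(L_layers) / len(L_layers)                 # multiplex Laplacian <L^(l)>
t = 2.0
rho, Z = density_matrix(L_mux, t)
layer_rho, layer_Z = zip(*(density_matrix(Ll, t) for Ll in L_layers))
intertwining = np.mean([kl_divergence(rho, s) for s in layer_rho])       # Eq. (99)
S_int = von_neumann_entropy(rho)
S_nint = np.mean([von_neumann_entropy(s) for s in layer_rho])
print("intertwining         :", round(intertwining, 4))
print("relative intertwining:", round(1.0 - S_int / S_nint, 4))          # Eq. (102)
print("Eq. (100) satisfied  :", Z <= np.prod(layer_Z) ** (1.0 / len(layer_Z)))
```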
Furthermore, the partition functions of layers can be written in terms of perturbations such as $Z^{(\ell)}(t)=Z(t)+\Delta Z^{(\ell)}(t)$, leading to $\displaystyle\mathcal{I}^{\star}(t)$ $\displaystyle\approx$ $\displaystyle\frac{\overline{\Delta Z^{(\ell)}}(t)}{Z(t)-1};\ \overline{\Delta Z^{(\ell)}}(t)=\frac{1}{L}\sum\limits_{\ell=1}^{L}\Delta Z^{(\ell)}(t).$ (104) Equations (103) and (104) provide a fundamental result: they show that by minimizing the partition function of the system one maximizes the relative intertwining while favoring the maximum functional diversity of layers. Additionally, it has been shown that an inverse proportionality holds between the diffusion time ($1/\lambda_{2}$) and intertwining, as further evidence for the role of intertwining in characterizing the transport properties. These theoretical findings have been applied to a broad range of synthetic and empirical systems, providing a transparent framework for coupling the most similar layers of multiplex networks, in order to improve their transport properties including dynamical trapping, diffusion time, and navigability [de2020EnhanceTransport]. ## VI Conclusions Network Science is one of the greatest achievements of the 21st century, paving the way toward a mathematical approach for the analysis of disparate complex systems, and allowing us to find regularities in apparently disordered connectivity patterns. The past decade has seen the flourishing of analytical techniques and models exploiting, or characterizing, the inherent multidimensionality of empirical systems, from multiplexity – i.e., the existence of distinct types of relationships among the same set of actors or units – to interdependency – i.e., the existence of structural or functional connections among sets of actors or units of a different nature. Such multiple dimensions are nowadays easily encoded into layers of information. Figure 54: Multilayer representation of a cell. Layers: i) regulatory interactions involving RNA and protein expression, ii) protein-protein interactions involved in signaling and responsible cell function, iii) metabolic interactions with reactions and pathways crucial for cell function. Figure from [vermeulen2020exposome]. Most of the advances in this direction are described in some detail or referred to in this work. Starting from the mathematical representation of multilayer networks (Chap. 2) we have introduced structural descriptors for units, layers and the whole system (Chap. 3), providing an overview of the micro- and meso-scale organization of such systems. We have discussed the rich spectrum of phenomena, with no classical counterparts, related to dynamical processes on the top of the networks and their intertwining (Chap. 4), guiding the reader towards two promising research areas for the future, namely network geometry and information dynamics (Chap. 5), although many other exciting sub- field are emerging, e.g., higher-order modeling and analysis [lambiotte2019networks, battiston2020networks]. At this point, the reader should be sufficiently familiar with multilayer network science and, to conclude, we would like to make a quick journey through the most recent applications of its paradigm, moving across different spatial scales and ranging from cells to societies. The first stop of this journey is exactly a cell, which can vary in diameter between $10^{-6}$ m and $10^{-4}$ m (note that a DNA double helix is about $10^{-8}$ m wide). 
The cell can be seen as a multilayer system consisting of three interdependent layers (see Fig. 54). Here, applications are mostly related to the emerging field of systems biology and network medicine, promising to use biomolecular interactions to develop a deeper knowledge of biology across scales with the ultimate goal to better understand diseases, to prevent them, and to treat them with medical drugs that reduce side effects. The second stop requires a big jump of about three orders of magnitude, to explore one of the most famous multicellular organisms with a small-scale neural system: the Caenorhabditis elegans, a nematode of about $10^{-3}$ m. The nervous system of this small worm consists of synaptic and neuropeptide interactions which can be seen as interconnected layers. Here, the multilayer perspective provides the opportunity to better understand how the integration between hard-wired synaptic or junctional circuits and extrasynaptic signals can modulate the large-scale behavior of the worm [bentley2016multilayer]. At a larger scale, around $10^{-1}$ m, another emblematic neural system, namely the human brain, is currently being characterized in terms of how its structural and functional connectivity evolves over time, e.g., while performing a specific task, or across groups, unraveling the existence of modular and hierarchical structures which would remain hidden under the lens of less sophisticated models and analytical techniques [bassett2017network]. In parallel, the functional connectivity of the human brain can be stratified by frequency bands (see Fig. 55) where specific correlations or causal relationships between regions of interests appear [de2017multilayer]. Multilayer analysis can be used to better characterize the relative importance of all brain regions [williamson2021multilayer] and to enhance the accuracy in discriminating between healthy and unhealthy patients starting from imaging information [de2016mapping]. Figure 55: Multilayer representation of a human brain. Both the 3D and layered visualization encode the functional brain of a schizophrenic subject (11 non- overlapping frequency-band layers between 0.01 and 0.23 Hz). Figure from [de2017multilayer], readapted from [de2016mapping]. At larger scales, on the order of hundreds of meters ($10^{2}$m), cooperative systems like that of dolphins, can be characterized with respect to distinct types of interactions allowing us to get unprecedented insights about the underlying social organization and group dynamics. At similar spatial scales, another emblematic example, accounting for the socio-spatial interdependence typical of many other networks, concerns the organization of ecological systems (Fig. 56): for instance, it has been recently shown that the importance of dispersers for an ecosystem is better captured by multilayer measures of importance rather than by standard metrics [timoteo2018multilayer]. Figure 56: Aggregate (top) and multilayer (bottom) representations of a real seed-dispersal network. Reproduced from [timoteo2018multilayer] under Creative Commons Attribution 4.0 International License. At the human scale, relatively small-scale social systems ($10^{4}\leavevmode\nobreak\ $m) have been studied to better understand the impact of external shocks on social structure and dynamics. 
For instance, it has been shown that multilayer modeling can disentangle the different social ties that make up the proximity interaction networks of extant hunter-gatherer societies, identifying social relationships with a key role in the spread and accumulation of culture [migliano2017characterization]. In another study, the analysis of mixed economies in three villages of Alaska revealed that factors related to climate change, such as global warming, have a non-negligible effect on household structure, but the most important factors for vulnerability are indeed due to social shift rather than resource depletion [baggio2016multiplex]. Let us jump by three more orders of magnitude and discuss systems at the planetary scale: the ones based on information exchange thanks to large-scale communication infrastructures, such as the Internet. At this scale the activity of a complex system might be very frenetic; think about an online social media platform, where millions or billions of users worldwide continuously produce content to be shared, which quickly travels and bounces from one country to another. For instance, it is interesting to study how rumors and memes spread on these systems, as recently proposed in [d2019spreading] to capture the behavior of users who post information from one social media platform to another and to provide a plausible explanation for the heavy-tailed distributions of meme popularity that are usually observed in empirical data. Recent applications also include the analysis of trade networks and their nested and modular structure [torreggiani2018identifying, a2018unfolding, alves2019nested]. Additionally, as anticipated, multilayer models of financial networks have been proposed very recently, to better explain how the financial distress of one country can ignite a global financial crisis, like the one in 2008. By considering distinct asset types as layers where countries (nodes) exchange financial assets (links), [del2020multiplex] have highlighted the importance of both intra-layer and inter-layer connectivity contributions to the propagation of contagions (Fig. 57).

Figure 57: Multilayer representation of a global financial network. Layers are asset types, nodes are countries and links are cross-country financial relationships. Figure from [del2020multiplex].

The long journey summarized in the last few pages of this work allowed us to stress the importance of multilayer modeling across a broad spectrum of disciplines, including cell biology, neuroscience, ecology and social sciences, spanning 10 orders of magnitude from a spatial scale of about $10^{-6}$ m to one of about $10^{4}$ m. The theoretical and computational framework presented in this work provides researchers and practitioners with a versatile and unified tool kit to shed light on and gain new insights into the complexity of natural and artificial systems.

## Appendix A Master Stability Function (MSF) formalism

To test the stability of the synchronized state $\textbf{s}$ we study how the perturbation error $\delta\textbf{x}_{i}(t)=\textbf{x}_{i}(t)-\textbf{s}(t)$ evolves. By assuming small perturbations, we can write the variational equation for $\delta\textbf{x}_{i}$: $\dot{\delta\textbf{x}}_{i}=J\textbf{F}(\textbf{s})\delta\textbf{x}_{i}-\sigma J\textbf{H}(\textbf{s})\sum_{j=1}^{N}L_{ij}\delta\textbf{x}_{j}$ (105) where $J\textbf{F}$ and $J\textbf{H}$ are the Jacobians of $\textbf{F}$ and $\textbf{H}$, respectively. To find the solution of Eq.
(105) we have to project $\delta\textbf{x}$ into the eigenspace formed by the eigenvectors of the Laplacian matrix L, obtaining a decomposition of the time evolution of the perturbation error into $N$ decoupled eigenmodes: $\dot{\boldsymbol{\xi}_{i}}=[J\textbf{F}(\textbf{s})-\sigma\lambda_{i}J\textbf{H}(\textbf{s})]\boldsymbol{\xi}_{i},\quad i=1,...,N$ (106) where $\boldsymbol{\xi}_{i}$ is the eigenmode associated with the eigenvalue $\lambda_{i}$ of L. By indicating with $\Lambda_{max}$ the maximum Lyapunov exponent associated with the system of equations (106), we can write the time evolution of $\boldsymbol{\xi}$ as $|\boldsymbol{\xi}|\sim e^{\Lambda_{max}t}$ and, finally, we find that a necessary condition for the stability of a synchronized state is that $\Lambda_{max}<0$. The expression of $\Lambda_{max}$ as a function of a generic parameter $\alpha_{i}=\sigma\lambda_{i}$ is named the Master Stability Function (MSF), and it usually assumes negative values in an interval $\alpha\in(\alpha_{1},\alpha_{2})$. It means that, for a fixed coupling strength $\sigma$, a network can reach and maintain complete synchronization only if its structure, defined by its Laplacian matrix, is such that: $\alpha_{1}<\sigma\lambda_{2}\leq\sigma\lambda_{3}\leq\ldots\leq\sigma\lambda_{N}<\alpha_{2},$ (107) or, equivalently, $R\equiv\frac{\lambda_{N}}{\lambda_{2}}<\frac{\alpha_{2}}{\alpha_{1}},$ (108) used, for instance, in [sole2013spectral] to unravel the existence of an optimal value for the synchronizability of a multilayer system.

## Appendix B Kuramoto model on networks

The first approach to model collective synchronization considers a population of coupled limit-cycle oscillators whose natural frequencies are drawn from some prescribed distribution and exert a phase-dependent influence on each other. In formulae, we can write the Kuramoto model (KM) [Strogatz2000, Arenas2008] as a system of $N$ oscillators, whose instantaneous phases $\theta_{i}$ are described by the equation: $\dot{\theta_{i}}=\omega_{i}+\frac{\sigma}{N}\sum_{j=1}^{N}\sin(\theta_{j}-\theta_{i}),$ (109) where $\sigma$ is the coupling constant and $\omega_{i}$ is the natural frequency of oscillator $i$, chosen from an unimodal distribution $g(\omega)$. We can use an order parameter that describes the transition: it is defined as a macroscopic measure that quantifies the collective rhythm produced by the whole population: $r(t)e^{i\Phi(t)}=\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}(t)},$ (110) where $\Phi(t)$ is the average phase, and $0\leq r(t)\leq 1$ measures the phase coherence, where the two extreme values correspond to phase-locked ($r=1$) or incoherent oscillators. Equation (109) is then rewritten in terms of the order parameter, and the equation for the instantaneous phase reduces to $\dot{\theta_{i}}=\omega_{i}+\sigma r\sin(\Phi-\theta_{i}),$ (111) which has two types of long-term behaviours. Oscillators for which $|\omega_{i}|\leq\sigma r$ are phase-locked and form a mutually synchronized cluster. Oscillators with frequencies in the tails of the $g(\omega)$ distribution, where $|\omega_{i}|>\sigma r$ holds, are drifting with respect to the synchronized cluster. The Kuramoto model is then generalized to networks by including in Eq.
(109) information about network connectivity: $\dot{\theta_{i}}=\omega_{i}+\sum_{j=1}^{N}\sigma_{ij}A_{ij}\sin(\theta_{j}-\theta_{i}).$ (112) ## Appendix C Transitions in multilayer systems Type of phase transition | Mode | Reference ---|---|--- Enhanced diffusion | C | [gomez2013diffusion] Emergence of multiplexity | D | [radicchi2013abrupt] Table 2: Phase transitions (C: continuous, D: discontinuous, H: hybrid) in multilayer networks due to the algebraic, topological and simple dynamics described in Sec. IV.1. Type of dynamics | Mode | Reference ---|---|--- Synchronization | C | [Gambuzza2015] Explosive synchronization with local order parameter | D | [Zhang2015] Synchronization of oscillators with random walks | C & D | [nicosia2017collective] Interacting diseases | C | [Sanz2014] Epidemic onset control by raising of awareness | C | [Granell2013] Explosive pandemics with no early-warning | D | [danziger2019dynamic] Cooperative behaviour in coupled competitive games – social influence | C | [amato2017interplay] Cooperative behaviour in Prisoner’s Dilemma game | C | [gomez2012evolution] Cooperative behaviour in Public Good game | C | [battiston2017determinants] Table 3: Phase transitions in multilayer networks due to the simple and couple dynamics described in Sec. IV.2, Sec. IV.3 and Sec. IV.4. Type of network | Mode | Reference ---|---|--- Multilayer network | C | [leicht2009percolation] Multiplex | D | [son2012percolation] Partial Multiplex | C & D | [son2012percolation] Multiplex with overlap | C, D & H | [hu2013percolation, baxter2016correlated, cellai2016message] Multiplex with spatially embedded networks | C & D | [son2011percolation, bashan2013extreme, grassberger2015percolation] Bond Perc. on Multiplex | C | [hackett2016bond] Multiplex & Local cascades | D | [buldyrev2010catastrophic] Multiplex & Non-local cascades | D | [artime2020abrupt] Table 4: Type of transitions for different scenarios in percolation and cascades propagation.
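To make Appendix B concrete, the sketch below integrates Eq. (112) with a simple Euler scheme on an Erdős–Rényi graph, assuming a uniform coupling $\sigma_{ij}=\sigma$, and monitors the order parameter of Eq. (110); all numerical values (network size, frequency distribution, coupling strengths, time step) are illustrative.

```python
import numpy as np
import networkx as nx

def kuramoto_on_network(A, omega, sigma=1.0, dt=0.01, steps=5000, seed=0):
    """Euler integration of Eq. (112) with uniform coupling sigma_ij = sigma;
    returns the time series of the order parameter r(t) of Eq. (110)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    r_hist = []
    for _ in range(steps):
        # diff[i, j] = sin(theta_j - theta_i), summed over neighbours j of i
        diff = np.sin(theta[None, :] - theta[:, None])
        theta = theta + dt * (omega + sigma * (A * diff).sum(axis=1))
        r_hist.append(abs(np.exp(1j * theta).mean()))
    return np.array(r_hist)

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
A = nx.to_numpy_array(G)
omega = np.random.default_rng(2).normal(0.0, 0.5, size=200)   # unimodal g(omega)
for sigma in (0.02, 0.2):
    r = kuramoto_on_network(A, omega, sigma=sigma)
    print(f"sigma = {sigma}: stationary r ~ {r[-500:].mean():.2f}")
```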
# Guided Sampling-based Evolutionary Deep Neural Network for Intelligent Fault Diagnosis

Arun K. Sharma Student Member, IEEE and Nishchal K. Verma Senior Member, IEEE Arun K. Sharma and Nishchal K. Verma are with the Dept. of Electrical Engineering, Indian Institute of Technology, Kanpur, India. e-mail: <EMAIL_ADDRESS>and<EMAIL_ADDRESS>

###### Abstract

The diagnostic performance of most deep learning models is greatly affected by the selection of the model architecture and its hyperparameters. The main challenges in model selection methodologies are the design of the architecture optimizer and the model evaluation strategy. In this paper, we have proposed a novel framework of evolutionary deep neural network which uses policy gradient to guide the evolution of the DNN architecture towards maximum diagnostic accuracy. We have formulated a policy gradient-based controller which generates an action to sample the new model architecture at every generation. The best fitness obtained is used as a reward to update the policy parameters. Also, the best model obtained is transferred to the next generation for quick model evaluation in the NSGA-II evolutionary framework. Thus, the algorithm gets the benefits of fast non-dominated sorting as well as quick model evaluation. The effectiveness of the proposed framework has been validated on three datasets: the Air Compressor dataset, the Case Western Reserve University dataset, and the Paderborn University dataset.

Despite the many recent advancements in computational intelligence, intelligent fault diagnosis still requires considerable expertise to design the diagnostic model. This is because the performance of most diagnostic models depends on model selection and hyperparameters. The proposed method of guided sampling-based evolutionary deep neural network uses a non-dominated sorting-based evolutionary algorithm to optimize the model architecture. The model architecture is sampled by using a reward-based policy to steer the evolutionary algorithm towards a model architecture that gives maximum accuracy. Therefore, the proposed method provides a solution for model architecture selection without any human expertise while ensuring the best possible diagnostic performance.

###### Index Terms:

Neural architecture search, Intelligent fault diagnosis, Deep neural network, Non-dominated sorting algorithm, Policy gradient.

## 1 Introduction

With the advancement in modern computational technology, machine learning-based intelligent fault diagnosis has become an integral part of almost all industrial sectors. Intelligent fault diagnosis refers to the preventive maintenance of industrial machines using machine learning-based data analysis and class detection [1, 2, 3, 4, 5, 6]. Data-driven fault diagnosis of industrial machines faces many challenges: (i) availability of labeled data, since running the machine in a faulty state under real-time load is not possible, (ii) training of deep learning algorithms with a limited amount of labeled data, and (iii) selection of the model best suited for the dataset under consideration. A deep neural network (DNN) has been commonly used for intelligent fault diagnosis due to its high capability of non-linear feature transformation [7, 8]. However, training the DNN from scratch for every change in the operating condition of the machine becomes an uneconomical process. To solve such problems, many researchers have suggested a variety of domain adaptation methods [9, 10, 11, 12, 13, 14, 15, 16].
[11], [14], and [15] use a labeled source dataset from the laboratory machine and an unlabeled target dataset from the test machine to train a model capable of diagnosing faults on the test dataset from the target machine. A quick learning mechanism to train the model in the target domain by transforming a model already trained on source data has also been suggested [16]. All these methods work well only if a suitable architecture is selected based on trial and error and some prior experience with the dataset. In practice, selecting and training the most suitable architecture becomes a big challenge with every change in the dataset due to variations in machine operating conditions and fault types. Therefore, our main objective is to investigate and develop an algorithm that can find the best-suited architecture for the given dataset with limited computational resources and computational time.

As the performance of most deep learning algorithms is very much affected by the hyper-parameters, research on hyper-parameter optimization or neural architecture search (NAS) has gained considerable attention in recent years [17, 18]. Based on the architecture optimization strategy, NAS methods are mainly categorized as: (i) random search and grid search [19], (ii) surrogate model-based optimization [20], (iii) reinforcement learning [21], (iv) genetic algorithm [22, 23, 24, 25, 26], (v) gradient descent [27], and (vi) hybrid algorithms [28]. In most of the aforementioned methods, the biggest challenge for architecture search is the model evaluation because of the complex training mechanism of most deep learning models [18]. In genetic algorithm-based NAS methods, the evolution of the architecture takes many GPU-days. For example, regularized evolution of an image classifier [29] with 450 K40 GPUs takes 3150 GPU-days. Architecture optimization using the non-dominated sorting genetic algorithm II (NSGA-II [30]) has also gained much attention due to its fast sorting framework [31, 26]. EvoN2N [26] uses the concept of knowledge transfer for fitness evaluation in the NSGA-II based framework. The quick fitness evaluation together with fast NSGA-II makes the algorithm faster compared to the state-of-the-art evolutionary methods. In this paper, we extend the work of EvoN2N [26] to introduce a guided sampling-based evolution that makes the convergence faster while exploiting the search space with the help of a reward-based controller. The key contributions of this work are highlighted below:

1. We have formulated the guided sampling-based evolution of DNN architecture (GS-EvoDNN) using policy gradient.
2. We have introduced mean and variance terms to exploit the search space for the model optimization.
3. We have formulated an update law for the mean and variance terms using policy gradient.
4. We have adopted the knowledge transfer mechanism to initialize the weight matrices of the models at a generation by using the best model from the previous generation.

The rest of the article is organized as follows. Table 1 summarizes the symbols commonly used throughout the article, except index variables on summations and loop variables in algorithmic steps. Section 2 defines the objective problem. Section 3 briefly discusses the related works and the theoretical background. Section 4 explains the implementation details of the proposed framework of GS-EvoDNN. Section 5 discusses the effectiveness of the proposed framework for fault diagnosis under various load and operating conditions of the machine.
And finally, Section 6 concludes the whole paper.

Table 1: List of Symbols

Symbol | Description
---|---
$\mathcal{D}^{tr}$, $\mathcal{D}^{val}$, & $\mathcal{D}^{te}$ | Training, validation & test datasets
$\textbf{X}\in\Re^{(n_{s}\times n_{f})}$, $\textbf{y}\in\Re^{n_{s}}$ | Input data, output labels
$n_{s}$ & $n_{f}$ | Number of samples & features
$C$ | Number of classes in the dataset
$\Psi_{t}$ | DNN model at generation $t$
$\Psi_{t}^{\dagger}$ | Best DNN model at generation $t$
$\Lambda_{t},\,\mathcal{R}_{t}$ | Fitness matrix, rank at generation $t$
$P_{t},\,Q_{t}$ | Population, offspring at generation $t$
$n_{p}$ | Number of hidden layers in the $p^{th}$ model
$h_{k}$ | Number of nodes in the $k^{th}$ hidden layer

## 2 Problem Statement

Let the training dataset, validation dataset, and test dataset be $\mathcal{D}^{tr}=\left(\textbf{X}^{tr},\textbf{y}^{tr}\right)$, $\mathcal{D}^{val}=\left(\textbf{X}^{val},\textbf{y}^{val}\right)$, and $\mathcal{D}^{te}=\left(\textbf{X}^{te},\textbf{y}^{te}\right)$, respectively, where $\textbf{X}\in\Re^{(n_{s}\times n_{f})}$ is the input data with $n_{s}$ samples & $n_{f}$ features and $\textbf{y}\in\Re^{n_{s}}$ is the corresponding output label vector. The objective of optimal DNN architecture search for fault classification can mathematically be formulated as

$\displaystyle\Psi^{\dagger}=\mathcal{H}\left(P,\;\mathcal{D}^{tr},\;\mathcal{D}^{val}\right)$ (1)

$\displaystyle\hat{\textbf{y}}^{te}\;=\;\mathcal{F}\left(\Psi^{\dagger},\;\textbf{X}^{te}\right)$ (2)

where $\mathcal{H}(.)$ denotes the optimization function to get the best model $\Psi^{\dagger}$ with optimal parameters for the training dataset and $\mathcal{F}(.)$ is the feed-forward DNN function which predicts the fault class $\hat{\textbf{y}}^{te}$ for the test data $\textbf{X}^{te}$.

## 3 Related Works and Theoretical Background

### 3.1 Deep Neural Network (DNN)

The deep neural network (DNN), a multi-layered neural network, is the most popular technique for pattern recognition via non-linear feature transformation in multiple stages [32]. From the training point of view, a DNN can be considered as two parts: a stack of a given number of auto-encoders (also called a stacked auto-encoder: SAE) [33] and a classifier, usually a softmax classifier, as the output layer. First, greedy layer-wise unsupervised training is used to train each auto-encoder (AE) in the SAE. Then, the SAE stacked with the classifier at the end layer is fine-tuned using the labeled training dataset. The SAE with softmax classifier (DNN model $\Psi$) is depicted in Fig. 1.

Figure 1: SAE with softmax classifier (DNN model: $\Psi$)
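To make the two training stages above concrete, here is a minimal, self-contained NumPy sketch (not the authors' code): one auto-encoder is pretrained to reconstruct its input, its encoder is then stacked with a softmax layer, and the whole network is fine-tuned with cross-entropy. Plain gradient descent stands in for L-BFGS, a single hidden layer stands in for the full SAE, and the data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    """Greedy unsupervised stage: train one AE to reconstruct its input."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encoder
        R = H @ W2 + b2                   # linear decoder (reconstruction)
        dR = (R - X) / n                  # gradient of mean squared error
        dH = (dR @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dR;  b2 -= lr * dR.sum(0)
        W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)
    return W1, b1

def fine_tune_softmax(X, y, W1, b1, n_classes, lr=0.5, epochs=300):
    """Supervised stage: encoder + softmax output, trained with cross-entropy."""
    n = X.shape[0]
    Ws = rng.normal(0, 0.1, (W1.shape[1], n_classes)); bs = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]              # one-hot labels
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)
        Z = H @ Ws + bs
        P = np.exp(Z - Z.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
        dZ = (P - Y) / n                  # gradient of mean cross-entropy
        dH = (dZ @ Ws.T) * H * (1 - H)
        Ws -= lr * H.T @ dZ;  bs -= lr * dZ.sum(0)
        W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)
    return W1, b1, Ws, bs

# Placeholder data standing in for segmented, normalized vibration/acoustic features
X = rng.random((200, 40)); y = rng.integers(0, 4, 200)
W1, b1 = pretrain_autoencoder(X, n_hidden=20)
W1, b1, Ws, bs = fine_tune_softmax(X, y, W1, b1, n_classes=4)
pred = (sigmoid(X @ W1 + b1) @ Ws + bs).argmax(1)
print("training accuracy:", (pred == y).mean())
```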
### 3.2 Intelligent Fault Diagnosis

Recently, with the advent of advanced machine learning techniques and the availability of fast computational resources, data-driven intelligent fault diagnosis methods have gained much popularity [1, 2, 4, 3]. In these methods, various machine learning techniques are utilized to learn the specific signature of the recorded signals, such as current, vibration, temperature, etc., and thereafter identify the existence of machinery faults using the test samples. Neural networks (NN) [34], support vector machines (SVM) [35, 36], and random forest (RF) classifiers [5] have been used very effectively for intelligent fault diagnosis and have proven to be baseline methods for pattern recognition. However, the diagnostic performance of these methods is reduced by high sparsity and low-quality features in the dataset [37].

Intelligent fault diagnosis using deep learning methods has gained much attention due to its capability of multi-scale hierarchical feature transformation and large-dimensional data handling [7, 8, 14]. However, using a deep neural network for fault diagnosis faces the major challenge of training from scratch for every new operating condition of the machines. The recent trend of using deep transfer learning methods for domain adaptation has been very effective for fault diagnosis under changeable operating conditions [9, 10, 12, 13, 38, 15, 16]. However, the diagnostic performance of these methods is greatly affected by the selection of the architecture of the deep neural network and the other hyper-parameters.

### 3.3 Neural Architecture Search (NAS)

The main objective of NAS methods is to obtain the optimal architecture in a given search space with the best model performance [18]. There are three important aspects of NAS methods: (i) formulation of the search space, (ii) the architecture optimizer, and (iii) the model evaluation. The formulation of the search space defines the design format for the model architecture. It can be categorized into four groups: (i) cell-based, (ii) entire-structured, (iii) morphism-based, and (iv) hierarchical search space. The most important aspect of NAS methods is the model evaluation, as it is computationally very expensive to train each model during the search process and evaluate it on the unseen dataset. To accelerate the evolution, various mechanisms have been suggested for the model evaluation [18, 29, 39, 40, 20, 41, 42, 43]. K. Kandasamy et al. [41] suggested learning-curve extrapolation for model performance evaluation instead of training and evaluating the actual architecture. H. Pham et al. [43] proposed the method of parameter sharing for faster training and evaluation of the model architecture. Another important aspect of NAS methods is the architecture optimizer (AO). The objective of automatic AO is to automatically guide the model architecture search in a direction that yields the best suitable model for a given dataset. The AO methods adopted by various researchers can be categorized as (i) random search (RS), (ii) grid search (GS), (iii) surrogate model-based optimization (SMBO), (iv) gradient descent (GD), (v) reinforcement learning (RL), (vi) genetic algorithms (GA), and (vii) hybrid methods. In the RS method, the search optimizer tries different architectures randomly from the defined search space [19], whereas in GS, the search method uses a grid to sample and evaluate the model architecture [40]. SMBO methods use Bayesian optimization [20, 41, 42] or neural networks [44] as a surrogate model of the objective function to obtain the most promising solution (the model architecture). The gradient descent-based method uses a softmax function to find the optimal architecture over a continuous and differentiable search space [27]. In RL-based NAS [21, 45], a controller (usually a recurrent neural network) generates an action to sample a new architecture. The observation (state) and the reward from the environment are used to update the controller policy to generate new architecture samples. Here, the training and validation process of the sampled neural network is treated as the environment, which returns the validation accuracy. GA-based NAS methods [22, 23, 24, 25, 46, 31] use heuristic search to find the best performing architecture over a given search space.
In these methods, heuristically sampled neural architectures are trained and evaluated using conventional neural-network training methods, and the performance metrics are used as the fitness for evolution to obtain the optimal architecture. The main challenge of these methods is the fitness evaluation of the individual models. All of these AO methods have their own merits and demerits. The hybridization of two of the above methods may give a significant improvement in the search efficiency; this is called the hybrid method of AO [47, 28, 48].

### 3.4 Policy Gradient

Policy gradient (PG) is a tool to optimize the controller policy of a reinforcement learning algorithm [49, 50]. The controller policy is the parameterized function that defines the learning agent's way of acting on the environment to get maximum reward (Fig. 2). The reward quantifies the good or bad effect of the action taken by the policy towards the fulfillment of the optimization objective. Let the parameter vector be $\theta_{t}$; then the parameterized policy is represented as $\pi_{\theta_{t}}(a_{t}|s_{t})$, where $a_{t}$ and $s_{t}$ represent the action and state at a given time $t$. Let the action $a_{t}$ produce reward $r_{t+1}$ from the environment; then the trajectory of states, actions and rewards can be represented as $\left((s_{0},a_{0},r_{1}),(s_{1},a_{1},r_{2}),....(s_{t},a_{t},r_{t+1})\right)$. The policy parameter $\theta_{t}$ can be updated using the policy gradient as

$\theta_{t+1}=\theta_{t}+\eta_{t}\nabla_{\theta_{t}}\mathcal{J}(\theta_{t})$ (3)

where $\eta_{t}$ denotes the learning rate at time $t$, usually a constant real number. $\nabla_{\theta_{t}}\mathcal{J}(\theta_{t})$ denotes the policy gradient and can be calculated using the expected cumulative reward ${U}_{t}$ over the time $t$ as follows.

$\displaystyle\nabla_{\theta_{t}}\mathcal{J}(\theta_{t})=\nabla_{\theta_{t}}E\left[{U}_{t}\right]=\nabla_{\theta_{t}}\int_{\tau}\pi_{\theta_{t}}(\tau)r(\tau)\,d\tau$ (4)

$\displaystyle=E\left[r(\tau)\cdot\nabla_{\theta_{t}}\log\pi_{\theta_{t}}(\tau)\right]$ (5)

$\nabla_{\theta_{t}}\mathcal{J}(\theta_{t})=\frac{1}{N}\sum_{k=1}^{N}r(k)\left(\sum_{t=1}^{T}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}\left(a_{t}|a_{t-1:1};\theta_{t}\right)\right)$ (6)

## 4 Proposed Framework

In this section, the proposed framework of guided sampling-based evolutionary DNN (GS-EvoDNN) is described in detail. Fig. 2 shows the schematic of the workflow of GS-EvoDNN. In the figure, the DNN architecture optimization in the NSGA-II framework constitutes the environment. The fitness of the best model is treated as the reward. The sorted fitness of all the individuals in the population is treated as the state of the controller. The controller policy $\pi_{\theta_{t}}$ generates an action $a_{t}=[m_{t},\sigma_{t}]$, where $m_{t}$ and $\sigma_{t}$ are the mean and variance for the sampling of the DNN architecture at generation $t$. Given the training dataset $\mathcal{D}^{tr}$ and the validation dataset $\mathcal{D}^{val}$, the algorithmic steps of GS-EvoDNN are presented in Algorithm 1. Our contributions are highlighted in Algorithm 1 and are further discussed in the following sections.

Figure 2: Flow Diagram of guided sampling based NSGA-II.

Figure 3: Variable-length gene encoding strategy.

Algorithm 1 GS-EvoDNN: The Main Framework

Input: $\mathcal{D}^{tr}\,\&\,\mathcal{D}^{val}$ = training & validation datasets, $n_{R}\,\&\,h_{R}=$ maximum depth & width of the DNN, respectively

Output: $\Psi^{\dagger}$ = best model after the termination or last generation of the evolution.
1: $t\longleftarrow 0$ //Set generation count ($t$) = 0
2: $[m,\;\sigma]\longleftarrow$ Compute mean and variance from the allowable ranges for depth ($n_{R}$) and width ($h_{R}$) of the DNN.
3: $P_{0}\longleftarrow\textbf{GuidedPop}(m,\sigma,N_{p})$ //Generate $N_{p}$ individuals using Algorithm 2.
4: $\Psi_{0}\longleftarrow$ Initialize weight matrices of the first model ($P_{0}\\{1\\}$) by small random numbers.
5: $\Lambda,\,\Psi_{1}^{\dagger}\longleftarrow\textbf{FitnessEval}(P_{0},\mathcal{D}^{tr},\mathcal{D}^{val},\Psi_{0})$ //Evaluate fitness of all individuals in $P_{0}$ using Algorithm 3.
6: $\mathcal{R}\longleftarrow NonDominatedSorting(\Lambda)$ //Assign rank using non-dominated sorting [30].
7: $P_{1}\longleftarrow SelectParents(P_{0},\;\mathcal{R})$ //Select parents by binary tournament selection, [30].
8: ${\Lambda}_{s}\\{1\\}\longleftarrow\Lambda$ //Store the fitness history.
9: $[m,\;\sigma]\longleftarrow\,\textbf{UpdateMeanVar}(P_{1},\;{\Lambda}_{s})$ //Update the mean and variance terms using Algorithm 4.
10: $Q_{1}\longleftarrow\textbf{CrossoverMutation}(P_{1},m,\sigma)$ //Apply crossover and mutation on $P_{1}$ using Algorithm 5.
11: $t\longleftarrow\;t+1$ //Update the generation count
12: while termination condition is false do
13: $S_{t}\longleftarrow(P_{t}\cup Q_{t})$ //Combine the parent population ($P_{t}$) & the child population ($Q_{t}$).
14: $\Lambda,\,\Psi_{t+1}^{\dagger}\longleftarrow\textbf{FitnessEval}(S_{t},\mathcal{D}^{tr},\mathcal{D}^{val},\Psi_{t}^{\dagger})$ //Evaluate fitness of all individuals in $S_{t}$ using Algorithm 3.
15: $\mathcal{R}\longleftarrow NonDominatedSorting(\Lambda)$ //Assign rank by non-dominated sorting of fitness $\Lambda$, [30].
16: $\mathcal{K}\longleftarrow CrowdingDistances(S_{t},\mathcal{R},\Lambda)$ //Find crowding distances of individuals in population set $S_{t}$, [30].
17: $P_{t+1}\longleftarrow SelectParentsByRankDist(S_{t},\mathcal{K},\Lambda)$ //Select parents by crowding distance and rank, [30].
18: $\Lambda_{s}\\{t+1\\}\longleftarrow\Lambda$ //Store the fitness history.
19: $[m,\;\sigma]\longleftarrow\,\textbf{UpdateMeanVar}(S_{t},\;\Lambda_{s})$ //Update the mean and variance terms using Algorithm 4.
20: $Q_{t+1}\longleftarrow\textbf{CrossoverMutation}(P_{t+1},m,\sigma)$ //Apply crossover and mutation on $P_{t+1}$ using Algorithm 5.
21: if Termination condition is true then
22: Exit
23: else
24: $t\longleftarrow t+1$ //Update the generation counter
25: end if
26: end while
27: Return: $\textrm{{Best Model}}:\;\Psi^{\dagger}\longleftarrow\Psi_{t+1}^{\dagger}$ //Best model of the last generation.

### 4.1 Population Sampling using Mean and Variance

AO includes depth and width variation in a defined search space. A real-coded gene encoding strategy is adopted to encode the depth as the number of genes (length of a chromosome) and the number of nodes in a hidden layer as the value of a gene, as shown in Fig. 3. Let $n_{R}$ & $h_{R}$ be the maximum depth and width of the DNN; then the search space is defined as $[1\;n_{R}]$ & $[1\;h_{R}]$ for the depth and width variations. The mean and variance $m=[m_{1},m_{2}]$ & $\sigma=[\sigma_{1},\sigma_{2}]$ are initialized as $m_{1}=(1+n_{R})/2,\;m_{2}=(1+h_{R})/2$ and $\sigma_{1}=(n_{R}-1)/2,\;\sigma_{2}=(h_{R}-1)/2$.
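As an illustration of this sampling step, the following is a minimal NumPy sketch (not the authors' code; the pseudocode is given in Algorithm 2 below). It draws variable-length chromosomes from Gaussian distributions parameterized by $m$ and $\sigma$; treating the "variance" terms directly as the Gaussian scale, and clipping the rounded values to the allowed ranges, are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_pop(m, sigma, N, n_R, h_R):
    """Sample N variable-length chromosomes: depth ~ N(m[0], sigma[0]),
    each hidden-layer width ~ N(m[1], sigma[1]), rounded to integers."""
    population = []
    depths = np.rint(rng.normal(m[0], sigma[0], N)).astype(int)
    depths = np.clip(depths, 1, n_R)                 # keep depth inside [1, n_R]
    for h in depths:
        widths = np.rint(rng.normal(m[1], sigma[1], h)).astype(int)
        population.append(np.clip(widths, 1, h_R))   # keep widths inside [1, h_R]
    return population

n_R, h_R = 10, 400
m = [(1 + n_R) / 2, (1 + h_R) / 2]                   # initial means, as in Sec. 4.1
sigma = [(n_R - 1) / 2, (h_R - 1) / 2]               # initial spreads, as in Sec. 4.1
P0 = guided_pop(m, sigma, N=5, n_R=n_R, h_R=h_R)
for chrom in P0:
    print("hidden layer sizes:", chrom)
```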
Algorithm 2 GuidedPop: Population Sampling

Input: $N$ = Population size, $m=[m_{1},m_{2}]=$ mean & $\sigma=[\sigma_{1},\sigma_{2}]=$ variance.

Output: $P$ = Population with $N$ chromosomes.

1: $H\longleftarrow$ generate $N$ Gaussian numbers with $m_{1}$ and $\sigma_{1}$.
2: for $p$ = 1 : $N$ do
3: $h\longleftarrow H(p)$ : depth of the $p^{th}$ chromosome
4: $tmp\longleftarrow$ generate $h$ Gaussian numbers with $m_{2}$ and $\sigma_{2}$.
5: $P\\{p\\}\longleftarrow$ convert all numbers in $tmp$ to the nearest integers.
6: end for
7: Return $P$

### 4.2 Fitness Evaluation

Fast model evaluation is the most important requirement for NAS, especially when an evolutionary algorithm is used as the AO strategy. If the best model at a generation is transferred to initialize the DNN weight matrices in the next generation, the training and evaluation of the models become faster. The quick learning mechanism suggested in [16] is adopted for the fitness evaluation, as shown in Fig. 4. For the first generation, the DNN models are initialized randomly and trained using the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) [51] algorithm. From the next generation onward, the best model obtained is transformed (Fig. 4) to initialize the models, followed by fine-tuning with the LBFGS algorithm for a few iterations only. If a model $\Psi^{t}$ at the $t^{th}$ generation has weight matrix $W^{t}$, the classification loss $\mathcal{J}$ for a $C$-class problem can be defined in terms of $[w,\,b]\in W^{t}$ as

$\displaystyle\mathcal{J}(W^{t})=-\frac{1}{n^{s}}\left[\sum_{k=1}^{n^{s}}\sum_{i=1}^{C}I[y_{k}=c_{i}]\log{\frac{e^{(w_{i}^{T}f(x_{k})+b_{i})}}{\sum_{j=1}^{C}e^{(w_{j}^{T}f(x_{k})+b_{j})}}}\right]$ (7)

where $f(x)=\Phi(wx+b)$ is the $h$-level feature representation of the DNN, $y_{k}$ is the output label of the $k^{th}$ data sample, and $c_{i}$ denotes the $i^{th}$ class. The classification accuracy ($CA$) of the fine-tuned model on the validation data is returned as the fitness of that model.

Figure 4: Fitness evaluation strategy.

Algorithm 3 FitnessEval: Fitness Evaluation

Input: $P$ = Population with population size $N_{p}$, $(\mathcal{D}^{tr},\mathcal{D}^{val})$ = Training & validation data.

Output: $\Lambda$ = Fitness matrix and $\Psi^{\dagger}$ = Best model.

1: $t\longleftarrow$ current generation
2: for $p\,=\,1\,:\,N_{p}$ do
3: $\Psi_{t}\longleftarrow\textrm{N2N}(\Psi_{t}^{\dagger})$ // Transform $\Psi_{t}^{\dagger}$ into the $p^{th}$ model using the N2N transformation as depicted in step-1 of Fig. 4.
4: Fine-tune the model ($\Psi_{t}$) on $\mathcal{D}^{tr}$ to minimize Eq. (7).
5: $\Lambda(p)\longleftarrow$ Find the $CA$ of $\Psi_{t}$ on the dataset $\mathcal{D}^{val}$.
6: end for
7: $\Psi_{t}^{\dagger}\longleftarrow\textrm{Best model}$ // Find the model with maximum $CA$ and minimum number of model parameters.
8: Return $\Lambda,\;\Psi_{t}^{\dagger}$

### 4.3 Update Mean and Variance using PG

The mean ($m$) and variance ($\sigma$) terms are used for the sampling of a new population as illustrated in Section 4.1. Here, we design a PG-based update law for $m$ & $\sigma$ such that the best fitness ($\max(\Lambda)$) is maximized. At any generation $t$, $\max(\Lambda)$ is treated as the reward, $a_{t}=[m,\sigma]$ is treated as the action, and the weighted average of the fitness ($\Lambda$) is treated as the state of the policy. Thus, the policy generates $[m^{T},\sigma^{T}]$ to guide the evolution for faster and better convergence. For simplicity of design, let us assume that the action is generated by a deterministic policy as

$a_{t}=f(\theta_{t})=\frac{1}{1+e^{-\theta_{t}}}$ (8)

where the policy parameter $\theta_{t}$ is selected such that it controls $m\in\Re^{2}$ (mean of the depth and width of the DNN) and $\sigma\in\Re^{2}$ (variance of the depth and width of the DNN).
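A minimal sketch of this deterministic policy is given below (not the authors' code). Eq. (8) only squashes $\theta_t$ into $(0,1)$, so the rescaling of the sigmoid output to the actual depth/width ranges $n_R$ and $h_R$ shown here is our assumption, added so that the example produces usable means and spreads.

```python
import numpy as np

def policy_action(theta, n_R=10, h_R=400):
    """Deterministic policy of Eq. (8): squash theta through a sigmoid, then
    rescale to [m_depth, m_width] and [sigma_depth, sigma_width] (rescaling assumed)."""
    a = 1.0 / (1.0 + np.exp(-np.asarray(theta, dtype=float)))   # a_t in (0, 1)^4
    m     = [1 + a[0] * (n_R - 1), 1 + a[1] * (h_R - 1)]        # means for depth / width
    sigma = [a[2] * (n_R - 1) / 2, a[3] * (h_R - 1) / 2]        # spreads for depth / width
    return m, sigma

theta = np.zeros(4)                      # theta = 0 -> a = 0.5 -> mid-range action
m, sigma = policy_action(theta)
print("m =", m, "sigma =", sigma)
```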
The parameter $\theta$ is updated by the policy gradient in (3), calculated using the gradient of the expected total reward $U_{t}$ derived in (6). The total cumulative reward is calculated from the fitness matrix $\Lambda_{t}$ at generation $t$ as in (9).

$U_{t}\;=\;\sum_{i=1}^{t}\frac{\max(\Lambda_{i})-\max(\Lambda_{i-1})}{\max(\Lambda_{i-1})}$ (9)

The algorithmic steps for the implementation of the policy gradient-based update of $a_{t}=[m,\sigma]$ at generation $t$ are summarised in Algorithm 4.

Algorithm 4 UpdateMeanVar: Update $m$ & $\sigma$ using PG

Input: $P_{t}$ = Current population, $\Lambda$ = Fitness matrix, $a_{t-1}=[m,\sigma]$ = Initial mean & variance terms.

Output: $a_{t}=[m,\sigma]$ = Updated mean & variance.

1: $N=$ Number of models in $P_{t}$, $\alpha=$ Learning rate
2: $\bar{n}\longleftarrow$ Compute the average depth of the models in $P_{t}$
3: $\Omega=[\omega_{p}]_{p=1}^{N}$ //Generate a set of weights $\omega_{p}$ such that $\sum_{p=1}^{N}\omega_{p}=0$ and $\omega_{1}>\omega_{2}>\,....\,>\omega_{N}$
4: $\Lambda_{t}^{sorted},idx$ = sort($\Lambda_{t}$, ‘descending’)
5: $P_{t}\longleftarrow$ Sort $P_{t}$ according to $idx$.
6: $n_{p}=$ number of hidden layers (depth) in the $p^{th}$ model.
7: $\delta_{p}=\max(H_{p})-\min(H_{p})$ //$H_{p}=$ set of nodes in hidden layers of the $p^{th}$ model
8: if $t\leq 1$ then
9: $m=\sum_{p=1}^{N}\left[n_{p}\omega_{p}\;\;\;\frac{1}{n_{p}}\sum_{k=1}^{n_{p}}h_{kp}\omega_{p}\right]$ //$h_{kp}\in H_{p}$.
10: $\sigma=\sum_{p=1}^{N}\left[(\bar{n}-n_{p})\omega_{p}\;\;\;\delta_{p}\omega_{p}/2\right]$
11: else
12: $\theta_{t-1}=\ln{\left[a_{t-1}/(1-a_{t-1})\right]}$
13: $U_{t}\longleftarrow$ Compute the cumulative reward using (9).
14: $s_{m}=\sum_{p=1}^{N}\left[n_{p}\omega_{p}\;\;\;\frac{1}{n_{p}}\sum_{k=1}^{n_{p}}h_{kp}\omega_{p}\right]$
15: $s_{\sigma}=\sum_{p=1}^{N}\left[(\bar{n}-n_{p})\omega_{p}\;\;\;\delta_{p}\omega_{p}/2\right]$
16: $s_{t}=[s_{m}^{T}\;s_{\sigma}^{T}]$
17: $\theta_{t}\longleftarrow\theta_{t-1}+\alpha\cdot\frac{1}{N}\sum_{k=1}^{N}U_{t}E[\nabla\log{\pi_{\theta}(s_{t}|a_{t-1};\theta)}]$
18: $a_{t}={1}/{\left(1+e^{-\theta_{t}}\right)}$
19: end if
20: Return $a_{t}$

### 4.4 Crossover and Mutation

For the optimal search of the network architecture, a combination of exploration and exploitation strategies is adopted. The guided sampling-based generation of the new population exploits the search space to force the evolution towards maximum accuracy. To avoid local convergence, $N$ individuals are sampled using $m$ and $\sigma$ based on a Gaussian distribution, and another $N$ parents are selected using crowding distance and rank from the current generation. After that, the two populations are merged to create a double-sized mating pool. Now, the two-step crossover operator introduced in [26] is applied. The two steps are (i) single point depth crossover (SPDC) for depth variation and (ii) common depth simulated binary crossover (CDSBC) for gene value (width) alteration. The two-step crossover method is depicted in Fig. 5. The whole process of offspring generation is provided in Algorithm 5.

Figure 5: Two-step crossover for chromosomes of different lengths.
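The following is a minimal sketch of this two-step crossover (not the authors' code): step 1 swaps the tails of two variable-length chromosomes at a random point below the shorter depth (SPDC), and step 2 applies simulated binary crossover to the genes of the common-depth portion. The SBX distribution index and the clipping of offspring widths to $[1, h_R]$ are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbx(g1, g2, eta=15):
    """Simulated binary crossover on one pair of gene values (assumed variant of CDSBC)."""
    u = rng.random()
    beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    c1 = 0.5 * ((1 + beta) * g1 + (1 - beta) * g2)
    c2 = 0.5 * ((1 - beta) * g1 + (1 + beta) * g2)
    return c1, c2

def two_step_crossover(p1, p2, h_R=400):
    """Step 1 (SPDC): swap tails at a point below the shorter depth (both parents
    assumed to have depth >= 2).  Step 2: SBX on the common-depth genes."""
    h = rng.integers(1, min(len(p1), len(p2)))            # crossover point h < min(n1, n2)
    c1 = np.concatenate([p1[:h], p2[h:]]).astype(float)   # SPDC children
    c2 = np.concatenate([p2[:h], p1[h:]]).astype(float)
    for k in range(min(len(c1), len(c2))):                # common-depth portion
        c1[k], c2[k] = sbx(c1[k], c2[k])
    c1 = np.clip(np.rint(c1), 1, h_R).astype(int)
    c2 = np.clip(np.rint(c2), 1, h_R).astype(int)
    return c1, c2

p1 = np.array([120, 80, 40])             # parent with 3 hidden layers
p2 = np.array([300, 250, 200, 150, 90])  # parent with 5 hidden layers
c1, c2 = two_step_crossover(p1, p2)
print(c1, c2)
```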
Algorithm 5 Offspring Generation: Crossover and Mutation

Input: $P$ = Parent population, $p_{c}$ = Crossover probability, $(m,\sigma)$ = mean & variance terms for sampling.

Output: $Q$ = Offspring population.

1: $Q\longleftarrow$ GuidedPop($m,\sigma,N,n_{R},h_{R}$) //Generate $N$ offspring using Algorithm 2.
2: $I_{c}=$ generate random indices of $p_{c}\times 100$% members of $P$.
3: for $i\in I_{c}$ do
4: Select $P_{1}=P\\{i\\}\;\&\;P_{2}=Q\\{i\\}$.
5: Find the lengths ($n_{1},\;n_{2}$) of $P_{1},\;P_{2}$.
6: Set a point $h<\min(n_{1},n_{2})$ on $P_{1},\;P_{2}$.
7: $C_{1},\,C_{2}\longleftarrow$ SPDC of $P_{1},\;P_{2}$ at point $h$ (as depicted in step-1 of Fig. 5).
8: $\bar{C}_{1},\bar{C}_{2}\longleftarrow$ CDSBC for genes of the common-depth portion of $C_{1},\,C_{2}$, as depicted in step-2 of Fig. 5.
9: Replace $Q\\{i\\}$ by $\bar{C}_{1}$ or $\bar{C}_{2}$.
10: end for
11: Return $Q$

## 5 Experimental Results and Discussion

The efficacy of the proposed GS-EvoDNN framework is demonstrated on fault diagnosis datasets under different operating conditions taken from (i) the Air Compressor fault data [52], (ii) the Paderborn University (PBU) bearing fault data [53], and (iii) the CWRU bearing fault data [54].

### 5.1 Experimental Setup

#### 5.1.1 Air Compressor Fault Data [52]

The air compressor data contain acoustic signals recorded on a single-stage reciprocating-type air compressor driven by a 5 hp induction motor installed at the workshop, EE Department, IIT Kanpur. Data were recorded in eight different cases: healthy and seven different faulty states of the air compressor valve. Therefore, the dataset has 8 classes: (i) Healthy (H), (ii) Leakage Inlet Valve (LIV), (iii) Leakage Outlet Valve (LOV), (iv) Non-Return Valve (NRV), (v) Piston Ring (PR), (vi) Flywheel (F), (vii) Rider Belt (RB), and (viii) Bearing (B). For each class, 225 measurements were taken with 50k samples in each measurement.

#### 5.1.2 PBU Bearing Fault Data [53]

The PBU bearing fault data is a collection of time-series signals recorded on an electrical machine operating under wide variations of shaft load and rotational speed. The four load and speed settings (LS) are LS1: N09_M07_F10 (speed = 900 rpm, torque = 0.7 Nm & radial force = 1000 N), LS2: N15_M01_F10 (speed = 1500 rpm, torque = 0.1 Nm & radial force = 1000 N), LS3: N15_M07_F04 (speed = 1500 rpm, torque = 0.7 Nm & radial force = 400 N), and LS4: N15_M07_F10 (speed = 1500 rpm, torque = 0.7 Nm & radial force = 1000 N). A total of 32 experiments with 6 healthy, 12 artificially damaged, and 14 bearings damaged by long-run accelerated tests were conducted to record current, vibration signals, radial forces, torque, and bearing temperature. The recorded signals contain two types of faults: inner race (IR) faults and outer race (OR) faults.

#### 5.1.3 CWRU Bearing Fault Data [54]

The CWRU bearing fault data provided by Case Western Reserve University (CWRU) has been a widely used benchmark dataset for bearing fault diagnosis. It contains vibration signals recorded at the drive-end (DE) and fan-end (FE) of bearings artificially seeded with inner race (IR), outer race (OR), and rolling element ball (B) faults of variable fault diameters (F.D.) (from 0.007 to 0.028 inches). The bearing test rig setup details can be found in [54].

### 5.2 Segmentation and Data Processing

The recorded time-series signals contain a huge number of samples, which is not suitable for training the DNN directly. To make the dimension of the time-series signals compatible with the DNN, we adopt a segmentation rule with a segment length of approximately one quarter of the data points recorded per revolution. Here, we have selected segmentation lengths of 100, 200, & 400 for the CWRU dataset, the Air Compressor dataset, and the PBU dataset, respectively. Also, the time-series signals are usually unstructured and not to scale. Therefore, we have applied the min-max normalization technique to scale the dataset down to $[0,\,1]$. The min-max normalization also removes the effect of outlier points. If, in some cases, the outlier points carry some important information, then the z-score normalization technique may be used to make the dataset well-structured [16].
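As a concrete illustration of this preprocessing, here is a minimal NumPy sketch (not the authors' code) that splits one recorded measurement into fixed-length segments and min-max scales them to [0, 1]; scaling each segment independently is our assumption, and the random signal merely stands in for a real acoustic or vibration recording.

```python
import numpy as np

def segment_and_scale(signal, seg_len):
    """Split a 1-D time series into non-overlapping segments of length seg_len
    and min-max scale each segment to [0, 1] (per-segment scaling assumed)."""
    n_seg = len(signal) // seg_len
    segs = np.asarray(signal[: n_seg * seg_len], dtype=float).reshape(n_seg, seg_len)
    lo = segs.min(axis=1, keepdims=True)
    hi = segs.max(axis=1, keepdims=True)
    return (segs - lo) / np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero

rng = np.random.default_rng(0)
raw = rng.normal(size=50_000)              # stand-in for one recorded measurement (50k samples)
X = segment_and_scale(raw, seg_len=200)    # e.g. 200-point segments for the air compressor data
print(X.shape, X.min(), X.max())           # (250, 200) with values in [0, 1]
```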
### 5.3 Dataset Preparation

For the study of fault diagnosis with the proposed GS-EvoDNN, we prepare the training, validation, & testing datasets under the various operating conditions described below.

Case-1 (T1): From the Air Compressor Dataset, $7$ different cases of binary classes and one case of multi-class diagnosis are investigated, as listed in Table 2. For each class, 4 measurement files (having 50k samples/file) are merged to create a sample of size $1000\times 200$ per class, taking $200$ points as the segment length.

Case-2 (T2): From the CWRU FE Dataset, multi-class diagnosis with the class names healthy (H), inner race (IR), outer race (OR), and ball element (B) is considered under different load conditions ($1$, $2$, & $3$ hp) and different fault diameters (FD) ($7$, $14$, & $21$ mil). For each FD (for example, $7$ mil), datasets from all three load conditions are prepared. Thus, fault diagnosis on a total of $9$ cases is presented, as listed in Table 3.

Case-3 (T3 & T4): From the PBU Dataset, two different cases are considered: (i) T3: artificially damaged bearing faults and (ii) T4: bearing faults due to the long accelerated test. In both cases, multi-class diagnosis with three classes, namely H-OR-IR, is studied under four load settings (LS), as listed in Table 4.

Now, for training, validation, & testing, each of the above datasets is split into three portions: $64\%$ training ($\mathcal{D}^{tr}$), $20\%$ test ($\mathcal{D}^{te}$), and $16\%$ validation ($\mathcal{D}^{val}$) datasets.

Table 2: Air Compressor Dataset (T1): Diagnostic Performance in terms of Classification Accuracy

Class | SVM [35] | DNN [7] | DTL [13] | DAFD [12] | N2N [16] (W. D. A.) | N2N [16] (D. A.) | EvoDCNN [25] | EvoN2N [26] | GS-EvoDNN
---|---|---|---|---|---|---|---|---|---
H-LIV | 99.75 | 96.25 | 99.00 | 99.75 | 99.50 | 99.75 | 100.00 | 100.00 | 100.00
H-LOV | 98.25 | 95.75 | 99.25 | 99.66 | 99.25 | 99.25 | 99.75 | 99.75 | 100.00
H-PR | 98.25 | 93.25 | 93.30 | 98.75 | 97.75 | 98.75 | 98.25 | 99.75 | 99.75
H-B | 98.25 | 98.50 | 98.75 | 98.75 | 96.75 | 98.75 | 98.75 | 99.75 | 100.00
H-F | 99.25 | 99.25 | 99.00 | 98.75 | 99.25 | 99.25 | 99.25 | 100.00 | 100.00
H-NRV | 98.75 | 99.00 | 99.00 | 99.75 | 99.00 | 99.25 | 99.25 | 100.00 | 100.00
H-RB | 98.25 | 98.25 | 98.25 | 99.00 | 99.75 | 99.75 | 99.25 | 100.00 | 100.00
H-ALL | 97.75 | 99.25 | 99.00 | 99.00 | 99.25 | 99.25 | 99.75 | 99.75 | 100.00
S. D. | 0.65 | 2.16 | 2.00 | 0.46 | 1.02 | 0.38 | 0.57 | 0.13 | 0.09

Table 3: CWRU FE Dataset (T2): Diagnostic Performance in terms of Classification Accuracy

Class | F.D. | Load | SVM [35] | DNN [7] | DTL [13] | DAFD [12] | N2N [16] (W. D. A.) | N2N [16] (D. A.) | EvoDCNN [25] | EvoN2N [26] | GS-EvoDNN
---|---|---|---|---|---|---|---|---|---|---|---
H-IR-OR-B | DE 7 mil | 1hp | 88.12 | 96.69 | 96.56 | 97.94 | 98.94 | 98.94 | 99.60 | 100.00 | 100.00
 | | 2hp | 98.12 | 95.94 | 93.44 | 96.12 | 97.12 | 98.12 | 99.60 | 100.00 | 100.00
 | | 3hp | 99.10 | 98.75 | 98.75 | 98.44 | 99.44 | 99.44 | 99.70 | 100.00 | 100.00
 | DE 14 mil | 1hp | 99.10 | 94.75 | 96.88 | 97.19 | 99.19 | 99.67 | 100.00 | 100.00 | 100.00
 | | 2hp | 98.10 | 95.31 | 92.19 | 95.69 | 97.69 | 98.69 | 98.12 | 99.12 | 99.60
 | | 3hp | 99.25 | 96.88 | 94.69 | 97.62 | 99.33 | 98.62 | 98.44 | 98.84 | 100.00
 | DE 21 mil | 1hp | 96.88 | 86.56 | 84.69 | 89.62 | 95.62 | 96.62 | 93.75 | 98.75 | 100.00
 | | 2hp | 88.44 | 85.31 | 82.19 | 86.69 | 90.69 | 90.69 | 90.10 | 95.37 | 98.85
 | | 3hp | 92.19 | 86.56 | 79.38 | 88.06 | 91.06 | 92.06 | 92.81 | 95.81 | 98.81
S. D. | | | 4.62 | 5.25 | 7.06 | 4.66 | 3.46 | 3.32 | 3.69 | 1.81 | 0.51

Table 4: PBU Dataset (T3 & T4): Diagnostic Performance in terms of Classification Accuracy

Class | Data-L.S. | SVM [35] | DNN [7] | DTL [13] | DAFD [12] | N2N [16] (W. D. A.) | N2N [16] (D. A.) | EvoDCNN [25] | EvoN2N [26] | GS-EvoDNN
---|---|---|---|---|---|---|---|---|---|---
H-OR-IR | T3-L1 | 94.25 | 96.92 | 96.92 | 96.92 | 98.64 | 98.94 | 99.25 | 99.75 | 100.00
 | T3-L2 | 90.00 | 93.58 | 95.00 | 94.58 | 95.12 | 96.12 | 99.58 | 99.83 | 99.75
 | T3-L3 | 87.17 | 91.92 | 93.33 | 92.08 | 94.44 | 94.44 | 97.50 | 97.70 | 100.00
 | T3-L4 | 87.17 | 93.15 | 93.75 | 94.17 | 97.19 | 95.28 | 100.00 | 100.00 | 100.00
 | T4-L1 | 95.00 | 97.50 | 97.50 | 98.33 | 98.33 | 99.17 | 99.17 | 100.00 | 100.00
 | T4-L2 | 92.83 | 95.92 | 96.50 | 96.33 | 96.33 | 96.33 | 98.60 | 99.15 | 100.00
 | T4-L3 | 94.83 | 94.67 | 93.33 | 94.17 | 95.72 | 96.33 | 98.60 | 99.15 | 100.00
 | T4-L4 | 94.83 | 95.33 | 95.00 | 95.83 | 95.69 | 95.69 | 93.33 | 98.75 | 99.75
S. D. | | 3.41 | 1.92 | 1.65 | 1.95 | 1.50 | 1.68 | 2.13 | 0.79 | 0.10

### 5.4 Implementation Details

For the implementation of the proposed GS-EvoDNN framework, the initial parameters are selected as: population size ($N$) = 100 and crossover probability ($P_{c}$) = 0.5, and the maximum number of generations is set to a sufficiently high value, usually 50. Also, the termination criterion is that either the validation accuracy reaches 100% or it does not change for 3 consecutive generations. The allowable ranges for the variation of depth and width are selected as $n_{R}\in[1,\;10]$ and $h_{R}\in[10,\;400]$, respectively. The learning rate is $\alpha=0.1$. The GS-EvoDNN framework is applied to the training dataset, and the best model obtained is tested on the test dataset under all cases (T1, T2, T3, & T4) described in Section 5.3. The classification accuracies ($CA$) are tabulated in Tables 2, 3, & 4. For comparison and analysis, the following state-of-the-art methods for intelligent fault diagnosis reported in the literature are selected: support vector machine (SVM) [35], deep neural network (DNN) [7], deep transfer learning (DTL) based on sparse autoencoder [13], Deep neural network for domain Adaptation in Fault Diagnosis (DAFD) [12], Net2Net without domain adaptation (N2N_WDA) [16], Net2Net with domain adaptation (N2N_DA) [16], evolutionary deep CNN (EvoDCNN) [25], and evolutionary Net2Net (EvoN2N) [26]. The DNN, DTL, and DAFD are trained with hidden sizes of $(70-50-20)$. The initial and hyper-parameters for EvoDCNN and EvoN2N are kept the same as mentioned above. The same datasets (T1, T2, T3, & T4) are used to train and test all these methods using the procedure suggested in the corresponding cited references. The diagnostic performance in terms of $CA$ is tabulated in Tables 2, 3, & 4.
The standard deviation (S.D.) of $CA$, calculated over the variation in the operating conditions, is also tabulated to compare how the results deviate as the operating conditions change.

Figure 6: TI in terms of $\overline{CA}$ for (i) T1: Air Compressor dataset, (ii) T2: CWRU dataset, (iii) T3: PBU dataset with single point fault, and (iv) T4: PBU dataset with distributed fault.

Figure 7: Confusion matrix for dataset T4-L1 (Table 4): class labels {‘1’, ‘2’, ‘3’} represent the class names {‘H’, ‘OR’, ‘IR’}.

### 5.5 Discussion

The diagnostic performances of the proposed method and the selected state-of-the-art methods lead to the following important conclusions:

1. The $CA$ comparison in Tables 2, 3, & 4 reveals that the diagnostic performance is strongly affected by model selection. The DNN model with the best-suited architecture for the given dataset gives $CA$ of up to almost 100%, whereas other methods with pre-selected architectures fail to perform as well.
2. Considering SVM [35] as the baseline diagnostic method, we evaluate the transfer improvement ($TI$) in terms of the average $CA$ for the datasets T1, T2, T3, & T4 separately. If the average $CA$ is denoted as $\overline{CA}$, the $TI$ is defined as $TI=\overline{CA}-\overline{CA}_{b}$, where $\overline{CA}_{b}$ is the average $CA$ by SVM. The $TI$ graph in Fig. 6 shows the performance improvement of the proposed framework in comparison with the state-of-the-art methods and the baseline method ‘SVM’.
3. The performance comparison between EvoN2N [26] and the proposed method shows that the architecture optimization benefits significantly from guided sampling with the policy gradient framework.
4. Fig. 7 demonstrates the classification performance through the confusion matrix for one of the datasets (T4-L1: Table 4). Classifications with 100% accuracy are highlighted by blackened diagonal elements, whereas for the other cases the diagonal elements are highlighted by a gray shade.
5. The comparison between EvoDCNN [25], EvoN2N [26], & the proposed GS-EvoDNN reveals that the DNN model with the best architecture performs better than the CNN model for fault diagnosis applications with the segmented dataset (described in Section 5.2).

### 5.6 Complexity Analysis

The worst-case complexity of one iteration of the entire Algorithm 1 is contributed by (i) the fitness evaluation of the DNN models and (ii) the non-dominated sorting. The complexity of the fitness evaluation of a DNN is mainly contributed by the parameter optimization by L-BFGS, which is $O(N_{I}\,n^{2})$, where $n$ and $N_{I}$ are the total number of parameters and the number of iterations required to fine-tune the DNN model, respectively. The non-dominated sorting algorithm has a complexity of $O(MN_{p}^{2})$, where $M$ and $N_{p}$ are the number of objectives and the population size, respectively. Since $M$ is very small compared to $N_{I}$, the time complexity of GS-EvoDNN with population size $N_{p}$ is given by $O(N_{I}N_{p}n^{2})$, where $n$ = total number of parameters in one model and $N_{I}$ = number of iterations taken for model training.

## 6 Conclusions

In this article, we have formulated a guided sampling-based evolutionary algorithm for DNN architecture search. The proposed framework uses the concept of policy gradient to sample the new population, forcing the evolution towards the maximization of classification performance. The best model obtained at any generation is transferred to the next generation to initialize the model evaluation, which makes the entire algorithm faster.
Therefore, this method offers faster evolution and faster convergence, while ensuring global convergence through the update policy of the mean and variance. The validation using datasets under various cases from the Air Compressor data, CWRU data, and PBU data proves that the proposed framework is capable of obtaining the best model, achieving diagnostic performance of almost up to 100% accuracy. This method can also be employed for the architecture optimization of CNN models in image classification applications.

## References

* [1] S. Nandi, H. A. Toliyat, and X. Li, “Condition monitoring and fault diagnosis of electrical motors—a review,” _IEEE Transactions on Energy Conversion_ , vol. 20, no. 4, pp. 719–729, Dec 2005.
* [2] A. Siddique, G. S. Yadava, and B. Singh, “A review of stator fault monitoring techniques of induction motors,” _IEEE Transactions on Energy Conversion_ , vol. 20, no. 1, pp. 106–114, March 2005.
* [3] S. Yin, S. X. Ding, X. Xie, and H. Luo, “A review on basic data-driven approaches for industrial process monitoring,” _IEEE Transactions on Industrial Electronics_ , vol. 61, no. 11, pp. 6418–6428, 2014.
* [4] X. Chen, S. Wang, B. Qiao, and Q. Chen, “Basic research on machinery fault diagnostics: Past, present, and future trends,” _Front. Mech. Eng._ , vol. 13, pp. 264–291, 2018.
* [5] Z. Chen, F. Han, L. Wu, J. Yu, S. Cheng, P. Lin, and H. Chen, “Random forest based intelligent fault diagnosis for pv arrays using array voltage and string currents,” _Energy Conversion and Management_ , vol. 178, pp. 250–264, 2018.
* [6] A. K. Sharma, V. Singh, N. K. Verma, and J. Liu, “Condition based monitoring of machine using mamdani fuzzy network,” in _2018 Prognostics and System Health Management Conference (PHM-Chongqing)_ , Oct 2018, pp. 1159–1163.
* [7] Y. Qi, C. Shen, D. Wang, J. Shi, X. Jiang, and Z. Zhu, “Stacked sparse autoencoder-based deep network for fault diagnosis of rotating machinery,” _IEEE Access_ , vol. 5, pp. 15 066–15 079, 2017.
* [8] R. Zhao, R. Yan, Z. Chen, K. Mao, P. Wang, and R. X. Gao, “Deep learning and its applications to machine health monitoring,” _Mechanical Systems and Signal Processing_ , vol. 115, pp. 213–237, 2019.
* [9] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation via transfer component analysis,” _IEEE Transactions on Neural Networks_ , vol. 22, no. 2, pp. 199–210, Feb 2011.
* [10] M. Long, J. Wang, G. Ding, S. J. Pan, and P. S. Yu, “Adaptation regularization: A general framework for transfer learning,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 26, no. 5, pp. 1076–1089, 2014.
* [11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. March, and V. Lempitsky, “Domain-adversarial training of neural networks,” _Journal of Machine Learning Research_ , vol. 17, no. 59, pp. 1–35, 2016.
* [12] W. Lu, B. Liang, Y. Cheng, D. Meng, J. Yang, and T. Zhang, “Deep model based domain adaptation for fault diagnosis,” _IEEE Transactions on Industrial Electronics_ , vol. 64, no. 3, pp. 2296–2305, March 2017.
* [13] L. Wen, L. Gao, and X. Li, “A new deep transfer learning based on sparse auto-encoder for fault diagnosis,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 49, no. 1, pp. 136–144, Jan 2019.
* [14] L. Guo, Y. Lei, S. Xing, T. Yan, and N. Li, “Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data,” _IEEE Transactions on Industrial Electronics_ , vol. 66, no. 9, pp. 7316–7325, Sep. 2019.
* [15] D. Wei, T.
Han, F. Chu, and M. J. Zuo, “Weighted domain adaptation networks for machinery fault diagnosis,” _Mechanical Systems and Signal Processing_ , vol. 158, p. 107744, 2021. * [16] A. K. Sharma and N. K. Verma, “Quick learning mechanism with cross-domain adaptation for intelligent fault diagnosis,” 2021. * [17] G. Wang, J. Qiao, J. Bi, W. Li, and M. Zhou, “Tl-gdbn: Growing deep belief network with transfer learning,” _IEEE Transactions on Automation Science and Engineering_ , vol. 16, no. 2, pp. 874–885, April 2019\. * [18] X. He, K. Zhao, and X. Chu, “Automl: A survey of the state-of-the-art,” _Knowledge-Based Systems_ , vol. 212, p. 106622, 2021. * [19] J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” _J. Mach. Learn. Res._ , vol. 13, pp. 281–305, Feb. 2012. * [20] F. Hutter, H. H. Hoos, and K. Leyton-Brown, “Sequential model-based optimization for general algorithm configuration,” in _Learning and Intelligent Optimization_ , C. A. C. Coello, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 507–523. * [21] B. Baker, O. Gupta, N. Naik, and R. Raskar, “Designing neural network architectures using reinforcement learning,” _CoRR_ , vol. abs/1611.02167, 2016. * [22] S. M. R. Loghmanian, H. Jamaluddin, R. Ahmad, R. Yusof, and M. Khalid, “Structure optimization of neural network for dynamic system modeling using multi-objective genetic algorithm,” _Neural Computing and Applications_ , vol. 21, no. 6, pp. 1281–1295, 2012. * [23] C. Wang, C. Xu, X. Yao, and D. Tao, “Evolutionary generative adversarial networks,” _IEEE Transactions on Evolutionary Computation_ , vol. 23, no. 6, pp. 921–934, 2019. * [24] Y. Sun, B. Xue, M. Zhang, and G. G. Yen, “A particle swarm optimization-based flexible convolutional autoencoder for image classification,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 30, no. 8, pp. 2295–2309, Aug 2019. * [25] Y. Sun, B. Xue, M. Zhang, and G. G. Yen, “Evolving deep convolutional neural networks for image classification,” _IEEE Transactions on Evolutionary Computation_ , vol. 24, no. 2, pp. 394–407, 2020. * [26] A. K. Sharma and N. K. Verma, “Transfer learning based evolutionary deep neural network for intelligent fault diagnosis,” 2021. * [27] H. Liu, K. Simonyan, and Y. Yang, “Darts: Differentiable architecture search,” in _International Conference on Learning Representations_ , 2019\. * [28] Z. Yang, Y. Wang, X. Chen, B. Shi, C. Xu, C. Xu, Q. Tian, and C. Xu, “Cars: Continuous evolution for efficient neural architecture search,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 1829–1838. * [29] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Regularized evolution for image classifier architecture search,” _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, no. 01, pp. 4780–4789, Jul. 2019. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/4405 * [30] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” _IEEE Transactions on Evolutionary Computation_ , vol. 6, no. 2, pp. 182–197, 2002. * [31] Z. Lu, I. Whalen, Y. Dhebar, K. Deb, E. D. Goodman, W. Banzhaf, and V. N. Boddeti, “Multiobjective evolutionary design of deep convolutional neural networks for image classification,” _IEEE Transactions on Evolutionary Computation_ , vol. 25, no. 2, pp. 277–291, 2021. * [32] G. E. Hinton and R. R. 
Salakhutdinov, “Reducing the dimensionality of data with neural networks,” _Science_ , vol. 313, no. 5786, pp. 504–507, 2006\. * [33] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy layer-wise training of deep networks,” in _Advances in Neural Information Processing Systems 19_ , B. Schölkopf, J. C. Platt, and T. Hoffman, Eds. MIT Press, 2007, pp. 153–160. * [34] H. Su and K. T. Chong, “Induction machine condition monitoring using neural network modeling,” _IEEE Transactions on Industrial Electronics_ , vol. 54, no. 1, pp. 241–249, 2007. * [35] A. Widodo and B.-S. Yang, “Support vector machine in machine condition monitoring and fault diagnosis,” _Mechanical Systems and Signal Processing_ , vol. 21, no. 6, pp. 2560–2574, 2007. * [36] X. Yan and M. Jia, “A novel optimized svm classification algorithm with multi-domain feature and its application to fault diagnosis of rolling bearing,” _Neurocomputing_ , vol. 313, pp. 47–64, 2018. * [37] S. D. Juan Jose, et al., “Multifault diagnosis method applied to an electric machine based on high-dimensional feature reduction,” _IEEE Transactions on Industry Applications_ , vol. 53, no. 3, pp. 3086–3097, 2016. * [38] X. Li, W. Zhang, and Q. Ding, “Cross-domain fault diagnosis of rolling element bearings using deep generative neural networks,” _IEEE Transactions on Industrial Electronics_ , vol. 66, no. 7, pp. 5525–5534, July 2019\. * [39] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in _Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit._ , 2018, pp. 8697–8710. * [40] A. Hundt, V. Jain, and G. D. Hager, “sharpdarts: Faster and more accurate differentiable architecture search,” _CoRR_ , vol. abs/1903.09900, 2019. [Online]. Available: http://arxiv.org/abs/1903.09900 * [41] K. Kandasamy, W. Neiswanger, J. Schneider, B. Póczos, and E. P. Xing, “Neural architecture search with bayesian optimisation and optimal transport,” in _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , ser. NIPS’18, 2018, p. 2020–2029. * [42] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl, “Algorithms for hyper-parameter optimization,” in _Advances in Neural Information Processing Systems 24_ , J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2011, pp. 2546–2554. * [43] H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean, “Efficient neural architecture search via parameters sharing,” in _International Conference on Machine Learning_. PMLR, 2018, pp. 4095–4104. * [44] R. Luo, F. Tian, T. Qin, E. Chen, and T.-Y. Liu, “Neural architecture optimization,” in _Advances in Neural Information Processing Systems_ , S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018. * [45] B. Zoph and Q. V. Le, “Neural architecture search with reinforcement learning,” _arXiv preprint arXiv:1611.01578_ , 2016. * [46] J. Sun, X. Liu, T. Bäck, and Z. Xu, “Learning adaptive differential evolution algorithm from optimization experiences by policy gradient,” _IEEE Transactions on Evolutionary Computation_ , vol. 25, no. 4, pp. 666–680, 2021\. * [47] Y. Chen, G. Meng, Q. Zhang, S. Xiang, C. Huang, L. Mu, and X. Wang, “Renas: Reinforced evolutionary neural architecture search,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 4787–4796. * [48] Y. Sun, H. Wang, B. Xue, Y. Jin, G. G. Yen, and M. 
Zhang, “Surrogate-assisted evolutionary deep learning using an end-to-end random forest-based performance predictor,” _IEEE Transactions on Evolutionary Computation_ , vol. 24, no. 2, pp. 350–364, 2019. * [49] R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” _Machine Learning_ , vol. 8, no. 3-4, p. 229–256, May 1992. * [50] R. Sutton and A. Barto, _Reinforcement Learning: An Introduction, second edition_ , ser. Adaptive Computation and Machine Learning series. MIT Press, 2018. * [51] J. Nocedal and S. J. Wright, _Large-Scale Unconstrained Optimization_. New York, NY: Springer New York, 2006, pp. 164–192. * [52] N. K. Verma, R. K. Sevakula, S. Dixit, and A. Salour, “Intelligent condition based monitoring using acoustic signals for air compressors,” _IEEE Transactions on Reliability_ , vol. 65, no. 1, pp. 291–309, March 2016\. * [53] C. Lessmeier, J. Kimotho, D. Zimmer, and W. Sextro, “Condition monitoring of bearing damage in electromechanical drive systems by using motor current signals of electric motors: A benchmark data set for data-driven classification,” _European Conf., PHM Society, Bilbao (Spain)_ , vol. 3, no. 1, 2016. * [54] W. A. Smith and R. B. Randall, “Rolling element bearing diagnostics using the Case Western Reserve University data: A benchmark study,” _Mechanical Systems and Signal Processing_ , vol. 64, pp. 100–131, 2015.
construct some interesting hermitian line bundles on it, and explain the relations between them.

\subsection{A special Shimura variety}
\label{ss:special shimura}

To attach a Shimura variety to our fixed $V$, choose relevant (Definition \ref{def:relevant}) hermitian spaces $(W_0,h_0)$ and $(W,h)$ of signatures $(1,0)$ and $(n-1,1)$, respectively, in such a way that
\begin{equation}\label{hermitian hom}
V \iso \Hom_\kk(W_0 , W)
\end{equation}
as $\kk$-hermitian spaces, where the hermitian form $\langle - , - \rangle$ on the right satisfies
\begin{equation}\label{basic hom hermitian}
\langle x,y\rangle \cdot h_0(w_0,w'_0 ) = h( x(w_0) , y(w'_0) )
\end{equation}
for all $w_0 , w_0' \in W_0$ and $x,y \in V$.
\begin{remark}
Such $W_0$ and $W$ always exist: take $W_0=\kk$ with its norm form, and $W=V$. They are not uniquely determined by $V$, but their strict similarity classes are.
\end{remark}
As in \S 2.2 of [32], if $S$ is a connected $\co_\kk$-scheme and
\[ (A_0,A) \in \mathcal{M}_{W_0}(S) \times \mathcal{M}_{W} (S), \]
then $\Hom_{\co_\kk}(A_0,A)$ carries a positive definite hermitian form
\begin{equation}\label{KR hermitian}
\langle x,y\rangle = \psi_0^{-1} \circ y^\vee \circ \psi\circ x \in \End_{\co_\kk}(A_0) \iso \co_\kk,
\end{equation}
where $\psi_0 : A_0 \to A_0^\vee$ and $\psi : A \to A^\vee$ are the principal polarizations.
%If $S=\Spec(F)$ with $F$ an algebraically closed field and $\ell \neq\mathrm{char}(F)$ is prime, we can repeat this construction with $A_0$ and $A$ replaced by their $\ell$-adic Tate modules to endow $\Hom_{\co_\kk} ( T_\ell(A_0) , T_\ell(A) )$ with an $\co_{\kk,\ell}$-valued hermitian form.
%If $S=\Spec(\C)$ may similarly endow $\Hom_{\co_\kk} ( H_1(A_0,\Z) , H_1(A,\Z) )$ with an $\co_\kk$-valued hermitian form, and then
%\Hom_{\co_\kk} ( T_\ell(A_0) , T_\ell(A) ) \iso \Hom_{\co_\kk} ( H_1(A_0,\Z) , H_1(A,\Z) ) \otimes_\Z \Z_\ell.
As in \S 2.3 of [13], there is an open and closed substack
\begin{equation}\label{moduli inclusion}
\mathcal{S}_V \subset \mathcal{M}_{W_0} \times_{\co_\kk} \mathcal{M}_{W}
\end{equation}
characterized by its points valued in algebraically closed fields, which are those pairs
\[ (A_0,A)\in \mathcal{M}_{W_0} (F) \times \mathcal{M}_{W} (F) \]
for which there exists an isometry
\[ V\otimes \Q_\ell \iso \Hom_{\co_\kk} \big( T_\ell(A_0) , T_\ell(A) \big) \otimes \Q_\ell \]
of $\kk_\ell$-hermitian spaces for every $\ell \neq \mathrm{char}(F)$. Here the hermitian form on the right is defined as in \eqref{KR hermitian}. When $F=\C$ this is equivalent to the existence of an isometry of $\kk$-hermitian spaces
\[ V \iso \Hom_{\kk} \big( H_1(A_0 , \Q) , H_1(A , \Q) \big) . \]
\begin{remark}
When $n$ is even the inclusion \eqref{moduli inclusion} is an isomorphism.
\end{remark}
\begin{remark}\label{rem:projection fiber}
The projection $\mathcal{S}_V \to \mathcal{M}_W$ is a finite \'etale surjection, and the fiber over a geometric point $x \in \mathcal{M}_W(\F)$ satisfies
\[ \sum_{ y \in \mathcal{S}_{V,x}(\F) } \frac{1}{ |\Aut(y) | } = \frac{ |\mathrm{CL}(\kk) |} { | \co_\kk^\times|} \cdot \begin{cases} 1 & \mbox{if $n$ is even} \\ 2^{1-o(D)} & \mbox{if $n$ is odd}. \end{cases} \]
\end{remark}
\begin{remark}\label{rem:honest shimura}
The stack $\mathcal{S}_V$ is denoted $\mathcal{S}_\Kra$ in [13]. Later we will want to vary $V$, and so we have included it in the notation to avoid confusion. As explained in [loc.~cit.], the generic fiber of $\mathcal{S}_V$ is a Shimura variety for the subgroup $G \subset \mathrm{GU}(W_0) \times \mathrm{GU}(W)$ of pairs $(g_0,g)$ for which the similitude factors of the two components are equal.
\end{remark}
\begin{definition}\label{def:special exceptional}
If $n\ge 2$, define the \emph{exceptional divisor}
\[ \mathrm{Exc}_V \subset \mathcal{S}_V \]
as the pullback of the exceptional divisor \eqref{basic exceptional} via $ \mathcal{S}_V \to \mathcal{M}_{(n-1,1)}$. Equivalently, it is defined by the cartesian diagram
\begin{equation}\label{special exceptional}
\xymatrix{
{\mathrm{Exc}_V } \ar[r] \ar[d] & { \mathcal{S}_V } \ar[d] \\
{\mathrm{Sing}_{(n-1,1)} } \ar[r] & { \mathcal{M}^\Pap_{(n-1,1)} }.
}
\end{equation}
\end{definition}
\begin{definition}[{[32]}] \label{def:KR}
For any positive $m\in \Z$, the \emph{Kudla-Rapoport divisor} $\mathcal{Z}_V(m)$ is the $\co_\kk$-stack classifying triples $(A_0,A,x)$ consisting of a pair
\[ (A_0,A) \in \mathcal{S}_V(S) \]
and an $x\in \Hom_{\co_\kk}(A_0,A)$ satisfying $\langle x,x\rangle=m$.
\end{definition}
The natural forgetful morphism
\[ \mathcal{Z}_V(m) \to \mathcal{S}_V \]
is finite and unramified, with image of codimension $1$. We denote again by $\mathcal{Z}_V(m)$ the image of this morphism, viewed as a divisor on $\mathcal{S}_V$ in the usual way (that is to say, each irreducible component of $\mathcal{Z}_V(m)$ contributes an irreducible component of the image, counted with multiplicity equal to the length of the local ring of its generic point). Denote by
\[ \bar{\mathcal{Z}}_V(m)\to \bar{\mathcal{S}}_V \]
the normalization of $\mathcal{Z}_V(m) \to \bar{\mathcal{S}}_V$, and denote in the same way its image, viewed as a divisor on $\bar{\mathcal{S}}_V$. In other words, take the Zariski closure of $\mathcal{Z}_V(m)$. Loosely speaking, each Kudla-Rapoport divisor is a union of unitary Shimura varieties associated to $\kk$-hermitian spaces of signature $(n-2,1)$. In \S \ref{s:KR divisors} we will make this more precise, at least when $m$ is a prime split in $\kk$.
\begin{definition}
Let $N$ be a positive integer, and let $S$ be an $\co_\kk$-scheme with $N\in \co_S^\times$. A \emph{level $N$-structure} on a pair $(A_0,A) \in \mathcal{S}_V(S)$ consists of level $N$-structures $(\eta_0,\xi_0)$ and $(\eta,\xi)$ on $A_0$ and $A$, in the sense of Definition \ref{def:kramer-pappas level}, such that $ \xi_0 = \xi$.
\end{definition}
Adding level structure to pairs $(A_0,A)$ defines a finite \'etale cover
\[ \mathcal{S}_V(N) \to \mathcal{S}_{ V /\co_\kk[1/N] }, \]
and \eqref{moduli inclusion} lifts to a canonical open and closed immersion
\[ \mathcal{S}_V (N) \subset \mathcal{M}_{W_0} (N) \times_{\co_\kk[1/N]} \mathcal{M}_W (N). \]
Define a toroidal compactification
\begin{equation}\label{compact inclusion}
\bar{\mathcal{S}}_V (N) \subset \mathcal{M}_{W_0} (N) \times_{\co_\kk[1/N]} \bar{\mathcal{M}}_W (N)
\end{equation}
as the Zariski closure of $\mathcal{S}_V(N)$, or, equivalently, as the normalization of
\[ \mathcal{S}_V (N) \to \bar{\mathcal{M}}_{W/ \co_\kk[1/N]}. \]
When $N=1$ we abbreviate this to $\bar{\mathcal{S}}_V$.
\begin{remark}\label{rem:special compact level}
As \eqref{compact inclusion} is an open and closed immersion, and $\mathcal{M}_{W_0} (N)$ is finite \'etale over $\co_\kk[1/N]$, the compactification $\bar{\mathcal{S}}_V (N)$ inherits all the nice properties of $\bar{\mathcal{M}}_W (N)$. In particular, Proposition \ref{prop:full compactification} holds word-for-word with $\mathcal{M}_W$ replaced by $\mathcal{S}_V$, and the same is true of the entire discussion of arithmetic intersection theory in \S \ref{ss:chow} and \S \ref{ss:volumes}.
\end{remark}

\subsection{Construction of hermitian line bundles}

The construction \eqref{hodge metric} associates to the universal CM elliptic curve $A_0 \to \mathcal{M}_{W_0} = \mathcal{M}_{(1,0)}$ its metrized Hodge bundle
\begin{equation}\label{elliptic hodge}
\widehat{\omega}^\mathrm{Hdg}_{A_0 / \mathcal{M}_{W_0} } \in \widehat{\Pic}(\mathcal{M}_{W_0} ) .
\end{equation}
Similarly, the universal $A \to \mathcal{M}_W$ determines a metrized Hodge bundle
\begin{equation}\label{basic hodge}
\widehat{\omega}^\mathrm{Hdg}_{A / \mathcal{M}_W } \in \widehat{\Pic}(\mathcal{M}_W ) .
\end{equation}
Pulling back the universal objects via projection to the two factors in \eqref{moduli inclusion} yields a universal pair $(A_0,A)$ of polarized abelian schemes (of relative dimensions $1$ and $n$) over $\mathcal{S}_V$.

As in \S 2.4 of [13] there is a \emph{metrized line bundle of modular forms}
\begin{equation}\label{metrized taut}
\widehat{\taut}_V \in \widehat{\Pic}(\mathcal{S}_V).
\end{equation}
The line bundle underlying \eqref{metrized taut} has inverse
\begin{equation}\label{dual taut}
\taut_V^{-1} = \Lie(A_0) \otimes \Lie(A) / \mathcal{F} ,
\end{equation}
where $\mathcal{F} \subset \Lie(A)$ is the universal hyperplane satisfying Kr\"amer's signature condition (\S \ref{ss:basic moduli}). The hermitian metric is defined as in \S 7.2 of [13]: if we use Proposition 2.4.2 of [13] to identify
\begin{equation}\label{taut realization}
\taut_{V,z} \subset \Hom_{\kk} ( H_1(A_{0,z} ,\Q) , H_1(A_z,\Q) ) \otimes_\Q\C \iso V\otimes_\Q\C
\end{equation}
at a complex point $z\in\mathcal{S}_V(\C)$, the line $\taut_{V,z}$ is isotropic with respect to the $\C$-bilinear extension of the $\Q$-bilinear form $[ x,y] = \mathrm{Tr}_{\kk/\Q} \langle x,y\rangle$ on $V$, and
\begin{equation}\label{taut metric}
\| s \|^2 = - \frac{ [s,\overline{s}] }{ 4\pi e^\gamma }
\end{equation}
for any $s\in \taut_{V,z}$. Note that our $\taut_V$ is denoted $\omega$ in [13, 14].

\begin{proposition}\label{prop:taut-hodge1}
The hermitian line bundles \eqref{basic hodge} and \eqref{metrized taut} lie in the subgroups
\begin{align*}
\widehat{\Pic}( \bar{\mathcal{M}}_W, \mathscr{D}_\BKK ) \subset \widehat{\Pic}(\mathcal{M}_W ) \quad \mbox{and} \quad \widehat{\Pic}( \bar{\mathcal{S}}_V, \mathscr{D}_\BKK ) \subset \widehat{\Pic}(\mathcal{S}_V ),
\end{align*}
respectively, of \eqref{pre-log injection}.
\end{proposition}

\begin{proof}
The extension to $\bar{\mathcal{M}}_W$ of the line bundle
\[
\omega^\mathrm{Hdg}_{A / \mathcal{M}_W } \iso \det( \Lie(A) )^{-1}
\]
underlying \eqref{basic hodge} is part of \eqref{hyperplane extension}. The fact that the hermitian metric has a pre-log singularity along the boundary is a special case of Theorem 6.16 of [12]. Note that this also uses Proposition 3.2 of [16], which shows that any rank one log-singular hermitian vector bundle in the sense of [12] is also a pre-log-singular hermitian line bundle in the sense of Definition 1.20 of [6]. This proves the claim for \eqref{basic hodge}, and the proof for \eqref{metrized taut} is the same.
\end{proof}

Pulling back the hermitian line bundles \eqref{elliptic hodge} and \eqref{basic hodge} via projection to the two factors in \eqref{compact inclusion}, we obtain three hermitian line bundles
\begin{equation}\label{three bundles}
\widehat{\omega}^\mathrm{Hdg}_{A_0 / \mathcal{S}_V },\, \widehat{\omega}^\mathrm{Hdg}_{A / \mathcal{S}_V },\, \widehat{\mathcal{L}}_V \in \widehat{\Pic}( \bar{\mathcal{S}}_V, \mathscr{D}_\BKK ).
\end{equation}
In the remaining subsections we will make explicit the relations between them.
The reader may wish to skip directly to Theorem \ref{thm:taut-hodge compare} for the main results. \subsection{An application of the Chowla-Selberg formula} We will prove that, up to numerical equivalence (Definition \ref{def:numerical}), the first line bundle in \eqref{three bundles} is just the trivial line bundle with a constant metric. The constant defining the metric is an interesting quantity in its own right. Recall the Chowla-Selberg formula: the Faltings height, normalized as in \S 5.2 of [14], of any elliptic curve with CM by $_$ is \begin{equation}\label{faltings} h^\mathrm{Falt}_\kk = - \frac{1}{2} \frac{L'(0,\eps)}{L(0,\eps)} - \frac{1}{4} \log(4\pi^2 D ) . \end{equation} \begin{proposition}\label{prop:easy numerical} The metrized line bundles \[ \widehat{\omega}^\mathrm{Hdg}_{A_0 / \mathcal{S}_V } \in \widehat{\Pic}(\mathcal{S}_V ) \quad \mbox{and}\quad ( 0 , C_1) \in \widehat{\Pic}(\mathcal{S}_V ) \] are numerically equivalent, where $C_1 = \log(2\pi) + 2 h^\mathrm{Falt}_\kk$, and we are using the notation of \eqref{constant metrics}. \end{proposition} \begin{proof} Fix an $N\ge 3$. As $\mathcal{M}_{W_0}(N)$ is finite \'etale over $\co_\kk[1/N]$, there is an isomorphism \[ \mathcal{M}_{W_0}(N) \iso \bigsqcup_i \mathcal{X}_i \] with each $\mathcal{X}_i \iso \Spec(\co_{\kk_i} [ 1/N])$ for a finite field extension $\kk_i/\kk$ unramified outside $N$. For each $\mathcal{X}_i$, consider the metrized Hodge bundle \[ \widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{X}_i} \in \widehat{\Pic}(\mathcal{X}_i ) \] of the universal $A_0 \to \mathcal{X}_i$. The underlying line bundle can be identified with an element in the ideal class group of $\co_{\kk_i}[1/N]$, and hence some power of it admits a trivializing section \[ s_i \in H^0\big( \mathcal{X}_i , ( \omega^\mathrm{Hdg}_{A_0/\mathcal{X}_i})^{\otimes d_i} \big) \iso H^0( A_0 , \Omega^{\otimes d_i} _{ A_0 / \mathcal{X}_i } ). \] Comparing \eqref{hodge metric} with the definition of the Faltings height, normalized as in \S 5.2 of [14], shows that \begin{align*} \frac{ -1}{ \# \mathcal{X}_i(\C) } \sum_{ x\in \mathcal{X}_i(\C) } \log\| s_{i,x}\|^2 % & = \frac{ -1}{ \# \mathcal{X}_i(\C) } \sum_{ x\in \mathcal{X}_i(\C) } \log \left| \frac{1}{2\pi} \int_{ A_{0,x} (\C)} s_{i,x} \wedge \overline{s}_{i,x} \right| \\ & = d_i \cdot C_1, \end{align*} up to a $\Q$-linear combination of $\{ \log(p) : p\mid N\}$. Letting $d$ be the least common multiple of all $d_i$'s, we see that \[ \big( \widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{S}_V(N)} \big)^{\otimes d} \in \widehat{\Pic}( \bar{\mathcal{S}}_V(N) ) \] admits a trivializing section $s$ with $-\log\| s\|^2$ constant on every connected component of $\bar{\mathcal{S}}_V(N)(\C)$, and such that the average value of $-\log\| s\|^2$ over any $\Aut(\C/\kk)$-orbit of components is $d\cdot C_1$, up to a $\Q$-linear combination of $\{ \log(p) : p\mid N\}$. 
If we represent
\[
d \cdot \widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{S}_V(N)} \in \widehat{\mathrm{CH}}^1( \bar{\mathcal{S}}_V(N) )
\]
by the arithmetic divisor $\widehat{\mathrm{div}}(s) = ( 0 , - \log\| s\|^2)$, then for any
\[
(\mathcal{Z}_N , g_N) \in \widehat{\mathrm{CH}}^{n-1}( \bar{\mathcal{S}}_V(N) )
\]
the vanishing of the Chern form of $-\log\| s\|^2$ implies the $*$-product formula
\[
[ - \log\| s\|^2] * g_N = - \log\| s\|^2 \wedge \delta_{\mathcal{Z}_N},
\]
which implies the intersection formula
\[
d \cdot \widehat{\deg}_N\big( (\mathcal{Z}_N , g_N) \cdot \widehat{\omega}^\mathrm{Hdg} _{A_0/\mathcal{S}_V(N)} \big) = - \sum_{ x \in \mathcal{Z}_N(\C) } \log\| s_x\|^2
\]
up to a $\Q$-linear combination of $\{ \log(p) : p \mid N\}$. Let $L\subset \C$ be a finite Galois extension of $\kk$ large enough that all complex points of $\mathcal{Z}_N$ are defined over $L$, and rewrite the equality above as
\[
d \cdot \widehat{\deg}_N \big( (\mathcal{Z}_N , g_N) \cdot \widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{S}_V(N)} \big)
= \frac{-1}{ [ L:\kk] } \sum_{ \substack{ x \in \mathcal{Z}_N(L) \\ \sigma \in \Gal(L/\kk) } } \log\| s_{x^\sigma} \|^2.
\]
The right hand side is $d C_1 \cdot \# \mathcal{Z}_N(\C)$, and hence
\[
\widehat{\deg}_N \big( (\mathcal{Z}_N , g_N) \cdot \widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{S}_V(N)} \big) = C_1 \cdot \# \mathcal{Z}_N(\C) = \widehat{\deg}_N \big( (\mathcal{Z}_N , g_N) \cdot (0,C_1) \big)
\]
up to a $\Q$-linear combination of $\{ \log(p) : p \mid N\}$. Varying $N$ shows that
\[
\widehat{\deg}\big( \widehat{\mathcal{Z}} \cdot \widehat{\omega}^\mathrm{Hdg} _{A_0/\mathcal{S}_V} \big) = \widehat{\deg}\big( \widehat{\mathcal{Z}} \cdot (0,C_1) \big)
\]
for every $\widehat{\mathcal{Z}} \in \widehat{\mathrm{CH}}^{n-1}( \bar{\mathcal{S}}_V )$, and the claim follows.
\end{proof}

\subsection{Another hermitian line bundle}

In order to relate the hermitian line bundles of \eqref{three bundles}, we recall from \S 5.1 of [14] a fourth hermitian line bundle. Denote by
\[
H^1_\mathrm{dR}(A) = \mathbb{R}^1\pi_* \Omega^\bullet_{A/\mathcal{S}_V}
\]
the first relative algebraic de Rham cohomology of $\pi : A\to \mathcal{S}_V$, a rank $2n$ vector bundle on $\mathcal{S}_V$. The action of $\co_\kk$ on $A$ induces an action on $H_1^\mathrm{dR}(A) = H^1_\mathrm{dR}(A)^\vee$, which is locally free of rank $n$ over $\co_\kk \otimes_\Z \co_{\mathcal{S}_V}$. If
\[
H_1^\mathrm{dR}(A) \to \mathcal{V}
\]
denotes the largest quotient on which the action of $\co_\kk$ is through the structure morphism $\co_\kk \to \co_{\mathcal{S}_V}$, then $\mathcal{V}$ is a rank $n$ vector bundle on $\mathcal{S}_V$, equipped with a morphism
\[
\det(\mathcal{V})^{-1} \to \bigwedge\nolimits^n H^1_{\mathrm{dR}}(A) \to H^n_{\mathrm{dR}}(A) .
\]
Given a complex point $z\in\mathcal{S}_V(\C)$ and a vector $s_z \in \det(\mathcal{V}_z)^{-1}$, we view $s_z$ as an element of $H^n(A_z,\C)$, and define
\[
\| s_z \|^2 = \left| \int_{A_z (\C)} s_z \wedge \overline{s}_z \right| .
\]
This defines a hermitian metric on $\det(\mathcal{V})^{-1}$, and hence we obtain a hermitian line bundle
\begin{equation}\label{det bundle}
\det(\mathcal{V}) \in \widehat{\Pic}(\mathcal{S}_V) .
\end{equation}

\begin{proposition} \label{prop:full bundle compare}
Assume $n\ge 2$. Recalling the exceptional divisor of Definition \ref{def:special exceptional} and the notation \eqref{constant metrics}, we have the equality
\[
2 \widehat{\taut}_V = \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} + 2 \widehat{\omega}^\mathrm{Hdg}_{ A_0 / \mathcal{S}_V} + \det( \mathcal{V}) + ( \mathrm{Exc}_V , C_2)
\]
in $\widehat{\Pic}(\mathcal{S}_V)$, where
\[
C_2 = 2 \log\left( \frac{2 e^\gamma}{ D} \right) + (2-n) \log(2\pi).
\]
In particular, $\det(\mathcal{V}) \in \widehat{\Pic}( \bar{\mathcal{S}}_V, \mathscr{D}_\BKK )$ by \eqref{three bundles}.
\end{proposition}

\begin{proof}
This is a restatement of Proposition 5.1.2 of [14], keeping in mind that the metric on
\[
\det(\Lie(A))^{-1} = \omega^\mathrm{Hdg}_{A / \mathcal{S}_V}
\]
used in [loc.~cit.] differs from \eqref{hodge metric} by a power of $2\pi$.
\end{proof}

It is an observation of Gross [18] that the hermitian line bundle $\det(\mathcal{V})$ behaves in the generic fiber, for all arithmetic purposes, like the trivial bundle with a constant metric. This observation was extended to integral models in \S 5.3 of [14], whose results are the basis of the following proposition.

\begin{proposition}\label{prop:gross numerical}
The Chern form of $\det(\mathcal{V})$ is identically $0$. If $n>2$ then, up to numerical equivalence,
\[
\det(\mathcal{V}) = ( 0 , C_3 ),
\]
where
\[
C_3 = (4-2n) h_\kk^\mathrm{Falt} + \log( 4\pi^2 D).
\]
\end{proposition}

\begin{proof}
Fix an integer $N\ge 3$. As in Theorem 1 of [18], the $\C$-vector space $H^0 ( \mathcal{S}_V(N)_{/\C} , \det(\mathcal{V}) )$ has dimension $1$, and the norm of any global section is a locally constant function on $\mathcal{S}_V(N)(\C)$. The vanishing of the Chern form follows. Moreover, one can choose a nonzero global section $t$ defined over a finite Galois extension $\kk'/\kk$, and then for every $\kk$-algebra embedding $\sigma:\kk' \to \C$ the norm $\| t^\sigma \|$ must again be locally constant.

If $n>2$ then\footnote{The assumption $n>2$ is mistakenly omitted in Theorem 5.3.1 of [14], whose proof requires that $\mathcal{M}^\Pap_{(n-1,1)}$ has geometrically normal fibers. The normality is a theorem of Pappas when $n>2$, but is false when $n=2$.} Theorem 5.3.1 of [14] allows us to choose $t$ so that it extends to a nowhere vanishing section
\[
t \in H^0 \big( \mathcal{S}_V(N)_{/\co_{\kk'}[1/N] } , \det(\mathcal{V}) \big) .
\]
Setting $d=[\kk':\kk]$ and taking the tensor product of all $\Gal(\kk'/\kk)$-conjugates of $t$, we obtain a section
\[
s \in H^0( \mathcal{S}_V(N) , \det(\mathcal{V})^{\otimes d})
\]
such that $\mathrm{div}(s) =0$, and such that $-\log\| s\|$ is locally constant. Let
\[
c_X = -\log\| s\|^2
\]
denote its value on the connected component $X \subset \mathcal{S}_V(N)_{/\C}$.

Fix a finite Galois extension $L/\kk$ contained in $\C$ large enough that every component $X \subset \mathcal{S}_V(N)_{/\C}$ is defined over $L$. We may further enlarge $L$ to assume that each $X$ admits an $L$-point
\[
x \in X(L) \subset \mathcal{S}_V(N)(L)
\]
that extends to
\[
\underline{x} : \Spec( \co_L[1/N] ) \to \mathcal{S}_V(N).
\]
For example, start by fixing a complex point $x\in X(\C)$ corresponding to a pair $(A_0,A)$ for which $A$ has complex multiplication. Then enlarge $L$ so that both $A_0$ and $A$ (along with their level $N$-structures) are defined over $L$ and have everywhere good reduction.

Applying Corollary 5.3.2 and Proposition 5.3.3 of [14] to $\underline{x}$, we find
\[
\frac{-1}{ [ L:\kk] } \sum_{ \sigma \in \Gal(L/\kk) } \log\| s_{x^\sigma} \|^2 = d C_3
\]
up to a $\Q$-linear combination of $\{ \log(p) : p \mid N\}$, and hence
\[
\frac{1}{ [ L:\kk] } \sum_{ \sigma \in \Gal(L/\kk) } c_{X^\sigma} = d C_3
\]
up to the same ambiguity. We have now shown that the average value of $-\log\|s\|^2$ over any $\Aut(\C/\kk)$ orbit of connected components in $\mathcal{S}_V(N)(\C)$ is $d C_3$, up to a $\Q$-linear combination of $\{ \log(p) : p \mid N\}$.
The proposition follows from this, by the same argument used in the proof of Proposition \ref{prop:easy numerical}.
\end{proof}

\subsection{Comparison of hermitian line bundles}
%\label{ss:bundle comparisons}

We now come to the main results of \S \ref{s:special shimura bundles}. The exceptional divisor of Definition \ref{def:special exceptional} is a recurring nuisance, in part because it has nontrivial arithmetic intersection with $\widehat{\taut}_V$. This will be explored more fully in \S \ref{ss:exceptional volume} below. The following theorem allows us to avoid this nuisance by slightly modifying $\widehat{\taut}_V$, and also clarifies the relation between the second and third line bundles in \eqref{three bundles}.

\begin{theorem}\label{thm:taut-hodge compare}
Assume $n\ge 2$. Recalling the notation \eqref{constant metrics}, the hermitian line bundle
\[
\widehat{\tautmod}_V \define 2 \widehat{\taut}_V - (\mathrm{Exc}_V,0) \in \widehat{\Pic}( \bar{\mathcal{S}}_V, \mathscr{D}_\BKK )
\]
enjoys the following properties.
\begin{enumerate}
\item
Every irreducible component $E\subset\mathrm{Exc}_V$ satisfies
\[
(E,0) \cdot \widehat{\tautmod}_V =0
\]
in $ \widehat{\mathrm{CH}}^2( \bar{\mathcal{S}}_V , \mathscr{D}_\BKK )_\Q$, as well as the height relation
\[
\mathrm{ht}_{\widehat{\tautmod}_V } (E) =0 .
\]
The same equalities hold if we replace $\widehat{\tautmod}_V$ with $\widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V}$ or $\widehat{\omega}^\mathrm{Hdg}_{ A_0 / \mathcal{S}_V}$.
\item
We have the equality of Chern forms
\[
\chern( \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} ) =2 \chern( \widehat{\taut}_V ) = \chern( \widehat{\tautmod}_V ) .
\]
\item
If $n>2$ then, up to numerical equivalence,
\[
\widehat{\tautmod}_V = \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} + ( 0 , C_0(n) ) \in \widehat{\mathrm{Pic}}(\mathcal{S}_V) ,
\]
where $C_0(n)$ is the constant of Theorem \ref{thm:intro main}. In particular, up to numerical equivalence,
\[
2 \widehat{\taut}_V = \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} + ( \mathrm{Exc}_V , C_0(n) ) .
\]
\end{enumerate}
\end{theorem}

\begin{proof}
For (1), the key observation is that the abelian scheme $A \to \mathcal{S}_V$ is a pullback via the vertical arrow on the right in \eqref{special exceptional}. In particular, $\omega^\mathrm{Hdg}_{A/\mathcal{S}_V}$ is isomorphic to the pullback of
\[
\omega^\mathrm{Hdg}_{A/\mathcal{M}^\Pap_{(n-1,1)}} \in \Pic ( \mathcal{M}^\Pap_{(n-1,1)} ) .
\]
If $\mathcal{M}^\Pap_{(n-1,1)}$ were a scheme we could trivialize this latter line bundle over a Zariski open neighborhood of the ($0$-dimensional) singular locus. This would pull back to a trivialization of $\omega^\mathrm{Hdg}_{A/ \mathcal{S}_V}$ over an open neighborhood of $\mathrm{Exc}_V$, and in particular over an open neighborhood of any irreducible component $E\subset \mathrm{Exc}_V$.

To account for the stackiness, simply fix an integer $N\ge 3$ and apply the same reasoning with level $N$-structure to see that $\omega^\mathrm{Hdg}_{A/\mathcal{S}_V(N)}$ is trivial in some Zariski open neighborhood of
\[
E(N) = E \times_{ \mathcal{S}_V} \mathcal{S}_V(N).
\]
This implies the arithmetic intersection formula
\[
( E(N) , 0 ) \cdot \widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V(N)} =0 ,
\]
and varying $N$ proves
\[
( E , 0 ) \cdot \widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V} =0 .
\]
The line bundle \eqref{det bundle} is also a pullback via the vertical arrow on the right in \eqref{special exceptional}, hence the same argument shows
\[
(E,0) \cdot \det(\mathcal{V}) =0.
\]
The proof of Proposition \ref{prop:easy numerical} shows that, for any $N\ge 3$, there is a positive multiple of
\[
\widehat{\omega}^\mathrm{Hdg}_{A_0/\mathcal{S}_V(N)} \in \widehat{\mathrm{CH}}^1( \bar{ \mathcal{S}} _V (N) , \mathscr{D}_\BKK )
\]
that can be represented by a purely archimedean arithmetic divisor $(0,g)$. Any such arithmetic divisor satisfies $(E(N) ,0 ) \cdot (0,g) =0$, and varying $N$ shows that
\[
(E ,0 ) \cdot \widehat{\omega}^\mathrm{Hdg}_{ A_0 / \mathcal{S}_V} =0.
\]
Rewriting the relation of Proposition \ref{prop:full bundle compare} as
\begin{equation}\label{Kalt}
\widehat{\tautmod}_V = \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} + 2 \widehat{\omega}^\mathrm{Hdg}_{ A_0 / \mathcal{S}_V} + \det( \mathcal{V}) + ( 0 , C_2) ,
\end{equation}
we have shown that the right hand side has trivial arithmetic intersection with $(E,0)$, and hence so does the left hand side. This proves the first equality of (1). The second is a formal consequence of this and \eqref{height degree}.

The first equality of (2) follows from \eqref{Kalt}, as the Chern forms of the final three terms on the right vanish by Lemma \ref{lem:numerical basics}, Proposition \ref{prop:easy numerical}, and Proposition \ref{prop:gross numerical}. The second equality of (2) is clear from the definitions.

Claim (3) also follows from \eqref{Kalt}, using Proposition \ref{prop:easy numerical}, Proposition \ref{prop:gross numerical}, and the equality $ C_0 (n) = 2C_1 + C_2+ C_3$.
\end{proof}

\begin{remark}\label{rem:taut-hodge volume}
If $n>2$ then part (3) of Theorem \ref{thm:taut-hodge compare} implies
\begin{align*}
2^n \cdot \widehat{\mathrm{vol}} ( \widehat{\tautmod}_V ) & = \widehat{\mathrm{vol}} \big( \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} + ( 0 , C_0(n) ) \big) \\
& = \widehat{\mathrm{vol}} \big( \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} \big) + n C_0(n) \int_{\mathcal{S}_V(\C) } \chern( \widehat{\omega}^\mathrm{Hdg}_{ A / \mathcal{S}_V} )^{n-1}
\end{align*}
where we have used Lemma \ref{lem:numerical basics} for the first equality, and Lemma \ref{lem:trivial volume shift} for the second. In Proposition \ref{prop:taut-hodge strong volume} we will show that this equality also holds when $n=2$.
\end{remark}

\subsection{Volume of the exceptional divisor}
\label{ss:exceptional volume}

In this subsection we assume $n \ge 2$, and fix an irreducible ($=$ connected) component $E$ of the exceptional divisor of Definition \ref{def:special exceptional}. Recalling the notation \eqref{constant metrics}, our goal is to compute the arithmetic volume of
\[
(E,0) \in \widehat{\Pic}( \bar{\mathcal{S}}_V ).
\]
By definition of the exceptional divisor, there is a commutative diagram with cartesian squares
\begin{equation*}%\label{singular point}
\xymatrix{
{E} \ar[r] \ar[d] & { e } \ar[d] \\
{\mathrm{Exc}}_V \ar[r] \ar[d] & { \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathrm{Sing}_{(n-1,1)} } \ar[d] \\
{\mathcal{S}_V} \ar[r] & { \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}^\Pap_{(n-1,1)} }
}
\end{equation*}
in which $e$ is a connected component of the $0$-dimensional reduced and irreducible $\co_\kk$-stack $\mathcal{M}_{(1,0)} \times_{\co_\kk} \mathrm{Sing}_{(n-1,1)}$. In particular, $e$ is supported in a single characteristic $p|D$, and admits a presentation as a stack quotient
\[
e \iso \Delta \backslash \Spec(\F_\mathfrak{p}'),
\]
in which $\mathfrak{p} \subset \co_\kk$ is the prime above $p$, $\F_\mathfrak{p}'$ is a finite extension of $\F_\mathfrak{p}=\co_\kk/\mathfrak{p}$, and $\Delta$ is a finite group acting on $\F_\mathfrak{p}'$.
Define a rational number
\[
m_E \define \sum_{ z\in e(\F_\mathfrak{p}^\alg) } \frac{ 1 }{ |\Aut(z)|} = \frac{[ \F_\mathfrak{p}' : \F_\mathfrak{p} ] }{ |\Delta| } .
\]

\begin{proposition}\label{prop:projective intersection}
The iterated intersection
\[
\widehat{\taut}_V^{-1} \cdots \widehat{\taut}_V^{-1} \cdot (E,0) \in \widehat{\mathrm{CH}}^n(\bar{\mathcal{S}}_V , \mathscr{D}_\BKK)
\]
has arithmetic degree $\widehat{m}_E = m_E \log(p)$.
\end{proposition}

\begin{proof}
After Remark \ref{rem:singular description}, one might expect that $E$ is isomorphic to the projective space of hyperplanes in an $n$-dimensional vector space. In particular, there should be a universal hyperplane $\mathcal{F} \subset \co_E^n$, with the property that the restriction of $\taut_V^{-1}$ to $E$ is isomorphic to $\co_E^n /\mathcal{F}$.

The obstruction to this being true is due to stacky issues, which can be removed by fixing an integer $N\ge 3$ prime to $p$ and adding level $N$-structure to the universal pair $(A_0,A)$ over $e$. Let $e(N) \to e$ be the scheme classifying such level structures, and consider the cartesian diagram (this is the definition of the upper left corner)
\[
\xymatrix{
{ E(N) } \ar[r]\ar[d] & { e(N) } \ar[d] \\
{ E } \ar[r] & { e .}
}
\]
The scheme $e(N)$, being a reduced scheme finite over $\F_\mathfrak{p}$, is a disjoint union of finitely many spectra of finite extensions of $\F_\mathfrak{p}$. Fix a connected component $E' \subset E(N)$, and let
\[
\Spec(\F_\mathfrak{p}') = e' \subset e(N)
\]
be the connected component below it. As in Remark \ref{rem:singular description}, $E'$ is precisely the projective space over $e'$ classifying hyperplanes $ \mathcal{F} \subset \Lie(A|_{e'})$. Moreover, after fixing a trivialization $\Lie(A_0|_{e'}) \iso \F_\mathfrak{p}'$, the isomorphism \eqref{dual taut} identifies
\[
\taut_V^{-1} |_{E'} = \Lie(A|_{E'}) / \mathcal{F} \in \Pic(E') \iso \mathrm{CH}^1(E')
\]
where $\mathcal{F} \subset \Lie(A|_{E'})$ is the universal hyperplane. A routine exercise then shows that the iterated intersection
\[
( \taut_V^{-1} |_{E'}) \cdots (\taut_V^{-1} |_{E'}) \in \mathrm{CH}^{n-1}(E') \iso \Z
\]
is represented by the cycle class of any $\F_\mathfrak{p}'$-valued point of $E'$. In other words, the cycle class of any section to $E' \to e'$.

Now allow the connected component $E' \subset E(N)$ to vary. If we fix any section to $E(N) \to e(N)$, and use it to view $e(N)$ as a $0$-cycle on $E(N)$, then
\[
e(N) = ( \taut_V^{-1} |_{E(N)}) \cdots (\taut_V^{-1} |_{E(N)}) \in \mathrm{CH}^{n-1}(E(N)) .
\]
This implies the arithmetic intersection formula
\[
( e(N) , 0 ) = \widehat{\taut}_V^{-1} \cdots \widehat{\taut}_V^{-1} \cdot ( E(N) , 0 ) \in \widehat{\mathrm{CH}}^n( \bar{\mathcal{S}}_V(N) , \mathscr{D}_\BKK )_\Q.
\]
Finally, recalling \eqref{degree normalization}, we deduce
\[
\widehat{\deg} \big( \widehat{\taut}_V^{-1} \cdots \widehat{\taut}_V^{-1} \cdot ( E , 0 ) \big)
= \frac{ \# e(N) (\F_\mathfrak{p}^\alg) }{d_N} \log(p)
= \sum_{ z\in e(\F_\mathfrak{p}^\alg) } \frac{\log(p)}{ |\Aut(z)|}
\]
up to a $\Q$-linear combination of $\{ \log(p) : p\mid N\}$. Varying $N$ completes the proof.
\end{proof}

\begin{corollary}\label{cor:exceptional volume}
The hermitian line bundle $(E,0) \in \widehat{\Pic}(\bar{\mathcal{S}}_V )$ has arithmetic volume
\[
\widehat{\vol} ( E,0) = (-2)^{n-1} \cdot \widehat{m}_E .
\]
\end{corollary}

\begin{proof}
Part (1) of Theorem \ref{thm:taut-hodge compare} implies the second equality in
\[
(E,0)\cdot (E,0) = (\mathrm{Exc}_V,0) \cdot (E,0) = 2 \widehat{\taut}_V \cdot (E,0) \in \widehat{\mathrm{CH}}^2(\bar{\mathcal{S}}_V , \mathscr{D}_\BKK)_\Q.
\]
This implies the iterated intersection formula
\[
(E,0)\cdots (E,0) = 2^{n-1} \widehat{\taut}_V \cdots \widehat{\taut}_V \cdot (E,0) \in \widehat{\mathrm{CH}}^n(\bar{\mathcal{S}}_V , \mathscr{D}_\BKK)_\Q ,
\]
and the claim follows using Proposition \ref{prop:projective intersection}.
\end{proof}

\section{Kudla-Rapoport divisors at split primes}
\label{s:KR divisors}

Assume $n > 2$, and let $\mathcal{S}_V$ be the Shimura variety \eqref{moduli inclusion} associated to a $\kk$-hermitian space $V$ of signature $(n-1,1)$ containing a self-dual lattice. Fix a prime $p$ split in $\kk$, and factor
\[
p\co_\kk = \mathfrak{p} \overline{\mathfrak{p}}.
\]
We explain how the Kudla-Rapoport divisor $\mathcal{Z}_V(p) \to \mathcal{S}_V$ of Definition \ref{def:KR} is related to a Shimura variety $\mathcal{S}_{V'}$ with $V'$ of signature $(n-2,1)$.

\subsection{Statement of the results}
\label{ss:split divisors main}

Let $V'$ denote the $\kk$-hermitian space of signature $(n-2,1)$ whose local invariants satisfy
\begin{equation}\label{V'}
\mathrm{inv}_\ell(V') = ( p, -D)_\ell \cdot \mathrm{inv}_\ell(V)
\end{equation}
for all places $\ell \le \infty$. Equivalently, $V'$ is the orthogonal complement to the $\kk$-span of any $x\in V$ of hermitian norm $\langle x,x\rangle=p$.

\begin{lemma}\label{lem:lower hermitian}
The hermitian space $V'$ admits a self-dual $\co_\kk$-lattice.
\end{lemma}

\begin{proof}
Suppose $\ell$ is a rational prime unramified in $\kk$. If $\ell \neq p$ then $p\in \Z_\ell^\times$, and hence is a norm from the unramified extension $\kk_\ell$. If $\ell =p$ then $p$ is again a norm from $\kk_\ell \iso \Q_p \times \Q_p$. In either case, $(p,-D)_\ell=1$, and so the local invariants of $V$ and $V'$ agree at all unramified primes. In general, a $\kk$-hermitian space admits a self-dual $\co_\kk$-lattice if and only if it has local invariant $1$ at every prime unramified in $\kk$. As $V$ satisfies this condition by hypothesis, so does $V'$.
\end{proof}

Lemma \ref{lem:lower hermitian} allows us to form the $\co_\kk$-stacks
\[
\mathcal{S}_{V'} \subset \bar{\mathcal{S}}_{V'} \subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \bar{\mathcal{M}}_{(n-2,1)}
\]
analogous to
\[
\mathcal{S}_V \subset \bar{\mathcal{S}}_V\subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \bar{\mathcal{M}}_{(n-1,1)},
\]
but in one dimension lower. The stack $\mathcal{S}_{V'}$ is endowed with its own hermitian line bundle $\widehat{\taut}_{V'}$ as in \eqref{metrized taut}, its own exceptional divisor $\mathrm{Exc}_{V'}$ as in Definition \ref{def:special exceptional}, and its own universal pair $(A_0',A')$ of polarized abelian schemes with $\co_\kk$-actions.

The following theorems, whose proofs will occupy the remainder of \S \ref{s:KR divisors}, lie at the core of the inductive arguments of \S \ref{s:volumes}.

\begin{theorem}\label{thm:height descent 1}
If we set
\[
\widehat{\tautmod}_V = 2 \widehat{\taut}_V - ( \mathrm{Exc}_V , 0) \in \widehat{\Pic} ( \bar{\mathcal{S}}_V , \mathscr{D}_\BKK )
\]
as in Theorem \ref{thm:taut-hodge compare}, and similarly with $V$ replaced by $V'$, then
\[
\int_{ \mathcal{Z}_V(p) (\C) } \chern( \widehat{\tautmod}_V)^{n-2} = ( p^{n-1}+1) \vol_\C( \widehat{\tautmod}_{V'} ).
\]
Moreover, there is a rational number $a(p)$ such that
\[
\frac{ \mathrm{ht}_{ \widehat{\tautmod}_V } (\bar{\mathcal{Z}}_V(p)) } { p^{n-1}+1 } = \widehat{\vol} (\widehat{\tautmod}_{V'} ) + a(p) \log(p) .
\]
\end{theorem}

\begin{theorem}\label{thm:height descent 2}
There is a rational number $b(p)$ such that
\begin{align*}
\frac{ \mathrm{ht}_{\widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V}} (\bar{\mathcal{Z}}_V(p)) }{ p^{n-1}+1 } & = \widehat{\vol} ( \widehat{\omega}^\mathrm{Hdg}_{A'/\mathcal{S}_{V'}} ) + b(p) \log(p) \\
& \quad + (1-n) \left( \frac{L'(0,\eps)}{L(0,\eps)} +\frac{\log(D)}{2} \right) \vol_\C( \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} ) .
\end{align*}
\end{theorem}

The proofs are rather long, so we now summarize the key steps. The central idea is to make precise the impressionistic relation
\[
\mathcal{Z}_V(p) ``=" (p^{n-1}+1) \cdot \mathcal{S}_{V'} ,
\]
by decomposing
\begin{equation}\label{flavor decomp}
\mathcal{Z}_V(p)_{/\co_\kk[1/p]} = \mathcal{U}_0 \sqcup \mathcal{U}_{\mathfrak{p}} \sqcup \mathcal{U}_{\overline{\mathfrak{p}}}
\end{equation}
as a disjoint union of open and closed substacks (Proposition \ref{prop:split divisor flavors} below). The stacks on the right hand side are related to $\mathcal{S}_{V'}$ by closed immersions
\[
i_\mathfrak{p} : \mathcal{S}_{V' / \co_\kk[1/p]} \to \mathcal{U}_{\mathfrak{p}} ,\qquad i_{\overline{\mathfrak{p}}} : \mathcal{S}_{V' / \co_\kk[1/p]} \to \mathcal{U}_{\overline{\mathfrak{p}}},
\]
and by a diagram
\[
\xymatrix{
{ \mathcal{S}_{V' / \co_\kk[1/p]} } & { \mathcal{T}_{V'} } \ar[l] \ar[r]^{i_0} & { \mathcal{U}_0 }
}
\]
in which the leftward arrow is a finite \'etale surjection of degree $p^{n-1}-1$, and the rightward arrow is a closed immersion. See \eqref{tau cover} for the definition of the stack $\mathcal{T}_{V'}$.

Recalling the exceptional locus of Definition \ref{def:special exceptional}, denote by
\[
\mathcal{S}^{\nonexc}_V = \mathcal{S}_V \smallsetminus \mathrm{Exc}_V
\]
the (open) nonexceptional locus of $\mathcal{S}_V$, set
\[
\mathcal{Z}^{\nonexc}_{V}(p) = \mathcal{Z}_{V}(p) \times_{\mathcal{S}_{V}} \mathcal{S}^\nonexc_V ,
\]
and make the same definitions with $V$ replaced by $V'$. Set
\[
\mathcal{U}^\nonexc_\square = \mathcal{U}_\square \cap \mathcal{Z}^\nonexc_V(p)
\]
for $\square \in \{ 0, \mathfrak{p} , \overline{\mathfrak{p}} \}$, and
\[
\mathcal{T}^\nonexc_{V'} = \mathcal{T}_{V'} \times_{\mathcal{S}_{V'}} \mathcal{S}_{V'}^\nonexc .
\]
We will show that the closed immersions $i_0$, $i_\mathfrak{p}$, and $i_{\overline{\mathfrak{p}}}$ are close to being isomorphisms. More precisely, $i_0$ restricts to an isomorphism
\[
i_0 : \mathcal{T}^\nonexc_{V'} \iso \mathcal{U}^\nonexc_0,
\]
$i_\mathfrak{p}$ and $i_{\overline{\mathfrak{p}}}$ restrict to isomorphisms
\[
i_\mathfrak{p} : \mathcal{S}^\nonexc_{V' / \co_\kk[1/p]} \iso \mathcal{U}^\nonexc_{\mathfrak{p}} \quad \mbox{and}\quad i_{\overline{\mathfrak{p}}} : \mathcal{S}^\nonexc_{V' / \co_\kk[1/p]} \iso \mathcal{U}^\nonexc_{\overline{\mathfrak{p}}}.
\]
After taking compactifications into account, both theorems above will follow easily from these isomorphisms. Note that it suffices to work only over the nonexceptional locus, as part (1) of Theorem \ref{thm:taut-hodge compare} guarantees that the hermitian line bundles in question have trivial arithmetic intersection with all components of the exceptional divisor (which is why we work with $\tautmod_V$ in Theorem \ref{thm:height descent 1} instead of $\taut_V$).
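For orientation, we note the following bookkeeping (a heuristic restatement of the maps just described, not an additional result): over the nonexceptional locus, the three pieces of \eqref{flavor decomp} account for the factor $p^{n-1}+1$ via
\[
\underbrace{ (p^{n-1}-1) }_{ \mathcal{U}_0 \text{ via } \mathcal{T}_{V'} }
\; + \;
\underbrace{ 1 }_{ \mathcal{U}_{\mathfrak{p}} }
\; + \;
\underbrace{ 1 }_{ \mathcal{U}_{\overline{\mathfrak{p}}} }
\; = \; p^{n-1}+1
\]
copies of $\mathcal{S}_{V'}$, which is exactly the content of the impressionistic relation displayed above.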
%If $F$ is an algebraically closed field, it follows from the local model calculations of Pappas [45] that an $F$-point
% (A_0,A) \in \mathcal{S}_{V} (F)
%lies in nonexceptional locus if and only if the action of $\sqrt{-D} \in \End(A)$ on $\Lie(A)$ is nonzero.
%See \S 2.3, and especially Theorem 2.3.2, of [13] for a more thorough discussion.

\subsection{Decomposing the Kudla-Rapoport divisor}

The decomposition \eqref{flavor decomp} is a geometric reflection of a result of linear algebra. As in \eqref{hermitian hom}, choose an isomorphism
\[
V \iso \Hom_\kk(W_0,W)
\]
in which $(W_0,h_0)$ and $(W,h)$ are relevant hermitian spaces of signatures $(1,0)$ and $(n-1,1)$. Fix self-dual $\co_\kk$-lattices $\mathfrak{a}_0 \subset W_0$ and $\mathfrak{a} \subset W$. Any vector
\[
x\in \Hom_{\co_\kk}(\mathfrak{a}_0, \mathfrak{a}) \subset V
\]
with $\langle x,x\rangle=p$ determines an orthogonal decomposition
\[
W = \tilde{W}_0 \oplus W'
\]
with $\tilde{W}_0 = x(W_0)$, and a corresponding decomposition
\[
V = \kk x \oplus V'
\]
with $V'= \Hom_\kk(W_0 ,W')$.

\begin{lemma}\label{lem:lattice trifurcation}
The $\co_\kk$-lattice $\tilde{\mathfrak{a}}_0=\mathfrak{a} \cap \tilde{W}_0$ satisfies exactly one of
\[
\tilde{\mathfrak{a}}_0 = x(\mathfrak{a}_0) ,\qquad \tilde{\mathfrak{a}}_0 =\mathfrak{p}^{-1} x(\mathfrak{a}_0), \qquad \tilde{\mathfrak{a}}_0=\overline{\mathfrak{p}}^{-1} x(\mathfrak{a}_0).
\]
\end{lemma}

\begin{proof}
Use $x$ to identify $W_0 = \tilde{W}_0$, and hence $\mathfrak{a}_0 \subset \tilde{\mathfrak{a}}_0 \subset \mathfrak{a}$. Recalling the symplectic forms
\[
e_0: W_0 \times W_0 \to \Q ,\qquad e: W \times W \to \Q
\]
of \eqref{symplectic}, the relation \eqref{basic hom hermitian} implies that $e|_{W_0} = p \cdot e_0$. The inclusion
\[
e_0 ( p \tilde{\mathfrak{a}}_0 , \mathfrak{a}_0) = e ( \tilde{\mathfrak{a}}_0 , \mathfrak{a}_0) \subset e( \mathfrak{a} , \mathfrak{a}) =\Z,
\]
together with the self-duality of $\mathfrak{a}_0$ under $e_0$, shows that $p \tilde{\mathfrak{a}}_0 \subset \mathfrak{a}_0$. If equality held we would have
\[
e_0( \mathfrak{a}_0 , \mathfrak{a}_0 ) = p^{-1} e( \mathfrak{a}_0 , \mathfrak{a}_0) \subset p e( \tilde{\mathfrak{a}}_0,\tilde{\mathfrak{a}}_0)\subset p e(\mathfrak{a},\mathfrak{a}) \subset p \Z,
\]
contradicting $\mathfrak{a}_0$ being self-dual under $e_0$. As $\tilde{\mathfrak{a}}_0$ is $\co_\kk$-stable with $\mathfrak{a}_0 \subset \tilde{\mathfrak{a}}_0 \subsetneq p^{-1} \mathfrak{a}_0$, it is $\mathfrak{a}_0$, $\mathfrak{p}^{-1} \mathfrak{a}_0$, or $\overline{\mathfrak{p}}^{-1} \mathfrak{a}_0$.
\end{proof}

\begin{proposition}\label{prop:split divisor flavors}
Let $(A_0,A,x)$ be the universal triple over $\mathcal{Z}_V(p)_{/\co_\kk[1/p]}$, so that $x\in \Hom_{\co_\kk}(A_0,A)$ satisfies $\langle x,x\rangle =p$. There is a decomposition \eqref{flavor decomp} into open and closed substacks in which
\begin{itemize}
\item $\mathcal{U}_0$ is the locus of points where $\ker(x)=0$,
\item $\mathcal{U}_\mathfrak{p}$ is the locus of points where $\ker(x) = A_0[\mathfrak{p}]$,
\item $ \mathcal{U}_{\overline{\mathfrak{p}}}$ is the locus of points where $\ker(x) = A_0[ \overline{\mathfrak{p}}]$.
\end{itemize}
\end{proposition}

\begin{proof}
Recalling \eqref{KR hermitian}, the relation $\langle x,x\rangle =p$ implies that multiplication-by-$p$ factors as
\[
A_0 \map{x} A \iso A^\vee \map{x^\vee} A_0^\vee \iso A_0,
\]
and so $\ker(x) \subset A_0[p]$.
Both of these group schemes are finite \'etale over $\mathcal{Z}_V(p)_{/\co_\kk[1/p]}$, which implies that each of $\mathcal{U}_0$, $\mathcal{U}_\mathfrak{p}$, and $\mathcal{U}_{\overline{\mathfrak{p}}}$ is open and closed. It only remains to prove that every geometric point $s \to \mathcal{Z}_V(p)_{/\co_\kk[1/p]}$ is contained in one of them.

Abbreviate $T_0= T_p(A_{0 s})$ and $T=T_p(A_s)$. Applying the snake lemma to the diagram
\[
\xymatrix{
0 \ar[r] & {T_0 } \ar[r] \ar[d]_{x_s} & { T_0\otimes \Q_p } \ar[r] \ar[d]_{x_s} & { A_{0 s}[p^\infty]} \ar[r] \ar[d]_{x_s} & 0 \\
0 \ar[r] & {T } \ar[r] & { T \otimes \Q_p } \ar[r] & { A_{s}[p^\infty]} \ar[r] & 0 ,
}
\]
and using the vertical arrow on the left to identify $T_0 \subset T$, we find that
\[
\ker(x_s) \iso \tilde{T}_0 / T_0,
\]
where $\tilde{T}_0 = T \cap T_{0 \Q_p}$. After fixing an isomorphism $\Z_p \iso \Z_p(1)$ of \'etale sheaves on $s$, there are unique $\co_{\kk,p}$-valued hermitian forms $h_0$ and $h$ on $T_0$ and $T$, respectively, related to the Weil pairings $e_0$ and $e$ by \eqref{symplectic}. Thus we may apply Lemma \ref{lem:lattice trifurcation} with $\mathfrak{a}_0$ and $\mathfrak{a}$ replaced by $T_0$ and $T$, to see that $\tilde{T}_0$ must be one of $T_0$, $\mathfrak{p}^{-1} T_0$, or $\overline{\mathfrak{p}}^{-1} T_0$. These three cases correspond to $\ker(x_s)$ being trivial, $A_{0s}[\mathfrak{p}]$, or $A_{0s}[\overline{\mathfrak{p}}]$.
\end{proof}

\subsection{Analysis of $\mathcal{U}_\mathfrak{p}$}

In this subsection we study the structure of the substack $\mathcal{U}_\mathfrak{p}$ of \eqref{flavor decomp}, and make explicit its relation to $\mathcal{S}_{V'}$. The analogous analysis of $\mathcal{U}_{\overline{\mathfrak{p}}}$ is obtained by replacing $\mathfrak{p}$ by $\overline{\mathfrak{p}}$ everywhere.

Return to the situation of Lemma \ref{lem:lattice trifurcation}, so that $x\in \Hom_{\co_\kk}(\mathfrak{a}_0,\mathfrak{a})$ with $\langle x,x\rangle=p$ determines an orthogonal decomposition
\[
W = \tilde{W}_0 \oplus W' .
\]
Set $\tilde{\mathfrak{a}}_0 = \mathfrak{a}\cap \tilde{W}_0$.

\begin{lemma}\label{lem:pi linear algebra}
If $\tilde{\mathfrak{a}}_0 =\mathfrak{p}^{-1} x(\mathfrak{a}_0)$, there is an orthogonal decomposition
\[
\mathfrak{a} = \tilde{\mathfrak{a}}_0 \oplus \mathfrak{a}' \subset W
\]
in which $\mathfrak{a}' = \mathfrak{a} \cap W'$.
\end{lemma}

\begin{proof}
If we use $x$ to identify $W_0 = \tilde{W}_0$, the assumption $\tilde{\mathfrak{a}}_0=\mathfrak{p}^{-1}\mathfrak{a}_0$ implies that $\tilde{\mathfrak{a}}_0$ is self-dual with respect to the hermitian form $h|_{ \tilde{W}_0 }= p h_0$. The desired decomposition then follows by elementary linear algebra.
\end{proof}

The lemma suggests that if $(A_0,A,x) \in \mathcal{U}_\mathfrak{p}(S)$ for an $\co_\kk[1/p]$-scheme $S$, then $x: A_0 \to A$ should determine an $\co_\kk$-linear splitting
\begin{equation}\label{pi product}
A = \tilde{A}_0 \times A'
\end{equation}
of principally polarized abelian schemes. Indeed, this is the case. If we set
\[
\tilde{A}_0 = A_0 / A_0[\mathfrak{p}]
\]
and recall that $\ker(x) = A_0[\mathfrak{p}]$, the morphism $x : A_0 \to A$ factors as
\[
A_0 \to \tilde{A}_0 \map{y} A
\]
for some $y\in \Hom_{\co_\kk}(\tilde{A}_0, A) $ satisfying $\langle y,y\rangle=1$. In other words, the composition
\[
\tilde{A}_0 \map{y} A \iso A^\vee \map{y^\vee} \tilde{A}_0^\vee \iso \tilde{A}_0
\]
is the identity. This implies that the composition
\[
A \iso A^\vee \map{y^\vee} \tilde{A}_0^\vee \iso \tilde{A}_0 \map{y} A
\]
is a Rosati-fixed idempotent in $\End_{\co_\kk}(A)$, and $A$ admits a unique splitting \eqref{pi product} of principally polarized abelian schemes over $S$ such that this idempotent is the projection to the first factor.
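To spell out why this composition is idempotent, write $\tilde{\psi}_0 : \tilde{A}_0 \to \tilde{A}_0^\vee$ for the principal polarization of $\tilde{A}_0$ (this notation is introduced only for the present verification), so that the composition in question is $e = y\circ \tilde{\psi}_0^{-1}\circ y^\vee \circ \psi$, and the statement $\langle y,y\rangle =1$ says precisely that $\tilde{\psi}_0^{-1}\circ y^\vee\circ\psi\circ y = \mathrm{id}_{\tilde{A}_0}$. Then
\[
e \circ e
= y \circ \big( \tilde{\psi}_0^{-1}\circ y^\vee \circ \psi \circ y \big) \circ \tilde{\psi}_0^{-1}\circ y^\vee \circ \psi
= y \circ \tilde{\psi}_0^{-1}\circ y^\vee \circ \psi
= e ,
\]
and the symmetry of the polarizations $\psi$ and $\tilde{\psi}_0$ under duality shows, by the same kind of manipulation, that $e$ is fixed by the Rosati involution $f\mapsto \psi^{-1}\circ f^\vee\circ\psi$.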
Apply the above construction to the universal triple $(A_0,A,x)$ over $\mathcal{U}_\mathfrak{p}$, and recall that $A$ comes equipped with an $\co_\kk$-stable hyperplane $\mathcal{F} \subset \Lie(A)$ satisfying Kr\"amer's signature condition. Using the decomposition
\[
\Lie(A) = \Lie(\tilde{A}_0) \oplus \Lie(A')
\]
of vector bundles on $\mathcal{U}_\mathfrak{p}$, denote by
\[
\mathcal{U}_\mathfrak{p}^\dagger \subset \mathcal{U}_\mathfrak{p}
\]
the largest closed substack over which $\Lie(\tilde{A}_0 ) \subset \mathcal{F}$.

\begin{proposition}\label{prop:i_p construction}
There is a canonical isomorphism
\[
i_\mathfrak{p} : \mathcal{S}_{V'/\co_\kk[1/p]} \iso \mathcal{U}_\mathfrak{p}^\dagger.
\]
\end{proposition}

\begin{proof}
As above, let $S$ be an $\co_\kk[1/p]$-scheme. Given a point $(A_0,A,x) \in \mathcal{U}_\mathfrak{p}^\dagger(S)$, the splitting \eqref{pi product} determines
\[
A' \in \mathcal{M}_{( n-2,1)}(S),
\]
where we have endowed $A'$ with the $\co_\kk$-stable hyperplane
\[
\mathcal{F}' = \mathcal{F} / \Lie(\tilde{A}_0 ) \subset \Lie(A)/ \Lie(\tilde{A}_0 ) \iso \Lie(A')
\]
satisfying Kr\"amer's condition. The pair $(A_0,A')$ defines an $S$-point of
\[
\mathcal{S}_{V'} \subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}_{( n-2,1)},
\]
and we have now constructed a morphism
\begin{equation}\label{i_p inverse}
\mathcal{U}^\dagger_\mathfrak{p} \map{ (A_0,A,x) \to (A_0,A')} \mathcal{S}_{V'/\co_\kk[1/p]}.
\end{equation}

Conversely, start with an $S$-point
\[
(A'_0 , A') \in \mathcal{S}_{V'}(S) .
\]
First define elliptic curves $A_0 = A_0'$ and $\tilde{A}_0 = A_0 /A_0[\mathfrak{p}]$. Then define an abelian scheme $A$ by \eqref{pi product}, and endow $A$ with its product principal polarization and product $\co_\kk$-action. Recalling that $A'$ comes equipped with a hyperplane $\mathcal{F}' \subset \Lie(A')$ satisfying Kr\"amer's condition, we endow $A$ with the hyperplane
\[
\mathcal{F} = \Lie( \tilde{A}_0 ) \oplus \mathcal{F}' \subset \Lie(A).
\]
It is easy to check that $A$, with its extra data, defines an $S$-point of $\mathcal{M}_{(n-1,1)}$, and that $(A_0,A)$ defines an $S$-point of the open and closed substack
\[
\mathcal{S}_V \subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}_{(n-1,1)}.
\]
If we define $x\in \Hom_{\co_\kk}(A_0,A)$ as the composition
\[
A_0 \to \tilde{A}_0 \hookrightarrow \tilde{A}_0 \times A' =A,
\]
where the first arrow is the quotient map, then $\langle x ,x \rangle =p$ and $\ker(x) =A_0[\mathfrak{p}]$. The triple $(A_0,A,x)$ defines an $S$-point of $\mathcal{U}^\dagger_\mathfrak{p}$, and the morphism
\[
\mathcal{S}_{V'/\co_\kk[1/p]} \map{ (A'_0 , A') \to (A_0,A,x) } \mathcal{U}^\dagger_\mathfrak{p}
\]
is inverse to \eqref{i_p inverse}.
\end{proof}

Proposition \ref{prop:i_p construction} gives us a commutative diagram
\begin{equation}\label{i_p pullback}
\xymatrix{
{ \mathcal{S}_{V'/\co_\kk[1/p] } } \ar[r]\ar[d] & { \mathcal{S}_{V/\co_\kk[1/p] } } \ar[d] \\
{ \big( \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}^\Pap_{ (n-2,1) } \big)_{/\co_\kk[1/p]} } \ar[r] & { \big( \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}^\Pap_{ (n-1,1) } \big)_{/\co_\kk[1/p]} }
}
\end{equation}
in which the top horizontal arrow is the composition
\[
\mathcal{S}_{V'/\co_\kk[1/p]} \map{i_\mathfrak{p}} \mathcal{U}^\dagger_\mathfrak{p} \hookrightarrow \mathcal{Z}_V(p)_{/\co_\kk[1/p]} \to \mathcal{S}_{V/\co_\kk[1/p]} ,
\]
and the bottom horizontal arrow sends $(A_0' , A') \mapsto (A_0,A)$, where
\begin{equation}\label{i_p pullback explicit}
A_0 = A_0' , \quad \tilde{A}_0 = A_0 / A_0[\mathfrak{p}] , \quad A = \tilde{A}_0 \times A'.
\end{equation}
Both horizontal arrows are finite and unramified.
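We also record the elementary verification, used in the proof of Proposition \ref{prop:i_p construction}, that the triple $(A_0,A,x)$ constructed there satisfies $\langle x,x\rangle = p$; here $q : A_0 \to \tilde{A}_0$ denotes the quotient map and $\tilde{\psi}_0$ the principal polarization of $\tilde{A}_0$, notation introduced only for this check. Since $A = \tilde{A}_0\times A'$ carries the product principal polarization $\psi$, the inclusion $\tilde{A}_0 \hookrightarrow A$ pulls $\psi$ back to $\tilde{\psi}_0$, and so \eqref{KR hermitian} gives
\[
\langle x , x \rangle
= \psi_0^{-1}\circ x^\vee \circ \psi \circ x
= \psi_0^{-1}\circ q^\vee \circ \tilde{\psi}_0 \circ q .
\]
As $q$ has degree $p$ (its kernel $A_0[\mathfrak{p}]$ has order $p$ because $p$ is split in $\kk$), the polarization $q^\vee\circ\tilde{\psi}_0\circ q$ of the elliptic curve $A_0$ has degree $p^2$, hence equals $p\cdot \psi_0$, and therefore $\langle x,x\rangle = p$. Similarly $\ker(x) = \ker(q) = A_0[\mathfrak{p}]$, as $\tilde{A}_0 \to A$ is a closed immersion.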
\begin{proposition}\label{prop:i_p pullbacks}
The homomorphism
\[
\widehat{\Pic} ( \mathcal{S}_{V/\co_\kk[1/p]} )_\Q \to \widehat{\Pic} ( \mathcal{S}_{V'/\co_\kk[1/p]} )_\Q
\]
induced by \eqref{i_p pullback} sends
\[
\widehat{\omega}^\mathrm{Hdg}_{A/ \mathcal{S}_V } \mapsto \widehat{\omega}^\mathrm{Hdg}_{A'_0/ \mathcal{S}_{V'} } + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'} } ,
\]
where $(A_0' , A')$ is the universal pair over $\mathcal{S}_{V'}$. The same map also sends
\[
\widehat{\taut}_V \mapsto \widehat{\taut}_{V'} \quad \mbox{and}\quad (\mathrm{Exc}_V,0) \mapsto (\mathrm{Exc}_{V'} ,0 ).
\]
\end{proposition}

\begin{proof}
It follows from \eqref{i_p pullback explicit} that the top horizontal arrow in \eqref{i_p pullback} sends
\[
\widehat{\omega}^\mathrm{Hdg}_{A/ \mathcal{S}_V } \mapsto \widehat{\omega}^\mathrm{Hdg}_{\tilde{A}_0/ \mathcal{S}_{V'} } + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'} }.
\]
As we have inverted $p$ on the base, the degree $p$ quotient map $A_0' \to \tilde{A}_0$ induces an isomorphism on Lie algebras, and hence an isomorphism
\[
\omega^\mathrm{Hdg}_{\tilde{A}_0/ \mathcal{S}_{V'} } \iso \omega^\mathrm{Hdg}_{ A'_0/ \mathcal{S}_{V'} } .
\]
This isomorphism does not respect the metrics \eqref{hodge metric}, but one can easily check that
\[
\widehat{\omega}^\mathrm{Hdg}_{\tilde{A}_0/ \mathcal{S}_{V'} } = \widehat{\omega}^\mathrm{Hdg}_{ A'_0/ \mathcal{S}_{V'} } + (0 , - \log(p) )
\]
as elements of $\widehat{\Pic}( \mathcal{S}_{V'/\co_\kk[1/p]})$. The correction term $(0 , - \log(p) )$ is torsion in the arithmetic Picard group, as
\[
2 ( 0 , -\log(p) ) = (\mathrm{div}(p) , - \log(p^2) ) = 0,
\]
and the first claim follows.

For the second claim, let $S$ be an $\co_\kk[1/p]$-scheme, fix an $S$-point $(A_0' , A') \in \mathcal{S}_{V'}(S)$, and let $(A_0,A) \in \mathcal{S}_V(S)$ be its image under the top horizontal arrow in \eqref{i_p pullback}. Tracing through the construction of $i_\mathfrak{p}$ in Proposition \ref{prop:i_p construction}, we see that
\[
\Lie(A) / \mathcal{F} \iso \Lie(A') / \mathcal{F}'.
\]
It follows immediately from this and \eqref{dual taut} that $\taut_V|_S \iso \taut_{V'}|_S$. At a complex point $s \in S(\C)$ we have a decomposition
\[
H_1(A_s , \Q) = H_1( \tilde{A}_{0s} ,\Q) \oplus H_1( A_s' , \Q),
\]
which induces an isometric embedding
\[
\Hom_\kk( H_1(A_{0s} ,\Q ) , H_1( A_s' , \Q) ) \subset \Hom_\kk( H_1(A_{0s} ,\Q ) , H_1( A_s , \Q) )
\]
of $\kk$-hermitian spaces. Recalling \eqref{taut realization}, this isometric embedding restricts to the isomorphism $\taut_{V',s} \iso \taut_{V,s}$ just constructed, which therefore preserves the metrics defined by \eqref{taut metric}.

Finally, we show that under the top horizontal arrow in \eqref{i_p pullback}, the exceptional divisor $\mathrm{Exc}_V \subset \mathcal{S}_V$ pulls back to the exceptional divisor $\mathrm{Exc}_{V'} \subset \mathcal{S}_{V'}$.
These divisors are, by Definition \ref{def:special exceptional}, the pullbacks of the singular loci \begin{align} \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathrm{Sing}_{ (n-1,1) } & \subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}^\Pap_{ (n-1,1) } \label{first singular} \\ \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathrm{Sing}_{ (n-2,1) } & \subset \mathcal{M}_{(1,0)} \times_{\co_\kk} \mathcal{M}^\Pap_{ (n-2,1) } \label{second singular} \end{align} of \eqref{singular} under the vertical arrows in \eqref{i_p pullback}, and so it suffices to prove that the first singular locus \eqref{first singular} pulls back to the second singular locus \eqref{second singular} under the bottom horizontal arrow in \eqref{i_p pullback}. Moreover, as each of \eqref{first singular} and \eqref{second singular} is a reduced $\co_\kk$-stack of dimension $0$, and as the bottom horizontal arrow in \eqref{i_p pullback} is finite and unramified, it suffices to check this on the level of geometric points. This is an easy exercise in linear algebra, using the characterization of the singular locus found in Remark \ref{rem:singular description}. %So, let $F$ be an algebraically closed field $F$ of characteristic dividing $D$, fix an $F$-point %(A_0',A') \in \mathcal{M}_{(1,0)}(F) \times \mathcal{M}^\Pap_{ (n-2,1) }(F) , %and let %(A_0,A) \in \mathcal{M}_{(1,0)}(F) \times \mathcal{M}^\Pap_{ (n-1,1) }(F) %be its image under the bottom horizontal arrow in \eqref{i_p pullback}. %By \eqref{i_p pullback explicit}, there in an $\co_\kk$-linear isomorphism of $F$ vector spaces %\Lie(A) = \Lie( \tilde{A}_0 ) \oplus \Lie(A'). %Consider the endomorphism $\sqrt{-D} \in \End( \tilde{A}_0 )$. As $F$ has characteristic dividing $D$, the square of this endomorphism annihilates the $1$-dimensional space $\Lie(\tilde{A}_0)$, which implies that $\sqrt{-D}$ also annihilates it. %Now we use Pappas's characterization of the singular locus [45] (see also Theorem 2.3.2 of [13]): the pair $(A_0',A')$ lies in \eqref{second singular} if and only if $\sqrt{-D} \in \End(A')$ annihilates the Lie algebra of $A'$. %This hold if and only if $\sqrt{-D} \in \End(A)$ annihilates the Lie algebra of $A$, which holds if and only if $(A_0,A)$ lies in \eqref{first singular}. % Thus \eqref{first singular} pulls back to \eqref{second singular} under \eqref{i_p pullback}, completing the proof. \end{proof} \begin{proposition}\label{prop:i_p nonexceptional} The nonexceptional locus $ \mathcal{U}_\mathfrak{p}^\nonexc \subset \mathcal{U}_\mathfrak{p}$ satisfies \begin{equation*} %\label{dagger open} \mathcal{U}_\mathfrak{p}^\nonexc \subset \mathcal{U}_\mathfrak{p}^\dagger, \end{equation*} and the isomorphism of Proposition \ref{prop:i_p construction} restricts to an isomorphism \[ \mathcal{S}^\nonexc_{V' / \co_\kk[1/p] } \iso \mathcal{U}^\nonexc_{\mathfrak{p}} . \] \end{proposition} \begin{proof} Suppose $S$ is an $\co_\kk[1/p]$-scheme, \[ (A_0,A,x) \in \mathcal{U}_\mathfrak{p}^\nonexc(S), \] and let $A'$ be the abelian scheme of \eqref{pi product}. 
In particular,
\[
\Lie(A) = \Lie( \tilde{A}_0 ) \oplus \Lie(A').
\]
The natural map
\[
\mathcal{U}_\mathfrak{p}^\nonexc \to \mathcal{M}_{(n-1,1)} \smallsetminus \mathrm{Exc}_{(n-1,1)}
\]
endows $A$ with a hyperplane $\mathcal{F} \subset \Lie(A)$, and results of Kr\"amer, as summarized in Theorem 2.3.3 of [13], show that this hyperplane is actually determined by the $\co_\kk$-action on $A$ using the following recipe: If we fix a $\pi \in \co_\kk$ such that $\co_\kk= \Z[\pi]$, and let
\[
\overline{\epsilon}_S = \overline{\pi} \otimes 1 - 1\otimes i_S(\overline{\pi}) \in \co_\kk \otimes_\Z \co_S,
\]
where $i_S : \co_\kk \to \co_S$ is the structure map, then
\[
\mathcal{F} = \mathrm{ker}( \overline{\epsilon}_S : \Lie(A) \to \Lie(A) ).
\]
On the other hand, the action of $\co_\kk$ on the Lie algebra of $A_0$ is through the structure morphism $i_S$, and so the same is true of $\tilde{A}_0 = A_0 / A_0[\mathfrak{p}]$. This implies that $\overline{\epsilon}_S$ annihilates $ \Lie( \tilde{A}_0 )$, and hence $\Lie( \tilde{A}_0 ) \subset \mathcal{F}$. Having shown
\[
(A_0,A,x) \in \mathcal{U}_\mathfrak{p}^\dagger(S),
\]
the first claim of the proposition is proved. The second claim follows from the first and Proposition \ref{prop:i_p pullbacks}.
\end{proof}

\begin{proposition}\label{prop:i_p toroidal}
The closed immersion $i_\mathfrak{p} : \mathcal{S}_{V'/\co_\kk[1/p]} \to \mathcal{U}_\mathfrak{p}$ of Proposition \ref{prop:i_p construction} extends uniquely to a proper morphism
\[
\bar{\mathcal{S}}_{V'/\co_\kk[1/p] } \to \bar{\mathcal{U}}_\mathfrak{p},
\]
where the codomain is defined as the Zariski closure of
\[
\mathcal{U}_\mathfrak{p} \subset \bar{\mathcal{Z}}_V(p)_{/\co_\kk[1/p]},
\]
or, equivalently, as the normalization of $ \mathcal{U}_\mathfrak{p} \to \bar{\mathcal{S}}_{V/\co_\kk[1/p]}$.
\end{proposition}

\begin{proof}
For ease of notation, we omit all subscripts $\co_\kk[1/p]$ in the proof. As the exceptional locus of $\bar{\mathcal{S}}_{V'}$ does not meet the boundary, it suffices to construct the extension over its complement
\[
\bar{\mathcal{S}}_{V'}^\nonexc = \bar{\mathcal{S}}_{V'} \smallsetminus \mathrm{Exc}_{V'} .
\]
Consider the universal pair $(A_0,A')$ over $\mathcal{S}_{V'}^\nonexc$, and let
\[
x: A_0 \to A = \tilde{A}_0 \times A'
\]
be as in the proof of Proposition \ref{prop:i_p construction}. In other words, the triple $(A_0,A,x)$ over $\mathcal{S}_{V'}^\nonexc$ determines the isomorphism
\[
i_\mathfrak{p} : \mathcal{S}_{V'}^\nonexc \iso \mathcal{U}_\mathfrak{p}^\nonexc
\]
of Proposition \ref{prop:i_p nonexceptional}. The discussion of \S \ref{ss:toroidal} shows that $A'$ extends to a semi-abelian scheme over $\bar{\mathcal{S}}_{V'}^\nonexc$. The elliptic curve $A_0$ extends to an elliptic curve over $\bar{\mathcal{S}}_{V'}^\nonexc$, and so the same is true of its quotient $\tilde{A}_0$. It follows that $A$ extends to a semi-abelian scheme over $\bar{\mathcal{S}}_{V'}^\nonexc$, and the extension property used in the proof of Lemma \ref{lem:normalized compactification} provides us with a morphism
\[
i_\mathfrak{p} : \bar{\mathcal{S}}_{V'}^\nonexc \to \mathcal{M}_{(1,0)} \times_{\co_\kk} \bar{\mathcal{M}}^\Pap_{(n-1,1)}
\]
taking values in the open subscheme
\[
\mathcal{M}_{(1,0)} \times_{\co_\kk} \big( \bar{\mathcal{M}}^\Pap_{(n-1,1)} \smallsetminus \mathrm{Sing}_{(n-1,1)} \big) \iso \mathcal{M}_{(1,0)} \times_{\co_\kk} \big( \bar{\mathcal{M}}_{(n-1,1)} \smallsetminus \mathrm{Exc}_{(n-1,1)} \big).
\]
This provides us with a morphism
\[
i_\mathfrak{p} : \bar{\mathcal{S}}_{V'}^\nonexc \to \mathcal{M}_{(1,0)} \times_{\co_\kk} \bar{\mathcal{M}}_{(n-1,1)} ,
\]
taking values in the open and closed substack $\bar{\mathcal{S}}_V$. We now have a commutative diagram of solid arrows
\[
\xymatrix{
{ \mathcal{S}_{V'}^\nonexc } \ar[r]^{i_\mathfrak{p}} \ar[d] & { \bar{\mathcal{U}}_\mathfrak{p} } \ar[d] \\
{ \bar{\mathcal{S}}_{V'}^\nonexc } \ar[r] \ar@{..>}[ur] & { \bar{\mathcal{S}}_V }
}
\]
in which the vertical arrow on the right is integral, and hence affine, by its construction as a normalization; see Lemma 29.53.4 of [51]. To complete the proof of the proposition, it suffices to show that there is a unique dotted arrow making the diagram commute.

This immediately reduces to the corresponding claim for affine schemes, in which we are given homomorphisms of rings
\[
\xymatrix{
{ R' } & { B } \ar[l] \ar@{..>}[dl] \\
{ \bar{R}' } \ar[u] & { A } \ar[l] \ar[u]
}
\]
with $B$ integral over $A$ and $\bar{R}' \subset R'$ an inclusion of normal domains with the same field of fractions. The image of any $b\in B$ under $B \to R'$ is integral over $\bar{R}'$, hence contained in $\bar{R}'$. Thus there is a unique dotted arrow making the diagram commute.
\end{proof}

\subsection{Two lemmas on abelian schemes}

For use in the next subsection, we prove two lemmas on abelian schemes. The first is a criterion for determining when a polarization descends to a quotient. The second is a criterion for the existence of an extension as a semi-abelian scheme.

\begin{lemma}\label{lem:polarization descent}
Suppose $f : X\to Y$ is an isogeny of abelian schemes over a Noetherian scheme $S$, and $\psi_X : X\to X^\vee$ is a polarization whose degree $d$ is invertible in $\co_S$. The following are equivalent.
\begin{enumerate}
\item
The kernel of $f$ is contained in $\mathrm{ker}(\psi_X)$, and is totally isotropic with respect to the Weil pairing
\[
e : \mathrm{ker}(\psi_X) \times \mathrm{ker}(\psi_X) \to \mu_d.
\]
\item
There exists a (necessarily unique) polarization $\psi_Y : Y \to Y^\vee$ making the diagram
\[
\xymatrix{
{ X } \ar[r]^{\psi_X} \ar[d]_f & { X^\vee } \\
{ Y } \ar[r]_{\psi_Y} & { Y^\vee } \ar[u]_{f^\vee}
}
\]
commute.
\end{enumerate}
\end{lemma}

\begin{proof}
This is routine. See, for example, Proposition 10.4 of [46].
%We may assume that $S$ is connected, and fix a geometric point $s \to S$. For each $p\mid d$ set
%V_p(X_s) = T_p(X_s) \otimes_{\Z_p} \Q_p.
%The polarization $\psi_X$ induces a $p$-adic Weil pairing
%e_p : V_p(X_s) \times V_p(X_s) \to \Q_p(1),
%and identifies $T_p(X_s^\vee)$ with the dual lattice of $T_p(X_s)$ with respect to \eqref{p-weil}.
%Moreover, we may identify
%\mathrm{ker}(\psi_X)_s \iso T_p(X_s^\vee)/T_p(X_s)
%in such a way that the Weil pairing on the left hand side agrees with the pairing
% T_p(X_s^\vee)/T_p(X_s) \times T_p(X_s^\vee)/T_p(X_s) \to (\Q_p/\Z_p)(1).
%induced by \eqref{p-weil}.
%If we now use $f:X\to Y$ to identify $V_p(X_s) \iso V_p(Y_s)$, the claim is that both properties of the proposition are equivalent to
%\item [(3)]
%for every prime $p\mid d$, the pairing \eqref{p-weil} is $\Z_p(1)$-valued on $T_p(Y_s)$.
\end{proof}

\begin{lemma}\label{lem:semiextension}
Suppose $S$ is a normal Noetherian scheme, $U\subset S$ is a dense open subscheme, and $f : X \to Y$ is an isogeny of abelian schemes over $U$ whose degree $d$ is invertible in $\co_S$. If $X$ extends to a semi-abelian scheme over $S$ then so does $Y$, and both extensions are unique.
\end{lemma}

\begin{proof}
The uniqueness claim follows from Proposition I.2.7 of [15], so we only need to prove the existence of a semi-abelian extension of $Y$. Consider the morphisms
\[
\mathrm{ker}(f) \to X[d] \to U.
\]
The second arrow and the composition are both \'etale, as $d\in \co_U^\times$, and so the first arrow is also \'etale by \cite[\href{https://stacks.math.columbia.edu/tag/02GW}{Tag 02GW}]{stacks-project}. As the first arrow is also a closed immersion, we deduce that $\mathrm{ker}(f) \subset X[d]$ is a union of connected components.

If $X^* \to S$ denotes the semi-abelian extension of $X$, the multiplication map $[d] : X^* \to X^*$ is quasi-finite and flat. Indeed, by Lemma 37.16.4 of [51] flatness can be checked fiberwise on $S$, where it follows from Proposition 3.11 of Expos\'e VI.B of [50], and the surjectivity of $[d]$ on any extension of an abelian variety by a torus. The group scheme $X^*[d] \to S$ is therefore quasi-finite and flat. Using our assumption that $d\in \co_S^\times$, and Lemma 29.36.8 of [51], we deduce that it is \'etale.

As we assume that $S$ is normal, so is $X^*[d]$. As the connected components of a normal scheme are the same as its irreducible components, it follows that the Zariski closure of $\mathrm{ker}(f)$ in $X^*[d]$ is an open and closed subscheme $\mathrm{ker}(f)^* \subset X^*[d]$. In particular $\mathrm{ker}(f)^*\to S$ is quasi-finite \'etale. By Lemma IV.7.1.2 of [44], the fppf quotient $Y^* = X^*/\mathrm{ker}(f)^*$ provides a semi-abelian extension of $Y$ to $S$.
\end{proof}

\subsection{Analysis of $\mathcal{U}_0$}

In this subsection we study the substack $\mathcal{U}_0$ of \eqref{flavor decomp}, and make explicit its relation with $\mathcal{S}_{V'}$. Return to the situation of Lemma \ref{lem:lattice trifurcation}, so that $x\in \Hom_{\co_\kk}(\mathfrak{a}_0,\mathfrak{a})$ with $\langle x,x\rangle=p$ determines an orthogonal decomposition
\begin{equation}\label{W split}
W = \tilde{W}_0 \oplus W' .
\end{equation}
Set $\tilde{\mathfrak{a}}_0 = \mathfrak{a}\cap \tilde{W}_0$.

\begin{lemma}\label{lem:0 linear algebra}
Assume that $\tilde{\mathfrak{a}}_0 = x(\mathfrak{a}_0)$. If we set $\mathfrak{b} = \mathfrak{a} \cap W'$ and let $\mathfrak{b}^\vee \subset W' $ be its dual lattice with respect to $h|_{ W' }$, the $\co_\kk$-lattice
\[
\mathfrak{a}'= \mathfrak{p} \cdot \mathfrak{b}^\vee + \mathfrak{b}
\]
is self-dual with respect to $h|_{W'}$. Moreover, there are inclusions of $\co_\kk$-lattices
\begin{equation}\label{lattice correspondence}
\tilde{\mathfrak{a}}_0 \oplus \mathfrak{a}' \stackrel{p}{\supset} \tilde{\mathfrak{a}}_0 \oplus \mathfrak{b} \stackrel{p^2}{\subset} \mathfrak{a}
\end{equation}
in $W$ of the indicated indices, and a canonical $\co_\kk$-linear injection
\begin{equation}\label{lattice t}
\mathfrak{p}^{-1} \tilde{\mathfrak{a}}_0/ \tilde{\mathfrak{a}}_0 \iso \mathfrak{p}^{-1} \mathfrak{b} / \mathfrak{b} \map{t} \mathfrak{p}^{-1} \mathfrak{a}'/ \mathfrak{a}' .
\end{equation}
\end{lemma}

\begin{proof}
Throughout the proof, we use $x$ to identify $W_0 = \tilde{W}_0$, so that $h|_{W_0} = p h_0$. The projections to the two factors in \eqref{W split} induce isomorphisms
\[
\mathfrak{a} / ( \mathfrak{a}_0 \oplus \mathfrak{b} ) \to \mathfrak{a}_0^\vee / \mathfrak{a}_0, \qquad \mathfrak{a} /( \mathfrak{a}_0 \oplus \mathfrak{b} ) \to \mathfrak{b}^\vee / \mathfrak{b},
\]
where $\mathfrak{a}_0^\vee= p^{-1} \mathfrak{a}_0$ is the dual lattice of $\mathfrak{a}_0$ with respect to $h|_{W_0}$.
Composing these isomorphisms yields an $\co_\kk$-linear isomorphism
\begin{equation}\label{p trivialization}
p^{-1} \mathfrak{a}_0 / \mathfrak{a}_0 \iso \mathfrak{b}^\vee / \mathfrak{b}
\end{equation}
respecting the $p^{-1}\co_\kk/\co_\kk$-valued hermitian forms induced by $h|_{W_0}= ph_0$ and $h|_{W'}$. It follows that $\mathfrak{p} \cdot ( \mathfrak{b}^\vee / \mathfrak{b} ) \subset \mathfrak{b}^\vee / \mathfrak{b}$ is maximal isotropic with respect to $h|_{W'}$, and so $\mathfrak{a}'$ is self-dual under $h|_{W'}$.

Consider the inclusions $\mathfrak{b} \subset \mathfrak{a}' \subset \mathfrak{b}^\vee$ of $\co_\kk$-modules. The first is an isomorphism everywhere locally except at $\overline{\mathfrak{p}}$, while the second is an isomorphism everywhere locally except at $\mathfrak{p}$. In particular, the first induces an isomorphism
\[
\mathfrak{p}^{-1} \mathfrak{b} / \mathfrak{b} \iso \mathfrak{p}^{-1} \mathfrak{a}'/\mathfrak{a}' .
\]
Restricting \eqref{p trivialization} to an isomorphism of $\mathfrak{p}$-torsion submodules defines \eqref{lattice t}. The inclusions of \eqref{lattice correspondence}, with the indicated indices, are clear from what we have said.
\end{proof}

Just as Lemma \ref{lem:0 linear algebra} is more complicated than Lemma \ref{lem:pi linear algebra}, the analysis of $\mathcal{U}_0$ is more complicated than that of $\mathcal{U}_\mathfrak{p}$.

Let $S$ be an $\co_\kk[1/p]$-scheme. Given Lemma \ref{lem:0 linear algebra}, one expects that any $S$-point $(A_0,A,x) \in \mathcal{U}_0(S)$ should determine a diagram of abelian schemes with $\co_\kk$-actions
\begin{equation}\label{abelian correspondence}
\xymatrix{
{ \tilde{A}_0 \times A' } & &{ \tilde{A}_0 \times B} \ar[ll]_{\deg = p} \ar[rr]^{\deg = p^2} & & { A}
}
\end{equation}
in which the arrows are $\co_\kk$-linear isogenies of the indicated degrees. There should also be a canonical $\co_\kk$-linear closed immersion
\begin{equation}\label{t construct}
\tilde{A}_0[\mathfrak{p}] \iso B[\mathfrak{p}] \map{t} A'[ \mathfrak{p} ],
\end{equation}
and the morphism $x:A_0 \to A$ should factor as
\[
A_0 \iso \tilde{A}_0 \hookrightarrow \tilde{A}_0 \times B \to A.
\]

Moreover, these abelian schemes should come with polarizations with the following properties. The elliptic curve $\tilde{A}_0$ is equipped with its unique polarization of degree $p^2$, $B$ is equipped with a polarization of degree $p^2$, and $A'$ is equipped with a principal polarization. The pullback of the product polarization on $\tilde{A}_0 \times A'$ via the leftwards arrow in \eqref{abelian correspondence} is the product polarization on $\tilde{A}_0 \times B$, and similarly for the pullback of the principal polarization on $A$ via the rightward arrow.

Here is how to construct the data above from $(A_0,A,x)$. As in the proof of Proposition \ref{prop:split divisor flavors}, at any geometric point $s \to S$ we may apply Lemma \ref{lem:0 linear algebra} with $\mathfrak{a}_0$ and $\mathfrak{a}$ replaced by the $p$-adic Tate modules of $A_{0s}$ and $A_s$ to obtain $\co_\kk$-lattices
\begin{equation}\label{tate correspondence}
T_p( \tilde{A}_{0s}) \oplus T_p( A_s') \supset T_p( \tilde{A}_{0s}) \oplus T_p(B_s) \subset T_p(A_s)
\end{equation}
analogous to \eqref{lattice correspondence}, along with an $\co_\kk$-linear injection
\begin{equation}\label{tate-level t}
\mathfrak{p}^{-1} T_p(\tilde{A}_{0s})/T_p(\tilde{A}_{0s}) \iso \mathfrak{p}^{-1} T_p(B_s)/T_p(B_s) \map{t} \mathfrak{p}^{-1} T_p(A_s')/T_p(A_s').
\end{equation}
To be clear, we have not yet constructed the abelian schemes $\tilde{A}_0$, $A'$, and $B$, only lattices in $T_p(A_s)[1/p]$ that we choose to call $T_p(\tilde{A}_{0s})$, $T_p(A_s')$, and $T_p(B_s)$.
However, as $p\in \co_S^\times$ and both inclusions in \eqref{tate correspondence} have finite index, there are abelian schemes $C$ and $D$ over $S$ with $\co_\kk$-actions and $\co_\kk$-linear isogenies
\[
\xymatrix{
{ D } & { C } \ar[r]^{p^2} \ar[l]_{p} & { A }
}
\]
of the indicated degrees that identify
\begin{equation}\label{tate-level decomp}
T_p(C_s) = T_p(\tilde{A}_{0s}) \oplus T_p(B_s) ,\qquad T_p(D_s) = T_p(\tilde{A}_{0s}) \oplus T_p(A_s').
\end{equation}
The condition that $\langle x,x\rangle=p$ implies that
\[
A_0 \map{x} A \iso A^\vee \map{x^\vee} A_0^\vee \iso A_0
\]
is multiplication by $p$, which implies that the composition
\[
A \iso A^\vee \map{x^\vee} A_0^\vee\iso A_0 \map{x} A
\]
is a Rosati-fixed element $\alpha \in \End_{\co_\kk}(A)[1/p]$ such that $\alpha\circ\alpha= p\alpha$. In particular, we obtain an idempotent
\[
p^{-1}\alpha \in \End_{\co_\kk}(A)[1/p]\iso \End_{\co_\kk}(C)[1/p]\iso \End_{\co_\kk}(D)[1/p],
\]
whose induced actions on $T_p(C_s)[1/p]$ and $T_p(D_s)[1/p]$ are just the projections to the first summands in \eqref{tate-level decomp}. It follows that we in fact have
\[
p^{-1}\alpha \in \End_{\co_\kk}(C) \iso \End_{\co_\kk}(D).
\]
These idempotents are the projections to the first factors for unique decompositions
\[
C= \tilde{A}_0 \times B ,\qquad D=\tilde{A}_0 \times A'
\]
of abelian schemes with $\co_\kk$-actions, and the image of $x$ under
\[
\Hom_{\co_\kk}(A_0 , A) [1/p] \iso \Hom_{\co_\kk}(A_0 , C)[1/p] \to \Hom_{\co_\kk}(A_0 , \tilde{A}_0)[1/p]
\]
is an isomorphism $A_0 \iso \tilde{A}_0$. In particular, we now have $\co_\kk$-linear isogenies \eqref{abelian correspondence} such that taking $p$-adic Tate modules recovers \eqref{tate correspondence}, and \eqref{tate-level t} induces the closed immersion \eqref{t construct}.

If we pull back the principal polarization via $\tilde{A}_0 \times B \to A$, we obtain a polarization on $\tilde{A}_0 \times B$ of degree $p^4$. As the idempotent $p^{-1}\alpha$ constructed above is Rosati-fixed, this polarization of $\tilde{A}_0 \times B$ splits as the product of polarizations $\tilde{A}_0 \to \tilde{A}_0^\vee$ and $B\to B^\vee$. By examining the induced Weil pairing on $p$-adic Tate modules, one can show that each polarization has kernel isomorphic, \'etale locally on $S$, to $\co_\kk/(p)$, and that
\[
\ker(B\to A') = \ker( B\to B^\vee )[\overline{\mathfrak{p}}]
\]
is totally isotropic under the Weil pairing on $\ker(B\to B^\vee)$. Hence, by Lemma \ref{lem:polarization descent}, $B\to B^\vee$ descends to a principal polarization on $A'$. This completes the construction of the abelian schemes \eqref{abelian correspondence} with all of their expected extra structure.

Now apply this construction to the universal triple $(A_0,A,x)$ over $\mathcal{U}_0$. As $p\in \co_{\mathcal{U}_0}^\times$, the $p$-power isogenies \eqref{abelian correspondence} induce an isomorphism
\begin{equation}\label{i_0 Lie}
\Lie(A) \iso \Lie( \tilde{A}_0) \times \Lie(A')
\end{equation}
of rank $n$ vector bundles on $\mathcal{U}_0$. Recalling that $A$ comes equipped with an $\co_\kk$-stable hyperplane $\mathcal{F} \subset \Lie(A)$, denote by
\[
\mathcal{U}_0^\dagger \subset \mathcal{U}_0
\]
the largest closed substack over which $\Lie(\tilde{A}_0) \subset \mathcal{F}$.

Define a finite \'etale cover
\begin{equation}\label{tau cover}
\mathcal{T}_{V'} \to \mathcal{S}_{V'/\co_\kk[1/p]}
\end{equation}
of degree $p^{n-1}-1$ as follows: for any $\co_\kk[1/p]$-scheme $S$, let $\mathcal{T}_{V'}(S)$ be the groupoid of triples $(A'_0,A',t)$ in which
\[
(A'_0,A') \in \mathcal{S}_{V'}(S) \quad \mbox{and} \quad t : A'_0[ \mathfrak{p} ] \to A'[\mathfrak{p}]
\]
is an $\co_\kk$-linear closed immersion of group schemes over $S$.

\begin{proposition}\label{prop:i_0 construction}
There is a canonical isomorphism $i_0 : \mathcal{T}_{V'} \iso \mathcal{U}_0^\dagger$.
\end{proposition}

\begin{proof}
The morphism $\mathcal{U}_0^\dagger \to \mathcal{T}_{V'}$ is essentially described above. Suppose $S$ is an $\co_\kk[1/p]$-scheme and
\[
(A_0,A,x) \in \mathcal{U}_0^\dagger(S).
\]
We can use the isomorphism \eqref{i_0 Lie} to endow the abelian scheme $A'$ of \eqref{abelian correspondence} with the rank one local direct summand
\[
\mathcal{F} ' = \mathcal{F} / \Lie( \tilde{A}_0) \subset \Lie(A) /\Lie( \tilde{A}_0) = \Lie(A')
\]
to obtain a point $A' \in \mathcal{M}_{(n-2,1)}(S)$. Setting $A_0'=\tilde{A}_0$, this defines a morphism
\[
\mathcal{U}_0^\dagger \map{ (A_0,A,x) \mapsto (A'_0,A') } \mathcal{S}_{V' / \co_\kk[1/p] },
\]
and the closed immersion \eqref{t construct} defines the desired lift to $\mathcal{U}_0^\dagger \to \mathcal{T}_{V'}$.

Now we construct the inverse. Start with a point
\[
(A'_0, A' , t) \in \mathcal{T}_{V'}(S).
\]
Denote by $B$ the abelian scheme dual to
\[
B^\vee = A' / \mathrm{Im}(t).
\]
Using the principal polarization to identify $A'$ with its dual, the quotient map $A' \to B^\vee$ dualizes to a morphism $B \to A'$, and the composition
\[
B\to A' \to B^\vee
\]
is a degree $p^2$ polarization (it is the pullback via $B \to A'$ of the principal polarization on $A'$). It has the property that each factor on the right hand side of
\[
\ker( B \to B^\vee ) = \ker(B \to B^\vee ) [\mathfrak{p}] \times \ker(B \to B^\vee ) [ \overline{\mathfrak{p}} ]
\]
has order $p$, and the second factor is the kernel of $B\to A'$. In particular, the induced map $B[\mathfrak{p}] \to A'[\mathfrak{p}]$ is an isomorphism, and the closed immersion
\[
A'_0[\mathfrak{p}] \map{t} A'[\mathfrak{p}] \iso B[\mathfrak{p}]
\]
has image $\ker(B \to B^\vee) [\mathfrak{p}] $. Setting $\tilde{A}_0 = A'_0$, the resulting isomorphism
\[
\tilde{A}_0[\mathfrak{p}] \iso \ker(B \to B^\vee) [\mathfrak{p}]
\]
admits a unique $\co_\kk$-linear extension to an isomorphism
\[
t : \tilde{A}_0[ p ] \iso \ker(B \to B^\vee)
\]
identifying the Weil pairings on source and target. The antidiagonal subgroup
\[
\Delta \define \tilde{A}_0[p]\map{ a_0 \mapsto ( - a_0 , t (a_0) ) } \tilde{A}_0 \times B
\]
is totally isotropic under the Weil pairing induced by the product polarization (where $\tilde{A}_0$ is given the polarization of degree $p^2$), which therefore descends, by Lemma \ref{lem:polarization descent}, to a principal polarization on the quotient
\[
A = (\tilde{A}_0\times B)/\Delta.
\]
As $p \in \co_S^\times$, there is an induced isomorphism of Lie algebras \eqref{i_0 Lie}, which allows us to define a corank one local direct summand $\mathcal{F} \subset \Lie(A)$ as the product
\[
\mathcal{F} = \Lie(\tilde{A}_0) \times \mathcal{F}'.
\]
This defines a point $A \in \mathcal{M}_{(n-1,1)}(S)$. Setting $A_0 = \tilde{A}_0$, the composition
\[
A_0=\tilde{A}_0 \hookrightarrow \tilde{A}_0 \times B \to A
\]
defines $x \in \Hom_{\co_\kk}(A_0,A)$ with $\langle x,x\rangle=p$.

The above construction determines a morphism
\[
\mathcal{T}_{V'} \map{ (A'_0,A',t) \mapsto (A_0,A,x) } \mathcal{Z}_V(p)
\]
taking values in the open substack $\mathcal{U}_0^\dagger$, which is inverse to the map $\mathcal{U}_0^\dagger \to \mathcal{T}_{V'}$ constructed above.
\end{proof}

Proposition \ref{prop:i_0 construction} gives us morphisms
\begin{equation}\label{i_0 pullback}
\xymatrix{
{ \mathcal{S}_{V'/\co_\kk[1/p] } } & & { \mathcal{T}_{V'} } \ar[rr] \ar[ll]_{\eqref{tau cover}} & & { \mathcal{S}_{V/\co_\kk[1/p] } }
}
\end{equation}
in which the arrow on the right is the (finite and unramified) composition
\[
{\mathcal{T}_{V'} } \map{i_0 } \mathcal{U}^\dagger_0 \hookrightarrow \mathcal{Z}_V(p)_{/\co_\kk[1/p]} \to \mathcal{S}_{V/\co_\kk[1/p]} ,
\]
sending $(A_0' , A' , t ) \mapsto (A_0,A)$, where $A_0=A_0'$, and $A$ is related to $A'$ by a diagram \eqref{abelian correspondence}.

\begin{proposition}\label{prop:i_0 pullbacks}
The hermitian line bundles
\[
\widehat{\omega}^\mathrm{Hdg}_{A_0'/ \mathcal{S}_{V'}} + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} \in \widehat{\Pic}( \mathcal{S}_{V' } )
\quad\mbox{and}\quad
\widehat{\omega}^\mathrm{Hdg}_{A/ \mathcal{S}_{V}} \in \widehat{\Pic}( \mathcal{S}_{V } )
\]
have the same images under the homomorphisms
\[
\xymatrix{
{ \widehat{\Pic}( \mathcal{S}_{V'} ) } \ar[r] & { \widehat{\Pic} ( \mathcal{T}_{V'} ) _\Q } & { \widehat{\Pic}( \mathcal{S}_{V } ) } \ar[l]
}
\]
induced by \eqref{i_0 pullback}. The same is true of $(\mathrm{Exc}_{V'} ,0 )$ and $(\mathrm{Exc}_V,0)$, and of $\widehat{\taut}_{V'}$ and $\widehat{\taut}_V$.
\end{proposition}

\begin{proof}
If $(A'_0,A')$ denotes the pullback of the universal object via the left arrow in \eqref{i_0 pullback}, and $(A_0,A)$ denotes the pullback of the universal object via the right arrow in \eqref{i_0 pullback}, examination of the proof of Proposition \ref{prop:i_0 construction} shows that there is a quasi-isogeny
\[
f \in \Hom( A'_0 \times A' , A)[1/p]
\]
of degree $\deg(f)=p$. As in the proof of Proposition \ref{prop:i_p pullbacks}, this implies that
\[
\widehat{\omega}^\mathrm{Hdg}_{A/ \mathcal{T}_{V'}} = \widehat{\omega}^\mathrm{Hdg}_{ A'_0 / \mathcal{T}_{V'}} + \widehat{\omega}^\mathrm{Hdg}_{ A' / \mathcal{T}_{V'}} + (0 , -\log(p))
\]
as elements of $\widehat{\Pic}(\mathcal{T}_{V'} )$, and the term $(0,-\log(p))$ is torsion. The first claim follows from this. The remaining claims are essentially the same as in Proposition \ref{prop:i_p pullbacks}, and we leave the details to the reader.
\end{proof}

\begin{proposition}\label{prop:i_0 nonexceptional}
The nonexceptional locus $ \mathcal{U}_0^\nonexc \subset \mathcal{U}_0$ satisfies
\[
\mathcal{U}_0^\nonexc \subset \mathcal{U}_0^\dagger.
\]
Moreover, the morphism of Proposition \ref{prop:i_0 construction} restricts to an isomorphism
\[
\mathcal{S}^\nonexc_{V' / \co_\kk[1/p] } \iso \mathcal{U}^\nonexc_0 .
\]
\end{proposition}

\begin{proof}
The proof is essentially the same as Proposition \ref{prop:i_p nonexceptional}, and the details are left to the reader.
\end{proof}

\begin{proposition}\label{prop:i_0 toroidal}
Denote by $\bar{\mathcal{T}}_{V'}$ the normalization of $ \mathcal{T}_{V'} \to \bar{\mathcal{S}}_{V'/\co_\kk[1/p] }$. The closed immersion $i_0:\mathcal{T}_{V'} \to \mathcal{U}_0$ of Proposition \ref{prop:i_0 construction} extends to a proper morphism
\[
\bar{\mathcal{T}}_{V'} \to \bar{\mathcal{U}}_0,
\]
where the codomain is defined as the Zariski closure of
\[
\mathcal{U}_0 \subset \bar{\mathcal{Z}}_V(p)_{/\co_\kk[1/p]},
\]
or, equivalently, as the normalization of $ \mathcal{U}_0 \to \bar{\mathcal{S}}_{V/\co_\kk[1/p]}$.
\end{proposition}

\begin{proof}
Consider the universal triple $(A_0,A',t)$ over $\mathcal{T}_{V'}$, and let $(A_0, A)$ be the pullback of the universal pair over $\mathcal{S}_V$ via the composition
\[
\mathcal{T}_{V'} \map{i_0} \mathcal{U}_0 \to \mathcal{S}_{V} .
\]
We know that $A'$ extends to a semi-abelian scheme over $\bar{\mathcal{T}}_{V'}$, obtained as a pullback via
\[
\bar{\mathcal{T}}_{V'} \to \bar{\mathcal{S}}_{V'} \to \bar{\mathcal{M}}_{(n-2,1)},
\]
and that $A_0$ extends to an elliptic curve over $\bar{\mathcal{T}}_{V'}$, obtained as a pullback via
\[
\bar{\mathcal{T}}_{V'} \to \bar{\mathcal{S}}_{V'} \to \mathcal{M}_{(1,0)}.
\]
Using Lemma \ref{lem:semiextension} and the isogenies \eqref{abelian correspondence}, we deduce that $A$ also extends to a semi-abelian scheme over $\bar{\mathcal{T}}_{V'}$. With this extension in hand, the proof is essentially identical to that of Proposition \ref{prop:i_p toroidal}.
\end{proof}

\subsection{Proof of Theorems \ref{thm:height descent 1} and \ref{thm:height descent 2}}

Define an $\co_\kk[1/p]$-stack
\[
\mathcal{Z}^\dagger_V(p) = \mathcal{T}_{V'} \sqcup \mathcal{S}_{V' / \co_\kk[1/p] } \sqcup \mathcal{S}_{V' /\co_\kk[1/p]} .
\]
Propositions \ref{prop:i_p construction} and \ref{prop:i_0 construction} provide us with a canonical closed immersion
\[
\mathcal{Z}^\dagger_V(p) \map{i = i_0 \sqcup i_\mathfrak{p} \sqcup i_{\overline{\mathfrak{p}} }} \mathcal{U}_0 \sqcup \mathcal{U}_\mathfrak{p} \sqcup \mathcal{U}_{\overline{\mathfrak{p}}} = \mathcal{Z}_V(p)_{ / \co_\kk[1/p] }
\]
with image the closed substack $ \mathcal{U}^\dagger_0 \sqcup \mathcal{U}^\dagger_\mathfrak{p} \sqcup \mathcal{U}^\dagger_{\overline{\mathfrak{p}}}$. Hence we have a diagram
\begin{equation}\label{divisor correspondence}
\xymatrix{
{ \mathcal{Z}^\dagger_V(p) } \ar[rr]^{i} \ar[d]_\alpha & & { \mathcal{Z}_V(p)_{ / \co_\kk[1/p] } } \ar[d]^\beta \\
{ \mathcal{S}_{V'/\co_\kk[1/p]} } & & { \mathcal{S}_{V/\co_\kk[1/p] } }
}
\end{equation}
in which $\alpha$ is a finite \'etale surjection of degree $p^{n-1}+1$, and $\beta$ is finite.

\begin{lemma}\label{lem:exceptional KR error}
As divisors on $ \mathcal{S}_{V/\co_\kk[1/p] }$, we have
\[
\mathcal{Z}_V(p)_{ / \co_\kk[1/p] } = \mathcal{Z}^\dagger_V(p) + E,
\]
where $\mathcal{Z}^\dagger_V(p)$ has no irreducible components contained in the exceptional divisor $\mathrm{Exc}_{V/\co_\kk[1/p]} \subset \mathcal{S}_{V/\co_\kk[1/p]}$ of Definition \ref{def:special exceptional}, and $E$ is supported entirely on the exceptional divisor.
\end{lemma}

\begin{proof}
The \emph{exceptional locus} of $ \mathcal{Z}^\dagger_V(p)$ is the preimage of the exceptional divisor under $\alpha$, and an irreducible component of it is \emph{exceptional} if it is contained in the exceptional locus. Similarly, the \emph{exceptional locus} of $ \mathcal{Z}_V(p)_{ / \co_\kk[1/p] }$ is the preimage of the exceptional divisor under $\beta$, and an irreducible component of it is \emph{exceptional} if it is contained in the exceptional locus.

The final claims of Propositions \ref{prop:i_p nonexceptional} and \ref{prop:i_0 nonexceptional} show that $i$ restricts to an isomorphism between the non-exceptional loci of the source and target. In particular, it establishes a bijection between the generic points of non-exceptional irreducible components, and corresponding non-exceptional generic points have local rings of the same (finite) length. However, the morphism $\alpha$ is finite \'etale, so no irreducible component of the source can map into the exceptional divisor of the target.
Thus the closed immersion $i$ actually establishes a bijection between irreducible components of the source and non-exceptional irreducible components of the target, in a way that matches up their multiplicities. The claim follows immediately.
\end{proof}

Propositions \ref{prop:i_p toroidal} and \ref{prop:i_0 toroidal} imply that the diagram above extends to
\begin{equation}\label{compact divisor correspondence}
\xymatrix{
{ \bar{\mathcal{Z}}^\dagger_V(p) } \ar[rr]^i \ar[d]_\alpha & & { \bar{\mathcal{Z}}_V(p)_{ / \co_\kk[1/p] } } \ar[d]^\beta \\
{ \bar{\mathcal{S}}_{V'/\co_\kk[1/p]} } & & { \bar{\mathcal{S}}_{V/\co_\kk[1/p] } }
} ,
\end{equation}
where we have defined
\begin{equation}\label{dagger KR}
\bar{\mathcal{Z}}^\dagger_V(p) = \bar{\mathcal{T}}_{V'} \sqcup \bar{\mathcal{S}}_{V' / \co_\kk[1/p] } \sqcup \bar{\mathcal{S}}_{V' /\co_\kk[1/p]}.
\end{equation}
Equivalently, this is the normalization of
\[
\alpha: \mathcal{Z}^\dagger_V(p) \to \bar{\mathcal{S}}_{V'/\co_\kk[1/p] }.
\]

For any $N \ge 1$ prime to $p$, we can add level structure to $\bar{\mathcal{T}}_{V'}$ by defining
\[
\mathcal{T}_{V'}(N) = \mathcal{T}_{V'} \times_{\mathcal{S}_{V'}} \mathcal{S}_{V'}(N),
\]
and letting $\bar{\mathcal{T}}_{V'}(N)$ be the normalization of
\[
\mathcal{T}_{V'}(N) \to \bar{\mathcal{T}}_{V' / \co_\kk[1/N]}.
\]
Equivalently, this is the normalization of
\[
\mathcal{T}_{V'}(N) \to \bar{\mathcal{S}}_{V'}(N)_{ / \co_\kk[ \frac{1}{Np} ]}.
\]

\begin{lemma}\label{lem:T compact}
The stack $\bar{\mathcal{T}}_{V'}(N)$ is regular, flat, and proper over $\co_\kk[\frac{1}{Np} ]$, and is a projective scheme if $N\ge 3$. It is smooth in a neighborhood of its boundary
\[
\partial \bar{\mathcal{T}}_{V'}(N) = \bar{\mathcal{T}}_{V'}(N) \smallsetminus \mathcal{T}_{V'}(N) ,
\]
which is a Cartier divisor smooth over $\co_\kk[\frac{1}{Np}]$.
\end{lemma}

\begin{proof}
The proof is similar to that of Proposition \ref{prop:full compactification}, and is again based on the results of [39] and [41].

Recall from Remark \ref{rem:honest shimura} that the generic fiber of $\mathcal{S}_{V'}$ is the Shimura variety associated to a reductive group $G'$ and a certain compact open subgroup of $G'(\A_f)$, and one can easily check that the generic fiber of the finite \'etale cover
\begin{equation}\label{alt level}
\mathcal{T}_{V'}(N) \to \mathcal{S}_{V'}(N)_{ /\co_\kk[ \frac{1}{Np} ]}
\end{equation}
is obtained by shrinking the compact open subgroup.

Recall from \S \ref{ss:special shimura} that $\mathcal{S}_{V'}(N)$ is constructed as an open and closed substack
\[
\mathcal{S}_{V'}(N) \subset \mathcal{M}_{W_0}(N) \times_{\co_\kk[1/N]} \mathcal{M}_{W'}(N).
\]
In this construction, we may replace $ \mathcal{M}_{W'}(N)$ with the stack $\mathcal{M}^\Pap_{W'}(N)$ appearing in the proof of Proposition \ref{prop:full compactification} to obtain a blow-down
\[
\mathcal{S}^\Pap_{V'}(N) \subset \mathcal{M}_{W_0}(N) \times_{\co_\kk[1/N]} \mathcal{M}^\Pap_{W'}(N).
\]
Repeating the construction of \eqref{alt level} with $\mathcal{S}_{V'}(N)$ replaced by $\mathcal{S}^\Pap_{V'}(N)$ yields a finite \'etale cover
\[
\mathcal{T}^\Pap_{V'}(N) \to \mathcal{S}^\Pap_{V'}(N)_{ /\co_\kk[ \frac{1}{Np} ]},
\]
and $\mathcal{T}_{V'}(N)$ is the blow-up of the normal stack $\mathcal{T}^\Pap_{V'}(N)$ along its proper and $0$-dimensional locus of nonsmooth points.
As in the proof of Proposition \ref{prop:full compactification}, we now have morphisms
\[
\mathcal{T}^\Pap_{V'}(N) \to \mathcal{S}^\Pap_{V' }(N)_{/\co_\kk[ \frac{1}{Np} ]} \to \mathcal{M}^\Pap_{W'}(N)_{/\co_\kk[ \frac{1}{Np} ]} \to \bar{\mathcal{A}}_{/\co_\kk[ \frac{1}{Np} ]},
\]
and we may define $\bar{\mathcal{T}}^\Pap_{V'}(N)$ as the normalization of the composition. We may now apply the results of [39] and [41] to deduce properties of $\bar{\mathcal{T}}^\Pap_{V'}(N)$ from properties of its interior. Arguing exactly as in the proof of Proposition \ref{prop:full compactification}, this compactification is normal and flat, and is a projective scheme if $N\ge 3$. Its boundary is a smooth Cartier divisor. The nonsmooth locus has dimension $0$, and does not meet the boundary.

As $\bar{\mathcal{T}}_{V'}(N)$ can be identified with the blow-up of $\bar{\mathcal{T}}^\Pap_{V'}(N)$ along its nonsmooth locus, it has all the same properties, and is also regular (being smooth near the boundary and regular in the interior).
\end{proof}

Lemma \ref{lem:T compact} and Remark \ref{rem:special compact level} provide us with an $\co_\kk[ 1/Np ]$-stack
\[
\bar{\mathcal{Z}}^\dagger_V(p,N) \define \bar{\mathcal{T}}_{V'}(N) \sqcup \bar{\mathcal{S}}_{V'}(N)_{ / \co_\kk[\frac{1}{Np}] } \sqcup \bar{\mathcal{S}}_{V'}(N)_{/\co_\kk[ \frac{1}{Np} ] } ,
\]
with all the nice properties enumerated in Proposition \ref{prop:full compactification}. Thus \eqref{dagger KR} has its own theory of pre-log singular hermitian line bundles and Burgos-Kramer-K\"uhn arithmetic Chow groups
\[
\widehat{\CH}^d ( \bar{\mathcal{Z}}^\dagger_V(p) , \mathscr{D}_\BKK ) = \mil_{ \substack{ N\ge 3 \\ p \nmid N} } \widehat{\CH}^d ( \bar{\mathcal{Z}}^\dagger_V(p,N) , \mathscr{D}_\BKK ) ,
\]
exactly as in \S \ref{ss:chow}. These Chow groups include notions of arithmetic heights and volumes as in \S \ref{ss:volumes}, taking values in the abelian group $\R/\Q\log(p)$.

One can repeat the construction of the diagram \eqref{compact divisor correspondence} with level structures, and so obtain pullbacks
\[
\xymatrix{
{ \widehat{\CH}^d( \bar{\mathcal{S}}_{V'} ,\mathscr{D}_\BKK ) } \ar[rr]^{\alpha^*} & & { \widehat{\CH}^d( \bar{\mathcal{Z}}^\dagger_V(p) ,\mathscr{D}_\BKK ) } & & { \widehat{\CH}^d( \bar{\mathcal{S}}_V ,\mathscr{D}_\BKK ) } \ar[ll]_{(\beta \circ i)^*}
}
\]
\[
\xymatrix{
{ \widehat{\Pic}( \bar{\mathcal{S}}_{V'} ,\mathscr{D}_\BKK )_\Q } \ar[rr]^{\alpha^*} & & { \widehat{\Pic}( \bar{\mathcal{Z}}^\dagger_V(p) ,\mathscr{D}_\BKK )_\Q } & & { \widehat{\Pic}( \bar{\mathcal{S}}_V ,\mathscr{D}_\BKK )_\Q } \ar[ll]_{(\beta \circ i)^*}
}
\]

\begin{lemma}\label{lem:formal height induction}
If two pre-log singular hermitian line bundles
\[
\widehat{\mathcal{P}}' \in \widehat{\Pic}( \bar{\mathcal{S}}_{V'} , \mathscr{D}_\BKK) \quad \mbox{and}\quad \widehat{\mathcal{P}} \in \widehat{\Pic}( \bar{\mathcal{S}}_{V} , \mathscr{D}_\BKK)
\]
have the same pullback to $ \widehat{\Pic}( \bar{\mathcal{Z}}^\dagger_V(p) ,\mathscr{D}_\BKK ) _\Q$ then
\[
\int_{ \mathcal{Z}_V(p) (\C) } \chern( \widehat{\mathcal{P}})^{n-2} = ( p^{n-1}+1) \int_{ \mathcal{S}_{V'} (\C) } \chern( \widehat{\mathcal{P}}' )^{n-2}.
\]
Moreover, there is an $a(p) \in \Q$ such that
\[
\mathrm{ht}_{\widehat{\mathcal{P}}} (\bar{\mathcal{Z}}_V(p)) = (p^{n-1}+1 ) \cdot \widehat{\vol}( \widehat{\mathcal{P}}' ) + \mathrm{ht}_{\widehat{\mathcal{P}}} (E) + a(p) \log(p) ,
\]
where $E$ is the divisor of Lemma \ref{lem:exceptional KR error}.
\end{lemma}

\begin{proof}
As in the proof of Lemma \ref{lem:exceptional KR error}, the closed immersion $i$ in \eqref{divisor correspondence} restricts to an isomorphism of non-exceptional loci, and hence induces an isomorphism in the generic fiber. Therefore
\begin{align*}
\int_{ \mathcal{Z}_V(p) (\C) } \beta^* \chern( \widehat{\mathcal{P}})^{n-2}
& = \int_{ \mathcal{Z}^\dagger_V(p) (\C) } (\beta \circ i)^* \chern( \widehat{\mathcal{P}})^{n-2} \\
& = \int_{ \mathcal{Z}^\dagger_V(p) (\C) } \alpha^* \chern( \widehat{\mathcal{P}}' )^{n-2} \\
& = ( p^{n-1}+1) \int_{ \mathcal{S}_{V'} (\C) } \chern( \widehat{\mathcal{P}}' )^{n-2} .
\end{align*}
The second claim follows from Lemma \ref{lem:exceptional KR error} and the equalities
\[
\mathrm{ht}_{ \widehat{\mathcal{P}}} (\bar{\mathcal{Z}}^\dagger_V(p))
= \widehat{\vol}( (\beta \circ i)^*\widehat{\mathcal{P}} )
= \widehat{\vol}( \alpha^*\widehat{\mathcal{P}}' )
= (p^{n-1}+1 ) \cdot \widehat{\vol}( \widehat{\mathcal{P}}' )
\]
in $\R/\Q\log(p)$, where the first and last equalities are obtained directly by unpacking the definitions of pullbacks, heights, and arithmetic intersections in [11], and using the fact that $\alpha$ is (away from the boundary) a finite \'etale surjection of degree $p^{n-1}+1$.
\end{proof}

\begin{proof}[Proof of Theorems \ref{thm:height descent 1} and \ref{thm:height descent 2}]
Propositions \ref{prop:i_p pullbacks} and \ref{prop:i_0 pullbacks} imply that the hypotheses of Lemma \ref{lem:formal height induction} are satisfied by
\[
\widehat{\mathcal{P}}' = \widehat{\tautmod}_{V'} \quad \mbox{and} \quad \widehat{\mathcal{P}} = \widehat{\tautmod}_{V} ,
\]
which therefore gives the equality
\[
\int_{ \mathcal{Z}_V(p) (\C) } \chern( \widehat{\tautmod}_V)^{n-2} = ( p^{n-1}+1) \int_{ \mathcal{S}_{V'} (\C) } \chern( \widehat{\tautmod}_{V'} )^{n-2},
\]
and the equality
\[
\mathrm{ht}_{ \widehat{\tautmod}_V }(\bar{\mathcal{Z}}_V(p)) = ( p^{n-1}+1 ) \widehat{\vol} (\widehat{\tautmod}_{V'} ) + \mathrm{ht}_{ \widehat{\tautmod}_V }(E)
\]
up to a rational multiple of $\log(p)$. The second term on the right vanishes by claim (1) of Theorem \ref{thm:taut-hodge compare}, completing the proof of Theorem \ref{thm:height descent 1}.
Similarly, Propositions \ref{prop:i_p pullbacks} and \ref{prop:i_0 pullbacks} show that the hypotheses of Lemma \ref{lem:formal height induction} are satisfied by
\[
\widehat{\mathcal{P}}' = \widehat{\omega}^\mathrm{Hdg}_{A_0'/ \mathcal{S}_{V'}} + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}}
\quad \mbox{and} \quad
\widehat{\mathcal{P}} = \widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V} ,
\]
and so
\[
\mathrm{ht}_{ \widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V} } (\bar{\mathcal{Z}}_V(p))
= (p^{n-1}+1 ) \cdot \widehat{\vol}( \widehat{\omega}^\mathrm{Hdg}_{A_0'/ \mathcal{S}_{V'}} + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} ) + \mathrm{ht}_{ \widehat{\omega}^\mathrm{Hdg}_{A/\mathcal{S}_V} } (E)
\]
up to a rational multiple of $\log(p)$. Once again, the second term on the right vanishes by claim (1) of Theorem \ref{thm:taut-hodge compare}. For the first term on the right, Proposition \ref{prop:easy numerical} and Lemmas \ref{lem:numerical basics} and \ref{lem:trivial volume shift} imply
\begin{align*}
\widehat{\vol}( \widehat{\omega}^\mathrm{Hdg}_{A_0'/ \mathcal{S}_{V'}} + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} )
& = \widehat{\vol}( (0 , C_1) + \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} ) \\
&= \widehat{\vol}( \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} ) + (n-1) C_1 \int_{\mathcal{S}_{V'}(\C) } \chern( \widehat{\omega}^\mathrm{Hdg}_{A'/ \mathcal{S}_{V'}} )^{n-2} ,
\end{align*}
where
\[
C_1 = \log(2\pi) + 2h^\mathrm{Falt}_\kk = - \frac{L'(0,\eps)}{L(0,\eps)} - \frac{\log(D)}{2} .
\]
This proves Theorem \ref{thm:height descent 2}.
\end{proof}

\section{Borcherds products}
\label{s:borcherds}

We continue to work with the Shimura variety $\mathcal{S}_V$ of \eqref{moduli inclusion} associated to a $\kk$-hermitian space $V$ of signature $(n-1,1)$, and now assume $n\ge 2$. After explaining the connection between the complex orbifold $\mathcal{S}_V(\C)$ and the Shimura variety \eqref{eq:X} associated to the unitary group $\Uni(V)$, we will use the results of \S \ref{ss:green functions} to construct Green functions for certain linear combinations of the Kudla-Rapoport divisors $\mathcal{Z}_V(m) \to \mathcal{S}_V$. We then recall the arithmetic theory of Borcherds products on $\mathcal{S}_V$ from [13], and show that one can produce Borcherds products whose divisors are linear combinations of only those $\mathcal{Z}_V(p)$ with $p$ a prime congruent to $1$ modulo $D$, up to a linear combination of vertical divisors which can be computed explicitly.

\subsection{Green functions and Borcherds products}
\label{ss:general borcherds}

Fix an isomorphism
\[
V\iso \Hom_\kk(W_0,W)
\]
as in \S \ref{ss:special shimura}, fix self-dual $\co_\kk$-lattices $\mathfrak{a}_0 \subset W_0$ and $\mathfrak{a} \subset W$ as in \S \ref{ss:basic moduli}, and define a self-dual $\co_\kk$-lattice
\begin{equation}\label{lattice choice}
L = \Hom_{\co_\kk}(\mathfrak{a}_0 , \mathfrak{a}) \subset V.
\end{equation}
The subgroup $G\subset \mathrm{GU}(W_0) \times \mathrm{GU}(W)$ of Remark \ref{rem:honest shimura} acts on both $W_0$ and $W$ via unitary similitudes, and we denote by $K \subset G(\A_f)$ the largest compact open subgroup fixing the lattices $\mathfrak{a}_0$ and $\mathfrak{a}$.
Recalling the hermitian symmetric domain $\mathcal{D}$ of \S \ref{ss:u(v) shimura}, we identify
\[
\mathcal{S}_V(\C) \iso G(\Q) \backslash \mathcal{D} \times G(\A_f) / K
\]
as in \S 2 of [13]. The group $G$ also acts on $V$, defining a surjective homomorphism
\[
G \to H=\Uni(V)
\]
with kernel the diagonally embedded $\mathrm{Res}_{\kk/\Q} \mathbb{G}_m$. Denoting again by $K$ the image of the above compact open subgroup under $G(\A_f) \to H(\A_f)$, and recalling the complex Shimura variety \eqref{eq:X}, we obtain a finite cover
\begin{equation}\label{cover}
\mathcal{S}_V(\C) \to \mathrm{Sh}_K(H,\mathcal{D}).
\end{equation}

Let $H^\infty_{2-n}(D,\eps^n)$ denote the space of $\C$-valued harmonic Maass forms $f$ of weight $2-n$, level $\Gamma_0(D)$, and character $\eps^n$ such that
\begin{itemize}
\item $f$ is bounded at all cusps of $\Gamma_0(D)$ different from the cusp $\infty$,
\item $f$ has polynomial growth at $\infty$, in the sense that there is a
\[
P_f = \sum_{m\leq 0} c^+(m)q^m \in \C[q^{-1}]
\]
such that $f-P_f=o(1)$ as $q$ goes to $0$.
\end{itemize}
Such a harmonic Maass form has a Fourier expansion analogous to \eqref{eq:fourierf} with Fourier coefficients $c^\pm(m)\in \C$.

Fix an $f\in H_{2-n}^\infty(D,\eps^n)$ with Fourier coefficients $c^\pm(m)$. This form can be lifted to a vector valued harmonic Maass form, in the sense of \S \ref{ss:green functions}, by setting
\begin{align} \label{eq:lift}
\tilde f=\sum_{\gamma\in \Gamma_0(D)\backslash \SL_2(\Z)} (f|_{2-n} \gamma) (\omega_L(\gamma)^{-1}\varphi_0) \in H_{2-n}(\omega_L) ,
\end{align}
where $\varphi_0 \in S_L = \C[L'/L]$ is the characteristic function of $0 \in L'/L$. We denote the Fourier coefficients of $\tilde f$ by $\tilde c^\pm(m,\mu)$ for $\mu\in L'/L$ and $m\in \Q$. The coefficients of $\tilde f$ can be computed in terms of the coefficients of $f$, and for $m<0$ we have
\begin{align} \label{eq:coeffrel}
\tilde c^{+}(m,\mu) =\begin{cases} c^+(m) & \text{if $\mu=0$}\\
0&\text{if $\mu\neq 0$} \end{cases}
\end{align}
as in Proposition 6.1.2 of [13] or \S 5 of [47].

Under the covering map \eqref{cover}, the divisors $Z(m)$ and the hermitian line bundle $\mathcal{L}$ of \S \ref{ss:u(v) shimura} pull back to the divisors $\mathcal{Z}_V(m)$ and $\mathcal{L}_V$ of \S \ref{ss:special shimura}. This allows us to apply the construction \eqref{eq:AutoGreen} to the vector valued form \eqref{eq:lift} to obtain a Green function $\Phi(z,h,\tilde f)$ for the analytic divisor
\[
Z(f) = \sum_{m>0} c^+(-m) Z(m) \in \mathrm{Div}_\C( \mathrm{Sh}_K(H,\mathcal{D}) )
\]
of \eqref{eq:zf}, which we pull back to a Green function $\Phi_V(f)$ for the divisor
\[
\mathcal{Z}_V(f)= \sum_{m>0} c^+(-m) \mathcal{Z}_V(m) \in \mathrm{Div}_\C ( \mathcal{S}_V ) .
\]
If $n>2$, or if $n=2$ and $V$ is anisotropic, then
\begin{equation}\label{S_V green integral}
\vol_\C(\widehat{\taut}_V )^{-1} \int_{ \mathcal{S}_V(\C) } \Phi_V (f) \chern( \widehat{\taut}_V )^{n-1} = \sum_{ m>0 } c^+(-m)B'(m, 0 ,s_0)
\end{equation}
by Theorem \ref{thm:int} and \eqref{eq:coeffrel}. Here, as always, $s_0=(n-1)/2$.

Now consider the subspace of weakly holomorphic forms
\[
M_{2-n}^{!,\infty}(D,\eps^n) \subset H^\infty_{2-n}(D,\eps^n) .
\]
These are meromorphic modular forms of the indicated weight, level, and character that are holomorphic outside the cusp $\infty$ of $\Gamma_0(D)$. If we fix an
\begin{equation}\label{input form}
f(\tau) = \sum_{m \gg -\infty} c(m) q^m \in M_{2-n}^{ ! , \infty}( D , \eps^n )
\end{equation}
in such a way that $c(m) \in \Z$ for all $m$ then, after possibly replacing $f$ by a nonzero integer multiple, Theorem 5.3.1 of [13] provides us with a Borcherds product $\psi(f)$.
This is a rational section of the hermitian line bundle $\mathcal{L}_V^{\otimes k(f)}$ on $\bar{\mathcal{S}}_V$ satisfying
\begin{equation}\label{borcherds norm}
\Phi_V(f) = - \log\| \psi(f) \|^2,
\end{equation}
whose divisor (at least if $n>2$) is
\begin{equation}\label{naive Bdiv}
\bar{\mathcal{Z}}_V(f) = \sum_{m>0} c(-m) \bar{\mathcal{Z}}_V(m)
\end{equation}
plus an explicit (but complicated) linear combination of boundary components and vertical divisors in characteristics $p|D$.

\begin{remark}\label{rem:lifted weight}
The integer $k(f)$, called the \emph{weight} of the Borcherds product, is given as follows: Applying the construction \eqref{eq:lift} yields a weakly holomorphic vector valued form
\[
\tilde{f} = \sum_{ \substack{ m\in \Q \\ m \gg -\infty } } \tilde{c}(m) q^m \in M_{2-n}^!(\omega_L)
\]
as in \eqref{eq:fourierf}, with coefficients $\tilde{c}(m) \in S_L=\C [L'/L]$. The weight
\begin{equation}\label{wt def}
k(f) = \tilde{c}(0,0)
\end{equation}
is the value of $\tilde{c}(0)$ at the trivial coset in $L'/L$.
\end{remark}

The space of all forms \eqref{input form} is an infinite dimensional $\Q$-vector space. As evidenced by the following theorem, this gives us a great deal of freedom to choose a Borcherds product $\psi(f)$ with prescribed properties.

\begin{theorem}\label{thm:good borcherds}
Assume $n>2$. Given any infinite subset $\mathscr{A} \subset \Z^+$, there is a weakly holomorphic form \eqref{input form} satisfying
\begin{enumerate}
\item $c(m) \in \Z$ for all $m\in \Z$,
\item $c(-m) =0$ for all positive integers $m \not\in \mathscr{A}$,
\item $k(f) \neq 0$,
\item up to a vertical divisor supported in characteristics $p\mid D$, the divisor of $\psi(f)$ is equal to \eqref{naive Bdiv}.
\end{enumerate}
In particular, the divisor of $\psi(f)$ contains no irreducible components supported on the boundary.
\end{theorem}

The proof will occupy the entirety of the next subsection.

\subsection{Borcherds products with prescribed properties}

As $L' \subset V$ is the dual lattice of $L$ with respect to the quadratic form \eqref{Q quadratic}, any $\mu\in L'/L$ determines a coset $\Z -Q(\mu) \subset \Q$. Denote by $P(\omega_L)$ the space of finite Fourier polynomials
\[
\sum_{\mu\in L'/L}\sum_{\substack{m \in \Z-Q(\mu)\\ m\leq 0}} c(m ,\mu ) \varphi_\mu \cdot q^m
\]
valued in $S_L=\C[L'/L]$, whose coefficients satisfy $c(m,\mu)=c(m,-\mu)$. As $n\ge 2$, we may view $H_{2-n}(\omega_L) \subset P(\omega_L)$ by sending a harmonic Maass form \eqref{eq:fourierf} to its principal part
\[
\sum_{m\le 0} c^+(m) \cdot q^m = \sum_{\mu\in L'/L} \sum_{m\le 0} c^+(m,\mu) \varphi_\mu \cdot q^m .
\]
In particular, this allows us to view $M^!_{2-n}(\omega_L) \subset P(\omega_L)$.

Denote by $S_L[[q^{1/D}]]$ the space of $S_L$-valued formal power series in the variable $q^{1/D}$, and denote by
\[
\vartheta \define q\frac{d}{dq} : S_L[[q^{1/D}]] \to S_L[[q^{1/D}]]
\]
the Ramanujan theta operator. For any $k\in \Z$, taking $q$-expansions allows us to view $M_k(\bar\omega_L) \subset S_L[[q^{1/D}]]$.

Following [7] we consider the $\C$-bilinear pairing
\[
\{ - , - \} : P(\omega_L)\times S_L[[q^{1/D}]] \to \C
\]
defined by
\[
\{p,g\} = \sum_{ \substack{ m\ge 0 \\ \mu\in L'/L } } c(-m,\mu) b(m,\mu) ,
\]
where $c(m,\mu)$ and $b(m,\mu)$ denote the coefficients of $p$ and $g$, respectively.

\begin{proposition} \label{prop:crit}
For any
\[
p=\sum_{\mu\in L'/L}\sum_{m<0} c(m,\mu)\varphi_\mu\, q^{m}\in P(\omega_L),
\]
the following are equivalent:
\begin{enumerate}
\item There exists a weakly holomorphic modular form $f\in M_{2-n}^!(\omega_L)$ whose principal part agrees with $p$ up to a constant in $S_L$.
\item We have $\{p, g\}=0$ for every $g\in S_{n}(\bar\omega_L)$.
\end{enumerate}
When these conditions hold, the constant term $c(0,0)$ of $f$ is related to the value of the Eisenstein series
\begin{equation}\label{E_L}
E_L=E_L(\tau,s_0,n)\in M_n(\bar\omega_L)
\end{equation}
of \eqref{eq:fouriereis} at $s_0=(n-1)/2$ by
\begin{align} \label{eq:ctcond}
-c(0,0)= \{p,E_L\}=\sum_{\mu\in L'/L}\sum_{m>0} c(-m,\mu)\, B(m,\mu, s_0) .
\end{align}
\end{proposition}

\begin{proof}
See [3], or Corollary 3.9 of [7].
\end{proof}

\begin{proposition} \label{prop:keyprop}
Let $\mathscr{A} \subset \Z^+$ be any infinite subset. If $n>2$, there exists a weakly holomorphic form $f\in M_{2-n}^!(\omega_L)$ whose Fourier coefficients $c(m,\mu)$ are integers satisfying the following properties:
\begin{itemize}
\item[(i)] if $c(-m,\mu)\neq 0$ with $m >0$, then $\mu=0$ and $m\in \mathscr{A}$,
\item[(ii)] $c(0,0)\neq 0$,
\item[(iii)] $\{f,\vartheta(g)\}= 0$ for all $g\in M_{n-2}(\bar \omega_L)$.
\end{itemize}
\end{proposition}

\begin{proof}
We generalize the argument of \cite[Proposition~3.1]{Br2}. It follows from the main result of [42] that the space $M_{2-n}^!(\omega_L)$ has a basis of weakly holomorphic modular forms with integral coefficients. Hence, it suffices to show the existence of an $f\in M_{2-n}^!(\omega_L)$ with {\em rational} coefficients satisfying the stated properties.

Write $M_n(\bar \omega_L,\Q)$ for the $\Q$-vector space of modular forms in $M_n(\bar\omega_L)$ with rational coefficients, and $S_n(\bar \omega_L,\Q)$ for the subspace of cusp forms with rational coefficients. To lighten notation, throughout the proof we denote by $M$ the finite dimensional $\Q$-vector space
\[
M=M_n(\bar \omega_L,\Q)\oplus \vartheta M_{n-2}(\bar \omega_L,\Q)\subset S_L[[q^{1/D}]]
\]
and by $S$ the subspace
\[
S= S_n(\bar \omega_L,\Q)\oplus \vartheta M_{n-2}(\bar \omega_L,\Q)\subset M.
\]
The $\Q$-duals are denoted $M^\vee$ and $S^\vee$, and we denote by
\[
\pr: M^\vee\to S^\vee
\]
the surjection induced by $S\subset M$.

For $\mu\in L'/L$ and $m\in \Z+Q(\mu)$, denote by $a_{m,\mu} \in M^\vee$ the linear functional sending
\[
g=\sum_{\nu \in L'/L} \sum_{\ell \ge 0} b(\ell,\nu) \varphi_\nu \cdot q^\ell \in M
\]
to the Fourier coefficient $a_{m,\mu}(g)= b(m,\mu)$. Let $M_{\mathscr{A}}^\vee\subset M^\vee$ be the subspace generated by all functionals $a_{m,0}$ with $m\in \mathscr{A}$, and fix $e_1,\dots,e_d\in M_{\mathscr{A}}^\vee$ such that $\pr(e_1),\dots,\pr(e_d)$ is a basis of the subspace $\pr(M_{\mathscr{A}}^\vee)\subset S^\vee$. For every $m\in \mathscr{A}$ there is a unique tuple
\[
r(m) = ( r_1(m),\dots,r_d(m) ) \in \Q^d
\]
such that
\[
\pr(a_{m,0}) = r_1(m)\cdot \pr(e_1)+\ldots+r_d(m)\cdot \pr(e_d).
\]
The linear combination
\[
\tilde a_{m,0} \define a_{m,0} - ( r_1(m)\cdot e_1+\cdots+r_d(m)\cdot e_d) \in M_{\mathscr{A}}^\vee
\]
clearly lies in the kernel of $\pr$.

\begin{lemma}
There is an $m\in \mathscr{A}$ such that the Eisenstein series \eqref{E_L} satisfies $\tilde a_{m,0}(E_{L}) \neq 0$.
\end{lemma}

\begin{proof}
We assume on the contrary that $\tilde a_{m,0}(E_{L})=0$ for all $m\in \mathscr{A}$. In other words, that the coefficients \eqref{eq:coeff} satisfy
\begin{align} \label{eq:eisa}
B(m,0,s_0) = r_1(m)\cdot e_1( E_{L})+\ldots+r_d(m)\cdot e_d(E_{L})
\end{align}
for all such $m$.
Let $\|r\|$ be the Euclidean norm of a vector $r\in \R^d$, and denote by $\|\cdot\|$ a norm on $S^\vee\otimes_\Q \R$, say the operator norm with respect to a fixed norm on the finite dimensional vector space $S\otimes_\Q \R$. Since $\pr(e_1),\dots,\pr(e_d)$ are linearly independent, there exists an $\eps>0$ such that
\[
\eps \|r\|\leq \| r_1 \pr(e_1)+\ldots+r_d \pr(e_d)\|
\]
for all $r=(r_1,\dots,r_d)\in \R^d$. Taking $r=r(m)$, we obtain
\[
\eps \cdot \|r(m)\| \leq \| \pr (a_{m,0}) \| .
\]
On the other hand, \eqref{eq:eisa} implies that there is a constant $c>0$ such that
\[
|B(m,0,s_0)|\leq c\cdot \|r(m)\|.
\]
Combining these last two inequalities, we find
\begin{align} \label{eq:3}
|B(m,0,s_0)| \leq \frac{c}{\eps}\cdot \| \pr (a_{m,0}) \|
\end{align}
for all $m\in \mathscr{A}$.

The Hecke bound for the coefficients of (scalar valued) cusp forms of weight $n$ for $\Gamma(D)$ implies that
\[
|\pr (a_{m,0})(g)|= O(m^{n/2}),
\]
as $m\to \infty$ for any $g\in S_n(\bar \omega_L)$. On the other hand, an elementary estimate shows that
\[
|\pr (a_{m,0})(g)|= O(m^{n-2+\delta}),
\]
as $m\to \infty$ for any $\delta>0$ and any $g\in\vartheta ( M_{n-2}(\bar \omega_L))$. As $n>2$, these bounds imply $\| \pr (a_{m,0}) \| = O(m^{n-3/2})$. Combining this with \eqref{eq:3} shows that
\[
|B(m,0,s_0)| = O(m^{n-3/2})
\]
for $m\in \mathscr{A}$ and $m\to \infty$, contradicting Corollary \ref{cor:eisgrowth}.
\end{proof}

We now complete the proof of Proposition \ref{prop:keyprop}. The lemma provides us with an $a=\tilde a_{m,0}\in M_{\mathscr{A}}^\vee$ satisfying $\pr(a)=0$ and $a(E_L)\neq 0$. By definition of $M_{\mathscr{A}}^\vee$, we may expand $a$ as a finite linear combination
\[
a = \sum_{m \in \mathscr{A}} c(m,0) a_{m,0}
\]
with $c(m,0) \in \Q$, and then form the Fourier polynomial
\[
p = \sum_{m \in \mathscr{A}} c(m,0) \varphi_0 \cdot q^{-m} \in P(\omega_L).
\]
The condition $\pr(a)=0$ implies that $\{ p,g\}=0$ for all $g\in S_{n}(\bar\omega_L)$, and $\{ p, \vartheta(g)\}=0$ for all $g\in M_{n-2}(\bar\omega_L)$. In particular, Proposition \ref{prop:crit} provides us with a form $f\in M^!_{2-n}(\omega_L)$ whose principal part agrees with $p$ up to a constant, satisfies
\[
\{ f, \vartheta(g) \} = \{ p,\vartheta(g)\} =0
\]
for all $g\in M_{n-2}(\bar\omega_L)$, and has constant term $c(0,0) = -\{ p,E_L\} =-a(E_L) \neq 0$.
\end{proof}

As a special case of the following proposition, the form $f$ of Proposition \ref{prop:keyprop} lies in the image of the lifting map
\begin{equation}\label{holomorphic lift}
M^{!,\infty}_{2-n}(D,\varepsilon^n) \map{h\mapsto \tilde{h}} M_{2-n}^!(\omega_L)
\end{equation}
defined by \eqref{eq:lift}.

\begin{proposition} \label{prop:keypropscal}
Assume $n>2$, and let $f\in M_{2-n}^!(\omega_L)$ be a form whose Fourier coefficients $c(m,\mu)$ satisfy $c(m,\mu)=0$ for all $(m,\mu)$ with $m<0$ and $\mu\neq 0$. In the notation of \eqref{eq:lift}, there exists an $h\in M^{!,\infty}_{2-n}(D,\varepsilon^n)$ with principal part
\begin{align} \label{eq:prinh}
\sum_{m<0} c(m,0) q^m + \mathrm{constant},
\end{align}
and such that $\tilde h=f$.
\end{proposition}

\begin{proof}
Using the basis $\{ \varphi_\mu\}_{\mu\in L'/L}$ of $S_L$, any form $g(\tau) \in S_n(\bar\omega_L)$ can be written as $g(\tau) =\sum_{\mu\in L'/L} g_\mu(\tau) \varphi_\mu$. Taking the component corresponding to $\mu=0$ defines a linear map
\[
S_n(\bar\omega_L)\map{ g\mapsto g_0} S_{n}(D,\varepsilon^n).
\]
We claim that the map $g\mapsto g_0$ is surjective.
Indeed, this is equivalent to the injectivity of the adjoint map $ S_{n}(D,\varepsilon^n)\to S_n(\bar\omega_L)$, which is just the map $g\mapsto \tilde{g}$ of \eqref{eq:lift}. This injectivity follows from the explicit formula for the Fourier expansion of $\tilde g $ found in Proposition 3.3.2 of [14], along with the fact that the $\Q/\Z$-valued quadratic form on $L'/L$ represents all elements of $\frac{1}{D}\Z/\Z$ primitively.

Now suppose we are given a cusp form
\begin{equation}\label{dual cusp}
\sum_{m>0} b(m) q^m \in S_{n}(D,\varepsilon^n).
\end{equation}
If we choose a $g\in S_n(\bar\omega_L)$ such that $g_0 = \sum_m b(m) q^m$, then
\[
\sum_{m>0} c(-m,0)b(m)= \{ f,g\}=0,
\]
where the first equality follows from our hypotheses on the coefficients of $f$, and the second follows from the residue theorem. As this vanishing holds for all forms \eqref{dual cusp}, it follows from Serre duality on the modular curve $X_0(D)$ that there exists an $h\in M^{!,\infty}_{2-n}(D,\varepsilon^n)$ with principal part \eqref{eq:prinh}. In particular, it follows from \eqref{eq:coeffrel} that $\tilde{h} -f$ is holomorphic at $\infty$. As $\tilde{h} -f$ has negative weight, it is identically $0$.
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm:good borcherds}]
First pick a vector-valued form $\tilde{f}\in M_{2-n}^!(\omega_L)$ as in Proposition \ref{prop:keyprop}, and then apply Proposition \ref{prop:keypropscal} to pick a scalar-valued form
\[
f(\tau) = \sum_{ m \gg -\infty } c(m) q^m \in M^{!,\infty}_{2-n}(D,\varepsilon^n)
\]
that maps to it under \eqref{holomorphic lift}. It follows from the relation \eqref{eq:coeffrel} between the coefficients of $f$ and $\tilde{f}$ that for all $m>0$ the coefficient $c(-m)$ is an integer, and vanishes unless $m\in \mathscr{A}$.

Now we use the fact that $M^{!,\infty}_{2-n}(D,\varepsilon^n)$ has a basis of forms with integer coefficients\footnote{One can deduce this from the corresponding statement for holomorphic modular forms, by multiplying weakly holomorphic modular forms by powers of Ramanujan's discriminant to kill the poles at $\infty$.}. For any $\sigma \in \Aut(\C/\Q)$, it follows that the formal $q$-expansion $f^\sigma$ again lies in $M^{!,\infty}_{2-n}(D,\varepsilon^n)$. The difference $f(\tau)-f^\sigma(\tau)$ is then holomorphic at every cusp and has weight $2-n<0$, and so vanishes identically. Thus all coefficients of $f(\tau)$ are rational, and we may replace $f(\tau)$ by a positive integer multiple to assume that $c(m)\in \Z$ for all $m\in \Z$.

As we chose $\tilde{f}$ so that its constant term $\tilde{c}(0,0)$ is nonzero, the associated Borcherds product $\psi(f)$ has nonzero weight by Remark \ref{rem:lifted weight}.

It only remains to verify property (4) in Theorem \ref{thm:good borcherds}. For this we appeal to Theorem 5.3.3 of [13], which tells us that
\begin{equation}\label{divisor boundary}
\mathrm{div}(\psi(f)) = \bar{\mathcal{Z}} _V(f) + \sum_{m>0} c(-m) \mathcal{B}(m) ,
\end{equation}
up to a linear combination of divisors supported in characteristics $p\mid D$. Here, as in (5.3.3) of [13],
\begin{equation}\label{boundary mult}
\mathcal{B}(m) = \frac{m}{n-2} \sum_\Phi \rho_{L_0}(m) \cdot \mathcal{S}_V(\Phi),
\end{equation}
where the sum is over a finite set $\{ \Phi \}$ indexing the irreducible boundary components $\mathcal{S}_V(\Phi) \subset \partial \bar{\mathcal{S}}_V$, each of which is connected and smooth over $\co_\kk$.
Inside the sum, each $L_0$ is a self-dual hermitian $\co_\kk$-module (which depends on $\Phi$) of signature $(n-2,0)$, and
\[
\rho_{L_0}(m) = \# \{ x\in L_0 : \langle x,x\rangle =m \}
\]
is the number of times that lattice represents $m$.

As explained in \S 3.1 of [13], each $\Phi$ is an equivalence class of pairs $(I , g)$ in which $I \subset V$ is an isotropic $\kk$-line, and $g\in G(\A_f)$. This data determines a filtration
\[
\mathfrak{a} \subset \mathfrak{a}^\perp \subset gL
\]
by $\co_\kk$-module direct summands, with $\mathfrak{a} = I \cap gL$ isotropic of rank one and $\mathfrak{a}^\perp = \{ x\in gL : \langle x, \mathfrak{a} \rangle =0 \}$. The quotient $L_0 = \mathfrak{a}^\perp / \mathfrak{a}$ inherits a self-dual hermitian form from that on $gL \subset V$, and the filtration admits a (non-canonical) splitting
\[
gL = \mathfrak{a} \oplus L_0 \oplus \mathfrak{b}
\]
in which $\mathfrak{b}$ is rank one and isotropic, and $\mathfrak{a} \oplus \mathfrak{b}$ is orthogonal to $L_0$.

As $L_0$ and $gL$ are themselves self-dual hermitian $\co_\kk$-modules, we may form modular forms valued in the finite dimensional vector spaces $S_{L_0}=\C [L_0'/L_0]$ and $S_{gL} = \C[(gL)'/(gL)]$, exactly as we did for $L$. The action of $g\in G(\A_f)$ defines a canonical bijection
\[
L' / L \iso (gL)'/(gL),
\]
which induces an isomorphism $S_L \iso S_{gL}$ respecting the Weil representations on source and target.

To each $\Phi$, we may attach the $S_{L_0}$-valued theta series
\[
\Theta_\Phi(\tau) = \sum_{ \mu \in L_0' / L_0 } \Theta_{\Phi,\mu} (\tau) \varphi_\mu \in M_{n-2}(\bar{\omega}_{L_0} ) ,
\]
where
\[
\Theta_{\Phi,\mu} (\tau) = \sum_{ x\in \mu+L_0 } q^{\langle x,x\rangle} .
\]
As in Theorem 4.1 of [48], there is an induced $S_{gL}$-valued modular form
\[
\ind_L(\Theta_\Phi) = \sum_{\substack{\mu\in L_0'/L_0\\ \beta \in \mathfrak{d}_\kk^{-1} \mathfrak{a}/\mathfrak{a}}} \Theta_{\Phi,\mu}(\tau)\varphi_{\mu+\beta} \in M_{n-2}(\bar\omega_{gL}) ,
\]
and we use $S_L \iso S_{gL}$ to view $\ind_L( \Theta_\Phi )\in M_{n-2}(\bar\omega_{L})$. Using the relation \eqref{eq:coeffrel} between the coefficients of $\tilde{f}$ and $f$, we see that
\[
\sum_{m>0} m c(-m) \rho_{L_0}(m) = \{ \tilde{f} , \vartheta(\ind_L( \Theta_\Phi )) \} =0,
\]
where the second equality follows from condition (iii) in Proposition \ref{prop:keyprop}. Comparing with \eqref{boundary mult} shows that
\[
\sum_{m>0} c(-m) \mathcal{B}(m) = 0,
\]
and property (4) of Theorem \ref{thm:good borcherds} follows by comparison with \eqref{divisor boundary}.
\end{proof}
\subsection{A carefully chosen Borcherds product}

Recalling the functions $\mathbf{a}_k(s)$ of \eqref{a_k}, set $\mathbf{b}_{V,1}(s) = \mathbf{a}_1(s)$. For $k \ge 2$ even, set
\[
\mathbf{b}_{V,k}(s) = \mathbf{a}_k(s) \prod_{\ell \mid D} \left( 1+ \leg{-1}{\ell}^{ \frac{k}{2} } \mathrm{inv}_\ell(V) \ell^{ -s- \frac{k}{2} } \right).
\]
For $k \ge 3$ odd, set
\[
\mathbf{b}_{V,k}(s) = \mathbf{a}_k(s) \prod_{\ell \mid D} \left( 1+ \leg{-1}{\ell}^{ \frac{k-1}{2} } \mathrm{inv}_\ell(V) \ell^{ -s +\frac{1-k}{2}} \right)^{-1}.
\]
When $k>1$ we have $\mathbf{b}_{V,k}(s) \mathbf{b}_{V,k+1}(s) = \mathbf{a}_k(s) \mathbf{a}_{k+1}(s)$, which implies that the function \eqref{A_V} factors as
\begin{equation}\label{A_V factorization}
\mathbf{A}_V(s) = \mathbf{b}_{V,1}(s) \cdots \mathbf{b}_{V,n}(s) .
\end{equation}

Now suppose $n>2$, abbreviate
\[
\mathscr{A} = \{ \mbox{primes } p \equiv 1\pmod{D} \},
\]
and assume that the weakly holomorphic form
\[
f(\tau) = \sum_{m \gg -\infty} c(m) q^m \in M_{2-n}^{ ! , \infty}( D , \eps^n )
\]
of \eqref{input form} is chosen as in Theorem \ref{thm:good borcherds}. In particular, the divisor of the Borcherds product $\psi(f)$ contains no components of the boundary, and so
\begin{equation}\label{borcherds divisor}
\mathrm{div} ( \psi(f) ) = \bar{\mathcal{Z}}_V(f) + \vertical(f)
\end{equation}
in which
\[
\bar{\mathcal{Z}}_V(f) = \sum_{ p \in \mathscr{A} } c(-p) \bar{\mathcal{Z}}_V(p),
\]
and $\vertical(f)$ is supported in characteristics dividing $D$.

\begin{remark}
The notation $\vertical(f)$ is slightly misleading, as $ \bar{\mathcal{Z}}_V(f)$ may itself have vertical components. Any such components are supported on the exceptional divisor $\mathrm{Exc}_V \subset \bar{\mathcal{S}}_V$, by Corollary 3.7.3 of [13].
\end{remark} \begin{proposition}\label{prop:awesome B} The Borcherds product $\psi(f)$ has weight \[ k(f) = \frac{1}{\mathbf{b}_{V,n}(0)} \sum_{p \in \mathscr{A} } c(-p) ( p^{n-1} + 1 ) , \] and, recalling the notation \eqref{constant metrics}, satisfies \[ ( \vertical (f) , 0 ) = (E,0) - k(f) \sum_{\ell \mid D} \left( 0 , \frac{ \log(\ell) }{1+ \beta_\ell } \right) \in \widehat{\Pic}(\mathcal{S}_V)_\Q, \] where $E$ is a divisor (which may depend on $f$) supported on the exceptional divisor of Definition \ref{def:special exceptional}, and \begin{equation}\label{bang} \beta_\ell = (-1)^{n+1} \cdot \begin{cases} \leg{-1}{\ell}^{\frac{n}{2} } \inv_\ell(V) \ell^{ \frac{n}{2} } & \mbox{if $n$ is even} \\[2ex] \leg{-1}{\ell}^{\frac{n-1}{2} } \inv_\ell(V) \ell ^{ \frac{n-1}{2} } & \mbox{if $n$ is odd.} \end{cases}. \end{equation} \end{proposition} \begin{proof} The proof requires a short digression on Eisenstein series. For any divisor $r\mid D$ set $r'=D/r$. Our assumption that $D$ is odd implies that the quadratic character $\eps : (\Z/D\Z)^\times \to \{ \pm 1\}$ determined by $\kk$ is \[ \eps(a) = \leg{a}{D}. \] Hence we may factor $\eps = \eps_r \cdot \eps_{r'}$ with \[ \eps_r(a) = \left( \frac{a}{r} \right) \quad \mbox{and} \quad \eps_{r'} (a) = \left( \frac{a}{r'} \right). \] Define the quadratic Gauss sum \[ \tau(\eps_r) = \sum_{ a \in (\Z/r\Z)^\times } \eps_r(a) e^{2\pi i a/r} = \begin{cases} \sqrt{r} & \mbox{if } r\equiv 1\pmod{4} \\ i\sqrt{r} & \mbox{if } r \equiv 3\pmod{4}, \end{cases} \] and similarly with $r$ replaced by $r'$. \begin{lemma}\label{lem:eisenstein formulas} For every divisor $r\mid D$ there is an Eisenstein series \[ E_r = \sum_{m \ge 0} e_r(m) \cdot q^m \in M_n(\Gamma_0(D),\eps^n) \] whose Fourier coefficients are as follows. The constant term is \[ e_r(0)= \begin{cases} 1 & \mbox{if }r=1 \\ 0 & \mbox{otherwise.} \end{cases} \] If $n$ is even the coefficients indexed by $m>0$ are \[ e_r(m) = \frac{ r^{n/2} (-2\pi i )^n }{ D^n \Gamma(n) L_D( n ,\eps^n ) } \sum_{ \substack{ c\mid m \\ c>0 \\ \gcd( m /c, r) =1 } } c^{n-1} \sum_{ d \mid \gcd( c , r' ) } d \mu( r' / d) . \] If $n$ is odd the coefficients indexed by $m>0$ are \[ e_r(m) = \eps_r(r') \frac{ r^{n/2} (-2\pi i )^n \tau(\eps_{r'}) }{ D^n \Gamma(n) L_D( n ,\eps^n ) } \sum_{ \substack{ c\mid m \\ c>0 \\ \gcd( m /c, r) =1 } } \eps_r( m/c) \eps_{r'}(c) \cdot c^{n-1}. \] In both formulas $L_D(s,\eps^n)$ is the Dirichlet $L$-function with Euler factors at all $\ell\mid D$ removed. \end{lemma} \begin{proof} % Choose a matrix % \[ % R_r = \left(\begin{matrix} \alpha & \beta \\ \gamma r' & \delta r \end{matrix}\right) \in \Gamma_0(D/r) % \] % with $\alpha,\beta,\gamma,\delta\in \Z$, set % $ % W_r = R_r \cdot \left( \begin{smallmatrix} r \\ & 1 \end{smallmatrix}\right), % $ %and define %E (z) % = % \sum_{ \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \Gamma_\infty \backslash \Gamma_0(D) } \frac{ \eps^n (d) } { (c z + d)^n } \in M_n( \Gamma_0(D) , \eps^n). % \] % Here $\Gamma_\infty \subset \mathrm{SL}_2(\Z)$ is the subgroup of upper triangular matrices. %The claim is that % \[ %E_r = \eps_r(-\beta) \eps_{r'}(\alpha r) \cdot ( E \mid_n W_r ) \in M_n(\Gamma_0(D),\eps^n). % has the stated Fourier expansion. For each $s,t\in \Z/D\Z$ define an Eisenstein series \[ G_{(s,t)}(z) = \sum_{ \substack{ c,d \in \Z \\ (c,d) \neq (0,0) \\ (c,d) \equiv (s,t) \pmod{D} } } ( cz + d)^{-n} . 
\]
Theorem 7.1.3 of [43] shows that
\[
E_r(z) = \frac{ \eps^n_r(r') }{ 2 r^{n/2} L_D( n ,\eps^n ) } \sum_{ s,t \in \Z/D\Z } \eps_r^n( s ) \eps_{r'}^n( t) G_{ ( s,t ) } ( r' z )
\]
has the desired Fourier expansion.
\end{proof}

Suppose $r\mid D$, and set $r'=D/r$. If $n$ is even, set
\[
a = \frac{ (-2\pi i )^n \mu(D) }{ D^n \Gamma(n) L_D(n , \eps^n) } \quad\mbox{and} \quad b_r = r^{n/2} \mu( r ) .
\]
If $n$ is odd, set
\[
a = \frac{ (-2\pi i )^n \tau(\eps) }{ D^n \Gamma(n) L_D(n , \eps^n) } \quad\mbox{and} \quad b_r = \frac{ r^{n/2} \eps_r( r') \tau(\eps_{r'}) }{ \tau(\eps) }.
\]
One may check directly that $b_r = \prod_{\ell\mid r} b_\ell$, where the product is over all primes $\ell\mid r$, and that when $p \in \mathscr{A}$ the formulas of Lemma \ref{lem:eisenstein formulas} simplify to
\begin{equation}\label{simple miyake}
e_r(p) = a b_r \cdot ( p^{n-1}+1 ) .
\end{equation}
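The Gauss sum evaluation used in the digression above is classical and easy to verify numerically. The following snippet is purely illustrative and not part of the argument: for a few odd primes $r$ it checks that $\tau(\eps_r)$ equals $\sqrt{r}$ or $i\sqrt{r}$ according to $r$ modulo $4$, computing the Legendre symbol by Euler's criterion.

```python
# Illustrative check (not part of the source argument): for an odd prime r,
# tau(eps_r) = sum_a (a/r) e^{2 pi i a / r} equals sqrt(r) when r = 1 (mod 4)
# and i*sqrt(r) when r = 3 (mod 4).
import cmath
import math

def legendre(a: int, r: int) -> int:
    """Legendre symbol (a/r) for an odd prime r, via Euler's criterion."""
    ls = pow(a, (r - 1) // 2, r)
    return -1 if ls == r - 1 else ls

def gauss_sum(r: int) -> complex:
    return sum(legendre(a, r) * cmath.exp(2j * math.pi * a / r) for a in range(1, r))

for r in [3, 5, 7, 11, 13, 17]:
    expected = math.sqrt(r) if r % 4 == 1 else 1j * math.sqrt(r)
    assert abs(gauss_sum(r) - expected) < 1e-9, r
print("Gauss sum evaluation verified for sample primes.")
```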
# RF-Diffusion: Radio Signal Generation via Time-Frequency Diffusion Guoxuan Chi1, Zheng Yang1🖂, Chenshu Wu2, Jingao Xu1, Yuchong Gao3, Yunhao Liu1, Tony Xiao Han4 1Tsinghua University 2The University of Hong Kong 3Beijing University of Posts and Telecommunications 4Huawei Technologies Co., Ltd chiguoxuan, hmilyyz, wucs32, xujingao13, gaoyc01<EMAIL_ADDRESS><EMAIL_ADDRESS> (2024) ###### Abstract. Along with AIGC shines in CV and NLP, its potential in the wireless domain has also emerged in recent years. Yet, existing RF-oriented generative solutions are ill-suited for generating high-quality, time-series RF data due to limited representation capabilities. In this work, inspired by the stellar achievements of the diffusion model in CV and NLP, we adapt it to the RF domain and propose RF-Diffusion. To accommodate the unique characteristics of RF signals, we first introduce a novel Time-Frequency Diffusion theory to enhance the original diffusion model, enabling it to tap into the information within the time, frequency, and complex-valued domains of RF signals. On this basis, we propose a Hierarchical Diffusion Transformer to translate the theory into a practical generative DNN through elaborated design spanning network architecture, functional block, and complex-valued operator, making RF- Diffusion a versatile solution to generate diverse, high-quality, and time- series RF data. Performance comparison with three prevalent generative models demonstrates the RF-Diffusion’s superior performance in synthesizing Wi-Fi and FMCW signals. We also showcase the versatility of RF-Diffusion in boosting Wi- Fi sensing systems and performing channel estimation in 5G networks. RF Signal, Generative Model, Time-Frequency Diffusion, Wireless Sensing, Channel Estimation 🖂 Zheng Yang is the corresponding author. Our project is available here. ††ccs: Human-centered computing Ubiquitous and mobile computing††ccs: Networks Mobile networks††journalyear: 2024††conference: International Conference On Mobile Computing And Networking; September 30–October 4, 2024; Washington D.C., DC, USA††booktitle: International Conference On Mobile Computing And Networking (ACM MobiCom ’24), September 30–October 4, 2024, Washington D.C., DC, USA††doi: 10.1145/3636534.3649348††isbn: 979-8-4007-0489-5/24/09 ## 1\. Introduction Artificial intelligence generated content (AIGC) has catalyzed a revolutionary impact in both industrial and academic frontier, birthing a constellation of cutting-edge products founded on deep neural networks (DNNs). Remarkable odysseys include Stable Diffusion (Rombach et al., 2022), Midjourney (Midjourney, 2023), DALL-E (Ramesh et al., 2022) for image creation, and ChatGPT (OpenAI, 2023) for text generation. Nowadays, AIGC is gradually knocking on the door of the radio-frequency (RF) domain. Current practice offers initial proof of its potential to boost wireless systems in terms of data augmentation (Rizk et al., 2019), signal denoising (Bando et al., 2020) and time-series prediction (Hamdan and Hamdi, 2020). In downstream tasks like device localization (Zhao et al., 2023), human motion sensing (Yang et al., 2022a), and channel estimation (Liu et al., 2021), such progress not only enhances system performance but also cuts down the cumbersome ground truth annotation costs for application-layer DNN training. Existing RF data generation models can be broadly divided into two main categories: $\bullet$ Environment modeling based generative model. 
This approach exploits LiDAR point clouds or video footage to craft a detailed 3D model of the environment. It then employs physical models, like ray tracing (McKown and Hamilton, 1991), to simulate how RF signals interact with surroundings, which eventually aids in forecasting the signals a receiver might capture. However, one notable limitation is the method’s insufficient consideration of how the materials and properties of targets can affect RF signal propagation. Additionally, obtaining a 3D model with accuracy compatible with RF signal wavelengths (e.g., 1-10 mm) remains a challenge and will significantly raise system expenses. While a recent study uses the neural radiance field for implicit modeling of RF-complex environments to estimate signal propagation (Zhao et al., 2023), it requires a stationary receiver (Rx), which complicates generating essential time-series data for wireless communication systems or tasks like human motion recognition.

$\bullet$ Data-driven probabilistic generative model. Current innovations leverage models like generative adversarial network (GAN) and variational autoencoder (VAE) to augment RF datasets (Ha et al., 2020). Essentially, these models learn the distribution within the training data and then generate new RF data that follow this distribution. However, these models mainly focus on expanding feature-level distributions and struggle to precisely generate raw RF signals due to their constrained representation capabilities (Yang et al., 2022a). Additionally, most of them are designed for specific tasks with dedicated loss functions and DNN architectures, limiting their versatility. On the other hand, GAN’s training is notoriously fickle due to the tug-of-war between the generator and discriminator (Xia et al., 2022).

Table 1. Illustrative examples (the example time-frequency spectrograms are omitted here; SSIM is measured against the ground truth).

Methods | SSIM (Wi-Fi) | SSIM (FMCW)
---|---|---
Ground Truth | N/A | N/A
Ours | 0.81 | 0.75
DDPM (Ho et al., 2020) | 0.65 | 0.58
DCGAN (Radford et al., 2015) | 0.68 | 0.61
CVAE (Sohn et al., 2015a) | 0.47 | 0.4

Remark. Albeit inspiring, there is still no versatile generative model for generating accurate, time-series raw RF signals suitable for diverse downstream applications.

Recently, the Diffusion Model has emerged as a luminous star in the visual AIGC cosmos, underpinning a variety of innovative DNNs for a range of prominent image/video applications such as Stable Diffusion, Midjourney, and DALL-E. Compared to the aforementioned generative models, its unique iterative process of noise addition (i.e., noising) and removal (i.e., denoising) allows for precise capture of intricate raw data distributions (Yang et al., 2022b). Moreover, its training is straightforward and avoids typical problems like mode collapse or convergence troubles, since it doesn’t juggle competing parts or require delicate fine-tuning (Dhariwal and Nichol, 2021). These compelling advantages inspire us to embrace the diffusion model for synthesizing RF data. However, transferring existing diffusion models (Ho et al., 2020) to the RF domain faces significant challenges arising from RF signal’s unique characteristics beyond images, as summarized below.

$(i)$ Time series. RF signals capture dynamic details like target movement and environment/channel changes over time, unlike static snapshots. Diffusion models designed for single-image generation struggle to synthesize RF signal sequences.

$(ii)$ Frequency domain.
Essential RF details (e.g., Doppler shift, chirp) are embedded in the frequency domain. While recent video diffusion models can create time series, they mainly focus on the spatial domain (e.g., pixel-wise brightness), discarding the rich information in the frequency domain. $(iii)$ Number field. RF data is complex-valued with both amplitude and phase readings. While existing diffusion models only focus on amplitude (e.g., light strength), the phase data can’t be ignored due to its crucial role in wireless systems. In summary, while diffusion models hold great promise, there is a need to upgrade current models to suit the unique traits of RF signals and tap into the underlying information in the time series, frequency, and complex-valued domains. Our Work. We propose RF-Diffusion, the first versatile generative model for RF signals based on Diffusion model. To overcome the above challenges, we expand existing denoising-based diffusion model to the time-frequency domain by revisiting its theoretical foundation, overall DNN architecture, and detailed operator design, enabling RF-Diffusion to generate diverse, high-quality, and time-series RF data. $\bullet$ Time-Frequency Diffusion Theory. We first propose the time-frequency diffusion (TFD) theory as a novel paradigm to guide diffusion models in extracting and leveraging characteristics of RF signals across both temporal and frequency domains. Specifically, we demonstrate a diffusion model could effectively destruct and restore high-quality RF signals by alternating between adding noise in the time domain and blurring in the frequency domain (§3). $\bullet$ Hierarchical Diffusion Transformer Design. We further re-design the DNNs of existing denoising-based diffusion model to be compatible with TFD. The derived DNN, designated as the hierarchical diffusion transformer (HDT), from a top-down perspective, incorporates $(i)$ a hierarchical architecture to fully uncover time-frequency details by decoupling spatio-temporal dimensions of RF data; $(ii)$ attention-based diffusion blocks leveraging enhanced Transformers to extract RF features; and $(iii)$ a complex-valued design to encode both signal strength and phase information. The three key designs work hand-in-hand to enable RF-Diffusion to generate high-quality RF data (§4). We implement RF-Diffusion and conduct extensive experiments that include synthesis of both Wi-Fi and FMCW signals. To provide a clear understanding of its performance, an intuitive comparison of the time-frequency spectrograms generated by RF-Diffusion and those from related works is presented in Table 1. Evaluation results demonstrate that RF-Diffusion generates RF signals with high fidelity, achieving an average structural similarity of $81\%$ relative to the ground truth. This performance surpasses prevalent generative models such as DDPM, DCGAN, and CVAE by over $18.6\%$. We also demonstrate the performance of RF-Diffusion in two case studies: augmented Wi-Fi gesture recognition and 5G FDD channel estimation. By employing RF-Diffusion as a data augmentor, existing wireless gesture recognition systems experience a significant accuracy improvement ranging from $4\%$ to $11\%$. When applied to the channel estimation task, RF-Diffusion showcases a substantial $5.97$ dB improvement in SNR compared to state-of-the-arts. In summary, this paper makes the following contributions. (1) We propose RF-Diffusion, the first generative diffusion model tailored for RF signal. 
RF-Diffusion is versatile and can be leveraged in a wide spectrum of fundamental wireless tasks such as RF data augmentation, channel estimation, and signal denoising, propelling AIGC to shine in the RF domain. (2) We present the Time-Frequency Diffusion theory, an advanced evolution beyond traditional denoising-based diffusion methods. The integration of TFD with its bespoke Hierarchical Diffusion Transformer (HDT) enables enhanced precision in time-series sampling and a balanced focus on spectral details of the data. (3) We fully implement RF-Diffusion. Extensive evaluation results from case studies show RF-Diffusion’s efficacy. Community Contribution. RF-Diffusion’s code and pre-trained model are publicly available. Our solution, in part or in all, could provide a collection of tools for both industry and academia to push forward AIGC in RF domain. Moreover, its ability to handle time-series sampling while highlighting the spectral nuances of the data has potential benefits beyond the wireless community, offering value to video, audio processing, and other time-series- dependent modalities. ## 2\. Overview Figure 1. RF-Diffusion overview. We propose RF-Diffusion, a pioneering probabilistic generative model for RF data that leverages the diffusion model framework, as detailed in Fig. 1. At its core, RF-Diffusion aligns with the principle of denoising-based diffusion models by employing a dual-process approach: a forward process of integrating noise into the data, and a reverse process of generating data from noise. However, RF-Diffusion distinguishes itself through two innovative features: $(i)$ RF-Diffusion incorporates the proposed Time-Frequency Diffusion (§3) theory to direct each stage of state transition in both forward (i.e., $q(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1})$) and reverse (i.e., $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})$) processes, enabling RF- Diffusion to harness the RF signal information across both the time and frequency domain. $(ii)$ RF-Diffusion introduces the Hierarchical Diffusion Transformer (§4), which is a restructured DNN model for the reverse generation process, to align with the Time-Frequency Diffusion theory and the characteristics of RF signals. As for the specific data flow, RF-Diffusion gradually introduces Gaussian noise in the time domain and blurs the spectrum in the frequency domain at each stage in the forward direction. As the diffusion step $t$ advances, the original RF signal $\boldsymbol{x}_{0}$ diminishes, eventually degrading into noise. In TFD theory, we demonstrate any destructed signal $\boldsymbol{x}_{t}$ can be restored to its original form $\boldsymbol{x}_{0}$ using a parameterized reverse process. Guided by the destruction process alternating in time-frequency domain, the reverse restoration process emphasizes both time-domain amplitude accuracy and frequency-domain continuity to achieve time-frequency high-fidelity signal generation. In the reverse direction, HDT are served as the parameterized model for learning the restoration process. It decouples the Gaussian noise and the spectral blur, effectively addressing them in the spatial denoise and time- frequency deblur stages, respectively. During its training, HDT takes destructed signal $\boldsymbol{x}_{t}$ as the model input, and uses the signal of previous diffusion step $\boldsymbol{x}_{t-1}$ to supervise the output. Once trained, RF-Diffusion is capable of iteratively transforming fully degraded noise back into a specific type of signal. ## 3\. 
Time-Frequency Diffusion In this section, we introduce the proposed Time-Frequency Diffusion (TFD) process. Unlike prevailing denoising diffusion models, the time-frequency diffusion process comprehensively addresses two potential distortions in wireless signal data: 1) amplitude distortion due to additive Gaussian noise; 2) spectral aliasing resulting from insufficient time resolution. Therefore, the learned reverse process focuses not only on precisely reconstructing the amplitude of individual samples but also on preserving spectral resolution in time-series signals. In what follows, we first introduce the forward destruction process (§3.1) which jointly eliminates the original data distribution in the time and the frequency domain. On this basis, we describe how to reverse the process (§3.2) and fit it through a parameterized model, which is the basis of our conditional generation (§3.3) task. ### 3.1. Forward Destruction Process The time-frequency diffusion model is proposed for the RF signal, which can be treated as the complex-valued time-series data. Therefore, we take the signal as a two-dimensional complex tensor $\boldsymbol{x}\in\mathbb{C}^{M\times N}$, where $M$ represents the spatial dimension of each sample, while $N$ represents the temporal dimension of the times series. Given a signal that follows a specific distribution $\boldsymbol{x}_{0}\sim q(\boldsymbol{x}_{0})$, the forward destruction process yields a progression of random variables $\boldsymbol{x}_{1},\boldsymbol{x}_{2},\dots,\boldsymbol{x}_{T}$. Each diffusion step in this process disrupts the original distribution from both the time and frequency domains. Specifically, the forward diffusion process from step $t-1$ to $t$ is described as follows: * • Frequency Blur. To dissipate the spectral details of the original signal, the Fourier transform $\mathfrak{F}(\cdot)$ is first performed to the temporal dimension. Subsequently, with the predefined Gaussian convolution kernel $\mathbf{G}_{t}$, a cyclic convolution $*$ operation is performed on the spectrum, resulting in a blurred spectrum $\mathbf{G}_{t}*\mathfrak{F}(\boldsymbol{x}_{t-1})$. * • Time-series Noise. To drown out the amplitude details of the signal, complex standard Gaussian noise $\boldsymbol{\epsilon}\sim\mathcal{CN}(0,\mathbf{I})$ is introduced, and a weighted summation is performed with a predefined parameter $\sqrt{\alpha_{t}}$, where $\alpha_{t}\in(0,1)$. By combining the above two steps, we get: (1) $\boldsymbol{x}_{t}=\sqrt{\alpha_{t}}\mathfrak{F}^{-1}(\mathbf{G}_{t}*\mathfrak{F}(\boldsymbol{x}_{t-1}))+\sqrt{1-\alpha_{t}}\boldsymbol{\epsilon},$ where $\mathfrak{F}^{-1}(\cdot)$ indicates the inverse Fourier transform. To ensure the practical feasibility of the time-frequency diffusion process, it is essential that the transition from $\boldsymbol{x}_{0}$ to $\boldsymbol{x}_{t}$ for any given step $t\in[1,T]$ can be executed with an acceptable time complexity, instead of involving an iteration of $t$ steps. To simplify this process, certain advantageous characteristics of the Fourier transform and the Gaussian function are leveraged. Based on the convolution theorem (Weisstein, 2023), we have $\mathfrak{F}^{-1}(\mathbf{G}_{t}*\mathfrak{F}(\boldsymbol{x}_{t-1}))=\mathfrak{F}^{-1}(\mathbf{G}_{t})\boldsymbol{x}_{t-1}$ 111Vector multiplications in this paper default to element-wise products.. Therefore, the operation in Eqn. 
1 can be expressed as: (2) $\displaystyle\boldsymbol{x}_{t}$ $\displaystyle=\sqrt{\alpha_{t}}\boldsymbol{g}_{t}\boldsymbol{x}_{t-1}+\sqrt{1-\alpha_{t}}\boldsymbol{\epsilon},$ where $\boldsymbol{g}_{t}=\mathfrak{F}^{-1}(\boldsymbol{G}_{t})$ is still a Gaussian kernel, which means the convolution of the signal with the Gaussian kernel in the frequency domain can be equivalently transformed into the multiplication of the signal with another Gaussian kernel in the time domain. For ease of notion, let $\boldsymbol{\gamma}_{t}=\sqrt{\alpha_{t}}\boldsymbol{g}_{t}$, and $\sigma_{t}=\sqrt{1-\alpha_{t}}$, indicating the weight of the signal $\boldsymbol{x}_{t-1}$ and the standard deviation of the added noise at step $t$. Since the forward process is a Markov chain, by recursively applying Eqn. 2 and incorporating with the reparametrization trick (Kingma and Welling, 2013), the relationship between the original signal $\boldsymbol{x}_{0}$ and the degraded signal $\boldsymbol{x}_{t}$ can be obtained: (3) $\displaystyle\boldsymbol{x}_{t}$ $\displaystyle=\bar{\boldsymbol{\gamma}}_{t}\boldsymbol{x}_{0}+\sum_{s=1}^{t}{(\sqrt{1-\alpha_{s}}\frac{\bar{\boldsymbol{\gamma}}_{t}}{\bar{\boldsymbol{\gamma}}_{s}})\boldsymbol{\epsilon}}=\bar{\boldsymbol{\gamma}}_{t}\boldsymbol{x}_{0}+\bar{\boldsymbol{\sigma}}_{t}\boldsymbol{\epsilon},$ where $\bar{\boldsymbol{\gamma}}_{t}=\prod_{s=1}^{t}\boldsymbol{\gamma}_{s}=\boldsymbol{\gamma}_{t}\cdots\boldsymbol{\gamma}_{1}$. As $\alpha_{t}$ and $\boldsymbol{g}_{t}$ are predefined hyperparameters corresponding to the noise and blur scheduling strategy, any $\bar{\boldsymbol{\gamma}}_{t}$ and $\bar{\boldsymbol{\sigma}}_{t}$ are constant coefficients, representing the weight of the original signal and the standard deviation of the added noise. Thus, the forward destruction process to any step $t$ can be quickly completed without iteration. Stated in probabilistic terms, essentially $\boldsymbol{x}_{t}$ follows an non-isotropic Gaussian distribution conditioned on $\boldsymbol{x}_{0}$: (4) $q(\boldsymbol{x}_{t}|\boldsymbol{x}_{0})=\mathcal{CN}(\boldsymbol{x}_{0};\bar{\boldsymbol{\mu}}_{t},\bar{\boldsymbol{\sigma}}_{t}^{2}\mathbf{I}),$ where $\bar{\boldsymbol{\mu}}_{t}=\bar{\boldsymbol{\gamma}}_{t}\boldsymbol{x}_{0}$ and $\bar{\boldsymbol{\sigma}}_{t}=\sum_{s=1}^{t}{(\sqrt{1-\alpha_{s}}\frac{\bar{\boldsymbol{\gamma}}_{t}}{\bar{\boldsymbol{\gamma}}_{s}}})$. Specifically, the vector $\bar{\boldsymbol{\gamma}}_{t}$ consists of distinct weighting coefficients, each applied multiplicatively across the temporal dimension of the original signal to perform weighting adjustments. It is proven in Appendix A that as the diffusion step $t$ increases, the original signal is gradually eliminated, and $\boldsymbol{x}_{t}$ eventually converges to a closed-form noise distribution: (5) $\lim_{T\to\infty}\boldsymbol{x}_{T}=\lim_{T\to\infty}\sum_{t=1}^{T}{(\sqrt{1-\alpha_{t}}\frac{\bar{\boldsymbol{\gamma}}_{T}}{\bar{\boldsymbol{\gamma}}_{t}}})\boldsymbol{\epsilon}=\lim_{T\to\infty}\bar{\boldsymbol{\sigma}}_{T}\boldsymbol{\epsilon},$ where $\bar{\boldsymbol{\sigma}}_{T}=\sum_{t=1}^{T}{(\sqrt{1-\alpha_{t}}\frac{\bar{\boldsymbol{\gamma}}_{T}}{\bar{\boldsymbol{\gamma}}_{t}}})\boldsymbol{\epsilon}$ is determined by predefined noise scheduling strategy in practical implementation. ### 3.2. Reverse Restoration Process The restoration process is the reversal of the destruction, which gradually eliminates the noise and restores the original data distribution. 
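Before parameterizing the reverse process, it may help to see the closed-form destruction of Eqn. 3 spelled out in code. The sketch below is a minimal illustration rather than the authors' implementation: the linear noise schedule and the particular time-domain Gaussian kernel $\boldsymbol{g}_{t}$ are assumed choices made only for demonstration, and $\boldsymbol{x}_{0}$ is a toy complex exponential.

```python
# Minimal sketch of the forward destruction process (Eqn. 3); illustrative only.
# The schedules below (linearly decreasing alpha_t, a mild time-domain Gaussian
# kernel g_t) are assumptions for demonstration, not the paper's implementation.
import numpy as np

T, N = 300, 512                          # diffusion steps, sequence length
n = np.arange(N)

def gaussian_kernel(t):
    # Hypothetical g_t = F^{-1}(G_t): a gentle, peak-normalized time-domain weighting.
    width = 8 * N / (1.0 + 0.01 * t)
    return np.exp(-0.5 * ((n - N / 2) / width) ** 2)

alpha = 1.0 - 1e-4 * np.arange(1, T + 1)  # alpha_t stays close to 1, slowly decreasing

# Accumulate gamma_bar_t = prod_{s<=t} sqrt(alpha_s) g_s  (the signal weight in Eqn. 3).
gamma_bar, gamma_bars = np.ones(N), []
for t in range(1, T + 1):
    gamma_bar = np.sqrt(alpha[t - 1]) * gaussian_kernel(t) * gamma_bar
    gamma_bars.append(gamma_bar.copy())

def destruct(x0, t):
    """Jump directly from x_0 to x_t (Eqn. 3) without iterating over steps."""
    rng = np.random.default_rng(0)
    gb_t = gamma_bars[t - 1]
    sigma_bar = sum(np.sqrt(1 - alpha[s]) * gb_t / gamma_bars[s] for s in range(t))
    eps = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return gb_t * x0 + sigma_bar * eps

x0 = np.exp(2j * np.pi * 0.05 * n)        # toy complex-valued "signal"
x_t = destruct(x0, t=150)
```

The reverse process described next learns to undo exactly this closed-form corruption.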
To learn a parameterized distribution $p_{\theta}(\boldsymbol{x}_{0})$ which approximates the original distribution $q(\boldsymbol{x}_{0})$, an effective approach is to minimize their Kullback-Leibler (KL) divergence: (6) $\displaystyle\theta$ $\displaystyle=\operatorname*{arg\,min}_{\theta}D_{\mathrm{KL}}(q(\boldsymbol{x}_{0})\|p_{\theta}(\boldsymbol{x}_{0}))$ $\displaystyle=\operatorname*{arg\,min}_{\theta}(\mathbb{E}_{q(\boldsymbol{x}_{0})}[-\log p_{\theta}(\boldsymbol{x}_{0})]+\mathbb{E}_{q(\boldsymbol{x}_{0})}[\log q(\boldsymbol{x}_{0})])$ $\displaystyle=\operatorname*{arg\,max}_{\theta}\mathbb{E}_{q(\boldsymbol{x}_{0})}[\log p_{\theta}(\boldsymbol{x}_{0})].$ Unfortunately, $q(\boldsymbol{x}_{0})$ is intractable to calculate in general (Sohl-Dickstein et al., 2015; Kong et al., 2020), therefore $\mathbb{E}_{q(\boldsymbol{x}_{0})}[\log p_{\theta}(\boldsymbol{x}_{0})]$ cannot be expressed explicitly. Building on the concepts of prior works (Song and Ermon, 2019; Ho et al., 2020), we approximate the distribution by maximizing the variational lower bound. As established in (Ho et al., 2020), the optimization problem in Eqn. 6 can be approximated as: (7) $\theta=\operatorname*{arg\,min}_{\theta}D_{\mathrm{KL}}(q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})\|{p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})}),$ where $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})$ represents the actual reverse process conditioned on $\boldsymbol{x}_{0}$, while $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})$ denotes the reverse process fitted by our model. Eqn. 7 shows the problem of reconstructing the original data distribution can be transformed into a problem of fitting the reverse process. Rewrite $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})$ based on the Bayesian theorem (Appendix B), and we prove it follows a Gaussian distribution over $\boldsymbol{x}_{t-1}$: (8) $\displaystyle q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})\sim\mathcal{CN}(\boldsymbol{x}_{t-1};\tilde{\boldsymbol{\mu}}_{t-1},\tilde{\boldsymbol{\sigma}}_{t-1}^{2}\mathbf{I}),$ $\displaystyle\tilde{\boldsymbol{\mu}}_{t-1}$ $\displaystyle=\frac{1}{\bar{\boldsymbol{\sigma}}_{t}^{2}}(\boldsymbol{\gamma}_{t}\bar{\boldsymbol{\sigma}}_{t-1}^{2}\boldsymbol{x}_{t}+\bar{\boldsymbol{\gamma}}_{t-1}\boldsymbol{\sigma}^{2}_{t}\boldsymbol{x}_{0}),\ \ \tilde{\boldsymbol{\sigma}}_{t-1}=\frac{\bar{\boldsymbol{\sigma}}_{t-1}}{\bar{\boldsymbol{\sigma}}_{t}}\boldsymbol{\sigma}_{t}.$ Let’s assume that $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})$ is a Gaussian Markov process: (9) $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})\sim\mathcal{CN}(\boldsymbol{x}_{t-1};\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t}),\boldsymbol{\sigma}_{\theta}^{2}(\boldsymbol{x}_{t})\mathbf{I}).$ Therefore, the KL divergence of two Gaussian distributions in Eqn. 7 can be simplified as follows: (10) $\displaystyle D_{\mathrm{KL}}(q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})\|{p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})})$ $\displaystyle=$ $\displaystyle\mathbb{E}_{q(\boldsymbol{x}_{0})}[\frac{1}{2\tilde{\boldsymbol{\sigma}}_{t}^{2}}\lVert\tilde{\boldsymbol{\mu}}_{t-1}-\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t})\rVert^{2}]+C.$ In summary, the optimization of the the parameterized model $p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})$ can be achieved by minimizing the mean square error (MSE) between $\boldsymbol{\mu}_{\theta}$ and $\tilde{\boldsymbol{\mu}}_{t-1}$. 
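Concretely, and continuing the sketch above (reusing its `alpha`, `gaussian_kernel`, `gamma_bars`, and `destruct` definitions), the training target of Eqn. 8 and the resulting MSE objective can be written as follows. Here `mu_theta` is only a placeholder for the learned network introduced in Section 4, so this is an illustration of the objective rather than the released training code.

```python
# Continuation of the forward-process sketch: posterior mean (Eqn. 8) and the
# MSE training objective (Eqn. 10). `mu_theta` is a stand-in for the network.
import numpy as np

def posterior_mean_and_std(x0, x_t, t):
    """mu_tilde_{t-1} and sigma_tilde_{t-1} from Eqn. 8, using the schedules above."""
    gamma_t = np.sqrt(alpha[t - 1]) * gaussian_kernel(t)            # gamma_t = sqrt(alpha_t) g_t
    gb_t = gamma_bars[t - 1]
    gb_tm1 = gamma_bars[t - 2] if t >= 2 else np.ones(N)            # gamma_bar_0 = 1
    sigma_t = np.sqrt(1 - alpha[t - 1])
    sb_t = sum(np.sqrt(1 - alpha[s]) * gb_t / gamma_bars[s] for s in range(t))
    sb_tm1 = sum(np.sqrt(1 - alpha[s]) * gb_tm1 / gamma_bars[s] for s in range(t - 1))
    mu = (gamma_t * sb_tm1 ** 2 * x_t + gb_tm1 * sigma_t ** 2 * x0) / sb_t ** 2
    return mu, (sb_tm1 / sb_t) * sigma_t

def training_loss(mu_theta, x0, t, c):
    """One Monte Carlo term of the objective: MSE between the network output and mu_tilde."""
    x_t = destruct(x0, t)
    mu_tilde, _ = posterior_mean_and_std(x0, x_t, t)
    return np.mean(np.abs(mu_tilde - mu_theta(x_t, t, c)) ** 2)
```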
In other words, if a model can infer the mean value $\tilde{\boldsymbol{\mu}}_{t-1}$ of the previous step from the input $\boldsymbol{x}_{t}$ of the current diffusion step, then it is competent for the data generation task. Figure 2. Illustration of the conditional forward and reverse trajectories. ### 3.3. Conditional Generation In most practical applications, the generation process is expected to be guided by the condition label $\boldsymbol{c}$, which indicates a specific type of the generated signal (e.g., the signal corresponding to a specific device location or human activity). Incorporating the conditional generation mechanism into RF-Diffusion offers significant advantages: $(i)$ Enhanced practicality. The conditional generation mechanism enables RF-Diffusion system to generate signals of different categories based on various condition combinations. This eliminates the need for training separate models for each signal type, significantly improving the model’s utility in practical applications. $(ii)$ Increased signal diversity. A well-trained conditional generation model creates diverse samples featuring any conceivable combination of characteristics within the condition-label space of the training dataset, which extends the model’s generalizability beyond the initial scope of the training set, ensuring that data augmentation contributes to performance improvements in downstream tasks. In this context, the condition input $\boldsymbol{c}$ defines specific scenarios, including various rooms, Tx-Rx deployments, human activity types, and signal bandwidths. This input guides the generation process to produce data that aligns with the conditional distribution $p_{\theta}(\boldsymbol{x}|\boldsymbol{c})$. An illustration of the conditional forward and reverse processes is presented in Fig. 2. Following the conclusion of previous work (Sohn et al., 2015a; Dhariwal and Nichol, 2021; Ho and Salimans, 2021), we directly incorporate the condition $\boldsymbol{c}$ in both the forward process Eqn. 4 and the reverse process Eqn. LABEL:eqn:reverse-prob, and get $q(\boldsymbol{x}_{t}|\boldsymbol{x}_{0},\boldsymbol{c})$ and $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0},\boldsymbol{c})$ respectively. Then, by combining Eqn. 7 and Eqn. LABEL:eqn:kl, the optimization can be written as: (11) $\displaystyle\theta$ $\displaystyle=\operatorname*{arg\,min}_{\theta}D_{\mathrm{KL}}(q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0},\boldsymbol{c})\|{p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t}),\boldsymbol{c}})$ $\displaystyle=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{q(\boldsymbol{x}_{0})}[\lVert\tilde{\boldsymbol{\mu}}_{t-1}-\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t}(\boldsymbol{x}_{0},t,\boldsymbol{\epsilon}),\boldsymbol{c})\rVert^{2}].$ Algorithm 1 RF-Diffusion Training. 1:Dataset following $\boldsymbol{x}\sim q(\boldsymbol{x})$ with condition $\boldsymbol{c}$ 2:Trained model $\mu_{\theta}$ 3:Set hyperparameters $T$, $\alpha_{t}$ and $\boldsymbol{g}_{t}$ 4:while $\mu_{\theta}$ not converged do 5: Sample $\boldsymbol{x}_{0}\sim q(\boldsymbol{x}_{0})$ with condition $\boldsymbol{c}$ from dataset 6: Sample diffusion step $t\in\mathrm{Uniform}(1,\dots,T)$ 7: Sample noise $\boldsymbol{\epsilon}\sim\mathcal{CN}(0,\mathbf{I})$ 8: Get $\boldsymbol{x}_{t}=\bar{\boldsymbol{\gamma}}_{t}\boldsymbol{x}_{0}+\bar{\boldsymbol{\sigma}}_{t}\boldsymbol{\epsilon}$ $\triangleright$ Eqn. 
3 9: Calculate $\tilde{\boldsymbol{\mu}}_{t-1}$ based on $\boldsymbol{x}_{0}$ and $\boldsymbol{x}_{t}$ $\triangleright$ Eqn. LABEL:eqn:reverse-prob 10: Minimize $\lVert\tilde{\boldsymbol{\mu}}_{t-1}-\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t}(\boldsymbol{x}_{0},t,\boldsymbol{\epsilon}),\boldsymbol{c})\rVert^{2}$ $\triangleright$ Eqn. 11 11:end while Algorithm 2 RF-Diffusion Sampling. 1:Trained model $\mu_{\theta}$, condition $\boldsymbol{c}$ 2:Generated sample $\boldsymbol{x}_{0}$ 3:Set hyperparameters $T$, $\alpha_{t}$ and $\boldsymbol{g}_{t}$ 4:Sample noise $\boldsymbol{\epsilon}\sim\mathcal{CN}(0,\mathbf{I})$ 5:Let $\boldsymbol{x}_{T}=\bar{\boldsymbol{\sigma}}_{T}\boldsymbol{\epsilon}$ $\triangleright$ Eqn. 5 6:for $t=T,\dots,1$ do 7: Get model output $\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t},\boldsymbol{c})$, and let $\boldsymbol{\sigma}_{\theta}=\tilde{\boldsymbol{\sigma}}_{t-1}$ 8: Sample $\boldsymbol{x}_{t-1}\sim p_{\theta}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})$ with $\boldsymbol{\mu}_{\theta}$ and $\boldsymbol{\sigma}_{\theta}$, which means let $\boldsymbol{x}_{t-1}=\boldsymbol{\mu}_{\theta}(\boldsymbol{x}_{t},\boldsymbol{c})+\boldsymbol{\sigma}_{\theta}\boldsymbol{\epsilon}$ $\triangleright$ Eqn. 9 9:end for 10:return $\boldsymbol{x}_{0}$ The training process of the parameterized model used for restoration is summarized Algorithm 1. By incorporating the desired signal type as a conditional input, the trained model can iteratively synthesize the original signal from a sampled noise. The generative process is illustrated in Algorithm 2. Figure 3. Hierarchical Diffusion Transformer design. ## 4\. Hierarchical Diffusion Transformer To bridge the gap between time-frequency theory and a practical generative model, we introduce a hierarchical diffusion transformer (HDT). Our proposed HDT incorporates many innovative designs, aligning it with the underlying time-frequency diffusion theory and making it adept for RF signal generation. We first introduce the overarching hierarchical design (§4.1), followed by the detailed design of our proposed attention-based diffusion block (ADB) (§4.2). Addressing the challenge of complex-valued signal generation, we extend the core design of the classic transformer block (Vaswani et al., 2017) into the complex domain (§4.3). Moreover, we propose phase modulation encoding (PME) (§4.4), a novel positional encoding approach tailored for complex-valued neural networks. ### 4.1. Hierarchical Architecture From the top perspective, HDT adopts a hierarchical architecture to efficiently decouples the estimation of non-isotropic noise. As shown in Fig. 3, HDT is divided into two stages: spatial denoising and time-frequency deblurring. The diffusion step, denoted as $t$, is encoded, thereby informing the model about the current input’s diffusion level. The conditional vector $c$ undergoes encoding as well. In conjunction with the input $\boldsymbol{x}_{t}^{(n)}$, these components engage in computations, striving to discern the latent correlation between the input and its pertinent condition. Our observation is that the non-isotropic noise can be dissected into two components: 1) Independent Gaussian noise $\boldsymbol{\epsilon}$ across both the spatial dimension $M$ and the temporal dimension $N$. 2) Different information and noise weights (i.e., $\bar{\gamma}_{t}^{(n)}$ and $\bar{\sigma}_{t}^{(n)}$) along the temporal dimension $N$. 
Therefore, by splitting the time-series data into separate samples, we get $\boldsymbol{x}_{t}^{(n)}=\bar{\gamma}_{t}^{(n)}\boldsymbol{x}_{0}^{(n)}+\bar{\sigma}_{t}^{(n)}\boldsymbol{\epsilon}^{(n)}$. Herein, within each sample, both signal and noise weights remain constant. Therefore, each spatial denoising module processes a single sample $\boldsymbol{x}_{t}^{(n)}$ of the input sequence independently. During this stage, denoising circumvents the temporal domain weighting induced by spectral blurring, focusing exclusively on the Gaussian noise $\boldsymbol{\epsilon}^{(n)}$ introduced into the original information. This approach resonates with the principles of denoising diffusion (Ho et al., 2020). Although the spatial denoising module effectively mitigates the impact of noise $\boldsymbol{\epsilon}$, its individual treatment for each sample disregards the temporal weighting effects originating from spectral blurring. Therefore, the processed results are concatenated as $\hat{\boldsymbol{\mu}}=[\hat{\boldsymbol{\mu}}^{(1)},\cdots,\hat{\boldsymbol{\mu}}^{(N)}]$ and serve as sequence input for the time-frequency deblurring module, aiming to estimate the mean value $\tilde{\boldsymbol{\mu}}_{t-1}$. ### 4.2. Attention-based Diffusion Block As shown in Fig. 3, the input data are process by a sequence of transformer blocks in both the denoising and deblur stage. We introduce an innovative attention-based diffusion block to jointly analyze the noisy input $\boldsymbol{x}_{t}$, condition $\boldsymbol{c}$, and step $t$. Self-attention for feature extraction. The multi-head self-attention module captures autocorrelation feature from the noisy input and extracts the high- level representations implicit in the signal. Compared to convolutional layers with translation invariance, attention layers are sensitive to the positional information of each sample in the sequence, thus enabling more effective restoration of the original signal. Cross-attention for conditioning. To enhance the conditional generation capability, RF-Diffusion incorporates a cross-attention module to learn the latent associations between the inputs and their corresponding conditions. This module is designed to directly capture the intricate dynamics between the inputs and specified conditions, thereby improving the diversity and fidelity of generated signals. Adaptive layer normalization for diffusion embedding. Inspired by the widespread usage of adaptive normalization layer (adaLN) (Perez et al., 2018) in existing conditional generative models (Brock et al., 2018; Dhariwal and Nichol, 2021), we explore replacing standard layer normalization with adaLN. Rather than directly learn dimension-wise scale $a$ and shift parameters $b$, we regress them from the $t$, embedding the diffusion step information into our model. ### 4.3. Complex-Valued Module Design In order to work effectively with complex-valued wireless signals, the RF- Diffusion model is designed as a complex-valued neural network. Several adaptations have been made to HDT to facilitate complex-valued operations. Complex-valued attention module. Two key improvements have been implemented in the dot-product attention mechanism to accommodate complex number computation: 1) The dot product of the query and key vectors is extended to the hermitian inner product, $\boldsymbol{q}^{\mathrm{H}}\boldsymbol{k}$, which captures the correlation of two vectors in the complex space. This preserves the effective information of both the real and imaginary parts to the fullest extent. 
2) Given that the softmax function operates on real numbers, adjustments have been made to make it compatible with complex vectors. Specifically, softmax is applied to the magnitude of the dot product, while the phase information remains unchanged. This modification maintains the probabilistic interpretation of vector relevance. In mathematical terms, the complex-valued attention computation for complex vectors $\boldsymbol{q}$ and $\boldsymbol{k}$ can be expressed as: (12) $\mathrm{softmax}(\lvert\boldsymbol{q}^{\mathrm{H}}\boldsymbol{k}\rvert)\exp(j\angle{(\boldsymbol{q}^{\mathrm{H}}\boldsymbol{k})}).$

Complex-valued feed-forward module. The feed-forward module consists of two main operations: a linear transformation and a non-linear activation. A complex-valued linear transformation can be decomposed into real-valued ones (Trabelsi et al., 2018). Specifically, for complex-valued input $\boldsymbol{x}=\boldsymbol{x}_{r}+j\boldsymbol{x}_{i}$, the transformation with complex weight $\boldsymbol{w}=\boldsymbol{w}_{r}+j\boldsymbol{w}_{i}$ and bias $\boldsymbol{b}=\boldsymbol{b}_{r}+j\boldsymbol{b}_{i}$ can be written as follows: (13) $\displaystyle\boldsymbol{w}\boldsymbol{x}+\boldsymbol{b}=\left[\begin{array}{l}\Re(\boldsymbol{w}\boldsymbol{x}+\boldsymbol{b})\\ \Im(\boldsymbol{w}\boldsymbol{x}+\boldsymbol{b})\end{array}\right]=\left[\begin{array}{rr}\boldsymbol{w}_{r}&-\boldsymbol{w}_{i}\\ \boldsymbol{w}_{i}&\boldsymbol{w}_{r}\end{array}\right]\left[\begin{array}{l}\boldsymbol{x}_{r}\\ \boldsymbol{x}_{i}\end{array}\right]+\left[\begin{array}{l}\boldsymbol{b}_{r}\\ \boldsymbol{b}_{i}\end{array}\right].$ Furthermore, applying an activation function $g(\cdot)$ to a complex value can be seen as activating the real and imaginary parts separately: $g({\boldsymbol{x}})=g({\boldsymbol{x}_{r}})+jg({\boldsymbol{x}_{i}})$.

Figure 4. Illustration of phase modulation encoding.

### 4.4. Phase Modulation Encoding

Leveraging the attention mechanism, a Transformer network processes the entire sequence in parallel. Yet, it lacks an inherent capability to discern the positional information of the input. To address this, we introduce an innovative phase modulation encoding (PME) strategy tailored for complex spaces, serving as the positional encoding scheme for HDT. As illustrated in Fig. 4, suppose the maximum dimension of each vector in the sequence is $d$. For the $i$-th element of the $n$-th vector in the sequence, PME operates as follows: (14) $\displaystyle\mathrm{PME}(\boldsymbol{x}^{(n)}(i),n)$ $\displaystyle=\boldsymbol{x}_{i}^{(n)}\exp{(jn\theta_{i})},$ where $\theta_{i}$ is given by $\theta_{i}=10000^{-\frac{i}{d}}$. This procedure can be conceptualized as a phase modulation process, essentially imparting a specific phase offset to the original data based on the position $n$ in the sequence. The PME inherently decodes the relative position during computation, establishing its essential role in position encoding. Specifically, when executing the complex-domain attention operation on the encoded key vector $\boldsymbol{k}$ and query vector $\boldsymbol{q}$, it is equivalent to: (15) $\displaystyle\mathrm{PME}(\boldsymbol{q},n)^{\mathrm{H}}\mathrm{PME}(\boldsymbol{k},m)=\mathrm{PME}(\boldsymbol{q}^{\mathrm{H}}\boldsymbol{k},m-n).$ Therefore, the relative position information $m-n$ can be derived. This enables our model to learn more proficiently by integrating the positional details of the sequence. ## 5\.
Implementation We implement RF-Diffusion based on PyTorch and train our model on 8 NVIDIA GeForce 3090 GPUs, incorporating essential implementation techniques outlined below. Exponential moving average. Following common practice in most generative models, we adopt the exponential moving average (EMA) mechanism with a decay rate of $0.999$. EMA calculates a sliding average of the model’s weights during training, which improves model robustness. Weight initialization. We zero-initialize each final layer before the residual to accelerate large-scale training (Goyal et al., 2017), and apply Xavier uniform initialization (Glorot and Bengio, 2010) to other layers, which is a standard weight initialization technique in transformer-based models (Dosovitskiy et al., 2021). Hyperparameters. We train our model using AdamW optimizer (Kinga et al., 2015; Loshchilov and Hutter, 2019) with an initial learning rate of $1\times 10^{-3}$. A step learning rate scheduler with a decay factor of $0.5$ is adopted to improve training efficiency. In the training process, we apply a dropout rate of $0.1$ to mitigate overfitting. Noise scheduling strategy. In our implementation, the rate of data destruction is designed to increase incrementally from a lower to a higher intensity as the diffusion process progresses. This is aimed at achieving a balance between the model complexity and the generation quality (Kong et al., 2020). Specifically, we configure the diffusion process with a maximum of $T=300$ steps. The noise coefficient, $\beta_{t}=\sqrt{1-\alpha_{t}}$, is set to linearly increase from $10^{-4}$ to $0.03$, i.e., $\beta_{t}=10^{-4}t$. In parallel, the standard deviation of the Gaussian convolution kernel in the frequency domain, denoted as $\boldsymbol{G}_{t}$, is adjusted to linearly escalate from $10^{-3}$ to $0.3$, facilitating the controlled amplification of noise across the diffusion steps. Data preprocessing. Each signal sequence from the dataset is either interpolated or downsampled to a consistent length of 512. This guarantees uniformity in the model’s input length. Prior to input into our model, each sample within the input sequence is normalized by average signal power, which means each sample is divided by the average L2-norm of all the samples in the sequence. (a) Classroom (b) Hall (c) Office Figure 5. Experimental scenarios. ## 6\. Evaluation ### 6.1. Experiment Design #### 6.1.1. Data Collection. As shown in Fig. 5, our dataset comprises wireless signals collected under three distinct scenarios, featuring variations in room selection, device location, and human factors, including their location, orientation, and activities. We compile condition labels for each sequence into a conditional vector $\boldsymbol{c}$, guiding both training and sampling phases. Our research evaluates RF-Diffusion’s proficiency in producing signals across different modulation modes, focusing on Wi-Fi and FMCW radar signals as two primary types of wireless sensing and communications. * • Wi-Fi. We collect Wi-Fi signal based on the commercial NIC IWL5300 working in 5.825 GHz with 40 MHz bandwidth. The transmitter injects Wi-Fi packets to 3 receivers to extract the channel state information (CSI) corresponding to the environment. * • FMCW. FMCW signals are recorded using the mmWave radar IWR1443 (Instruments, 2022). This radar device can be placed at either one of two different locations in each scene, working at a frequency band from 77 GHz to 81 GHz. 
More than 20,000 Wi-Fi sequences and 13,000 FMCW sequences are collected. Each sequence has an associated condition label indicating the room, device placement, human ID, location, orientation and activity type. All experiments conducted in this paper conform to the IRB policies. #### 6.1.2. Comparative Methods. We compare RF-Diffusion with three representative data generation model: * • DDPM (Ho et al., 2020). The denoising diffusion probabilistic model (DDPM) introduces Gaussian noise to original data and subsequently learns to reverse this process, thereby generating raw data from the noise. * • DCGAN (Radford et al., 2015). The deep convolutional generative adversarial network (DCGAN) stands as a widely recognized GAN. In DCGAN, two models (i.e., generator and discriminator) are simultaneously trained in an adversarial manner. Once trained, the generator can produce data that convincingly bypasses the discriminator’s scrutiny. * • CVAE (Sohn et al., 2015b). The conditional variational autoencoder (CVAE) learns the Gaussian implicit representation of the data, thereby enabling data generation. This method is widly adopted in both sensing (Ha et al., 2020) and communication (Liu et al., 2021) systems to synthesize of wireless features. To adapt them for RF signal, we have re-implemented the model using complex- valued neural networks (Trabelsi et al., 2018). #### 6.1.3. Evaluation Metrics. For a comprehensive evaluation, we adopt two metrics, both of which are commonly used in previous research for evaluating data-driven generative models (Jiang et al., 2021; Nichol and Dhariwal, 2021; Ho et al., 2020; Dhariwal and Nichol, 2021; Allen-Zhu and Li, 2023). Recognizing that a definitive “gold standard” for generative models has not been established, these metrics are among the most authoritative. * • SSIM (Wang et al., 2004): The Structural Similarity Index Measure (SSIM) is a prominent criterion for gauging the similarity between two samples by analyzing their means and covariances. We’ve adapted SSIM for the complex domain, making it suitable for assessing complex-valued signals. * • FID (Heusel et al., 2017): The Fréchet Inception Distance (FID) evaluates generative models by measuring the Fréchet distance between the high-level features of real and synthesized data. We adopt a pretrained STFNets (Yao et al., 2019) as the feature extractor to better fit the property of wireless signals. ### 6.2. Overall Generation Quality The evaluation result for RF-Diffusion on Wi-Fi and FMCW signal are illustrated in Fig. 6 and Fig. 7 respectively. As shown, our proposed RF- Diffusion has proved the superiority over comparative methods on both two metrics. Specifically, as shown in Fig. 6, RF-Diffusion generates Wi-Fi signal with an average SSIM of $0.81$, exceeding DDPM, DCGAN, and CVAE by $25.4\%$, $18.6\%$ and $71.3\%$ respectively. RF-Diffusion achieves an FID of $4.42$, outperforming the above comparative methods by $42.4\%$, $63.0\%$, and $57.3\%$. RF-Diffusion also outperforms the comparative methods in terms of generating high-fidelity FMCW signals. As shown in Fig. 7, the FMCW signal generated by RF-Diffusion achieves an average SSIM of $0.75$ and an average FID of $6.10$. 
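For reference, the FID values quoted here come down to the standard Fréchet distance between Gaussian fits of real and generated feature statistics. The sketch below shows that computation with the feature extractor (a pretrained STFNets in the paper) abstracted into user-supplied arrays; it is an illustration, not the authors' evaluation code.

```python
# Standard Fréchet distance between Gaussian feature statistics, as used by FID.
# `features_real` / `features_fake` are (num_samples, feature_dim) arrays produced
# by some feature extractor (the paper uses a pretrained STFNets); illustrative only.
import numpy as np
from scipy import linalg

def frechet_distance(features_real, features_fake):
    mu_r, mu_f = features_real.mean(axis=0), features_fake.mean(axis=0)
    cov_r = np.cov(features_real, rowvar=False)
    cov_f = np.cov(features_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):          # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower values indicate that the synthesized distribution sits closer to the real one, matching the trend reported in Fig. 6 and Fig. 7.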
The impressive performance of RF-Diffusion can be attributed to several key factors: 1) Our proposed time-frequency diffusion adopted by RF-Diffusion emphasizes refining the frequency spectrum of the RF signal, thereby preserving finer spectral details in the generated signals, which is difficult to be captured by other methods. 2) Through its iterative generation approach, RF-Diffusion attains precise reconstruction of data details via multi-step approximations, leading to a superior quality of generated data. 3) In contrast to DCGAN, which optimizes two models concurrently, RF-Diffusion’s loss function is more streamlined and its training process more stable, ensuring a richer diversity in the generated signal and contributing to a commendable FID score. (a) SSIM (b) FID Figure 6. Wi-Fi signal generation quality. (a) SSIM (b) FID Figure 7. FMCW signal generation quality. Figure 8. Impact of diffusion method. Figure 9. Impact of network design. Figure 10. Scalability analysis. ### 6.3. Micro-benchmarks #### 6.3.1. Impact of Diffusion Methods. To validate the efficacy of our proposed time-frequency diffusion theory, we retained the network model architecture of RF-Diffusion but replaced the time- frequency diffusion process with two alternate schemes: 1) Gaussian diffusion, which is similar to DDPM and by only introduces Gaussian noise to the signal amplitude, and 2) blur diffusion which only performs spectral blurring. As depicted in Fig. 10, our time-frequency diffusion theory consistently outperforms both in terms of the SSIM and FID metrics. Specifically, the SSIM values for time-frequency diffusion, Gaussian diffusion, and blur diffusion stand at $0.81$, $0.71$, and $0.45$, respectively. This translates to the time-frequency diffusion offering an SSIM improvement of $13.9\%$ over Gaussian diffusion and a notable $79.2\%$ over blur diffusion. In terms of the FID, the time-frequency diffusion surpasses the other two methods by margins of $41.3\%$ and $83.5\%$, respectively. The results indicates that the time- frequency diffusion theory successfully incorporates two diffusion methods on orthogonal spaces, and thus achieving complementary benefits. #### 6.3.2. Impact of Network Design. To demonstrate the advantages of our proposed hierarchical diffusion transformer (HDT), we compare it against: 1) single-stage diffusion transformer (SDT), which is a simplified form of HDT, with only one stage for end-to-end data restoration, and 2) U-Net (Ronneberger et al., 2015), a popular choice in prevalent diffusion models. As shown in Fig. 10, our proposed HDT outperforms the SDT and U-Net. Specifically, the SSIM for HDT, SDT, and U-Net are $0.81$, $0.75$, and $0.68$ respectively. This indicate that HDT achieves a SSIM boost of $7.7\%$ over SDT and a significant $18.9\%$ increment compared to U-Net. When assessed using the FID metric, HDT continues to lead by margins of $48.2\%$ and $49.3\%$ against SDT and U-Net, respectively. The outstanding performance benefits from the follows aspects: 1) Compared with SDT, HDT can efficiently decouple the non-isotropic noise introduced in the diffusion process and eliminate it through two sequential stages; 2) Compared with translation-invariant U-Net, HDT’s transformer architecture can effectively distinguish the signal characteristics at different times, thereby achieving more accurate signal generation. #### 6.3.3. Scalability Analysis. 
Scalability refers to a model’s ability to enhance its performance with increasing size, which is critical for large generative models like RF- Diffusion. To verify the scalability of RF-Diffusion, we trained 9 models of different sizes, exploring different numbers of attention-based diffusion blocks (16B, 32B, 64B) and hidden dimensions (64, 128, 256). Fig. 10 illustrates that the FID performance of RF-Diffusion is strongly correlated with model parameters and GFLOPs, indicating that scaling up model parameters and additional model computation is the critical ingredient for better performance. Increasing the model size is anticipated to further enhance RF- Diffusion’s performance. ## 7\. Case Study This section showcases how RF-Diffusion benefits wireless researches in two distinct downstream tasks: Wi-Fi-based gesture recognition and 5G FDD channel estimation. ### 7.1. Wi-Fi Gesture Recognition Wireless sensing (Chi et al., 2022; Yang et al., 2023; Zhang et al., 2023; Gao et al., 2023) has emerged as a major research focus. By serving as a data augmentor, RF-Diffusion can boost the performance of existing wireless sensing systems, all while preserving the original model structure without any modifications. In particular, our approach involves initially training RF- Diffusion using a real-world dataset. Subsequently, RF-Diffusion generates synthetic RF signals of the designated type, guided by condition labels. These synthetic samples are then integrated with the original dataset, collectively employed to train the wireless sensing model. Both RF-Diffusion-augmented solution and baseline are fundamentally based on the same real-world dataset, ensuring a fair comparison, as RF-Diffusion itself is trained on this real- world dataset and no extra data is ever involved. We illustrate this approach through the case of Wi-Fi-based gesture recognition and evaluate the performance gains achieved by integrating RF- Diffusion into established gesture recognition models. (a) Cross-domain (b) In-domain Figure 11. Performance of augmented Wi-Fi sensing. #### 7.1.1. Experiment Design. We select two different types of Wi-Fi-based model for a comprehensive evaluation: * • Widar3.0 (Zheng et al., 2019) is a gesture recognition model founded on physical principles. It initially extracts features from raw signals and subsequently conducts recognition through a deep neural network. * • EI (Jiang et al., 2018) is a data-driven end-to-end human activity recognition model that takes raw signal as input. We utilize the publicly available dataset from Widar3.0 (Zheng et al., 2019) to assess performance. This evaluation encompasses scenarios where RF- Diffusion and comparative methods (§6.1.2) were employed as data augmentors. Figure 12. Impact of synthetic data volume. #### 7.1.2. Cross-domain Evaluation. We first evaluate the sensing performance when the training and testing set are from different domains (i.e., room, device placement, human location, orientation, etc.), a common scenario in real-world wireless sensing system deployments. We synthesize an equivalent volume of data as the real-world dataset using the pre-trained RF-Diffusion. Subsequently, both synthesized and authentic datasets are used for training. As shown in Fig. 11a, integrating RF-Diffusion brings performance improvements of $4.7\%$ and $11.5\%$ for Widar3.0 and EI, respectively. Integrating DDPM can bring relatively limited performance gains of only $1.8\%$ and $7.3\%$, respectively. 
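The augmentation workflow just described can be summarized in the following sketch. All names (`rf_diffusion.sample`, `real_dataset`, `recognizer`) are hypothetical placeholders rather than the released RF-Diffusion API; the point is only the train, generate, merge, and retrain loop.

```python
# Schematic of the data-augmentation pipeline described above; every object name
# here is a hypothetical placeholder, not the released RF-Diffusion interface.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def augment_and_train(rf_diffusion, real_dataset, recognizer, synth_ratio=1.0):
    # 1) Generate synthetic sequences conditioned on labels drawn from the real set.
    num_synth = int(synth_ratio * len(real_dataset))
    labels = torch.stack([torch.as_tensor(real_dataset[i][1]) for i in range(num_synth)])
    with torch.no_grad():
        synth_x = rf_diffusion.sample(condition=labels)   # hypothetical sampling call
    synth_dataset = TensorDataset(synth_x, labels)

    # 2) Merge real and synthetic data, then train the downstream recognizer as usual.
    loader = DataLoader(ConcatDataset([real_dataset, synth_dataset]),
                        batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(recognizer.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(recognizer(x), y)
        loss.backward()
        optimizer.step()
    return recognizer
```

The synthetic-to-real ratio is left as a parameter here, since its impact on accuracy is studied separately in §7.1.4.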
Additionally, the integration of DCGAN or CVAE may result in a degradation of recognition accuracy due to deviations in the synthetic data distribution from the original data distribution. Compared with Widar 3.0, the EI model obtains a more significant improvement since: 1) EI is more sensitive to the data volume and data diversity as an end-to-end DNN; 2) the information in the wireless signals generated in a data-driven manner cannot be fully exploited in Widar3.0 when being converted into physical features. In conclusion, RF-Diffusion enhances the cross-domain performance of wireless sensing systems in two aspects: * • Enhanced data diversity. Synthetic training data with higher diversity avoid model overfitting and thus implicitly improves the model’s domain generalization ability. * • Feature distillation. The generative model RF-Diffusion implicitly imparts its learned signal features to the recognition model through synthetic training data, contributing to improved performance. #### 7.1.3. In-domain Evaluation. In the in-domain scenario, the training and testing data are from the same domain. As shown in Fig. 11b, the integration of RF-Diffusion yields performance improvements of $1.8\%$ and $8.7\%$ for Widar3.0 and EI, respectively. Overall, compared with the cross-domain scenario, the performance gain in the in-domain case is relatively modest. This is attributed to the limited impact of diverse synthetic training data on enhancing the model’s performance within the same domain. For in-domain testing, even with a less diverse synthetic data generated by DCGAN, an obvious performance gain can be achieved. #### 7.1.4. Impact of Synthesized Data Ratio. We further investigate the impact of the synthesized data ratio used for training to provide more insights. We evaluate Widar 3.0 and EI in both cross- domain (CD) and in-domain (ID) cases. As shown in the Fig. 12, we introduce varying quantities of synthetic data (from $+25\%$ to $+200\%$) to the real-world dataset for joint training of the recognition model. Notably, as the volume of synthetic data increases, the trend in recognition accuracy exhibits an ascent to a peak followed by a decline. Specifically, in the cross-domain case, Widar3.0 reaches the highest accuracy of $92.7\%$ with $+100\%$ synthetic data, while EI reaches the highest accuracy of $83.8\%$ with $+125\%$ synthetic data. In the in-domain case, Widar3.0 achieved the highest accuracy of $93.4\%$ with $+50\%$ synthetic data, while EI achieved the highest accuracy of $89.1\%$ with $+75\%$ synthetic data. Drawing from these statistical findings, we deduce the following insights: 1) For most wireless recognition models, judicious incorporation of synthetic data into the training set can effectively enhance model performance. 2) Excessive introduction of synthetic data can potentially shift the training data distribution away from the original, consequently diminishing recognition accuracy. 3) The cross-domain scenario requires a greater infusion of synthetic data into the training set to achieve optimal model performance compared to the in-domain scenario. 4) Data-driven end-to- end models (e.g., EI) reap more substantial benefits from data augmentation facilitated by RF-Diffusion. ### 7.2. 5G FDD Channel Estimation (a) Channel amplitude and phase (b) SNR Figure 13. Performance of channel estimation. 
In this section, we discuss how RF-Diffusion enables channel estimation of the Frequency Domain Duplex (FDD) system in 5G, where the uplink and downlink transmissions operate at different frequency bands. Therefore, the principle of reciprocity that two link channels are equal no longer holds (Liu et al., 2021). To estimate the downlink channel state, client devices must receive additional symbols from a base station with a massive antenna array and send back the estimated results, causing unsustainable overheads. To solve this problem, substantial research is devoted to predicting the downlink channel by observing the uplink channel state information. For example, FNN (Bakshi et al., 2019) and FIRE (Liu et al., 2021) make use of a fully connected network and a VAE to transfer the estimated CSI from the uplink to the downlink, respectively. We discover that by employing the uplink CSI as conditional input, RF- Diffusion demonstrates the capacity to estimate downlink channel CSI in a generative manner. Specifically, in RF-Diffusion, the downlink CSI $\boldsymbol{x}_{\mathrm{down}}$ serves as the target data for generation, while the uplink CSI is encoded as the condition $\boldsymbol{c}_{\mathrm{up}}$ and input into the model. The trained RF- Diffusion learns the correlation between $\boldsymbol{c}_{\mathrm{up}}$ and $\boldsymbol{x}_{\mathrm{down}}$, thereby accomplishing the channel estimation task. This efficacy is rooted in the assumption of shared propagation paths, positing that both link channels are shaped by the same underlying physical environment (Huang et al., 2019; Vasisht et al., 2016). #### 7.2.1. Experiment Design. Our evaluation is based on the publicly available dataset Argos (Shepard et al., 2016), which is a real-world MIMO dataset collected in a complex environment with a large number of non-line-of-sight (NLoS) propagation. Each CSI frame contains 52 subcarriers. Similar to previous works (Zhao et al., 2023; Liu et al., 2021), we designate the initial 26 subcarriers for the uplink channel, while the remaining 26 are allocated to the downlink channel. For a comprehensive evaluation, we compare our approach against three different types of channel estimation solutions: * • NeRF2 (Zhao et al., 2023) implicitly learns the signal propagation environment from the uplink channel state based on a neural network, and then estimate the downlink channel. * • FIRE (Liu et al., 2021) is a channel estimation system based on VAE, which compresses the uplink channel CSI into a latent space representation and further transforms it into a downlink channel estimation. * • Codebook (Kaltenberger et al., 2008), commonly used in standard implementations per the 3GPP physical channel standard(3rd Generation Partnership Project, 3GPP), requires both base stations and clients to maintain a codebook of vectors created using predefined rules. Clients measure the channel locally, select the closest codebook vectors, and send the corresponding indices back to the base station. #### 7.2.2. Channel Estimation Accuracy. As shown in Fig. 13a, when we input the blue uplink CSI as a condition into the trained RF-Diffusion, the red downlink estimate will be output, which closely aligns with the ground truth downlink channel state. Assessment of channel estimation accuracy employs the Signal-to-Noise Ratio (SNR) metric (Zhao et al., 2023; Liu et al., 2021). 
This metric gauges the congruence between the estimated downlink channel $\boldsymbol{x}_{\mathrm{est}}$ and the ground truth $\boldsymbol{x}_{\mathrm{down}}$ through the following formulation:

(16) $\mathrm{SNR}=-10\log_{10}\left(\frac{\lVert\boldsymbol{x}_{\mathrm{down}}-\boldsymbol{x}_{\mathrm{est}}\rVert^{2}}{\lVert\boldsymbol{x}_{\mathrm{down}}\rVert^{2}}\right).$

A higher positive SNR corresponds to closer proximity between the predicted channel and the ground truth. As shown in Fig. 13b, RF-Diffusion achieves the highest SNR among all comparative methods with an average SNR of $27.01$, outperforming NeRF2 and FIRE by $34.6\%$ and $77.5\%$ respectively, and achieving a more than $5\times$ performance gain over the standard codebook-based implementation. The observed underperformance of NeRF2 can be attributed to its treatment of the signal propagation space as a time-invariant system, a characterization that may not hold in practical scenarios. VAE-based FIRE and codebook-based methods fall short in the fine-grained characterization of the underlying distribution of channel states. In contrast, RF-Diffusion adeptly learns the intricate correlation between the uplink and downlink channels, leveraging its robust modeling capacity to achieve highly accurate channel estimation.

## 8\. Related Work

We briefly review the related work in the following.

Diffusion probabilistic models. Diffusion probabilistic models (Yang et al., 2022b; Sohl-Dickstein et al., 2015) have emerged as a powerful new family of deep generative models with record-breaking performance in many applications (Dhariwal and Nichol, 2021), including image synthesis, point cloud completion, and natural language processing. One of the best-known diffusion models is DDPM (Sohl-Dickstein et al., 2015), which progressively destroys data by injecting Gaussian noise and then learns to reverse this process for high-fidelity sample generation. On this basis, DDIM (Nichol and Dhariwal, 2021) expedites reverse sampling, while LDM (Rombach et al., 2022) conducts diffusion in latent space to curtail computational overhead. The above schemes have been applied to a wide range of tasks such as image super-resolution (Saharia et al., 2022b; Ho et al., 2022), inpainting (Lugmayr et al., 2022), and style transfer (Saharia et al., 2022a). The most recent studies (Lee et al., 2022; Rissanen et al., 2023) have successfully applied a combination of blurring and additive noise to images, yielding satisfactory results. Although first proposed for image generation, the diffusion model’s versatility extends to other domains including point cloud completion (Zhou et al., 2021; Lyu et al., 2021), text generation (Austin et al., 2021; Gong et al., 2023), audio synthesis (Kong et al., 2020; Chen et al., 2021), and beyond. In addition, diffusion models have great potential for multi-modal generation. By integrating pre-trained language models (Radford et al., 2021), diffusion models achieve impressive performance in text-to-image (Nichol et al., 2022; Ramesh et al., 2022) and text-to-audio (Popov et al., 2021) tasks. RF-Diffusion, in contrast, stands as the pioneering diffusion model tailored for wireless signal generation. It introduces an innovative time-frequency diffusion process, which regulates noise and blurring across two orthogonal domains, thus encompassing both temporal and spectral intricacies of wireless signals.
By generating high-fidelity signals, RF-Diffusion benefits wireless applications like Wi-Fi sensing and 5G channel estimation.

Signal generation in wireless systems. Conventional wireless signal generation schemes are mainly based on modeling and simulation. In particular, these methods involve utilizing LiDAR-scanned 3D models and employing electromagnetic (EM) ray-tracing techniques (McKown and Hamilton, 1991) to simulate the distribution of wireless signals. Recent studies (Korany et al., 2019; Cai et al., 2020; Zhang et al., 2022) have integrated vision-based human reconstruction techniques with signal propagation models, enabling the generation of wireless signals that interact with the human body. Unfortunately, the above schemes fail to model structural materials and physical characteristics, which constrains their performance in real-world applications. The recently proposed NeRF2 (Zhao et al., 2023) learns the properties of the signal propagation space based on a deep neural network and then accomplishes the signal generation task. However, NeRF2 is limited to specific static scenarios and degrades in dynamic real-world scenarios. RF-EATS (Ha et al., 2020) and FallDar (Yang et al., 2022a) employ Variational Autoencoders (VAEs) to extract environment-independent features, thereby enhancing the generalizability of wireless sensing models. Additionally, other studies have utilized Generative Adversarial Networks (GANs) to generate Doppler spectra (Erol et al., 2019). Other research endeavors have addressed channel estimation in wireless communication systems using either GANs (Doshi et al., 2022; Balevi and Andrews, 2021) or VAEs (Baur et al., 2022; Liu et al., 2021). Nonetheless, due to their limited representation capabilities, solutions based on GANs and VAEs struggle to faithfully characterize the intrinsic properties of original wireless signals. Consequently, the aforementioned systems are suitable solely for specific tasks, lacking the competence for general-purpose wireless data generation. In contrast, RF-Diffusion, as a versatile generative model for wireless signals, can proficiently generate fine-grained signals in high fidelity, even within dynamic scenarios.

## 9\. Discussion and Future Work

RF-Diffusion is a pioneering attempt towards diffusion-based RF signal generation, and there is room for continued research from various perspectives.

$\bullet$ RF-Diffusion for data-driven downstream tasks. Extensive prior practice (He et al., 2023; Shivashankar and Miller, 2023; Azizi et al., 2023; Zheng et al., 2023) indicates that synthetic data from generative models significantly enhances data-driven downstream tasks. As a conditional generative model, RF-Diffusion effectively captures the representative features and their novel combinations, while randomizing non-essential details. This approach allows for the generation of innovative data samples that extend beyond the initial scope of the dataset, thus improving the generalization ability of downstream models. This paper specifically explores and experiments with applying RF-Diffusion to augment Wi-Fi gesture recognition, demonstrating its potential. However, the applicability of RF-Diffusion extends to any data-driven task in wireless communication and sensing.

$\bullet$ RF-Diffusion as a simulation tool. As a probabilistic generative model, RF-Diffusion operates independently of any signal propagation assumptions and does not require pre-modeling of the environment.
This flexibility implies that, while RF-Diffusion offers novel opportunities for signal synthesis, it may not achieve the same level of stability and precision as traditional signal simulation tools in all scenarios. RF-Diffusion is not designed to supplant simulation tools but rather to introduce a novel data- driven approach for signal synthesis, which is particularly valuable in complex and dynamic environments, such as indoor spaces with human activity, where accurate modeling poses challenges. $\bullet$ Autoregressive signal generation. RF-Diffusion, a non-autoregressive generative model, processes time series as a unified entity, necessitating downsampling and interpolation for variable-length sequences, which limits its versatility. The advent of autoregressive models like GPT introduces alternative methods for time-series signal generation, which improves adaptability for sequences of differing lengths and enable effective exploration of temporal correlation features. ## 10\. Conclusion This paper presents RF-Diffusion, the pioneering generative diffusion model designed for RF signals. RF-Diffusion excels in generating high-fidelity time- series signals by employing a novel time-frequency diffusion process. This process captures the intricate characteristics of RF signals across spatial, temporal, and frequency domains. This theoretical framework is then translated into a practical generative model based on the hierarchical diffusion transformer. RF-Diffusion exhibits remarkable versatility. It holds significant potential for essential wireless tasks, ranging from boosting the accuracy of wireless sensing systems, to estimating channel states in communication systems, shedding light on the applications of AIGC in wireless research. ## 11\. Acknowledgments We sincerely thank the MobiSense Group, the anonymous reviewers, and our shepherd for their constructive comments and feedback in improving this work. This paper is supported in part by the NSFC under grant No. 62372265, No.62302254, and No. 62222216, and by the Hong Kong RGC ECS under grant 27204522 and RIF under grant R5060-19. ## Appendix A Convergence of Forward Destruction Process As $T\to\infty$, the forward process converges to a distribution independent of the original signal. The above proposition is equivalent to the following two conditions: (1) $\lim_{T\to\infty}\bar{\boldsymbol{\mu}}_{T}=\mathbf{0}$, (2) $\lim_{T\to\infty}\bar{\boldsymbol{\sigma}}_{T}<\infty$. We find a sufficient condition for the above to hold true: all element in $\boldsymbol{\gamma}_{t}=\sqrt{\alpha_{t}}\boldsymbol{g}_{t}$ should be less than 1, i.e., $\forall n,\gamma_{t}^{(n)}<1$. Under this condition, according to Eqn. 3, $\lim_{T\to\infty}\bar{\boldsymbol{\mu}}_{T}=\lim_{T\to\infty}\bar{\boldsymbol{\gamma}}_{T}\boldsymbol{x}_{0}=\mathbf{0}$ holds. Let $\alpha_{\mathrm{min}}=\min(\alpha_{t}),t\in[1,T]$ and ${\gamma}^{(n)}_{\mathrm{max}}=\max(\gamma_{t}^{(n)})$ and $\boldsymbol{\gamma}_{\mathrm{max}}=({\gamma}^{(1)}_{\mathrm{max}},\dots,{\gamma}^{(N)}_{\mathrm{max}})$. 
It can be proven that: (17) $\displaystyle\lim_{T\to\infty}\bar{\boldsymbol{\sigma}}_{T}=\lim_{T\to\infty}\sum_{t=1}^{T}{(\sqrt{1-\alpha_{t}}\frac{\bar{\boldsymbol{\gamma}}_{T}}{\bar{\boldsymbol{\gamma}}_{t}})}$ $\displaystyle\leq\sqrt{1-\alpha_{\mathrm{min}}}\lim_{T\to\infty}\sum_{t=1}^{T}{(\boldsymbol{\gamma}_{\mathrm{max}})^{t-1}}=\frac{\sqrt{1-\alpha_{\mathrm{min}}}}{1-\boldsymbol{\gamma}_{\mathrm{max}}}<\infty$ ## Appendix B Reverse Process Distribution Based on the Bayes’ theorem, we get: (18) $\displaystyle q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t},\boldsymbol{x}_{0})=q(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1},\boldsymbol{x}_{0})\frac{q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{0})}{q(\boldsymbol{x}_{t}|\boldsymbol{x}_{0})}$ $\displaystyle\propto$ $\displaystyle\exp(-\frac{1}{2}(\frac{(\boldsymbol{x}_{t}-\boldsymbol{\gamma}_{t}\boldsymbol{x}_{t-1})^{2}}{\boldsymbol{\sigma}_{t}^{2}}+\frac{(\boldsymbol{x}_{t-1}-\bar{\boldsymbol{\gamma}}_{t-1}\boldsymbol{x}_{0})^{2}}{\bar{\boldsymbol{\sigma}}_{t-1}^{2}}-\frac{(\boldsymbol{x}_{t}-\bar{\boldsymbol{\gamma}}_{t}\boldsymbol{x}_{0})^{2}}{\bar{\boldsymbol{\sigma}}_{t}^{2}}))$ $\displaystyle=$ $\displaystyle\exp(((\frac{\boldsymbol{\gamma}_{t}}{\boldsymbol{\sigma}_{t}})^{2}+(\frac{1}{\bar{\boldsymbol{\sigma}}_{t-1}})^{2})\boldsymbol{x}_{t-1}^{2}-(\frac{2\boldsymbol{\gamma}_{t}}{\boldsymbol{\sigma}_{t}^{2}}\boldsymbol{x}_{t}+\frac{2\bar{\boldsymbol{\gamma}}_{t-1}}{\bar{\boldsymbol{\sigma}}_{t-1}^{2}}\boldsymbol{x}_{0})\boldsymbol{x}_{t-1}+C(\boldsymbol{x}_{t},\boldsymbol{x}_{0})),$ in which the recursive relationship is used: $\bar{\boldsymbol{\sigma}}_{t}^{2}=\boldsymbol{\gamma}_{t}^{2}\bar{\boldsymbol{\sigma}}_{t-1}^{2}+\boldsymbol{\sigma}_{t}^{2}$, which can be inferred by combining Eqn. 2 and Eqn. 3. ## References * (1) * 3rd Generation Partnership Project (3GPP) 3rd Generation Partnership Project (3GPP). 2023\. _5G; NR; Physical channels and modulation._ Technical Specification (TS) 38.211. 3rd Generation Partnership Project (3GPP). Version 17.5.0. * Allen-Zhu and Li (2023) Zeyuan Allen-Zhu and Yuanzhi Li. 2023. Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions. In _The Eleventh International Conference on Learning Representations_. * Austin et al. (2021) Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne Van Den Berg. 2021\. Structured denoising diffusion models in discrete state-spaces. _Advances in Neural Information Processing Systems_ 34 (2021), 17981–17993. * Azizi et al. (2023) Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J Fleet. 2023\. Synthetic data from diffusion models improves imagenet classification. _arXiv preprint arXiv:2304.08466_ (2023). * Bakshi et al. (2019) Arjun Bakshi, Yifan Mao, Kannan Srinivasan, and Srinivasan Parthasarathy. 2019. Fast and efficient cross band channel prediction using machine learning. In _The 25th Annual International Conference on Mobile Computing and Networking_. 1–16. * Balevi and Andrews (2021) Eren Balevi and Jeffrey G Andrews. 2021. Wideband channel estimation with a generative adversarial network. _IEEE Transactions on Wireless Communications_ 20, 5 (2021), 3049–3060. * Bando et al. (2020) Yoshiaki Bando, Kouhei Sekiguchi, and Kazuyoshi Yoshii. 2020\. Adaptive Neural Speech Enhancement with a Denoising Variational Autoencoder.. In _INTERSPEECH_. * Baur et al. (2022) Michael Baur, Benedikt Fesl, Michael Koller, and Wolfgang Utschick. 2022. 
Variational autoencoder leveraged mmse channel estimation. In _2022 56th Asilomar Conference on Signals, Systems, and Computers_. IEEE, 527–532. * Brock et al. (2018) Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018\. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In _International Conference on Learning Representations_. * Cai et al. (2020) Hong Cai, Belal Korany, Chitra R Karanam, and Yasamin Mostofi. 2020\. Teaching rf to sense without rf training measurements. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 4, 4 (2020), 1–22. * Chen et al. (2021) Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. 2021\. WaveGrad: Estimating Gradients for Waveform Generation. In _International Conference on Learning Representations_. * Chi et al. (2022) Guoxuan Chi, Zheng Yang, Jingao Xu, Chenshu Wu, Jialin Zhang, Jianzhe Liang, and Yunhao Liu. 2022. Wi-drone: wi-fi-based 6-DoF tracking for indoor drone flight control. In _Proceedings of the 20th Annual International Conference on Mobile Systems, Applications and Services_. 56–68. * Dhariwal and Nichol (2021) Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. _Advances in neural information processing systems_ 34 (2021), 8780–8794. * Doshi et al. (2022) Akash S Doshi, Manan Gupta, and Jeffrey G Andrews. 2022\. Over-the-Air Design of GAN Training for mmWave MIMO Channel Estimation. _IEEE Journal on Selected Areas in Information Theory_ 3, 3 (2022), 557–573. * Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021\. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In _International Conference on Learning Representations_. * Erol et al. (2019) Baris Erol, Sevgi Z Gurbuz, and Moeness G Amin. 2019\. GAN-based synthetic radar micro-Doppler augmentations for improved human activity recognition. In _2019 IEEE Radar Conference (RadarConf)_. IEEE, 1–5. * Gao et al. (2023) Yuchong Gao, Guoxuan Chi, Guidong Zhang, and Zheng Yang. 2023\. Wi-Prox: Proximity Estimation of Non-directly Connected Devices via Sim2Real Transfer Learning. In _GLOBECOM 2023-2023 IEEE Global Communications Conference_. IEEE. * Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In _Proceedings of the thirteenth international conference on artificial intelligence and statistics_. JMLR Workshop and Conference Proceedings, 249–256. * Gong et al. (2023) Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2023. DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models. In _The Eleventh International Conference on Learning Representations_. * Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: Training imagenet in 1 hour. _arXiv preprint arXiv:1706.02677_ (2017). * Ha et al. (2020) Unsoo Ha, Junshan Leng, Alaa Khaddaj, and Fadel Adib. 2020\. Food and liquid sensing in practical environments using $\\{$RFIDs$\\}$. In _17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20)_. 1083–1100. * Hamdan and Hamdi (2020) Mutasem Q Hamdan and Khairi A Hamdi. 2020. 
Variational Auto-encoders application in wireless Vehicle-to-Everything communications. In _2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring)_. IEEE. * He et al. (2023) Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. 2023\. Is synthetic data from generative models ready for image recognition?. In _The Eleventh International Conference on Learning Representations_. * Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017\. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In _Advances in Neural Information Processing Systems_ , Vol. 30. Curran Associates, Inc. * Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. _Advances in neural information processing systems_ 33 (2020), 6840–6851. * Ho et al. (2022) Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. 2022\. Cascaded diffusion models for high fidelity image generation. _The Journal of Machine Learning Research_ 23, 1 (2022), 2249–2281. * Ho and Salimans (2021) Jonathan Ho and Tim Salimans. 2021. Classifier-Free Diffusion Guidance. In _NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications_. * Huang et al. (2019) Chongwen Huang, George C Alexandropoulos, Alessio Zappone, Chau Yuen, and Mérouane Debbah. 2019\. Deep learning for UL/DL channel calibration in generic massive MIMO systems. In _ICC 2019-2019 IEEE International Conference on Communications (ICC)_. IEEE, 1–6. * Instruments (2022) Texas Instruments. 2022\. Texas Instruments IWR1443BOOST. https://www.ti.com/tool/IWR1443BOOST * Jiang et al. (2021) Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. 2021\. Focal frequency loss for image reconstruction and synthesis. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 13919–13929. * Jiang et al. (2018) Wenjun Jiang, Chenglin Miao, Fenglong Ma, Shuochao Yao, Yaqing Wang, Ye Yuan, Hongfei Xue, Chen Song, Xin Ma, Dimitrios Koutsonikolas, et al. 2018\. Towards environment independent device free human activity recognition. In _Proceedings of the 24th annual international conference on mobile computing and networking_. 289–304. * Kaltenberger et al. (2008) Florian Kaltenberger, David Gesbert, Raymond Knopp, and Marios Kountouris. 2008. Performance of multi-user MIMO precoding with limited feedback over measured channels. In _IEEE GLOBECOM 2008-2008 IEEE Global Telecommunications Conference_. IEEE, 1–5. * Kinga et al. (2015) D Kinga, Jimmy Ba Adam, et al. 2015\. Adam: A method for stochastic optimization. In _International conference on learning representations (ICLR)_ , Vol. 5. San Diego, California;, 6\. * Kingma and Welling (2013) Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_ (2013). * Kong et al. (2020) Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In _International Conference on Learning Representations_. * Korany et al. (2019) Belal Korany, Chitra R Karanam, Hong Cai, and Yasamin Mostofi. 2019. XModal-ID: Using WiFi for through-wall person identification from candidate video footage. In _The 25th Annual International Conference on Mobile Computing and Networking_. 1–15. * Lee et al. (2022) Sangyun Lee, Hyungjin Chung, Jaehyeon Kim, and Jong Chul Ye. 2022. 
Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis. In _NeurIPS 2022 Workshop on Score-Based Methods_. https://openreview.net/forum?id=KP8BrpZBbv * Liu et al. (2021) Zikun Liu, Gagandeep Singh, Chenren Xu, and Deepak Vasisht. 2021. FIRE: enabling reciprocity for FDD MIMO systems. In _Proceedings of the 27th Annual International Conference on Mobile Computing and Networking_. 628–641. * Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In _International Conference on Learning Representations_. * Lugmayr et al. (2022) Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. 2022\. Repaint: Inpainting using denoising diffusion probabilistic models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 11461–11471. * Lyu et al. (2021) Zhaoyang Lyu, Zhifeng Kong, Xudong Xu, Liang Pan, and Dahua Lin. 2021. A conditional point diffusion-refinement paradigm for 3d point cloud completion. _arXiv preprint arXiv:2112.03530_ (2021). * McKown and Hamilton (1991) John W McKown and R Lee Hamilton. 1991. Ray tracing as a design tool for radio networks. _IEEE Network_ 5, 6 (1991), 27–30. * Midjourney (2023) Midjourney. 2023\. Midjourney. https://www.midjourney.com/ * Nichol and Dhariwal (2021) Alexander Quinn Nichol and Prafulla Dhariwal. 2021. Improved denoising diffusion probabilistic models. In _International Conference on Machine Learning_. PMLR, 8162–8171. * Nichol et al. (2022) Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In _Proceedings of the 39th International Conference on Machine Learning_. * OpenAI (2023) OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL] * Perez et al. (2018) Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. 2018. Film: Visual reasoning with a general conditioning layer. In _Proceedings of the AAAI conference on artificial intelligence_ , Vol. 32. * Popov et al. (2021) Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. 2021. Grad-tts: A diffusion probabilistic model for text-to-speech. In _International Conference on Machine Learning_. PMLR, 8599–8608. * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021\. Learning transferable visual models from natural language supervision. In _International conference on machine learning_. PMLR, 8748–8763. * Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. _arXiv preprint arXiv:1511.06434_ (2015). * Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_ 1, 2 (2022), 3\. * Rissanen et al. (2023) Severi Rissanen, Markus Heinonen, and Arno Solin. 2023\. Generative Modelling with Inverse Heat Dissipation. In _The Eleventh International Conference on Learning Representations_. https://openreview.net/forum?id=4PJUBT9f2Ol * Rizk et al. (2019) Hamada Rizk, Ahmed Shokry, and Moustafa Youssef. 2019\. 
Effectiveness of data augmentation in cellular-based localization using deep learning. In _2019 IEEE Wireless Communications and Networking Conference (WCNC)_. IEEE. * Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022\. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 10684–10695. * Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015\. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_. Springer, 234–241. * Saharia et al. (2022a) Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. 2022a. Palette: Image-to-image diffusion models. In _ACM SIGGRAPH 2022 Conference Proceedings_. 1–10. * Saharia et al. (2022b) Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. 2022b. Image super-resolution via iterative refinement. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 45, 4 (2022), 4713–4726. * Shepard et al. (2016) Clayton Shepard, Jian Ding, Ryan E Guerra, and Lin Zhong. 2016\. Understanding real many-antenna MU-MIMO channels. In _2016 50th Asilomar Conference on Signals, Systems and Computers_. IEEE, 461–467. * Shivashankar and Miller (2023) C Shivashankar and Shane Miller. 2023. Semantic Data Augmentation With Generative Models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 863–873. * Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In _International conference on machine learning_. PMLR, 2256–2265. * Sohn et al. (2015a) Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015a. Learning structured output representation using deep conditional generative models. _Advances in neural information processing systems_ 28 (2015). * Sohn et al. (2015b) Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015b. Learning Structured Output Representation using Deep Conditional Generative Models. In _Advances in Neural Information Processing Systems_ , C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (Eds.), Vol. 28. Curran Associates, Inc. * Song and Ermon (2019) Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradients of the data distribution. _Advances in neural information processing systems_ 32 (2019). * Trabelsi et al. (2018) Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. 2018\. Deep Complex Networks. In _International Conference on Learning Representations_. * Vasisht et al. (2016) Deepak Vasisht, Swarun Kumar, Hariharan Rahul, and Dina Katabi. 2016. Eliminating channel feedback in next-generation cellular networks. In _Proceedings of the 2016 ACM SIGCOMM Conference_. 398–411. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ 30 (2017). * Wang et al. (2004) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 
2004\. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ 13, 4 (2004), 600–612. * Weisstein (2023) Eric W Weisstein. 2023\. Convolution Theorem. https://mathworld.wolfram.com/ConvolutionTheorem.html * Xia et al. (2022) Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, and Ming-Hsuan Yang. 2022\. Gan inversion: A survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ (2022). * Yang et al. (2022b) Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Yingxia Shao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2022b. Diffusion models: A comprehensive survey of methods and applications. _arXiv preprint arXiv:2209.00796_ (2022). * Yang et al. (2023) Zheng Yang, Yi Zhang, Kun Qian, and Chenshu Wu. 2023\. SLNet: A Spectrogram Learning Neural Network for Deep Wireless Sensing. In _20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)_. 1221–1236. * Yang et al. (2022a) Zheng Yang, Yi Zhang, and Qian Zhang. 2022a. Rethinking fall detection with Wi-Fi. _IEEE Transactions on Mobile Computing_ (2022). * Yao et al. (2019) Shuochao Yao, Ailing Piao, Wenjun Jiang, Yiran Zhao, Huajie Shao, Shengzhong Liu, Dongxin Liu, Jinyang Li, Tianshi Wang, Shaohan Hu, et al. 2019\. Stfnets: Learning sensing signals from the time-frequency perspective with short-time fourier neural networks. In _The World Wide Web Conference_. 2192–2202. * Zhang et al. (2023) Guidong Zhang, Guoxuan Chi, Yi Zhang, Xuan Ding, and Zheng Yang. 2023. Push the Limit of Millimeter-wave Radar Localization. _ACM Transactions on Sensor Networks_ 19, 3 (2023), 1–21. * Zhang et al. (2022) Xiaotong Zhang, Zhenjiang Li, and Jin Zhang. 2022. Synthesized Millimeter-Waves for Human Motion Sensing. In _Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems_. 377–390. * Zhao et al. (2023) Xiaopeng Zhao, Zhenlin An, Qingrui Pan, and Lei Yang. 2023\. NeRF2: Neural Radio-Frequency Radiance Fields. _arXiv preprint arXiv:2305.06118_ (2023). * Zheng et al. (2023) Chenyu Zheng, Guoqiang Wu, and Chongxuan Li. 2023. Toward Understanding Generative Data Augmentation. _arXiv preprint arXiv:2305.17476_ (2023). * Zheng et al. (2019) Yue Zheng, Yi Zhang, Kun Qian, Guidong Zhang, Yunhao Liu, Chenshu Wu, and Zheng Yang. 2019. Zero-effort cross-domain gesture recognition with Wi-Fi. In _Proceedings of the ACM MobiSys_. * Zhou et al. (2021) Linqi Zhou, Yilun Du, and Jiajun Wu. 2021. 3d shape generation and completion through point-voxel diffusion. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 5826–5835.
# Collisionless Pattern Discovery in Robot Swarms Using Deep Reinforcement Learning Nelson Sharma1, Aswini Ghosh1, Rajiv Misra1, Supratik Mukhopadhyay2, and Gokarna Sharma3 3Department of Computer Science, Kent State University, Kent, Ohio, USA E-mail: {nelson_2121cs07, aswini_2121cs02<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>2Department of Environmental Sciences, Louisiana State University, Baton Rouge, Louisiana, USA 1Department of Computer Science and Engineering, Indian Institute of Technology, Patna, India ###### Abstract We present a deep reinforcement learning-based framework for automatically discovering patterns available in any given initial configuration of fat robot swarms. In particular, we model the problem of collision-less gathering and mutual visibility in fat robot swarms and discover patterns for solving them using our framework. We show that by shaping reward signals based on certain constraints like mutual visibility and safe proximity, the robots can discover collision-less trajectories leading to “well-formed” gathering and visibility patterns. ###### Index Terms: Swarm robots, Pattern formation, Gathering, Collision avoidance, Deep reinforcement learning ## I Introduction In recent years, swarm robotics [1] has gained a lot of attention from the research community. In a swarm, a number of robots work together collectively for performing a certain task, such as disaster recovery, foraging, mapping, etc [2]. In many cases, it is assumed that there is no explicit means of communication between the robots (e.g., in a bandwidth-limited disaster zone); a robot can only see others through a camera that provides a $360^{\circ}$ view. The robots are anonymous (no unique identifiers), disoriented (no agreement on coordinate systems or units of distance), autonomous (no external control), and indistinguishable (no external identifiers). In the classical oblivious swarm model [4, 52], the position of every robot is considered as a point on 2D plane and has a local co-ordinate system. Each robot senses other robots using sensory equipment installed on it, such as a camera. However, the robots are considered silent since it is assumed that there is no direct communication among them [4, 52]. Each robot executes the same algorithm. Additionally, each robot executes Look-Compute-Move (LCM) cycle – when a robot is activated, it takes a snapshot of its surroundings from its local perspective (Look), then it uses the snapshot and the algorithm it is executing to determine an action to take (Compute), and finally it performs the action (Move). A swarm of robots can operate asynchronously, semi- synchronously or fully synchronously. In the fully synchronous operating model, every robot in the swarm becomes active in every LCM cycle and cycles of every robot are synchronized (i.e., there is a global clock). In the semi- synchronous operating model, not all swarm robots necessarily become active in every LCM cycle, but LCM cycles are synchronized (i.e., there is still a global clock). In the asynchronous operating model, not every robot is necessarily active all the time, the duration of each LCM cycle is arbitrary, and the LCM cycles of different robots are not synchronized (i.e., no global clock). In the classical model, robots are oblivious because they do not keep memory of what they did in any past LCM cycle. 
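As an illustration of the operating model, below is a minimal sketch of the fully synchronous Look-Compute-Move schedule; the `algorithm` callable stands in for whichever common, memoryless algorithm the robots execute, and robot positions are simplified to 2D points.

```python
# Minimal sketch of a fully synchronous Look-Compute-Move schedule for an
# oblivious swarm: every robot is activated in every round and runs the same
# algorithm on a local snapshot only, with no memory carried between rounds.

def run_fully_synchronous(robots, algorithm, rounds):
    """robots: list of (x, y) positions; algorithm: callable mapping a local
    snapshot (positions of others relative to the robot) to a displacement."""
    for _ in range(rounds):
        moves = []
        for i, p in enumerate(robots):
            # Look: snapshot of the others in the robot's own coordinate frame.
            snapshot = [(q[0] - p[0], q[1] - p[1])
                        for j, q in enumerate(robots) if j != i]
            # Compute: the common, memoryless algorithm picks a displacement.
            dx, dy = algorithm(snapshot)
            moves.append((p[0] + dx, p[1] + dy))
        # Move: all robots relocate simultaneously (global clock).
        robots = moves
    return robots
```

The semi-synchronous and asynchronous models would relax, respectively, the assumption that every robot is activated in each round and the assumption that rounds are aligned at all.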
There has been a lot of research recently in forming patterns with maximum visibility and gathering without collisions for swarm robots in the classical model [7, 8, 9, 10, 11, 12, 13, 14]. Pattern formation concerns arranging $n$ robots in a given pattern on the plane. A configuration of robots matches the pattern if rotation, translation, and scaling of the configuration can produce the pattern explicitly [7, 8, 9, 10, 11, 12, 13, 14]. An algorithm solves a pattern formation problem, if given any configuration of $n$ robots located initially at distinct points on a plane, it enables them to reach a configuration that matches the given pattern and remain stationary thereafter [39]. Specific patterns like circle formation have been considered in [23, 24, 25, 26, 27]. Convex hull and cubic spline formations as solutions to the mutual visibility problem – i.e., each robot sees all others in the swarm, have also been studied in the classical model [17, 41]. Pattern formation has been studied in [42] for UAVs. Pattern Formation via Deep Reinforcement Learning (DRL) has been studied in [43]. The objective in the gathering problem is to arrange the robots as close together as possible around a gathering point (that may or may not be known a priori) [38]. Existing pattern formation algorithms belong to two broad classes: combinatorial algorithms and machine learning-based approaches. In the combinatorial paradigm [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37], it is assumed that the target pattern to be formed is given as input to the robots and one designs a sophisticated algorithm to arrange robots to be positioned on the target pattern positions, starting from a given initial configuration. Designing such algorithms is labor-intensive, error-prone, and time consuming. Additionally, the designed combinatorial algorithm may not work for any initial configuration, for example, many combinatorial algorithms on the literature assume that the initial configurations are asymmetric [3, 4, 5, 6]. The same kind of assumption of asymmetric initial configuration extends also to the combinatorial algorithms for gathering in the literature. Existing machine learning based approaches [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51] attempt to solve pattern formation for a given target pattern, i.e., similarly as in the combinatorial algorithms, the target pattern needs to be given to the machine learning algorithm. It would be ideal, given any initial configuration (symmetric/asymmetric), if the valid target patterns could be identified and discovered, even when the target pattern is not given as input a priori. In this paper, we precisely aim to do that for the first time in the context of swarm robotics. Particularly, we seek a _single algorithm_ that can identify and _discover_ patterns that provide solutions to problems such as mutual visibility or gathering, rather than giving as input the solution pattern beforehand and the goal of the algorithm is to reach to the given target from the given initial configuration. We present a DRL-based framework for automatically discovering patterns available in any given initial configuration of robot swarms. We depart from the literature and consider robot swarms which are not points but fat – and opaque robots with an extent and hence they occupy certain area. 
In particular, we model the problem of collision-less gathering and mutual visibility in fat opaque robot swarms and discover patterns for solving them using our framework. Since our proposed approach is DRL-based, it relies on designing good reward functions. We show that, by designing reward signals based on constraints like mutual visibility and safe proximity, the robots can discover collision-less trajectories leading to “well-formed” gathering and visibility patterns. In summary, we make the following two major contributions.

* • We model the problem of automatic pattern discovery for collision-less gathering in robot swarms and formulate it in terms of a DRL framework.
* • We propose a DRL-based approach for collision-less gathering and pattern discovery that uses multiple policies to discover “well-formed” gathering patterns.

## II Related Work

Specific target pattern formation for swarm robots has been extensively studied in [40, 41, 27, 28, 29, 30, 31]. Optimal convex hull pattern formation assuming obstructed visibility has been studied in [40]. In [41], the authors proposed a self-organised cubic-spline-based pattern formation method. Several researchers have studied the circular pattern formation problem [27]. In [29], the authors investigated the plane formation problem, which requires a swarm of robots moving in three-dimensional Euclidean space to land on a common plane; they considered the fully synchronous setting where robots are endowed with only visual perception. Most works that studied the pattern formation problem considered the classic oblivious point swarm model [30, 39]. In [39], the authors presented a combinatorial framework for the pattern formation problem and used a randomized procedure for leader election in their proposed algorithm. They measured the time complexity of their algorithms in “epochs” (i.e., time intervals in which every robot performs at least one look-compute-move step). For the fundamental problem of gathering, the authors in [16] considered gathering of $N$ autonomous robots on a plane, which requires all robots to meet at a single point under limited visibility. Here the point of gathering is not known beforehand. The authors in [38] significantly improved the time complexity of gathering under the limited visibility model with the assumption that the point of gathering is known beforehand. The authors in [37] considered a state machine representation to formulate the gathering problem and developed a distributed algorithm that solves the problem of gathering for any number of fat robots. In [43], the authors proposed a Deep Reinforcement Learning (DRL)-based method that generates general-purpose pattern formation strategies for any target pattern. A bio-inspired algorithm was proposed in [47] to control robot swarms for pattern formation. The authors in [49] formulated reward functions to ensure drones arrive at their destinations following predefined routes while avoiding collisions. The authors in [50] presented the idea of guided policy learning and investigated how to learn to control a group of cooperative agents with limited sensing capabilities, such as robot swarms. The authors in [51] presented a framework for collision-less flying of drones over a directed circulant digraph. However, none of the above-discussed approaches have attempted to discover collision-less patterns where mutual visibility is maximized with or without predefined gathering points.
In this paper, we consider a DRL-based framework for discovering “well-formed” patterns for mutual visibility and gathering where gathering point may not be known beforehand. ## III System Model and Preliminaries Robot Swarm. A robot swarm is a collection on $n$ robots moving autonomously inside a rectangular region of size $(2X_{w},2Y_{w})$ on a 2D Cartesian plane with origin being at the center of the rectangle. The robots are _fat_ and _opaque_ , meaning that a robot $(n\in N)$ is represented as a circular opaque disk of radius $R_{bot}$. As shown in Fig. 1 (left), a robot has a center and a circular body of fixed size. A robot’s position vector $(P_{n})$ and velocity vector $(V_{n})$ are assumed to be known at all times, recorded using sensors internal to robots. It is assumed that all robots are fitted with sensors that can detect the positions of neighboring robots within a fixed distance from their centers, called the _scan-radius_ ($R_{scan}$). The sensors’ readings for a robot provide: (i) the number of neighbors ($G^{all}_{n}$), (ii) the number of fully visible neighbors ($G^{vis}_{n}$), and (iii) the number of occluded neighbors ($G^{occ}_{n}$). Additionally, a _safe distance_ ($\delta_{s}$) is defined, which can be used to calculate a measure of how unsafe is the current positioning of the robots. Figure 1: (left) Fat robot model (right) Visibility model Since robots are fat and opaque, a robot $B_{i}$ is said to be occluded by robot $B_{j}$ from the view of robot $B_{k}$ if any part of $B_{i}$ lies in the shadow formed by $B_{j}$ assuming that a point source of light placed at the center of $B_{k}$, as shown in Fig. 1 (right). Two robots are _mutually visible_ to each other if and only if none of them are occluded by any robot from the other’s view. The robots are allowed to move in any direction by changing their velocity vector $(V_{n}=[V_{n}^{x},V_{n}^{y}])$ within a fixed range ($V^{min}\leq V_{n}^{x},V_{n}^{y}\leq V^{max}$ ) at each time cycle. Problem Formulation. The goal of robot swarm is to discover appropriate patterns for gathering such that the robots do not collide with each other and maintain maximum visibility among one another during pattern formation. We consider two gathering problem variations as follows: Gathering at a Predefined Point. In this case, the robots are initially scattered far apart, i.e., mostly outside the scan-radius of each other. It is assumed that the gathering point is at the origin of the environment. Gathering without a Predefined Point. We assume a very large scan-radius and all robots are within the scan-radius of each other. Gathering point is not known a priori and the robot swarm has to decide the appropriate gathering place. Markov Decision Process. We formulate the above variations of the gathering problem as an appropriate _Markov Decision Process (MDP)_ and solve it using a DRL-based approach that tries to maximize its underlying reward function. The MDP is formulated as follows: * • State Space ($\mathcal{S}$): The State Space contains the position coordinates of all the robots. $\mathcal{S}=\\{P_{n}\\}:\forall n\in\\{1...N\\}$ (1) * • Action Space ($\mathcal{A}$): The Action Space contains the velocity components that can be set at a given time-step, for all the robots. $\mathcal{A}=\\{[V_{n}^{x},V_{n}^{y}]\\}:\forall n\in\\{1...N\\}$ (2) * • Reward Function ($\mathcal{R}$): The reward for each time-step is based on the _Reward Signal_ ($\mathcal{C}$) from the current state ($S_{t}$) and the next state ($S_{t+1}$). 
$\mathcal{R}(S_{t+1}|S_{t},A_{t})=\mathcal{C}(S_{t+1})-\mathcal{C}(S_{t})$ (3) where, the Reward Signal for a given state is a weighted sum of multiple signals. We define two reward signals, $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ as follows: $\mathcal{C}_{1}(S_{t})=\sum_{C\in C_{1}}C(S_{t})$ (4) $\mathcal{C}_{2}(S_{t})=\sum_{C\in C_{2}}C(S_{t})$ (5) where, $C_{1}$ and $C_{2}$ are sets of signals defined as: $C_{1}=\\{C_{close},C_{safety},C_{neighbors},C_{visible}\\}$ (6) $C_{2}=\\{C_{nclose},C_{safety},C_{neighbors},C_{visible}\\}$ (7) where, the members are defined as follows: * – $C_{close}$ : measure of closeness of robots from the origin. This signal causes robots to move closer to the gathering point, i.e., the origin. $C_{close}(S_{t})=1-W_{close}\sum_{n\in N}||P_{n}(t)||$ (8) * – $C_{safety}$ : is the sum of safety measure ($F_{safe}$) for all robots, which helps robots to avoid collisions maintaining appropriate distance between each other. $C_{safety}(S_{t})=W_{safety}\sum_{i,j\in N,i\neq j}F_{safe}(i,j,t)$ (9) where, $F_{safe}$ is defined between robots $i$ and $j$ at time-step $t$, based on a known constant _safe-distance_ ($\delta_{s}$), as follows: $F_{d}=||P_{i}(t)-P_{j}(t)||-\delta_{s}$ $F_{safe}(i,j,t)=\begin{cases}\sqrt{\delta_{s}+R_{bot}-F_{d}},&F_{d}\geq 0,\\\ \text{0},&F_{d}<0,\\\ \end{cases}$ * – $C_{neighbors}$ : denotes the number of neighbors and encourages the robots to have more robots in their neighborhood. $C_{neighbors}(S_{t})=W_{neighbors}\sum_{n\in N}G_{n}^{all}(t)$ (10) * – $C_{visible}$ : denotes total number of neighbors that are currently fully visible for each robot and encourages maximum visibility within the robot swarm. $C_{visible}(S_{t})=W_{visible}\sum_{n\in N}G_{n}^{vis}(t)$ (11) * – $C_{nclose}$ : measures the average distances from all neighbors for each robot. In the absence of a priori gathering point, this signal encourages robots to move closer to each other for gathering. $C_{nclose}(S_{t})=1-W_{nclose}\sum_{i,j\in N,i\neq j}||P_{i}(t)-P_{j}(t)||$ (12) The weights assigned to each signal ($W_{i}$) are chosen so as to normalize the values to the range $[0,1]$. * • Initial States ($\mathcal{S}_{0}$): We consider three types of initial state configurations: * – _Scattered_ : robots are uniformly scattered along a roughly circular shape with radius $\min(X_{w},Y_{w})$. * – _Distributed_ : robots are equally scattered near any two of the corners of the environment. * – _Packed_ : robots are closely placed in a line (or multiple lines). An initial state configuration $(\mathcal{S}_{0},\epsilon)$ is chosen at the start of each episode where $\mathcal{S}_{0}$ is a set of $n$ coordinates, one for each robot, and $\epsilon$ is the noise radius. A robot $(B_{n})$ is placed at a location ($P_{n}(0)$) given by: $P_{n}(0)=P_{n}^{0}+\mathcal{U}(-\epsilon,\epsilon)$ (13) where $P_{n}^{0}$ is an predefined point generated based on the initial state $(\mathcal{S}_{0})$. Initial position of a robot is determined by adding uniformly sampled noise ($\epsilon$) to each dimension of $P_{n}^{0}$. For gathering at a predefined point, we use the reward signal $\mathcal{C}_{1}$, which tries to maximize the $C_{close}$ signal causing robots to move closer to a pre-defined gathering point. In contrast, for gathering without a predefined point, we shall use the reward signal $\mathcal{C}_{2}$ in our MDP which tries to maximize the $C_{nclose}$ signal causing robots to move closer to each other. 
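To illustrate the shaping structure, the following is a minimal sketch of the potential-difference reward of Eq. (3) together with the closeness signals of Eqs. (8) and (12); the remaining terms ($C_{safety}$, $C_{neighbors}$, $C_{visible}$) and the normalizing weights $W_{i}$ are omitted here and would be supplied by the sensor readings described above.

```python
import numpy as np

def c_close(positions, w_close):
    # Eq. (8): closeness of all robots to the predefined gathering point (origin).
    positions = np.asarray(positions)
    return 1.0 - w_close * np.sum(np.linalg.norm(positions, axis=1))

def c_nclose(positions, w_nclose):
    # Eq. (12): pairwise closeness used when no gathering point is given a priori.
    positions = np.asarray(positions)
    n = len(positions)
    dists = [np.linalg.norm(positions[i] - positions[j])
             for i in range(n) for j in range(n) if i != j]
    return 1.0 - w_nclose * np.sum(dists)

def step_reward(signal_fn, state_t, state_t1):
    # Eq. (3): R(S_{t+1} | S_t, A_t) = C(S_{t+1}) - C(S_t).
    return signal_fn(state_t1) - signal_fn(state_t)
```

Because the per-step reward is a difference of the same signal at consecutive states, the agent is rewarded only for improving the configuration, not for merely occupying a good one.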
For both the problems, the common reward signal is $\mathcal{C}_{safe}$ which encourages collision-less movement at a safe distance; and $\mathcal{C}_{visible}$ which encourages maximum mutual visibility. Proximal Policy Optimization (PPO). PPO [53] is a policy optimization algorithm where the parameterized policy is updated using one of two objective functions. The first is _clipped objective_ function ($L_{C}$) defined as follows: $L_{C}=\mathbb{E}_{t}\left[\min(r_{t}\hat{A}_{t},clip(r_{t},1-\epsilon_{c},1+\epsilon_{c})\hat{A}_{t})\right]$ (14) where, $\hat{A}_{t}$ represents the advantage estimate, $\epsilon_{c}$ represents the _clipping hyper-parameter_ and $r_{t}$ is the ratio of probabilities given by the current policy ($\pi_{i}$) and an older version of it ($\pi_{i-1}$), defined as: $r_{t}=\frac{\pi_{i}(A_{t}|S_{t})}{\pi_{i-1}(A_{t}|S_{t})}$ (15) The clipped objective can be maximized using a mini-batch ($\mathcal{D}$) of trajectories, in which case the objective becomes: $L_{C}=\frac{1}{|\mathcal{D}|T}\sum_{\mathcal{D}}\sum_{t=0}^{T}\min(r_{t}\hat{A}_{t},clip(r_{t},1-\epsilon_{c},1+\epsilon_{c})\hat{A}_{t})$ (16) where, $T$ represents the trajectory length. The advantage is estimated using a truncated version of Generalized Advantage Estimation (GAE)[54] defined as: $\delta_{t}=\mathcal{R}_{t}+\gamma V(S_{t+1})-V(S_{t})$ (17) $\hat{A}_{i}(\gamma,\lambda)=\sum_{l=0}^{T-t-1}(\gamma\lambda)^{l}\delta_{l+t}$ (18) where, $\lambda$ is a hyper-parameter, $\gamma$ is the discount factor, and $V$ represents the value function. Notice that PPO is an on-policy algorithm, i.e., it uses the latest policy to sample actions during the learning process. The exploration is dependent on the randomness in the stochastic policy. As the policy is tuned over time, the randomness in sampled action decreases which often causes the agent to explore less, leading to the policy settling in a local optima. We shall take advantage of this fact and use multiple policies to move around local optima until we find a satisfactory one. This can be achieved, in one way, by altering the value of discount factor during multiple runs of the learning algorithm. ## IV Algorithm To discover a gathering pattern, we use Algorithm 2 to generate two policies sequentially using an augmented version of the Clipped Proximal Policy Optimization (Clipped-PPO) algorithm developed in [53]. The first policy ($\Pi_{0}$) is called the _Base_ policy and the second one ($\Pi_{1}$) is called the _Auxiliary_ policy. The base policy finds near-optimal patterns in most cases but may sometimes get struck on sub-optimal patterns (i.e., local optima). The Auxiliary policy is used to break out of local optima and improve the reward further. The base policy uses a larger value of the discount factor ($\gamma$) and trains on longer horizons. On the other hand, the auxiliary policy uses a smaller value of discount factor and trains on short horizons using only the sub-optimal patterns formed by the base policy as the initial states. The pattern can be sub-optimal, in the sense that the robots do satisfy the gathering criteria only to a certain degree. In other words, it might be possible that the reward can be further maximized to obtain a more optimal pattern. In such a case, the auxiliary policy is trained starting at various sub-optimal patterns formed by the base policy and is expected to improve the pattern further using a few more steps. The algorithm additionally uses hyper-parameters which are described in Table I. 
Table I: Hyper-parameters used in proposed algorithms Hyper-Parameter | Description ---|--- $\gamma$ | Discount Factor $\lambda$ | GAE Lambda $\epsilon_{c}$ | Clipping Range for Objective $\alpha$ | Learning Rate for policy updates $\beta$ | Learning Rate for value updates $Z$ | No. of Learning Epochs Algorithm 1 Gathering using Clipped PPO 0: Set of Initial States $({\sigma})$ 0: Hyper-Parameters $(\gamma,\lambda,\epsilon_{c},\alpha,\beta,Z)$ 1: Initialize Policy-Network $(\pi_{\theta})$ and Value-Network $(V_{\phi})$ with parameters $\theta_{0}$ and $\phi_{0}$ respectively 2: Initialize Learning Environment with a uniform Initial State Distribution on set $\sigma$. 3: Initialize an empty set $\sigma^{*}$ 4: for $i=1$ to $Z$ do 5: Collect batch of trajectories $\mathcal{D}_{i}=\\{S_{t},A_{t},R_{t},S_{t+1}\\}$ from the environment using current policy $\pi_{\theta_{i}}$ and compute _Rewards-to-go_ ($\hat{R}_{t}$) at each time-step $t$ $\hat{R}_{t}=\sum_{l=t}^{T}\mathcal{R}(S_{l+1}|(S_{l},A_{l}))$ 6: Compute Clipped Objective ($L_{C}$) for batch $\mathcal{D}_{i}$ using equation (16) 7: Update Policy Network using: $\theta_{i+1}=\theta_{i}+\alpha\cdot\nabla_{\theta}L_{C}$ 8: Update Value Network using: $E(\phi_{i})=\frac{1}{|\mathcal{D}_{i}|T}\sum_{\mathcal{D}_{i}}\sum_{t=0}^{T}(V_{\phi_{i}}(S_{t})-\hat{R}_{t})^{2}$ $\phi_{i+1}=\phi_{i}-\beta\cdot\nabla_{\phi}E(\phi_{i})$ 9: end for 10: Evaluate Policy $\pi_{\theta}$ in the Environment using initial states from $\sigma$. For each $S_{i}\in\sigma$, add the final state $S^{*}_{i}$ to set $\sigma^{*}$ only if $\mathcal{S}^{*}_{i}$ is not a terminal state. 11: Output policy $\pi_{\theta}$ and set of states $\sigma^{*}$ The clipping parameter ($\epsilon_{c}$) is gradually increased with each epoch of Algorithm 1. The Discount Factor Schedule ($F_{\gamma}$) is a function which provides appropriate value of discount factor on each successive run of Algorithm 2. We train only two successive policies where the value of Discount Factor is high for the first run and reduced during the second run. The auxiliary policy is responsible for improving patterns discovered by the base policy and can be trained on customized reward signals that give rise to various patterns while satisfying given gathering criteria. 
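For reference, a short sketch of how the truncated GAE estimate (Eqs. (17)–(18)) and the clipped surrogate objective (Eq. (16)) used inside Algorithm 1 might be computed is given below; the log-probabilities and value estimates are assumed to come from the policy and value networks, and `values` is assumed to carry one extra bootstrap entry for the state following the last reward.

```python
import numpy as np

def gae_advantages(rewards, values, gamma, lam):
    # Truncated GAE (Eqs. 17-18): delta_t = R_t + gamma*V(S_{t+1}) - V(S_t),
    # A_t = sum_l (gamma*lam)^l * delta_{t+l}; `values` has length len(rewards)+1.
    T = len(rewards)
    deltas = [rewards[t] + gamma * values[t + 1] - values[t] for t in range(T)]
    advantages, running = np.zeros(T), 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

def clipped_objective(logp_new, logp_old, advantages, eps_clip):
    # Clipped surrogate objective (Eq. 16), averaged over the sampled batch.
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))   # r_t
    adv = np.asarray(advantages)
    clipped = np.clip(ratio, 1.0 - eps_clip, 1.0 + eps_clip) * adv
    return np.mean(np.minimum(ratio * adv, clipped))
```

The same two routines serve both the base and the auxiliary policy; only the discount factor supplied by the schedule $F_{\gamma}$ and the set of initial states differ between the two training stages.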
Algorithm 2 Pattern Discovery for robot swarm 0: Initial State Configuration $(\mathcal{S}_{0},\epsilon)$ 0: Hyper-Parameters $(\lambda,\epsilon_{c},\alpha,\beta,Z)$ 0: Discount Factor Schedule $(F_{\gamma})$ 1: Initialize set of initial states $\sigma\leftarrow(\mathcal{S}_{0},\epsilon)$ 2: Initialize $t\leftarrow 0$ 3: Initialize empty list of policies ($\Pi$) 4: while $|\sigma|>0$ do 5: Run Algorithm[1] with Set of Initial States ($\sigma$) and hyper-parameters $(F_{\gamma}(t),\lambda,\epsilon_{c},\alpha,\beta,Z)$ to obtain $\pi^{t}$ and $\sigma^{t}$ 6: Append $\pi^{t}$ to list of policies $\Pi$ 7: Update set of initial states $\sigma\leftarrow\sigma^{t}$ 8: $t\leftarrow t+1$ 9: Evaluate policy list ($\Pi$) with given Initial State Configuration $(\mathcal{S}_{0},\epsilon)$ to obtain desired trajectory set $\mathcal{T}$ 10: If $\mathcal{T}$ forms valid collision-less patterns then Break 11: end while 12: if $|\sigma|=0$ then 13: No valid pattern formation trajectory was discovered 14: else 15: Output Policy List $\Pi$ 16: end if ## V Experiments and Results To test the pattern discovery algorithm (Algorithm 2), we conducted a total of six experiments under three different swarm compositions with two different gathering criteria (reward signals) starting in one of the three initial state configurations (Scattered, Distributed, Packed). The experimental settings are described in Table II. Table II: Experiment Settings Experiment | Swarm Size | Gathering Target | Initial States ---|---|---|--- A | $6$ | Origin | Packed B | $8$ | Origin | Scattered C | $10$ | Origin | Distributed D | 6 | Undefined | Packed E | 8 | Undefined | Scattered F | 10 | Undefined | Distributed The training results for swarm size of 10 and 8 (in experiments C and E) are shown in Figs. 2 and 3, respectively. Note that, in both cases, the auxiliary policy improves the patterns formed by the base policy which is evident from the improvement seen in the reward signal during auxiliary training. Figure 2: [Exp C] Training result for Base Policy and Auxiliary Policy for swarm of size 10 with pre-defined gathering point Figure 3: [Exp E] Training result for Base Policy and Auxiliary Policy for swarm of size 8 with undefined gathering point Fig. 4 shows the reward obtained by robot during a single episode of pattern formation. When the _step-reward_ received by the agent becomes zero, it indicates that the base policy cannot improve the pattern any further, and hence, has converged to a stable pattern. It can be seen that the base policy leads to a sub-optimal pattern which is then improved by the auxiliary policy. Figure 4: [Exp C] Cumulative Reward obtained during single episode: (a) Reward Obtained by Base Policy, (b) Reward Obtained by Auxiliary Policy. The patterns discovered by the trained policies are shown in Figs. 5–8, which show the (a) initial states, (b) the final pattern formed by base policy, and (c) the improved pattern formed by auxiliary policy. It should be noted that the patterns discovered are quite different in both the cases and the proposed algorithm is able to gather robots even when gathering without a predefined point. Similar results can be seen for swarm size of 8 (in experiments B and E) in Figs. 6 and 8 which also show that the auxiliary policy is able to improve the base pattern even though the improved pattern is only slightly different in case of experiment E as seen in Fig. 8. 
Figure 5: [Exp C] Pattern formation for swarm of size 10 in distributed initial state with pre-defined gathering point

Figure 6: [Exp F] Pattern formation for swarm of size 10 in distributed initial state with undefined gathering point

Figure 7: [Exp B] Pattern formation for swarm of size 8 in scattered initial state with pre-defined gathering point

Figure 8: [Exp E] Pattern formation for swarm of size 8 in scattered initial state with undefined gathering point

Figure 9: Swarm size vs average pattern formation steps for distributed and scattered initial state configurations

Table III: Average Steps taken for pattern formation

Experiment | Swarm Size | Base Steps | Auxiliary Steps | Total Steps
---|---|---|---|---
A | 6 | 113.63 | 4.01 | 117.64
B | 8 | 728.37 | 18.22 | 746.59
C | 10 | 1730.20 | 162.38 | 1892.58
D | 6 | 269.88 | 2.89 | 272.77
E | 8 | 521.71 | 223.56 | 1168.83
F | 10 | 1550.26 | 486.97 | 2037.23

Table III shows the average number of time-steps taken for the robots to form a collision-less gathering pattern under different experimental settings and standard initial state configurations with uniform noise $\sim\mathcal{U}(-2R_{bot},2R_{bot})$. A total of $200$ random initial states were generated to test the pattern formation steps under the corresponding initial state configuration. To study the relationship between swarm size and the number of steps needed for pattern formation, we conducted similar experiments with robot swarms of sizes ranging from $4$ to $10$ under the distributed and scattered initial state configurations; see Fig. 9. Note that the number of steps varies almost linearly with the swarm size.

## VI Conclusions and Future Work

In this paper, we proposed a DRL-based framework for automatic pattern discovery and gathering for swarms of oblivious fat opaque robots. Using state-of-the-art DRL techniques for policy optimization and proper reward system design, the agents trained with the proposed methods were shown to be able to discover collision-less trajectories for gathering in desirable patterns. With the use of multiple policies at different stages of gathering, it was possible for the robots to improve their patterns successively instead of being stuck at local optima. In the future, we will work on improving the robustness and scale of our models by making them fully distributed and autonomous, while also handling environmental uncertainty such as unseen obstacles, actuator failures, and recovery mechanisms. We also plan to explore trajectory planning and environmental mapping.
# The multiplexed light storage of Orbital Angular Momentum based on atomic ensembles Xin Yang Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Hong Chang Key Laboratory of Time and Frequency Primary Standards, National Time Service Center, Chinese Academy of Sciences, Xi’an 710600, China Jinwen Wang Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Yan Ma Key Laboratory of Time and Frequency Primary Standards, National Time Service Center, Chinese Academy of Sciences, Xi’an 710600, China Yun Chen Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Shuwei Qiu Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Zehao Shen Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Chengyuan Wang Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Quan Quan Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Dong Wei Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Haixia Chen Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Mingtao Cao Key Laboratory of Time and Frequency Primary Standards, National Time Service Center, Chinese Academy of Sciences, Xi’an 710600, China Hong Gao Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China Corresponding<EMAIL_ADDRESS>Fuli Li Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi’an Jiaotong University, Xi’an, 710049, China ###### Abstract The 
improvement of the multi-mode capability of a quantum memory can further increase the utilization efficiency of the memory and reduce the requirements that quantum communication places on storage units. In this letter, we experimentally investigate the multi-mode multiplexed storage of orbital angular momentum (OAM) light modes in rubidium vapor, with demultiplexing performed by a photonic OAM mode splitter that combines a Sagnac loop with two Dove prisms. Our results show a mode extinction ratio higher than 80$\%$ at 1 $\mu$s of storage time. Meanwhile, two OAM modes have been stored in a multiplexed manner and then demultiplexed in our experimental configuration. We believe the experimental scheme may provide a route toward high-channel-capacity, multi-mode multiplexed quantum storage based on atomic ensembles.

Quantum memories enable the storage of an input photonic qubit and its retrieval at a controllable time; they can outperform any classical device and constitute essential components in quantum repeaters and optical quantum information processing [1, 2]. Over the past years, various memory protocols have been proposed, such as electromagnetically induced transparency (EIT) [3], gradient echo storage [4], the optical frequency comb [5], the Raman scheme [6], and the Duan–Lukin–Cirac–Zoller protocol [7]. The EIT storage protocol is widely used among these protocols because of its simple configuration and easy implementation. Improving the multi-mode storage capability in the spatial domain can effectively improve the working efficiency of quantum repeaters and reduce the requirements that quantum communication imposes on the storage unit. The orbital angular momentum (OAM) beam is an attractive light field mode, and its topological charge can take any integer value [8]. In principle, an infinite-dimensional Hilbert space can be constructed to realize higher-dimensional spatial mode encoding of photonic OAM. Therefore, using OAM modes for storage is an effective way to realize efficient quantum repeaters. In recent years, research on the storage of photonic OAM light modes has been carried out [9, 10, 11]. These studies mainly focus on storing low-order vector beams and finite-dimensional photonic OAM quantum states [12, 13, 11], but the high-dimensional properties of OAM have not been fully utilized in any quantum memory experiment, and the simultaneous storage of multiple OAM modes in atomic ensembles has not been studied. It is well known that effectively identifying the OAM mode is essential in multi-mode storage. To date, there are many traditional measurement methods for the photonic OAM quantum state, including round-hole diffraction measurement [14], the optical transformation method [15], and interference methods [16, 17]. However, the existing schemes for identifying OAM quantum states after storage are all based on projection measurement [18]; projection measurement can only identify the OAM quantum states but cannot separate them. An applicable quantum memory with multimode capacity should achieve multiplexed storage and demultiplexed retrieval simultaneously. Therefore, if photons are encoded with high-dimensional OAM in a quantum memory, not only must multimode storage be realized, but the stored OAM quantum states must also be efficiently separated after storage.

Figure 1: (a) The experimental setup of the multiplexing quantum memory.
PBS, polarization beam splitter; BS, beam splitter; HWP, half-wave plate; QWP, quarter-wave plate; SLM, spatial light modulator. (b) The setup of demultiplexing; DP, Dove prism; ICCD, intensified charge-coupled device. (c) Schematic energy diagram of 87Rb and light coupling.

In this letter, we experimentally investigate multi-mode multiplexed quantum storage and demultiplexing based on rubidium vapor. Specifically, OAM beams with different topological charges have been stored in atomic ensembles using the electromagnetically induced transparency storage protocol and demultiplexed into different output ports by the photonic OAM mode splitter, which combines a Sagnac loop with two Dove prisms. Our results show a mode extinction ratio higher than 80 $\%$ at 1 $\mu$s of storage time. Meanwhile, two OAM modes have been stored in a multiplexed manner and demultiplexed into different output ports using the OAM mode splitter. The results show that our experimental system can fully separate these two OAM modes after storage. We believe that the experimental scheme has applications in high-channel-capacity multi-mode quantum multiplexed storage and demultiplexing in atomic ensembles.

The experimental setup is shown in Fig. 1. The output of a 795 nm external cavity diode laser is divided into two parts. One part is used for frequency locking, and the other is sent through a single-mode fiber (SMF) to improve the spatial mode. After the fiber, the beam passes through a half-wave plate and a polarization beam splitter to control the intensities of the control and probe beams. The transmitted part is chosen as the probe beam with a waist diameter of 2 mm (shown in Fig. 1(a)), while the reflected part is selected as the control beam, whose size is expanded by a telescope to three times that of the probe beam. Acousto-optic modulators (AOMs) are inserted into these two paths for pulse modulation. The spatial light modulator (HAMAMATSU-X10468-07) is used to load a grating that generates the OAM light in the probe beam path. For effective imaging, the probe beam carrying OAM passes through a 4f imaging system (not shown in Fig. 1) and propagates collinearly with the control beam. Finally, both beams are incident on the rubidium cell and couple to the corresponding Zeeman sublevels of 87Rb. The energy-level configuration is shown in Fig. 1(c). In our experiment, the laser frequency is locked to the $5S_{1/2},F=2\to 5P_{1/2},F^{\prime}=1$ transition of the 87Rb D1 line. The Rb cell has a length of 50 mm. A three-layer $\mu$-metal magnetic shield isolates the cell from ambient magnetic fields. In our experiment, the power of the control and probe beam are The temperature of the cell is set to 65°C with a controller. The control and probe beams are separated into reflected and transmitted parts using a QWP and a PBS after the Rb cell, and the reflected part is the probe beam carrying OAM. Then a Sagnac loop with two Dove prisms is applied to demultiplex the OAM beams after storage. As shown in Fig. 1(b), the OAM probe beam is injected into the Sagnac loop and divided into two parts. These two parts travel along the same loop in opposite directions (clockwise and counterclockwise) and pass through the Dove prisms with different rotation angles. The Dove prisms in the two paths produce different phases, and finally the two parts interfere with each other at the beam splitter after passing through the Sagnac loop.
The spatial intensity distribution of the retrieval signal is recorded by an intensified charge-coupled device camera with a shutter time of 2 ns (ICCD, Andor Technology), as shown in Fig. 1(b).

Now let us focus on the experimental procedure shown in Fig. 1(d). First, a probe pulse of 5 $\mu$s duration is sent to the spatial light modulator (SLM) before the rubidium cell, where a fork-shaped grating is loaded to generate an OAM light pulse. During the trailing third of the probe pulse, the control light is turned off and then switched on again by an AOM after a controllable time. The retrieved probe beam can be demultiplexed after the Sagnac loop and detected by ICCD1 or ICCD2, depending on the topological charge of the probe light pulse. Secondly, our system realizes the multiplexed memory and demultiplexing of OAM states using the OAM mode splitter. To clearly understand the principle of demultiplexing based on the Sagnac loop, we assume that the quantum state of the probe beam is $\left|l\right\rangle$ ($l$ can be any integer). The transmitted and reflected states become $\frac{1}{2}\left|l\right\rangle$ and $\frac{i}{2}\left|l\right\rangle$. The outgoing light at the ICCD1 and ICCD2 ports is expressed as

$\displaystyle T_{ICCD1}=\frac{i}{2}\exp(i(2l\alpha+\pi))\left|l\right\rangle+\frac{i}{2}\exp(i\pi)\left|l\right\rangle$ (1a)
$\displaystyle T_{ICCD2}=-\frac{1}{2}\exp(i(2l\alpha+\pi))\left|l\right\rangle+\frac{1}{2}\exp(i\pi)\left|l\right\rangle$ (1b)

The first term of Eq. 1 is the quantum state that travels along the counterclockwise path, and the second term describes the quantum state that travels along the clockwise path. $\alpha$ is the rotation angle of the Dove prism. In the experiment, we set the angle of Dove Prism 1 (DP1) to 90 degrees. Thus, the expressions above become

$\displaystyle T_{ICCD1}=-\frac{i}{2}\exp(il\pi)\left|l\right\rangle-\frac{i}{2}\left|l\right\rangle$ (2a)
$\displaystyle T_{ICCD2}=\frac{1}{2}\exp(il\pi)\left|l\right\rangle-\frac{1}{2}\left|l\right\rangle$ (2b)

According to Eq. 2, when the topological charge of the incident probe beam is an odd number, we obtain $T_{ICCD1}=0$ and $T_{ICCD2}=\left|l\right\rangle$ (up to a global phase), which means that the probe beam with quantum state $\left|l\right\rangle$ interferes constructively at the ICCD2 port and destructively at the ICCD1 port. The output is reversed when the topological charge is an even number.

## 1 Results and analysis

Figure 2: (a) The demultiplexing of the probe beam after 0.25 $\mu$s of memory. (b) The demultiplexing of the probe beam after 1 $\mu$s of memory.

After checking the system reliability again, mainly for light leakage and proper ICCD triggering, we show our experimental results in Fig. 2(a). Clearly, the retrieval signals carrying topological charges of 0, 2, and 4 are captured by ICCD1, while the signals carrying topological charges of 1 and 3 are captured by ICCD2 after a 0.25 $\mu$s storage time. The demultiplexing system still performs well after a 1 $\mu$s storage time, as shown in Fig. 2(b). It can be seen that our demultiplexing system separates the OAM modes very well with a good mode contrast. In addition, we use an oblique lens to realize self-interference of the retrieval signal in the experiment. The interference patterns are shown in the upper right corners of Fig. 2. The results show that the OAM states remain unchanged after storage, and different OAM states are separated into different output channels.
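As a quick numerical check of Eqs. (1)–(2), the short Python sketch below evaluates the normalized output intensities $|T_{ICCD1}|^{2}$ and $|T_{ICCD2}|^{2}$ for a Dove prism rotation angle of $\alpha=\pi/2$ (90 degrees) and several topological charges. It only restates the interference condition given by the equations above; the function and variable names are illustrative.

```python
import numpy as np

def port_amplitudes(l, alpha=np.pi / 2):
    """Output amplitudes at the two Sagnac ports for an input OAM state |l>, following Eq. (1)."""
    t_iccd1 = 0.5j * np.exp(1j * (2 * l * alpha + np.pi)) + 0.5j * np.exp(1j * np.pi)
    t_iccd2 = -0.5 * np.exp(1j * (2 * l * alpha + np.pi)) + 0.5 * np.exp(1j * np.pi)
    return t_iccd1, t_iccd2

for l in range(5):
    t1, t2 = port_amplitudes(l)
    print(f"l = {l}:  |T_ICCD1|^2 = {abs(t1)**2:.2f},  |T_ICCD2|^2 = {abs(t2)**2:.2f}")

# Even l -> all intensity at ICCD1, odd l -> all intensity at ICCD2,
# matching the demultiplexing behavior observed in Fig. 2.
```

In this ideal lossless case the two ports show perfect mode contrast; the measured extinction ratios reported below are lower because of atomic diffusion during storage.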
In order to study the multiplexed memory and demultiplexing characteristics of OAM probe beams with different topological charges at different storage times, and to effectively compare the retrieved mode quality at a specific storage time, the mode extinction ratio is defined as

$V_{T_{ICCD1}/T_{ICCD2}}=\left|\frac{T_{ICCD1}-T_{ICCD2}}{T_{ICCD1}+T_{ICCD2}}\right|$ (3)

We experimentally varied the switching time of the control beam and the loaded grating, and all results are shown in Fig. 3.

Figure 3: The separation contrast of different OAM probe beams at different storage times. The black, red, blue, pink and green lines correspond to the OAM probe beams with $l=0$, $l=1$, $l=2$, $l=3$ and $l=4$, respectively.

Several differences can be observed by comparing the mode extinction ratios obtained from our demultiplexing system. The Gaussian pulse ($l=0$) maintains a high mode extinction ratio of about 80 $\%$ after 2.5 $\mu$s of quantum memory, as the black line in Fig. 3 shows. The OAM pulse with $l=1$ retains an extinction ratio higher than 80 $\%$ after 1.5 $\mu$s of storage, as shown by the red line. However, the mode extinction ratio drops dramatically when the storage time exceeds 1.5 $\mu$s. The reason is that, once the optical mode carrying OAM is mapped onto the atomic spin wave, atomic motion intensifies the diffusion of the readout optical mode, which leads to a decrease in the extinction ratio after demultiplexing in our system. This influence becomes more pronounced when the optical mode carries higher-order topological charges, as shown by the blue, pink and green lines in Fig. 3.

In order to verify the multi-mode capability of multiplexed storage and demultiplexing in our experimental setup, two-mode superpositions expressed as $\left|1\right\rangle+\left|2\right\rangle$, $\left|2\right\rangle+\left|3\right\rangle$ and $\left|3\right\rangle+\left|4\right\rangle$ have been generated for multi-mode storage, and their intensity distributions are shown in the left column of Fig. 4. The figure shows that the two OAM modes injected into the memory system interfere to form a moon-like intensity distribution. After being stored for 0.25 $\mu$s, the two OAM modes with different topological charges are separated into different paths, as shown in the two right columns of Fig. 4. All even OAM modes are recorded by ICCD1, while ICCD2 records the odd OAM modes, and the separated OAM modes have a good intensity distribution. An effective OAM mode separator should not only separate the OAM modes into different channels, but also maintain the OAM mode characteristics. Therefore, oblique lenses are applied again to achieve OAM mode self-interference. The experimental results are shown in the upper right corners of Fig. 4: the two OAM states remain unchanged after demultiplexing. At this point, our experimental system can not only store a single OAM state and separate it into different ports using the OAM mode splitter, but can also store and separate two OAM modes simultaneously.

Figure 4: Multiplexing memory and demultiplexing output of two OAM modes at 0.25 $\mu$s.

In conclusion, we have demonstrated the experimental realization of multi-mode multiplexed storage and demultiplexing of OAM beams at room temperature based on atomic ensembles.
With the Sagnac interferometer embedded with two Dove prisms, we obtained the retrieval signal of the OAM modes with odd or even topological charges at different interfering ports, with a mode extinction ratio higher than 80 $\%$ at the specific storage time. Meanwhile, two superimposed OAM modes have been stored and demultiplexed successfully, and the results show that our experimental scheme can effectively separate the two OAM modes after quantum memory. OAM demultiplexing devices based on the Sagnac loop structure not only provide high separation contrast but can also be extended to higher-dimensional multimode multiplexed storage and demultiplexing of OAM modes. It is worth noting that a quantum memory with multimode capacity needs to maintain not only the stored modes but also a high fidelity. Our results may have applications in high-channel-capacity multi-mode quantum multiplexed storage and demultiplexing based on atomic ensembles.

Funding. National Natural Science Foundation of China (NSFC) (11374238, 11534008, 11574247, 11604257, 11604258, 11774286); National Science Foundation (NSF) (1602755).

Disclosures. The authors declare no conflicts of interest.

Data Availability Statement. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## References

* [1] A. I. Lvovsky, B. C. Sanders, and W. Tittel, Nature Photonics 3, 706 (2009).
* [2] R. Zhao, Y. Dudin, S. Jenkins, C. Campbell, D. Matsukevich, T. Kennedy, and A. Kuzmich, Nature Physics 5, 100 (2009).
* [3] J. P. Marangos, Journal of Modern Optics 45, 471 (1998).
* [4] D. B. Higginbottom, B. M. Sparkes, M. Rancic, O. Pinel, M. Hosseini, P. K. Lam, and B. C. Buchler, Physical Review A 86, 023801 (2012).
* [5] T. Fortier and E. Baumann, Communications Physics 2, 1 (2019).
* [6] D. Saunders, J. Munns, T. Champion, C. Qiu, K. Kaczmarek, E. Poem, P. Ledingham, I. Walmsley, and J. Nunn, Physical Review Letters 116, 090501 (2016).
* [7] L.-M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, Nature 414, 413 (2001).
* [8] Y. Shen, X. Wang, Z. Xie, C. Min, X. Fu, Q. Liu, M. Gong, and X. Yuan, Light: Science & Applications 8, 1 (2019).
* [9] A. Nicolas, L. Veissier, L. Giner, E. Giacobino, D. Maxein, and J. Laurat, Nature Photonics 8, 234 (2014).
* [10] T.-S. Yang, Z.-Q. Zhou, Y.-L. Hua, X. Liu, Z.-F. Li, P.-Y. Li, Y. Ma, C. Liu, P.-J. Liang, X. Li _et al._, Nature Communications 9, 1 (2018).
* [11] C. Wang, Y. Yu, Y. Chen, M. Cao, J. Wang, X. Yang, S. Qiu, D. Wei, H. Gao, and F. Li, Quantum Science and Technology 6, 045008 (2021).
* [12] V. Parigi, V. D’Ambrosio, C. Arnold, L. Marrucci, F. Sciarrino, and J. Laurat, Nature Communications 6, 1 (2015).
* [13] Y.-H. Ye, M.-X. Dong, Y.-C. Yu, D.-S. Ding, and B.-S. Shi, Optics Letters 44, 1528 (2019).
* [14] G. C. Berkhout and M. W. Beijersbergen, Physical Review Letters 101, 100801 (2008).
* [15] G. C. Berkhout, M. P. Lavery, M. J. Padgett, and M. W. Beijersbergen, Optics Letters 36, 1863 (2011).
* [16] H. Huang, Y. Ren, Y. Yan, N. Ahmed, Y. Yue, A. Bozovich, B. I. Erkmen, K. Birnbaum, S. Dolinar, M. Tur _et al._, Optics Letters 38, 2348 (2013).
* [17] M. Padgett, J. Arlt, N. Simpson, and L. Allen, American Journal of Physics 64, 77 (1996).
* [18] A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Nature 412, 313 (2001).
# VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision- Language Transformers Estelle Aflalo111Equal Contributions Intel Labs <EMAIL_ADDRESS>Meng Du111Equal Contributions Intel Labs, UCLA <EMAIL_ADDRESS>Shao-Yen Tseng Intel Labs <EMAIL_ADDRESS>Yongfei Liu Microsoft Research <EMAIL_ADDRESS>Chenfei Wu Microsoft Research <EMAIL_ADDRESS>Nan Duan Microsoft Research <EMAIL_ADDRESS>Vasudev Lal Intel Labs <EMAIL_ADDRESS> ###### Abstract Breakthroughs in transformer-based models have revolutionized not only the NLP field, but also vision and multimodal systems. However, although visualization and interpretability tools have become available for NLP models, internal mechanisms of vision and multimodal transformers remain largely opaque. With the success of these transformers, it is increasingly critical to understand their inner workings, as unraveling these black-boxes will lead to more capable and trustworthy models. To contribute to this quest, we propose VL- InterpreT, which provides novel interactive visualizations for interpreting the attentions and hidden representations in multimodal transformers. VL- InterpreT is a task agnostic and integrated tool that (1) tracks a variety of statistics in attention heads throughout all layers for both vision and language components, (2) visualizes cross-modal and intra-modal attentions through easily readable heatmaps, and (3) plots the hidden representations of vision and language tokens as they pass through the transformer layers. In this paper, we demonstrate the functionalities of VL-InterpreT through the analysis of KD-VLP, an end-to-end pretraining vision-language multimodal transformer-based model, in the tasks of Visual Commonsense Reasoning (VCR) and WebQA, two visual question answering benchmarks. Furthermore, we also present a few interesting findings about multimodal transformer behaviors that were learned through our tool. ## 1 Introduction Since transformers were introduced in Vaswani _et al_. [30], not only have they seen massive success in NLP applications, their impact on computer vision and multimodal problems has also become increasingly disruptive. However, the internal mechanisms of transformers that lead to such successes are not well understood. Although efforts have been made to interpret the attentions [8] and hidden states [22] of transformers for NLP, such as BERT [12], investigations in the mechanisms of vision and multimodal transformers are relatively scarce, and tools for probing such transformers are also limited. Given the fast-growing number of successful vision and multimodal transformers (_e.g_., ViT [13] and CLIP [25]), enhanced interpretability of these models is needed to guide better designs in the future. Past research has shown the importance of interpreting the inner mechanisms of transformers. For example, Clark _et al_. [8] found certain BERT attention heads specialized in handling certain syntactic relations, as well as interesting ways in which BERT attention utilizes special tokens and punctuation marks. Additionally, Lin _et al_. [21] showed that the linguistic information encoded in BERT becomes increasingly abstract and hierarchical in later layers. These studies provide valuable insights into the functions of various elements in transformer architecture for NLP, and shed light on their limitations. This paper presents VL-InterpreT111A screencast of our application is available at https://www.youtube.com/watch?v=4Rj15Hi_Pdo. 
Source code and a link to a live demo: https://github.com/IntelLabs/VL-InterpreT, which is an interactive visualization tool for interpreting the attentions and hidden representations of vision-language (VL) transformers. Importantly, it is a single system that analyzes and visualizes several aspects of multimodal transformers: first, it tracks behaviors of both vision and language attention components in attention heads throughout all layers, as well as the interactions across the two modalities. Second, it also visualizes the hidden representations of vision and language tokens as they pass through transformer layers. The main contributions of our work are: * • Our tool allows interactive visualization for probing hidden representations of tokens in VL transformers. * • Our tool allows systematic analysis, interpretation, and interactive visualization of cross- and intra-modal components of attention in VL transformers. * • As an application of VL-InterpreT, we demonstrate multimodal coreference in two analyses: 1) how contextualized tokens in different modalities referring to the same concept are mapped to proximate representations, and 2) how attention components capture the conceptual alignment within and across modalities. ## 2 Related Work As deep learning models flourish, many tools and methods have been proposed to offer insight into their inner workings. Some methods are general-purpose [24, 32], while others are nuanced for specific models such as CNNs [35, 36] or RNNs [16, 27, 17]. In transformers, the introduction of attention not only helped improve performance, but also served as an additional component towards interpretability. Interpretability of NLP transformers was initially approached through the analysis of attention to capture its alignments with syntactic or semantic relationships [31, 8]. Following this, subsequent works introduced additional functionalities including visualizations of hidden representations, task matching of attention heads, aggregate metrics, and interactive datapoint analysis [28, 15, 18, 20]. While common in allowing a user to understand the inner workings of transformers, each tool introduces novel applications. For example, LIT [28] enables probing for bias through examination of coreferences in counterfactuals. InterpreT [18] allows tracking of token representations through the layers and offers users the ability to define new metrics to identify coreference relationships in attention heads. Additionally, T3-Vis [20] focused on allowing users to improve transformer training by integrating the training dynamics in their visualization tool. Interpreting vision transformers, such as those for object detection [4, 13] or image captioning [10, 19], has also become increasingly popular. Cordonnier _et al_. [9] showed that the first few layers in transformers can learn to behave similarly to convolutional layers, and demonstrated the filter patterns through visualization of image-to-image attention. As illustrated in this paper, the attention mechanism in transformers is a natural gateway to understanding vision models, as the heatmaps of attention can be used to highlight salient image regions. Furthermore, Chefer _et al_. [7] proposed a method for visualizing self-attention models by calculating a LRP [1]-based relevancy score for each attention head in each layer, and propagating relevancies through the network. The end result is a class-specific visualization of image regions that led to the classification outcome. 
Multimodal interpretability has, up until now, mainly entailed using probing tasks to study the impact of each modality on the responses generated by the model. These probing tasks aim to quantify the information captured in the hidden representations by training classifiers, or applying metrics to embeddings at different points in a model. For instance, Cao _et al_. [3] proposed various probing tasks to analyze VL transformers, where the authors observed modality importance during inference, and identified attention heads tailored for cross-modal interactions as well as alignments between image and text representations. Additionally, other works have proposed probing tasks to interpret VL transformers for aspects such as visual-semantics [11], verb understanding [14], and other concepts such as shape and size [26]. However, a disadvantage of probing tasks is the amount of work: additional training of the classifiers is often required, and specific task objectives must be defined to capture different embedded concepts. Finally, most aforementioned works require image-caption pairs as input, and are therefore not best suited for interpreting multimodal transformers in tasks such as visual question answering. Recently, a first attempt at explaining predictions by a VL transformer was proposed in [6]. There the authors constructed a relevancy map using the model’s attention layers to track the interactions between modalities. The relevancy map is updated by a set of update rules that back-propagates relevancies of the prediction result back to the input. This map is very useful in understanding how model decisions are formed, but a more comprehensive interpretation for other aspects of VL transformers is still needed. Our proposed tool, VL-InterpreT, differs from previous works in that it interprets various aspects of multimodal transformers in a single interface. This interactive interface allows users to explore interactions between tokens in each modality from a bottom-up perspective, without tying to task-specific inputs and outcomes. To the best of our knowledge, this is the first interactive tool for interpreting multimodal transformers. ## 3 System Design ### 3.1 Workflow Figure 1: VL-InterpreT workflow. Data shown in this figure are for illustration purposes only. VL-InterpreT is designed as a two-stage workflow: First, the attentions and hidden states of a given multimodal model are generated and saved for a set of examples (in this case, 100 examples). Next, the saved data, along with the metadata of the corresponding examples, are loaded into our tool to enable visualizations of the inner workings of the model. The workflow of VL- InterpreT is shown in Figure 1, and the user interface is shown in Figure 3. Different from interpretability tools for NLP transformers, VL-InterpreT addresses the analysis and interpretation of the following properties of multimodal transformers: * • Input: In general, multimodal transformers are able to process inputs originating from different modalities, _e.g_., video, audio, or language. Here, we only consider VL transformers where the input is composed of visual and textual tokens. These tokens are mapped into a shared space, allowing for concept-level alignment between the two modalities. * • Attention: Because of the bi-modal nature of the input, the resulting attention can be split into four components: language-to-language, vision-to- vision, vision-to-language, and language-to-vision. An illustration of these components is shown in Figure 1. 
* • Hidden states: In each layer, the transformer produces as many hidden states as the number of input tokens. Each input token is embedded as a _d_ -dimension vector after processed by each layer. Figure 2: The VL-InterpreT user interface (rearranged for print). Figure 3: The Attention Head Summary plot colored by the metrics selected from the dropdown menu ### 3.2 Visualizations #### 3.2.1 Attention heads components In this section, we describe each attention component and how VL-InterpreT visualizes them interactively. The attention matrix in a VL transformer, as they are loaded in VL-InterpreT, is of size $(N_{layers},N_{heads},L_{v}+L_{l},L_{v}+L_{l})$, where $N_{layers}$ and $N_{heads}$ correspond to the number of layers and heads, respectively, $L_{v}$ to the number of visual tokens, and $L_{l}$ to the number of text tokens. * • The Language-to-Vision attention component (L2V) of size $(N_{layers},N_{heads},L_{l},L_{v})$ reflects the text tokens’ dependency on the visual tokens. This partition of the attention contains the attention scores calculated from the dot product of the query vector based on the selected image patch and the key vectors from text tokens. These attention scores are the weights given to value vectors of text tokens when summing for the updated representation of a specific image patch in the next layer. A user can select any attention head and image patch in the interface of our tool, and the corresponding L2V attention weights are displayed as a heatmap overlaid on the input text, to visualize how much each text token contributes to the updated image patch representation (see Figure 7). * • The Vision-to-Language attention component (V2L) of size $(N_{layers},N_{heads},L_{v},L_{l})$ reflects the visual tokens’ dependency on the text tokens. This component of attention in a VL transformer arises through the query-key dot product where the query vectors are computed from text token embeddings, and the key vectors are computed from the image token embeddings. A user can select a specific attention head and a text token, and the corresponding attention scores will be overlaid onto the image as a heatmap (see Figure 6). This visualization helps users understand the relative contributions of various parts of the image to the updated representations of the text tokens. The interactive application also allows users to play an animation, in which the heatmap over the image is automatically displayed in a sequence for each word. * • The Language-to-Language attention component (L2L) of size $(N_{layers},N_{heads},L_{l},L_{l})$ corresponds to the attention mapping in NLP Transformers. This component visualizes how all text tokens attends to each individual token of the input sentence (see Figure 6). * • The Vision-to-Vision attention component (V2V) of size $(N_{layers},N_{heads},L_{v},L_{v})$ is analogous to language-to-language attention, but in the visual space. It represents the attention between one visual tokens and all visual tokens, including itself. Similar to the V2L component, an attention vector (of size $(L_{v},1)$) here in a given head can also be translated into a heatmap and overlaid onto the image. This visualization is useful for identifying the contributions of different image patches to the updated representation of the image patch selected by a user. #### 3.2.2 Attention head summary This functionality allows users to visualize a head summary plot containing statistics of the attentions calculated for all heads and layers. 
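To make the shapes of the four attention components and of this per-head summary concrete, the following is a minimal NumPy sketch. The array layout matches the sizes quoted above, but the function and variable names are illustrative and are not taken from the VL-InterpreT code base.

```python
import numpy as np

def split_attention(attn, num_visual):
    """Split a joint attention tensor of shape
    (n_layers, n_heads, L_v + L_l, L_v + L_l) into its four modality blocks,
    where the first num_visual positions are visual tokens and the rest are text tokens."""
    v = slice(0, num_visual)        # visual token positions
    t = slice(num_visual, None)     # language token positions
    return {
        "v2v": attn[:, :, v, v],    # (n_layers, n_heads, L_v, L_v)
        "v2l": attn[:, :, v, t],    # (n_layers, n_heads, L_v, L_l)
        "l2v": attn[:, :, t, v],    # (n_layers, n_heads, L_l, L_v)
        "l2l": attn[:, :, t, t],    # (n_layers, n_heads, L_l, L_l)
    }

def head_summary(component):
    """Mean attention per (layer, head): an (n_layers, n_heads) map for the head-summary plot."""
    return component.mean(axis=(2, 3))

# Toy example: 12 layers, 12 heads, 10 visual tokens + 6 text tokens.
rng = np.random.default_rng(0)
attn = rng.random((12, 12, 16, 16))
attn /= attn.sum(axis=-1, keepdims=True)     # rows sum to 1, as softmaxed attention would

parts = split_attention(attn, num_visual=10)
summary = head_summary(parts["v2l"])         # e.g. a "mean image-to-text attention" style metric
layer, head = np.unravel_index(summary.argmax(), summary.shape)
print(f"Highest mean V2L attention at layer {layer}, head {head}")
```

Other summary metrics, such as restricting the mean to cross-modal blocks or excluding particular tokens, follow the same pattern of reducing the last two dimensions of the chosen component.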
For an attention matrix of size $(N_{layers},N_{heads},L_{v}+L_{l},L_{v}+L_{l})$, the head summary computes statistical metrics over the last two dimensions, resulting in a plot of size $(N_{layers},N_{heads})$. * • The mean attention in an attention head is generated by calculating the average of the corresponding attention matrix, while some tokens can be excluded from this calculation. A user can select a summary metric from the list of options. Each metric can be restricted to different components of the attention. For example, instead of computing the mean of all input tokens regardless of modality, the calculation can be limited to the vision-to- language part or the vision-to-vision part of the attention. Additionally, users may also focus on both cross-modal components (i.e., V2L and L2V averaged) or both intra-modal ones (i.e., V2V and V2L averaged). See the drop down menu in Figure 3. * • Custom metrics: Based on users’ interests, custom metrics can be integrated in this plot to show relevant attention heads. For example, we created a custom metric to look for the attention heads responsible for aligning the same person between vision and language modalities, based on the V2L component. This metric, labeled Spearman correlation b/w person attention and segment (see Figure 3), is the Spearman correlation between the panoptic segmentation mask for a person in the image (generated by Maskformer model [2]), and the attention heatmap to the corresponding person token in the Vision-to-Language attention component. This metric allows a user to identify attention heads that perform a function similar to panoptic segmentation for people (see Figure 4). Figure 4: Custom metric: Average correlation between the V2L attention to the [PERSON] tokens (bottom right) and the person’s panoptic segmentation mask (top right). Finally, for each metric, a mean is automatically computed for each layer and shown in the rightmost column of the attention head summary (see Figures 3 and 4). This column visualizes the general behavior in each layer for the selected metrics, and shows its evolution throughout the layers of the transformer. #### 3.2.3 Hidden state representation For each input token, a _d-_ dimensional hidden representation is produced by the transformer after each layer (in our setup, $d=768$). This pool of hidden representations is then filtered by two criteria: (1) if the related text is a stop word and (2) if the related image patch comes from a part of the background (e.g., wall, ground, etc.). In order to visualize the remaining hidden representations in a readable form, t-distributed Stochastic Neighbor Embedding (t-SNE) [29] was applied to reduce dimensions and create disjoint t-SNE spaces for different layers. This way, given a selected example, VL- InterpreT tracks the hidden representations both before the first layer and after each subsequent layer, and plots them in two-dimensional spaces. Figure 8(a) shows the data points representing the visual (in blue) and textual (in red) tokens from a given example. When hovering on a data point from language, the corresponding text is displayed. When hovering on a data point representing visual tokens, the image is shown with a highlighted blue patch corresponding to the visual token. Further observations on the hidden states often reveal the concept-level vision-text alignments that are learnt in this multimodal setup (see Section 4.2.3). 
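As an illustration of this per-layer projection, the snippet below sketches how hidden states could be reduced with scikit-learn's t-SNE, producing one disjoint two-dimensional embedding per layer. The array shapes, the token-filtering mask, and the t-SNE settings are assumptions for demonstration and do not reproduce the exact preprocessing used by VL-InterpreT.

```python
import numpy as np
from sklearn.manifold import TSNE

n_layers, n_tokens, d = 13, 40, 768           # 12 layers plus the input embeddings; toy sizes
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(n_layers, n_tokens, d)).astype(np.float32)
keep = np.ones(n_tokens, dtype=bool)          # mask that would drop stop words / background patches

# One disjoint 2-D t-SNE space per layer, as in the hidden-state view.
embeddings_2d = []
for layer_hidden in hidden_states:
    tsne = TSNE(n_components=2, perplexity=10, init="pca", random_state=0)
    embeddings_2d.append(tsne.fit_transform(layer_hidden[keep]))

print(embeddings_2d[0].shape)                 # (n_kept_tokens, 2) for the embedding layer
```

A nearest-neighbor search over these representations is one way to implement the cross-modal "closest token" lookup described next, although the exact metric used by the tool is not specified in this section.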
To help further understand this alignment, VL-InterpreT allows a user to select a token (text or image patch) from a given example, and shows the nearest token in the other modality from the whole subset of examples, marked with a green star. ## 4 Case Studies To demonstrate the functionalities of VL-InterpreT, we analyze an end-to-end VL transformer model, KD-VLP [23], on two benchmarks: Visual Commonsense Reasoning (VCR) [34] and WebQA [5]. Nonetheless, our tool is generally applicable to a variety of multimodal transformer configurations and types of VL datasets. ### 4.1 Model The KD-VLP model used in our case study is a transformer for end-to-end vision-language processing. This model utilizes a ResNet backbone for visual inputs, and is pretrained using text-oriented, image-oriented, and cross-modal tasks in the form of masked language modeling, object-guided masked vision modeling, and phrase-region alignment, respectively. Depending on the application, the KD-VLP model can be fine-tuned for classification or generation tasks given bi-modal input of image and text. ### 4.2 Analysis on VCR The VCR benchmark consists of 290K multiple choice QA problems derived from 110K movie scenes. This dataset is uniquely valuable in that it requires higher-order cognition and commonsense reasoning about the world. Given an image and a question, the objective is to select an appropriate answer from four possible choices, and then provide the rationale. To predict the correct answer and rationale, the KD-VLP model is fed with the image, the question, and each answer or rationale individually. The predicted answer $a_{p}$ is the answer/rationale choice that receives the highest probability score, _i.e_., $a_{p}=\underset{a_{j}}{\arg\\!\max}~{}f(v,q,a_{j})$ (1) where $f$ is the KD-VLP model, $v$ is the image, $q$ the question, and $\\{a_{j}|~{}j\in[1,2,3,4]\\}$ are the possible answer choices. The functionalities of VL-InterpreT are demonstrated using example val-445 from the VCR validation set, or example ID 89 in the VL-InterpreT live demo. This data sample, shown in Figure 6, comprises an image showing a little girl running to a man and a woman in a garden. The question is: Where is [PERSON_0] running to? where [PERSON_0] corresponds to the little girl on the right side of the image (the locations of persons are provided in the VCR dataset and also passed to the model). We also analyze the answer predicted by the model (which in this case is correct): [PERSON_0] is running to help [PERSON_1] and [PERSON_2] with the plants. where [PERSON_1] and [PERSON_2] refer to the couple. The following analyses on this example will highlight the visualization capabilities that our application provides. #### 4.2.1 Attention head summary This functionality allows a user to identify interesting heads based on various metrics. By selecting Mean cross-modal attention from the dropdown menu (see Figure 3), a user can identify attention heads specialized in cross- modal attention. For instance, in Figure 5 the eighth attention head in layer 11 (denoted as (11, 8)) has, on average, the highest attention across modalities. Thus, we focus on this specific head and plot its cross-modal (V2L and L2V) attention. Apart from cross-modal attention, specific heads could also been identified through Mean image-to-text attention for analyzing V2L attentions, or Mean text-to-image attention for L2V attentions. 
Analogous procedure also applies to L2L and V2V attention components – for instance, the metric Mean image-to-image attention (without self) shows the heads’ attentions from every image patch to the every other image patches, excluding each patch itself and its neighbors. As described in section 3.2.2, a custom metric was used to identify attention heads with V2L components for [PERSON] tokens highly correlated with their panoptic segmentations. As shown in Figure 4, head (8, 9) scores particularly high in this metric. With head (8, 9) selected, the V2L attention attention heatmap for the person token (Figure 4, bottom right) indeed aligns well with the panoptic segmentation of the corresponding woman (Figure 4, top right). Furthermore, this correlation metric exhibits a trend where middle and later layers tend to have higher correlation coefficients on average (above 0.5) than early layers (less than 0.1), showing the evolution of attention patterns as the transformer layers grow deeper. Figure 5: Attention Head Summary #### 4.2.2 Attention components As described in Section 3.2.1, Figure 6 shows the heatmaps over the image and the text generated by the Language-to-Language and Vision-to-Language attention components of head (11, 8). This figure shows how attentions differ for two selected tokens: [PERSON_0] and plants. The V2L components are represented as heatmaps over the images at the bottom right of Figures 6(a) and 6(b). It can be observed that the attention is concentrated on regions corresponding to the text, namely the little girl for the attention to [PERSON_0] (Figure 6(a)) and the plants for the attention to the plants token (Figure 6(b)). The L2L components are on the top right of these figures. For both selected text tokens, related text tokens (including themselves) are highlighted in the heatmaps. For the example in Figure 6(a), the attention to [PERSON_0] is mostly from [PERSON_0], running to, and running to help [PERSON_1]. In the other example in Figure 6(b), the attention to plants is mostly from the plants token itself. Figure 7 visualizes the attentions to vision tokens, i.e., the Language-to- Vision and the Vision-to-Vision attentions. In this example, we select an image patch that is a part of the plants on the left image for analysis. Accordingly, the L2V component (top right of Figure 7) shows that the attention to this image patch is mainly from the [CLS] token as well as the plants token, which aligns with the concept behind the selected region. As for the V2V component (bottom right of the figure), it is also interesting that most of the regions containing plants in the image attend to this specific patch of plants, which again shows a conceptual alignment. In summary, this example shows evidence of a unified concept of “plants”, where the attention has a consistent pattern between intra- and cross-modal components. (a) Attention to the text token [PERSON_0] (b) Attention to the text token plants Figure 6: Two selected text tokens and the corresponding L2L (top right) and V2L (bottom right) attention to them. Figure 7: Attentions to vision for a selected image patch in the left image. The heatmaps show the attention from the text (top right) as well as other patches in the image (bottom right). #### 4.2.3 Hidden states Figure 8 demonstrates the capabilities of the VL-InterpreT for visualizing hidden states. 
When selecting the text token plants (marked in orange) from the previous example val-445 (ex 89), Figure 8(b) shows that in layer 11, the closest image patch from the whole pool of examples is the [IMG_52] (marked with a green star) from val-495 (ex 99). It is interesting that even when it comes from a very different example, this token also refers to the plants in the image. Other than layer 11, users can select different layers on the right to see other closest image tokens to plants throughout layers. (a) Hovering over a data point representing a visual token displays the corresponding image patch. (b) Clicking on the text token plants marks it orange, and displays the closest image token from the whole dataset, marked as a green star. Figure 8: t-SNE plot from the hidden representations of the selected example. (a) Predicted: A cross appears at the top of the dome of Saint Peter’s Basilica. (b) Predicted: There are 6 pillars in front of the Façade of the Kurhaus. (c) Predicted: The center of both the Aster amellus and the Wood Anemone is yellow. (d) Predicted: The ring around the eye of Trogon surrucura is red. (e) Predicted: The Traditional wedding attire of the Yoruba culture in Nigeria is white. Correct color: maroon. (f) Predicted: The olive that goes in Moroccan Tagine is yellow. Correct color: green. Figure 9: V2L attention heatmaps for WebQA examples and generated answers. Figures have been rearranged for print. (a)-(d) are correct predictions and (e) and (f) are incorrect. Sections 4.2.2 and 4.2.3 show evidence of alignment between visual and textual concepts of plants. We see that the concept of ”plants” in both modalities and across examples is captured by proximate representations. As such, VL- InterpreT allows studying how such sense of objectness emerges by probing the attentions and hidden states across all layers of the transformer. ### 4.3 Analysis on WebQA The WebQA benchmark focuses on multimodal, multihop reasoning for open-domain question answering. This benchmark emulates a knowledge-seeking query to a search engine for information which may be contained in either text-based articles or images. Given a query, the goal is to identify which information is relevant across modalities, and to generate a full natural language answer based on the selected sources. The dataset contains 50k QA pairs, half of which are text-based and the other half are image-based. We use the KD-VLP model to first select relevant sources using a classification head. Then, by adding a decoder in a Fusion-in-Decoder manner as in [33], a predicted answer is generated based on the retrieved sources. Similar behaviors can be studied for WebQA as in the previous section. The following analyses will focus on the L2V attention in the KD-VLP encoder, as these visualizations helps in understanding how the model generates answers. For this analysis, attention head (11, 5), identified in the same way as previous analyses, was selected. Figure 9 shows the V2L attention from image to the highlighted word above the pictures. These heatmaps show that the model attends more to the regions that help answer the question. For instance, when the question is about pillars, Figure 9(b) shows that the model attention comes particularly from the columns in the picture. In order to determine the color of the center of the flower in Figure 9(c), the model exhibits attention from the flower center in the image and generates the correct color (yellow). 
Furthermore, these visualizations can also be generated for incorrectly answered questions, providing insight into why incorrect answers were generated. For example, one can see why the model answered incorrectly in Figures 9(e) and 9(f): in 9(e), the attire that the question asks about was misidentified. That is, instead of getting attention from patches of the maroon dress, the model focuses on the white feathers. A similar behavior is also seen in 9(f), where the model fails to locate the olives but focuses on the yellow vegetable in the tagine. In both cases, the model answers according to the identified object color (i.e., “white” feathers and “yellow” olive). These interpretations of attention help users identify why a model fails in certain cases, and provide guidance for future efforts in improving model accuracy.

## 5 Conclusions and Future Directions

In this paper we presented VL-InterpreT, an interactive visualization tool for interpreting vision-language transformers. This tool allows for interactive analysis of attention and hidden representations in each layer of any VL transformer. VL-InterpreT can be used to freely explore the interactions between and within different modalities to better understand the inner mechanisms of a transformer model, and to obtain insight into why certain predictions are made. Through case studies, we demonstrated how VL-InterpreT can be used to validate the learning of cross-modal concepts, as well as to “explain” cases of failure. In the latest version, VL-InterpreT is able to run a live model to process user-generated examples in real time. This allows interactive manipulations of inputs, including both text and image, to study their effects on the attention and hidden representations. For future work, we would like to include aggregated metrics and visualizations over multiple samples to obtain a more comprehensive understanding of model operation. In addition, we hope to experiment with additional functionalities that will assist users in interpreting multimodal transformers, and continue to enhance this interpretability tool.

## References

* [1] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015. * [2] Bowen Cheng, Alexander G. Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. 2021. * [3] Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, and Jingjing Liu. Behind the scene: Revealing the secrets of pre-trained vision-and-language models. In European Conference on Computer Vision, pages 565–580. Springer, 2020. * [4] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213–229. Springer, 2020. * [5] Yingshan Chang, Mridu Narang, Hisami Suzuki, Guihong Cao, Jianfeng Gao, and Yonatan Bisk. Webqa: Multihop and multimodal qa. arXiv preprint arXiv:2109.00590, 2021. * [6] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 397–406, October 2021. * [7] Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 782–791, 2021. * [8] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? An analysis of BERT’s attention. In BlackboxNLP@ACL, 2019. * [9] Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In International Conference on Learning Representations, 2020. * [10] Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10578–10587, 2020. * [11] Adam Dahlgren Lindström, Johanna Björklund, Suna Bensch, and Frank Drewes. Probing multimodal embeddings for linguistic properties: the visual-semantic case. In Proceedings of the 28th International Conference on Computational Linguistics, pages 730–744, Barcelona, Spain (Online), Dec. 2020\. International Committee on Computational Linguistics. * [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. * [13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. * [14] Lisa Anne Hendricks and Aida Nematzadeh. Probing image-language transformers for verb understanding. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3635–3644, Online, Aug. 2021. Association for Computational Linguistics. * [15] Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 187–196, Online, July 2020. Association for Computational Linguistics. * [16] Bo-Jian Hou and Zhi-Hua Zhou. Learning with interpretable structure from gated rnn. IEEE transactions on neural networks and learning systems, 31(7):2267–2279, 2020. * [17] Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015. * [18] Vasudev Lal, Arden Ma, Estelle Aflalo, Phillip Howard, Ana Simoes, Daniel Korat, Oren Pereg, Gadi Singer, and Moshe Wasserblat. InterpreT: An interactive visualization tool for interpreting transformers. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 135–142, Online, Apr. 2021. Association for Computational Linguistics. * [19] Guang Li, Linchao Zhu, Ping Liu, and Yi Yang. Entangled transformer for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8928–8937, 2019. * [20] Raymond Li, Wen Xiao, Lanjun Wang, Hyeju Jang, and Giuseppe Carenini. T3-vis: visual analytic for training and fine-tuning transformers in NLP. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 220–230, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics. * [21] Yongjie Lin, Yi Chern Tan, and Robert Frank. Open sesame: Getting inside BERT’s linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy, Aug. 2019. Association for Computational Linguistics. * [22] Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. * [23] Yongfei Liu, Chenfei Wu, Shao-Yen Tseng, Vasudev Lal, Xuming He, and Nan Duan. KD-VLP: Improving end-to-end vision-and-language pretraining with object knowledge distillation, 2021. * [24] Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. Interpretml: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223, 2019. * [25] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021. * [26] Emmanuelle Salin, Badreddine Farah, Stéphane Ayache, and Benoit Favre. Are vision-language transformers learning multimodal representations? A probing perspective. In AAAI 2022, 2022. * [27] Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfister, and Alexander M Rush. Lstmvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE transactions on visualization and computer graphics, 24(1):667–676, 2017. * [28] Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 107–118, Online, Oct. 2020. Association for Computational Linguistics. * [29] Laurens van der Maaten. Learning a parametric embedding by preserving local structure. In Proceeding of AISTATS 2009., 2009. * [30] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. * [31] Jesse Vig and Yonatan Belinkov. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63–76, 2019. * [32] James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, and Jimbo Wilson. The what-if tool: Interactive probing of machine learning models. IEEE transactions on visualization and computer graphics, 26(1):56–65, 2019. 
* [33] Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. KG-FiD: Infusing knowledge graph in fusion-in-decoder for open-domain question answering, 2022. * [34] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6720–6731, 2019. * [35] Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, and Song-Chun Zhu. Interpreting CNN knowledge via an explanatory graph. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [36] Quanshi Zhang, Yu Yang, Haotian Ma, and Ying Nian Wu. Interpreting CNNs via decision trees. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6261–6270, 2019.
space in a linear form, and in $\text{rank}_{1,w}$ it is essentially done in the original space of the variables through nonlinear inequalities. In the first experiment we set $\lambda=10^{-2},\mu=10^{-4},p=50,d=1325,\pi=0.1$ and $N\in\\{10,50,100,200\\}$. In Figure 1 we report the optimality gap and solution time for different convex relaxations. In determining the optimality gap, we compare the objective value of the given relaxation against the best known feasible solution for the instance (among the ones found by the B&B method applied to any of the formulations of (35) presented in (41), (47) and (56)). This solution corresponds to the optimal solution if the time limit has not been reached in the B&B algorithm, or the best feasible solution reported by MOSEK if the time limit has been reached.

The results in Figure 1 suggest that the qualities of the convex relaxations based on the (56) formulation are the best. In particular, the $\text{rank}_{1}^{+}\\!$ relaxation attains an average gap smaller than $5\%$. Moreover, adding valid inequalities to the sets $\Delta_{b}$ and $\Omega_{b}$ significantly improves the quality of the $\text{rank}_{1}\\!$ and $\text{rank}_{1}^{+}\\!$ relaxations. For example, the $\text{rank}_{1,v}^{+}\\!$ relaxation attains an average gap smaller than $1\%$, which is $5$ times smaller than the gap attained by $\text{rank}_{1}^{+}\\!$. However, adding these valid inequalities comes with a computational downside. Specifically, the optimization problems now involve more constraints, which result in longer solution times. It is also worth noting that the optimality gap of the $\text{rank}_{1}\\!$ relaxations is significantly smaller than that of the separable and natural relaxations, albeit at the expense of relatively longer solution times. Finally, we highlight that the optimality gaps of the continuous relaxations from $\text{rank}_{1,v}\\!$ and $\text{rank}_{1,w}\\!$, with the latter being inspired by (Wei et al., 2022b, Theorem 1), are identical. This is due to the fact that when the variables ${\bm{w}}$ are relaxed to be continuous, the projection of $\text{rank}_{1,v}\\!$ in the original space leads to precisely the same inequalities as in $\text{rank}_{1,w}\\!$. Nonetheless, it takes roughly twice as long to solve the $\text{rank}_{1,w}\\!$ relaxation as the $\text{rank}_{1,v}\\!$ relaxation. This is expected as the $\text{rank}_{1,w}\\!$ relaxation introduces a considerably larger number of constraints and variables compared to the $\text{rank}_{1,v}\\!$ relaxation. It is also important to note that a complete implementation of (Wei et al., 2022b, Theorem 1) requires using a characterization of $\operatorname{conv}({\cal Z}\backslash\\{\bm{0}\\})$ which may possibly involve an exponential number of constraints. In contrast, the $\text{rank}_{1,v}\\!$ relaxation handles this complexity through the use of binary variables ${\bm{w}}$, making the $\text{rank}_{1,v}\\!$ relaxation much more applicable in practice.

Figure 1: Comparison of different continuous relaxations for $p=50$ and $\pi=0.1$ as $N$ varies.

We next examine the B&B performance of these alternative formulations of (35), in which we always keep the variables ${\bm{z}}$ as binary but create two variants for each of the $\text{rank}_{1}\\!$ and $\text{rank}_{1}^{+}\\!$ formulations based on whether the variables ${\bm{w}}$ are kept as binary or relaxed to be continuous:

* • In _separable reformulation_, we consider (41).
* • In _$\text{rank}_{1}\\!$ reformulation_, we consider (47).
* • In _$\text{rank}_{1,r}\\!$ reformulation_, we replace $\Delta_{b}$ in (47) with $\operatorname{relax}_{{\bm{w}}}(\Delta_{b})$.
* • In _$\text{rank}_{1,v}\\!$ reformulation_, we replace $\Delta_{b}$ in (47) with $\Delta_{b}\cap\Delta_{v}$.
* • In _$\text{rank}_{1,r,v}\\!$ reformulation_, we replace $\Delta_{b}$ in (47) with $\operatorname{relax}_{{\bm{w}}}(\Delta_{b}\cap\Delta_{v})$.
* • In _$\text{rank}_{1,w}\\!$ reformulation_, we consider (4).
* • In _$\text{rank}_{1}^{+}\\!$ reformulation_, we consider (56).
* • In _$\text{rank}_{1,r}^{+}\\!$ reformulation_, we replace $\Omega_{b}$ in (56) with $\operatorname{relax}_{{\bm{w}}}(\Omega_{b})$.
* • In _$\text{rank}_{1,v}^{+}\\!$ reformulation_, we replace $\Omega_{b}$ in (56) with $\Omega_{b}\cap\Omega_{v}$.
* • In _$\text{rank}_{1,r,v}^{+}\\!$ reformulation_, we replace $\Omega_{b}$ in (56) with $\operatorname{relax}_{{\bm{w}}}(\Omega_{b}\cap\Omega_{v})$.

Figure 2 reports the true optimality gap computed using the best known heuristic solution (in our experiments this was usually obtained by a variant of the (47) formulation) as discussed earlier, the optimality gap reported by the solver, the number of B&B nodes, and the solution time.

Figure 2: The B&B performance for $p=50$ and $\pi=0.1$ as $N$ varies.

We start by analyzing the true optimality gap and solver gap in Figure 2. As also seen in Figure 1, the formulations based on (56) consistently outperform those based on (47) in terms of the true optimality gap. This is primarily because the relaxations based on (56) can produce high-quality lower bounds, even though these formulations take longer to solve and thus result in significantly fewer nodes explored in B&B. Among the $\text{rank}_{1}\\!$ and $\text{rank}^{+}_{1}\\!$ variants, the ones where the variables ${\bm{w}}$ are kept binary perform better in terms of the optimality gaps than the ones where ${\bm{w}}$ are relaxed to be continuous, even though keeping ${\bm{w}}$ binary results in fewer B&B nodes. This is because when ${\bm{w}}$ are binary, the solver is able to leverage the structure of the sets $\Delta_{b}$ and $\Omega_{b}$ to generate further cuts, which results in higher quality relaxations. Among the $\text{rank}_{1}\\!$ variants, the performance of $\text{rank}_{1,v}\\!$ seems to be the best and that of $\text{rank}_{1,w}\\!$ the worst. As the continuous relaxation of $\text{rank}_{1,w}$ takes longer to solve compared to those based on (47), its B&B can explore only a smaller number of nodes and, therefore, results in a worse optimality gap. When we examine the gaps reported by the solver, we still observe the same phenomena, but this time the associated gaps reported by the solver are considerably larger than the true optimality gaps. This is because, for this class of problems, the heuristic methods utilized by the solver are not very advanced, and good quality feasible solutions of a formulation are often found at integral nodes in the B&B tree, so essentially by chance. Therefore, the B&B procedure for the formulations that admit quick-to-solve node relaxations often results in high quality feasible solutions. Consequently, in the case of expensive lifted formulations such as (56), while the actual optimality gaps are very close to zero, the solver is unable to report this gap due to the inferior quality of the feasible solution found in the associated B&B tree.
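To make the two gap notions above concrete, the sketch below computes a true optimality gap (measured against the best known feasible objective across all formulations) and a solver-style gap (measured against the incumbent found in that formulation's own B&B tree) for a minimization problem. The numerical values are made up purely for illustration.

```python
def optimality_gap(best_feasible_obj, lower_bound):
    """Relative gap between a feasible objective value and a lower bound
    for a minimization problem, reported as a percentage."""
    return 100.0 * (best_feasible_obj - lower_bound) / abs(best_feasible_obj)

# Made-up values for one instance of one formulation.
lower_bound_at_root = 96.0      # bound from the continuous relaxation
incumbent_this_tree = 104.0     # best feasible solution found in this B&B tree
best_known_overall  = 100.0     # best feasible solution across all formulations

true_gap   = optimality_gap(best_known_overall, lower_bound_at_root)
solver_gap = optimality_gap(incumbent_this_tree, lower_bound_at_root)
print(f"true gap: {true_gap:.1f}%, solver-reported gap: {solver_gap:.1f}%")
```

With a strong lower bound but a poor incumbent, the solver-reported gap stays large even when the true gap is small, which is exactly the behavior described above for the lifted formulations.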
To address this issue, BARON introduces the concept of “relaxation-only constraints” (Sahinidis, 1996, accessed November 13, 2023). These constraints are active during the relaxation step, but are disregarded during the local search step. To the best of our knowledge, MOSEK 10 does not support this feature, and thus in our figures we report both the true optimality gap and the solver-reported gap for the formulations tested.

In the second experiment we set $\lambda=10^{-2},\mu=10^{-4},N=100,\pi=0.1$ and $p\in\\{10,20,30,40,50\\}$, which translates to $d\in\\{65,230,495,860,1325\\}$. Figure B.1 and Figure B.2 in Appendix B compare the quality of different continuous relaxations and report the performance of the B&B algorithm, respectively. The observations from Figure B.1 are very similar to the ones in Figure 1; thus we omit this discussion for brevity. Despite the similarity between the observations in Figure B.2 and Figure 2, it is worth noting that when $p=10$, the B&B performance is slightly different. In particular, when $p=10$, all methods except $\text{rank}_{1,r}^{+}\\!$ and $\text{rank}_{1,r,v}^{+}\\!$ can solve the optimization problem in less than $20$ seconds. This implies that the integer programs may be relatively simple to solve when the dimension is small. As a result, stronger relaxations may not be needed when dealing with small scale instances.

In the last experiment we set $\lambda=10^{-2},\mu=10^{-4},p=50,d=1325,N=100$ and $\pi\in\\{0.1,0.3,0.5,0.7,0.9\\}$; see Figure B.3 and Figure B.4 in Appendix B for the quality of different continuous relaxations and the B&B performance. In all of these instances the time limit was reached in B&B, so the solution time is not reported in Figure B.4. As the value of $\pi$ increases, we notice that the optimality gap of the $\text{rank}_{1}\\!$ relaxation gets closer to that of the separable relaxation. This is expected as the binary variable $w_{j}$ models whether ${\bm{a}}_{j}^{\top}{\bm{x}}=0$ or not. When $\pi$ is large, the probability of such an event is low. As a result, $w_{j}$ is assigned a value of $1$ with high probability, which makes the $\text{rank}_{1}\\!$ relaxations much less effective. As a final observation, we note that the value of $\pi$ seems not to affect the quality of the $\text{rank}_{1}^{+}\\!$ relaxations; in particular, these relaxations continue to be of high quality even for high values of $\pi$.

##### Acknowledgements

This research is supported by Early Postdoc Mobility Fellowship SNSF grant P2ELP2_195149 and AFOSR grant FA9550-22-1-0365.

## References

* Aktürk et al. (2009) M. S. Aktürk, A. Atamtürk, and S. Gürel. A strong conic quadratic reformulation for machine-job assignment with controllable processing times. _Operations Research Letters_ , 37(3):187–191, 2009. * Atamtürk and Gómez (2018) A. Atamtürk and A. Gómez. Strong formulations for quadratic optimization with M-matrices and indicator variables. _Mathematical Programming_ , 170(1):141–176, 2018. * Atamtürk and Gómez (2019) A. Atamtürk and A. Gómez. Rank-one convexification for sparse regression. _arXiv:1901.10334_ , 2019. * Atamtürk and Gómez (2020) A. Atamtürk and A. Gómez. Safe screening rules for $\ell_{0}$-regression from perspective relaxations. In _International Conference on Machine Learning_ , pages 421–430, 2020. * Atamtürk and Gómez (2022) A. Atamtürk and A. Gómez. Supermodularity and valid inequalities for quadratic optimization with indicators. _Mathematical Programming (Forthcoming)_ , pages 1–44, 2022. * Atamtürk et al. (2021) A.
Atamtürk, A. Gómez, and S. Han. Sparse and smooth signal estimation: Convexification of $\ell_{0}$-formulations. _Journal of Machine Learning Research_ , 22:52–1, 2021\. * Bacci et al. (2019) T. Bacci, A. Frangioni, C. Gentile, and K. Tavlaridis-Gyparakis. New MINLP formulations for the unit commitment problems with ramping constraints. _Optimization Online_ , 2019. * Behdin and Mazumder (2021) K. Behdin and R. Mazumder. Archetypal analysis for sparse nonnegative matrix factorization: Robustness under misspecification. _arXiv:2104.03527_ , 2021. * Ben-Tal and Nemirovski (2001) A. Ben-Tal and A. Nemirovski. _Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications_. SIAM, 2001. * Bertsimas et al. (2021a) D. Bertsimas, R. Cory-Wright, and J. Pauphilet. A new perspective on low-rank optimization. _arXiv:2105.05947_ , 2021a. * Bertsimas and King (2016) D. Bertsimas and A. King. OR forum—an algorithmic approach to linear regression. _Operations Research_ , 64(1):2–16, 2016. * Bertsimas et al. (2016) D. Bertsimas, A. King, and R. Mazumder. Best subset selection via a modern optimization lens. _Annals of Statistics_ , 44(2):813–852, 2016\. * Bertsimas et al. (2021b) D. Bertsimas, J. Pauphilet, and B. Van Parys. Sparse classification: a scalable discrete optimization perspective. _Machine Learning_ , 110(11):3177–3209, 2021b. * Bertsimas and Van Parys (2020) D. Bertsimas and B. Van Parys. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. _Annals of Statistics_ , 48(1):300–323, 2020\. * Bien et al. (2013) J. Bien, J. Taylor, and R. Tibshirani. A LASSO for hierarchical interactions. _Annals of Statistics_ , 41(3):1111, 2013. * Bienstock (1996) D. Bienstock. Computational study of a family of mixed-integer quadratic programming problems. _Mathematical Programming_ , 74(2):121–140, 1996\. * Ceria and Soares (1999) S. Ceria and J. Soares. Convex programming for disjunctive convex optimization. _Mathematical Programming_ , 86(3):595–614, 1999\. * Combettes (2018) P. L. Combettes. Perspective functions: Properties, constructions, and examples. _Set-Valued and Variational Analysis_ , 26(2):247–264, 2018. * Cozad et al. (2014) A. Cozad, N. V. Sahinidis, and D. C. Miller. Learning surrogate models for simulation-based optimization. _AIChE Journal_ , 60(6):2211–2227, 2014. * Cozad et al. (2015) A. Cozad, N. V. Sahinidis, and D. C. Miller. A combined first-principles and data-driven approach to model building. _Computers & Chemical Engineering_, 73:116–127, 2015\. * Dantzig and Eaves (1973) G. B. Dantzig and B. C. Eaves. Fourier-Motzkin elimination and its dual. _Journal of Combinatorial Theory_ , 14(3):288–297, 1973. * Deza and Atamtürk (2022) A. Deza and A. Atamtürk. Safe screening for logistic regression with $\ell_{0}$-$\ell_{2}$ regularization. _arXiv:2202.00467_ , 2022. * Frangioni and Gentile (2006) A. Frangioni and C. Gentile. Perspective cuts for a class of convex 0–1 mixed integer programs. _Mathematical Programming_ , 106(2):225–236, 2006\. * Frangioni et al. (2020) A. Frangioni, C. Gentile, and J. Hungerford. Decompositions of semidefinite matrices and the perspective reformulation of nonseparable quadratic programs. _Mathematics of Operations Research_ , 45(1):15–33, 2020. * Gómez (2021) A. Gómez. Outlier detection in time series via mixed-integer conic quadratic optimization. _SIAM Journal on Optimization_ , 31(3):1897–1925, 2021. * Günlük and Linderoth (2010) O. Günlük and J. Linderoth. 
Perspective reformulations of mixed integer nonlinear programs with indicator variables. _Mathematical Programming_ , 124(1):183–205, 2010\. * Han and Gómez (2021) S. Han and A. Gómez. Compact extended formulations for low-rank functions with indicator variables. _arXiv:2110.14884_ , 2021. * Hastie et al. (2015) T. Hastie, R. Tibshirani, and M. Wainwright. _Statistical learning with sparsity: the lasso and generalizations_. CRC Press, 2015. * Hazimeh and Mazumder (2020a) H. Hazimeh and R. Mazumder. Fast best subset selection: Coordinate descent and local combinatorial optimization algorithms. _Operations Research_ , 68(5):1517–1537, 2020a. * Hazimeh and Mazumder (2020b) H. Hazimeh and R. Mazumder. Learning hierarchical interactions at scale: A convex optimization approach. In _International Conference on Artificial Intelligence and Statistics_ , pages 1833–1843, 2020b. * Hazimeh et al. (2023) H. Hazimeh, R. Mazumder, and P. Radchenko. Grouped variable selection with discrete optimization: Computational and statistical perspectives. _Annals of Statistics_ , 51(1):1–32, 2023. * Hazimeh et al. (2021) H. Hazimeh, R. Mazumder, and A. Saab. Sparse regression at scale: Branch-and-bound rooted in first-order optimization. _Mathematical Programming_ , pages 1–42, 2021. * Heller and Tompkins (1956) I. Heller and C. B. Tompkins. An extension of a theorem of Dantzig’s. In H. W. Kuhn and A. W. Tucker, editors, _Linear Inequalities and Related Systems_ , pages 247–254. Princeton University Press, 1956. * Hiriart-Urruty and Lemaréchal (2004) J.-B. Hiriart-Urruty and C. Lemaréchal. _Fundamentals of Convex Analysis_. Springer, 2004. * Huang et al. (2012) J. Huang, P. Breheny, and S. Ma. A selective review of group selection in high-dimensional models. _Statistical Science_ , 27(4), 2012. * Jeon et al. (2017) H. Jeon, J. Linderoth, and A. Miller. Quadratic cone cutting surfaces for quadratic programs with on–off constraints. _Discrete Optimization_ , 24:32–50, 2017. * Kucukyavuz et al. (2020) S. Kucukyavuz, A. Shojaie, H. Manzour, L. Wei, and H.-H. Wu. Consistent second-order conic integer programming for learning Bayesian networks. _arXiv:2005.14346_ , 2020. * Liu et al. (2022) P. Liu, S. Fattahi, A. Gómez, and S. Küçükyavuz. A graph-based decomposition method for convex quadratic optimization with indicators. _Mathematical Programming (Forthcoming)_ , 2022. * Lubin and Dunning (2015) M. Lubin and I. Dunning. Computing in operations research using Julia. _INFORMS Journal on Computing_ , 27(2):238–248, 2015. * Manzour et al. (2021) H. Manzour, S. Küçükyavuz, H.-H. Wu, and A. Shojaie. Integer programming for learning directed acyclic graphs from continuous data. _INFORMS Journal on Optimization_ , 3(1):46–73, 2021. * Natarajan (1995) B. K. Natarajan. Sparse approximate solutions to linear systems. _SIAM Journal on Computing_ , 24(2):227–234, 1995\. * Ramachandra et al. (2021) A. Ramachandra, N. Rujeerapaiboon, and M. Sim. Robust conic satisficing. _arXiv:2107.06714_ , 2021. * Rockafellar (1970) R. T. Rockafellar. _Convex Analysis_. Princeton University Press, 1970. * Rudin and Ustun (2018) C. Rudin and B. Ustun. Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice. _Interfaces_ , 48(5):449–466, 2018. * Sahinidis (1996) N. V. Sahinidis. BARON: A general purpose global optimization software package. _Journal of Global Optimization_ , 8:201–205, 1996. * Sahinidis (accessed November 13, 2023) N. V. Sahinidis. BARON user manual v. 2023.11.10. 
https://minlp.com/downloads/docs/baron%20manual.pdf, accessed November 13, 2023. * Tibshirani (1996) R. Tibshirani. Regression shrinkage and selection via the lasso. _Journal of the Royal Statistical Society Series B: Statistical Methodology_ , 58(1):267–288, 1996. * Wei et al. (2022a) L. Wei, A. Atamtürk, A. Gómez, and S. Küçükyavuz. On the convex hull of convex quadratic optimization problems with indicators. _arXiv:2201.00387_ , 2022a. * Wei et al. (2020) L. Wei, A. Gómez, and S. Küçükyavuz. On the convexification of constrained quadratic optimization problems with indicator variables. In _International Conference on Integer Programming and Combinatorial Optimization_ , pages 433–447, 2020. * Wei et al. (2022b) L. Wei, A. Gómez, and S. Küçükyavuz. Ideal formulations for constrained convex optimization problems with indicator variables. _Mathematical Programming_ , 192(1):57–88, 2022b. * Wolsey (1989) L. A. Wolsey. Submodularity and valid inequalities in capacitated fixed charge networks. _Operations Research Letters_ , 8(3):119–124, 1989. * Xie and Deng (2020) W. Xie and X. Deng. Scalable algorithms for the sparse ridge regression. _SIAM Journal on Optimization_ , 30(4):3359–3386, 2020. ## A Additional Proofs ###### Proof ​ _of Lemma 4 _ As for assertion (i), let $\overline{\cal U}\\!:=\\!{\mathbb{R}}\times{\cal U}$ and $\overline{\cal V}:=\\{(\tau,{\bm{\mu}},{\bm{t}}\hskip 1.00006pt):{\bm{1}}^{\top}{\bm{t}}=\tau\\}$. By construction, ${\cal V}$ is the projection of $\overline{\cal U}\cap\overline{\cal V}$ onto $(\tau,{\bm{\mu}})-$space. Hence, $\operatorname{conv}({\cal V})=\operatorname{conv}(\operatorname{Proj}_{\tau,{\bm{\mu}}}(\overline{\cal U}\cap\overline{\cal V}))=\operatorname{Proj}_{\tau,{\bm{\mu}}}(\operatorname{conv}(\overline{\cal U}\cap\overline{\cal V}))$. Therefore, in the remainder of the proof we characterize convex hull of $\overline{\cal U}\cap\overline{\cal V}$. As $\overline{\cal V}$ is convex, to prove $\operatorname{conv}(\overline{\cal U}\cap\overline{\cal V})=\operatorname{conv}(\overline{\cal U})\cap\overline{\cal V}$ it suffices to show that $\operatorname{conv}(\overline{\cal U})\cap\overline{\cal V}\subseteq\operatorname{conv}(\overline{\cal U}\cap\overline{\cal V})$. Take a point $(\bar{\tau},\bar{\bm{\mu}},\bar{\bm{t}}\hskip 0.43057pt)$ from $\operatorname{conv}(\overline{\cal U})\cap\overline{\cal V}$. Since $(\bar{\tau},\bar{\bm{\mu}},\bar{\bm{t}}\hskip 0.43057pt)\in\overline{\cal V}$, we have $\bar{\tau}={\bm{1}}^{\top}\bar{\bm{t}}$. On the other hand, since $(\bar{\tau},\bar{\bm{\mu}},\bar{\bm{t}}\hskip 0.43057pt)\in\operatorname{conv}(\overline{\cal U})$, we can always express it as a convex combination of a finite number of points in $\overline{\cal U}$, that is, $(\bar{\tau},\bar{\bm{\mu}},\bar{\bm{t}}\hskip 0.43057pt)=\sum_{k\in[q]}\lambda_{k}(\tau^{k},{\bm{\mu}}^{k},{\bm{t}}^{k})$ for some finite $q$, $(\tau^{k},{\bm{\mu}}^{k},{\bm{t}}^{k})\in\overline{\cal U}$, and $\lambda_{k}>0$ for all $k\in[q]$ satisfy $\sum_{k\in[q]}\lambda_{k}=1$. Notice that there is no restriction on $\tau$ in the definition of $\overline{\cal U}$. Hence, we can take $\tau^{k}:={\bm{1}}^{\top}{\bm{t}}^{k}$ for all $k\in[q]$. Therefore, we have $(\tau^{k},{\bm{\mu}}^{k},{\bm{t}}^{k})\in\overline{\cal V}$ for all $k\in[q]$. We thus deduce $(\tau^{k},{\bm{\mu}}^{k},{\bm{t}}^{k})\in\overline{\cal U}\cap\overline{\cal V}$ for all $k\in[q]$. 
Moreover, because $\bar{\bm{t}}=\sum_{k\in[q]}\lambda_{k}{\bm{t}}^{k}$, we have $\sum_{k\in[q]}\lambda_{k}\tau^{k}=\sum_{k\in[q]}\lambda_{k}({\bm{1}}^{\top}t^{k})={\bm{1}}^{\top}(\sum_{k\in[q]}\lambda_{k}t^{k})={\bm{1}}^{\top}\bar{\bm{t}}=\bar{\tau}$. Hence, this proves that any point $(\bar{\tau},\bar{\bm{\mu}},\bar{\bm{t}}\hskip 0.43057pt)$ can be written as a convex combination of points from $\overline{\cal U}\cap\overline{\cal V}$ as desired. Therefore, $\operatorname{conv}(\overline{\cal U}\cap\overline{\cal V})=\operatorname{conv}(\overline{\cal U})\cap\overline{\cal V}$, and the claim follows by projecting onto $(\tau,{\bm{\mu}})-$space. As for assertion (ii), note that the set ${\cal V}$ is the affine transformation of the convex set $\widetilde{\cal U}=\\{\bm{\eta}\in{\cal U}:{\bm{B}}\bm{\eta}=\bm{b}\\}$, that is, ${\cal V}={\bm{A}}\,\widetilde{\cal U}$. We then have $\displaystyle\operatorname{cl}({\cal V})=\operatorname{cl}(\operatorname{rint}({\cal V}))=\operatorname{cl}({\bm{A}}(\operatorname{rint}(\widetilde{\cal U})))$ $\displaystyle=\operatorname{cl}({\bm{A}}(\operatorname{rint}(\operatorname{cl}(\widetilde{\cal U}))))$ $\displaystyle=\operatorname{cl}(\operatorname{rint}({\bm{A}}(\operatorname{cl}(\widetilde{\cal U}))))=\operatorname{cl}({\bm{A}}(\operatorname{cl}(\widetilde{\cal U}))).$ The first equality holds since a convex set and its relative interior have the same closure. The second equality holds as the linear transformation and the relative interior operators are interchangeable for convex sets (see (Rockafellar, 1970, Theorem 6.6)); thus, we have $\operatorname{rint}({\cal V})=\operatorname{rint}({\bm{A}}\,\widetilde{\cal U})={\bm{A}}(\operatorname{rint}(\widetilde{\cal U}))$. The third equality holds as the relative interior of a convex set and the relative interior of its closure are the same. The fourth equality follows from the convexity of $\operatorname{cl}(\widetilde{\cal U})$, which allows us to interchange the linear transformation and the relative interior operators. Finally, the last inequality holds as the closure of the relative interior of a convex set equals the closure of the set. Since we assumed that there exists a point $\bm{\eta}^{\star}\in\operatorname{rint}({\cal U})$ satisfying the condition ${\bm{B}}\bm{\eta}^{\star}=\bm{b}$, we have $\operatorname{cl}(\widetilde{\cal U})=\\{\bm{\eta}\in\operatorname{cl}({\cal U}):{\bm{B}}\bm{\eta}=\bm{b}\\}$ thanks to (Rockafellar, 1970, Corrolay 6.5.1). Thus, we showed that $\displaystyle\operatorname{cl}({\cal V})=\operatorname{cl}({\bm{A}}(\operatorname{cl}(\widetilde{\cal V})))=\operatorname{cl}(\\{{\bm{\mu}}:\exists\bm{\eta}\in\operatorname{cl}({\cal U})\text{ s.t. }{\bm{A}}\bm{\eta}={\bm{\mu}},{\bm{B}}\bm{\eta}=\bm{b}\\}).$ As we assumed that $\bm{\eta}=\bm{0}$ is the only $\bm{\eta}\in\operatorname{rec}(\operatorname{cl}(\widetilde{\cal V}))$ with ${\bm{A}}\bm{\eta}=\bm{0}$, by (Rockafellar, 1970, Theorem 9.1), we have $\operatorname{cl}({\bm{A}}(\operatorname{cl}(\widetilde{\cal V})))\\!=\\!{\bm{A}}(\operatorname{cl}(\widetilde{\cal V}))$. This completes the proof.​ ∎ ###### Proof ​ _of Lemma 6 _ Note that the matrix $\displaystyle{\bm{A}}=\begin{bmatrix}-1&~{}{\bm{1}}^{\top}\\\\[2.15277pt] 0&~{}{\bm{1}}^{\top}\end{bmatrix}$ is totally unimodular. Recall that if $A$ is totally unimodular, then the matrix $[{\bm{A}}^{\top},~{}{\bm{I}},~{}-{\bm{I}}]^{\top}$ is also totally unimodular. 
By definition, $\Delta_{1}=\\{(w,{\bm{z}})\in\\{0,1\\}^{1+d}:~{}w\leq{\bm{1}}^{\top}{\bm{z}},~{}{\bm{1}}^{\top}{\bm{z}}\leq\kappa\\}$. As the feasible set of $\Delta_{1}$ will be represented by the matrix $[{\bm{A}}^{\top},~{}{\bm{I}},~{}-{\bm{I}}]^{\top}$ and an integer vector, the set $\Delta_{1}$ is totally unimodular, and $\operatorname{conv}(\Delta_{1})$ is thus given by the continuous relaxation of $\Delta_{1}$.​​​​ ∎ ###### Proof ​ _of Lemma 7 _ Note that the matrix $\displaystyle{\bm{A}}=\begin{bmatrix}1&-{\bm{1}}_{d-1}^{\top}&0\\\\[2.15277pt] 0&-{\bm{1}}_{d-1}^{\top}&1\end{bmatrix}$ is totally unimodular as every square submatrix of $A$ has determinant $0$, $+1$, or $-1$. Moreover, it is easy to see that $\displaystyle\Delta_{1}$ $\displaystyle\textstyle=\\{(w,{\bm{z}})\in\\{0,1\\}^{1+d}:~{}w\leq{\bm{1}}^{\top}{\bm{z}},~{}z_{d}\leq\sum_{i\in[d-1]}z_{i}\\}$ $\displaystyle\textstyle=\\{(w,{\bm{z}})\in\\{0,1\\}^{1+d}:~{}w\leq\sum_{i\in[d-1]}z_{i},~{}z_{d}\leq\sum_{i\in[d-1]}z_{i}\\},$ which implies that the feasible set of $\Delta_{1}$ can be represented by the matrix $[{\bm{A}}^{\top},~{}{\bm{I}},~{}-{\bm{I}}]^{\top}$ and an integer vector. Thus, the new representation of $\Delta_{1}$ is totally unimodular, and $\operatorname{conv}(\Delta_{1})$ is given by its continuous relaxation. ∎ ###### Proof ​ _of Lemma 8 _ The set $\Delta_{1}$ can be written as $\displaystyle\Delta_{1}$ $\displaystyle=\left(\left\\{0\right\\}\times\left\\{\bm{0}\right\\}\right)\cup\left(\left\\{0,1\right\\}\times{\cal Z}\backslash\left\\{\bm{0}\right\\}\right).$ Since $\operatorname{conv}(A\times B)=\operatorname{conv}(A)\times\operatorname{conv}(B)$ and $\operatorname{conv}(A\cup B)=\operatorname{conv}(\operatorname{conv}(A)\cup\operatorname{conv}(B))$, we arrive at $\displaystyle\operatorname{conv}(\Delta_{1})$ $\displaystyle=\operatorname{conv}\left(\left(\left\\{0\right\\}\times\left\\{\bm{0}\right\\}\right)\cup\left([0,1]\times\operatorname{conv}({\cal Z}\backslash\\{\bm{0}\\})\right)\right)$ $\displaystyle=\left\\{(w,{\bm{z}})\in{\mathbb{R}}^{1+d}:\exists\lambda\in[0,1]\text{~{}s.t.~{}}\begin{array}[]{l}{\bm{F}}^{0}{\bm{z}}\geq\bm{0},~{}0\leq w\leq\lambda\\\\[4.30554pt] {\bm{z}}^{\top}\bm{f}^{+}_{k}\geq\lambda,~{}\forall k\in{\cal K},\\\\[4.30554pt] {\bm{z}}^{\top}\bm{f}_{l}^{-}\leq\lambda,~{}\forall l\in{\cal L}\end{array}\right\\}$ The proof concludes by projecting out the variable $\lambda$ using the Fourier-Motzkin elimination approach. ∎ ###### Proof ​ _of Lemma 9 _ By (Wei et al., 2022b, Lemma 3), we have $\displaystyle\textstyle\operatorname{conv}({\cal Z}\backslash\left\\{\bm{0}\right\\})=\left\\{{\bm{z}}\in[0,1]^{d}:~{}\begin{array}[]{l}1\leq\sum_{i\in[d-1]}z_{i}-(d-2)z_{d},\\\\[4.30554pt] z_{d}\leq z_{i},~{}\forall i\in[d-1]\end{array}\right\\}.$ (A.3) The proof concludes by applying Lemma 8. ∎ ###### Proof ​ _of Theorem 3.3 _ Recall the definition of $\widetilde{\cal T}$ and $\Delta_{p}$. By letting $\displaystyle{\bm{\beta}}_{i}:=(s_{i},{\bm{x}}_{i}),\,\bm{\gamma}:={\bm{t}},\,{\bm{\delta}}:=({\bm{w}},{\bm{z}}),\,\Delta:=\Delta_{p},\,{\bm{C}}_{i}=[-1,{\bm{a}}_{i}^{\top}],\,{\mathbb{C}}_{i}=\left\\{0\right\\},\,\forall i\in[p],$ we can represent the set $\widetilde{\cal T}$ as an instance of the set ${\cal W}$ defined as in (11). 
Then, Proposition 2 yields $\displaystyle\operatorname{cl}\operatorname{conv}(\widetilde{\cal T})\\!=\\!\left\\{({\bm{x}},{\bm{z}},{\bm{s}},{\bm{w}},{\bm{t}})\\!\in\\!{\mathbb{R}}^{d+d+p+p+p}:\begin{array}[]{l}h^{\pi}(s_{i},w_{i})\leq t_{i},~{}\forall i\in[p]\\\\[4.30554pt] {\bm{a}}_{i}^{\top}{\bm{x}}_{i}=s_{i},~{}\forall i\in[p]\\\\[4.30554pt] ({\bm{w}},{\bm{z}})\in\operatorname{conv}(\Delta_{p})\end{array}\right\\}.$ In the following we characterize $\operatorname{cl}\operatorname{conv}({\cal T})$ in terms of $\operatorname{cl}\operatorname{conv}(\widetilde{\cal T})$. Let ${\cal T}^{\prime}:=\\{(\tau,{\bm{x}},{\bm{z}}):\exists{\bm{s}},{\bm{w}},{\bm{t}}\text{~{}s.t.~{}}({\bm{x}},{\bm{z}},{\bm{s}},{\bm{w}}.{\bm{t}})\in\widetilde{\cal T},{\bm{1}}^{\top}{\bm{t}}=\tau\\}$. By Proposition 3, we have $\displaystyle\operatorname{cl}\operatorname{conv}({\cal T})=\operatorname{cl}\big{(}\operatorname{conv}(\operatorname{Proj}_{\tau,{\bm{x}},{\bm{z}}}({\cal T}^{\prime}))\big{)}=\operatorname{cl}\big{(}\operatorname{Proj}_{\tau,{\bm{x}},{\bm{z}}}(\operatorname{conv}({\cal T}^{\prime}))\big{)}.$ Moreover, applying Lemma 4 (i) yields $\operatorname{conv}({\cal T}^{\prime})\\!=\\!\\{(\tau,{\bm{x}},{\bm{z}}):\exists{\bm{s}},{\bm{w}},{\bm{t}}\text{ s.t. }({\bm{x}},{\bm{z}},{\bm{s}},{\bm{w}},{\bm{t}})\in\operatorname{conv}(\widetilde{\cal T}),\,{\bm{1}}^{\top}{\bm{t}}=\tau\\}.$ By letting $\displaystyle\begin{array}[]{c}{\bm{\mu}}:=(\tau,{\bm{x}},{\bm{z}}),~{}\bm{\eta}:=(\tau,{\bm{x}},{\bm{z}},{\bm{s}},{\bm{w}},{\bm{t}}),~{}{\cal U}:={\mathbb{R}}\times\operatorname{conv}(\widetilde{\cal T}),\\\\[4.30554pt] {\bm{A}}:=[{\bm{I}}_{1+2d},~{}\bm{0}],~{}{\bm{B}}:=(-1,\bm{0},\bm{0},\bm{0},\bm{0},{\bm{1}}_{p})^{\top},~{}\bm{b}:=0,\end{array}$ we observe that $\operatorname{Proj}_{\tau,{\bm{x}},{\bm{z}}}(\operatorname{conv}({\cal T}^{\prime}))=\\{{\bm{\mu}}:\exists\bm{\eta}\in{\cal U}\text{ s.t. }{\bm{A}}\bm{\eta}={\bm{\mu}},~{}{\bm{B}}\bm{\eta}=\bm{b}\\}$ as in Lemma 4 (ii). Moreover, the first requirement of Lemma 4 (ii) is trivially satisfied as the variable $\tau$ is free in the set ${\cal U}$. In addition, we have the set $\displaystyle\left\\{\bm{\eta}\in\operatorname{cl}({\cal U}):~{}{\bm{A}}\bm{\eta}=\bm{0},~{}{\bm{B}}\bm{\eta}=\bm{b}\right\\}$ $\displaystyle=$ $\displaystyle\left\\{(0,\bm{0},\bm{0},{\bm{s}},{\bm{w}},{\bm{t}}):~{}\begin{array}[]{l}({\bm{w}},\bm{0})\in\operatorname{conv}(\Delta_{p}),~{}{\bm{1}}^{\top}{\bm{t}}=0\\\\[4.30554pt] s_{i}={\bm{a}}_{i}^{\top}{\bm{x}}_{i},\,h^{\pi}(s_{i},w_{i})\leq t_{i},\,\forall i\in[p]\end{array}\right\\}$ $\displaystyle=$ $\displaystyle\left\\{(0,\bm{0},\bm{0},{\bm{s}},{\bm{w}},{\bm{t}}):~{}{\bm{w}}=\bm{0},~{}{\bm{s}}=\bm{0},~{}h^{\pi}(0,0)\leq t_{i},\,\forall i\in[p],~{}{\bm{1}}^{\top}{\bm{t}}=0\right\\}=\left\\{\bm{0}\right\\},$ where the first equation holds by the definition of $\operatorname{cl}\operatorname{conv}(\widetilde{\cal T})$ and the fact that ${\bm{A}}\bm{\eta}=\bm{0}$ implies $(\tau,{\bm{x}},{\bm{z}})=(0,\bm{0},\bm{0})$, and the second equation holds because, by definition of $\Delta_{p}$ we have $0\leq w_{i}\leq{\bm{1}}^{\top}{\bm{z}}_{i}$ which enforces $w_{i}=0$ as ${\bm{z}}_{i}=\bm{0}$ for all $i\in[p]$, and the last equation holds as the function $h$ satisfies $h(0)=0$, which along with ${\bm{1}}^{\top}{\bm{t}}=0$ implies ${\bm{t}}=\bm{0}$. Thus, we can apply Lemma 4 (ii) to conclude that $\displaystyle\operatorname{cl}\operatorname{conv}({\cal T})=\\{(\tau,{\bm{x}},{\bm{z}}):\exists{\bm{s}},{\bm{w}},{\bm{t}}\text{ s.t. 
}({\bm{x}},{\bm{z}},{\bm{s}},{\bm{w}},{\bm{t}})\in\operatorname{cl}\operatorname{conv}(\widetilde{\cal T}),~{}{\bm{1}}^{\top}{\bm{t}}=\tau\\}.$ The proof concludes by projecting out the variable ${\bm{t}}$ using the Fourier-Motzkin elimination approach. ∎ The convex hull of $\Delta_{p}$ relies on the set ${\cal Z}_{i}:=\\{{\bm{z}}_{i}\in{\mathbb{R}}^{d_{i}}:\exists\tilde{\bm{z}}\in{\cal Z}~{}\text{s.t.}~{}\tilde{\bm{z}}_{i}={\bm{z}}_{i},~{}\tilde{\bm{z}}_{j}=\bm{0},\forall j\neq i\\}$ for every $i\in[p]$. In particular, given any $i\in[p]$, we assume that $\displaystyle\operatorname{conv}({\cal Z}_{i}\backslash\left\\{\bm{0}\right\\})=\left\\{{\bm{z}}_{i}\in{\mathbb{R}}^{d_{i}}:{\bm{F}}_{i}^{0}{\bm{z}}_{i}\geq\bm{0},~{}\begin{array}[]{l}{\bm{z}}_{i}^{\top}\bm{f}^{+}_{ik}\geq 1,~{}\forall k\in{\cal K}_{i},\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}_{il}^{-}\leq 1,~{}\forall l\in{\cal L}_{i}\end{array}\right\\}.$ (A.6) ###### Lemma A.1 Given the representation of $\operatorname{conv}({\cal Z}_{i}\backslash\left\\{\bm{0}\right\\})$ as in (A.6) for any $i\in[p]$, then $\displaystyle\operatorname{conv}(\Delta_{p})=\left\\{({\bm{w}},{\bm{z}})\in{\mathbb{R}}^{p+d}:\begin{array}[]{l}\bm{0}\leq{\bm{w}}\leq\bm{1},~{}{\bm{1}}^{\top}{\bm{w}}\leq 1,~{}{\bm{F}}_{i}^{0}{\bm{z}}_{i}\geq\bm{0},~{}\forall i\in[p],\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}^{+}_{ik}\geq w_{i},~{}\forall i\in[p],\forall k\in{\cal K}_{i},\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}_{il}^{-}\leq 1,~{}\forall i\in[p],\forall l\in{\cal L}_{i},\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}_{il}^{-}\leq{\bm{z}}_{i}^{\top}\bm{f}^{+}_{ik},~{}\forall i\in[p],\forall k\in{\cal K},\forall l\in{\cal L}\end{array}\right\\}.$ ###### Proof ​ _of Lemma A.1 _ The set $\Delta_{p}$ can be written as the union of $p+1$ sets. Namely, $\displaystyle\Delta_{p}$ $\displaystyle=\left(\left\\{\bm{0}\right\\}\times\left\\{\bm{0}\right\\}\right)\bigcup\limits_{i\in[p]}\left(\left\\{\bm{0},{\bm{e}}_{i}\right\\}\times\left(\times_{k\in[i-1]}\left\\{\bm{0}\right\\}\times{\cal Z}_{i}\backslash\left\\{\bm{0}\right\\}\times_{k\in\\{i+1,\dots,p\\}}\left\\{\bm{0}\right\\}\right)\right).$ Since $\operatorname{conv}(A\times B)=\operatorname{conv}(A)\times\operatorname{conv}(B)$ and $\operatorname{conv}(A\cup B)=\operatorname{conv}(\operatorname{conv}(A)\cup\operatorname{conv}(B))$, we arrive at $\displaystyle\operatorname{conv}(\Delta_{p})=\left\\{({\bm{w}},{\bm{z}})\in{\mathbb{R}}^{p+d}:\exists\bm{\lambda}\in{\mathbb{R}}^{p}\text{~{}s.t.~{}}\begin{array}[]{l}\bm{0}\leq\bm{\lambda}\leq{\bm{1}},~{}{\bm{1}}^{\top}\bm{\lambda}\leq 1,~{}{\bm{w}}\leq\bm{\lambda},\\\\[4.30554pt] {\bm{F}}_{i}^{0}{\bm{z}}_{i}\geq\bm{0},~{}\forall i\in[p]\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}^{+}_{ik}\geq\lambda_{i},~{}\forall i\in[p],\forall k\in{\cal K}_{i},\\\\[4.30554pt] {\bm{z}}_{i}^{\top}\bm{f}_{il}^{-}\leq\lambda_{i},~{}\forall i\in[p],\forall l\in{\cal L}_{i}\end{array}\right\\}$ The proof concludes by projecting out the variable $\bm{\lambda}$ using the Fourier-Motzkin elimination approach. ∎ ###### Proof ​ _of Lemma 13 _ Note that the matrix $\displaystyle{\bm{A}}=\begin{bmatrix}{\bm{1}}^{\top}&~{}\phantom{-}\bm{0}^{\top}\\\ \\!\\!{\bm{I}}&~{}-{\bm{I}}\\\ \bm{0}^{\top}&\phantom{-}{\bm{1}}^{\top}\end{bmatrix}$ is totally unimodular. To see this, we partition the rows of ${\bm{A}}$ into the two disjoint sets $\\{1\\}$ and $\\{2,\dots,d+2\\}$. 
As such, ${\bm{A}}$ is totally unimodular because every entry in ${\bm{A}}$ is $0$, $+1$, or $-1$, every column of ${\bm{A}}$ contains at most two non-zero entries, and two nonzero entries in a column of ${\bm{A}}$ with the same signs belong to either $\\{1\\}$ or $\\{2,\dots,d+2\\}$, whereas two nonzero entries in a column of ${\bm{A}}$ with the opposite signs belong to $\\{2,\dots,d+2\\}$. Hence, by (Heller and Tompkins, 1956, Theorem 2), the matrix ${\bm{A}}$ is totally unimodular. As the feasible set of $\Omega$ is represented by the matrix $[{\bm{A}}^{\top},~{}{\bm{I}},~{}-{\bm{I}}]^{\top}$ and an integer vector, the set $\Omega$ is totally unimodular, and $\operatorname{conv}(\Omega)$ is given by its continuous relaxation. ∎ We next provide ideal and lifted descriptions of $\operatorname{conv}(\Omega)$ for general ${\cal Z}$. Lemma A.2 (i) is particularly useful when the set $\Omega\backslash\left\\{\bm{0}\right\\}$ is totally unimodular. In this case, we can provide an ideal description for $\operatorname{conv}(\Omega)$. On the other hand, Lemma A.2 (ii) is useful when the set ${\cal Z}$ is totally unimodular. ###### Lemma A.2 The following holds. 1. (i) Suppose that $\operatorname{conv}(\Omega\backslash\left\\{\bm{0}\right\\})$ admits the following ideal representation $\displaystyle\operatorname{conv}(\Omega\backslash\left\\{\bm{0}\right\\})=\left\\{\bm{\delta}\in{\mathbb{R}}^{2d}:{\bm{F}}^{0}\bm{\delta}\geq\bm{0},~{}\begin{array}[]{l}\bm{\delta}^{\top}\bm{f}^{+}_{k}\geq 1,~{}\forall k\in{\cal K},\\\\[4.30554pt] \bm{\delta}^{\top}\bm{f}_{l}^{-}\leq 1,~{}\forall l\in{\cal L}\end{array}\right\\}.$ Then, we have $\displaystyle\operatorname{conv}(\Omega)\\!=\\!\left\\{\bm{\delta}\in{\mathbb{R}}^{d+d}:{\bm{F}}^{0}\bm{\delta}\geq\bm{0},~{}\begin{array}[]{l}\bm{\delta}^{\top}\bm{f}^{+}_{k}\geq 0,~{}\forall k\in{\cal K},\\\\[4.30554pt] \bm{\delta}^{\top}\bm{f}_{l}^{-}\leq 1,~{}\forall l\in{\cal L},\\\\[4.30554pt] \bm{\delta}^{\top}\bm{f}_{l}^{-}\leq\bm{\delta}^{\top}\bm{f}^{+}_{k},~{}\forall k\in{\cal K},\forall l\in{\cal L}\end{array}\right\\}.$ 2. (ii) Let ${\cal O}_{i}=\\{{\bm{z}}\in{\mathbb{R}}^{d}:z_{i}=0\\}$. Then, we have $\displaystyle\operatorname{conv}(\Omega)=\left\\{({\bm{w}},{\bm{z}})\\!:\\!\exists\tilde{\bm{z}}^{0},\dots,\tilde{\bm{z}}^{d}\text{ s.t. }\begin{array}[]{l}{\bm{w}}\in{\mathbb{R}}_{+}^{d},~{}{\bm{1}}^{\top}{\bm{w}}\leq 1,\\\\[4.30554pt] {\bm{z}}=\tilde{\bm{z}}^{0}+\sum_{i\in[d]}\tilde{\bm{z}}^{i}\\\\[4.30554pt] \tilde{\bm{z}}^{0}\in(1-{\bm{1}}^{\top}{\bm{w}})\cdot\operatorname{conv}({\cal Z})\\\\[4.30554pt] \tilde{\bm{z}}^{i}\in w_{i}\cdot\operatorname{conv}({\cal Z}\backslash{\cal O}_{i}),~{}\forall i\in[d]\end{array}\right\\}.$ ###### Proof For case (i), notice that the set $\Omega$ can be written as the union of two sets $\Omega=\left(\left\\{\bm{0}\right\\}\times\left\\{\bm{0}\right\\}\right)\cup\left(\Omega\backslash\left\\{\bm{0}\right\\}\right)$. 
We thus have $\displaystyle\operatorname{conv}(\Omega)$ $\displaystyle=\operatorname{conv}\left(\left(\left\\{\bm{0}\right\\}\times\left\\{\bm{0}\right\\}\right)\cup\left(\operatorname{conv}(\Omega\backslash\\{\bm{0}\\})\right)\right)$ $\displaystyle=\left\\{\bm{\delta}\in{\mathbb{R}}^{d+d}:\exists\lambda\in[0,1]\text{~{}s.t.~{}}{\bm{F}}^{0}\bm{\delta}\geq\bm{0},~{}\begin{array}[]{l}\bm{\delta}^{\top}\bm{f}^{+}_{k}\geq\lambda,~{}\forall k\in{\cal K},\\\\[4.30554pt] \bm{\delta}^{\top}\bm{f}_{l}^{-}\leq\lambda,~{}\forall l\in{\cal L}\end{array}\right\\}.$ The proof concludes by projecting out the variable $\lambda$ using the Fourier-Motzkin elimination approach. For case (ii), note that the set $\Omega$ can be written as the union of $d+1$ sets, that is, $\Omega=\left(\left\\{\bm{0}\right\\}\times{\cal Z}\right)\bigcup\left(\cup_{i\in[d]}\left(\left\\{{\bm{e}}_{i}\right\\}\times({\cal Z}\backslash{\cal O}_{i})\right)\right)$. We thus have $\displaystyle\operatorname{conv}(\Omega)$ $\displaystyle=$ $\displaystyle\operatorname{conv}\left(\left(\left\\{\bm{0}\right\\}\times\operatorname{conv}({\cal Z})\right)\bigcup\left(\cup_{i\in[d]}\left\\{{\bm{e}}_{i}\right\\}\times\operatorname{conv}({\cal Z}\backslash{\cal O}_{i})\right)\right)$ $\displaystyle=$ $\displaystyle\left\\{({\bm{w}},{\bm{z}}):\begin{array}[]{l}\exists{\bm{w}}^{0},\dots,{\bm{w}}^{d}\\\ \exists{\bm{z}}^{0},\dots,{\bm{z}}^{d}\\\ \exists\bm{\lambda}\in{\mathbb{R}}_{+}^{d}\end{array}\text{ s.t. }\begin{array}[]{l}{\bm{1}}^{\top}\bm{\lambda}\leq 1,\\\\[4.30554pt] {\bm{w}}=(1-{\bm{1}}^{\top}\bm{\lambda}){\bm{w}}^{0}+\sum_{i\in[d]}\lambda_{i}{\bm{w}}^{i}\\\\[4.30554pt] {\bm{z}}=(1-{\bm{1}}^{\top}\bm{\lambda}){\bm{z}}^{0}+\sum_{i\in[d]}\lambda_{i}{\bm{z}}^{i}\\\\[4.30554pt] ({\bm{w}}^{0},{\bm{z}}^{0})\in\left\\{\bm{0}\right\\}\times\operatorname{conv}({\cal Z})\\\\[4.30554pt] ({\bm{w}}^{i},{\bm{z}}^{i})\in\left\\{{\bm{e}}_{i}\right\\}\times\operatorname{conv}({\cal Z}\backslash{\cal O}_{i}),~{}\forall i\in[d]\end{array}\right\\}.$ The proof concludes by introducing the new variables $\tilde{\bm{z}}^{0}=(1-{\bm{1}}^{\top}\bm{\lambda}){\bm{z}}^{0}$ and $\tilde{\bm{z}}^{i}=\lambda_{i}{\bm{z}}^{i}$ for every $i\in[d]$, and then projecting out the variable $\bm{\lambda}$ using the observation that ${\bm{w}}=\bm{\lambda}$. ∎ Armed with Lemma A.2, we are ready to provide characterizations for $\operatorname{conv}(\Omega)$ in the cases of weak and strong hierarchy constraints. ###### Proof ​ _of Lemma 14 _ First, note that $\displaystyle\Omega\backslash\\{\bm{0}\\}$ $\displaystyle\textstyle=\\{({\bm{w}},{\bm{z}})\in\\{0,1\\}^{d+d}:~{}{\bm{w}}\leq{\bm{z}},~{}{\bm{1}}^{\top}{\bm{w}}\leq 1,~{}\sum_{i\in[d-1]}z_{i}\geq 1\\}.$ Based on this let us consider the matrix $\displaystyle{\bm{A}}=\begin{bmatrix}{\bm{I}}_{d}\quad&-{\bm{I}}_{d}\\\\[2.15277pt] {\bm{1}}_{d}^{\top}\quad&\bm{0}_{d}^{\top}\\\\[2.15277pt] \bm{0}_{d}^{\top}\quad&(-{\bm{1}}_{d-1},0)^{\top}\end{bmatrix}.$ This matrix is totally unimodular as we can partition rows of ${\bm{A}}$ into the subsets $[d]$ and $\\{d+1,d+2\\}$ such that they satisfy the requirements of (Heller and Tompkins, 1956, Theorem 2). Thus, the set $\Omega\backslash\\{\bm{0}\\}$ is represented by constraints based on the matrix $[{\bm{A}}^{\top},~{}{\bm{I}},~{}-{\bm{I}}]^{\top}$ and an integer vector. Thus, $\Omega\backslash\\{\bm{0}\\}$ admits a totally unimodular representation, and $\operatorname{conv}(\Omega\backslash\\{\bm{0}\\})$ is given by its continuous relaxation. 
The proof concludes by applying Lemma A.2 (i). ∎ ###### Proof ​ _of Lemma 15 _ Note that ${\cal Z}$ is totally unimodular. Hence, the continuous relaxation of ${\cal Z}$ gives $\operatorname{conv}({\cal Z})$. For any $i\in[d-1]$, we also have ${\cal Z}\backslash{\cal O}_{i}=\\{{\bm{z}}\in\\{0,1\\}^{d}:z_{i}=1,~{}z_{d}\leq z_{j},~{}\forall j\in[d-1]\\}.$ It is easy to see that the continuous relaxation of ${\cal Z}\backslash{\cal O}_{i}$ gives its convex hull. Besides, we have ${\cal Z}\backslash{\cal O}_{d}=\\{\bm{1}\\}$. The proof follows by applying Lemma A.2 (ii). ∎ ## B Additional Numerical Results Figure B.1: Comparison of different continuous relaxations for $N=100$, $\pi=0.1$ as $p$ (and thus $d$) varies. Figure B.2: Performance of the B&B algorithm for $N=100$, $\pi=0.1$ as $p$ (and thus $d$) varies. Figure B.3: Comparison of different continuous relaxations for $(p,N)\\!=\\!(50,100)$ as $\pi$ varies. Figure B.4: Performance of the B&B algorithm for $(p,N)\\!=\\!(50,100)$ as $\pi$ varies.
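The total-unimodularity arguments used repeatedly in Appendix A (Lemmas 6, 7, 13 and 14) can be sanity-checked numerically on small instances. The brute-force check below enumerates all square submatrices and verifies that their determinants lie in {−1, 0, 1}; it is a toy illustration only (the enumeration is exponential in general), and the example matrix is the one from the proof of Lemma 6 with an assumed small $d$.

```python
import itertools
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """Brute-force check: every square submatrix has determinant in {-1, 0, 1}.
    Exponential in the matrix size; intended only for small sanity checks."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                det = np.linalg.det(A[np.ix_(rows, cols)])
                if abs(det - round(det)) > tol or round(det) not in (-1, 0, 1):
                    return False
    return True

# Matrix from the proof of Lemma 6 (rows encode w <= 1'z and 1'z <= kappa), with d = 4.
d = 4
A = np.vstack([np.concatenate(([-1.0], np.ones(d))),
               np.concatenate(([0.0], np.ones(d)))])
print(is_totally_unimodular(A))  # True
```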
# Orthogonal calibration via posterior projections with applications to the Schwarzschild model

Antik Chakraborty <EMAIL_ADDRESS> Department of Statistics, Purdue University
Jonelle L. Walsh <EMAIL_ADDRESS> George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University
Louis Strigari <EMAIL_ADDRESS> George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University
Bani K. Mallick <EMAIL_ADDRESS> Department of Statistics, Texas A&M University
Anirban Bhattacharya <EMAIL_ADDRESS> Department of Statistics, Texas A&M University

###### Abstract

The orbital superposition method originally developed by Schwarzschild (1979) is used to study the dynamics of growth of a black hole and its host galaxy, and has uncovered new relationships between the galaxy’s global characteristics. Scientists are specifically interested in finding optimal parameter choices for this model that best match physical measurements, along with quantifying the uncertainty of such procedures. This renders a statistical calibration problem with multivariate outcomes. In this article, we develop a Bayesian method for calibration with _multivariate outcomes_ using orthogonal bias functions, thus ensuring parameter identifiability. Our approach is based on projecting the posterior to an appropriate space, which allows the user to choose any nonparametric prior on the bias function(s) instead of having to model it (them) with Gaussian processes. We develop a functional projection approach using the theory of Hilbert spaces. A finite-dimensional analogue of the projection problem is also considered. We illustrate the proposed approach using a BART prior and apply it to calibrate the Schwarzschild model, illustrating how a multivariate approach may resolve discrepancies resulting from a univariate calibration.

Keywords: Bayesian; Computer model; Emulator; Scientific modeling; Regression trees.

## 1 Introduction

Scientific endeavors typically aim to describe a natural physical phenomenon. In modern applications, these descriptions increasingly involve complex mathematical equations, typically solved using expensive computer codes. For example, the MIT2D climate model (http://web.mit.edu/globalchange/www/climate.html) simulates probability distributions of ocean, surface and upper atmospheric temperatures. In our motivating application, Schwarzschild’s orbital integral method (Schwarzschild, 1979) is used to simulate mass distributions of galaxies. These orbit-based models are the main method for measuring the masses of supermassive black holes at the centers of nearby galaxies (Kormendy and Ho, 2013), and have led to the establishment of empirical relationships between the masses of black holes and large-scale galaxy properties, which surprisingly suggest that black holes and their host galaxies grow and evolve together over time (McConnell and Ma, 2013; Kormendy and Ho, 2013; Saglia et al., 2016). Besides determining black hole masses, Schwarzschild modeling is a powerful tool for measuring a galaxy’s mass-to-light ratio, dark matter halo properties, stellar orbital distribution, viewing orientation, and intrinsic three-dimensional shape, allowing for the further study of galaxy assembly histories (Mehrgan et al., 2019). These scientific models involve parameters which carry physical meaning.
Inferring about the parameters while addressing the uncertainty of the data observed from the physical process was first formulated as a statistical problem in Sacks et al. (1989), and was further studied from a Bayesian point of view in Kennedy and O’Hagan (2001), who popularized the term computer code calibration. Since then, statistical calibration methods have been used in many scientific disciplines, e.g. climatology (Forest et al., 2008; Salter et al., 2018), cell biology (Xie and Xu, 2021), and mechanical engineering (Gattiker et al., 2006). Parallel to the widening applications of statistical calibration methods, substantial effort has also been dedicated to the development of more robust calibration methods within the statistics community, especially after the seminal work of Kennedy and O’Hagan (2001). In their work, the authors proposed a framework that simultaneously models the data observed from a physical process and data generated from computer code implementations of mathematical models emulating the physical process. The fundamental contribution of their work was to include a discrepancy/bias function in their model, which the authors interpreted as an unknown function that captures the inability of the mathematical model to explain variations in the observed data. A Bayesian nonparametric regression approach based on Gaussian processes (Rasmussen and Williams, 2005) was then used to model the unknown bias function as well as the computer code simulating the physical process. This enabled fast emulation of the computer code simulators and produced rapid estimates of the input parameters of the computer code that best approximated the observed data. Since then, several authors have proposed statistical methods for computer code calibration. Higdon et al. (2008) developed a basis vector approach to calibration when the computer code output is very high-dimensional using principal components; Bayarri et al. (2007) proposed a general framework for Bayesian calibration where a modular Bayes approach was advocated to overcome significant computational and inferential challenges associated with a fully Bayesian analysis of calibration problems; Chakraborty et al. (2013) modeled the computer code using multivariate adaptive regression splines (Friedman, 1991; Denison et al., 1998) to avoid overfitting the computer model; Pratola and Higdon (2016) proposed to model the bias and computer model using Bayesian additive regression trees (BART; Chipman et al., 2010), which has gained immense popularity in the machine learning community recently. Tuo and Wu (2015, 2016) studied the general calibration model of Kennedy and O’Hagan (2001) from a theoretical point of view and showed that the calibration parameters are not identifiable. As a remedy, Tuo and Wu (2015) defined the calibration parameter as the point in the input parameter space that minimizes the $L^{2}$ loss function between the physical process and the computer code. Plumlee (2017) extended it to more general losses and then proposed a modified covariance function for Gaussian process priors used for the bias function; Xie and Xu (2021) treated the calibration parameter as a functional of the bias function and used a projected Bayesian approach to avoid identifiability issues. In this article, we propose a generalized Bayesian approach for computer code calibration with multivariate output which takes into account the identifiability issues pointed out by Tuo and Wu (2016).
We start from the definition of the optimal calibration parameter as the point in the input space of the computer code that minimizes a certain loss function. Following Plumlee (2017), this identifiability issue can be mitigated by imposing an orthogonality constraint on the bias function with respect to the derivatives of the computer code. We first generalize the restrictions on the bias function to the situation where there are multiple potentially correlated outcomes. In the univariate outcome case, Plumlee (2017) developed a modified Gaussian process prior on the bias function that respects the orthogonality constraint. However, as indicated by other authors (Pratola and Higdon, 2016), Gaussian process priors often suffer from scalability issues for big data sets and high-dimensional input spaces. The scalability issues are only exacerbated when one has to deal with multivariate covariance functions. On the other hand, while nonparametric priors such as the BART are highly scalable, introducing a priori constraints such as the orthogonality constraint indicated above is not immediate. To overcome these issues, we propose to posit an unconstrained prior on the bias function and project the obtained posterior onto the relevant space. For inference based on the projected posterior we develop two algorithms for posterior sampling. The first one is essentially a Hilbert projection on the space of square-integrable functions and the second one borrows inspiration from a projection interpretation of the multivariate Gaussian distribution. Numerical experiments in Section 5 show the superiority of the proposed method both in terms of computational scalability and modeling flexibility. Indeed, the method provides a more general framework for orthogonal calibration whereby practitioners are free to choose a nonparametric prior of their own choice; for example, the deep Gaussian process priors used recently in Marmin and Filippone (2022). We also note that our approach is significantly different from a related projection-based method by Xie and Xu (2021) where the authors consider a Gaussian process prior on the bias function. The calibration parameter, defined as the minimizer of an appropriate loss function between the observed data and the computer model, is then treated as a functional of the bias function. Hence, a prior is induced on the calibration parameter through the Gaussian process prior on the bias function. Although this addresses the statistical issue of identifiability, the properties of the induced prior are less understood. The rest of the paper is organized as follows. In Section 2, we briefly describe our motivating example, which is then followed by the development of the orthogonal posterior projection approach in Section 3. We then compare the efficiency of the proposed method with other state-of-the-art methods in terms of estimating the calibration parameter and computational scalability. We end with an application on the computer code data from the Schwarzschild model. An R package for implementing the proposed method is available to download from https://github.com/antik015/OCal. ## 2 Schwarzschild’s model The study of the mass distribution within galaxies is central to understanding black holes, stellar components, dark matter, and the growth of galaxies. Schwarzschild (1979) presented an orbit superposition method for constructing a self-consistent mass model of galaxies.
It consists of “integrating a representative set of orbits in a static triaxial gravitational potential, and finding weights for these orbits such that their superposition reproduces the assumed mass distribution” (Quenneville et al., 2021). Several modifications to the initial method have since been proposed, among which van den Bosch et al. (2008)’s triaxial orbit superposition has become very popular. In standard applications of the method, the model is compared with kinematic and photometric data to determine best-fit parameters such as black hole mass $(\theta_{1})$, stellar mass-to-light ratio $(\theta_{2})$, and fraction of dark matter halo $(\theta_{3})$. Figure 1: Orbital superposition output versus observed data on four moments of the velocity distribution for $\theta=(10.22,10,1)^{\mathrm{\scriptscriptstyle{T}}}$. Panels (a)–(d) show, left to right in each pair, the model output ($v_{S}$, $\tau_{S}$, $h_{3,S}$, $h_{4,S}$) and the observed data ($v_{F}$, $\tau_{F}$, $h_{3,F}$, $h_{4,F}$). The superposition method is implemented by a code originally developed by van den Bosch et al. (2008). In its current version (van den Bosch et al., 2008), for a given input parameter combination $t=(t_{1},t_{2},t_{3})^{\mathrm{\scriptscriptstyle{T}}}$, the code outputs the first four moments of the line-of-sight velocity distribution $f_{S}(x;t)=(v_{S}(x;t),\tau_{S}(x;t),h_{3,S}(x;t),h_{4,S}(x;t))^{\mathrm{\scriptscriptstyle{T}}}$ for points $x$ in a 2-dimensional spatial grid $\mathcal{X}$. The points $x$ are typically chosen to match the locations of the physical photometric data, providing further information about the mass distribution. Specifically, physical data $y_{F}(x_{i})=(v_{F}(x_{i}),\tau_{F}(x_{i}),h_{3,F}(x_{i}),h_{4,F}(x_{i}))^{\mathrm{\scriptscriptstyle{T}}}$ is available for $i=1,\ldots,n,\,n=105$ locations of $\mathcal{X}=[-1,1]^{2}$. In Figure 1, color-coded output from the code and the observed data are shown when $t=(10.22,10,1)^{\mathrm{\scriptscriptstyle{T}}}$ at the $n$ locations. This superposition technique is implemented by a computer code (FORTRAN) which is extremely computationally intensive; e.g. for a single input, approximately 3 hours are needed to generate the output. The code output is available for three different values of $t_{3}=1,2,3$, and for each value of $t_{3}$, the output is available over a two-dimensional grid of values for $(t_{1},t_{2})$ where $t_{1}$ ranges from 10.22 to 10.31 and $t_{2}$ ranges from 10 to 10.25. Overall, the computer code output is available for $N=476$ different values of $t$. Naturally, an exhaustive search over the input parameter space that compares the code output to the observed data to find the best-fit parameter combination is prohibitive. However, from limited experiments performed separately on each of the 4 outputs of the model, scientists have seen that for the point $\tilde{\theta}=(9.92,9.29,1.27)$ the model outputs best approximated the observed data, up to slight variations for each outcome. It is, however, unclear how a joint analysis of the model outputs and the physical data can be carried out that adequately addresses the uncertainty in the observed data, and whether such an analysis would reveal regions in the parameter space that perform relatively well.
This is important for designing future experiments where scientists plan to include other parameters that could potentially lead to better predictions of the physical process on previously unobserved points - a task typically suited for a multivariate analysis when outcomes are correlated. Inspired by this application, in Section 3, we first develop a general methodology for orthogonal calibration of multivariate computer models where the key focus is on parameter identifiability and modeling flexibility. The validity of the proposed methodology is then investigated in a series of numerical experiments in Section 5, followed by an application on the Schwarzschild’s model in Section 6. ## 3 Method ### 3.1 Orthogonal calibration We begin with a quick review of orthogonal calibration, and introduce notation necessary to develop our method. Suppose we are interested in a real physical system $y_{R}(\cdot)\in\mathbb{R}^{q},\,q\geq 1$ with controllable inputs $x\in\mathcal{X}$ where $\mathcal{X}$ is some closed, bounded subset of $\mathbb{R}^{d}$. We do not directly observe the system. Instead, we observe stochastic field observations $y_{F}(\cdot)$ at $n$ input points called the design $\mathcal{D}=\\{x_{1},\ldots,x_{n}\\}$, for which a natural stochastic model is $y_{F}(x_{i})=y_{R}(x_{i})+\epsilon(x_{i}),\,\epsilon(x_{i})\overset{ind.}{\sim}{\mathrm{N}}_{q}(0,\Sigma_{F}),\,i=1,\ldots,n.$ (1) Suppose in addition to the field observations, we also have a simulator $f:\mathcal{X}\times\Theta\to\mathbb{R}^{q}$ for the physical system (depending on the controllable input $x$ as well as other parameters $t\in\Theta\subset\mathbb{R}^{p}$) which typically comes in the form of a computer code/model. The parameter $t$ represents our understanding of the physical system, and we hope that for some unknown $\theta^{*}\in\Theta$, the computer model closely approximates the physical process $y_{R}(\cdot)$. Formally, suppose $L\\{y_{R}(\cdot),f(\cdot,\cdot)\\}$ is a loss function that can distinguish between the best possible parameter $\theta^{*}$ and all other parameter values $t$. If there exists a unique $t^{*}$ such that $y_{R}(x)=f(x,t^{*})$ for all $x\in\mathcal{X}$ and the loss $L$ is strictly proper following Gneiting and Raftery (2007), then $\theta^{*}=t^{*}$. However, existence of such a $t^{*}$ is not always guaranteed for most practical computer models and loss functions. In the absence of such a parameter, the computer model is biased and $\theta^{*}$ is defined as the parameter combination at which the loss $L\\{y_{R}(\cdot),f(\cdot,\cdot)\\}$ is minimized , i.e. $\theta^{*}=\operatorname*{arg\,min}_{t\in\Theta}L\\{y_{R}(\cdot),f(\cdot,t)\\}$. To account for such bias, Kennedy and O’Hagan (2001) proposed the following model for the observed data on the system $y_{F}(x_{i})=f(x_{i},\theta)+b_{\theta}(x_{i})+\epsilon(x_{i}),\,\epsilon(x_{i})\overset{ind.}{\sim}{\mathrm{N}}(0,\Sigma_{F}),\,i=1,\ldots,n,$ (2) where $\theta$ represents the model parameter that targets the population parameter $\theta^{*}$ defined above, and $b_{\theta}(x)$ is interpreted as the discrepancy or bias between the physical process and the assumed computer model, i.e. $b_{\theta}(x)=y_{R}(x)-f(x,\theta)$. Without added restrictions in (2), $\theta$ is not identifiable in (2). 
We provide a sufficient condition for identifiability under the loss $L\\{y_{R}(\cdot),f(\cdot,t)\\}=\sum_{k=1}^{q}\int_{\mathcal{X}}\\{y_{R,k}(x)-f_{k}(x,t)\\}^{2}dx,$ (3) in the following proposition, extending the result of Plumlee (2017) to multivariate outcomes. ###### Proposition 3.1. Consider the loss $L\\{y_{R}(\cdot),f(\cdot,t)\\}$ defined above. Let $g_{j,k}(x,t)=\frac{\partial}{\partial t_{j}}f_{k}(x,t)$, the partial derivative with respect to $t_{j}$ of the $k$-th component $f_{k}(x,t)$ of the computer model $f(x,t)$, which is assumed to exist for all $j,k$. A sufficient condition for $\frac{\partial}{\partial t}L\\{y_{R}(\cdot),f(\cdot,t)\\}\rvert_{t=\theta^{*}}=0$ is $\int_{\mathcal{X}}\sum_{k=1}^{q}g_{j,k}(x,\theta^{*})b_{\theta^{*},k}(x)dx=0,\quad\text{for all }j=1,\ldots,p.$ (4) ###### Proof. Write $y_{R,k}(x)=f_{k}(x,\theta)+b_{\theta,k}(x)$ to obtain that the $j$-th element of $\frac{\partial}{\partial t}L\\{y_{R}(\cdot),f(\cdot,t)\\}\rvert_{t=\theta}$ is $-2\int_{\mathcal{X}}\sum_{k=1}^{q}g_{j,k}(x,\theta)b_{\theta,k}(x)dx,$ and recall $\theta^{*}=\operatorname*{arg\,min}_{t\in\Theta}L\\{y_{R}(\cdot),f(\cdot,t)\\}$, which completes the proof. ∎ In other words, if the bias functions satisfy the linear (orthogonality) constraint (4) with respect to the gradients of the computer model at $\theta$, then $\theta$ is identifiable. ### 3.2 Bayesian inference on model (2) The influential paper Kennedy and O’Hagan (2001) originally proposed the model (2) for $q=1$, and considered Gaussian process priors (Rasmussen and Williams, 2005) as a prior distribution over the unknown function $b_{\theta}(x)$ together with a Normal/Uniform prior over the parameter $\theta$. For now, we assume that the functional form of the computer model $f(x,t)$ is explicitly known for every $x\in\mathcal{X},t\in\Theta$ and defer the extension of the proposed method to the case of an unavailable $f$ to Section 4. A Gaussian process prior over an unknown function $h(\cdot)$ is typically elicited by a mean function $m(\cdot)$ and a covariance function $C(\cdot,\cdot)$. Under this prior, for any finite collection of points $\\{u_{1},\ldots,u_{N}\\}$, the prior distribution over the vector $\\{h(u_{1}),\ldots,h(u_{N})\\}^{\mathrm{\scriptscriptstyle{T}}}$ is a multivariate Gaussian distribution with mean $\\{m(u_{1}),\ldots,m(u_{N})\\}$ and covariance matrix $K=(K_{ij})$ with $K_{ij}=C(u_{i},u_{j}),\,1\leq i,j\leq N$. Let $\Pi(\theta)$ be the prior distribution over $\theta$ and suppose $b_{\theta}$ is endowed with a Gaussian process prior with a zero mean function and a covariance kernel $C(\cdot,\cdot)$. Then with $n$ observations on $y_{F}(\cdot)$ on the design $\mathcal{D}$, the posterior distribution of $(\theta,\mathbf{b}),\,\mathbf{b}=\\{b_{\theta}(x_{1}),\ldots,b_{\theta}(x_{n})\\}^{\mathrm{\scriptscriptstyle{T}}}$ can be obtained using Bayes' theorem as $\Pi(\theta,\mathbf{b}\mid y^{(n)})\propto\ell(y^{(n)}\mid\theta,\mathbf{b})\Pi(\theta)\Pi(\mathbf{b}),$ (5) where given $\theta$, $y^{(n)}=\\{y_{F}(x_{1})-f(x_{1},\theta),\ldots,y_{F}(x_{n})-f(x_{n},\theta)\\}^{\mathrm{\scriptscriptstyle{T}}}$ and $\ell(y^{(n)}\mid\theta,\mathbf{b})$ is the likelihood corresponding to a Gaussian distribution with mean $\mathbf{b}$ and covariance matrix $\sigma^{2}_{F}\mathrm{I}_{n}$. The marginal posterior distribution of $\theta$, $\Pi(\theta\mid y^{(n)})$, can be obtained by observing that $y^{(n)}\mid\theta\sim{\mathrm{N}}(0,K+\sigma_{F}^{2}\mathrm{I}_{n})$.
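To make this last observation concrete, the marginal likelihood of $\theta$ can be evaluated directly once a covariance kernel is fixed. The sketch below (in R) does this for the toy pair $y_{R}(x)=4x+x\sin 5x$, $f(x,t)=tx$ used later in Section 5.1, with the kernel $C(x,x^{\prime})=\sigma^{2}(1+|x-x^{\prime}|/\psi)\exp\\{-|x-x^{\prime}|/\psi\\}$ also used there; the particular values of $\sigma^{2}$, $\psi$ and $\sigma^{2}_{F}$ are illustrative assumptions, not prescriptions.

```r
## Minimal sketch: marginal log-likelihood of theta under y^(n) | theta ~ N(0, K + sigma_F^2 I),
## for the toy model y_R(x) = 4x + x sin(5x), f(x, t) = t x.
set.seed(1)
n     <- 100
x     <- sort(runif(n))
y_F   <- 4 * x + x * sin(5 * x) + rnorm(n, sd = 0.2)       # field data
f_sim <- function(x, t) t * x                               # computer model

C_fun <- function(x1, x2, sig2 = 1, psi = 0.5) {            # kernel used in Section 5.1
  d <- abs(outer(x1, x2, "-"))
  sig2 * (1 + d / psi) * exp(-d / psi)
}
K      <- C_fun(x, x)
sig2_F <- 0.2^2                                             # illustrative plug-in noise variance

log_marg <- function(theta) {
  z <- y_F - f_sim(x, theta)                                # y^(n) given theta
  S <- K + sig2_F * diag(n)                                 # Cov(y^(n) | theta)
  -0.5 * (t(z) %*% solve(S, z) + determinant(S, logarithm = TRUE)$modulus)
}
theta_grid <- seq(2, 5, by = 0.05)
ll <- sapply(theta_grid, log_marg)
theta_grid[which.max(ll)]                                   # roughly where the posterior mass sits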
Tuo and Wu (2016) proved that vanilla Gaussian process priors over the bias $b_{\theta}(\cdot)$ may lead to inaccurate inference on $\theta^{*}$, defined as the minimizer of the loss function discussed above, even when observations are available for a large number of points $n$. This is mainly due to the fact that a generic Gaussian process on the bias function is not guaranteed to satisfy the orthogonality condition (4). In light of this, Plumlee (2017) advocated a modified covariance kernel for $b_{\theta}(\cdot)$ such that realizations of such a Gaussian process prior satisfy (4) almost surely; see also Plumlee and Joseph (2018). This modified covariance kernel is $C_{\theta}(x,x^{\prime})=C(x,x^{\prime})-h_{\theta}(x)^{\mathrm{\scriptscriptstyle{T}}}H_{\theta}^{-1}h_{\theta}(x^{\prime}),$ (6) where the $j$-th element of the vector $h_{\theta}(x)\in\mathbb{R}^{p}$ is given by $\int_{\mathcal{X}}g_{j}(u,\theta)C(x,u)du$ and the $(j,k)$-th element of the matrix $H_{\theta}\in\mathbb{R}^{p\times p}$ is $\int_{\mathcal{X}}\int_{\mathcal{X}}g_{j}(u,\theta)g_{k}(u^{\prime},\theta)C(u,u^{\prime})dudu^{\prime}$ with $x,x^{\prime}\in\mathcal{X}$ for $1\leq j,k\leq p$. This covariance function is obtained by observing that under a Gaussian process prior on $b_{\theta}$, the prior on $(b_{\theta},\int_{\mathcal{X}}g_{1}(x,\theta)b_{\theta}(x)dx,\ldots,\int_{\mathcal{X}}g_{p}(x,\theta)b_{\theta}(x)dx)$ is again a Gaussian process; one then simply finds the conditional covariance of $b_{\theta}$ given that the last $p$ elements of the above vector are 0, and thus the orthogonality condition in (4) is enforced. Of course, a priori we do not have knowledge of $\theta$. However, assuming the knowledge of the system $y_{R}(x)$ and the computer model $f(x,t)$ for all $t\in\Theta$, a consistent estimator of $\theta$ can be used for implementing the orthogonal covariance function in practice (Plumlee, 2017). ### 3.3 Posterior projections Although the modified covariance kernel seemingly resolves the identifiability issue of $\theta$, it hinges on the assumption that the user chooses to model $b_{\theta}(x)$ with a Gaussian process. This leaves out a plethora of other nonparametric priors for functions that have been widely popular in the literature, for example Bayesian additive regression trees (BART) (Chipman et al., 2010), Bayesian multivariate adaptive regression splines (Denison et al., 1998), and Bayesian neural networks (Neal, 2012). Furthermore, these methods have been successfully applied in computer code calibration problems (Pratola and Higdon, 2016; Higdon et al., 2008; Chakraborty et al., 2013). Many of these priors have been shown to have optimal theoretical properties along with sufficient computational scalability (Ročková and van der Pas, 2020; Linero and Yang, 2018). In this article, our aim is to develop a procedure where a user is free to choose any nonparametric prior for $b_{\theta}(x)$ and the procedure automatically takes care of the orthogonality constraints discussed above. Let $L^{2}_{q}(\mathcal{X})$ be the space of $q$-dimensional square-integrable functions on $\mathcal{X}$ equipped with the inner-product $\langle f_{1},f_{2}\rangle=\sum_{k=1}^{q}\int f_{1,k}(x)f_{2,k}(x)dx$. Then $L^{2}_{q}(\mathcal{X})$ is a tensor Hilbert space with norm $\|\cdot\|_{L^{2}_{q}}$ induced by the inner product: $\|f\|_{L^{2}_{q}}^{2}=\langle f,f\rangle=\sum_{k=1}^{q}\int f_{k}^{2}(x)dx$.
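As a brief aside before developing the projection approach, the modified kernel (6) can be computed numerically for simple models. The sketch below does so for the univariate toy model $f(x,t)=tx$, whose single gradient is $g_{1}(x,\theta)=x$, with integrals over $[0,1]$ approximated on a grid; the base kernel, grid size and jitter are illustrative choices only.

```r
## Sketch of the orthogonal covariance (6) for f(x, t) = t x, so g_1(x, theta) = x.
## Integrals over X = [0, 1] are approximated on a fine grid (illustration only).
C_fun <- function(x1, x2, sig2 = 1, psi = 0.5) {
  d <- abs(outer(x1, x2, "-"))
  sig2 * (1 + d / psi) * exp(-d / psi)
}
g1 <- function(x, theta) x                                # d f / d t

orth_kernel <- function(x1, x2, theta, m = 400) {
  u  <- seq(0, 1, length.out = m); du <- 1 / m
  gu <- g1(u, theta)
  h1 <- C_fun(x1, u) %*% gu * du                          # h_theta(x1)
  h2 <- C_fun(x2, u) %*% gu * du                          # h_theta(x2)
  H  <- as.numeric(t(gu) %*% C_fun(u, u) %*% gu * du^2)   # H_theta (1 x 1 here)
  C_fun(x1, x2) - h1 %*% t(h2) / H
}

## Draws from a GP with this kernel are (numerically) orthogonal to g_1:
x  <- seq(0, 1, length.out = 200)
Ko <- orth_kernel(x, x, theta = 3.56)
b  <- t(chol(Ko + 1e-6 * diag(length(x)))) %*% rnorm(length(x))
sum(g1(x, 3.56) * b) / length(x)                          # approximate integral, close to 0
```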
We assume that for every $t\in\Theta$, $g_{j}(\cdot,t)=(g_{j,1}(\cdot,t),\ldots,g_{j,q}(\cdot,t))^{\mathrm{\scriptscriptstyle{T}}}\in L^{2}_{q}(\mathcal{X})$ for all $j=1,\ldots,p$ and the corresponding bias $b_{t}(\cdot)\in L^{2}_{q}(\mathcal{X})$. Define $\mathcal{F}_{\tilde{\theta}}=\\{b_{\tilde{\theta}}:\int\sum_{k=1}^{q}g_{j,k}(x,\tilde{\theta})b_{\tilde{\theta},k}(x)dx=0,j=1,\ldots,p\\}$ for a fixed $\tilde{\theta}$. Here $\tilde{\theta}$ is our guess about the parameter combination at which the computer model best approximates the system given the observed data and the computer model. Such a data-dependent choice of $\tilde{\theta}$ involves minimizing the corresponding empirical risk. Let $\tilde{\theta}$ be defined as $\tilde{\theta}=\operatorname*{arg\,min}_{t\in\Theta}\frac{1}{nq}\sum_{k=1}^{q}\sum_{i=1}^{n}\\{y_{F,k}(x_{i})-f_{k}(x_{i},t)\\}^{2}.$ (7) Validity of this definition is provided in the following result, where we first establish $\theta^{*}$ as the minimizer of the population risk and then leverage empirical risk minimization results to show that $\tilde{\theta}$ converges to $\theta^{*}$ for large $n$. ###### Proposition 3.2. Let $P=P_{y\mid x}P_{x}$ denote the data generating distribution where $P_{x}$ is the uniform measure on $\mathcal{X}$. Suppose that $\Theta$ is compact and that the $k$-th computer model is Lipschitz in $t\in\Theta$ uniformly over $x\in\mathcal{X}$, that is, there exists $L_{k}>0$ such that $|f_{k}(x,t_{1})-f_{k}(x,t_{2})|\leq L_{k}\|t_{1}-t_{2}\|_{2}$ for $k=1,\ldots,q$. Then $\tilde{\theta}\overset{P}{\to}\theta^{*}$. A proof is provided in the appendix. Our definition of $\tilde{\theta}$ is different from the definition used in (Plumlee, 2017, Proposition 1) where the author assumes the knowledge of $y_{R}(\cdot)$ along with the computer model $f(\cdot,t)$ for all $t$. We argue that the explicit functional form of $y_{R}(\cdot)$ may not be available in many practical scenarios, and hence define $\tilde{\theta}$ as detailed above. The compactness of $\Theta$ is a technical assumption which is nevertheless reasonable in many practical calibration problems. Having defined the point at which we want the bias function $b(x)$ to satisfy the orthogonality condition (4), we now proceed to define the projection posterior. Instead of defining a prior distribution on $b_{\tilde{\theta}}\in\mathcal{F}_{\tilde{\theta}}$, our strategy is to start with a generic prior and then project the posterior distribution to $\mathcal{F}_{\tilde{\theta}}$. Standard Hilbert space theory implies that $\mathcal{F}_{\tilde{\theta}}$ is a non-empty, closed and convex subset of $L^{2}_{q}(\mathcal{X})$. Then by the Hilbert projection theorem (Tsiatis, 2006), there exists a unique projection $b_{\tilde{\theta}}^{*}$ onto $\mathcal{F}_{\tilde{\theta}}$ of any $b_{\tilde{\theta}}\in L^{2}_{q}(\mathcal{X})$. Define the projection as $T_{\mathcal{F}_{\tilde{\theta}}}(b_{\tilde{\theta}})=\\{b_{\tilde{\theta}}^{*}\in\mathcal{F}_{\tilde{\theta}}:\|b_{\tilde{\theta}}^{*}-b_{\tilde{\theta}}\|_{L^{2}_{q}}=\inf_{b\in\mathcal{F}_{\tilde{\theta}}}\|b-b_{\tilde{\theta}}\|_{L^{2}_{q}}\\}.$ (8) We now describe how $T_{{\mathcal{F}}_{\tilde{\theta}}}$ is used to define the projection posterior. Suppose a prior distribution $\Pi(\theta,b)=\Pi(\theta)\Pi(b)$ is elicited on $\Theta\times L^{2}_{q}(\mathcal{X})$. Let $\widetilde{B}=\widetilde{B}_{1}\times\widetilde{B}_{2}$ be a measurable subset of the Borel $\sigma$-algebra of $\Theta\times{\mathcal{F}}_{\tilde{\theta}}$.
Then given $y^{(n)}$, we define the posterior probability $\Pi_{\text{proj}}(\widetilde{B}\mid y^{(n)})$ of $\widetilde{B}$ under the prior $\Pi(\theta,b)$ as $\Pi(B\mid y^{(n)})$ where $B=\widetilde{B}_{1}\times T_{\mathcal{F}_{\tilde{\theta}}}^{-1}(\widetilde{B}_{2})$ with $T_{\mathcal{F}_{\tilde{\theta}}}^{-1}(\widetilde{B}_{2})=\\{b_{\tilde{\theta}}:T_{\mathcal{F}_{\tilde{\theta}}}(b_{\tilde{\theta}})\in\widetilde{B}_{2}\\}$, that is, $\Pi_{\text{proj}}(\widetilde{B}\mid y^{(n)})=\Pi(B\mid y^{(n)})=\dfrac{\int_{B}\ell(y^{(n)}\mid\theta,b)d\Pi(\theta,b)}{\int\ell(y^{(n)}\mid\theta,b)d\Pi(\theta,b)}.$ (9) Measurability of $T$ is guaranteed since $\mathcal{F}_{\tilde{\theta}}$ is non-empty, closed and convex; see also Sen et al. (2018). Although (8) is defined as a solution to an optimization problem, in this particular case it has an explicit form, which we call the functional projection. ###### Lemma 3.3. Fix $b\in L^{2}_{q}(\mathcal{X})$ and let $\tilde{\theta}$ be defined as above. Let the functions $g_{j,k}(x),\,j=1,\ldots,p$ satisfy the following: $\sum_{j=1}^{p}\alpha_{j}g_{j,k}(x)=0$ for all $x\in\mathcal{X}$ and for all $k=1,\ldots,q$ iff $\alpha_{j}=0,\,j=1,\ldots,p$. Then $T_{\mathcal{F}_{\tilde{\theta}}}(b)=b^{*}(x)=b(x)-\sum_{j=1}^{p}\lambda_{j}^{*}g_{j}(x)$ where the vector $\lambda^{*}=(\lambda_{1}^{*},\ldots,\lambda_{p}^{*})^{\mathrm{\scriptscriptstyle{T}}}$ satisfies $Q\lambda^{*}=\eta$ with $\eta=(\sum_{k=1}^{q}\langle b_{k},g_{1,k}\rangle,\ldots,\sum_{k=1}^{q}\langle b_{k},g_{p,k}\rangle)^{\mathrm{\scriptscriptstyle{T}}}$ and $Q$ is a $p\times p$ matrix with elements $Q_{jj^{\prime}}=\sum_{k=1}^{q}\langle g_{j,k},g_{j^{\prime},k}\rangle$. The proof is provided in the Appendix. Having defined the projection for our purpose, we can then devise an MCMC algorithm to sample from $\Pi_{\text{proj}}(\cdot\mid y^{(n)})$ defined in (9).
Algorithm 1 (Projection sampler 1):
1. Update $b_{\tilde{\theta}}\sim\Pi(b_{\tilde{\theta}}\mid\theta,y^{(n)})$.
2. Project $b_{\tilde{\theta}}$ onto $\mathcal{F}_{\tilde{\theta}}$ following Lemma 3.3 to obtain $b^{*}_{\tilde{\theta}}$.
3. Update $\theta\sim\Pi(\theta\mid b^{*}_{\tilde{\theta}},y^{(n)})$.
As an example, consider the case when $b_{\tilde{\theta}}$ is endowed with a zero mean Gaussian process prior with covariance kernel $C(\cdot,\cdot)$. Then following Algorithm 1, we update $b_{\tilde{\theta}}\sim\Pi(b\mid\tilde{\theta},y^{(n)})$, which is a multivariate Gaussian distribution. Next, we compute the corresponding projection $b^{*}_{\tilde{\theta}}$ using Lemma 3.3. Finally, we sample $\theta\sim\Pi(\theta\mid b^{*}_{\tilde{\theta}},y^{(n)})$. This strategy also works for other priors such as BART, where in the first step we update the BART parameters using their respective full conditionals. The next steps are exactly the same. An alternative finite-dimensional projection method is described in the supplementary document. For a full Bayesian inference on (2) under the projection posterior framework described above, one should ideally elicit a prior $\Pi(\Sigma_{F})$ on the unknown covariance matrix $\Sigma_{F}$. However, this often adds to the computational challenges already present in a calibration problem and may not influence the uncertainty observed in $\theta$, the central goal of these problems (Bayarri et al., 2007). As an alternative, a modular Bayes approach is considered here where we use a plug-in estimate $\hat{\Sigma}_{F}$. This estimate is constructed by fitting a nonparametric regression model, i.e.
$y_{F,k}(x)=y_{R,k}(x)+\epsilon_{k}(x)$ to each outcome separately with idiosyncratic variance parameters $\sigma_{F,k}^{2}$ with conjugate Inverse Gamma priors and $y_{R,k}\sim\Pi_{k}$ for some nonparametric prior on $y_{R,k}$. We then set $\hat{\Sigma}_{F}=\text{Cov}(E)$ where $E$ is the error matrix with columns $E_{k}=y_{F,k}-\bar{y}_{R}$ where $\bar{y}_{R}$ is the posterior mean of predictions on the training set. Given $\hat{\Sigma}_{F}$, it can be assumed without loss of generality that $\epsilon(x)\sim{\mathrm{N}}(0,\mathrm{I}_{q})$ in (2). This also makes prior elicitation on the multivariate bias $b$ simpler; a natural choice is $\Pi(b)=\prod_{k=1}^{q}\Pi(b_{k})$. ## 4 Explicitly unavailable computer model Until now we have assumed that the computer model $f(x,t)$ is either explicitly specified or can be evaluated cheaply using a code for every $(x,t)\in\mathcal{X}\times\Theta$. However, in many calibration problems, including ours, this is not the case; the computer code can be computationally very expensive; see also Section 2. The standard approach in the literature has been to approximate the computer model using a nonparametric method, typically the Gaussian process (Santner et al., 2003). While the original calibration framework developed by Kennedy and O’Hagan (2001) modeled the observed data and the computer model simultaneously, a modular approach to inference has been advocated by many authors, e.g. Bayarri et al. (2007); Plumlee (2017) wherein the modeling of the explicitly defined computer model is done separately from the modeling of the observed data. Here, we adopt this modular approach with one crucial difference with Plumlee (2017) where the author uses a probabilistic definition of the computer model using a Gaussian process. We, on the other hand, use a deterministic definition $\hat{f}$ of the computer model obtained using any nonparametric approximator such as the posterior mean of a BART fit, a neural net obtained by minimizing the least square error between code evaluations and the neural net output, or a random forest fit; such an approach was also adopted by Xie and Xu (2021). Our motivation for doing so is to allow for more computational and modeling flexibility than a Gaussian process framework. Specifically, we work with the posterior mean of a BART fit which proved to be superior among all other choices of models for $f(x,t)$ in our numerical experiments as well as in our motivating example. In particular, we compared the squared error loss on a held-out test data of size $n_{t}=50$ for these methods: $(n_{t})^{-1}\sum_{i=1}^{n_{t}}\\{f(x,t_{i})-\hat{f}(x,t_{i})\\}^{2}$ for each location $x$ in the observed data $x\in\mathcal{X}$ and each output of the computer model. In the best case scenario, the ratio of the squared error loss for the posterior predictive mean improved over the second best method (Random Forest) by a factor of almost 5. ## 5 Simulations and univariate applications ### 5.1 Univariate simulations In this section, we compare our method to the GP orthogonal calibration by Plumlee (2017) (OGP) and the projected calibration (PCAL) method by Xie and Xu (2021) in several simulation experiments. Since these methods were originally developed for one outcome, we restrict here to the case $q=1$. We focus on each of these methods’ ability to estimate $\theta^{*}$, uncertainty quantification, and their associated computing time. 
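As a concrete reference point for the comparisons that follow, here is a minimal sketch (in R, for $q=1$) of the projection step of Algorithm 1, i.e. the functional projection of Lemma 3.3 with grid-approximated inner products; the example bias function, gradient matrix and grid are illustrative choices.

```r
## Minimal sketch of the projection step in Algorithm 1 (Lemma 3.3) for q = 1:
## project a bias draw b onto the set orthogonal to the gradients g_j(., tilde_theta).
project_bias <- function(b_vals, g_mat, w) {
  # b_vals: length-m vector of b(x) on a grid; g_mat: m x p matrix of g_j(x);
  # w: quadrature weight (here simply 1/m for an equally spaced grid on [0, 1]).
  eta    <- as.vector(t(g_mat) %*% (w * b_vals))        # <b, g_j>
  Q      <- t(g_mat) %*% (w * g_mat)                    # <g_j, g_j'>
  lambda <- solve(Q, eta)
  b_vals - as.vector(g_mat %*% lambda)                  # b* = b - sum_j lambda_j g_j
}

## Example with f(x, t) = t x, so g_1(x) = x:
m  <- 500
xg <- seq(0, 1, length.out = m)
w  <- 1 / m
b  <- sin(7 * xg) + 0.3 * xg                            # any square-integrable bias draw
bs <- project_bias(b, cbind(xg), w)
c(before = sum(w * xg * b), after = sum(w * xg * bs))   # second entry is (numerically) 0
```

Within an MCMC run, this step would sit between the bias update and the $\theta$ update, exactly as listed in Algorithm 1.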
For generating the data, we consider two situations in which the definition of the computer model is available explicitly:
1. Model 1: $y_{R}(x)=4x+x\sin 5x$, $f(x,t)=tx$ where $x\in[0,1]$ (Plumlee, 2017, Example 5.1);
2. Model 2: $f(x,t)=7\\{\sin(2\pi t_{1}-\pi)\\}^{2}+2\\{(2\pi t_{2}-\pi)^{2}\sin(2\pi x-\pi)\\}$, $y_{R}(x)=f(x,\theta^{*})$ where $\theta^{*}=(0.2,0.3)$, $x\in[0,1]$ (Xie and Xu, 2021, Configuration 1).
In Model 1, $\theta^{*}=3.56$. We additionally consider one more situation where we do not assume the knowledge of $f(x,t)$ in Model 2 and approximate it by the posterior predictive mean of a BART when $N=7^{2}$ code outputs are available on the observed values of $x$, meaning that for each value in $\\{\theta_{1},\ldots,\theta_{N}\\}$ the computer code output $f(x_{i},\theta_{k})$, $k=1,\ldots,N$, $i=1,\ldots,n$, is available.
• Model 3: $f(x,t)$ is the same as in Model 2, but only the outputs $f_{1},\ldots,f_{N}$ are available. Everything else is the same as in Model 2. In this case, we set $\tilde{\theta}$ to be the parameter value at which the sample $L^{2}$-distance between the observed data and the posterior predictive mean of $f$ is minimized.
For all cases we simulate $n=100$ field observations from the model $y_{F}(x_{i})=y_{R}(x_{i})+\epsilon_{i},\,\epsilon_{i}\sim{\mathrm{N}}(0,0.2^{2})$ where we sample the $x_{i}$’s uniformly over $[0,1]$. For the proposed method, we consider two priors on the bias function: GP and BART. For the GP prior, we use the covariance kernel $C(x,x^{\prime})=\sigma^{2}(1+|x-x^{\prime}|/\psi)\exp\\{-|x-x^{\prime}|/\psi\\}$. We set $\psi=1/2$ and use a plug-in estimate of $\sigma^{2}$ which is obtained by fitting a nonparametric BART regression model to the observed field data $(y_{F,i},x_{i})_{i=1}^{n}$. The corresponding projected versions are abbreviated as PGP and PBART. We also consider the functional and finite-dimensional projections for PGP and PBART, with the finite-dimensional versions abbreviated as PGP(Fn) and PBART(Fn). For Model 3, we only consider the infinite-dimensional projections. In Tables 1, 2 and 3 we report the results of our experiments from a replicated simulation study. All numbers are averaged over 100 replications and for each model we draw 5000 posterior samples using MCMC with 1000 burn-in samples. In terms of estimation quality of the calibration parameters, all the different methods perform similarly. Indeed, the posterior means and standard deviations in Table 1 and Table 2 are almost identical. However, the key difference is in the respective runtimes of these methods. The runtimes reported in the tables are comparable up to coding language and efficiencies of these implementations. Nonetheless, PBART outperforms all other methods by quite a large margin, providing at least an order of magnitude improvement over the next fastest method. We note here that the runtimes reported for PCAL correspond to the standard implementation of the method and not the approximated calibration method proposed in Xie and Xu (2021); the approximated calibration method is much faster compared to its standard version, although even for that the average runtime for Model 2 is 36.33 seconds, which is almost 4 times the time of PBART. In terms of uncertainty quantification, all the methods in consideration provide roughly the nominal coverage, although it seems that PGP and PBART are biased downwards (but only slightly). This is also apparent in the relatively sharper posterior distributions of $\theta$ plotted in Figure 2.
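For concreteness, the field-data generation for Models 1 and 2 and the empirical-risk anchor $\tilde{\theta}$ from (7) for Model 1 can be sketched as follows; the seed and the use of a one-dimensional numerical optimizer are illustrative choices.

```r
## Sketch of the data generation for Models 1 and 2 (n = 100, x ~ Uniform[0, 1],
## Gaussian noise with sd 0.2), and of computing tilde(theta) from (7) for Model 1.
set.seed(42)
n <- 100
x <- runif(n)

## Model 1: y_R(x) = 4x + x sin(5x),  f(x, t) = t x
yF1 <- 4 * x + x * sin(5 * x) + rnorm(n, sd = 0.2)
theta_tilde <- optimize(function(t) mean((yF1 - t * x)^2), interval = c(0, 10))$minimum
theta_tilde                                   # close to theta* = 3.56

## Model 2: f(x, t) = 7{sin(2 pi t1 - pi)}^2 + 2{(2 pi t2 - pi)^2 sin(2 pi x - pi)},
##          y_R(x) = f(x, theta*) with theta* = (0.2, 0.3)
f2  <- function(x, t) 7 * sin(2 * pi * t[1] - pi)^2 +
                      2 * (2 * pi * t[2] - pi)^2 * sin(2 * pi * x - pi)
yF2 <- f2(x, c(0.2, 0.3)) + rnorm(n, sd = 0.2)
```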
Table 1: Simulation results for Model 1 ($\theta^{*}=3.56$). We report the posterior mean, posterior standard deviation, coverage of the 95% credible intervals and the average runtime.

| | PGP | PGP(Fn) | PBART | PBART(Fn) | OGP | PCAL |
|---|---|---|---|---|---|---|
| Mean | 3.56 | 3.59 | 3.57 | 3.58 | 3.57 | 3.57 |
| Std. Dev. | 0.008 | 0.007 | 0.02 | 0.02 | 0.003 | 0.003 |
| Coverage | 0.93 | 0.93 | 0.92 | 0.93 | 0.95 | 0.95 |
| Runtime | 3.78 sec | 30.72 sec | 1.86 sec | 17 min | 1.88 | 13.92 min |

Table 2: Simulation results for Model 2 ($\theta^{*}=(0.2,0.3)$). We report the posterior mean, posterior standard deviation, coverage of the 95% credible intervals and the average runtime.

| | PGP | PGP(Fn) | PBART | PBART(Fn) | OGP | PCAL |
|---|---|---|---|---|---|---|
| Mean | (0.2, 0.3) | (0.2, 0.31) | (0.19, 0.3) | (0.2, 0.31) | (0.2, 0.3) | (0.2, 0.3) |
| Std. Dev. | (0.0002, 0.0002) | (0.0003, 0.0004) | (0.0008, 0.0007) | (0.0008, 0.0008) | (0.001, 0.002) | (0.003, 0.0006) |
| Coverage | 0.91 | 0.91 | 0.92 | 0.92 | 0.96 | 0.96 |
| Runtime | 18.18 sec | 2.65 min | 9.38 sec | 1.35 hrs | 16.58 min | |

Table 3: Simulation results for Model 3 ($\theta^{*}=(0.2,0.3)$). We report the posterior mean, posterior standard deviation and the average runtime.

| | PGP | PBART |
|---|---|---|
| Mean | (0.21, 0.33) | (0.22, 0.34) |
| Std. Dev. | (0.03, 0.03) | (0.04, 0.03) |
| Runtime | 24 min | 20 min |

Figure 2: Posterior distribution of the calibration parameter $\theta$. Panels: (a) Model 1, $\pi(\theta\mid\text{data})$; (b) Model 2, $\pi(\theta_{1}\mid\text{data})$; (c) Model 2, $\pi(\theta_{2}\mid\text{data})$. ### 5.2 Univariate analysis on Schwarzschild model outputs We apply the proposed posterior projection technique to each output of the Schwarzschild model separately. We illustrate the specifics of our application with the first output of the Schwarzschild model, which is the mean velocity. Similar to the Kennedy and O’Hagan (2001) setup, we have the following model for the observed velocities $v_{F}(x)$ and the corresponding code output $v_{S}(x)$ at location $x\in\mathcal{X}$: $v_{F}(x_{i})=v_{R}(x_{i})+\epsilon_{i}=v_{S}(x_{i};\tilde{\theta})+b_{v,\tilde{\theta}}(x_{i})+\epsilon_{i},\quad\epsilon_{i}\sim{\mathrm{N}}(0,\sigma_{v}^{2}),$ (10) where we let $\tilde{\theta}$ be the parameter value at which the $L^{2}$ loss defined in previous sections is minimized and $b_{v,\tilde{\theta}}(\cdot)$ is the bias function corresponding to mean velocity at $\tilde{\theta}$. Since the explicit form of the model is not available, we use the posterior mean of a BART fit as the definition of $v_{S}(x;t)$ based on the model outputs only. We fix $\sigma_{v}$ to the posterior mean of $\sigma$ of a BART regression fit to $(v_{F,i},x_{i})_{i=1}^{n}$. We set $\theta_{j}\sim{\mathrm{N}}(0,\gamma^{2})$ with $\gamma=10$ as priors for $\theta_{j}$, and $\Pi(\theta)=\prod_{j=1}^{p}\Pi(\theta_{j})$. We implement PGP and PBART under these settings, where for implementing Step 3 of Algorithm 1 we use an adaptive Metropolis-Hastings sampler (Haario et al., 2001) tuned to maintain an acceptance rate of approximately 0.3. In Table 4, we summarize the posterior distribution of the calibration parameters for all four outputs of the Schwarzschild model. Specifically, we report the posterior mean, standard deviation and the 95% credible interval for all three calibration parameters $(\theta_{1},\theta_{2},\theta_{3})$. As expected, there are some differences in the posterior means for different velocity moments being fit.
For example, the posterior mean for $\theta_{1}$ differs when using $h_{4}$ compared to when using $\tau$. The difference may be due to the physical degeneracy between the orbital anisotropy and black hole mass, such that large $\tau$ at the center of the 2-dimensional spatial grid can imply a large black hole mass but so too can a smaller $\tau$ with radial anisotropy, which is reflected in the values of $h_{4}$. Or, it could be due simply to the model inadequacy of a univariate treatment of the problem. In addition, there seems to be broad agreement between the posterior means for $\theta_{1}$ and $\theta_{2}$ under the two priors. However, the key difference is in the posterior mean of $\theta_{3}$. For the GP prior, the posterior mean for the 4 outcomes is roughly around 1, whereas for BART, the posterior mean is around 1.4. One possible explanation for this discrepancy is that for $\theta_{3}$, the computer code output is only available for $\theta_{3}=1,2,3$. The scarcity of data along this dimension may potentially lead to unstable estimates. A joint model accounting for the covariance between the measurements can reconcile this apparent discrepancy in the posterior distribution and the problems induced by sparse data. Numerical experiments in the next subsection illustrate the benefits of a joint model when outcomes are correlated.

Table 4: Posterior summaries for the calibration parameters $\theta_{1}$ = logarithm of the mass of the black hole, $\theta_{2}$ = stellar mass-to-light ratio and $\theta_{3}$ = fraction of dark matter. The posterior mean, standard deviation (Std. Dev.) and the 95% credible interval are reported here.

| | | PGP $\theta_{1}$ | PGP $\theta_{2}$ | PGP $\theta_{3}$ | PBART $\theta_{1}$ | PBART $\theta_{2}$ | PBART $\theta_{3}$ |
|---|---|---|---|---|---|---|---|
| $v$ | Mean | 9.46 | 9.06 | 0.82 | 9.59 | 9.16 | 1.22 |
| | Std. Dev. | 0.14 | 0.08 | 0.09 | 0.27 | 0.15 | 0.07 |
| | Interval | (9.21, 9.80) | (8.93, 9.18) | (0.65, 1.00) | (9.20, 9.97) | (8.93, 9.50) | (1.12, 1.32) |
| $\tau$ | Mean | 9.98 | 9.12 | 1.02 | 9.59 | 9.03 | 1.42 |
| | Std. Dev. | 0.09 | 0.07 | 0.10 | 0.10 | 0.13 | 0.18 |
| | Interval | (9.81, 10.14) | (8.99, 9.26) | (0.87, 1.23) | (9.41, 9.76) | (8.84, 9.31) | (1.15, 1.76) |
| $h_{3}$ | Mean | 9.96 | 9.36 | 1.04 | 10.09 | 9.27 | 1.41 |
| | Std. Dev. | 0.07 | 0.08 | 0.12 | 0.07 | 0.08 | 0.06 |
| | Interval | (9.86, 10.11) | (9.15, 9.49) | (0.87, 1.25) | (9.97, 10.23) | (9.12, 9.41) | (1.25, 1.41) |
| $h_{4}$ | Mean | 10.18 | 9.07 | 1.09 | 9.73 | 9.09 | 1.53 |
| | Std. Dev. | 0.11 | 0.06 | 0.06 | 0.06 | 0.12 | 0.09 |
| | Interval | (9.89, 10.62) | (8.97, 9.20) | (0.97, 1.19) | (9.61, 9.82) | (8.85, 9.28) | (1.35, 1.69) |

### 5.3 Multivariate simulations Here we consider a simulation scenario where more than one outcome is measured, i.e. $q>1$. We illustrate this with the case $q=2$. For $x\in[0,1]$, consider the model $\begin{pmatrix}y_{F,1}(x)\\\ y_{F,2}(x)\end{pmatrix}=\begin{pmatrix}f_{1}(x,\theta)\\\ f_{2}(x,\theta)\end{pmatrix}+\begin{pmatrix}b_{\theta,1}(x)\\\ b_{\theta,2}(x)\end{pmatrix}+\begin{pmatrix}\epsilon_{1}\\\ \epsilon_{2}\end{pmatrix}$ (11) where $y_{R,1}(x)=4x+x\sin 5x$, $y_{R,2}(x)=[1+\exp\\{-6(x-0.5)\\}]^{-1}$ are the real data-generating processes. The respective computer models are $f_{1}(x,t)=tx$, $f_{2}(x,t)=\Phi\\{t(x-0.5)\\}$ with $\Phi(\cdot)$ being the cdf of $Z\sim{\mathrm{N}}(0,1)$. The errors are assumed to follow ${\mathrm{N}}(0,\Sigma)$. The loss $L\\{y_{R}(\cdot),f(\cdot,t)\\}=\sum_{k=1}^{2}\int_{\mathcal{X}}\\{y_{R,k}(x)-f_{k}(x,t)\\}^{2}dx$ is minimized at $\theta^{*}=3.56$ approximately.
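A sketch of simulating the two-outcome data in (11) is given below; it uses the error covariance employed in the experiment that follows (variances $0.2^{2}$, covariance $0.012$), with a Cholesky factor used only as a convenient way to draw correlated errors.

```r
## Sketch of simulating data from (11): y_R1(x) = 4x + x sin(5x),
## y_R2(x) = 1/(1 + exp{-6(x - 0.5)}), with correlated Gaussian errors.
set.seed(7)
n  <- 100
x  <- runif(n)
yR <- cbind(4 * x + x * sin(5 * x),
            1 / (1 + exp(-6 * (x - 0.5))))

Sigma0 <- matrix(c(0.2^2, 0.012, 0.012, 0.2^2), 2, 2)
E  <- matrix(rnorm(2 * n), n, 2) %*% chol(Sigma0)   # rows ~ N(0, Sigma0)
yF <- yR + E

## The corresponding computer models, used for calibration:
f1 <- function(x, t) t * x
f2 <- function(x, t) pnorm(t * (x - 0.5))
```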
However, the behavior of the $L_{2}$-loss in these two cases is very different, especially near the minimum. Indeed, as shown in the left panel of Figure 3, for the pair $(y_{R,1},f_{1})$, the loss increases quite sharply around 3.56, whereas for $(y_{R,2},f_{2})$, the increase in loss for $t>3.56$ is much slower. As such, when data from model (11) is fitted separately in the univariate framework, one should expect more uncertainty in $\theta$ for $(y_{R,2},f_{2})$. On the other hand, a joint multivariate fit should reflect the fact that, when the two real processes and their corresponding computer models are compared together, the posterior distribution of $\theta$ is much more concentrated around 3.56. Figure 3: Loss comparison. (a) $L(t)=\int_{0}^{1}\left\\{y_{R,1}(x)-f_{1}(x,t)\right\\}^{2}dx$; (b) $L(t)=\int_{0}^{1}\left\\{y_{R,2}(x)-f_{2}(x,t)\right\\}^{2}dx$. For data generation, we consider $n=100$ and a covariance matrix $\Sigma_{0}$ such that the diagonal elements are $0.2^{2}$ and the off-diagonal elements are $0.012$. We consider PGP and PBART for this setup and use the functional projection. We fit this data separately in the univariate framework and then a joint model is fitted. For a fair comparison, all hyperparameter values were kept the same. We plot the posterior distribution of $\theta$ obtained using the PBART method in Figure 4. In the first and second panels, the posterior distribution of $\theta$ is shown when two separate univariate models are fitted, and in the third panel we show the posterior for a multivariate fit. As mentioned earlier, the posterior distribution of $\theta$ when data for the second outcome is fitted separately is more dispersed and has relatively large mass in the interval [3, 4]. This is not the case for the first outcome, as almost the entire mass is within [3.5, 3.65]. However, the posterior distribution of $\theta$ from a multivariate fit is able to borrow information across the two outcomes so that most of the mass of the distribution is within [3.4, 3.7]. Figure 4: Posterior distribution of $\theta$ for univariate and multivariate PBART fits for model (11). The left two panels show posteriors for two separate univariate fits, and the third panel shows the posterior from a combined multivariate fit. ## 6 Calibrating the Schwarzschild model Here we carry out a joint calibration of the Schwarzschild model. For this, we first estimate the covariance matrix as $\Sigma=\text{Cov}(E)$, where $E$ is an $n\times q$ error matrix obtained by fitting univariate BART regressions to the field data of each outcome. Here we fit the PBART method with the functional projection. The marginal posterior distribution of $\theta$ is shown in Figure 5. The 95% symmetric posterior (marginal) credible intervals are [9.54, 9.87], [8.78, 9.13] and [0.97, 1.38] for $\theta_{1},\theta_{2},\theta_{3}$, respectively. Intervals for $\theta_{1},\theta_{2}$ from the joint analysis clearly seem to lie at the intersection of the intervals from the univariate models, but the posterior of $\theta_{3}$ exhibits slight bimodality. Overall, by modeling the four outcomes simultaneously, the uncertainty in $\theta$ has been reduced, which may significantly cut down computation time for future applications with an expanded parameter space. We also show the marginal posterior predictive mean of the outcomes. In the first panel of Figure 6, we plot the posterior predictive mean at the observed locations for $v_{R}(\cdot)$. For reference, the observed data is plotted on the third panel of Figure 6.
Similar plots for $\tau_{R}(\cdot),h_{3,R}(\cdot)$ and $h_{4,R}(\cdot)$ are provided in Figures 7, 8 and 9, respectively. As mentioned earlier, the outcomes in this context represent the first four moments of velocity, and the model performance decreases as the moments increase. We also observed a similar phenomenon when we looked at the relative squared error loss for a held-out data set of size 20. For $v_{F}(x)$, we define the relative squared error loss as $n_{t}^{-1}\sum_{i=1}^{n_{t}}(1-\hat{v}_{F}(x_{i})/v_{F}(x_{i}))^{2}$ where $\hat{v}_{F}(x_{i})$ represents the posterior mean of samples of $[v_{S}(x_{i},\theta)+b_{v,\theta}(x_{i})]$. Here $n_{t}$ is the test data size. The definition is similar for the other outcomes. The values we obtained are 0.17, 0.003, 3.92 and 30.42 for $v,\tau,h_{3},h_{4}$, respectively. Figure 5: Posterior distribution of $\theta$ from the multivariate PBART fit for the Schwarzschild model. Figure 6: Posterior predictive mean of PBART is plotted for $v_{R}(\cdot)$ in the first panel. In the second panel, the observed values $v_{F}(\cdot)$ are also shown for reference. Figure 7: Posterior predictive mean of PBART is plotted for $\tau_{R}(\cdot)$ in the first panel. In the second panel, the observed values $\tau_{F}(\cdot)$ are also shown for reference. Figure 8: Posterior predictive mean of PBART is plotted for $h_{3,R}(\cdot)$ in the first panel. In the second panel, the observed values $h_{3,F}(\cdot)$ are also shown for reference. Figure 9: Posterior predictive mean of PBART is plotted for $h_{4,R}(\cdot)$ in the first panel. In the second panel, the observed values $h_{4,F}(\cdot)$ are also shown for reference. ## 7 Discussion The Schwarzschild model, although computationally intensive, is an important tool in understanding the dynamics of a black hole and its host galaxy. Motivated by this application, we developed a multivariate calibration method that ensures parameter identifiability under the squared error loss. The key benefits of the proposed projection posterior approach are its flexibility in accommodating user-specified prior distributions on the multivariate bias function, and the fact that such a projection is available analytically. Benefits of a multivariate analysis are demonstrated through numerical experiments, and it seems to impact the analysis of the Schwarzschild model positively as well. Conceptually, the projection approach can be extended to alternative loss functions that lead to different sufficient conditions for identifiability, and to high-dimensional outcomes where low-rank models for the error covariance might be warranted. ## References * Bayarri et al. (2007) Bayarri, M., D. Walsh, J. Berger, J. Cafeo, G. Garcia-Donato, F. Liu, J. Palomo, R. Parthasarathy, R. Paulo, and J. Sacks (2007). Computer model validation with functional output. Annals of Statistics 35(5), 1874–1906. * Bayarri et al. (2007) Bayarri, M. J., J. O. Berger, R. Paulo, J. Sacks, J. A. Cafeo, J. Cavendish, C.-H. Lin, and J. Tu (2007). A framework for validation of computer models. Technometrics 49(2), 138–154. * Chakraborty et al. (2013) Chakraborty, A., B. K. Mallick, R. G. Mcclarren, C. C. Kuranz, D. Bingham, M. J. Grosskopf, E. M. Rutter, H. F. Stripling, and R. P. Drake (2013). Spline-based emulators for radiative shock experiments with measurement error. Journal of the American Statistical Association 108(502), 411–428. * Chipman et al. (2010) Chipman, H. A., E. I. George, and R. E. McCulloch (2010). BART: Bayesian additive regression trees.
The Annals of Applied Statistics 4(1), 266–298. * Denison et al. (1998) Denison, D. G., B. K. Mallick, and A. F. Smith (1998). Bayesian mars. Statistics and Computing 8(4), 337–346. * Forest et al. (2008) Forest, C. E., B. Sansó, and D. Zantedeschi (2008). Inferring climate system properties using a computer model. Bayesian Analysis 3(1), 1–37. * Friedman (1991) Friedman, J. H. (1991). Multivariate adaptive regression splines. The annals of statistics 19(1), 1–67. * Gattiker et al. (2006) Gattiker, J., D. Higdon, S. Keller-McNulty, M. McKay, L. Moore, and B. Williams (2006). Combining experimental data and computer simulations, with an application to flyer plate experiments. Bayesian Analysis 1(4), 765–792. * Gneiting and Raftery (2007) Gneiting, T. and A. E. Raftery (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association 102(477), 359–378. * Haario et al. (2001) Haario, H., E. Saksman, J. Tamminen, et al. (2001). An adaptive Metropolis algorithm. Bernoulli 7(2), 223–242. * Higdon et al. (2008) Higdon, D., J. Gattiker, B. Williams, and M. Rightley (2008). Computer model calibration using high-dimensional output. Journal of the American Statistical Association 103(482), 570–583. * Kennedy and O’Hagan (2001) Kennedy, M. C. and A. O’Hagan (2001). Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63(3), 425–464. * Kormendy and Ho (2013) Kormendy, J. and L. C. Ho (2013, August). Coevolution (Or Not) of Supermassive Black Holes and Host Galaxies. Annual Review of Astronomy and Astrophysics 51(1), 511–653. * Linero and Yang (2018) Linero, A. R. and Y. Yang (2018). Bayesian regression tree ensembles that adapt to smoothness and sparsity. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80(5), 1087–1110. * Marmin and Filippone (2022) Marmin, S. and M. Filippone (2022). Deep gaussian processes for calibration of computer models. Bayesian Analysis 1(1), 1–30. * McConnell and Ma (2013) McConnell, N. J. and C.-P. Ma (2013, February). Revisiting the Scaling Relations of Black Hole Masses and Host Galaxy Properties. The Astrophysical Journal 764(2), 184. * Mehrgan et al. (2019) Mehrgan, K., J. Thomas, R. Saglia, X. Mazzalay, P. Erwin, R. Bender, M. Kluge, and M. Fabricius (2019, December). A 40 Billion Solar-mass Black Hole in the Extreme Core of Holm 15A, the Central Galaxy of Abell 85. The Astrophysical Journal 887(2), 195. * Neal (2012) Neal, R. M. (2012). Bayesian learning for neural networks, Volume 118. Springer Science & Business Media. * Plumlee (2017) Plumlee, M. (2017). Bayesian calibration of inexact computer models. Journal of the American Statistical Association 112(519), 1274–1285. * Plumlee and Joseph (2018) Plumlee, M. and V. R. Joseph (2018). Orthogonal gaussian process models. Statistica Sinica, 601–619. * Pollard and Radchenko (2006) Pollard, D. and P. Radchenko (2006). Nonlinear least-squares estimation. Journal of Multivariate Analysis 97(2), 548–562. * Pratola and Higdon (2016) Pratola, M. T. and D. M. Higdon (2016). Bayesian additive regression tree calibration of complex high-dimensional computer models. Technometrics 58(2), 166–179. * Quenneville et al. (2021) Quenneville, M. E., C. M. Liepold, and C.-P. Ma (2021). Dynamical modeling of galaxies and supermassive black holes: Axisymmetry in triaxial schwarzschild orbit superposition models. The Astrophysical Journal Supplement Series 254(2), 25\. 
* Rasmussen and Williams (2005) Rasmussen, C. E. and C. K. I. Williams (2005). Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press. * Ročková and van der Pas (2020) Ročková, V. and S. van der Pas (2020). Posterior concentration for bayesian regression trees and forests. The Annals of Statistics 48(4), 2108–2131. * Sacks et al. (1989) Sacks, J., W. J. Welch, T. J. Mitchell, and H. P. Wynn (1989). Design and analysis of computer experiments. Statistical science 4(4), 409–423. * Saglia et al. (2016) Saglia, R. P., M. Opitsch, P. Erwin, J. Thomas, A. Beifiori, M. Fabricius, X. Mazzalay, N. Nowak, S. P. Rusli, and R. Bender (2016, February). The SINFONI Black Hole Survey: The Black Hole Fundamental Plane Revisited and the Paths of (Co)evolution of Supermassive Black Holes and Bulges. The Astrophysical Journal 818(1), 47. * Salter et al. (2018) Salter, J. M., D. B. Williamson, J. Scinocca, and V. Kharin (2018). Uncertainty quantification for spatio-temporal computer models with calibration-optimal bases. arXiv preprint arXiv:1801.08184 12. * Santner et al. (2003) Santner, T. J., B. J. Williams, and W. I. Notz (2003). The Design and Analysis of Computer Experiments. Springer series in statistics. Springer. * Schwarzschild (1979) Schwarzschild, M. (1979). A numerical model for a triaxial stellar system in dynamical equilibrium. The Astrophysical Journal 232, 236–247. * Sen et al. (2018) Sen, D., S. Patra, and D. Dunson (2018). Constrained inference through posterior projections. arXiv preprint arXiv:1812.05741. * Tsiatis (2006) Tsiatis, A. A. (2006). Semiparametric theory and missing data, Volume 4. Springer. * Tuo and Wu (2015) Tuo, R. and C. J. Wu (2015). Efficient calibration for imperfect computer models. Annals of Statistics 43(6), 2331–2352. * Tuo and Wu (2016) Tuo, R. and C. J. Wu (2016). A theoretical framework for calibration in computer models: parametrization, estimation and convergence properties. SIAM/ASA Journal on Uncertainty Quantification 4(1), 767–795. * van den Bosch et al. (2008) van den Bosch, R., G. Van De Ven, E. Verolme, M. Cappellari, and P. De Zeeuw (2008). Triaxial orbit based galaxy models with an application to the (apparent) decoupled core galaxy ngc 4365. Monthly Notices of the Royal Astronomical Society 385(2), 647–666. * Xie and Xu (2021) Xie, F. and Y. Xu (2021). Bayesian projected calibration of computer models. Journal of the American Statistical Association 116(536), 1965–1982. * Yang et al. (2017) Yang, Y., A. Bhattacharya, and D. Pati (2017). Frequentist convergence and sup-norm convergence rate in gaussian process regression. arxiv preprint arXiv:1708.04753. ## Appendix A Proofs ### A.1 Proof of Proposition 3.2 ###### Proof. We first characterize $\theta$ in terms of the expected loss. Recall that for $Q=\mathrm{I}_{q}$, $\theta=\operatorname*{arg\,min}_{t\in\Theta}\int_{x}\sum_{k=1}^{q}\\{y_{R,k}(x)-f_{k}(x,t)\\}^{2}dx$. 
For any fixed $t\in\Theta$, let $e(x)=\\{y^{F}(x)-f(x,t)\\}$, then $\displaystyle E_{P}\\{e(x)^{\mathrm{\scriptscriptstyle{T}}}e(x)\\}=(\text{Vol}(\mathcal{X}))^{-1}\int_{x}\left[\sum_{k=1}^{q}E_{P_{y\mid x}}\\{e^{2}_{k}(x)\\}\right]dx$ $\displaystyle=(\text{Vol}(\mathcal{X}))^{-1}\int_{x}\sum_{k=1}^{q}E_{P_{y\mid x}}\\{y_{F,k}-y_{R,k}(x)\\}^{2}dx+(\text{Vol}(\mathcal{X}))^{-1}\int_{x}\sum_{k=1}^{q}\\{y_{R,k}(x)-f_{k}(x,t)\\}^{2}dx$ $\displaystyle=\text{tr}(\Sigma_{F})+(\text{Vol}(\mathcal{X}))^{-1}\int_{x}e(x)^{\mathrm{\scriptscriptstyle{T}}}e(x)dx,$ where $\text{tr}(A)$ denotes the trace of a matrix $A$. Hence, $\theta$ can be seen as $\operatorname*{arg\,min}_{t\in\Theta}\int_{x}\sum_{k=1}^{q}\\{y_{R,k}(x)-f_{k}(x,t)\\}^{2}dx=\operatorname*{arg\,min}_{t\in\Theta}E_{P}[e(x)^{\mathrm{\scriptscriptstyle{T}}}e(x)].$ Now define $Q_{n,k}(t)=n^{-1}\sum_{i=1}^{n}\\{y_{F,k}(x_{i})-f_{k}(x_{i},t)\\}^{2}$ and $Q_{n}(t)=q^{-1}\sum_{k=1}^{q}Q_{n,k}(t)$ which is simply $E_{P_{n}}[e(x)^{\mathrm{\scriptscriptstyle{T}}}e(x)]$ where $P_{n}$ is the corresponding joint empirical measure. Invoking Theorem 3 of Pollard and Radchenko (2006) and the compactness of $\Theta$, we get that $\sup_{f}|Q_{n}(t)-E_{P}(Q_{n}(t))|\overset{P}{\to}0$ which implies that $\theta_{n}^{*}\overset{P}{\to}\theta$. ∎ ### A.2 Proof of Lemma 3.3 ###### Proof. The projection operator is the solution to the optimization problem $\displaystyle\min\|b_{\theta}-b_{\theta}^{*}\|_{L^{2}_{q}}\quad\text{subject to }\sum_{k=1}^{q}\langle g_{j,k},b_{\theta,k}^{*}\rangle=0,\,j=1,\ldots,p.$ Hence, the Lagrangian dual of the optimization problem is $\displaystyle\min\sum_{k=1}^{q}\|b_{\theta,k}-b_{\theta,k}^{*}\|_{L}^{2}+\sum_{j=1}^{p}\lambda_{j}\left[\sum_{k=1}^{q}\langle g_{j,k},b_{\theta,k}^{*}\rangle\right].$ Solving for $b_{\theta,k}^{*}$ gives $b_{\theta,k}^{*}=b_{\theta,k}+\sum_{j=1}^{p}\lambda_{j}g_{j,k}$. Thus, $\|b_{\theta,k}-b_{\theta,k}^{*}\|_{L^{2}_{q}}=\lambda^{\mathrm{\scriptscriptstyle{T}}}(\sum_{k=1}^{q}Q_{k})\lambda$ where $\lambda=(\lambda_{1},\ldots,\lambda_{p})^{\mathrm{\scriptscriptstyle{T}}}$ and $Q_{k}^{p\times p}$ is a matrix with $(j,j^{\prime})$-th element $\langle g_{j,k},g_{j^{\prime},k}\rangle$. Also, $\sum_{k=1}^{q}\langle g_{j,k},b_{\theta,k}^{*}\rangle=\sum_{k=1}^{q}\langle g_{j,k},b_{\theta,k}\rangle+\sum_{j^{\prime}=1}^{p}\lambda_{j^{\prime}}Q_{k,jj^{\prime}}$. This implies that $Q\lambda=-\eta$ where $\eta\in\mathbb{R}^{p\times 1}$ with $\eta_{j}=\sum_{k=1}^{q}\langle g_{j,k},b_{\theta,k}\rangle$ which proves the result. ∎ Supplementary materials for “Orthogonal calibration via posterior projections with applications to the Schwarzschild model” ## Appendix S.1 Finite-dimensional projection When the central focus of analysis is assessing uncertainty in $\theta$, and $\Pi(b_{k})$ is a Gaussian process, then the projection posterior approach can be implemented within a finite dimensional setting by ensuring $\sum_{k=1}^{q}\mathbf{b}_{k}^{\mathrm{\scriptscriptstyle{T}}}\mathbf{g}_{j,k}=0$ where $\mathbf{b}_{k}=(b_{k}(x_{1}),\ldots,b_{k}(x_{n}))^{\mathrm{\scriptscriptstyle{T}}}$ and $\mathbf{g}_{j,k}=(g_{j,k}(x_{1}),\ldots,g_{j,k}(x_{n}))$. This is a consequence of posteriors of $\mathbf{b}_{k}$ being Gaussian under a Gaussian process prior and projections of multivariate Gaussian random vectors to linear subspaces. For example, suppose $X^{d\times 1}\sim{\mathrm{N}}(\mu,\Sigma)$ and consider the set $S=\\{x:A^{\mathrm{\scriptscriptstyle{T}}}x=0\\}\subset\mathbb{R}^{n}$ where $A^{d\times p}$ is matrix of rank $p$. 
The conditional distribution of $X\mid X\in S$ is again a multivariate Gaussian with mean $\mu-\Sigma A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\mu$ and covariance matrix $\Sigma-\Sigma A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\Sigma$. Next, define the following projection for any $x\in\mathbb{R}^{d}$ $P_{S}(x)=\Sigma^{1/2}\operatorname*{arg\,min}_{y\in S}\|y-\Sigma^{-1/2}x\|.$ (S.1) Standard linear algebra yields that $P_{S}(x)$ has the form $\Sigma^{1/2}(I-P_{A})\Sigma^{-1/2}x$, where $P_{A}=A(A^{\mathrm{\scriptscriptstyle{T}}}A)^{-1}A^{\mathrm{\scriptscriptstyle{T}}}$ is the projection matrix of $A$. This leads to the following interpretation of the multivariate Gaussian distribution conditional on linear equality constraints.

###### Proposition S.1.1.

Suppose $X\sim{\mathrm{N}}(\mu,\Sigma)$ and consider the set $S=\\{x:A^{\mathrm{\scriptscriptstyle{T}}}x=0\\}\subset\mathbb{R}^{d}$, where $A^{d\times p}$ is a matrix of rank $p$. Then the random variable $P_{S}(X)\sim{\mathrm{N}}(\mu-\Sigma A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\mu,\Sigma-\Sigma A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\Sigma).$

###### Proof.

We have $\displaystyle\text{Cov}(X\mid X\in S)$ $\displaystyle=\Sigma-\Sigma A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\Sigma$ $\displaystyle=\Sigma^{1/2}(I-\Sigma^{1/2}A^{\mathrm{\scriptscriptstyle{T}}}(A\Sigma A^{\mathrm{\scriptscriptstyle{T}}})^{-1}A\Sigma^{1/2})\Sigma^{1/2}$ $\displaystyle=\Sigma^{1/2}(I-P_{B})\Sigma^{1/2}=\Sigma^{1/2}(I-P_{B})^{2}\Sigma^{1/2}=\text{Cov}(P_{S}(X)),$ where $B=A\Sigma^{1/2}$ and $P_{B}=P_{A}$ since $A$ is full rank and $\Sigma$ is non-singular by assumption. A similar analysis shows that the mean also agrees. ∎

We illustrate this within the orthogonal calibration context. For the sake of simplicity, consider $q=1$. We set $A^{n\times p}=(a_{1},\ldots,a_{p})$, where $a_{j}=(g_{j}(x_{1},\tilde{\theta}),\ldots,g_{j}(x_{n},\tilde{\theta}))^{\mathrm{\scriptscriptstyle{T}}}$. Let $b=(b(x_{1}),\ldots,b(x_{n}))^{\mathrm{\scriptscriptstyle{T}}}$ denote the vector of the bias function evaluated at $x_{1},\ldots,x_{n}$. Then the finite-dimensional analogue of the orthogonality condition is $A^{\mathrm{\scriptscriptstyle{T}}}b=0$. When $b_{\tilde{\theta}}(\cdot)$ is endowed with a zero-mean Gaussian process prior with covariance kernel $C(\cdot,\cdot)$, then a priori the vector $b_{\tilde{\theta}}\sim{\mathrm{N}}(0,K)$ with $K_{ij}=C(x_{i},x_{j}),\,1\leq i,j\leq n$. Also, $b_{\tilde{\theta}}\mid\theta,y^{(n)}\sim{\mathrm{N}}((K+\sigma^{2}_{F})^{-1}z^{(n)},(K+\sigma^{2}_{F})^{-1})$, where $z^{(n)}=(y(x_{1})-f(x_{1},\theta),\ldots,y(x_{n})-f(x_{n},\theta))^{\mathrm{\scriptscriptstyle{T}}}$. Using Proposition S.1.1, we can then obtain $b^{*}$. Hence, for Gaussian process priors, only the second step in Algorithm 1 from the main draft will change under the finite-dimensional projection setup.

We now demonstrate how the finite-dimensional projection can be executed when $b_{\tilde{\theta}}$ is assigned a non-Gaussian process prior. Here the posterior of $\mathbf{b}_{k}$ is also non-Gaussian. Suppose the prior is parameterized by $\eta$. For instance, with a BART prior, $\eta$ contains the trees and leaf node parameters.
A prior on $b_{\tilde{\theta}}$ is specified by a prior distribution on $\eta$, which then produces a full conditional distribution of $\eta\mid\theta,y^{(n)}$. A posterior sample of $b_{\tilde{\theta}}\in\mathbb{R}^{n}$ can be drawn by sampling from $\eta\mid\theta,y^{(n)}$. For many priors, the joint posterior distribution of $b_{\tilde{\theta}}$ can be well approximated by a multivariate Gaussian with some mean and covariance $(\beta,\Phi)$ (Yang et al., 2017). To find the approximating mean and covariance, we draw $M$ independent samples of $b_{\tilde{\theta}}$ by sampling $\\{\eta_{1},\ldots,\eta_{M}\\}$ independently from $\eta\mid\theta,y^{(n)}$ and set $\beta=M^{-1}\sum_{m=1}^{M}b_{\tilde{\theta},m}$ and $\Phi=(M-1)^{-1}\sum_{m=1}^{M}(b_{\tilde{\theta},m}-\beta)(b_{\tilde{\theta},m}-\beta)^{\mathrm{\scriptscriptstyle{T}}}$, where $b_{\tilde{\theta},m}$ is the draw of $b_{\tilde{\theta}}$ corresponding to $\eta_{m},\,m=1,\ldots,M$. We note here that $M$ can be made arbitrarily large compared to the dimension $n$ of $b_{\tilde{\theta}}$ so that the sample mean and covariance provide reasonable approximations to their population counterparts. This technique is summarized in the following algorithm.

1\. Sample $M$ independent copies of the parameters $\eta\mid\theta,y^{(n)}$ and consider the corresponding draws of $b_{\tilde{\theta}}$ as $\\{b_{\tilde{\theta},1},\ldots,b_{\tilde{\theta},M}\\}$.

2\. Compute the quantities $\beta=M^{-1}\sum_{m=1}^{M}b_{\tilde{\theta},m}$ and $\Phi=(M-1)^{-1}\sum_{m=1}^{M}(b_{\tilde{\theta},m}-\beta)(b_{\tilde{\theta},m}-\beta)^{\mathrm{\scriptscriptstyle{T}}}$.

3\. Set the projection as $b_{\tilde{\theta}}^{*}=\Phi^{1/2}(I-P_{G})\Phi^{-1/2}b_{\tilde{\theta}}$.

4\. Update $\theta\sim\Pi(\theta\mid b^{*}_{\tilde{\theta}},y^{(n)})$.

Algorithm 2 Projection sampler 2: non-GP priors

Clearly, for non-Gaussian priors for $b_{k}$, the finite-dimensional projection approach is computationally much more expensive. However, if the prior is a GP, this approach is relatively less expensive to implement and yields similar results to the functional projection approach described in the main draft.
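To make Algorithm 2 concrete, a minimal R sketch is given below. It assumes a user-supplied function draw_b() returning one posterior draw of $b_{\tilde{\theta}}$ (a vector of length $n$) obtained by sampling $\eta\mid\theta,y^{(n)}$, and a matrix G whose $j$-th column stacks the constraint function $g_{j}$ evaluated at $x_{1},\ldots,x_{n}$; these names, and the choice of $M$, are illustrative rather than part of the original implementation.

```r
# Minimal sketch of Algorithm 2 (non-GP priors); names are hypothetical.
# draw_b: function returning one posterior draw of b_tilde (length n),
#         obtained by sampling eta | theta, y^(n) and evaluating b.
# G:      n x p matrix whose j-th column is (g_j(x_1), ..., g_j(x_n)).
project_b_nonGP <- function(draw_b, G, M = 2000) {
  B    <- t(replicate(M, draw_b()))            # M x n matrix of posterior draws (step 1)
  beta <- colMeans(B)                          # approximating mean (step 2)
  Phi  <- cov(B)                               # approximating covariance, (M - 1)^{-1} divisor
  eig  <- eigen(Phi, symmetric = TRUE)         # symmetric square root of Phi
  vals <- pmax(eig$values, 1e-10)              # guard against numerically negative eigenvalues
  Phi_half     <- eig$vectors %*% diag(sqrt(vals))     %*% t(eig$vectors)
  Phi_half_inv <- eig$vectors %*% diag(1 / sqrt(vals)) %*% t(eig$vectors)
  P_G    <- G %*% solve(crossprod(G), t(G))    # projection matrix onto the columns of G
  b_draw <- draw_b()                           # current draw of b_tilde
  b_star <- Phi_half %*% (diag(length(b_draw)) - P_G) %*% Phi_half_inv %*% b_draw  # step 3
  drop(b_star)                                 # step 4 then updates theta given this b_star
}
```

The covariance is computed with cov(), which uses the $(M-1)^{-1}$ divisor of step 2, and the matrix square root is obtained by eigendecomposition so that $\Phi^{1/2}$ is symmetric.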
# Linear mixed modelling of federated data when only the mean, covariance, and sample size are available Marie Analiz April Limpoco Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Data Science Institute (DSI), Hasselt University, Hasselt, Belgium Christel Faes Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Data Science Institute (DSI), Hasselt University, Hasselt, Belgium Niel Hens Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Data Science Institute (DSI), Hasselt University, Hasselt, Belgium Centre for Health Economic Research and Modelling Infectious Diseases (CHERMID), Vaccine & Infectious Disease Institute, Antwerp University, Antwerp, Belgium ###### Abstract In medical research, individual-level patient data provide invaluable information, but the patients’ right to confidentiality remains of utmost priority. This poses a huge challenge when estimating statistical models such as linear mixed models, which is an extension of linear regression models that can account for potential heterogeneity whenever data come from different data providers. Federated learning algorithms tackle this hurdle by estimating parameters without retrieving individual-level data. Instead, iterative communication of parameter estimate updates between the data providers and analyst is required. In this paper, we propose an alternative framework to federated learning algorithms for fitting linear mixed models. Specifically, our approach only requires the mean, covariance, and sample size of multiple covariates from different data providers once. Using the principle of statistical sufficiency within the framework of likelihood as theoretical support, this proposed framework achieves estimates identical to those derived from actual individual-level data. We demonstrate this approach through real data on 15 068 patient records from 70 clinics at the Children’s Hospital of Pennsylvania (CHOP). Assuming that each clinic only shares summary statistics once, we model the COVID-19 PCR test cycle threshold as a function of patient information. Simplicity, communication efficiency, and wider scope of implementation in any statistical software distinguish our approach from existing strategies in the literature. Keywords: linear mixed model, federated learning, sufficiency principle, data privacy, aggregated data Correspondence: Marie Analiz April Limpoco Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Data Science Institute (DSI), Hasselt University, Hasselt, Belgium Email<EMAIL_ADDRESS> ## 1 Introduction Understanding and extracting useful information from data are some of the shared goals between data providers and data analysts. However, both parties must also respect the right to data privacy of individuals from whom the data were collected. This imposes restrictions on how much and which kind of data can be disclosed by the data providers to the data analysts.1 Consequently, this adds to the challenge of estimating statistical models. For example, in health research involving patient data from different hospitals, estimating a linear mixed model to account for potential heterogeneity across hospitals requires individual-level patient records. 
In the interest of data confidentiality, hospitals might be reluctant to share the full data unless the data analyst goes through a series of processes and paperwork, which may take considerable time.2 A possible compromise is to employ federated learning algorithms.3 In this setting, only parameter estimate updates and not the individual-level data are sent by data providers to a centralized server to build a global model.4 Prevalent models for which a federated learning algorithm was developed include linear regression,5 generalized linear mixed models,6, 7 logistic regression, support vector machine (SVM), K-Means, neural network, Bayesian network, and random forest.8, 9 To implement this strategy, a network of computer connections that enable iterative communication between the data providers and data analysts has to be set up. In practice, at least in the healthcare context, it is not easy to set up such networks that fulfill the requirements conducive to federated learning.10, 11, 12 Instead of an iterative communication, data providers may be more willing to share summary data once. For a linear regression model, if these summary data contain sufficient statistics, then model estimation is possible even when the individual-level data are unavailable.13 For a linear mixed model involving random effects per data provider, Luo et al14 expressed the likelihood in terms of aggregate matrices, which in turn are composed of sufficient statistics. They did this by applying the Woodbury matrix identity and some linear algebra concepts. Since most of the existing functionalities in statistical softwares require the individual observations as input, model estimation using summaries requires the development of new functionalities. Luo et al14 developed an R package called pda 15 to implement their proposed method, which they referred to as distributed linear mixed model (DLMM), but its structure is in a distributed model estimation context. Specifically, in practice, data providers must send their aggregate matrices to a central online server. In the meta-analysis setting where the “data providers" are the relevant studies, there are advantages of using individual participant data (IPD) over aggregate data (AD) especially when the number of studies is small.16, 17 However, IPD is seldom available from studies and so constructing a substitute for the unavailable IPD, called pseudo-IPD, was an alternative. One strategy is to require the specification of the joint distribution and correlation structure from which several sets of pseudo-IPD will be simulated to obtain parameter estimates per set.16 These will then be aggregated to arrive at a single estimate per parameter of interest. Since this method involves simulating from an assumed distribution whose parameters are based on the available AD, a point to consider in practice is how many sets should be produced to achieve the desired accuracy. Another approach requires only one set of pseudo-IPD to yield estimates exactly equal to the actual IPD estimates.17 In this approach, pseudo-IPD are constructed by first generating random numbers from any distribution and then transforming them such that the mean and standard deviation of the resulting data are exactly equal to those indicated in the studies. Unlike DLMM which also yields exactly the same estimates as the actual IPD estimates, this pseudo-IPD approach can be implemented in any statistical software with existing linear mixed model functionalities. 
A limitation of both pseudo-IPD methods though, in the context of meta-analysis, is that the covariance structure of the relevant variables is rarely available from studies, thus making it difficult to include more covariates in the model. In particular, their scope only includes studies with two treatments or exposure groups. In the context of federated data, this is not a problem since data providers like hospitals may be willing to supply the covariance structure of the variables, aside from the mean vector. In this paper, we apply the pseudo-IPD meta-analysis framework of Papadimitropoulou et al 17 to the federated data setting to estimate a linear mixed model with random effects per data provider when only the mean, covariance, and sample size are made available once. More importantly, we extend it so that multiple covariates can be included, distinguishing our approach from theirs. This principle of using pseudo-data in the context of federated data analysis is novel. To summarize the difference of our approach from those that exist in the literature, we present Table 1. Among the existing methods, DLMM and our proposed strategy employ the concept of sufficient statistics and are thus expected to yield theoretically the same results. The major difference lies in how the sufficient statistics are utilized: DLMM uses them directly in the re-expressed form of the likelihood, while we use them to generate pseudo-data before feeding these into the classical form of the likelihood. Hence, our proposed strategy may be considered as an alternative and is not necessarily superior over DLMM; though it has some advantages in terms of generalisability and implementation. The following points distinguish our approach from DLMM: 1. 1. Extension to more complex models. The principle of generating pseudo-data facilitates model building, performing post hoc procedures, and extension to more complex models such as generalized linear models (GLMs) while keeping the communication exchange with the data providers to just once. Although the idea of DLMM has already been extended to GLMs, at least one communication exchange is needed. In addition, in their current framework, only linear models have the option to include random effects.18 On the contrary, we have found in our current research work that it is possible to apply a similar principle of pseudo-data generation from one-time shared summary statistics to the other members of the GLM family. While in this paper we focus on linear models, we will discuss the extension towards GLMs in the discussion of the paper. 2. 2. Practical implementation. Our proposed method makes use of the classical form of the likelihood which feeds on individual observations. This is the basis for existing LMM functionalities in various statistical softwares. This means that after the pseudo-data are generated, we can use any of the existing software programs available to fit the mixed model. This is advantageous due to the strength of existing functionalities for linear mixed modelling which can be found not only in R 15 but also in other statistical softwares. This implies that compared to pda,19 current versions of existing software packages (e.g. lmer in lme4 20) have been optimized in terms of numerical stability, computational efficiency, optimization algorithms, and bugs fixing.20 Furthermore, user support is more available online for possible problems that a user may encounter when using these existing functionalities. 
Thus, utilizing these existing functionalities may be preferred by practitioners, if possible.

Table 1: Difference of proposed strategy from existing methods

| Aspect | Our approach | Federated learning | DLMM | pseudo-IPD in meta-analysis |
|---|---|---|---|---|
| Input | one set of pseudo-data with identical sufficient statistics as actual data | parameter estimate updates until convergence | aggregate data matrices | at least one set of pseudo-data with similar sufficient statistics as actual data |
| Number of covariates | multiple | multiple | multiple | one/limited |
| Estimation | use pseudo-data as substitute to actual unavailable data | global model is estimated iteratively from local parameter estimates of data providers | likelihood is rewritten in terms of aggregate matrices instead of individual observations | same as our strategy but repeated multiple times; estimates aggregated into one |
| Theoretical estimates produced | identical to estimates from actual data | close to estimates from actual data | identical to estimates from actual data | close to estimates from actual data |
| Communication rounds between data providers and data analyst | once | more than once | once | once |
| Infrastructure requirements | none | central server must be connected to data providers’ databases | online server where data providers send aggregate matrices | none |
| Software implementation | any statistical software with LMM functionality | TensorFlow Federated, OpenFL, PySyft | R package pda 15 | any statistical software with LMM functionality |

In the next section (2), we present the details of our proposed framework. We then demonstrate it in section 3 on deidentified publicly available real data consisting of the results of COVID-19 testing at the Children’s Hospital of Pennsylvania (CHOP) in 2020 which can be found in the R15 package medicaldata.21 Finally, we discuss implications as well as potential research directions in section 4, before we close with a conclusion.

## 2 Methods

### 2.1 Principles of data reduction

This section briefly revisits the principles of data reduction, which are thoroughly discussed by Casella and Berger.22 We aim to draw insights about dealing with parameter estimation given limited data. We begin with the sufficiency principle, which guarantees that the entire sample need not be available, as inferences about a parameter $\theta$ can be derived from the sufficient statistic $T(\mathbf{X})$, if it exists. In other words, even if the only information known is $T(\mathbf{x})$, inference about the parameter of interest $\theta$ can still be made. In connection with our objective, if the data providers supply sufficient statistics for the model of interest, which in this case is the linear mixed model, then parameter estimation is still possible even without disclosing the individual-level data. Once sufficient statistics are identified, if they exist, parameter estimation can then proceed by directly plugging in the sufficient statistics instead of the individual observations into the log-likelihood of the model. An alternative strategy is to generate a sample $\mathbf{x_{2}}$ which is different from the original sample $\mathbf{x_{1}}$, but such that $T(\mathbf{x_{1}})=T(\mathbf{x_{2}})$, and use $\mathbf{x_{2}}$ as input to the existing functionalities that estimate a linear mixed model.
The sufficiency principle will still guarantee that the same conclusion can be drawn even though $\mathbf{x_{1}}\neq\mathbf{x_{2}}$. Casella and Berger22 explicitly mentioned that the conditional probability distribution given a sufficient statistic $T(\mathbf{x_{1}})$ can be used to draw a sample $\mathbf{x_{2}}$ and generate equivalent information about $\theta$. However, they did not discuss how to obtain this probability distribution in practice. To supplement the sufficiency principle, we use the likelihood principle (Appendix A.1). Specifically, if we aim to exactly estimate a parameter through its likelihood based on $\mathbf{x_{1}}$ but only its sufficient statistic $T(\mathbf{x_{1}})$ is available, we can generate a sample $\mathbf{x_{2}}$ such that $T(\mathbf{x_{1}})=T(\mathbf{x_{2}})$ and use the likelihood based on $\mathbf{x_{2}}$ to estimate the parameter of interest. To this end, the distribution that generated $\mathbf{x_{2}}$ becomes immaterial. In the succeeding sections, we implement this idea. Specifically, we first identify the sufficient statistics and then generate another sample which we refer to as pseudo-data such that the sufficient statistics of the actual and pseudo-data are equal. We start with sufficient statistics for a linear regression model and extend the idea to a linear mixed model. ### 2.2 Sufficient statistics for a linear regression model The concepts presented in section 2.1 were demonstrated by Lee, Brown, and Ryan13 for a linear regression model. In particular, given $n$ observations, an intercept, and $p-1$ predictors, if $\mathbf{X}$ denotes the $n\times p$ design matrix and $\mathbf{y}$ is the $n\times 1$ vector of continuous responses, the linear regression coefficients $\bm{\beta}$ are estimated as $\displaystyle\hat{\bm{\beta}}$ $\displaystyle=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y},$ where knowing the aggregate information $\mathbf{X}^{T}\mathbf{X}$ and $\mathbf{X}^{T}\mathbf{y}$ instead of the individual-level data $\mathbf{X}$ and $\mathbf{y}$ will still yield exactly the same regression coefficient estimates $\hat{\bm{\beta}}$. We expand on this and explicitly show that the sample mean, sample covariance matrix, and sample size are sufficient to produce the same parameter estimates as with using the individual-level data. In addition to the work of Lee, Brown, and Ryan13, we include estimation of the variance $\displaystyle\hat{\sigma}^{2}_{MLE}$ $\displaystyle=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}\hat{\bm{\beta}})^{2},\text{ or}$ $\displaystyle\hat{\sigma}^{2}_{OLS}$ $\displaystyle=\frac{1}{n-p}\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}\hat{\bm{\beta}})^{2}.$ For more specific details regarding the derivations, see Appendix A.2. We begin by looking at the log-likelihood $\displaystyle l(\bm{\beta},\sigma^{2};\mathbf{y},\mathbf{X})$ $\displaystyle=-\frac{n}{2}\text{ln}(2\pi)-\frac{n}{2}\text{ln}(\sigma^{2})-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2},$ where $\mathbf{x}_{i}$ is a vector representing the $i$th row in the design matrix $\mathbf{X}$. We see here that information from the sample is required only in the last term. 
Moreover, the sum of squares of this term can be expressed as $\displaystyle\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$ $\displaystyle=\sum_{i=1}^{n}y_{i}^{2}-2\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}+\sum_{i=1}^{n}\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}.$ From this, we find that knowing $n$, $\sum_{i=1}^{n}y_{i}^{2}$, $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$, and $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ is sufficient to construct the log-likelihood and estimate the parameters, even in the absence of individual-level data. In particular, $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$ and $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ are sufficient to estimate the coefficients $\bm{\beta}$ while the variance $\sigma^{2}$ also requires $\sum_{i=1}^{n}y_{i}^{2}$ in addition to the other two. Furthermore, these values can be obtained from the vector of sample means and sample covariance matrix of the response variable and the predictors. Specifically, performing some algebraic manipulations will show that $\sum_{i=1}^{n}y_{i}^{2}$ can be derived from the sample variance $s^{2}_{\mathbf{y}}$, the sample mean $\bar{\mathbf{y}}$, and the sample size $n$ $\displaystyle\sum_{i=1}^{n}y_{i}^{2}$ $\displaystyle=s^{2}_{\mathbf{y}}(n-1)+n\bar{\mathbf{y}}^{2}.$ For $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$, we note that it is a $1\times p$ matrix $\displaystyle\begin{bmatrix}\sum_{i=1}^{n}y_{i}&\sum_{i=1}^{n}y_{i}x_{i1}&\sum_{i=1}^{n}y_{i}x_{i2}&...&\sum_{i=1}^{n}y_{i}x_{ij}&...&\sum_{i=1}^{n}y_{i}x_{i(p-1)}\end{bmatrix},$ such that the first element can be obtained from the sample mean $\bar{\mathbf{y}}$ while the rest of the elements needs the sample covariance between $\mathbf{y}$ and each of the predictors ($s_{\mathbf{yx}_{j}}$). Lastly, since $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$is a $p\times p$ matrix $\displaystyle\begin{bmatrix}n&\sum_{i}{x_{i1}}&\ldots&\sum_{i}{x_{i(p-1)}}\\\ \sum_{i}{x_{i1}}&\sum_{i}{x_{i1}^{2}}&\ldots&\sum_{i}{x_{i1}x_{i(p-1)}}\\\ \vdots&\vdots&\ddots&\vdots\\\ \sum_{i}{x_{i(p-1)}}&\sum_{i}{x_{i(p-1)}x_{i1}}&\ldots&\sum_{i}{x_{i(p-1)}^{2}}\end{bmatrix},$ performing similar derivations as above reveals that computing $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ only requires the sample mean ($\bar{\mathbf{x}}_{j}$), sample variance ($s^{2}_{\mathbf{x}_{j}}$), and sample covariances among predictors $j$ and $k$ ($s_{\mathbf{x}_{j}\mathbf{x}_{k}}$). ### 2.3 Sufficient statistics for a linear mixed model A more realistic assumption when handling federated data is that the observations from the same data provider are more similar than observations from different sources, violating the independence assumption of a linear regression model. To account for this, a linear mixed model is more appropriate. Assuming that there are $m$ data providers, let $y_{hi}$ be the continuous response of individual $i$ from data provider $h$; $\mathbf{x}_{hi}$ be a $p$-dimensional vector consisting of an intercept and $p-1$ predictors; $\bm{\beta}$ be the p-dimensional vector of fixed effects; $\mathbf{z}_{hi}$ be the $q$-dimensional vector corresponding to the $q$ random effects; ${\mathbf{u}}_{h}$ be the $q$-dimensional random effects vector, which represents the deviation of data provider $h$ from the overall pattern; and $\epsilon_{hi}\sim N(0,\sigma^{2})$ be the random error. 
For a linear mixed model with random effects for each data provider, the model structure will be $\displaystyle y_{hi}$ $\displaystyle=\mathbf{x}_{hi}^{T}\bm{\beta}+{\mathbf{z}}_{hi}^{T}{\mathbf{u}}_{h}+\epsilon_{hi}.$ For a model with a random intercept and slope, $q=2$, $\mathbf{z}_{hi}=[1,z_{hi}]$ and $\mathbf{u}_{h}\sim N(\bm{0},\mathbf{G})$ where $\mathbf{G}$ is the $2\times 2$ random effects covariance matrix. To estimate parameters, the marginal log-likelihood used is $\displaystyle l(\bm{\beta},\sigma^{2},\mathbf{G};\mathbf{y},\mathbf{X})$ $\displaystyle=-\frac{1}{2}\sum_{h=1}^{m}\\{\text{log}|\bm{\Sigma}_{h}|+(\mathbf{y}_{h}-\mathbf{X}_{h}\bm{\beta})^{T}\bm{\Sigma}_{h}^{-1}(\mathbf{y}_{h}-\mathbf{X}_{h}\bm{\beta})\\},$ where $\mathbf{X}_{h}$ and $\mathbf{y}_{h}$ are the design matrix and response vector, respectively, of data provider $h$, $|.|$ is the matrix determinant and $\bm{\Sigma}_{h}=\bm{\Sigma}_{h}(\sigma^{2},\mathbf{G})=\mathbf{Z}_{h}\mathbf{G}\mathbf{Z}_{h}^{T}+\sigma^{2}I_{n_{h}}$. Due to the seemingly entangled data and parameter matrices, it is not straightforward to identify the aggregate statistics that can be used when the individual-level data are not available. Luo et al14 showed that by utilizing the Woodbury matrix identity and some linear algebra concepts, the data can be disentangled from the parameters to reconstruct the profile log-likelihood. In their approach, only aggregate matrices $\mathbf{X}_{h}^{T}\mathbf{X}_{h}$, $\mathbf{X}_{h}^{T}\mathbf{y}_{h}$, $\mathbf{y}_{h}^{T}\mathbf{y}_{h}$, and $n_{h}$ from each data provider $h$ are required to produce identical estimates as those produced with the individual-level data. We have shown in the previous section that these aggregate data can actually be derived from the sample mean, sample covariance matrix, and sample size of each data set. Therefore, these summary statistics per data provider (e.g. per hospital) are also sufficient to estimate a linear mixed model in the absence of individual- level data. ### 2.4 Proposed method In this section, we provide details for creating the pseudo-data for a single variable and then extend it to a set of variables. As mentioned earlier, we opt to use these pseudo-data as input to the model estimation process rather than directly utilizing the sufficient statistics so that we can still use the existing linear regression or linear mixed model functionality in any statistical software such as the lmer function in the R package lme4.20 #### 2.4.1 Single variable The goal of constructing pseudo-data for linear models is to have exactly the same sample mean and variance as the original unavailable individual-level data. We do this by performing a linear transformation. For instance, suppose the original univariate sample $\mathbf{x}_{d}$ is unknown, but its sample mean $\bar{x}_{d}$ and sample standard deviation $s_{d}$ are available. We consider a linear transformation of a randomly generated data set $\mathbf{x}_{r}$ into pseudo-data $\mathbf{x}_{\pi}$ which has equal mean and standard deviation as $\mathbf{x}_{d}$. To this end, we let $\displaystyle x_{\pi_{i}}$ $\displaystyle=a+bx_{r_{i}},$ where $\mathbf{x}_{r}$ can come from any distribution and has sample mean $\bar{x}_{r}$ and sample standard deviation $s_{r}$. 
Examining the relationship between the mean of the randomly generated data $\bar{x}_{r}$ and that of the transformed data $\bar{x}_{\pi}$, we find that $\displaystyle\bar{x}_{\pi}=a+b\bar{x}_{r},$ while for the variances: $\displaystyle s_{\pi}^{2}=b^{2}s_{r}^{2},$ from which we find that to obtain identical means $\bar{x}_{d}=\bar{x}_{\pi}$ and standard deviations $s_{d}=s_{\pi}$ between the original unknown data $\mathbf{x}_{d}$ and the pseudo-data $\mathbf{x}_{\pi}$, we should let $\displaystyle b$ $\displaystyle=\frac{s_{\pi}}{s_{r}}=\frac{s_{d}}{s_{r}},\text{ and}$ $\displaystyle a$ $\displaystyle=\bar{x}_{\pi}-\frac{s_{\pi}}{s_{r}}\bar{x}_{r}=\bar{x}_{d}-\frac{s_{d}}{s_{r}}\bar{x}_{r},$ so that $\displaystyle x_{\pi_{i}}$ $\displaystyle=\bar{x}_{d}+s_{d}\frac{x_{r_{i}}-\bar{x}_{r}}{s_{r}}.$ In summary, the algorithm to generate pseudo-data for a single variable is: 1. 1. Generate $n$ random numbers $\mathbf{x}_{r}$ from any distribution. 2. 2. Compute the sample mean $\bar{x}_{r}$ and sample standard deviation $s_{r}$ of $\mathbf{x}_{r}$. 3. 3. Transform $\mathbf{x}_{r}$ into the desired pseudo-data $\mathbf{x}_{\pi}$ using: $\displaystyle x_{\pi_{i}}$ $\displaystyle=\bar{x}_{d}+s_{d}\frac{x_{r_{i}}-\bar{x}_{r}}{s_{r}},$ where $\bar{x}_{d}$ and $s_{d}$ are the sample mean and sample standard deviation, respectively, of the original unavailable data. #### 2.4.2 Set of variables The strategy to construct pseudo-data for more than one variable is analogous to that for a single variable. Ripley 23 presents an approach for multivariate normal distribution, but for our case, we do not have to impose normality on the pseudo-data nor on the initial set of random numbers. We just have to ensure that the resulting pseudo-data has exactly the same sample mean vector and sample covariance matrix as the individual-level data. Thus, from the algorithm for a single variable, we replace $\bar{x}_{d}$ with the sample mean vector for all variables ($\hat{\bm{\mu}}_{d}$). As for $s_{d}$, we take the Cholesky decomposition of the covariance matrix of all variables ($\hat{\bm{\Sigma}}_{d}$). Note that for a multiple linear regression or a linear mixed model, the set of variables comprises both the predictors and the response variable. The following algorithm provides a summary to generate pseudo-data with sample size $n$ for $p$ variables consisting of $p-1$ predictors and a continuous response variable: 1. 1. Generate random numbers $\mathbf{R}=[\mathbf{r}_{1},...,\mathbf{r}_{i},...\mathbf{r}_{n}]^{T}$ which is an $n\times p$ matrix where each column is generated independently from any distribution. 2. 2. Compute the mean vector $\hat{\bm{\mu}}_{r}$ and the covariance matrix $\hat{\bm{\Sigma}}_{r}$ of $\mathbf{R}$. 3. 3. Generate the $i$th pseudo-data point as $\bm{\pi}_{i}=\hat{\bm{\mu}}_{d}+L_{\hat{\bm{\Sigma}}_{d}}(L_{\hat{\bm{\Sigma}}_{r}})^{-1}(\mathbf{r}_{i}-\hat{\bm{\mu}}_{r}),$ where $L_{\hat{\bm{\Sigma}}_{d}}$ and $L_{\hat{\bm{\Sigma}}_{r}}$ are the lower triangular matrices of the Cholesky decomposition of $\hat{\bm{\Sigma}}_{d}$ and $\hat{\bm{\Sigma}}_{r}$. Here, $\bm{\pi}_{i}$ is the pseudo-data vector consisting of the response variable and the predictors; that is, $[y_{\pi_{i}},x_{\pi_{i1}},...,x_{\pi_{ij}},...,x_{\pi_{i(p-1)}}]^{T}$. An issue that arises with using the Cholesky decomposition is the need to have a positive definite covariance matrix. In practical settings, especially when there are binary variables involved, this may not be true. 
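Setting that case aside for a moment, the algorithm above can be sketched in a few lines of R. The function name and the uniform initial draws below are arbitrary choices, and the shared covariance matrix is assumed to admit a Cholesky factorization; this is a minimal sketch rather than the implementation used in the paper.

```r
# Sketch of the Section 2.4.2 generator (Cholesky version); names are illustrative.
# n:       shared sample size of the data provider
# mu_d:    shared sample mean vector of (response, predictors)
# Sigma_d: shared sample covariance matrix of the same variables
generate_pseudo_data <- function(n, mu_d, Sigma_d) {
  p <- length(mu_d)
  R <- matrix(runif(n * p), nrow = n, ncol = p)   # step 1: any distribution works
  mu_r    <- colMeans(R)                          # step 2: sample mean and covariance of R
  Sigma_r <- cov(R)
  L_d <- t(chol(Sigma_d))                         # lower-triangular Cholesky factors
  L_r <- t(chol(Sigma_r))
  # step 3: pi_i = mu_d + L_d L_r^{-1} (r_i - mu_r), applied to all rows at once
  centered <- sweep(R, 2, mu_r)
  Pi <- sweep(centered %*% t(solve(L_r)) %*% t(L_d), 2, mu_d, FUN = "+")
  colnames(Pi) <- names(mu_d)
  as.data.frame(Pi)
}
```

By construction the generated columns have exactly the shared mean vector and covariance matrix, so any likelihood that depends on the data only through $n$, the means, and the covariances returns the same estimates as the unavailable records would.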
An alternative procedure is to perform a singular value decomposition (SVD) on the centered values of $\mathbf{R}$ to obtain the matrix of right singular vectors denoted as $\mathbf{V}$. SVD can always be performed on any rectangular matrix such as this $n\times p$ matrix $\mathbf{R}$. The product of $\mathbf{R}$ and $\mathbf{V}$ is then computed and the resulting matrix elements are then divided by the root mean square $\sqrt{\sum_{i}x_{rv_{ij}}^{2}/(n-1)}$ of each column $j$. Denoting this as $\mathbf{R_{V}}$, the $i$th pseudo-data point is then generated as $\displaystyle\bm{\pi}_{i}$ $\displaystyle=\hat{\bm{\mu}}_{d}+\mathbf{U}\bm{\Lambda}^{1/2}\mathbf{r_{v}}_{i}\hskip 2.84526pt,$ where $\mathbf{U}$ is the matrix of eigenvectors of $\hat{\bm{\Sigma}}_{d}$ and $\bm{\Lambda}^{1/2}$ is a diagonal matrix whose elements are the square root of the eigenvalues of $\hat{\bm{\Sigma}}_{d}$. Eigendecomposition only requires that the matrix is diagonalized. Since a covariance matrix is always symmetric, it follows that it is always diagonalized and thus eigenvectors and eigenvalues can always be computed. This alternative procedure is implemented by the function mvrnorm in the R package MASS24 where each column of $\mathbf{R}$ is generated from a standard normal distribution. The user may wish to explore using a different distribution such as a uniform distribution to generate $\mathbf{R}$ by slightly altering the code for this function. Note that mvrnorm returns an error whenever the sample size of the pseudo-data $n$ is smaller than the number of variables $p$. The reason behind this is that the function svd used in mvrnorm returns a reduced matrix $\mathbf{V}$, affecting the dimension of the matrix $\mathbf{R_{V}}$ and making the matrix multiplication incompatible. We modified this setting so that the full SVD is returned. This R code is provided in Appendix LABEL:modified_mvrnorm. ## 3 An illustrative example: COVID-19 testing at CHOP We demonstrate the proposed approach on a real dataset: the COVID-19 testing results from different clinics at the Children’s Hospital of Pennsylvania (CHOP). It is a publicly available and deidentified dataset consisting of all patients who got tested at the hospital, and can be accessed from the R package medicaldata 21. This dataset provides information about patients at CHOP who got tested for COVID-19 from days four to 107 of the pandemic in 2020. A total of 88 clinics from this hospital provided 15 524 patient records which were anonymized, time-shifted, and permuted. The COVID-19 test was performed via PCR. For a description of the variables, the reader may refer to the documentation of the medicaldata 21 package which is available online. Since we have access to the entire individual-level data, we will demonstrate how data providers can preprocess their raw data to achieve the expected summary data for our proposed method. In general, the data provider must supply the name and a brief description of each variable, the number of observations and mean per variable, and the covariance matrix. For model selection purposes, they must also provide the summary statistics for the standardized version, log- and squared transformations of the numeric variables, as well as for the two-way interactions among the variables. We illustrate these using R software, but these results can also be implemented using any other statistical software. 
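As a preview of the provider-side step, a minimal sketch of the one-time summary a single clinic could compute and pass on is given below. The data frame and column selection are hypothetical, and the appendix code referenced later is the preprocessing actually used for the CHOP data.

```r
# Sketch of the one-time summary shared by a single data provider; names are
# hypothetical. 'dat' is the provider's local data frame and 'vars' the agreed
# analysis variables (response plus predictors, including transformed variables
# and interaction terms), all coded numerically.
share_summaries <- function(dat, vars) {
  d <- dat[, vars, drop = FALSE]
  d <- d[complete.cases(d), ]          # summaries are computed on complete observations
  list(n = nrow(d), mean = colMeans(d), cov = cov(d))
}

# Hypothetical usage: one summary list per clinic, shared with the analyst once.
# summaries <- lapply(split(covid_testing, covid_testing$clinic_name),
#                     share_summaries,
#                     vars = c("log_ct", "gendermale", "age", "drive_thru", "gendermale_age"))
```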
Suppose we are interested in how factors such as gender, age, and whether the specimen was collected in a drive-thru site affect the cycle at which threshold reached during PCR (numeric variable from 14 to 45). For this data set, the cycle threshold is the response variable while gender, age, drive thru indicator, and an interaction term for age and gender are the predictors. The cycle at which threshold reached during PCR is a measure of how much amplification is necessary to detect the target viral gene, and is inversely proportional to viral load.25 This means that if more cycles are needed to detect a viral gene, then the presence of the viral gene is less likely. Some studies found a negative correlation between the cycle threshold and disease severity 26 and mortality among patients.27 In this illustrative example, we will model how the aforementioned regressors influence this measure to hypothesize about the clinical outcomes among patients at CHOP. ### 3.1 Preliminary analysis by the data provider Each data provider should supply the data analyst with a good overview of all the available variables. The data provider may use the function skim from the R package skimr28 to accomplish this. Since the CHOP data from R is already composed of the pooled individual-level records from all 88 clinics, applying the aforementioned function covers all clinics already. Table 2 displays the metadata for all 88 clinics having a total of 15 524 patient records during the COVID-19 pandemic in 2020. Table 2: Metadata of all clinics at CHOP Variable | Type | Number of | Complete | Number of | Number of | ---|---|---|---|---|---|--- name | | missing observations | rate | empty cells | unique values | Fake first name | char | 0 | 1.00 | 0 | 832 | Fake last name | char | 0 | 1.00 | 0 | 27 | Gender | char | 0 | 1.00 | 0 | 2 | Test ID | char | 0 | 1.00 | 0 | 2 | Clinic name | char | 0 | 1.00 | 0 | 88 | Result | char | 0 | 1.00 | 0 | 3 | Demographic group | char | 0 | 1.00 | 0 | 5 | Payor group | char | 7087 | 0.54 | 0 | 7 | Patient class | char | 7077 | 0.54 | 0 | 9 | Subject ID | num | 0 | 1.00 | - | - | Day of | | | | | | pandemic | num | 0 | 1.00 | - | - | Age | num | 0 | 1.00 | - | - | Drive thru | | | | | | indicator | num | 0 | 1.00 | - | - | Cycle threshold | | | | | | result | num | 209 | 0.99 | - | - | Orderset | num | 0 | 1.00 | - | - | Collection to | | | | | | receive time | num | 0 | 1.00 | - | - | Receive to | | | | | | verification time | num | 0 | 1.00 | - | - | Our proposed method assumes that the summary data per data provider were computed from complete observations. Additionally, for categorical variables, the data provider must also indicate the levels. For instance, for binary variables such as gender, the variable name in the summary data must reflect the non-reference category (e.g. gendermale). For this dataset, of the 88 clinics, only 70 clinics with a total of 15 068 observations were included in the analysis after filtering out incomplete observations and invalid values such as NA. Those clinics with only one observation were removed as well because in a federated data setting, it is not so common for a data provider (e.g. a clinic) to have only one patient. Moreover, even if a clinic with only one patient exists, summarizing the data does not make sense and neither does handing over the single patient record to the data analyst because it goes against that patient’s right to data confidentiality. 
Should the clinic do so after fulfilling some legal requirements to ensure privacy, this observation itself can be combined with the generated pseudo-data. In Appendix LABEL:chop_app, we provide an R code for preprocessing this data set. As an example, Table 3 displays the summary statistics for the Inpatient Ward A clinic at CHOP as well as the covariance matrix for the variables we are including in our model. Table 3: Summary statistics for the Inpatient Ward A clinic at CHOP Variable | n | Mean | Variance-Covariance ---|---|---|--- name | | | log of Cycle | Gendermale | Age | Drive thru | Gendermale $\times$ | | | threshold | | | | Age log of Cycle threshold | 208 | 3.803 | 0.001 | 0.001 | 0.001 | 0.000 | -0.001 Gendermale | 208 | 0.529 | 0.001 | 0.250 | 0.053 | 0.002 | 0.369 Age | 208 | 1.373 | 0.001 | 0.053 | 10.506 | 0.003 | 6.621 Drive thru | 208 | 0.005 | 0.000 | 0.002 | 0.003 | 0.005 | 0.006 Gendermale $\times$ Age | 208 | 0.779 | -0.001 | 0.369 | 6.621 | 0.006 | 7.085 ### 3.2 LMM estimation by the data analyst After receiving the sufficient statistics from the data providers, the data analyst formulates the linear mixed model. To study the relationship between COVID-19 PCR test cycle threshold and patient information namely gender, age, their interaction, and the drive thru indicator, we fit two linear mixed models: one with only a random intercept per clinic and another with a random intercept and a random slope for age. This allows variations in mean cycle threshold and age effects across the clinics. We use the logarithm of the cycle threshold to ensure nonnegative values for this variable and we standardize age to avoid numerical problems during the optimization of the log-likelihood. One set of pseudo-data was generated for each clinic using the proposed method. Using the lmer function in the R package lme4 on the pseudo- data and on the actual data, we estimate the parameters of the model. Table 4 displays the results for the model with only a random intercept while Table 5 presents the estimates for an LMM with an additional random slope for age. For both models, we find that only the scaled residuals are different between the LMM using pseudo-data and using actual data. This is expected since residuals are computed from individual observations and thus cannot be reproduced from a different set of values such as the pseudo-data. Additionally, we can reproduce the AIC and confidence intervals, as shown in the same tables. We select the model with both random intercept and random slope since it has lower AIC and BIC values. From the estimates of the selected model, we observe that among the fixed effects, the interaction of gender and age significantly affect the cycle threshold, whereas drive-thru testing does not. This significant interaction effect indicate that the overall impact of a patient’s gender on the log cycle threshold also depends on the age group of the patient, and vice versa. Moreover, since the effects of age are allowed to vary across clinics, the interpretation also varies per clinic. For instance, for a male patient who belongs to clinic $h$ and is one standard deviation or around 16 years older, the log cycle threshold changes by $-0.0057+b_{age_{h}}$. If he belongs to the Inpatient Ward A, the log cycle threshold decreases by $0.008$. On the other hand, a female patient belonging to the same clinic is expected to have $0.003$ decrease in log cycle threshold for every one standard deviation increase in age. 
This suggests that for this clinic, older patients tend to have more viral load, although the difference across age groups is slightly more pronounced among males and among females. Note that this per clinic interpretation is also supported by our proposed approach since the random effects prediction are exactly equal to those derived from the actual data.. Lastly, the estimated variance components of the random effects suggest that there is not much variation across the clinics. Table 4: Comparison of LMMs with random intercept only based on pseudo-data and based on actual CHOP data. | LMM estimates ---|--- | pseudo-data | actual data REML criterion at convergence | -20473 | -20473 Scaled residuals: | | Min | -4.5919 | -9.3287 $Q_{1}$ | -0.5773 | 0.1482 Median | 0.0249 | 0.2174 $Q_{3}$ | 0.5760 | 0.2538 Max | 3.7464 | 1.1855 (Intercept) | 3.7871 (0.0039)*** | 3.7871 (0.0039)*** Gendermale | 0.0021 (0.002)*** | 0.0021 (0.002)*** Std. Age | -0.0046 (0.0015)*** | -0.0046 (0.0015)*** Drive thru | -0.0043 (0.0058)*** | -0.0043 (0.0058)*** Gendermale $\times$ Std. Age | -0.0061 (0.0020)*** | -0.0061 (0.0020)*** AIC | -20459.04 | -20459.04 BIC | -20405.7 | -20405.7 n | 15068 | 15068 number of clinics | 70 | 70 $\sigma_{Int}$ | 0.0216 | 0.0216 $\sigma$ | 0.1222 | 0.1222 | 95% confidence bounds ---|--- | pseudo-data | actual data | 2.5 % | 97.5 % | 2.5 % | 97.5 % $\sigma_{Int}$ | 0.0160 | 0.0282 | 0.0160 | 0.0282 $\sigma$ | 0.1208 | 0.1235 | 0.1208 | 0.1235 (Intercept) | 3.7793 | 3.7949 | 3.7793 | 3.7949 Gendermale | -0.0018 | 0.006 | -0.0018 | 0.006 Std. Age | -0.0076 | -0.0015 | -0.0076 | -0.0015 Drive thru | -0.0156 | 0.0071 | -0.0156 | 0.0071 Gendermale $\times$ Std. Age | -0.0100 | -0.0022 | -0.0100 | -0.0022 ${}^{***}p<0.001$; ${}^{**}p<0.01$; ${}^{*}p<0.05$ | | Table 5: Comparison of LMMs with random intercept and random slope for age based on pseudo-data and based on actual CHOP data. | LMM estimates ---|--- | pseudo-data | actual data REML criterion at convergence | -20513.2 | -20513.2 Scaled residuals: | | Min | -4.7364 | -9.3604 $Q_{1}$ | -0.5789 | 0.1229 Median | 0.0241 | 0.2030 $Q_{3}$ | 0.5777 | 0.2560 Max | 3.7682 | 1.8698 (Intercept) | 3.7851 (0.0045)*** | 3.7851 (0.0045)*** Gendermale | 0.0021 (0.0020)*** | 0.0021 (0.0020)*** Std. Age | -0.0005 (0.0037)*** | -0.0005 (0.0037)*** Drive thru | -0.0038 (0.0059)*** | -0.0038 (0.0059)*** Gendermale $\times$ Std. 
Age | -0.0052 (0.0020)*** | -0.0052 (0.0020)*** AIC | -20495.15 | -20495.15 BIC | -20426.57 | -20426.57 n | 15068 | 15068 number of clinics | 70 | 70 $\sigma_{Int}$ | 0.0249 | 0.0249 $\sigma_{Age}$ | 0.0128 | 0.0128 $\sigma_{\text{Int}\times\text{Age}}$ | -0.10 | -0.10 $\sigma$ | 0.1219 | 0.1219 | 95% confidence bounds ---|--- | pseudo-data | actual data | 2.5 % | 97.5 % | 2.5 % | 97.5 % $\sigma_{Int}$ | 0.0185 | 0.0323 | 0.0185 | 0.0323 $\sigma_{\text{Int}\times\text{Age}}$ | -0.5797 | 0.3917 | -0.5797 | 0.3917 $\sigma_{Age}$ | 0.008 | 0.019 | 0.008 | 0.019 $\sigma$ | 0.1205 | 0.1233 | 0.1205 | 0.1233 (Intercept) | 3.7762 | 3.7942 | 3.7762 | 3.7942 Gendermale | -0.0018 | 0.006 | -0.0018 | 0.006 Age | -0.0081 | 0.0074 | -0.0081 | 0.0074 Drive thru | -0.0152 | 0.0077 | -0.0152 | 0.0077 Gendermale $\times$ Age | -0.0092 | -0.0013 | -0.0092 | -0.0013 ${}^{***}p<0.001$; ${}^{**}p<0.01$; ${}^{*}p<0.05$ | | ## 4 Discussion In this paper, we have demonstrated that data privacy in a federated data setting can be achieved not only through machine learning algorithms but even through age-old statistical concepts such as sufficiency and likelihood. These principles enable data reduction without losing important information about the parameter of interest. Specifically, since the sufficient statistics already contain the information required to estimate the parameters of a statistical model, data providers do not have to share individual observations anymore. This approach is very useful and applicable to settings where individual-level data are too sensitive to be shared, such as patient data from hospitals. For a linear regression model and a linear mixed model, this is true since their sufficient statistics exist. For other more complex models such as generalized linear models with and without random effects, this may not be as straightforward. Since our approach produces identical estimates as those obtained from pooling the individual-level records across multiple data providers, we have confidence that our estimates are as good as the estimates that the established estimation techniques claim. Among the methods proposed in the literature, maximum likelihood (ML) and residual or restricted maximum likelihood (REML) have become the standard methods for estimating the parameters of a linear mixed model.29 However, between these two, ML estimators do not account for the degrees of freedom lost when estimating the fixed effects, resulting in biased variance parameter estimates towards the null, especially for small samples.30, 31 For this reason, REML estimation of the variance parameters is preferred over ML. Moreover, REML estimators have shown improved properties whenever the number of clusters is small,32, 33 which suffers from finite sample bias more than models involving small cluster sizes.34 Despite these desirable properties, REML does not completely solve the issues related to inflated Type I error rates for fixed effects.35 To address this, the Kenward-Roger correction has been recommended as best practice in the literature since it has been shown to maintain nominal Type I error rates.32 Due to the nature of our proposed strategy, versatility in specifying the estimation procedure (ML or REML) and applying corrections (e.g. Kenward-Roger) whenever the sample size is small can be easily implemented with identical results as with the actual observations. 
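To illustrate this versatility, a brief sketch is given below, assuming the per-clinic pseudo-data have been stacked into a data frame pseudo with a clinic identifier and the (hypothetical) analysis columns used earlier; the lmerTest package, with pbkrtest installed, supplies the Kenward-Roger degrees of freedom.

```r
# Sketch: the pseudo-data behave like ordinary records, so standard options apply.
library(lmerTest)   # loads lme4 and adds Satterthwaite / Kenward-Roger df to summaries

fit_reml <- lmer(log_ct ~ gendermale * age + drive_thru + (1 + age | clinic),
                 data = pseudo, REML = TRUE)       # REML fit (the lmer default)
fit_ml   <- update(fit_reml, REML = FALSE)         # ML fit, e.g. for likelihood-ratio tests

summary(fit_reml, ddf = "Kenward-Roger")           # small-sample corrected fixed-effect tests
```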
In contrast to the study of Luo et al 14 which also yields exactly the same estimates for LMM, our approach is a simple one and thus can more easily be implemented in practice. Another advantage of our proposed framework is that we do not need to specify a distribution from which the pseudo-data come from, unlike the method proposed by Song et al.16 Additionally, the concept can be applied using any statistical software that can estimate LMM, thus enabling a wider scope of implementation. Another edge we have is the computational efficiency of generating only one set of pseudo-data compared to methodologies that simulate data multiple times and aggregate the estimates to form a single parameter estimate. As a consequence, we are spared from the question of how many simulations to run and which aggregation method to best implement. Lastly, in contrast to federated learning algorithms in the literature, our approach does not require more than one communication iteration between the data providers and data analyst, nor do we need to set up a network among the databases. Hence, we are significantly minimizing, if not totally eliminating, the risk of disclosing sensitive data. A limitation of our proposed approach is the inability to compute residuals, which require individual response values from the original data. The pseudo- data we generate, although similar in some characteristics to the original, cannot be used to compute residuals. Thus, model diagnostics through residual plots cannot be performed. In general, even when the individual-level data are available, model checking through residual analysis is a non-trivial task.36, 37 This is especially true for visual assessment because of the element of subjectivity and the difficulty in discerning patterns when the sample size is very large, which is important when deciding possible corrective measures.38 Formal tests on residuals, on the other hand, can become very sensitive to large samples, which may lead to falsely concluding violations of the assumption. Hence, a good recourse is to be aware of (1) the consequences of potential violations of the model assumptions and (2) possible remedies to mitigate these consequences. To begin with, the normality assumption regarding the response variable has the least impact on tests and inference derived from linear regression 39, 40, 41 and linear mixed models 42 as long as outliers are handled properly. Specifically for linear mixed models, inference on the fixed effects remain valid even when the random effects do not follow a normal distribution.43 Gelman and Hill41 do not recommend checking the residuals for normality while Galecki and Burzykowski 43 note that normality is not important for ordinary least squares although it is for maximum likelihood estimation. On the contrary, heteroscedasticity affects the standard errors of the parameter estimates of a classical linear regression model even though the point estimates remain unbiased.44 This affects confidence interval construction as well as inference about the covariate effects. One way to address this without altering the interpretation of effects through variable transformation 45, 46 is by using robust standard errors.47 We can show that with additional summary statistics namely the third and fourth joint sample moments, our approach can also achieve identical robust variance estimates as the ones derived from actual data (Appendix A.3). 
Analogously, a robust or empirical variance estimator has been proposed for linear mixed models and has been shown to be consistent under misspecification of the correlation structure as long as the mean is correctly specified.48 Validity, additivity and linearity, and independence of errors are the top most important assumptions when utilizing linear regression models.41 Although these cannot be evaluated in our proposed framework, model selection procedures may help mitigate the impact of violations to these assumptions, if they exist. Harrell38 proposed fitting a flexible parametric model that allows for most departures from the assumptions as an alternative to residual analysis. In light of our proposed strategy, selecting from candidate models that consider potential violations is recommended in practice (e.g. considering different combination of regressors, polynomial terms). The independence assumption may be relaxed by considering a random effects model instead of the classical linear regression model. Another consequence of our approach is the inability to perform training and testing since partitioning the original data is not possible. As a result, model validation would not be an option, and predictive accuracy of the resulting model might be difficult to assess. A potential remedy is to generate pseudo-data which embodies the statistical properties of the original data more closely than just having identical summary statistics such as the mean vector and covariance matrix. Several studies present different strategies to generate and analyze synthetic data from a statistical disclosure control perspective as well as from a machine learning perspective 49, 50, 51, 52, but these would of course not be as simple anymore and would require more data processing from the data providers’ end. An interesting point to consider is the impact of rounding off sufficient statistics. In the current implementation of the proposed framework, we assume that the mean vector and covariance matrix from each data provider contain the exact values and not the rounded off values. In practice, these sufficient statistics may be rounded off to a few decimal places. To examine this, we implemented the proposed approach on sufficient statistics that were rounded off to two decimal places. We observed that there were only slight differences in the model estimates when compared to using the exact values of the sufficient statistics (e.g. the difference starts from the third decimal place). The direction of effect and the overall inference also remained the same. This suggests that the method is robust to rounded values of the sufficient statistics in this setting, although a more thorough sensitivity analysis is encouraged to draw more conclusive findings. A field related to federated learning is meta-analysis. Like federated learning, meta-analysis aims to build a global model that synthesizes information from multiple studies. Since meta-analysis dates back to as early as 1976 53 while federated learning is fairly recent,54 it is worthwhile to explore meta-analysis techniques in addressing the challenges of a federated data setting. Traditionally, meta-analysis directly utilizes the aggregated information from studies. These techniques are called aggregate data meta- analysis (ADMA). 
However, individual participant data meta-analysis (IPDMA) is now regarded as the “gold standard", but accessing individual-level data remains an obstacle.55 The work of Papadimitropoulou et al17 involving pseudo- IPD thus provides a good solution to performing IPDMA even when studies only include aggregate information. Because of its similarity to a meta-analysis setting, the federated data setting also benefits from this framework. In contrast to a meta-analysis setting though, multiple variables can be included in a federated data setting because data providers may be more willing to share the covariance structure of the variables. Some future research avenues include dealing with missing data and generating sufficient statistics for interaction terms and transformed variables e.g. log- and quadratic-transformed variables, from the sufficient statistics of the main variables. Presently, our approach assumes that the data providers hand over summary data for complete observations. They must also supply the sufficient statistics for the transformed variables and for possible interaction terms on top of those for the main variables. Including random slopes and more complex hierarchical models is also a viable direction. Currently, the authors are working on generalized linear models with and without random intercept wherein we find the potential of extending the idea of pseudo-data generation that matches the summary statistics of the actual data. ## 5 Conclusion In this paper, we have demonstrated that parameter estimation of a linear mixed model can be performed on federated data by generating pseudo-data from the sample size, mean vector, and covariance matrix supplied by each data provider. The principles of statistical sufficiency and likelihood provide a good theoretical support to the validity of the proposed framework. Estimates achieved from this approach are identical to those obtained from the actual individual-level data, which are difficult to access due to privacy reasons. Simplicity, computational and communication efficiency, and potentially wider scope of implementation through any statistical software distinguish our approach from the existing strategies in the literature. Extending this approach to generalized linear mixed models is a current work in progress. ## 6 Conflict of Interest The author(s) declare(s) no potential conflicts of interest with respect to the research, authorship and/or publication of this article. ## 7 Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author(s) acknowledge(s) support from the Methusalem financement program of the Flemish Government. ## References * 1 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). European Commission. https://eur-lex.europa.eu/eli/reg/2016/679/oj. Updated 2020. Accessed September 13, 2023. * 2 Chiruvella V, Guddati AK. Ethical issues in patient data ownership. Interact J Med Res. 2021; 10(2): e22269. https://doi.org/10.2196/22269. * 3 Sadilek A, Liu L, Nguyen D, et al. Privacy-first health research with federated learning. npj Digit Med. 2021; 4(132). https://doi.org/10.1038/s41746-021-00489-2. * 4 Banabilah S, Aloqaily M, Alsayed E, Malik N, Jararweh Y. 
Federated learning review: Fundamentals, enabling technologies, and future applications. Inf Process Manag. 2022; 59(6): 103061. https://doi.org/10.1016/j.ipm.2022.103061. * 5 Yue X, Kontar RA, Gómez AME. Federated data analytics: A study on linear models. IISE Trans. 2023; 0(0): 1-13. https://doi.org/10.1080/24725854.2022.2157912. * 6 Li W, Tong J, Anjum MM, Mohammed N, Chen Y, Jiang X. Federated learning algorithms for generalized mixed-effects model (GLMM) on horizontally partitioned data from distributed sources. BMC Med Inform Decis Mak. 2022; 22(1): 269. https://doi.org/10.1186/s12911-022-02014-1. * 7 Yan Z, Zachrison KS, Schwamm LH, Estrada JJ, Duan R. A privacy-preserving and computation-efficient federated algorithm for generalized linear mixed models to analyze correlated electronic health records data. PLoS One. 2023; 18(1): e0280192. https://doi.org/10.1371/journal.pone.0280192. * 8 Crowson MG, Moukheiber D, Arévalo AR, et al. A systematic review of federated learning applications for biomedical data. PLOS Digit Health. 2022; 1(5): e0000033. https://doi.org/10.1371/journal.pdig.0000033. * 9 Antunes RS, CostaA. d C, Küderle A, Yari IA, Eskofier B. Federated Learning for Healthcare: Systematic Review and Architecture Proposal. ACM Trans Intell Syst Technol. 2022; 13(4). https://doi.org/10.1145/3501813. * 10 Wen J, Zhang Z, Lan Y, Cui Z, Cai J, Zhang W. A survey on federated learning: challenges and applications. Int J Mach Learn Cybern. 2023; 14(2): 513-535. https://doi.org/10.1007/s13042-022-01647-y. * 11 Li T, Sahu AK, Talwalkar A, Smith V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process Mag. 2020; 37(3): 50-60. https://doi.org/10.1109/MSP.2020.2975749. * 12 Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. npj Digit Med. 2020; 3(1): 119. https://doi.org/10.1038/s41746-020-00323-1. * 13 Lee JYL, Brown JJ, Ryan LM. Sufficiency Revisited: Rethinking Statistical Algorithms in the Big Data Era. Am Stat. 2017; 71(3): 202-208. https://doi.org/10.1080/00031305.2016.1255659. * 14 Luo C, Islam M, Sheils N, et al. DLMM as a lossless one-shot algorithm for collaborative multi-site distributed linear mixed models. Nat Commun. 2022; 13(1). https://doi.org/10.1038/s41467-022-29160-4. * 15 R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2024. * 16 Song Y, Sun F, Redline S, Wang R. Random-effects meta-analysis of combined outcomes based on reconstructions of individual patient data. Res Synth Methods. 2020; 11(5): 594–616. https://doi.org/10.1002/jrsm.1406. * 17 Papadimitropoulou K, Stijnen T, Dekkers O, Le Cessie S. One-stage random effects meta-analysis using linear mixed models for aggregate continuous outcome data. Res Synth Methods. 2018; 10. https://doi.org/10.1002/jrsm.1331. * 18 Privacy-preserving distributed algorithms: A solution for next generation data sharing for collaborative modelling. Penn Computing, Inference and Learning Lab. https://pdamethods.org. Accessed July 16, 2024. * 19 Luo C, Duan R, Edmondson M, et al. R package, Version 1.2.7: Privacy-Preserving Distributed Algorithms. Penn Computing, Inference and Learning Lab; 2024. * 20 Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015; 67(1): 1–48. https://doi.org/10.18637/jss.v067.i01. * 21 Higgins P. R package, Version 0.2.0: Data Package for Medical Datasets. 2021\. * 22 Casella G, Berger R. Statistical Inference. 
2nd ed. California: Duxbury Resource Center; 2001. * 23 Ripley BD. Stochastic Simulation. New York: John Wiley & Sons, Inc.; 1987. * 24 Venables WN, Ripley BD. Modern Applied Statistics with S. 4th ed. New York: Springer; 2002. ISBN 0-387-95457-0. * 25 Rao SN, Manissero D, Steele VR, Pareja J. A Systematic Review of the Clinical Utility of Cycle Threshold Values in the Context of COVID-19. Infect Dis Ther. 2020; 9(3): 573-586. https://doi.org/10.1007/s40121-020-00328-z. * 26 Shah VP, Farah WH, Hill JC, et al. Association Between SARS-CoV-2 Cycle Threshold Values and Clinical Outcomes in Patients With COVID-19: A Systematic Review and Meta-analysis. Open Forum Infect Dis. 2021; 8(9): ofab453. https://doi.org/10.1093/ofid/ofab453. * 27 Choudhuri J, Carter J, Nelson R, et al. SARS-CoV-2 PCR cycle threshold at hospital admission associated with patient mortality. PLoS One. 2021; 15(12): 1-14. https://doi.org/10.1371/journal.pone.0244777. * 28 Waring E, Quinn M, McNamara A, Arino de la Rubia E, Zhu H, Ellis S. R package, Version 2.1.5: Compact and Flexible Summaries of Data. 2022. * 29 Gumedze F, Dunne T. Parameter estimation and inference in the linear mixed model. Linear Algebra Appl. 2011; 435(8): 1920-1944. https://doi.org/10.1016/j.laa.2011.04.015. * 30 Lin C, McAllister A. Monte Carlo Comparison of Four Methods for Estimation of Genetic Parameters in the Univariate Case1. J Dairy Sci. 1984; 67(10): 2389-2398. https://doi.org/10.3168/jds.S0022-0302(84)81587-8. * 31 Swallow WH, Monahan JF. Monte Carlo Comparison of ANOVA, MIVQUE, REML, and ML estimators of variance components. Technometrics. 1984; 26(1): 47. https://doi.org/10.1080/00401706.1984.10487921. * 32 Ferron JM, Bell BA, Hess MR, Rendina-Gobioff G, Hibbard ST. Making treatment effect inferences from multiple-baseline data: The utility of multilevel modeling approaches. Behav Res Methods. 2009; 41(2): 372-384. https://doi.org/10.3758/BRM.41.2.372. * 33 McNeish D, Stapleton LM. Modeling clustered data with very few clusters. Multivariate Behav Res. 2016; 51(4): 495–518. https://doi.org/10.1080/00273171.2016.1167008. * 34 Snijders TAB, Bosker RJ. Standard Errors and Sample Sizes for Two-Level Research. J Educ Behav Stat. 1993; 18(3): 237–259. https://doi.org/10.3102/10769986018003237. * 35 McNeish D. Small Sample Methods for Multilevel Modeling: A Colloquial Elucidation of REML and the Kenward-Roger Correction. Multivariate Behav Res 2017; 52(5): 661–670. https://doi.org/10.1080/00273171.2017.1344538. * 36 Bradburn MJ, Clark TG, Love SB, Altman DG. Survival Analysis Part III: Multivariate data analysis – choosing a model and assessing its adequacy and fit. Br J Cancer. 2003; 89(4): 605-611. https://doi.org/10.1038/sj.bjc.6601120. * 37 Lindsey J. Nonlinear Models in Medical Statistics. Oxford statistical science series. New York, USA: Oxford University Press; 2001. * 38 Harrell F. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. Springer Series in Statistics. Switzerland, AG: Springer International Publishing; 2015. * 39 Ali MM, Sharma SC. Robustness to nonnormality of regression F-tests. J Econom. 1996; 71(1): 175-205. https://doi.org/10.1016/0304-4076(94)01700-X. * 40 Box GEP, Watson GS. Robustness to non-normality of regression tests. Biometrika 1962; 49(1/2): 93–106. https://doi.org/10.2307/2333470. * 41 Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. Analytical Methods for Social Research. 
New York, USA: Cambridge University Press; 2006. * 42 Schielzeth H, Dingemanse NJ, Nakagawa S, et al. Robustness of linear mixed-effects models to violations of distributional assumptions. Methods Ecol Evol. 2020; 11(9): 1141-1152. https://doi.org/10.1111/2041-210X.13434. * 43 Galecki A, Burzykowski T. Linear Mixed-Effects Models Using R: A Step-by-Step Approach. Springer Texts in Statistics. Springer New York; 2013. * 44 Long JS, Ervin LH. Using heteroscedasticity consistent standard errors in the linear regression model. Am Stat. 2000; 54(3): 217–224. https://doi.org/10.2307/2685594. * 45 Weisberg S. Applied Linear Regression. Wiley Series in Probability and Statistics. 3rd ed. New Jersey, USA: John Wiley & Sons, Inc.; 2005. * 46 Carroll R, Ruppert D. Transformation and Weighting in Regression. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis; 1988. * 47 White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 1980; 48(4): 817–838. https://doi.org/10.2307/1912934. * 48 Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika 1986; 73(1): 13-22. https://doi.org/10.1093/biomet/73.1.13. * 49 Raghunathan TE. Synthetic Data. Annu Rev Stat Appl. 2021; 8(1): 129-140. https://doi.org/10.1146/annurev-statistics-040720-031848. * 50 Hernandez M, Epelde G, Alberdi A, Cilla R, Rankin D. Synthetic data generation for tabular health records: A systematic review. Neurocomputing (Amst). 2022; 493: 28-45. https://doi.org/10.1016/j.neucom.2022.04.053. * 51 Figueira A, Vaz B. Survey on Synthetic Data Generation, Evaluation Methods and GANs. Mathematics. 2022; 10(15): 2733. https://doi.org/10.3390/math10152733. * 52 Murtaza H, Ahmed M, Khan NF, Murtaza G, Zafar S, Bano A. Synthetic data generation: State of the art in health care domain. Comput Sci Rev. 2023; 48: 100546. https://doi.org/10.1016/j.cosrev.2023.100546. * 53 Glass GV. Primary, Secondary, and Meta-Analysis of Research. Educ Res. 1976; 5(10): 3-8. https://doi.org/10.3102/0013189X005010003. * 54 McMahan HB, Moore E, Ramage D, Hampson S, Arcasy BA. Communication-efficient learning of deep networks from decentralized data. Talk presented at: 20th International conference on artificial intelligence and statistics (AISTATS); April 20-22, 2017; Florida, USA. https://proceedings.mlr.press/v54/mcmahan17a/mcmahan17a.pdf. * 55 Nevitt SJ, Marson AG, Davie B, Reynolds S, Williams L, Smith CT. Exploring changes over time and characteristics associated with data retrieval across individual participant data meta-analyses: systematic review. BMJ. 2017; 357: j1390. https://doi.org/10.1136/bmj.j1390. ## Appendix A Appendix ### A.1 Likelihood principle Another principle of data reduction discussed by 22 is the likelihood principle. 
Stated more formally, > “If $\mathbf{x_{1}}$ and $\mathbf{x_{2}}$ are two samples such that the > likelihood $L(\theta|\mathbf{x_{1}})$ is proportional to > $L(\theta|\mathbf{x_{2}})$, that is, there exists a constant > $C(\mathbf{x_{1}},\mathbf{x_{2}})$ such that > > > $L(\theta|\mathbf{x_{1}})=C(\mathbf{x_{1}},\mathbf{x_{2}})L(\theta|\mathbf{x_{2}})$ > > then the conclusions drawn from $\mathbf{x_{1}}$ and $\mathbf{x_{2}}$ should > be identical.” They demonstrated this principle for the case of a normal distribution and showed that the likelihood of a parameter $\mu$ given a sample $\mathbf{x_{1}}$ ($L(\mu|\mathbf{x_{1}})$) can be exactly equal to the likelihood of the same parameter given another sample $\mathbf{x_{2}}$ ($L(\mu|\mathbf{x_{2}})$) if their sample means are equal ($\mathbf{\bar{x}}_{1}=\mathbf{\bar{x}}_{2}$). ### A.2 Showing the sufficient statistics for a linear regression model Given $n$ observations, an intercept, and $p-1$ predictors, if $\mathbf{X}$ denotes the $n\times p$ design matrix and $\mathbf{y}$ is the $n\times 1$ vector of continuous responses, the linear regression coefficients $\bm{\beta}$ and the variance $\sigma^{2}$ can be estimated through the log- likelihood $\displaystyle l(\bm{\beta},\sigma^{2};\mathbf{y},\mathbf{X})$ $\displaystyle=-\frac{n}{2}\text{ln}(2\pi)-\frac{n}{2}\text{ln}(\sigma^{2})-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$ where $\mathbf{x}_{i}$ is a vector containing the $i$th row in the design matrix. We see here that information from the sample is required only in the last term. Moreover, the sum of squares of this term can be expressed as $\displaystyle\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$ $\displaystyle=\sum_{i=1}^{n}(y_{i}^{2}-2y_{i}\mathbf{x}_{i}^{T}\bm{\beta}+(\mathbf{x}_{i}^{T}\bm{\beta})(\mathbf{x}_{i}^{T}\bm{\beta})^{T}).$ Since $\mathbf{x}_{i}^{T}\bm{\beta}$ is just the dot product of two vectors $\mathbf{x}_{i}$ and $\bm{\beta}$, commutativity applies such that the equation above can also be written as $\displaystyle\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$ $\displaystyle=\sum_{i=1}^{n}(y_{i}^{2}-2y_{i}\mathbf{x}_{i}^{T}\bm{\beta}+(\bm{\beta}^{T}\mathbf{x}_{i})(\bm{\beta}^{T}\mathbf{x}_{i})^{T})$ which when simplified further yields $\displaystyle\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$ $\displaystyle=\sum_{i=1}^{n}(y_{i}^{2}-2y_{i}\mathbf{x}_{i}^{T}\bm{\beta}+\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta})$ $\displaystyle=\sum_{i=1}^{n}y_{i}^{2}-2\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}+\sum_{i=1}^{n}\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}$ From this, we find that knowing $n$, $\sum_{i=1}^{n}y_{i}^{2}$, $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$, and $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ is sufficient to construct the log-likelihood and estimate the parameters, even in the absence of individual-level data. In particular, $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$ and $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ are sufficient to estimate the coefficients $\bm{\beta}$ while the variance $\sigma^{2}$ also requires $\sum_{i=1}^{n}y_{i}^{2}$ in addition to the other two. Furthermore, these values can be obtained from the vector of sample means and sample covariance matrix of the response variable and the predictors. 
Specifically, since the sample variance $s_{\mathbf{y}}^{2}$ is computed as $\displaystyle s^{2}_{\mathbf{y}}$ $\displaystyle=\frac{1}{n-1}\sum_{i=1}^{n}(y_{i}-\bar{\mathbf{y}})^{2}$ $\displaystyle=\frac{1}{n-1}\sum_{i=1}^{n}(y_{i}^{2}-2y_{i}\bar{\mathbf{y}}+\bar{\mathbf{y}}^{2})$ $\displaystyle=\frac{1}{n-1}\left(\sum_{i=1}^{n}y_{i}^{2}-2\bar{\mathbf{y}}\sum_{i=1}^{n}y_{i}+n\bar{\mathbf{y}}^{2}\right)$ $\displaystyle=\frac{1}{n-1}\left(\sum_{i=1}^{n}y_{i}^{2}-n\bar{\mathbf{y}}^{2}\right),$ performing some algebraic manipulations will show that $\sum_{i=1}^{n}y_{i}^{2}$ can be derived from the sample variance $s^{2}_{\mathbf{y}}$, the sample mean $\bar{\mathbf{y}}$, and the sample size $n$ $\displaystyle\sum_{i=1}^{n}y_{i}^{2}$ $\displaystyle=s^{2}_{\mathbf{y}}(n-1)+n\bar{\mathbf{y}}^{2}.$ For $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$, we note that $y_{i}\mathbf{x}_{i}^{T}$ is a $1\times p$ matrix $\displaystyle\begin{bmatrix}y_{i}&y_{i}x_{i1}&y_{i}x_{i2}&...&y_{i}x_{ij}&...&y_{i}x_{i(p-1)}\end{bmatrix}$ where the first element of $\mathbf{x}_{i}$ is 1 corresponding to the intercept. Thus, $\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}^{T}$ is a $1\times p$ matrix $\displaystyle\begin{bmatrix}\sum_{i=1}^{n}y_{i}&\sum_{i=1}^{n}y_{i}x_{i1}&\sum_{i=1}^{n}y_{i}x_{i2}&...&\sum_{i=1}^{n}y_{i}x_{ij}&...&\sum_{i=1}^{n}y_{i}x_{i(p-1)}\end{bmatrix}.$ The first element can be obtained from the sample mean $\bar{\mathbf{y}}$ while the rest of the elements needs the sample covariance between $\mathbf{y}$ and each of the predictors: $\displaystyle s_{\mathbf{yx}_{j}}$ $\displaystyle=\frac{1}{n-1}\sum_{i=1}^{n}(y_{i}-\bar{\mathbf{y}})(x_{ij}-\bar{\mathbf{x}}_{j})$ $\displaystyle=\frac{1}{n-1}\sum_{i=1}^{n}(y_{i}x_{ij}-\bar{\mathbf{y}}x_{ij}-y_{i}\bar{\mathbf{x}}_{j}+\bar{\mathbf{y}}\bar{\mathbf{x}}_{j})$ $\displaystyle=\frac{1}{n-1}\left(\sum_{i=1}^{n}y_{i}x_{ij}-\bar{\mathbf{y}}\sum_{i=1}^{n}x_{ij}-\bar{\mathbf{x}}_{j}\sum_{i=1}^{n}y_{i}+n\bar{\mathbf{y}}\bar{\mathbf{x}}_{j}\right)$ $\displaystyle=\frac{1}{n-1}\left(\sum_{i=1}^{n}y_{i}x_{ij}-\bar{\mathbf{y}}\sum_{i=1}^{n}x_{ij}\right),$ and thus, $\displaystyle\sum_{i=1}^{n}{y_{i}x_{ij}}$ $\displaystyle=s_{\mathbf{yx}_{j}}(n-1)+\bar{\mathbf{y}}\sum_{i=1}^{n}x_{ij}.$ Lastly, for $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$, each summand $\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ yields a $p\times p$ matrix $\displaystyle\begin{bmatrix}1&x_{i1}&\ldots&x_{i(p-1)}\\\ x_{i1}&x_{i1}^{2}&\ldots&x_{i1}x_{i(p-1)}\\\ \vdots&\vdots&\ddots&\vdots\\\ x_{i(p-1)}&x_{i(p-1)}x_{i1}&\ldots&x_{i(p-1)}^{2}\end{bmatrix}$ which when summated over $n$ observations yields $\displaystyle\begin{bmatrix}n&\sum_{i}{x_{i1}}&\ldots&\sum_{i}{x_{i(p-1)}}\\\ \sum_{i}{x_{i1}}&\sum_{i}{x_{i1}^{2}}&\ldots&\sum_{i}{x_{i1}x_{i(p-1)}}\\\ \vdots&\vdots&\ddots&\vdots\\\ \sum_{i}{x_{i(p-1)}}&\sum_{i}{x_{i(p-1)}x_{i1}}&\ldots&\sum_{i}{x_{i(p-1)}^{2}}\end{bmatrix}.$ Performing similar derivations as above reveals that computing $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ only requires the sample mean ($\bar{\mathbf{x}}_{j}$), variance ($s^{2}_{\mathbf{x}_{j}}$), and covariances among predictors $j$ and $k$ ($s_{\mathbf{x}_{j}\mathbf{x}_{k}}$), namely $\displaystyle\sum_{i=1}^{n}x_{ij}^{2}$ $\displaystyle=s^{2}_{\mathbf{x}_{j}}(n-1)+n\bar{\mathbf{x}}_{j}^{2},$ $\displaystyle\sum_{i=1}^{n}{x_{ij}x_{ik}}$ $\displaystyle=s_{\mathbf{x}_{j}\mathbf{x}_{k}}(n-1)+\bar{\mathbf{x}}_{j}\sum_{i=1}^{n}x_{ik}.$ ### A.3 Robust variance estimation from summary statistics When estimating the robust variance of linear 
regression coefficients, the following estimator is used when the individual observations are available: $\displaystyle\hat{V}(\hat{\bm{\beta}})$ $\displaystyle=(\mathbf{X}^{T}\mathbf{X})^{-1}(\mathbf{X}^{T}\mathbf{W}\mathbf{X})(\mathbf{X}^{T}\mathbf{X})^{-1}$ where $\mathbf{X}$ is the design matrix and $\mathbf{W}$ is an $n\times n$ diagonal matrix whose elements consist of the squared residuals $\hat{e}_{i}^{2}=(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}$. $\mathbf{X}^{T}\mathbf{X}$ is a $p\times p$ matrix that is equivalent to $\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$, which we have shown (Appendix A.2) can be computed from the mean vector and covariance matrix of the variables. Recall that $\mathbf{x}_{i},i=1,\ldots,n$ denotes the vector representing the $i$th row of $\mathbf{X}$. On the other hand, $\mathbf{X}^{T}\mathbf{W}\mathbf{X}$ can be shown to be composed of summary statistics involving the third and fourth joint sample moments. Specifically, $\displaystyle\mathbf{X}^{T}\mathbf{W}\mathbf{X}=$ $\displaystyle\sum_{i=1}^{n}\hat{e}_{i}^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{n}(y_{i}-\mathbf{x}_{i}^{T}\bm{\beta})^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{n}(y_{i}^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}-2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})\mathbf{x}_{i}\mathbf{x}_{i}^{T}+(\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta})\mathbf{x}_{i}\mathbf{x}_{i}^{T})$ The first term $y_{i}^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ is just a product of a scalar $y_{i}^{2}$ and the matrix $\mathbf{x}_{i}\mathbf{x}_{i}^{T}$, which results in the matrix $\displaystyle\begin{bmatrix}y_{i}^{2}&y_{i}^{2}x_{i1}&\ldots&y_{i}^{2}x_{i(p-1)}\\\ y_{i}^{2}x_{i1}&y_{i}^{2}x_{i1}^{2}&\ldots&y_{i}^{2}x_{i1}x_{i(p-1)}\\\ \vdots&\vdots&\ddots&\vdots\\\ y_{i}^{2}x_{i(p-1)}&y_{i}^{2}x_{i(p-1)}x_{i1}&\ldots&y_{i}^{2}x_{i(p-1)}^{2}\end{bmatrix}$ whose summation over all observations $i$ becomes $\displaystyle\begin{bmatrix}\sum_{i}y_{i}^{2}&\sum_{i}y_{i}^{2}x_{i1}&\ldots&\sum_{i}y_{i}^{2}x_{i(p-1)}\\\ \sum_{i}y_{i}^{2}x_{i1}&\sum_{i}y_{i}^{2}x_{i1}^{2}&\ldots&\sum_{i}y_{i}^{2}x_{i1}x_{i(p-1)}\\\ \vdots&\vdots&\ddots&\vdots\\\ \sum_{i}y_{i}^{2}x_{i(p-1)}&\sum_{i}y_{i}^{2}x_{i(p-1)}x_{i1}&\ldots&\sum_{i}y_{i}^{2}x_{i(p-1)}^{2}\end{bmatrix},$ where we find that the availability of the third and fourth joint sample moments involving the response variable makes it possible for this matrix to be computed. 
Similarly, the second term $2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ is also the product of a scalar $2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})$ and the matrix $\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ resulting in $\displaystyle\begin{bmatrix}2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i1}&\ldots&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i(p-1)}\\\ 2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i1}&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i1}^{2}&\ldots&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i1}x_{i(p-1)}\\\ \vdots&\vdots&\ddots&\vdots\\\ 2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i(p-1)}&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i(p-1)}x_{i1}&\ldots&2(y_{i}\mathbf{x}_{i}^{T}\bm{\beta})x_{i(p-1)}^{2}\end{bmatrix}.$ Summing over all observations leads to the matrix $\displaystyle\begin{bmatrix}2\sum_{i}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&2\sum_{i}x_{i1}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&2\sum_{i}x_{i(p-1)}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}\\\ 2\sum_{i}x_{i1}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&2\sum_{i}x_{i1}^{2}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&2\sum_{i}x_{i1}x_{i(p-1)}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}\\\ \vdots&\vdots&\ddots&\vdots\\\ 2\sum_{i}x_{i(p-1)}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&2\sum_{i}x_{i(p-1)}x_{i1}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&2\sum_{i}x_{i(p-1)}^{2}y_{i}\mathbf{x}_{i}^{T}\bm{\beta}\end{bmatrix}$ which can be computed from the third and fourth joint sample moments. Likewise, the final term $(\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta})\mathbf{x}_{i}\mathbf{x}_{i}^{T}$ summed over all observations becomes $\displaystyle\begin{bmatrix}\sum_{i}\bm{\beta}^{T}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\sum_{i}\bm{\beta}^{T}x_{i1}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&\sum_{i}\bm{\beta}^{T}x_{i(p-1)}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}\\\ \sum_{i}\bm{\beta}^{T}x_{i1}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\sum_{i}\bm{\beta}^{T}x_{i1}^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&\sum_{i}\bm{\beta}^{T}x_{i1}x_{i(p-1)}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}\\\ \vdots&\vdots&\ddots&\vdots\\\ \sum_{i}\bm{\beta}^{T}x_{i(p-1)}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\sum_{i}\bm{\beta}^{T}x_{i(p-1)}x_{i1}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}&\ldots&\sum_{i}\bm{\beta}^{T}x_{i(p-1)}^{2}\mathbf{x}_{i}\mathbf{x}_{i}^{T}\bm{\beta}\end{bmatrix}$ and is also composed of the third and fourth joint sample moments.
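To make the reconstruction in Appendix A.2 concrete, the following sketch illustrates the idea numerically. Python/NumPy is used here purely for illustration (the framework described in this paper is implemented in R), and the data and variable names are invented for the example; it shows that the ordinary least squares estimates obtained from the sample size, mean vector and covariance matrix coincide with those obtained from the individual-level data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1 = 200, 2                                   # n observations, p-1 predictors
X1 = rng.normal(size=(n, p1))                    # raw predictors (no intercept column)
y = 1.0 + X1 @ np.array([0.5, -2.0]) + rng.normal(scale=0.3, size=n)

# --- what a data provider would share: sample size, means, covariance matrix ---
Z = np.column_stack([y, X1])                     # (y, x_1, ..., x_{p-1})
m = Z.mean(axis=0)                               # mean vector
S = np.cov(Z, rowvar=False)                      # unbiased sample covariance (divisor n-1)

# --- reconstruct the sufficient statistics of Appendix A.2 ---
ybar, xbar = m[0], m[1:]
sum_x = n * xbar                                                    # column sums of the predictors
sum_y2 = S[0, 0] * (n - 1) + n * ybar**2                            # sum of y_i^2
sum_yx = np.concatenate([[n * ybar],
                         S[0, 1:] * (n - 1) + ybar * sum_x])        # sum of y_i x_i^T
XtX = np.empty((p1 + 1, p1 + 1))
XtX[0, 0] = n
XtX[0, 1:] = XtX[1:, 0] = sum_x
XtX[1:, 1:] = S[1:, 1:] * (n - 1) + np.outer(xbar, sum_x)           # sum of x_ij x_ik

beta_from_summaries = np.linalg.solve(XtX, sum_yx)                  # normal equations
sigma2_from_summaries = (sum_y2 - beta_from_summaries @ sum_yx) / (n - (p1 + 1))

# --- reference fit from the individual-level data ---
X = np.column_stack([np.ones(n), X1])
beta_raw, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_from_summaries, beta_raw))    # True: the two fits coincide
```

The quantities rebuilt above, $\sum_{i}y_{i}^{2}$, $\sum_{i}y_{i}\mathbf{x}_{i}^{T}$ and $\sum_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T}$, are exactly the building blocks identified in Appendix A.2.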
# On a Side Condition for Wronskian-Involving Differential Equations Nicoleta Bîlă 111Department of Mathematics and Computer Science, Fayetteville State University, 1200 Murchison Road, Fayetteville, NC 28301, E-mail: <EMAIL_ADDRESS> ###### Abstract The purpose of this paper is to make a few connections among specific concepts occurring in differential geometry and the theory of differential equations with the aim of identifying an intriguing class of undetermined nonlinear ordinary differential equations whose solutions satisfy a specific side condition consisting in a homogeneous third-order linear ordinary differential equation. A method for solving this class of Wronskian-involving differential equations based on the proposed side condition is presented. The Tzitzeica curve equation arising in the theory of space curves is considered as an example, and new closed and integral-form solutions for this equation are obtained. MSC 2000: 34A05, 34A30, 34A34, 53A04, 53A15. Keywords: Nonlinear Ordinary Differential Equations; Linear Ordinary Differential Equations; Wronskian; Tzitzeica Curves. ## 1 Introduction Since there is no general theory for integrating nonlinear ordinary differential equations (ODEs), it will always be a challenge to explore methods involving intriguing side conditions that would allow one “to undo” the nonlinearity encapsulated among the unknown functions and their derivatives and obtain particular solutions to the studied equation. The classical Lie method [7] is one of the most well-known techniques that may be applied in this situation and, for instance, would allow one to reduce the order of an ODE by one whenever the equation is invariant with respect to a particular one-parameter Lie group of transformations. However, this reduction may become more complicated in the case of underdetermined nonlinear ODEs of higher order that have fewer equations than unknowns and contain arbitrary constants or functions. In this paper, the focus is on a specific class of nonlinear differential equations (4) that expresses a relation between the Wronskian (1) of the unknown functions and the Wronskian (2) of their first derivatives. This class of autonomous ODEs is analyzed in the particular case when its solutions satisfy the auxiliary equation (9). For instance, the Tzitzeica curve equation (23) belongs to this class. It is important to point out that the side condition (9) is not found by using the classical Lie method but rather is related to how the nonlinearity among the unknown functions and their derivatives has occurred while deriving the equation. Introduced in 1812 by Józef Maria Hoëné-Wroński and later on mentioned by Thomas Muir in [6], the Wronskian determinant (or, simply, the Wronskian) of a set of $n$ smooth functions is the determinant of the $n\times n$ matrix whose entries are the functions and their derivatives up to the $(n-1)$st order. Namely, the functions are listed in the first row, their first derivatives are placed on the second row, and so on, and their $(n-1)$st derivatives are written on the last row ([4], p. 221). In particular, for $n=3$, the Wronskian $W(x,y,z)(t)$ of the smooth functions $x(t)$, $y(t)$, and $z(t)$ is given by (1). If the functions represent the set of fundamental solutions of a $n$th- order homogeneous linear ODE, then their Wronskian does not vanish and satisfies the Abel-Liouville-Ostrogradski formula (see, e.g., [4], p. 239). 
Interestingly, this formula allows us to determine the Wronskian associated with the solutions without actually integrating the linear differential equation. On the other hand, the Wronskian also occurs in the theory of space curves. For a smooth regular space curve (20), the Wronskian $W(x,y,z)(t)$ of the functions defining parametrically the curve may arise, for instance, in the calculation of the distance $d(t)$ from the origin of the system of coordinates to the osculating plane at an arbitrary point of the curve, while the Wronskian $W(x^{\prime},y^{\prime},z^{\prime})(t)$ of their first derivatives (2) is used in the computation of the curve’s torsion $\tau(t)$ as in (21). Therefore, for example, any condition imposed on $d(t)$ and $\tau(t)$ yields a Wronskian-involving underdetermined nonlinear differential equation for the curve’s defining functions. In this paper, the class of nonlinear ODEs (4) involving the Wronskians (1) and (2) of the unknown functions $x(t)$, $y(t)$, and $z(t)$ and, respectively, of their first derivatives $x^{\prime}(t)$, $y^{\prime}(t)$, and $z^{\prime}(t)$ is introduced and analyzed in the context of the family of solutions satisfying the auxiliary linear differential equation (9) such that the condition (3) holds. The motivation for considering differential equations of this kind lies in the study of the condition (22) that was introduced for specific curves by the Romanian mathematician Gheorghe Tzitzeica in 1911 during his work on affine invariants [11]. Nowadays, a space curve satisfying the relation (22), which may be written in an equivalent form as (23), is called a Tzitzeica curve. Along with Tzitzeica curves, Tzitzeica surfaces are affine invariants. A Tzitzeica surface is a surface for which the ratio of its Gaussian curvature and the fourth power of the distance from the origin to the tangent plane at any arbitrary point of the surface is constant [10]. It may be shown that the asymptotic curves on a Tzitzeica surface with negative Gaussian curvature are Tzitzeica curves (see, e.g., [2]). Although Tzitzeica curves have been of interest to many geometers over the past years, their related nonlinear ODE has not been studied in detail so far, and, hence, only a few examples of Tzitzeica curves defined explicitly in algebraic, transcendental, or integral forms are known (see [2], [3], [5], and [13]). In [2], Agnew et al. used the Mathematica software to find and illustrate the asymptotic curves on the Tzitzeica surface of revolution $z(x^{2}+y^{2})=1$ and expressed them in cylindrical coordinates in terms of logarithmic and exponential functions. In [5], Crâşmăreanu determined elliptic and hyperbolic cylindrical Tzitzeica curves written in integral form. In the paper by Williams [13], the nonlinear Tzitzeica curve equation has been derived explicitly, and new closed-form solutions have been presented. Bîlă and Eni [3] showed that the nonlinear ODE (23) admits particular solutions obtained by augmenting it with a side condition consisting of a third-order homogeneous linear ODE with constant coefficients. In this paper, it is shown that the Tzitzeica curve equation also admits solutions that satisfy the side condition (9). Although this is a slight generalization of the auxiliary condition introduced in [3], by Proposition 3.1, the new attached equation yields a larger family of solutions to the Tzitzeica curve equation. The new interesting solutions (16) and (26) are expressed in closed form or in terms of the Airy Ai and Bi functions. 
In Fig. 1, the software Geogebra has been used to visualize the Tzitzeica curve (26) on the Tzitzeica surface of equation $yz=1-4x^{2}$, surface that was found in [12] by applying the classical Lie method to the Tzitzeica surface partial differential equation. Additionally, for various functions $\gamma(t)$ in (9), new Tzitzeica curves expressed in terms of other special functions [1] such as Bessel or generalized hypergeometric functions may be investigated. Theorem 2.1 gives a generalization of these results for the class of nonlinear ODEs (4). The structure of the paper is the following. In Section 2, the side condition (9) is explored in the case of the class of underdetermined nonlinear Wronskian-involving ODEs (4) for which (3) holds. A new method for solving these types of nonlinear differential equations with the help of the auxiliary condition (9) is introduced. The novel approach is exemplified in Section 3 for the Tzitzeica curve equation (23) for which intriguing solutions are obtained. In the last section, conclusions of this work are presented. ## 2 A class of Wronskian-Involving Equations ### 2.1 On a related side condition If $x(t)$, $y(t)$, and $z(t)$ are smooth functions defined on an open interval $I\subset\mathbb{R}$, then their Wronskian is defined as $W(x,y,z)(t)=\left|\begin{array}[]{ccc}x(t)&y(t)&z(t)\\\ x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\end{array}\right|.$ (1) Similarly, the Wronskian of the functions’ first derivatives is given by $W(x^{\prime},y^{\prime},z^{\prime})(t)=\left|\begin{array}[]{ccc}x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\\\ x^{\prime\prime\prime}(t)&y^{\prime\prime\prime}(t)&z^{\prime\prime\prime}(t)\end{array}\right|.$ (2) In what follows, assume that $W(x,y,z)(t)\neq 0\quad\text{and}\quad W(x^{\prime},y^{\prime},z^{\prime})(t)\neq 0,$ (3) for all $t\in I$. This supposition implies that the functions $x(t)$, $y(t)$, and $z(t)$ and, respectively, their first derivatives $x^{\prime}(t)$, $y^{\prime}(t)$, and $z^{\prime}(t)$ are linearly independent (see [4], p. 221). Consider the differential equation ${\cal{F}}\left(W(x,y,z)(t),W(x^{\prime},y^{\prime},z^{\prime})(t)\right)=0$ (4) involving the Wronskians (1) and (2), where ${\cal{F}}$ is a smooth real- valued function in two variables. The equation above is a third-order underdetermined differential equation in the unknown functions $x(t)$, $y(t)$, and $z(t)$. Next, the problem of finding solutions to (4) is explored in the context of a specific side condition that would allow the Wronskian (2) to be expressed in terms of the Wronskian (1). Suppose that the functions $x(t)$, $y(t)$, and $z(t)$ satisfy the following homogeneous third-order linear ODE $u^{\prime\prime\prime}+\beta(t)u^{\prime\prime}+\gamma(t)u^{\prime}+\delta u=0,$ (5) where $\beta(t)$ and $\gamma(t)$ are smooth functions defined on the interval $I$, and $\delta$ is a nonzero constant. For simplicity, in this paper, $\delta$ is considered a nonzero constant. However, the method explained below may be modified to accommodate $\delta$ variable too. Since the functions $x(t)$, $y(t)$, and $z(t)$ are linearly independent, they form a fundamental set of solutions to (5). Therefore, the general solution to this equation may be written as $u(t)=C_{1}x(t)+C_{2}y(t)+C_{3}z(t)$, where $C_{i}$ are arbitrary real constants. 
If the leading derivatives $x^{\prime\prime\prime}=-\beta(t)x^{\prime\prime}-\gamma(t)x^{\prime}-\delta x$, $y^{\prime\prime\prime}=-\beta(t)y^{\prime\prime}-\gamma(t)y^{\prime}-\delta y$, and $z^{\prime\prime\prime}=-\beta(t)z^{\prime\prime}-\gamma(t)z^{\prime}-\delta z$ are substituted into (2), then after applying a few properties of determinants, this Wronskian may be expressed as $W(x^{\prime},y^{\prime},z^{\prime})(t)=-\delta W(x,y,z)(t),\quad t\in I.$ (6) Substituting the above relation into the differential equation (4) yields ${\cal{G}}\left(W(x,y,z)(t);\delta\right)=0,$ (7) where ${\cal{G}}$ denotes the resulting function that depends on $W(x,y,z)(t)$ (the notation used here shows that $\delta$ occurs as well in the reduced equation due to (6)). On the other hand, according to the Abel-Liouville-Ostrogradski formula (see, e.g., [4], p. 239), the Wronskian of the set of fundamental solutions of (5) satisfies the first-order linear ODE $W^{\prime}(x,y,z)(t)=-\beta(t)W(x,y,z)(t).$ (8) The differentiation of (7) with respect to $t$ leads to $W^{\prime}(x,y,z)(t)=0$ for any $t\in I$. By (3), the Wronskian $W(x,y,z)(t)$ does not vanish, and, thus, the equation (8) implies $\beta(t)=0$ for all $t\in I$. In conclusion, the equation (5) reduces to $u^{\prime\prime\prime}+\gamma(t)u^{\prime}+\delta u=0,$ (9) where $\gamma(t)$ is a smooth function and $\delta$ is a nonzero constant. In this way, the equation (4) has been reduced to the homogeneous third-order linear ODE (9) along with the equation (7) provided that (3) holds. Observe that now the Wronskian $W(x,y,z)(t)=C_{0}$ is constant, and the equation (7) turns into the compatibility condition ${\cal{G}}\left(C_{0};\delta\right)=0.$ (10) In conclusion, the following result has been proven. ###### Theorem 2.1. Any three linearly independent solutions $x(t)$, $y(t)$, and $z(t)$ of the homogeneous third-order linear ODE (9) conditioned to (3) satisfy the nonlinear differential equation (4) provided that (10) holds. ### 2.2 Special side conditions In what follows, the ODE (9) is integrated in a few particular cases that yield explicit or integral-form solutions. Recall that in each of these cases the Wronskian of the solutions is constant. As has been shown in the previous section, the equation (4) has been reduced to the side condition (9) and the compatibility condition (10). ###### Remark 2.1. If $\gamma(t)=0$, then the equation (9) becomes $u^{\prime\prime\prime}+\delta u=0$, and its set of fundamental solutions is $\displaystyle x(t)$ $\displaystyle=\exp\left(-\delta^{1/3}t\right),$ $\displaystyle y(t)$ $\displaystyle=\exp\left(\frac{1}{2}\delta^{1/3}t\right)\cos\left(\frac{\sqrt{3}}{2}\delta^{1/3}t\right),$ $\displaystyle z(t)$ $\displaystyle=\exp\left(\frac{1}{2}\delta^{1/3}t\right)\sin\left(\frac{\sqrt{3}}{2}\delta^{1/3}t\right),$ (11) with $t\in\mathbb{R}$. In this case, $W(x,y,z)(t)=3\sqrt{3}\delta/2$. ###### Remark 2.2. For $\gamma(t)=\tilde{\gamma}$, where $\tilde{\gamma}$ is a nonzero real constant, the equation (9) is reduced to the homogeneous linear ODE with constant coefficients $u^{\prime\prime\prime}+\tilde{\gamma}u^{\prime}+\delta u=0$ (12) whose solutions have been discussed in detail in the context of the Tzitzeica curve equation in [3]. The characteristic equation related to (12) is the depressed cubic equation $v^{3}+\tilde{\gamma}v+\delta=0$. 
The character of the solutions depends on the sign of the associated discriminant $D=-4\tilde{\gamma}^{3}-27\delta^{2}.$ a) If $D>0$, the depressed cubic equation has three real distinct solutions $v_{i}\neq 0$, $i=1,2,3$, whose sum is zero. The conditions $v_{3}\neq v_{i}$, with $i=1,2$, imply $v_{2}\neq-2v_{1}$ and $v_{2}\neq-v_{1}/2$. In this case, the fundamental set of solutions of (12) is $x(t)=\exp\left(v_{1}t\right),\quad y(t)=\exp\left(v_{2}t\right),\quad z(t)=\exp\left(-\left(v_{1}+v_{2}\right)t\right),$ (13) with $t\in\mathbb{R}$, for which $W(x,y,z)(t)=(v_{2}-v_{1})(2v_{1}+v_{2})(v_{1}+2v_{2})$. b) For $D=0$, the characteristic equation has three real solutions out of which two are equal, i.e., $v_{1}=v_{2}\neq 0$ and $v_{3}=-2v_{1}$. Therefore, $x(t)=\exp(v_{1}t),\quad y(t)=t\exp(v_{1}t),\quad z(t)=\exp(-2v_{1}t),$ (14) with $t\in\mathbb{R}$, form a fundamental solution set to (12). In this case, their Wronskian is given by $W(x,y,z)(t)=9v_{1}^{2}$. c) If $D<0$, the depressed cubic equation has two nonreal complex conjugate solutions, $v_{1,2}=m\pm in$, and one real nonzero solution $v_{3}=-2m$, where $m,n\neq 0$ are real numbers. The integration of (12) yields $x(t)=\exp(mt)\cos(nt),\;\;y(t)=\exp(mt)\sin(nt),\;\;z(t)=\exp(-2mt),$ (15) where $t\in\mathbb{R}$, and for which $W(x,y,z)(t)=n(9m^{2}+n^{2})$. ###### Remark 2.3. If $\gamma(t)=\delta t$, then (9) may be integrated once and becomes $u^{\prime\prime}+\delta tu=C$, where $C\in\mathbb{R}$ (here $C$ is nonzero whenever three linearly independent solutions to (9) are needed). The solutions of the latter ODE are given in integral form in terms of the Airy Ai and Bi functions of the first and second kind, respectively ([8], p. 214), that is, $\displaystyle x(t)$ $\displaystyle=\text{Ai}\left(-\delta^{1/3}t\right),$ $\displaystyle y(t)$ $\displaystyle=\text{Bi}\left(-\delta^{1/3}t\right),$ $\displaystyle z(t)$ $\displaystyle=\pi\delta^{-1/3}\left(x(t)\int y(s)ds-y(t)\int x(s)ds\right),$ (16) with $t\in\mathbb{R}$, and their Wronskian is $W(x,y,z)(t)=-\delta^{1/3}/\pi$. ### 2.3 On an integration method involving a side condition Assume that $x_{0}(t)$ is a known solution to the ODE (9). In this situation, a method for integrating the ODE (9) may be introduced as follows. Step 1. Substitute $u=x_{0}(t)$ into the ODE (9) and solve the equation for $\gamma(t)$. Denote by $\gamma_{0}(t;\delta)$ the resulting solution, which depends on $\delta$ too. Step 2. Replace $\gamma_{0}(t;\delta)$ in (9) and obtain $u^{\prime\prime\prime}+\gamma_{0}(t;\delta)u^{\prime}+\delta u=0$. By using the change of function $u(t)=x_{0}(t)\int w(s)ds,$ (17) the previous equation is reduced to $x_{0}(t)w^{\prime\prime}+3x^{\prime}_{0}(t)w^{\prime}+\left[3x^{\prime\prime}_{0}(t)+\gamma_{0}(t;\delta)x_{0}(t)\right]w=0$ (18) which is a homogeneous linear second-order ODE for the function $w(t)$. Step 3. Integrate (18) and determine its general solution $w(t)=C_{1}w_{1}(t)+C_{2}w_{2}(t)$, where $w_{1}(t)$ and $w_{2}(t)$ represent its fundamental solution set. Step 4. Replace $w(t)$ into (17) to have $u(t)=x_{0}(t)\int w(s)ds=C_{1}x_{0}(t)\int w_{1}(s)ds+C_{2}x_{0}(t)\int w_{2}(s)ds+C_{3}x_{0}(t),$ where $t\in I$. In conclusion, $x(t)=x_{0}(t),\quad y(t)=x_{0}(t)\int w_{1}(s)ds,\quad z(t)=x_{0}(t)\int w_{2}(s)ds,$ (19) with $t\in I$, form the fundamental set of solutions of the equation (9). 
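Before turning to the Tzitzeica curve equation, the reduction above can be checked numerically. The short Python sketch below is an illustration added here, not part of the original derivation; SciPy is assumed to be available, and the value of $\delta$ and the choice $\gamma(t)=\delta t$ of Remark 2.3 are arbitrary test inputs. It integrates the side condition (9) from three independent initial conditions and prints the Wronskians (1) and (2) along the solutions: the first stays constant (since $\beta=0$), and the second equals $-\delta$ times the first, as in (6).

```python
import numpy as np
from scipy.integrate import solve_ivp

delta = 2.0                         # the nonzero constant in (9); an arbitrary test value
gamma = lambda t: delta * t         # the choice of Remark 2.3; any smooth gamma(t) works

def rhs(t, u):
    # state u = (u, u', u''); equation (9): u''' = -gamma(t) u' - delta u
    return [u[1], u[2], -gamma(t) * u[1] - delta * u[0]]

t_eval = np.linspace(0.0, 2.0, 5)
# three linearly independent solutions x, y, z from the canonical initial conditions
sols = [solve_ivp(rhs, (0.0, 2.0), ic, t_eval=t_eval, rtol=1e-10, atol=1e-12).y
        for ic in np.eye(3)]

for k, t in enumerate(t_eval):
    cols = [s[:, k] for s in sols]                          # (u, u', u'') of each solution
    W = np.linalg.det(np.column_stack(cols))                # Wronskian (1)
    third = [-gamma(t) * c[1] - delta * c[0] for c in cols] # u''' from the ODE itself
    W1 = np.linalg.det(np.column_stack(
        [np.array([c[1], c[2], d3]) for c, d3 in zip(cols, third)]))  # Wronskian (2)
    print(f"t={t:4.2f}  W(x,y,z)={W:+.8f}  W(x',y',z')={W1:+.8f}  -delta*W={-delta*W:+.8f}")
```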
## 3 Solutions to Tzitzeica Curve Equation ### 3.1 Tzitzeica Curve Equation In this section, the compatibility of the side condition (9) for the underdetermined nonlinear ODE (4) is presented in detail in the case of the Tzitzeica curve equation (23). Consider $\mathbf{r}(t)=\left(x(t),y(t),z(t)\right),\quad t\in I$ (20) a smooth, regular space curve, where $I\subset\mathbf{R}$ is an interval. Assume that the curve’s curvature $k(t)=\frac{||\mathbf{r^{\prime}}(t)\times\mathbf{r^{\prime\prime}}(t)||}{||\mathbf{r^{\prime}}(t)||^{3}},\quad t\in I$ and torsion $\tau(t)=\frac{\langle\mathbf{r^{\prime}}(t),\mathbf{r^{\prime\prime}}(t),\mathbf{r^{\prime\prime\prime}}(t)\rangle}{||\mathbf{r^{\prime}}(t)\times\mathbf{r^{\prime\prime}}(t)||^{2}},\quad t\in I$ (21) do not vanish on $I$. Here $||\mathbf{r^{\prime}}(t)\times\mathbf{r^{\prime\prime}}(t)||$ is the magnitude of the cross product of the tangent vector $\mathbf{r^{\prime}}(t)$ and the acceleration vector $\mathbf{r^{\prime\prime}}(t)$, $\mathbf{r^{\prime}}(t)\times\mathbf{r^{\prime\prime}}(t)$, $||\mathbf{r^{\prime}}(t)||$ is the magnitude of $\mathbf{r^{\prime}}(t)$, and $\langle\mathbf{r^{\prime}}(t),\mathbf{r^{\prime\prime}}(t),\mathbf{r^{\prime\prime\prime}}(t)\rangle=\left|\begin{array}[]{ccc}x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\\\ x^{\prime\prime\prime}(t)&y^{\prime\prime\prime}(t)&z^{\prime\prime\prime}(t)\end{array}\right|$ is the mixed product of vectors $\mathbf{r^{\prime}}(t)$, $\mathbf{r^{\prime\prime}}(t)$, and $\mathbf{r^{\prime\prime\prime}}(t)$ (see, for instance, [9], p. 48). The curvature shows the amount by which a curve deviates from being a line, and its torsion shows how sharply the curve is twisting out of its osculating plane. If the torsion of the curve is nonzero then $W(x^{\prime},y^{\prime},z^{\prime})(t)\neq 0$ and, hence, the functions $x^{\prime}(t)$, $y^{\prime}(t)$, and $z^{\prime}(t)$ are linearly independent. On the other hand, the osculating plane at an arbitrary point of the curve (20) is the plane generated by the vectors $\mathbf{r^{\prime}}(t)$ and $\mathbf{r^{\prime\prime}}(t)$. Therefore, its equation in the determinant form is given by $\left|\begin{array}[]{ccc}x-x(t)&y-y(t)&z-z(t)\\\ x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\end{array}\right|=0.$ If $d(t)$ denotes the distance from the origin to the osculating plane of the curve, it can be shown that $d(t)=\frac{1}{||\mathbf{r^{\prime}}(t)\times\mathbf{r^{\prime\prime}}(t)||}\left|\begin{array}[]{ccc}x(t)&y(t)&z(t)\\\ x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\end{array}\right|.$ The distance $d(t)$ does not vanish iff $W(x,y,z)(t)\neq 0$. In what follows, in addition to the condition that the curvature and the torsion of the curve (20) are nonzero, the distance $d(t)$ is assumed nonzero on $I$ as well. Therefore, the curve’s defining functions $x(t)$, $y(t)$, and $z(t)$ are assumed to satisfy the condition (3). A Tzitzeica curve is defined as a space curve with the property that the ratio of its torsion $\tau(t)$ and the square of the distance $d(t)$ from the origin to the osculating plane at an arbitrary point of the curve is constant, i.e, $\frac{\tau(t)}{d^{2}(t)}=\alpha,$ (22) for all $t\in I$, where $\alpha\neq 0$ is a nonzero real number called here the curve’s constant. 
Substituting $\tau(t)$ and $d(t)$ into (22) yields $\left|\begin{array}[]{ccc}x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\\\ x^{\prime\prime\prime}(t)&y^{\prime\prime\prime}(t)&z^{\prime\prime\prime}(t)\end{array}\right|=\alpha\left|\begin{array}[]{ccc}x(t)&y(t)&z(t)\\\ x^{\prime}(t)&y^{\prime}(t)&z^{\prime}(t)\\\ x^{\prime\prime}(t)&y^{\prime\prime}(t)&z^{\prime\prime}(t)\end{array}\right|^{2}.$ The above equation can be rewritten in terms of the Wronskians (1) and (2) as follows $W(x^{\prime},y^{\prime},z^{\prime})(t)=\alpha\left[W(x,y,z)(t)\right]^{2}.$ (23) Based on Theorem 2.1, the Wronskian-involving equation (23) admits particular solutions satisfying the side condition (9) provided that (3) holds. Assume that the functions $x(t)$, $y(t)$, and $z(t)$ are solutions to the ODE (5). In this case, the Wronskian of their first derivatives (2) satisfies the relation (6) which may be used to rewrite the equation (23) in the form $-\delta W(x,y,z)(t)=\alpha\left[W(x,y,z)(t)\right]^{2},$ (24) or, equivalently, as follows $W(x,y,z)(t)\left[\alpha W(x,y,z)(t)+\delta\right]=0.$ Thus, in this case, the above relation represents the compatibility condition (10). Taking into account that $W(x,y,z)(t)$ does not vanish on $I$, it follows that $\alpha=-\frac{\delta}{W(x,y,z)(t)}.$ (25) Since the Tzitzeica curve equation (24) is a particular case of the Wronskian-involving equation (4), according to Theorem 2.1, the following result has been proven. ###### Proposition 3.1. Any three linearly independent solutions $x$, $y$, and $z$ of the homogeneous third-order linear ODE (9) for which the condition (3) holds satisfy the Tzitzeica curve equation (24), where the curve’s constant is given by (25). ### 3.2 Examples of Tzitzeica Curves A few examples of Tzitzeica curves that derive from the side condition (9) are presented below. However, more examples may be given by exploring other options for the function $\gamma(t)$ in this auxiliary condition. Example 1. According to Remark 2.1, the functions (11) represent a fundamental set of solutions for the side condition (9) in the case $\gamma(t)=0$. Since the Tzitzeica curve equation (24) reduces to (25), then (11) represents a Tzitzeica curve with $\alpha=-2\sqrt{3}/9$ that lies on the surface $S:x(y^{2}+z^{2})=1$. After applying the affine transformation $T:\tilde{x}=z,\tilde{y}=x,\tilde{z}=y$ to the surface $S$, this becomes $\tilde{S}:z(x^{2}+y^{2})=1$. It may be shown that $\tilde{S}$ is a Tzitzeica surface with negative Gaussian curvature, and, therefore, its asymptotic curves are Tzitzeica curves. Agnew et al. [2] have found the asymptotic curves (expressed in cylindrical coordinates) for $\tilde{S}$ by using the software Mathematica. It may be shown that the transformed Tzitzeica curve (11) via the transformation $T$ is an asymptotic curve on the surface $\tilde{S}$. Example 2. Consider the side condition (12) in Remark 2.2. Then there are three classes of transcendental Tzitzeica curves that are classified in terms of the sign of the discriminant $D$ of the depressed equation. The solutions (13), (14), and (15) represent Tzitzeica curves for which the associated curve’s constants are given, respectively, by (25). These curves have been found and discussed in detail in [3]. Example 3. The solution (16) represents a new Tzitzeica curve for which, by (25), the curve’s constant is $\alpha=\pi\delta^{2/3}$. Interestingly, this curve is expressed in terms of the Airy Ai and Bi functions. 
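Proposition 3.1 can also be verified symbolically for the exponential curve (13) of Remark 2.2 a). The SymPy sketch below is an illustrative check added here; the numeric roots $v_{1}=1$, $v_{2}=2$, hence $v_{3}=-3$, correspond to the depressed cubic $v^{3}-7v+6=0$, i.e. $\tilde{\gamma}=-7$ and $\delta=6$, and are chosen only for concreteness. The code computes $\tau(t)$ and $d(t)$ directly from their defining formulas and confirms that the ratio $\tau(t)/d^{2}(t)$ is the constant $-\delta/W(x,y,z)$ of (25).

```python
import sympy as sp

t = sp.symbols('t', real=True)
v1, v2 = 1, 2
v3 = -(v1 + v2)                                       # the roots of v^3 - 7 v + 6 sum to zero
x, y, z = sp.exp(v1*t), sp.exp(v2*t), sp.exp(v3*t)    # the curve (13)

r = sp.Matrix([x, y, z])
r1, r2, r3 = r.diff(t), r.diff(t, 2), r.diff(t, 3)

W  = sp.Matrix.hstack(r,  r1, r2).det()               # Wronskian (1)
W1 = sp.Matrix.hstack(r1, r2, r3).det()               # Wronskian (2)
c2 = r1.cross(r2).dot(r1.cross(r2))                   # squared norm of r' x r''
tau = W1 / c2                                         # torsion (21)
d   = W / sp.sqrt(c2)                                 # distance to the osculating plane
alpha = sp.simplify(tau / d**2)                       # the ratio (22)

print(sp.simplify(W))    # 20 = (v2-v1)(2v1+v2)(v1+2v2), the constant Wronskian of Remark 2.2 a)
print(alpha)             # -3/10 = -delta/W with delta = 6 and W = 20, as in (25)
```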
Example 4. This example refers to the method introduced in Subsection 2.3. Assume that $x_{0}(t)=t^{-3/2}$ is a known solution to (9). At Step 1, after substituting $x_{0}(t)$ back in the equation and solving for $\gamma(t)$, it follows $\gamma(t)=\gamma_{0}(t;\delta)=\frac{8\delta t^{3}-105}{12t^{2}}.$ In particular, for $\delta=-27/8$, the ODE (9) becomes $u^{\prime\prime\prime}-\frac{9t^{3}+35}{4t^{2}}u^{\prime}-\frac{27}{8}u=0.$ At Step 2, the change of variable (17) reduces the above equation to $w^{\prime\prime}-\frac{9}{2t}w^{\prime}+\left(-\frac{9t}{4}+\frac{5}{2t^{2}}\right)w=0.$ The set of fundamental solutions is found at Step 3 to be $w_{1}(t)=\frac{3}{2}\left(t^{2}-\sqrt{t}\right)\exp\left({t^{3/2}}\right),\quad w_{2}(t)=-\frac{3}{2}\left(t^{2}+\sqrt{t}\right)\exp\left(-{t^{3/2}}\right),$ for $t>1$. Therefore, at Step 4, the following Tzitzeica curve is found $\displaystyle x(t)$ $\displaystyle=t^{-3/2},$ $\displaystyle y(t)$ $\displaystyle=\left(1-2t^{-3/2}\right)\exp\left({t^{3/2}}\right),$ $\displaystyle z(t)$ $\displaystyle=\left(1+2t^{-3/2}\right)\exp\left({-t^{3/2}}\right),$ (26) where $t>1$. The Wronskian (1) becomes $W(x,y,z)(t)=27/4$, and, from (25), the curve’s constant takes the value $\alpha=1/2$. The new Tzitzeica curve (26) lies on the Tzitzeica surface $M:\;yz=1-4x^{2}$ with negative Gaussian curvature, surface that was found by using the symmetry analysis theory in [12] (see Fig. 1). The curve (26) is not an asymptotic curve on $M$ as this is an hyperboloid with one sheet whose asymptotic curves are lines. Figure 1: The Tzitzeica curve defined by (26) for $t\in[1.001,8]$ represented on the Tzitzeica surface of equation $yz=1-4x^{2}$. ## 4 Discussion and Conclusions The motivation of this work is based on the study of the Tzitzeica curve equation (24) which is an underdetermined nonlinear autonomous ordinary differential equation arising from the condition stating that the ratio of the curve’s torsion $\tau(t)$ and the square of the distance $d(t)$ from the origin to its osculating plane at an arbitrary point of the curve is constant. This paper is a continuation of author’s work on closed-form solutions for the Tzitzeica curves that she has initiated with her students (see [3] and [13]). It is shown that the auxiliary equation (9) provides a large family of solutions for the class of Wronskian-involving equations (4) which includes the Tzitzeica curve nonlinear ODE (24). The side condition (9) has been derived by using a linear combination of derivatives of the unknown functions that would allow specific properties of determinants to be applied. Another key observation is the relation (6) that shows that the Wronskian (2) may be expressed in terms of the Wronskian (1). In Section 2, a method based on the side condition (9) is introduced and allows one to find a Tzitzeica curve if a solution $x_{0}(t)$ to (9) is known. As a consequence, new solutions for (4) and, in particular, for (24) are obtained. It is intriguing how the auxiliary condition (9) may provide a large variety of solutions to the nonlinear ODEs (4) and, hence, to (24). In the future, it will be interesting to make the connection of these results with the classical Lie symmetries associated with the Tzitzeica curve equation and with other particular Wronskian-type equations. Acknowledgments. 
Part of these results have been presented to the online XVth International Conference on Differential Geometry and Dynamical Systems, held on August 26–29, 2021, Bucharest, Romania, and, hence, the author would like to thank the organizers, in particular, Prof. Dr. C. Udrişte and Prof. Dr. V. Bălan. The author would also like to thank Prof. Dr. M. Crâşmăreanu for reading the manuscript and making interesting comments that have improved the presentation of the paper. ## References * [1] Abramowitz, M. and Stegun, I., eds. Handbook of Mathematical Functions. New York: Dover (1972) * [2] A. F. Agnew, A. Bobe, W. G. Boskoff and B. D. Suceava, Tzitzeica curves and surfaces, The Mathematica Journal, 12, 1-18 (2010) * [3] N. Bîlă and M. Eni, Particular solutions to the Tzitzeica curve equation, Differential Geometry - Dynamical Systems, 24, 38-47 (2022) * [4] W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, 8th edition, John Wiley & Sons, Inc. (1986) * [5] M. Crâşmăreanu, Cylindrical Tzitzeica curves implies forced harmonic oscillators, Balkan J. Geom. Appl.,7, No. 1, 37-42 (2002) * [6] T. Muir, A treatise on the theorie of determinants, London. Macmillan (1882) * [7] P. J. Olver, Applications of Lie Groups to Differential Equations, Graduate Texts in Mathematics, vol. 107, Springer-Verlag, New York (1986) * [8] A. D. Polyanin, V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations Second Edition, Chapman & Hall/CRC Press, Boca Raton (2003) * [9] A. Pressley, Elementary Differential Geometry, Springer Undergraduate Mathematics Series, Springer-Verlag London Limited (2012) * [10] G. Tzitzeica, Sur une nouvelle classes de surfaces, Comptes Rendus, Acad. Sci. Paris, Paris, 144, 1257-1259 (1907) * [11] G. Tzitzeica, Sur certaines courbes gauches, Ann. de l’Ec. Normale Sup., 28, 9-32 (1911) * [12] C. Udrişte, N. Bîlă, Symmetry group of Tzitzeica surfaces PDE, Balkan J. Geom. Appl., 4(2), 123-140 (1999) * [13] L. R. Williams, On The Tzitzeica Curve Equation, Explorations: The Undergraduate Research and Creative Activities Journal for the State of North Carolina, VIII, 105-115 (2013)
# Study of semi-boosted top quark reconstruction performance on the line shape of a $t\bar{t}$ resonance J. Pácalt J. Kvita ###### Abstract We study the production of top quark pair events in $pp$ collisions in the $\ell$+jets channel at the energy of $\sqrt{s}=14$ TeV for Standard Model as well as new physics processes. We explore the usage of semi-boosted topologies, where the top quark decays into a high-transverse-momentum (boosted) hadronic $W$-jet and an isolated $b$-jet, and study their performance in the kinematic reconstruction of $t\bar{t}$ events. An important event fraction is recovered and the correlation of selected kinematic variables between the detector and particle level is studied. The quality of the reconstructed mass line shape of a hypothetical scalar resonance decaying into $t\bar{t}$ is evaluated and compared for regimes of a different degree of the transverse boost. The unfolding performance is checked in terms of comparing the excess of events in spectra before and after the unfolding, demonstrating a loss of signal significance after the unfolding procedure for both energy- and angle-related observables, with possible applications in current LHC experiments. ## 1 Introduction This work studies the kinematic reconstruction of top quark pairs in a collider detector close to that of the ATLAS detector [1] at the Large Hadron Collider (LHC) [2] at CERN, using a parameterized detector simulation provided by Delphes [3]. The LHC nominally collides protons at four interaction points where the detectors are located. The two main multi-purpose detector facilities are the ATLAS and CMS [4] detectors, which are versatile particle detectors with the ability to discern all kinds of particles with the exception of neutrinos. Both have similar phase-space coverage, resolution, and detection and identification capabilities, based on different experimental technologies. Quarks and gluons originating in collisions are not detected directly because they become confined in hadrons or, in the case of the top quark, decay before reaching the detector. The process of hadronization forms showers of particles collimated in the direction of the original particle, resulting in a hadronic jet reaching the detector. The degree of collimation is proportional to the momentum of the parent particle, and this results in characteristic spatial patterns of the hadronic showers in the detector. If the primary particle has a large momentum with respect to the beam (transverse momentum, ${p_{\rm T}}$), the corresponding particle shower is more collimated, leading to a reconstructed hadronic final state with an imprint of the parent particle four-vector, including its mass. Also, stable particles of different origin can overlap in a jet. The energy of particles used in colliders increases with the advance of experimental technology. This leads to an enrichment of events with particles of higher transverse momenta. This paper studies the process of top and anti-top quark pair ($t\bar{t}$) production $pp\rightarrow t\bar{t}$ at the LHC at CERN at the center-of-mass energy $\sqrt{s}=14$ TeV. This paper also considers a process with a hypothetical massive heavy scalar particle $y_{0}$ as a mediator for the process $pp\rightarrow y_{0}\rightarrow t\bar{t}$ through a triangle loop for the enhancement of events in the phase space of large transverse momenta. ## 2 The ${t\bar{t}}$ final states topologies The $t\bar{t}$ events are categorized into three channels, according to the decay products of the top quarks. 
The top quark decay is described by the following process: $t\rightarrow W^{+}q$ $(q=b,s,d)$. The remaining allowed decay processes are weak neutral currents, which are heavily suppressed, and their contribution is negligible. Furthermore, the top quark decays mainly to the bottom quark thanks to the large value of the CKM mixing matrix element between the bottom and top quarks. The $W$ boson has two main decay modes: hadronic (68%) and leptonic (32%) [5]. There are two $W$ boson decays in each $t\bar{t}$ event and the $t\bar{t}$ decay channels can thus be categorized, based on the combination of $W$ decay modes, into the all-hadronic, semi-leptonic and dilepton channels. This analysis focuses on the semi-leptonic channel. The degree of collimation of the produced particle showers and their angular separation in the detector defines the topology of an event. In the resolved topology, $t\bar{t}$ decay products are reconstructed as individual jets and a lepton, see Fig. 1a). Events in this topology are usually produced at lower invariant masses of the $t\bar{t}$ pair. In the semi-boosted topology, decay products on the side where the $W$ boson decays hadronically are collimated enough to form one jet in the detector, with the exception of the jet from the $b$ quark, see Fig. 1b). In the semi-boosted mixed topology, which is a special case of the semi-boosted topology, the angularly isolated jet is one of the $W$ boson hadronic decay products, see Fig. 1c). In the boosted topology, all products from the hadronically decaying top quark are collimated and form one large jet in the detector, see Fig. 1d). The fractions of the topologies are correlated with the energy spectrum of the $t\bar{t}$ pair, forming a gradual transition from the resolved to the boosted topologies. The number of events of the resolved topology drops significantly with increasing energy of the process. In an intermediate energy regime, the number of events in the boosted topology is not yet large enough to fill the gap left by the resolved topology. Finding ways to improve the event reconstruction efficiency by adding the semi-boosted topologies, which reside in the aforementioned transition energy region, is one of the aims of this paper. This can help gain statistics in $t\bar{t}$ analyses. All four topologies mentioned are explored in this paper. In addition, we study the resolution of the mass peak of a hypothetical scalar resonance decaying to ${t\bar{t}}$ and we check the performance of the unfolding procedure in terms of its ability to retain an excess of a possible new physics signal. a) | b) | c) | d) Figure 1: A schematic of $pp\rightarrow y_{0}\rightarrow t\bar{t}$ decays in the resolved a), the semi-boosted mixed b), the semi-boosted c) and the boosted d) topologies in the $\ell$+jets channel. Red (green) cones represent small-$R$ (large-$R$) jets. The semi-boosted topologies have been used in analyses at the LHC, _e.g._ by CMS [6, 7, 8] or ATLAS [9, 10, 11, 12], although mostly at lower energies or in cases where the boosted $W$-tagged hadronic jet plays a key rôle in the analysis. We argue that their potential is still worth exploring in the ${t\bar{t}}$ final states also at the highest energies of the LHC to come. We employ the semi-boosted topologies in the ${t\bar{t}}\rightarrow\ell$+jets final states and emphasize their ability to recover a non-negligible event fraction as well as explore their usage in unfolding differential distributions, i.e. 
in precision measurements as well in searches for new physics. ## 3 Samples Events were generated for the processes $pp\rightarrow{t\bar{t}}{}$ (SM) and $pp\rightarrow y_{0}^{\prime}\rightarrow{t\bar{t}}{}$ with the addition of a $y_{0}$ scalar particle to the Standard Model [13, 14, 15, 16, 17, 18, 19, 20] using the MadGraph5 version 2.6.4 simulation toolkit [21]. The spin-0 model [19] contains an s-channel color-singlet scalar mediator with purely flavour- diagonal couplings proportional to the masses of the SM particles, therefore leaving top quark as the only relevant SM fermion coupling to the new hypothetical scalar. The parton shower and the hadronization processes were simulated using Pythia8 [22]. Masses of the hypothetical $y_{0}$ particle, which serves effectively as a source of semi-boosted and boosted top quarks, were selected as 500, 600, 700, 800, 900 and 1000 GeV to sample through the region where the number of events in the resolved topology starts to decline rapidly (500 GeV) to where the number of the boosted topology events is becoming dominant (1000 GeV). The decay width of $y_{0}$ of 1%, 10% and 30% of its mass for each sample were studied, results and values in the tables and plots are shown for the decay width of 10%. The SM $t\bar{t}$-sample without the hypothetical $y_{0}$ particle ensures the correspondence with data measured in LHC experiments and is used as a background to $y_{0}$ and for corrections for the unfolding procedure. The top quark mass in simulation was set to 173 GeV (MadGraph5 default). To ensure the strength of the evidence from the contribution of different samples, all samples were weighted to the same luminosity ($\sim 12~{}\mathrm{fb}^{-1}$) for stacking and unfolding purposes, and which corresponds to the luminosity of the $t\bar{t}$ sample. The cross-sections of the samples used are summarized in Table 3. The numbers of events in the table are presented for one statistically independent sample and the generated samples also include charge conjugated processes in the decay, _i.e._ the top and anti-top quark decays were swapped. All samples were generated at the next-to-leading order (NLO) accuracy in perturbative quantum chromodynamics (QCD), allowing also hard process with the additional high-${p_{\rm T}}$ jet production. The samples for the description of the $W+$jets and $WW+$jets backgrounds were prepared within the same framework as the signal samples. The addition of the associated production of one $W$ boson and two $b$ quarks ($Wbb\rightarrow\ell\nu bb$ \+ jets) background and two $W$ bosons and two $b$ quarks ($WWbb\rightarrow\ell\nu bb+$jets) background brings the analysis close to those over data while the $y_{0}$ sample represents a signal of new physics. The ATLAS-like detector was simulated using the Delphes version 3.4.1 package [3] with a modified ATLAS card111The modification is the addition of information about $B$-hadrons and in the reconstruction of both small as well as large-$R$ jets.. This simulation is able to perform particle propagation through the magnetic field as well as hadronic and electromagnetic calorimeter simulation including the response of the detector, muon identification system and missing energy. The Delphes package has its own reconstruction procedure leading to detector-level jets with a simulated realistic energy response based on the performance of the ATLAS detector at LHC. 
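For concreteness, the luminosity reweighting described above amounts to a per-event weight of $\sigma\,L_{\mathrm{target}}/N_{\mathrm{gen}}$ for each sample. The following minimal Python sketch illustrates this; the function name and usage are illustrative assumptions of this note rather than the actual analysis code, and the numbers are taken from the sample table of Section 3 only as an example.

```python
def luminosity_weight(cross_section_pb, n_generated, target_lumi_pb=12_000.0):
    """Per-event weight scaling a generated sample to a common integrated
    luminosity (here ~12 fb^-1 = 12000 pb^-1, used for stacking and unfolding)."""
    return cross_section_pb * target_lumi_pb / n_generated

# Illustrative usage with values from the sample table:
w_ttbar  = luminosity_weight(178.6, 2_934_961)  # SM ttbar+jets sample
w_y0_700 = luminosity_weight(0.160, 500_000)    # y0 (700 GeV) signal sample
```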
Jets with two distance parameters 0.4 and 1.0 were reconstructed with the anti-$k_{t}$ algorithm [23], as implemented in the FastJet package [24], to form small-$R$ jets (small jets) and large-$R$ jets (large jets). The Delphes package has a built-in jet energy scale correction for jets, which is in our case used only as a pre-correction for the small jets, with a private jet energy scale correction applied on top of it for the small-$R$ jets and as a fully private correction for the large-$R$ jets. More details can be found in Appendix A and in the selection section: for the large-$R$ jets see Section 4.3, while for the small-$R$ jets see Section 4.4.

The cross-sections, process details and the generated numbers of events for samples generated by the MadGraph5 package; c.c. stands for the charge conjugation and $\ell$ for an electron or muon. A cut of $p_{\mathrm{T,SJ}}>20$ GeV is applied at the generator level. In the left column, values on the $y_{0}$ lines indicate its generated mass.

Sample | Cross-section [pb] | Generated process | Events
---|---|---|---
$Wbb$+jets | 153.40000 | $pp\rightarrow W^{+}+j,W^{+}\rightarrow\ell^{+}+\nu_{\ell}$+c.c. | 655,855
$WWbb$+jets | 180.30000 | $pp\rightarrow W^{+}W^{-}b\bar{b},W^{+}\rightarrow\ell^{+}+\nu_{\ell},W^{-}\rightarrow jj$+c.c. | 1,000,000
$y_{0}$ 1000 GeV | 0.031 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$y_{0}$ 900 GeV | 0.053 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$y_{0}$ 800 GeV | 0.091 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$y_{0}$ 700 GeV | 0.160 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$y_{0}$ 600 GeV | 0.270 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$y_{0}$ 500 GeV | 0.410 | $pp\rightarrow y_{0}\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 500,000
$t\bar{t}+$jets | 178.60000 | $pp\rightarrow t\bar{t},t\rightarrow bjj,\bar{t}\rightarrow\bar{b}\ell^{-}\bar{\nu}_{\ell}$+c.c. | 2,934,961

## 4 Object and event selection

Events considered in the analysis are reconstructed at two levels: once with the Delphes ATLAS-like detector simulation, forming detector-level spectra, and once at the particle level. The event selection and the requirements differ slightly between the reconstruction levels and between the boosted, semi-boosted, semi-boosted mixed and resolved topologies, and are described below. Unless stated otherwise, the same object and event selection applies to the particle-level objects and selections.

### 4.1 Missing transverse energy requirement

The missing transverse energy ($E_{\mathrm{T,miss}}$) is a measure of the energy imbalance in the plane transverse to the beam and is equal to the magnitude of the negative vector sum, in the transverse plane, of the energies of all objects leaving a calorimeter deposit. By definition this should equal zero thanks to momentum conservation in the transverse plane, but the energy taken away by the undetected neutrinos is not accounted for in the detector, and their contribution to the $\ell/\mu$ channels via leptonic $\tau$ decays is small.
The magnitude of the missing transverse energy is required to be $E_{\mathrm{T,miss}}>25$ GeV for all topologies as well as for both the detector and the particle levels. This ensures that only events in which neutrinos carry away a considerable amount of energy are chosen for the analysis. This is a standard requirement for the missing energy in most of top quark analyses in channels involving a charged lepton. ### 4.2 Lepton selection A requirement on the lepton (muon or electron) transverse momentum ensures the selected lepton comes from the hard process and a cut of $p_{\mathrm{T},\ell}>25$ GeV is used, a typical value in real experiment also due to trigger requirements. Tau leptons are not considered in this analysis as they decay before they enter the detector. In case more leptons fulfilling the ${p_{\rm T}}$ requirement, only the electron or muon with the highest transverse momentum is taken into account. This requirement is the same for all topologies. Charged leptons may radiate low energy photons which are highly collimated. E. g. for electrons the separation of such photons and the lepton is below the resolution of the detector and thus the photon energy is included by construction at the detector level. The lepton dressing procedure is performed at the particle level reconstruction to correct for this phenomenon, in which the photon four-vectors, fulfilling the condition of the angular separation threshold $\Delta R_{\mathrm{\gamma,\ell}}=\sqrt{\Delta\eta_{\mathrm{\gamma,\ell}}^{2}+\Delta\phi_{\mathrm{\gamma,\ell}}^{2}}<0.1$, are added to the lepton four-vector. ### 4.3 Large jet selection Jets are the experimental signatures of hadronic final states of quarks and gluons, which form particle showers entering the detector. In a typical collider detector, jet constituents are clustered energy deposits in calorimeters, or stable particles at the particle level. The jet four-vector is the result of the reconstruction with the anti-$k_{t}$ algorithm. We call jets reconstructed with a distance parameter $\Delta R=1$ as large jets or large-$R$ jets. A private jet energy scale correction is derived on the $t\bar{t}$ sample and applied to the detector level large jet before the selection in order to correct jet energies to the particle level. The magnitude of the jet energy scale correction is about 5% depending on jet pseudorapidity $\eta$222The pseudorapidity is defined as a function of the polar angle $\theta$ as $\eta\equiv-\ln\tan\frac{\theta}{2}$. and $p_{\mathrm{T}}$. In the event selection, the transverse momentum of large jets is required to be $p_{\mathrm{T,LJ}}>100$ GeV. This condition helps to reduce the number of events with jets not coming from top quark or $W$ decays. Furthermore, all large jets are considered in the pseudorapidity range $|\eta|<2.5$. This constraint ensures in practice better jet identification as the forward (large $|\eta|$) region is not well instrumented for tracking and has a worse energy resolution. The isolation criterion of jets to be isolated from the lepton ensures that the selected lepton is not contained within the large jet by following the requirement of $\Delta R_{\mathrm{LJ,\ell}}=\sqrt{\Delta\eta_{\mathrm{LJ,\ell}}^{2}+\Delta\phi_{\mathrm{LJ,\ell}}^{2}}>1$. All these requirements are applied to all three topologies333There is no large jet in the resolved topology. and both the detector and the particle levels. 
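As a summary of the large-jet preselection just described, the following minimal Python sketch applies the $p_{\mathrm{T}}>100$ GeV, $|\eta|<2.5$ and lepton-isolation requirements. The object attribute names (`pt`, `eta`, `phi`) are assumptions of this sketch and do not refer to the actual analysis code.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(deta^2 + dphi^2), with dphi wrapped to [-pi, pi)."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def select_large_jets(large_jets, lepton):
    """Large-R jet preselection applied in all topologies with a large jet:
    pT > 100 GeV, |eta| < 2.5 and isolation from the selected lepton DeltaR > 1.
    `large_jets` is assumed to be a list of objects with pt, eta, phi attributes."""
    selected = []
    for jet in large_jets:
        if jet.pt <= 100.0:
            continue
        if abs(jet.eta) >= 2.5:
            continue
        if delta_r(jet.eta, jet.phi, lepton.eta, lepton.phi) <= 1.0:
            continue
        selected.append(jet)
    return selected
```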
Each large jet is then probed for the top quark and $W$ boson tagging (see Appendix B for tagging and mistag efficiencies), first for the hypothesis as coming from the top quark decay, then, in the semi-boosted topology, as coming from the $W$ boson decay and in case none of the tagging was successful, the event is then considered as a candidate for the semi-boosted mixed or the resolved topology. Tagging for the boosted topology is based on the constraint on the mass of the large jet $110$ GeV$<M_{\mathrm{LJ}}<240$ GeV and a constraint combining the large jet mass and a jet substructure variable $\tau_{3,2}$ [25], roughly a consistency measure of finding three sub-jets inside the studied large jet rather than two sub-jets, as $M_{\mathrm{LJ}}/\tau_{3,2}>256$ GeV. The value for the second constraint was added to avoid background events, e.g. a large jet from the $W$ boson. The selection is depicted in Fig. 2 (right) by the area inside the dotted lines. The jet substructure variable $\tau_{\mathrm{N}}$ is defined as $\tau_{\mathrm{N}}=\frac{1}{d_{\mathrm{0}}}\sum_{k}p_{\mathrm{T},k}\min{R_{1,k},R_{2,k},...,R_{N,k}},$ (1) where $d_{0}$ is a normalization parameter computed as $d_{\mathrm{0}}=\Delta R\,\sum_{k}p_{\mathrm{T},k}$ (2) with $\Delta R$ being the jet distance parameter used (here 1.0). The subjettinesses are then combined into a ratio, such as $\tau_{2,1}=\frac{\tau_{2}}{\tau_{1}}$, which has the ability to distinguish the compatibility of the jet substructure with the one subjet hypothesis (ratio closer to unity) in comparison to the scenario with two subjets (ratio closer to zero). The substructure variable $\tau_{3,2}$ is defined in a similar manner and effectively compares the jet substructure consistency with two or three subjets. | ---|--- Figure 2: Distribution of the jet substructure variable $\tau_{2,1}$ (left) and $\tau_{3,2}$ (right) in the dependence on the large jet mass ($M_{\mathrm{LJ}}$) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV at the detector level. The large jets in the red dotted area are selected for the reconstruction of the boosted $W$ boson and the top quark. The mass window is set around the expected masses of the $W$ boson and top quark, respectively. Tagging of large jets for the semi-boosted topology is based on the large jet mass window $60$ GeV $<M_{\mathrm{LJ}}<120$ GeV and the jet substructure variable $\tau_{2,1}$, reflecting the consistency of the jet to contain two sub-jets inside the studied large jets rather than a single sub-jet, as $\tau_{2,1}<0.6$. The selection is shown in Fig. 2 (left). The large jet in the semi-boosted topology can be tagged as coming from the $W$ boson, but there is no expected peak in the large jet mass spectrum in the semi-boosted mixed topology, and thus it cannot be tagged based on its mass. The case of the semi-boosted mixed topology is also considered, where the isolated small jet does not originate from the $b$-quark but from a light quark from the $W$ boson decay. To ensure the consistency with a $b$-quark being a part of the large jet, there is a requirement on the large jet to overlap with a small jet originating from a $b$-quark by requiring the condition $\Delta R_{\mathrm{LJ,SJ}}=\sqrt{\Delta\eta_{\mathrm{LJ,SJ}}^{2}+\Delta\phi_{\mathrm{LJ,SJ}}^{2}}<0.5$ at the detector level, see Section 4.4 for further information about small jets. A similar condition is set at the particle level for the large jet to contain a $B$-hadron within $\Delta R_{\mathrm{LJ,B-had}}<0.5$. 
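The tagging logic of this subsection can be summarized by the following minimal Python sketch, which applies the mass-window and N-subjettiness criteria in the order described above (top-quark hypothesis first, then the $W$-boson hypothesis). The function name and inputs are illustrative assumptions, not the actual analysis implementation.

```python
def tag_large_jet(mass_gev, tau32, tau21):
    """Classify a selected large-R jet using the mass windows and N-subjettiness
    ratios described in the text.  Returns 'top', 'W' or None."""
    # Boosted top tag: mass window plus substructure consistency with 3 subjets.
    if 110.0 < mass_gev < 240.0 and tau32 > 0.0 and mass_gev / tau32 > 256.0:
        return "top"
    # Hadronic W tag (semi-boosted topology): mass window plus 2-subjet consistency.
    if 60.0 < mass_gev < 120.0 and tau21 < 0.6:
        return "W"
    # Neither tag: candidate for the semi-boosted mixed or the resolved topology.
    return None
```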
### 4.4 Small jets selection The general requirements for small jets (small-$R$ jets), which are jets reconstructed with a distance parameter $\Delta R=0.4$, are on the transverse momentum $p_{T}>25$ GeV, pseudorapidity $|\eta|<2.5$ and the isolation from the selected lepton $\Delta R_{\mathrm{SJ,\ell}}>0.5$. The jet energy scale correction is applied on the detector level small jet objects before the selection, which was derived on the $t\bar{t}$ sample on top of the Delphes default jet energy scale. The magnitude of this residual jet energy scale correction is about 2%. The identification of the small jets as coming from the $b$-quark, $b$-tagging, is done by the Delphes simulation at the detector level using the efficiency parameterization taken from [26], leading to the $b$-tagging efficiency of 67.4% (72.8%) for jets of ${p_{\rm T}}=50$ (100) GeV. At the particle level, $b$-tagging is done by the requirement of containing a $B$-hadron within the jet as $\Delta R_{\mathrm{SJ,B-had}}<0.2$ for $B$ hadrons of ${p_{\rm T}}>5$ GeV as recommended by the LHC Top WG [27]. #### 4.4.1 Small jet for the reconstruction of the leptonically decaying top quark The small jet for the reconstruction of the leptonically decaying top quark has to fulfill the angular condition $\Delta R_{\mathrm{SJ,\ell}}<2$, which ensures that it lies in the vicinity of the selected lepton, and must be $b$-tagged. The condition of a large jet isolation from the lepton $\Delta R_{\mathrm{SJ,LJ}}>1.5$ applies to all topologies with the exception of the resolved topology, where there is no large jet. Such a selected small jet is then removed from the jet collection and from further consideration. This is the only selected small jet in case of the boosted topology. #### 4.4.2 Small jet for the hadronically decaying top quark, semi-boosted topology For the reconstruction of the hadronically decaying top quark in the semi- boosted topology a small $b$-tagged jet is required in the vicinity to the selected large jet $1<\Delta R_{\mathrm{SJ,LJ}}<1.5$. Thus a partial overlap between the selected large jet and the considered small jet is allowed, _i.e._ the selected $b$-tagged small jet should be partially contained in the selected large jet. #### 4.4.3 Small jet for the hadronically decaying top quark, semi-boosted mixed topology The conditions for the semi-boosted mixed topology are similar to the conditions for the semi-boosted topology. The vicinity condition to the large jet remains unchanged but the small jet is required not to be $b$-tagged while a $b$-tag is required for the selected large jet. #### 4.4.4 Small jet for the hadronically decaying top quark, resolved topology The resolved topology selection is tried as the last option before the event is discarded. The reconstruction of the hadronically decaying top quark in the resolved topology requires three small jets, one of them $b$-tagged. The algorithm first takes two small non-$b$-tagged jets with the highest transverse momentum and tests their invariant mass $M_{\mathrm{SJ,SJ}}<120$ GeV to avoid dijets not corresponding to the mass of the $W$ boson. Then it adds the four-vector of the remaining $b$-tagged jet444One $b$-tagged jet is used in the reconstruction of the leptonically decaying top quark.. If all such three jets are found, the event is accepted. 
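For illustration, the resolved-topology search for the hadronically decaying top quark described above can be sketched as follows. The jet attribute names (`pt`, `p4`, `is_btagged`) are assumptions of this sketch, and it is assumed that the $b$-tagged jet already used for the leptonic top quark has been removed from the input collection, as described in Section 4.4.1.

```python
import math

def invariant_mass(p4s):
    """Invariant mass of a sum of four-vectors given as (E, px, py, pz) tuples."""
    E  = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def resolved_hadronic_top(small_jets):
    """Resolved-topology hadronic top reconstruction: the two highest-pT
    non-b-tagged small jets form the W candidate (required M < 120 GeV), and the
    remaining highest-pT b-tagged jet is added.  Returns the three selected jets
    or None if the event fails the selection."""
    light = sorted((j for j in small_jets if not j.is_btagged),
                   key=lambda j: j.pt, reverse=True)
    bjets = sorted((j for j in small_jets if j.is_btagged),
                   key=lambda j: j.pt, reverse=True)
    if len(light) < 2 or not bjets:
        return None
    w_jets = light[:2]
    if invariant_mass([j.p4 for j in w_jets]) >= 120.0:
        return None
    return w_jets + [bjets[0]]
```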
## 5 Reconstruction The events passing the selection described in the previous chapter are entering the top anti-top quark pair four-vector reconstruction described in this section and illustrated on the $y_{0}\rightarrow t\bar{t}$ sample with $M_{\mathrm{y_{0}}}=700$ GeV, although all samples were processed the same way. ### 5.1 Leptonically decaying top quark The reconstruction of the leptonically decaying top quark is the same for all four studied topologies, starting with setting the transverse momentum of the neutrino ($p_{\mathrm{T,\nu}}$) with the missing transverse energy $E_{\mathrm{T,miss}}$. The missing energy together with the four-vector of the selected lepton is used to calculate the longitudinal component of the neutrino momentum ($p_{z,\nu}$) from the $W$ boson mass constraint $M_{W}=M_{\ell\nu}$, which leads to a quadratic equation with two solutions in general. The solution which leads to the more central neutrino in the rapidity is accepted in the reconstruction. If the solution leads to a complex number result, the imaginary part is discarded. This procedure is often used in top quark analyses, e.g. in the ATLAS experiment [1]. The $W$ boson is reconstructed as the sum of four-vectors of the lepton and the reconstructed neutrino. Finally, the top quark four-vector is formed from the reconstructed $W$ boson four-vector and the selected $b$-tagged small jet as described in Section 4.4.1. The mass of the reconstructed leptonically decaying top quark in the studied topologies is shown in Fig. 3 at both the detector and particles levels. | ---|--- Figure 3: Comparison between topologies for the shapes of the reconstructed leptonically decaying top quark mass ($M_{\mathrm{t,lep}}$) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV at the particle (left) and the detector (right) levels. The vertical dashed lines indicate the position of the maximum value in the spectrum for each of the topologies, which happens to be the same bin for all topologies at the particle level. ### 5.2 Hadronically decaying top quark, boosted topology The recognition of the boosted event is done by selecting a large jet fulfilling conditions specified in Section 4.3. Since there is no reconstructed $W$ boson candidate in this case, the top-tagged large jet is considered to contain most of the products coming from the top quark decay. To verify this, the mass of the large jet corresponding to the reconstructed hadronically decaying top quark is shown in Fig. 4 at both the detector and the particle levels, showing a peak around the top quark mass. ### 5.3 Hadronically decaying top quark, semi-boosted topology The reconstruction of the hadronically decaying top quark in the semi-boosted topology uses the selected $W$-tagged large jet and one small $b$-tagged jet, fulfilling the conditions mentioned in Section 4.4.2. The large jet is considered as the hadronically decaying $W$ boson, with its mass shown in Fig. 5. The reconstructed top quark is formed by adding the selected $b$-tagged small jet four-vector and its mass is shown in Fig. 4, again exhibiting the expected peak which is sharper at the particle level due to finite detector resolution. ### 5.4 Hadronically decaying top quark, semi-boosted mixed topology The reconstruction of the hadronically decaying top quark in the semi-boosted mixed topology is performed by summing one large jet and one non-$b$-tagged small jet four-vectors, see Section 4.4.3 for details. 
The invariant mass of such a large jet does not produce a peak but in the combination with the selected small jet the resulting invariant mass peak should correspond to the one of the top quark as shown in Fig. 4. ### 5.5 Hadronically decaying top quark, resolved topology The resolved topology has the largest combinatorial ambiguity as it involves largest multiplicity of objects for the reconstruction, starting with the $W$ boson reconstruction from two highest transverse momentum small jets which are not tagged as $b$-jets. The reconstructed $W$ boson candidate mass, shown in Fig. 5, is required to be 60–120 GeV, otherwise the event is discarded. The third selected small jet for the reconstruction in the resolved topology is required to be $b$-tagged and is added to the reconstructed $W$ boson forming finally the four-vector of the hadronically decaying top quark with its mass shown in Fig. 4. | ---|--- Figure 4: Comparison between topologies for the hadronically decaying top quark mass ($M_{\mathrm{t,had}}$) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV at the particle (left) and the detector (right) level. The vertical dashed lines indicate the position of the maximum value in the spectrum for each of the topologies. | ---|--- Figure 5: Comparison between topologies for the shapes of the reconstructed hadronically decaying $W$ boson mass ($M_{\mathrm{W,had}}$) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV at the particle level (left) and the detector level (right). The vertical dashed lines indicate the position of the maximum value in the spectrum for each of the topologies. ### 5.6 Top anti-top quark pair system The four-vector of the top anti-top quark (${t\bar{t}}{}$) pair system is reconstructed as the combination of the leptonically and hadronically decaying top quarks. Its mass and the contributing fractions of events from the particular topologies are shown in Fig. 6. The fractions depend on the mass of the hypothetical $y_{0}$ particle as shown in Fig. 7. This plot illustrates the existence of the transition region between the resolved and the boosted topology which was mentioned in Section 1 and which benefits from the implementation of the semi-boosted and semi-boosted mixed topologies via their non-negligible event fractions. | ---|--- Figure 6: Contributions of different topologies to the reconstruction of the ${t\bar{t}}$ invariant mass ($M_{\mathrm{t\bar{t}}}$) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV in descendant order: boosted (blue), semi-boosted (red), semi-boosted mixed (purple), and resolved (green) at the particle (left) and the detector (right) level. The corresponding percentage is presented in the legend for each topology. --- Figure 7: The fraction of events contributing to the $t\bar{t}$ reconstruction from each topology over samples with various masses of the hypothetical $y_{0}$ particle ($M_{y_{0}}$) at the detector (solid lines, full markers) and particle (dotted lines, open markers) levels. ### 5.7 Migration of events between topologies The kinematic reconstruction at the detector and particle levels is accomplished in parallel under the same selection requirements but the resulting event topologies are not necessarily the same at the two reconstruction levels. This is described by a migration matrix between the two levels in terms of the topologies as illustrated in Fig. 8 (left). Similar migrations further apply to values and bins of any studied observable. 
The example for the reconstructed ${t\bar{t}}{}$ pair transverse momentum ($p_{\mathrm{T,t\bar{t}}}$) is shown in Fig. 8 (right), this plot also shows migration of events between different bins. A matching condition, requiring the same topology at the detector and particle levels, is applied for the purpose of unfolding and only those events are taken into account to study migration between bins in selected spectra in the subsequent unfolding procedure. | ---|--- Figure 8: The migration of events between the detector and the particle levels between the topologies (left) and an example of the migration of events for the transverse momentum of the $t\bar{t}$ system $p_{\mathrm{T,t\bar{t}}}$ between the resolved (R), semi-boosted mixed (SBM), semi-boosted (SB) and boosted (B) topologies (right) for the sample with $M_{\mathrm{y_{0}}}=700$ GeV. The bin range for each of the sub-spectrum is 0–1000 GeV. ## 6 Results The results are summarized in this section consisting of the resolution of different samples, the performance of the unfolding process for selected variables in dedicated stacked samples; and the significance studies of a Beyond the Standard Model (BSM) signal over the Standard Model background before and after unfolding. ### 6.1 Resolution of the ${t\bar{t}}{}$ resonance mass peak As expected for the $y_{\mathrm{0}}\rightarrow t\bar{t}$ BSM process, the reconstructed $t\bar{t}$ mass ($M_{\mathrm{t\bar{t}}}$) spectrum peaks around the value of the mass of the studied hypothetical particle $y_{0}$ by construction. The width of the distribution is a measure of the resolution of the $t\bar{t}$ resonance mass in given topology. A Gaussian curve was used to determine its width at both the detector and the particle levels and is shown in Fig. 9 as absolute (top) as well as relative (bottom), i.e. when divided by the $y_{0}$ mass in the corresponding sample. | ---|--- Figure 9: The comparison of the $t\bar{t}$ system mass resolution for the samples with different masses of the hypothetical particle $y_{0}$ in the particular topologies the absolute (left) and relative (right) resolution with respect to $M_{\mathrm{y_{0}}}$. The horizontal shift in the position of markers around each mass point in the right plot is on the purpose to avoid the loss of information due to their overlap. The relative resolution is comparable between all the topologies with the exception for the semi-boosted mixed topology at the detector level where the resolution is slightly worse, being approximately 15% while the resolution of the other topologies is 9–13%. ### 6.2 The unfolding procedure and significance tests The unfolding procedure corrects for finite detector resolution effects in the measured spectra. The Fully Bayesian Unfolding (FBU) [28] was chosen for the purposes of this procedure as implemented with the PyMC3 package [29]. In short, FBU uses the Bayesian theorem to estimate the truth (here particle) level spectrum from the measured detector-level spectrum using the migration matrix. As such, the matrix must be evaluated using simulated events and is normalized so that it describes the migration of events in a selected spectrum from particle-level to detector level bins. FBU returns a binned posterior corresponding to the estimated probability distribution of the variable in each particle-level bin. Its maximum can be chosen as the truth-level estimate for the observable in given bin. 
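To make the idea of Fully Bayesian Unfolding concrete, the following toy Python sketch samples the posterior of the truth-level bin contents with a simple Metropolis random walk and a Poisson likelihood. It is purely pedagogical: the actual analysis uses the FBU implementation based on the PyMC3 package, and the flat prior, step size, square migration matrix and matrix conventions here are assumptions of the sketch.

```python
import numpy as np

def fbu_toy_posterior(data, migration, n_samples=20000, step=5.0, seed=1):
    """Toy illustration of Fully Bayesian Unfolding: sample the posterior of the
    truth-level bin contents T given detector-level data D, a migration matrix
    M[i, j] = P(detector bin j | truth bin i) and a flat positive prior, using a
    Metropolis random walk with a Poisson likelihood.  The same binning is
    assumed at both levels."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    migration = np.asarray(migration, float)
    truth = data.copy()                       # start from the measured spectrum

    def log_like(t):
        if np.any(t < 0):                     # flat prior on non-negative yields
            return -np.inf
        mu = migration.T @ t                  # expected detector-level yields
        return float(np.sum(data * np.log(mu + 1e-12) - mu))

    samples, ll = [], log_like(truth)
    for _ in range(n_samples):
        proposal = truth + rng.normal(0.0, step, size=truth.shape)
        ll_new = log_like(proposal)
        if np.log(rng.uniform()) < ll_new - ll:
            truth, ll = proposal, ll_new
        samples.append(truth.copy())
    return np.asarray(samples)                # one posterior sample per row
```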
Among the main advantages of this method are that the migration matrix does not need to be inverted (which is a numerically unstable task), nor does the problem need to be regularized and thereby modified, as happens e.g. in the singular value decomposition method [30]. Further advantages are the absence of iterations (and of a need to terminate them at some point), as in the iterative Bayesian unfolding [31]; and the full control over the result, as the full probability density is revealed in each bin. In general, the unfolding process can be described by the following formula

$\hat{T}_{\mathrm{i}}=\frac{1}{f_{\mathrm{i,eff}}}M_{\mathrm{ij}}^{-1}f_{\mathrm{j,acc}}(D_{\mathrm{j}}-B_{\mathrm{j}})\,,$ (3)

where $\hat{T}_{i}$ is the estimate of the particle level spectrum in bin $i$, $f_{\mathrm{i,eff}}$ is the efficiency correction, $M_{\mathrm{ij}}^{-1}$ stands for the main unfolding procedure using the migration matrix $M_{\mathrm{ij}}$ (Fully Bayesian Unfolding does not use the inverted migration matrix for the estimation of the particle level spectra; the notation in the formula is a shorthand for the unfolding procedure in general), $f_{\mathrm{j,acc}}$ is the acceptance correction in detector level bin $j$, $D_{\mathrm{j}}$ is the measured detector level spectrum and $B_{\mathrm{j}}$ is the estimated background. The efficiency and acceptance correction factors are defined as

$f_{\mathrm{i,eff}}=\frac{P_{\mathrm{t\bar{t},i}}^{\mathrm{match}}}{P_{\mathrm{t\bar{t},i}}}\quad\mathrm{and}\quad f_{\mathrm{j,acc}}=\frac{D_{\mathrm{t\bar{t},j}}^{\mathrm{match}}}{D_{\mathrm{t\bar{t},j}}}\,,$ (4)

where, in the context of this study, $P_{\mathrm{t\bar{t},i}}$ is the particle-level spectrum in bin $i$ using the $t\bar{t}$ sample as the model process, while $P_{\mathrm{t\bar{t},i}}^{\mathrm{match}}$ is the particle level spectrum in bin $i$ for events matched to the detector level events, _i.e._ only events reconstructed at both particle and detector levels in the same topology contribute to this spectrum. Similarly, $D_{\mathrm{t\bar{t},j}}$ is the detector-level spectrum in bin $j$ and $D_{\mathrm{t\bar{t},j}}^{\mathrm{match}}$ is the detector level spectrum in bin $j$ for events matched to the particle level. Both correction factors are in the range between 0 and 1 by definition and were prepared using a $t\bar{t}$ sample statistically independent of the ${t\bar{t}}$ sample used to compose the spectra to be unfolded. The test spectra entering the unfolding procedure have the addition of the $y_{0}$ signal sample with its cross section amplified by an ad hoc factor of $10^{3}$ ($5\cdot 10^{3}$ in case of the resolved topology) to study the impact of the unfolding on the strength of a well-present signal of similar significance in all topologies.

### 6.3 Unfolding selected spectra

Unfolding of three selected spectra is described in this section, using the semi-boosted topology as an example, namely the transverse momentum of the hadronically decaying top quark ($p_{\mathrm{T}}^{\mathrm{t,had}}$), the invariant mass of the reconstructed top anti-top quark pair ($M_{\mathrm{t\bar{t}}}$) and the production angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$). The simulated samples were divided into two statistically independent halves. The sum of the first halves is taken as the pseudo-data serving as input into the unfolding procedure, while their statistically independent counterparts are stacked to form the expected total prediction.
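A minimal sketch of the correction factors of Eq. (4) and of the background-subtracted, acceptance-corrected input to the unfolding of Eq. (3) is given below; the histogram inputs are plain arrays and the function names are assumptions of this sketch, not the actual analysis code.

```python
import numpy as np

def unfolding_corrections(particle_all, particle_matched, detector_all, detector_matched):
    """Bin-by-bin efficiency and acceptance corrections of Eq. (4) from
    histogram arrays of the ttbar model sample.  'Matched' means the event is
    reconstructed in the same topology at both levels; empty bins are left at
    zero to avoid division by zero."""
    P, Pm = np.asarray(particle_all, float), np.asarray(particle_matched, float)
    D, Dm = np.asarray(detector_all, float), np.asarray(detector_matched, float)
    f_eff = np.divide(Pm, P, out=np.zeros_like(P), where=P > 0)
    f_acc = np.divide(Dm, D, out=np.zeros_like(D), where=D > 0)
    return f_eff, f_acc

def unfolding_input(detector_data, background, f_acc):
    """Background-subtracted, acceptance-corrected detector-level spectrum that
    enters the unfolding step of Eq. (3): f_acc * (D - B)."""
    return f_acc * (np.asarray(detector_data, float) - np.asarray(background, float))
```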
The comparison between the pseudo-data (black markers) and stacked histograms of the prediction (filled) for the transverse momentum of the hadronically decaying top quark in the semi-boosted topology is shown in Fig. 10 (left). The histogram and the stacked spectra agree well, their difference is within the statistical uncertainties. The binning was selected to respect the resolution, falling statistics, and so that the unfolding proceeds fast enough but still delivers useful information about spectra shape. The $t\bar{t}$ system invariant mass spectrum was chosen due to the possibility to see the hints of events from the sample with the hypothetical particle $y_{0}$. The spectrum entering the unfolding procedure and its statistically independent counterpart is shown in Fig. 10 (right). The production angle of the hadronically decaying top quark was chosen to study the performance of this variable against the different topologies as an example of an angular variable. The general assumption is that boosted topology has jets preferably in smaller $|\eta|$ range due to the usually larger transverse momentum of the particles. This trend of the central $\eta$ preference drops in semi-boosted topologies and diminishes in the resolved topology. The spectrum used in unfolding and its statistically independent counterpart is also shown in Fig. 10 (bottom). The corrections used in the unfolding procedure were evaluated with the statistically independent sample from the one forming the pseudo-data using the $t\bar{t}$ sample. The corresponding acceptance and efficiency corrections are shown in Fig. 11 and the migration matrices are presented in Fig. 12. The unfolding results for the three spectra are shown in Fig. 13. A $\chi^{2}$ test was computed between the unfolded and the $t\bar{t}$ particle level spectra with the $y_{0}$ signal included, resulting in $\chi^{2}_{\mathrm{t\bar{t}+y_{0}}}$. As the input pseudo-data do contain the $y_{0}$ signal, this comparison is a closure test of the unfolding procedure ability to recover the particle level spectrum which consists of ${t\bar{t}}{}$ as well as the $y_{0}$ signal sample. Another $\chi^{2}$ test was performed between the unfolded spectrum and the $t\bar{t}$-only particle level spectrum, resulting in $\chi^{2}_{\mathrm{t\bar{t}}}$. This tests evaluates the incompatibility of the unfolded pseudo-data with the ${t\bar{t}}$-only hypothesis. The middle panels of Fig. 13 show the ratio of the unfolded spectra over the particle level spectra from the $t\bar{t}$ sample (black full markers)where the disagreement with the $t\bar{t}$-only particle level spectrum is caused by the presence of the $y_{0}$ sample in the pseudo-data. | ---|--- Figure 10: Comparison of the detector level spectra from two statistically independent parts (full markers and filled stack) for the $t\bar{t}$ sample with the addition of $Wbb$ and $WWbb$ backgrounds and an admixture of events from the sample with $M_{\mathrm{y_{0}}}=700$ GeV for the transverse momentum of the hadronically decaying top quark ($p_{\mathrm{T}}^{\mathrm{t,had}}$, left), for the top anti-top quark pair invariant mass ($M_{\mathrm{t\bar{t}}}$, right) and for the crossing angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$, bottom), all spectra are reconstructed in the semi-boosted topology. The hatched bands in the top panel represent the statistical uncertainty in each sample. 
| ---|--- Figure 11: The acceptance and the efficiency for the two statistically independent $t\bar{t}$ samples for the reconstructed hadronically decaying top quark transverse momentum ($p_{\mathrm{T}}^{\mathrm{t,had}}$, left), the invariant mass of the reconstructed $t\bar{t}$ system ($M_{\mathrm{t\bar{t}}}$, right) and for the production angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$, bottom). All spectra are reconstructed in the semi-boosted topology. Indices 1 and 2 denote the two statistically independent samples. | ---|--- Figure 12: The migration matrices for the reconstructed transverse momentum of the hadronically decaying top quark ($p_{\mathrm{T}}^{\mathrm{t,had}}$, left) for the reconstructed invariant mass of $t\bar{t}$ system ($M_{\mathrm{t\bar{t}}}$, right) and for the crossing angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$, bottom), all in the semi-boosted topology. All matrices were derived from the $t\bar{t}$ sample and used in the unfolding procedure. The correlation factor $\rho$ is calculated between the detector and particle levels. | ---|--- Figure 13: The comparison between the unfolded spectrum (black full markers), the detector level spectrum (blue) with the acceptance and efficiency correction applied to be comparable to the particle level one; and the $t\bar{t}$-only particle level spectrum (red) for the transverse momentum of the hadronically decaying top quark ($p_{\mathrm{T}}^{\mathrm{t,had}}$, top left), for the invariant mass of the reconstructed $t\bar{t}$ spectrum ($M_{\mathrm{t\bar{t}}}$, top right) and for the production angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$, bottom), all in the semi-boosted topology. The $\chi^{2}_{\mathrm{t\bar{t}}}$ test is performed between the unfolded and $t\bar{t}$ particle level spectra while $\chi^{2}_{\mathrm{t\bar{t}+y_{0}}}$ between the unfolded and the $t\bar{t}$ particle level spectra with the $y_{0}$ signal included (closure test). The middle panels show the ratio of the unfolded spectrum over the particle level spectrum of the $t\bar{t}$ sample (full markers). Here the disagreement with the $t\bar{t}$-only particle level spectrum is caused by the addition of the $y_{0}$ signal before unfolding, the yellow band shows the statistical uncertainty in the particle level spectrum from the $t\bar{t}$ sample. The bottom panels show the detector (open markers, dashed line) and unfolded (full markers, solid line) $y_{0}$ signal significances in each bin. The detector- level $y_{0}$ signal significance is calculated using the original detector- level spectrum, _i.e._ without applying the acceptance and efficiency correction. ### 6.4 Significance at the detector and unfolded levels The strength of the $y_{0}$ signal is quantified by the significance which considers the total statistical uncertainties in samples used in given bin. 
The significance $S$ in bin $i$ before unfolding is defined as

$S_{\mathrm{i,det}}\equiv(P_{\mathrm{i}}^{t\bar{t}+y_{0}+B}-T_{\mathrm{i}}^{t\bar{t}}-B_{\mathrm{i},1}-\ldots-B_{\mathrm{i},k})/\sqrt{\sigma_{P_{\mathrm{i}}}^{2}+\sigma_{T_{\mathrm{i}}}^{2}+\sigma_{B_{\mathrm{i},1}}^{2}+\ldots}\,,$ (5)

where $P_{\mathrm{i}}$ is the number of pseudo-data events in bin $i$, consisting of the signal and the background added to the expected $t\bar{t}$ sample; $T_{\mathrm{i}}$ is the detector level spectrum from the statistically independent $t\bar{t}$ sample, $B_{\mathrm{i},k}$ is the background contribution to the studied spectra from the $k$-th background sample; and $\sigma_{P_{\mathrm{i}}}$, $\sigma_{T_{\mathrm{i}}}$ and $\sigma_{B_{\mathrm{i},k}}$ are the statistical uncertainties in the pseudo-data, $t\bar{t}$ and the $k$-th background samples, respectively (all at the detector level). The composition of the sample is denoted in the upper index, _e.g._ $t\bar{t}+y_{0}+B$ stands for the sum of the background, $t\bar{t}$ and the $y_{0}$ samples. Systematic uncertainties are not part of this study as their effect would be largely coherent across topologies, thus changing neither the conclusions nor the hierarchy of the observed patterns.

A similar significance is defined after the unfolding procedure, before which the background was subtracted, as

$S_{\mathrm{i,unf}}\equiv(U_{\mathrm{i}}^{t\bar{t}+y_{0}}-T_{\mathrm{i}}^{t\bar{t}})/\sqrt{\sigma_{U_{\mathrm{i}}}^{2}+\sigma_{T_{\mathrm{i}}}^{2}}\,,$ (6)

where $U_{\mathrm{i}}$ is the number of the unfolded pseudo-data events in bin $i$, $T_{\mathrm{i}}$ is the particle level spectrum from the statistically independent $t\bar{t}$ sample; and $\sigma_{U_{\mathrm{i}}}$ and $\sigma_{T_{\mathrm{i}}}$ are the statistical uncertainties in the unfolded spectrum bin $i$ and in the statistically independent $t\bar{t}$ sample at the particle level, respectively. The binned detector and unfolded significances are shown under the ratio plots of the unfolded spectra in Fig. 13.

The integral significance, i.e. the strength of the signal over the whole spectrum, is defined similarly for both the detector and the unfolded level. The detector level integral significance is defined as

$S_{\mathrm{I,det}}\equiv\sum_{i=0}^{m}(P_{\mathrm{i}}^{t\bar{t}+y_{0}+B}-T_{\mathrm{i}}^{t\bar{t}}-B_{\mathrm{i},1}-\ldots-B_{\mathrm{i},k})/\sqrt{\sum_{i=0}^{m}(\sigma_{P_{\mathrm{i}}}^{2}+\sigma_{T_{\mathrm{i}}}^{2})}\,,$ (7)

where $m$ is the number of bins in the given spectrum. The detector level integral significance is the same for all variables. The integral significance at the unfolded level is defined as

$S_{\mathrm{I,unf}}\equiv\sum_{i=0}^{m}(U_{\mathrm{i}}^{t\bar{t}+y_{0}}-T_{\mathrm{i}}^{t\bar{t}})/\sqrt{\sum_{i=0}^{m}(\sigma_{U_{\mathrm{i}}}^{2}+\sigma_{T_{\mathrm{i}}}^{2})}\,.$ (8)

The unfolded integral significance varies slightly between spectra, as the unfolding procedure may not preserve the integral of the spectrum. The values of both the detector and the unfolded integral significance are presented in the legend in Fig. 14. The significances were calculated for the three selected spectra in all topologies and at both detector and particle levels. The comparison between significances before and after the unfolding procedure over the studied topologies is shown in Fig. 14 for the spectra of $p_{\mathrm{T}}^{\mathrm{t,had}}$, $M_{\mathrm{t\bar{t}}}$ and $\cos\theta^{*}_{\mathrm{t,had}}$.
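The binned and integral detector-level significances of Eqs. (5) and (7) can be computed as in the following minimal Python sketch; the unfolded-level analogues of Eqs. (6) and (8) follow by dropping the background terms. The function names and array-based inputs are assumptions of this sketch.

```python
import numpy as np

def binned_significance_det(pseudo_data, expected_tt, backgrounds,
                            sig_pseudo, sig_tt, sig_bkgs):
    """Binned detector-level significance of Eq. (5): per-bin excess over the
    expected ttbar-plus-background prediction divided by the combined
    statistical uncertainty.  All inputs are per-bin numpy arrays; `backgrounds`
    and `sig_bkgs` are lists with one array per background sample."""
    excess = pseudo_data - expected_tt - sum(backgrounds)
    variance = sig_pseudo**2 + sig_tt**2 + sum(s**2 for s in sig_bkgs)
    return excess / np.sqrt(variance)

def integral_significance_det(pseudo_data, expected_tt, backgrounds,
                              sig_pseudo, sig_tt):
    """Integral detector-level significance of Eq. (7): summed excess over the
    square root of the summed variances, as written in the formula."""
    excess = np.sum(pseudo_data - expected_tt - sum(backgrounds))
    return excess / np.sqrt(np.sum(sig_pseudo**2 + sig_tt**2))
```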
The significance uncertainties were estimated using 100 pseudo-experiments for each spectrum, with a smeared content in each detector-level bin. The smearing was performed by drawing a random number from a Gaussian distribution with the $\sigma$ parameter equal to the statistical uncertainty in the total detector-level spectrum in the given bin and with the mean parameter set to zero. Each such spectrum was unfolded using the same procedure and corrections, and the binned significances were evaluated. The resulting standard deviation of the significances in each bin is considered as the statistical uncertainty in the unfolded significance. The statistical uncertainty of the significances is presented as error bars in Fig. 14.

Figure 14: The detector (open markers, dashed line) and unfolded (full markers, solid line) significances for the transverse momentum of the hadronically decaying top quark ($p_{\mathrm{T}}^{\mathrm{t,had}}$, top left), invariant mass of the reconstructed $t\bar{t}$ pair ($M_{\mathrm{t\bar{t}}}$, top right) and the crossing angle of the hadronically decaying top quark ($\cos\theta^{*}_{\mathrm{t,had}}$, bottom) plotted for all the topologies in each bin. The orange band defines the area where the absolute value of the significance is below three, corresponding to the 3-$\sigma$ interval. The lower pads present the ratios of the unfolded over the detector level significances, without uncertainties which are highly correlated.

The signal significance in the $p_{\mathrm{T}}^{\mathrm{t,had}}$ spectrum peaks at different values of $p_{\mathrm{T}}^{\mathrm{t,had}}$ depending on the topology, as the event selection in each topology biases the spectrum and effectively selects different ranges in $M_{\mathrm{t\bar{t}}}$, too. On the other hand, the significance in the $M_{\mathrm{t\bar{t}}}$ spectrum peaks around the value of the generated $y_{0}$ mass of 700 GeV as expected, with a slight tail to lower values in the resolved topology, which is the least suitable one to reconstruct a resonance of such a large mass. In contrast, the $\cos\theta^{*}_{\mathrm{t,had}}$ spectrum is very flat also for the signal sample and there is no clear isolated excess of signal events in this spectrum, with the exception of the boosted topology, which selects, by construction, high-${p_{\rm T}}$ large jets and thus also top quarks that are more localized in the central rapidity region, producing a peak around zero in $\cos\theta^{*}_{\mathrm{t,had}}$. The three selected spectra are thus good candidate observables to illustrate the different behavior and spread of significances over bins, also presenting a selection of observables with the dimension of energy as well as dimensionless (angular) ones.

The binned significances are in general lower after the unfolding, for which an explicit proof is delivered by this study. The cause of this is as follows. While a sharper spectrum may be recovered by unfolding, the procedure in general correlates information among bins: by maximizing a likelihood function in the case of FBU, by minimizing a (possibly modified and regularized) $\chi^{2}$, or by iterating and sequentially improving the result in the case of other methods. An increase in the correlation across bins of the unfolded spectrum is a known and important fact, and a correlation matrix should preferably be published along with unfolded spectra from real experiments, as done _e.g._ in [32, 33] where a correlation matrix between the observables was also evaluated.
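The pseudo-experiment procedure described at the beginning of this subsection can be sketched as follows; the callable `unfold_and_significance`, which is assumed to wrap the unfolding and the binned significance evaluation, is a placeholder of this sketch rather than part of the actual analysis code.

```python
import numpy as np

def unfolded_significance_uncertainty(detector_spectrum, stat_unc,
                                      unfold_and_significance,
                                      n_pseudo=100, seed=0):
    """Estimate the statistical uncertainty of the unfolded binned significance
    with pseudo-experiments: each detector-level bin is smeared with a Gaussian
    of width equal to its statistical uncertainty (zero mean shift), the smeared
    spectrum is unfolded with the same corrections, and the spread of the
    resulting significances over pseudo-experiments is taken per bin."""
    rng = np.random.default_rng(seed)
    detector_spectrum = np.asarray(detector_spectrum, float)
    stat_unc = np.asarray(stat_unc, float)
    results = []
    for _ in range(n_pseudo):
        smeared = detector_spectrum + rng.normal(0.0, stat_unc)
        results.append(unfold_and_significance(smeared))
    return np.std(np.asarray(results), axis=0)
```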
We observe that in the case of the FBU method the posterior variance usually increases, leading to larger absolute as well as relative uncertainties of the unfolded spectrum w.r.t. the particle level one. This effectively decreases the significance of the observed signal excess. An increase of the statistical uncertainties with the number of iterations in the case of the Iterative Bayesian Unfolding [31] was also reported by other analyses [34]. We note that in our case the BSM signal significances decrease by 20–40%. Other, more explicit regularization methods like the SVD [30] provide a spectrum with a smaller statistical uncertainty by definition (effectively discarding small regularized response matrix eigenvalues which would lead to large variations), but are prone to unfolding biases towards the underlying simulation spectrum. Also the FBU extension with a regularization term leads to narrower posteriors (smaller statistical uncertainty) [35]. This places the standard FBU (without an explicit regularization term) among high-level unfolding methods with realistic statistical uncertainties.

## 7 Conclusions

The results of the semi-boosted and semi-boosted mixed reconstruction algorithm show potential to enhance the number of events in $t\bar{t}$ analyses in the semi-leptonic decay channel. The estimates show an enrichment in events between 20% and 50% in the $t\bar{t}$ pair mass region ranging from 500 GeV to 1000 GeV. The resolution in the semi-boosted topology and the resolved or boosted topology is comparable; only the semi-boosted mixed topology has a worse resolution, roughly by a factor of 1.5. The performance of the unfolding procedure including a simple background model shows results corresponding well to the particle level, and the significance of the enhanced signal of the hypothetical $y_{0}$ particle is still visible after the unfolding. Values of the detector and the unfolded integral significance are comparable, yet there is a 20–30% decrease in significance between the detector and the unfolded levels caused by the unfolding in the reconstructed $t\bar{t}$ mass spectrum, and a 5–20% decrease in the reconstructed transverse momentum of the hadronically decaying top quark spectrum and in the production angle of the top quarks, i.e. both for energy-dependent as well as angular variables.

The concrete proof of a BSM signal significance being diminished by the unfolding procedure (using FBU) is, to our knowledge, shown explicitly for the first time in this study. Our findings thus help explain why the model-dependent searches for new physics at the LHC are mostly done at the detector level, using concrete detector-level (fully-simulated) BSM signals. We attribute the decrease of the significance to the increase of the statistical uncertainty in the unfolded spectrum, _i.e._ to the posterior widening in the case of the FBU method. Due to this, although the $M_{{t\bar{t}}}$ spectrum becomes sharper after unfolding, revealing a narrower peak of the $y_{0}$ resonance, the binned significance does not increase, due to the larger statistical uncertainties of the unfolded spectrum caused by unfolding-induced correlation across bins. On the other hand, the significance stays substantial even after unfolding, opening doors to a comparison of any theory prediction at the particle level to the unfolded data. The described algorithm proves that the semi-boosted and semi-boosted mixed topologies are sensitive to the possible presence of BSM signals.
The selection criteria chosen close to those in real analyses make the studied algorithms applicable also in current LHC experiments. ## 8 Acknowledgments The authors gratefully acknowledge the support from the Czech Science Foundation project GAČR 19-21484S and project IGA_PrF_2021_004 of the Faculty of Science of the Palacky University Olomouc, Czech Republic. This work was performed as a part of fulfillment of doctoral studies of J. Pacalt at the Applied physics programme at the Faculty of Science of the Palacky University. ## References * [1] G. Aad et al. The ATLAS Experiment at the CERN Large Hadron Collider. JINST, 3:S08003, 2008. * [2] Lyndon R Evans and Philip Bryant. LHC Machine. JINST, 3:S08001. 164 p, 2008. This report is an abridged version of the LHC Design Report (CERN-2004-003). * [3] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi. DELPHES 3, A modular framework for fast simulation of a generic collider experiment. JHEP, 02:057, 2014. * [4] S. Chatrchyan et al. The CMS Experiment at the CERN LHC. JINST, 3:S08004, 2008. * [5] Particle Data Group. Review of Particle Physics. Progress of Theoretical and Experimental Physics, 2020(8), 08 2020\. 083C01. * [6] Vardan Khachatryan et al. Identification techniques for highly boosted W bosons that decay into hadrons. JHEP, 12:017, 2014. * [7] Serguei Chatrchyan et al. Search for Anomalous $t\bar{t}$ Production in the Highly-Boosted All-Hadronic Final State. JHEP, 09:029, 2012. [Erratum: JHEP 03, 132 (2014)]. * [8] Serguei Chatrchyan et al. Search for Resonant $t\bar{t}$ Production in Lepton+Jets Events in $pp$ Collisions at $\sqrt{s}=7$ TeV. JHEP, 12:015, 2012. * [9] Georges Aad et al. Identification of boosted, hadronically decaying W bosons and comparisons with ATLAS data taken at $\sqrt{s}=8$ TeV. Eur. Phys. J. C, 76(3):154, 2016. * [10] Morad Aaboud et al. Measurement of jet-substructure observables in top quark, $W$ boson and light jet production in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. JHEP, 08:033, 2019. * [11] Georges Aad et al. A new method to distinguish hadronically decaying boosted $Z$ bosons from $W$ bosons using the ATLAS detector. Eur. Phys. J. C, 76(5):238, 2016. * [12] Morad Aaboud et al. Performance of top-quark and $W$-boson tagging with ATLAS in Run 2 of the LHC. Eur. Phys. J. C, 79(5):375, 2019. * [13] Y. Afik, F. Maltoni, K. Mawatari, P. Pani, G. Polesello, Y. Rozen, and M. Zaro. DM+$b\bar{b}$ simulations with DMSimp: an update. In Dark Matter at the LHC 2018: Experimental and theoretical workshop, 11 2018. * [14] Chiara Arina, Mihailo Backović, Jan Heisig, and Michele Lucente. Solar $\gamma$ rays as a complementary probe of dark matter. Phys. Rev. D, 96(6):063010, 2017. * [15] Andreas Albert et al. Recommendations of the LHC Dark Matter Working Group: Comparing LHC searches for dark matter mediators in visible and invisible decay channels and calculations of the thermal relic density. Phys. Dark Univ., 26:100377, 2019. * [16] Sabine Kraml, Ursula Laa, Kentarou Mawatari, and Kimiko Yamashita. Simplified dark matter models with a spin-2 mediator at the LHC. Eur. Phys. J. C, 77(5):326, 2017. * [17] Goutam Das, Celine Degrande, Valentin Hirschi, Fabio Maltoni, and Hua-Sheng Shao. NLO predictions for the production of a spin-two particle at the LHC. Phys. Lett. B, 770:507–513, 2017. * [18] Matthias Neubert, Jian Wang, and Cen Zhang. Higher-Order QCD Predictions for Dark Matter Production in Mono-$Z$ Searches at the LHC. 
JHEP, 02:082, 2016. * [19] Mihailo Backović, Michael Krämer, Fabio Maltoni, Antony Martini, Kentarou Mawatari, and Mathieu Pellen. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators. Eur. Phys. J. C, 75(10):482, 2015. * [20] Olivier Mattelaer and Eleni Vryonidou. Dark matter production through loop-induced processes at the LHC: the s-channel mediator case. Eur. Phys. J. C, 75(9):436, 2015. * [21] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro. The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. JHEP, 07:079, 2014. * [22] Torbjörn Sjöstrand, Stefan Ask, Jesper R. Christiansen, Richard Corke, Nishita Desai, Philip Ilten, Stephen Mrenna, Stefan Prestel, Christine O. Rasmussen, and Peter Z. Skands. An introduction to pythia 8.2. Computer Physics Communications, 191:159–177, Jun 2015. * [23] Matteo Cacciari, Gavin P Salam, and Gregory Soyez. The anti-ktjet clustering algorithm. Journal of High Energy Physics, 2008(04):063–063, Apr 2008. * [24] Matteo Cacciari, Gavin P. Salam, and Gregory Soyez. FastJet User Manual. Eur. Phys. J., C72:1896, 2012. * [25] Benjamin Nachman, Pascal Nef, Ariel Schwartzman, Maximilian Swiatlowski, and Chaowaroj Wanotayaroj. Jets from Jets: Re-clustering as a tool for large radius jet reconstruction and grooming at the LHC. JHEP, 02:075, 2015. * [26] Expected performance of the ATLAS $b$-tagging algorithms in Run-2. Technical report, CERN, Geneva, Jul 2015. All figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PUBNOTES/ATL-PHYS-PUB-2015-022. * [27] LHC Top WG. Particle level objects and pseudo-top-quark definitions. https://twiki.cern.ch/twiki/bin/view/LHCPhysics/ParticleLevelTopDefinitions, 2014\. * [28] Georgios Choudalakis. Fully bayesian unfolding. https://arxiv.org/abs/1201.4612v4, 2012. * [29] John Salvatier, Thomas V Wiecki, and Christopher Fonnesbeck. Probabilistic programming in python using pymc3. PeerJ Computer Science, 2:e55, 2016. * [30] Andreas Höcker and Vakhtang Kartvelishvili. Svd approach to data unfolding. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 372(3):469–481, Apr 1996. * [31] G. D’Agostini. Improved iterative bayesian unfolding. https://arxiv.org/abs/1010.0632, 2010. * [32] Georges Aad et al. Measurements of normalized differential cross sections for $t\bar{t}$ production in pp collisions at $\sqrt{s}=7$ TeV using the ATLAS detector. Phys. Rev. D, 90(7):072004, 2014. * [33] M. Aaboud et al. Measurements of top-quark pair differential cross-sections in the lepton+jets channel in $pp$ collisions at $\sqrt{s}=13$ TeV using the ATLAS detector. JHEP, 11:191, 2017. * [34] Georges Aad et al. Differential top-antitop cross-section measurements as a function of observables constructed from final-state particles using pp collisions at $\sqrt{s}=7$ TeV in the ATLAS detector. JHEP, 06:100, 2015. * [35] P. Baron and J. Kvita. Extending the Fully Bayesian Unfolding with Regularization Using a Combined Sampling Method. Symmetry, 12:2100, 2020. ## Appendix A Jet energy scale derivation and closure tests The jet energy scale (JES) procedure corrects for a finite energy response of the detector to hadronic final states (jets) as function of their angle in the detector and energy. 
The goal is to correct detector-level jet energies to the particle level. The method to calculate the jet response thus uses information from the Monte Carlo simulation and forms a ratio between the jet energy measured in the simulated detector ($E_{\mathrm{det}}$) and the energy of an angularly matched particle-level jet ($E_{\mathrm{ptcl}}$). In detail, first a jet at the detector level is chosen; then the particle level jet candidate with the smallest distance

$\Delta R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}<R_{\mathrm{cut}}$ (9)

is chosen as the matching jet, with the $R_{\mathrm{cut}}$ parameter set to 0.2 for small ($R=0.4$) jets and 0.3 for large ($R=1$) jets. The correction is binned in the detector-level jet pseudorapidity and transverse momentum,

$\mathrm{JES}(\eta,p_{\mathrm{{T}}})=\left\langle\frac{E_{\mathrm{ptcl}}^{\mathrm{jet}}}{E_{\mathrm{det}}^{\mathrm{jet}}}\right\rangle,$ (10)

and so the inverse value of the JES correction is the response of the detector-level jet. Histograms of the jet response corresponding to different energy intervals are filled, each fitted by a Gaussian function. The mean of the fit is plotted against the reconstructed energy and fitted by a logarithmic polynomial function which is used to interpolate the JES correction to any energy. The visualization of the derived JES correction in dependence on the jet ${p_{\rm T}}$ and $\eta$ is in Fig. 15 for small jets (left) and large jets (right).

Figure 15: The visualization of derived JES correction functions for large jets (left) and for small jets (right).

A closure test was performed, in which the JES factors are applied to jets in a sample statistically independent from the one used to derive the JES corrections, but otherwise generated under the same settings. The correction is re-derived on this already pre-corrected sample; thus the ratio of the corrected detector level jet energy over the matched particle level jet energy is expected to be around unity by construction. The closure test results as a function of the transverse momentum and pseudorapidity $\eta$ of jets are shown for both small jets and large jets in Fig. 16, where the mean deviation from unity is well within 5%.

Figure 16: Jet energy correction closure tests for the large jets (right) and small jets (left) as a function of the transverse momentum (top) and $\eta$ (bottom) of jets.

## Appendix B Top quark and $W$ boson tagging efficiencies

The tagging of the large-$R$ jets originating from the top quark or the $W$ boson is a common practice in high energy physics and was used also in this paper. In order to evaluate the tagging efficiencies, a comparison to the generator level information is needed to define a truth jet label as top, $W$, or light otherwise. The large-$R$ jet angularly closest to the direction of the original top quark was used as a probe for the truth top tagging efficiency $\varepsilon_{\mathrm{top}}$, which is defined as follows

$\varepsilon_{\mathrm{top}}=\frac{N_{\mathrm{jet,top}}^{\mathrm{match\&tag}}}{N_{\mathrm{jet,top}}^{\mathrm{match}}}\,,$ (11)

where $N_{\mathrm{jet,top}}^{\mathrm{match\&tag}}$ is the number of jets matched to the original top quark and top-tagged by the tagging technique; and $N_{\mathrm{jet,top}}^{\mathrm{match}}$ is the number of all jets matched to the original top quark. The truth $W$ boson tagging efficiency is defined in a similar manner.
The mistag (fake) efficiency was also evaluated, which describes the false-positive rate of the tagger on jets not originating from the top quark or the $W$ boson. It is defined by the following formula in the case of top tagging $\varepsilon_{\mathrm{mis,top}}=\frac{N_{\mathrm{jet,top}}^{\mathrm{\neg match\&tag}}}{N_{\mathrm{jet,top}}^{\mathrm{\neg match}}}\,,$ (12) where $\varepsilon_{\mathrm{mis,top}}$ is the mistag efficiency of the top quark tagger, $N_{\mathrm{jet,top}}^{\mathrm{\neg match\&tag}}$ is the number of large jets which are not matched to the generator-level top quark but are top-tagged by the tagger, and $N_{\mathrm{jet,top}}^{\mathrm{\neg match}}$ is the number of all large jets not matched to the generator-level top quark. The $W$ mistag efficiency is calculated in a similar way for both the detector and particle levels. Both tag and mistag efficiencies are shown in Fig. 17 for the $W$ (left) and top (right) tagger, studied at both detector and particle levels in dependence on the transverse momentum of the large jet. The sample for the mistag efficiency was the production of $2j2b$ events and contained 32.5M events, with events generated in exclusive ${p_{\rm T}}$ ranges of the leading and sub-leading jets in order to populate the phase space of higher transverse momenta. Figure 17: Tagging (blue) and mistag (red) efficiencies for the $W$ boson (left) and for the top quark (right) in dependence on the transverse momentum of the large jet studied on the $t\bar{t}$ sample at both particle and detector levels.
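To make the definitions in Eqs. (11) and (12) concrete, here is a minimal Python sketch of the efficiency counting; the per-jet boolean flags are assumptions standing in for the $\Delta R$ matching to generator-level top quarks and for the tagger decision described above.

```python
def top_tagging_efficiencies(jets):
    """Truth-tag efficiency (Eq. 11) and mistag efficiency (Eq. 12) of a top tagger.

    Each jet is a dict with boolean flags:
      'matched_top' -- angularly matched to a generator-level top quark
      'top_tagged'  -- tagged as a top-quark jet by the tagger
    """
    n_match = n_match_tag = n_nomatch = n_nomatch_tag = 0
    for jet in jets:
        if jet["matched_top"]:
            n_match += 1
            n_match_tag += jet["top_tagged"]
        else:
            n_nomatch += 1
            n_nomatch_tag += jet["top_tagged"]
    eff_tag = n_match_tag / n_match if n_match else float("nan")
    eff_mistag = n_nomatch_tag / n_nomatch if n_nomatch else float("nan")
    return eff_tag, eff_mistag

# Example: eff, mistag = top_tagging_efficiencies(large_r_jets)
```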
# Language Models can Infer Action Semantics for Classical Planners from Environment Feedback Wang Zhu Ishika Singh Robin Jia Jesse Thomason Department of Computer Science University of Southern California <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Classical planning approaches guarantee finding a set of actions that can achieve a given goal state when possible, but require an expert to specify logical action semantics that govern the dynamics of the environment. Researchers have shown that Large Language Models (LLMs) can be used to directly infer planning steps based on commonsense knowledge and minimal domain information alone, but such plans often fail on execution. We bring together the strengths of classical planning and LLM commonsense inference to perform domain induction, learning and validating action pre- and post-conditions based on closed-loop interactions with the environment itself. We propose PSALM, which leverages LLM inference to heuristically complete partial plans emitted by a classical planner given partial domain knowledge, as well as to infer the semantic rules of the domain in a logical language based on environment feedback after execution. Our analysis on 7 environments shows that with just one expert-curated example plan, using LLMs as heuristic planners and rule predictors requires fewer environment execution steps and environment resets than random exploration while simultaneously recovering the underlying ground-truth action semantics of the domain. ## 1 Introduction Classical planning requires extensive domain knowledge to produce a sequence of actions that achieve a specified goal. The domain contains expert-annotated action semantics that govern the dynamics of the environment. For example, traditional symbolic methods, like Planning Domain Description Language (PDDL) solvers, have action semantics annotated in a domain file. Classical planning algorithms systematically explore the state space based on actions that can be executed in any given state as per the domain definition. Therefore, the resultant plan is guaranteed to succeed if the specified goal is achievable. However, it is tedious to exhaustively define the domain to enable classical planning in an environment, and doing so requires a human expert for every new environment. We propose a novel domain induction task setting in PDDL, in which an agent must automatically infer the action semantics of an environment without manual annotation or error correction. In this setting, the domain information, such as object properties and action function headers, is given to the agent. The agent is then asked to infer the action semantics through interacting with the environment and learning from the resulting feedback. Our proposed setting is motivated by the longer-term goal of building real-world robots that can explore a new environment (_e.g._, a kitchen) and learn to perform new tasks in the environment (_e.g._, cleaning) without human specification of the environment. To solve this domain induction problem, we draw inspiration from prior work that prompts Large Language Models (LLMs) to perform robotic planning tasks. These methods do not require explicit domain specification, as they assume implicit encoding of world knowledge in LLMs for generating unseen plans. For instance, ProgPrompt Singh et al. (2023) generates program-like high-level plans from an LLM.
However, LLM generated plans suffer from two major issues: the plan is not guaranteed to execute successfully, and the environment-specific prompt makes it hard to transfer similar performance to new environments. Hence, instead of using the LLM directly as a planner, we use the LLM in conjunction with a symbolic solver to iteratively explore the environment and predict the action semantics based on environment feedback. We propose Predicting Semantics of Actions with Language Models (PSALM), a novel method that combines language models with symbolic solvers to induce domain world models. At a high level, our method maintains a probabilistic memory of learned action semantics, which it iteratively improves by interacting with the environment. We use LLMs to sample (possibly incorrect) candidate trajectories and infer action semantics based on the result of executing those trajectories in the environment. This leverages LLMs’ strong commonsense reasoning abilities, as well as their ability to generate syntactically valid formal semantics. Meanwhile, we use a symbolic solver to search for ways to achieve the final goal state based on our current belief about the action semantics; this compensates for the LLMs’ comparatively poor ability to efficiently search through a large state space. Across 7 diverse environments, we demonstrate the effectiveness and efficiency of leveraging LLMs for domain induction. PSALM consistently induces correct domain files, and does so with substantially fewer total execution steps and numbers of resets than multiple baseline approaches. Overall, this integration of LLMs and symbolic solvers presents a promising avenue for advancing domain induction capabilities in robotics. ## 2 Related Works ### 2.1 Classical Planning Classical task planning algorithms have been widely applied in autonomous spacecrafts, military logistics, manufacturing, games, and robotics. The automated STRIPS planner was the first algorithm that operated the Shakey robot Fikes and Nilsson (1971) in 1970. Classical planners classically require finite, deterministic, and full state information to generate guaranteed plans when a path from the initial to the goal state is possible. Some other frameworks were also shown to be useful for robot planning Carbonell et al. (1991); Nau et al. (2003). Planning domain description language (PDDL) Jiang et al. (2019); Fox and Long (2003) and answer set programming (ASP) Brewka et al. (2011); Lifschitz (2002) are commonly used specification languages for classical planning domains. ### 2.2 Application of LLMs for Task Planning Most works assume access to training data knowledge contained in the LLM as a source of open-domain world understanding, and apply LLMs directly for task planning. While this method bypasses explicit domain definition as required in classical planning for plan generation, the plan is not guaranteed to succeed. Several works have shown that LLMs can act as planners Huang et al. (2022); Ichter et al. (2022), but such stochastic, generative approaches lose the success guarantees of classical planners. API-based programmatic plan generation Singh et al. (2023); Liang et al. (2023), which introduces some symbolic structure and constraints, has been shown to perform better but still does not ensure success. Program synthesis for planning has been previously proposed in LEAPS Trivedi et al. (2021), which generates programs using a learned latent space. In recent developments Silver et al. (2023); Liu et al. 
(2023), PDDL has been used as a translation medium, with LLMs used to generate either a PDDL plan or goal. Subsequently, a classical planner is used to plan for the PDDL goal. Generating a PDDL goal eliminates the need to track execution state, which is required when generating a plan using LLMs instead. However, using a classical planner necessitates specification of the full domain. In this paper, we propose a new problem of domain induction guided by LLMs. We use LLMs to guide preliminary environment interactions and planning, which exposes the functioning of the domain through execution feedback. ### 2.3 Domain Induction and Memory Augmentation Knowledge acquisition for task planning through dialog and web access has been studied in the past. Some works Perera et al. (2015); Amiri et al. (2019) construct a knowledge base in an open domain to refer to for grounding user utterances. Recent works have shown LLM-based memory augmentation to be effective. Sarch et al. (2023) builds a memory of language instructions and corresponding plans to retrieve from when prompting the LLM with a new instruction, where the retrieved interaction might inform planning. Oswald et al. (2024) utilizes an LLM to generate PDDL actions given detailed text action descriptions. Closer to our paper, Smirnov et al. (2024) attempts to generate syntactically correct but not semantically guaranteed PDDL domains using LLMs, by exhaustively listing syntax errors in the prompt to correct the generated domains. Han et al. (2024) leverages language feedback from humans during embodied interaction for domain predicate and action semantics prediction. In this work, we propose a simple and general approach for recovering the domain via LLM-guided interactions with the environment. Our method, PSALM, aims to accurately recover the correct domain file without requiring human feedback. Figure 1: An example from the Blocksworld domain: PDDL domain file (left), PDDL problem file (mid) and PDDL plan (right). ## 3 Problem Formulation and Background We define the classical planning problem, the planning language PDDL we use, and the novel domain induction problem in PDDL we study. ### 3.1 Classical planning problem The basic framework for classical planning is the goal-directed deterministic planning problem. A classical planning problem $\mathrm{P}$ is formally defined as a tuple $\langle\mathcal{D},\mathcal{O},s^{i},\mathcal{S}^{g}\rangle$, where $\mathcal{D}$ represents the domain (environment), $\mathcal{O}$ represents a set of objects in the domain, $s^{i}$ denotes the initial state, and $\mathcal{S}^{g}$ denotes the goal specification, _i.e._, a set of goal states satisfying the goal conditions. A plan solution to $\mathrm{P}$ is a $T$-step sequence of actions $a_{1..T}$ (see Fig. 1 right for an example), which once executed at $s^{i}$ would lead to a state in $\mathcal{S}^{g}$. ### 3.2 PDDL The Planning Domain Definition Language (PDDL, Aeronautiques et al., 1998) is a standard encoding language to represent planning problems. Fig. 1 lists an example of the PDDL files from the classical Blocksworld domain. The PDDL domain file (Fig. 1 left) defines $\mathcal{D}$, which contains the predicate definitions representing object properties in the domain, and the symbolic action set $\mathcal{A}$ in the environment. Each action has an action name (_e.g._, Put-down), a list of parameters (_e.g._, ?ob), and action semantics.
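For illustration, the semantics of a single action can be pictured as a small record of preconditions and effects. The following Python snippet is a hypothetical encoding loosely modeled on the Blocksworld Put-down action of Fig. 1; the exact statement strings are illustrative assumptions rather than the contents of the actual domain file.

```python
# Hypothetical encoding of one PDDL action together with its semantics.
put_down = {
    "name": "put-down",
    "parameters": ["?ob"],
    "preconditions": ["(holding ?ob)"],
    "postconditions": ["(on-table ?ob)", "(clear ?ob)", "(arm-empty)",
                       "(not (holding ?ob))"],
}
```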
The action semantics $\Phi_{a}$ of an action $a\in\mathcal{A}$ includes the preconditions that describe when an action is valid to execute, and the postconditions (effects) that describe what happens when an action is executed. Fig. 2 depicts $\Phi_{a}$ as two lists of statements. The PDDL problem file (Fig. 1 mid) defines the objects $\mathcal{O}$, the initial state $s^{i}$, and the goal states $\mathcal{S}^{g}$. ### 3.3 Domain induction in PDDL When performing classical planning in a new environment in PDDL, we have to define the domain file for the new environment. Human experts must carefully annotate the action semantics to enable the symbolic solver to find solutions. We propose a novel domain induction task, where agents must automatically find the action semantics for a new domain without human annotation or correction. In the domain induction task, the agent knows $\mathcal{D}\setminus\bigcup_{a\in\mathcal{A}}{\Phi_{a}}$, i.e., everything about the domain except the action semantics. In addition, the agent has access to one problem file $\langle\mathcal{O},s^{i},\mathcal{S}^{g}\rangle$. The goal of the agent is to learn the correct action semantics $\Phi_{a}$ for each action $a$. During the learning process, the agent interacts with the environment, but only starting at $s^{i}$, and can reset to $s^{i}$ as many times as it wants. Once the action semantics is learned, the ground-truth domain file can be constructed and used by the symbolic solver for efficient and robust task solving in the domain. We evaluate the domain induction learning process on three measures. (1) The accuracy (Acc): $\sum_{a\in\mathcal{A}}|\hat{\Phi}_{a}\cap\Phi_{a}|/\sum_{a\in\mathcal{A}}|\Phi_{a}|$, where $\hat{\Phi}_{a}$ is the predicted action semantics for action $a$. We use Acc to measure how close the agent is to finding a solution of the given problem. Though finding a solution does not ensure 100% Acc, matching the ground-truth action semantics is guaranteed to yield a solution from the symbolic solver. We care about the following two measures once we find a solution. (2) The number of resets (NR): how many times we put the robot back to $s^{i}$; (3) The number of executed steps (NES): how many total actions we take during the learning, including the failed ones. Figure 2: The pipeline of PSALM in four steps: (1) sample trajectories from a trajectory sampler; (2) execute the trajectories in the environment to get feedback; (3) predict action semantics for each action with environment feedback, and update the memory based on the prediction; (4) sample action semantics from the memory to construct the domain file for the symbolic solver to check the success. ## 4 Proposed Approach: PSALM We propose the framework Predicting Semantics of Actions with Language Models (PSALM) to learn action semantics through the LLM's and solver's interactions with the environment. ### 4.1 Overview As shown in Fig. 2, given the task we want to solve, we first use an LLM as the trajectory sampler to sample trajectories, and execute the trajectories in the environment to get feedback. We then use the LLM again, together with a rule-based parser, as the action semantics predictor to predict action semantics, _i.e._, preconditions and postconditions, for each action, based on this environment feedback. We update the memory of the learned action semantics and finally sample the current belief of the action semantics, passing it to the symbolic solver to check for success.
If the symbolic solver finds a solution, we execute the plan in the environment to check if the plan succeeds. If the plan reaches the goal in the environment, we finish the loop; otherwise, we will pass the result of the failed plan to the LLM to predict the action semantics again. If the symbolic solver does not find a solution, we will provide some partial candidate trajectories from the symbolic solver to the trajectory sampler, to seed the sampled trajectory. We elaborate on three components in the following section: the trajectory sampler, the action semantics predictor, and the memory of learned action semantics. ### 4.2 Trajectory sampler We sample trajectories by prompting an LLM to generate new trajectories. The LLM trajectory sampler takes in the domain information and problem information as input in the prompt, as described in the problem setup. Besides, the memory information from the sampled preconditions and postconditions is directly converted to text in the prompt. For example, the postconditions (on-table ?ob) (arm-empty) of the action put-down are converted to natural language The effects are (on-table ?ob), (arm-empty). As the symbolic solver is performing a tree-based search algorithm given a time-limit of $W$, if the solver finds any candidate trajectories, i.e., not a complete solution but a partial solution stopped by the timer or a dead end, we will leverage the deterministic search from the symbolic solver, and include the $k$ longest candidate trajectories as input to the trajectory sampler. The trajectory sampler will be asked to pick one of the candidate trajectories, and generate a trajectory starting from it. Additionally, we store previous failed trajectories, which are used to (1) filter out invalid candidate trajectories to avoid the trajectory sampler stuck at the same failed ones (2) list up to $g$ past failed trajectories in the prompt for higher quality sampling from the language model. We sample $l$ trajectories from the trajectory sampler. When $l>1$, we predict action semantics for each trajectory separately, and update the memory with all predictions. #### Prospection. To avoid unnecessary steps taken in the environment execution, and improve trajectory quality, we perform trajectory prospection. For a generated trajectory $\mathbf{\tau}=a_{1:T}$, we check if is aligned with the current belief of the environment action semantics for $v$ steps. Concretely, assuming $v\leq T$, if any action in $a_{1:v}$ does not satisfy the preconditions in the sampled action semantics, starting from that action, we will keep randomly sampling one action until the sampled action is valid in the current action semantics, and repeat this until we have $v$ total actions. Otherwise, if $a_{1:v}$ is all valid, we execute the original $a_{1:T}$ in the environment. #### Random sampler. To compare the LLM sampler with a random one, we assess the random sampler’s performance, which samples $v$ actions per trajectory. The random sampler also takes the solver information as input, i.e., it starts from the longest candidate trajectory if available. Random sampler can also take advantage of prospection. ### 4.3 Action semantics predictor We combine LLM-based and rule-based action semantics prediction in PSALM’s action semantics predictor. #### LLM-based predictor. We prompt the LLM to generate the action semantics of each action separately. 
Apart from the domain, problem, and memory information, for a trajectory $\mathbf{\tau}=a_{1:T}$ that failed at step $t+1$, the LLM action semantics predictor takes in the valid actions $a_{1:t}$, the state transitions $s_{j+1}-s_{j},j=1..t,s_{1}=s^{i}$, and the error message for action $a_{t+1}$ as input in the prompt. We assume the error message is provided by the environment. When an action fails, the error message points out which precondition is not satisfied. When no action fails but the plan is still invalid, the error message points out that the goal is not reached. We show in the analysis that the error message is necessary for predicting action preconditions. For each action, we prompt the LLM once to predict the preconditions and once for the postconditions. #### Rule-based predictor. As the error message is given, we can parse the output from the environment feedback for rule-based action semantics prediction. The rule-based predictor parses the error message from the environment output to infer the missing precondition(s) it suggests. It also parses the state transition descriptions to derive guaranteed postconditions. For each loop, we add the rule-based action semantics to the memory. ### 4.4 Memory of action semantics We keep a memory of the learned action semantics. For each action’s action semantics, we store two lists of predicted statements for preconditions and postconditions. Each statement $\phi$ has a probability $p(\phi\in\Phi_{a}|a)$, in short $p(\phi|a)$. We use this probability to sample the current belief of the action semantics, and then construct the current belief of the domain file for the symbolic solver. The first time a statement $\phi$ is predicted as part of the action semantics for $a$, the probability is set to $1$. Afterwards, this probability is updated following an exponential forgetting rule. Suppose at time step $t$ the predicted action semantics of $a$ is $\hat{\Phi}_{a,t}$; the memory update rule for statement $\phi$ is $\displaystyle p_{t+1}(\phi|a)=\begin{cases}\mathbbm{1}[\phi\in\hat{\Phi}_{a,t+1}],&p_{t}(\phi|a)=0\\ \gamma_{\phi}\,p_{t}(\phi|a)+(1-\gamma_{\phi})\,\mathbbm{1}[\phi\in\hat{\Phi}_{a,t+1}],&\text{otherwise}\end{cases}$ where $\gamma_{\phi}$ is the forgetting factor. If a statement $\phi$ has only been predicted by the LLM, $\gamma_{\phi}=0.8$. Once a statement $\phi$ is predicted by the rule-based predictor, $\gamma_{\phi}=1$, meaning the statement stays at probability $1$ in the memory and never decays. Notice that all the statements predicted in the current time step and in the memory, $\phi\in\bigcup_{t^{\prime}\in\{1..t+1\}}\hat{\Phi}_{a,t^{\prime}}$, are updated following this rule (a short code sketch of the update is given below). Figure 3: We compare PSALM with multiple variations over 7 domains. We report on NES and the results suggest (1) LLM as a trajectory sampler greatly reduces the execution steps; (2) LLM and rule-based action semantics predictors have complementary benefits; (3) Prospection is helpful overall. TS is short for trajectory sampler and ASP is short for action semantics predictor. ## 5 Experiments We discuss the experimental setups, the results, and the ablation studies, which show that language models can efficiently learn action semantics from environment feedback. ### 5.1 Experimental setups All experiments use the Fast-Downward planner (https://github.com/aibasel/downward/tree/release-22.12.0) with a search time limit of $W=30$ seconds.
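Returning to the memory update rule of Section 4.4, the following is a minimal Python sketch of the exponential-forgetting update with the stated choices of $\gamma_{\phi}$ (0.8 for statements predicted only by the LLM, 1 once a statement is rule-derived); the dictionary layout and function name are assumptions made for illustration rather than the actual implementation.

```python
def update_memory(memory, action, predicted, rule_derived=frozenset()):
    """Exponential-forgetting update of p(phi | a) for one action (Sec. 4.4).

    memory[action] maps statement -> [probability, gamma].
    `predicted` is the set of statements predicted for `action` at this step;
    statements in `rule_derived` are pinned with gamma = 1 and never decay.
    """
    beliefs = memory.setdefault(action, {})
    for phi in predicted:
        if phi not in beliefs:
            # First prediction of this statement: probability starts at 1.
            beliefs[phi] = [1.0, 1.0 if phi in rule_derived else 0.8]
        elif phi in rule_derived:
            beliefs[phi] = [1.0, 1.0]  # once rule-derived, pinned at probability 1
    # Decay or reinforce every statement stored so far for this action.
    for phi, (p, gamma) in beliefs.items():
        hit = 1.0 if phi in predicted else 0.0
        beliefs[phi] = [gamma * p + (1.0 - gamma) * hit, gamma]
    return beliefs

# Example:
# memory = {}
# update_memory(memory, "put-down", {"(on-table ?ob)", "(arm-empty)"},
#               rule_derived={"(arm-empty)"})
```

Sampling statements according to these probabilities is then what constructs the current belief of the domain file passed to the symbolic solver.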
We use VAL (https://github.com/KCL-Planning/VAL) as the simulator for plan validation and for extracting state condition changes and error messages. We set the maximum number of loops to 1k for pure random baselines and to 100 for any method involving LLMs, considering the cost. We report the average over 3 runs for all the methods. For the main results, we use the GPT-4 API (https://platform.openai.com/docs/models/gpt-4) as the language model agent, with temperature 0 and one-shot prompting following Liu et al. (2023). We list the prompts in Appendix A. We use $v=10$ prospection steps, $l=1$ sampled trajectory, $k=3$ candidate trajectories, and $g=5$ failed trajectories per loop. We perform ablation studies over the hyperparameters in the analysis. The total cost of the API calls is around $500. We experiment on 7 symbolic domains from International Planning Competitions; each defines 20 tasks that vary in the number of environment objects and in optimal plan length. (1) Barman: The robot is a bartender with 2 hands preparing cocktails for a customer’s order, using the specified ingredients and appropriate tools; (2) Blocksworld: The robot reorganizes a collection of block piles arranged on a table into a specified configuration while adhering to simple physics principles; (3) Floortile: A set of robots painting color patterns on floor tiles, allowed to move around but not to step on painted tiles; (4) Grippers: A set of robots with 2 grippers each are given a task to move objects among different rooms; (5) Storage: The robot lifts and drops crates initially stored in different areas into a depot, using a given set of hoists; (6) Termes: The robot constructs complex structures by transporting and positioning blocks, as well as using them as a means to move adjacent blocks; (7) Tyreworld: The robot is tasked with changing flat tires, which involves tasks such as removing flat tires, inflating the intact tires, tightening nuts, and returning tools to the boot. ### 5.2 Main results Fig. 3 compares PSALM with multiple baselines over 7 domains. Everything in the figure gets 100% accuracy, so we focus on NES. We do not put the pure random baselines in the figure, since none of the pure random baselines complete the task within 1k resets. #### LLM as a sampler greatly reduces the execution steps. Comparing the blue bars and the purple bars (w/o LLM-TS), we show that the LLM trajectory sampler greatly reduces the execution steps over all domains. The random sampler, even with prospection, lacks the commonsense reasoning knowledge for choosing trajectories likely to be solutions of the problem. #### LLM and rule-based action semantics predictions have complementary benefits. Comparing the orange bars (w/o rule-ASP) and the green bars (w/o LLM-ASP), the rule-based action semantics predictor is more important in Floortile, Storage and Tyreworld, while the LLM action semantics predictor is more important in the other four domains. With the commonsense prior knowledge provided by the LLM, and the exactness of the information from the rule-based parser, combining both predictions is always a better solution over all domains. #### Prospection induces redundant actions sometimes, but is necessary overall. For certain domains with a small number of actions $|\mathcal{A}|$, such as Grippers ($|\mathcal{A}|=3$) and Termes ($|\mathcal{A}|=7$), a large number of prospection steps causes unnecessary exploration, compared to the red bars (w/o prospect.).
However, prospection is overall helpful, as the red bar shows around 1.4 times execution steps on average compared to our method. Especially, in the Barman domain, prospection reduces more than a half of the execution steps. On the other hand, notice that all the exploration during prospection time is simulated by not executed, a large number of prospection steps is not time-consuming during the trajectory sampling time. Figure 4: Additional analysis for PSALM. (Left) We vary the type of LLM and show that PSALM works with GPT-3.5 and Mistral-7B on the Termes domain. (Middle) Using the LLM prior before trajectory sampling (darker bars) enables the random baselines to work better compared to not having the prior (lighter bars), though it can adversely affect the full PSALM method. (Right) Experiments where we remove the error message from input to the LLM action semantics predictor. Without error messages, PSALM works only on easy domains. For the experiments that fail to find a solution of the problem, we put the accuracy number on top of their bars. ### 5.3 Analysis We first show PSALM works on some weaker or even public LLMs. We then discuss the language model prior can effectively enable the random baselines to work, as the starting point of the dynamics model. Finally, we show that the error message is important for the action semantics prediction, without which the domain induction is not possible with the current LLMs for some domains. #### PSALM can work on less powerful and public LLMs. Fig. 4 left shows PSALM can work on GPT-3.5 and Mistral-7B, though with more number of resets (NR) and number of execution steps (NES). Prospection reduces the gap between the different LLMs, perhaps because it offsets differences in commonsense reasoning ability. #### LLM prior enables random baselines. We try to use LLM to predict the action semantics $\hat{\Phi}_{0,a}$ for each action $a$, before having any sampled trajectory, and then start the iteration with the memory of the dynamics model initialized from $\hat{\Phi}_{0,a}$. This prediction is solely based on LLM’s commonsense reasoning capability. Fig. 4 middle shows on the Termes domain the LLM prior can enable the random baseline without prospection to work within 1k resets, and greatly reduce NR and NES for the random baseline with prospection. However, on more complex domains such as Barman, the LLM prior does not enable the random baselines, _i.e_., failing to find a solution within 1k resets, with and without prospection. The finding implies we can perform domain induction on some easier domains with limited access, or even one call, of language models. On the other hand, LLM prior is harmful to PSALM, as it inserts noisy predictions of action semantics which requires additional resets to erase. #### Error messages are crucial for domain induction. For real-world use, error messages may be hard to obtain from the environment. So, we study whether PSALM can succeed without receiving error messages about failed actions. Note that without the error message, the action semantics predictor can only be LLM-based, not rule-based. Fig. 4 right shows that on slightly complex domains like Termes, PSALM cannot predict the correct dynamics model, _i.e_., the accuracy is not 100. PSALM works on very simple domains like Grippers, but it requires 7x more NES to learn. Moreover, prospection is essential when we have no error message. 
### 5.4 Ablation studies We perform ablation studies on the number of prospection steps, the number of sampled trajectories, the number of candidate trajectories, and the number of failed trajectories per run, to show their influence on the PSALM framework. The results validate our choice of the hyperparameters in the main results. #### A few prospection steps are enough for PSALM. We vary the number of prospection steps $v$ over $0,1,5,10$ in Fig. 5(a). Notice the random baseline does not find a solution when $v=0,1,5$, and reaches an Acc of 78.5. We conclude that the number of prospection steps matters more when the trajectory sampler is random, while a small number of prospection steps is enough for the language model trajectory sampler. Figure 5: Ablation studies for PSALM. (a) Varying the number of prospection steps $v$. A few prospection steps are enough for PSALM. (b) Varying the number of sampled trajectories $l$. One sampled trajectory per run is enough. (c) Varying the number of candidate trajectories $k$ passed from the symbolic solver to the LLM trajectory sampler. More candidate trajectories help when prospection is used. (d) Varying the number of failed trajectories $g$ shown to the LLM trajectory sampler. A certain number of failed trajectories is required for the LLM. #### One sampled trajectory per run is enough. We vary the number of sampled trajectories $l$ over $1,3,5$ in Fig. 5(b). For both LLM and random samplers, we see the total number of steps grows linearly with the number of sampled trajectories, which means the LLM can hardly learn more information just by sampling more trajectories per run, and one sampled trajectory per run is enough. #### More candidate trajectories help with prospection. We vary the number of candidate trajectories $k$ over $0,1,3$ in the LLM prompt. The results in Fig. 5(c) suggest that including candidate trajectories in the prompt is beneficial; while one candidate trajectory is enough for pure LLM prediction, more candidate trajectories help the LLM with prospection. On the other hand, as the candidate trajectories can be very long for certain domains like Barman, and thus result in a very long prompt, we do not experiment with more than 3 candidate trajectories in the prompt. #### Choose the number of failed trajectories with care. We vary the number of failed trajectories $g$ over $0,1,5$ in the LLM prompt. To select $g$ failed trajectories, we first keep only trajectories with no fewer than 3 steps, and then sample $g$ trajectories from the filtered ones for the LLM prompt to avoid generating those trajectories as prefixes. The results in Fig. 5(d) suggest that multiple failed trajectories help LLM action semantics prediction, but a single failed trajectory could be harmful. We hypothesize that the single failed trajectory in the input might distract the model from following the candidate trajectories. ## 6 Conclusion and Discussions In conclusion, we propose a novel domain induction problem in PDDL, where agents automatically infer the action semantics for a new domain without human annotation. We introduce a simple but strong framework, PSALM, which combines the commonsense reasoning ability of large language models with the precision of symbolic solvers for domain induction. To update the action semantics in a memory, PSALM uses LLMs as agents for trajectory sampling and action semantics prediction. We demonstrate the effectiveness and efficiency of leveraging LLMs for domain induction over 7 domains.
This integration of LLMs and symbolic solvers presents a promising avenue for advancing domain induction capabilities in robotics. ## 7 Limitations Though PSALM appears to be relatively robust across multiple LLMs, we do not claim that PSALM can work with any LLM. Besides, as discussed in the analysis, the error message, which might be hard to obtain from some real-world environments, is crucial in the PSALM framework, given the reasoning ability of current LLMs. Future work may explore methods that do not require the error message as input. On the other hand, to apply the domain induction setup to the real world, the environment predicates and object types have to be annotated or derived from the environment. The domain induction problem without these annotations can be harder. We leave this extension for future studies. ## Acknowledgments and Disclosure of Funding WZ and RJ were supported in part by a grant from Open Philanthropy. IS and JT were supported in part by a grant from the Army Research Lab (ARL) Army AI Innovations Institute (A2I2), award number W911NF-23-2-0010. ## References * Aeronautiques et al. [1998] Constructions Aeronautiques, Adele Howe, Craig Knoblock, ISI Drew McDermott, Ashwin Ram, Manuela Veloso, Daniel Weld, David Wilkins Sri, Anthony Barrett, Dave Christianson, et al. PDDL - the Planning Domain Definition Language. _Technical Report, Tech. Rep._ , 1998. * Amiri et al. [2019] Saeid Amiri, Sujay Bajracharya, Cihangir Goktolgal, Jesse Thomason, and Shiqi Zhang. Augmenting knowledge through statistical, goal-oriented human-robot dialog. In _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2019. * Brewka et al. [2011] Gerhard Brewka, Thomas Eiter, and Mirosław Truszczyński. Answer set programming at a glance. _Commun. ACM_ , 2011. * Carbonell et al. [1991] Jaime Carbonell, Oren Etzioni, Yolanda Gil, Robert Joseph, Craig Knoblock, Steve Minton, and Manuela Veloso. Prodigy: An integrated architecture for planning and learning. _SIGART Bull._ , 1991. * Fikes and Nilsson [1971] Richard E. Fikes and Nils J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. _Artificial Intelligence_ , 1971. * Fox and Long [2003] Maria Fox and Derek Long. PDDL2.1: An extension to PDDL for expressing temporal planning domains. _ArXiv_ , 2003. * Han et al. [2024] Muzhi Han, Yifeng Zhu, Song-Chun Zhu, Ying Nian Wu, and Yuke Zhu. Interpret: Interactive predicate learning from language feedback for generalizable task planning. In _Robotics: Science and Systems (RSS)_ , 2024. * Huang et al. [2022] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, _International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA_ , Proceedings of Machine Learning Research, 2022. * Ichter et al.
[2022] Brian Ichter, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, Dmitry Kalashnikov, Sergey Levine, Yao Lu, Carolina Parada, Kanishka Rao, Pierre Sermanet, Alexander T Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Mengyuan Yan, Noah Brown, Michael Ahn, Omar Cortes, Nicolas Sievers, Clayton Tan, Sichun Xu, Diego Reyes, Jarek Rettinghouse, Jornell Quiambao, Peter Pastor, Linda Luu, Kuang-Huei Lee, Yuheng Kuang, Sally Jesmonth, Kyle Jeffrey, Rosario Jauregui Ruano, Jasmine Hsu, Keerthana Gopalakrishnan, Byron David, Andy Zeng, and Chuyuan Kelly Fu. Do as i can, not as i say: Grounding language in robotic affordances. In _6th Annual Conference on Robot Learning_ , 2022. * Jiang et al. [2019] Yu-qian Jiang, Shi-qi Zhang, Piyush Khandelwal, and Peter Stone. Task planning in robotics: an empirical comparison of pddl- and asp-based systems. _Frontiers of Information Technology & Electronic Engineering_, 2019. * Liang et al. [2023] Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In _2023 IEEE International Conference on Robotics and Automation (ICRA)_ , 2023. * Lifschitz [2002] Vladimir Lifschitz. Answer set programming and plan generation. _Artificial Intelligence_ , 2002. * Liu et al. [2023] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. _ArXiv preprint_ , 2023. * Nau et al. [2003] Dana Nau, Tsz-Chiu Au, Okhtay Ilghami, Ugur Kuter, J William Murdock, Dan Wu, and Fusun Yaman. Shop2: An htn planning system. _J. Artif. Intell. Res. (JAIR)_ , 2003. * Oswald et al. [2024] James Oswald, Kavitha Srinivas, Harsha Kokel, Junkyu Lee, Michael Katz, and Shirin Sohrabi. Large language models as planning domain generators. In _Proceedings of the International Conference on Automated Planning and Scheduling_ , volume 34, pages 423–431, 2024. * Perera et al. [2015] Vittorio Perera, Robin Soetens, Thomas Kollar, Mehdi Samadi, Yichao Sun, Daniele Nardi, René van de Molengraft, and Manuela Veloso. Learning task knowledge from dialog and web access. _Robotics_ , 2015. * Sarch et al. [2023] Gabriel Sarch, Yue Wu, Michael Tarr, and Katerina Fragkiadaki. Open-ended instructable embodied agents with memory-augmented large language models. In _Findings of the Association for Computational Linguistics: EMNLP 2023_ , 2023. * Silver et al. [2023] Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, and Michael Katz. Generalized planning in pddl domains with pretrained large language models. _ArXiv preprint_ , 2023. * Singh et al. [2023] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. Progprompt: Generating situated robot task plans using large language models. In _2023 IEEE International Conference on Robotics and Automation (ICRA)_ , 2023. * Smirnov et al. [2024] Pavel Smirnov, Frank Joublin, Antonello Ceravola, and Michael Gienger. Generating consistent pddl domains with large language models. _ArXiv preprint_ , 2024. * Trivedi et al. [2021] Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, and Joseph J. Lim. Learning to synthesize programs as interpretable and generalizable policies. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. 
Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , 2021. ## Appendix A LLM Prompts We list the prompt template for LLM trajectory sampler (Fig. 6) and LLM action semantics predictor (Fig. 7), and provide one complete example for each. Fig. 8, 9 is the example for trajectory sampler, and Fig. 10, 11 is the example for action semantics predictor. Figure 6: LLM trajectory sampler prompt template Figure 7: LLM action semantics predictor prompt template Figure 8: LLM trajectory sampler prompt example Figure 9: LLM trajectory sampler prompt example (cont’) Figure 10: LLM action semantics predictor prompt example Figure 11: LLM action semantics predictor prompt example (cont’)
# OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang DAMO Academy, Alibaba Group {zheluo.wp, ya235025, menrui.mr, junyang.ljy, baishuai.bs, zhikang.lzk, jason.mjx, ericzhou.zc, jingren.zhou<EMAIL_ADDRESS> Correspondence to: Chang<EMAIL_ADDRESS> ###### Abstract In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image generation, visual grounding, image captioning, image classification, language modeling, etc., in a simple sequence-to-sequence learning framework. OFA follows the instruction-based learning in both pretraining and finetuning stages, requiring no extra task-specific layers for downstream tasks. In comparison with the recent state-of-the-art vision & language models that rely on extremely large cross-modal datasets, OFA is pretrained on only $20$M publicly available image-text pairs. Despite its simplicity and relatively small-scale training data, OFA achieves new SOTAs in a series of cross-modal tasks while attaining highly competitive performances on uni-modal tasks. Our further analysis indicates that OFA can also effectively transfer to unseen tasks and unseen domains. Our code and models are publicly available at https://github.com/OFA-Sys/OFA. Figure 1: Examples of various tasks supported by OFA. _K_ eywords Unified frameworks $\cdot$ Multimodal pretraining $\cdot$ Multitask learning $\cdot$ Zero-shot learning ## 1 Introduction Building an omnipotent model that handles as many tasks and modalities as human beings is an attractive goal in the AI community. The possibilities of achieving this goal may largely depend on whether massive varieties of modalities, tasks and training regimes can be represented with only a few forms that can be unified and managed by a single model or system. Recent developments of the Transformer [1] architecture have shown its potential for being a universal computation engine [2, 3, 4, 5, 6, 7, 8]. In the settings of supervised learning, the “pretrain-finetune” paradigm achieves excellent success in many domains. In the regimes of few-/zero-shot learning, language models with prompt / instruction tuning prove powerful zero-/few-shot learners [3, 9, 10]. These advances have provided more significant than ever opportunities for the emergence of an omni-model. To support better generalization for open-ended problems while maintaining multitask performance and ease of use, we advocate that an omnipotent model should have the following three properties: 1\. Task-Agnostic (TA): unified task representation to support different types of tasks, including classification, generation, self-supervised pretext tasks, etc., and to be agnostic to either pretraining or finetuning. 2\. Modality-Agnostic (MA): unified input and output representation shared among all tasks to handle different modalities. 3\. Task Comprehensiveness (TC): enough task variety to accumulate generalization ability robustly. However, it is challenging to satisfy these properties while maintaining superior performance in downstream tasks. 
Current language and multimodal pretrained models readily fail at parts of these properties, due to their following design choices. 1\. Extra learnable components for finetuning, e.g., task-specific heads [2], adapters [11], soft prompts [12]. This makes the model structure task-specific and poses discrepancy between pretraining and finetuning. Such designs are also not friendly to supporting unseen tasks in a zero-shot manner. 2\. Task-specific formulation. For most current methods, pretraining, finetuning and zero-shot tasks usually differ in task formulation and training objectives. This violates TA and it is burdensome to scale up the task population to achieve TC. 3\. Entangling modality representation with downstream tasks. It is a common practice for Vision-Language models to take the detected objects as part of the image input features [8, 13, 14, 15, 16, 17]. Though it demonstrates better downstream task performance on some closed- domain datasets, it depends on an extra object detector which usually fails at open-domain data. Therefore, we explore an omni-model for multimodal pretraining and propose OFA, hopefully “One For All”, which achieves the objectives of unifying architectures, tasks, and modalities, and supports the three properties above.111This work is the latest one of our M6 series [18, 19, 20, 21]. We formulate both pretraining and finetuning tasks in a unified sequence-to- sequence abstraction via handcrafted instructions [9, 10] to achieve Task- Agnostic. A Transformer is adopted as the Modality-Agnostic compute engine, with a constraint that no learnable task- or modality-specific components will be added to downstream tasks. It is available to represent information from different modalities within a globally shared multimodal vocabulary across all tasks. We then support Task Comprehensiveness by pretraining on varieties of uni-modal and cross-modal tasks. To summarize: * • We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA is the first attempt to unify the following vision & language, vision-only and language-only tasks, including understanding and generation, e.g., text-to-image generation, visual grounding, visual question answering (VQA), image captioning, image classification, language modeling, etc., via a simple sequence-to-sequence learning framework with a unified instruction-based task representation. * • OFA is pretrained on the publicly available datasets of $20$M image-text pairs, in comparison with recent models that rely on paired data of a much larger scale [22, 23]. OFA achieves state-of-the-art performances in a series of vision & language downstream tasks, including image captioning, visual question answering, visual entailment, referring expression comprehension, etc. * • OFA, as a multimodal pretrained model, achieves comparable performances on unimodal tasks with SOTA pretrained models in language or vision, e.g., RoBERTa, ELECTRA and DeBERTa for natural language understanding, UniLM, Pegasus and ProphetNet for natural language generation, and MoCo-v3, BEiT and MAE for image classification. * • We verify that OFA achieves competitive performance in zero-shot learning. Also, it can transfer to unseen tasks with new task instructions and adapt to out-of-domain information without finetuning. 
## 2 Related Work Figure 2: A demonstration of the pretraining tasks, including visual grounding, grounded captioning, image-text matching, image captioning, VQA, object detection, image infilling as well as text infilling. #### Language Pretraining & Vision Pretraining Natural language pretraining has revolutionized the whole NLP research community. A representation of this track is the birth of BERT [2] and GPT [24]. A number of studies have been progressively advancing pretraining by improving pretraining tasks and designing more sophisticated model architectures [25, 26, 27, 28, 29, 30, 31]. Having witnessed the success of natural language pretraining, researchers have promoted self-supervised learning (SSL) in computer vision [32, 33, 34, 35]. Recently, mirroring masked language modeling (MLM) in language pretraining, generative pretraining [36, 37] with ViT architecture [6] further boosts downstream performance. #### Multimodal Pretraining Multimodal pretraining has been developing rapidly [38, 13, 39, 40, 14, 41, 42, 43, 44, 15, 16, 17, 45, 46, 47]. Researchers have applied the masking strategies and the encoder-decoder architecture to adapt models to generation tasks [15, 17, 18, 22]. Besides, to simplify preprocessing, patch projection has received attention and helped Transformer achieve SOTA performance in downstream tasks [22, 48]. To make full use of large-scale weakly supervised data, [49] trains a bi-encoder on $400$ million pairs and demonstrates excellent performance in retrieval tasks. Another line of work is text-to- image synthesis. A bunch of works [50, 51, 18, 52] incorporate Transformer with VQVAE [53] or VQGAN [54] to generate high-quality images with high resolution. However, the previously mentioned methods are limited in processing a single type of data, such as cross-modal data only or limited in their capabilities. Also, the discrepancy between pretraining and finetuning behaviors limits the transferability to open-ended data. #### Unified Frameworks To pursue the unified models, [55] demonstrate a uniform format to represent tasks. In NLP, recent studies unify diverse tasks covering natural language understanding and generation to text-to-text transfer [30] or language modeling [3]. Following this idea, [56] and [57] demonstrate text-generation- based multimodal pretrained models. [7] and [58] propose a simple framework that can process information from multiple modalities with a uniform byte- sequence representation. [59] and [60] unify tasks of different modalities by designing various task-specific layers. [61] explores to employ a retrieval- based unified paradigm. However, these multimodal pretrained models suffer from performance degradation in downstream tasks, e.g., VQA, image captioning, etc., and they have no image generation capability. ## 3 OFA In this work, we propose OFA, a unified Seq2Seq framework for the unification of I/O & architectures, tasks, and modalities. The overall framework is illustrated in Figure 2. ### 3.1 I/O & Architecture #### I/O The most common practice of multimodal pretraining is the pretraining of Transformer models on image-text pair corpus at scale. This requires data preprocessing or modality-specific adaptors to enable the joint training of both visual and linguistic information with the Transformer architecture. 
Compared with the complex, resource- and time-consuming object feature extraction, we aim for simplicity and directly use ResNet modules to convolve $\textrm{x}_{v}\in\mathbb{R}^{H\times W\times C}$ to $P$ patch features of the hidden size, following [62] and [22]. As to processing the linguistic information, we follow the practice of GPT [24] and BART [31] and apply byte-pair encoding (BPE) [63] to the given text sequence to transform it into a subword sequence, which is then embedded into features. To process different modalities without task-specific output schema, it is essential to represent data of various modalities in a unified space. A possible solution is to discretize text, image, and object and represent them with tokens in a unified vocabulary. Recent advances in image quantization [53, 54] have demonstrated effectiveness in text-to-image synthesis [50, 18, 51, 19], and thus we utilize this strategy for the target-side image representations. Sparse coding is effective in reducing the sequence length of the image representation. For example, an image at a resolution of $256\times 256$ is represented as a code sequence of length $16\times 16$. Each discrete code strongly correlates with the corresponding patch [36]. Apart from representing images, it is also essential to represent objects within images as there are a series of region-related tasks. Following [64], we represent objects as a sequence of discrete tokens. To be more specific, for each object, we extract its label and its bounding box. The continuous corner coordinates (the top left and the bottom right) of the bounding box are uniformly discretized to integers as location tokens $\langle x_{1},y_{1},x_{2},y_{2}\rangle$. As to the object labels, they are intrinsically words and thus can be represented with BPE tokens. Finally, we use a unified vocabulary for all the linguistic and visual tokens, including subwords, image codes, and location tokens (a short code sketch of the location-token discretization is given below). #### Architecture Following the previous successful practices in multimodal pretraining [14, 17, 22], we choose Transformer as the backbone architecture, and we adopt the encoder-decoder framework as the unified architecture for all the pretraining, finetuning, and zero-shot tasks. Specifically, both the encoder and the decoder are stacks of Transformer layers. A Transformer encoder layer consists of a self attention and a feed-forward network (FFN), while a Transformer decoder layer consists of a self attention, an FFN and a cross attention for building the connection between the decoder and the encoder output representations. To stabilize training and accelerate convergence, we add head scaling to self attention, a post-attention layer normalization (LN) [65], and an LN following the first layer of FFN [66]. For positional information, we use two absolute position embeddings for text and images, respectively. Instead of simply adding the position embeddings, we decouple the position correlation from the token embeddings and patch embeddings [67]. In addition, we also use 1D relative position bias for text [30] and 2D relative position bias for images [22, 62]. ### 3.2 Tasks & Modalities A unified framework is designed to provide architecture compatibility across different modalities and downstream tasks so that opportunities can arise to generalize to unseen tasks within the same model. Then we have to represent the possible downstream tasks concerning different modalities in a unified paradigm.
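Returning to the I/O representation of Section 3.1, a minimal Python sketch of turning a bounding box into location tokens might look as follows; the number of position bins, the token naming scheme, and the function name are assumptions made for illustration.

```python
def box_to_location_tokens(x1, y1, x2, y2, width, height, num_bins=1000):
    """Uniformly discretize corner coordinates into location tokens <loc_i>."""
    def bin_index(value, size):
        # Map a continuous coordinate in [0, size] to an integer bin in [0, num_bins - 1].
        idx = int(value / size * num_bins)
        return min(max(idx, 0), num_bins - 1)

    corners = [(x1, width), (y1, height), (x2, width), (y2, height)]
    return [f"<loc_{bin_index(v, s)}>" for v, s in corners]

# Example: a box in a 640x480 image becomes four tokens from the shared vocabulary.
# box_to_location_tokens(32.0, 48.0, 256.0, 400.0, 640, 480)
```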
Therefore, an essential point for the design of pretraining tasks is the consideration of multitask and multimodality. To unify tasks and modalities, we design a unified sequence-to-sequence learning paradigm for pretraining, finetuning, and inference on all tasks concerning different modalities. Both pretraining tasks and downstream tasks of cross-modal and uni-modal understanding and generation are all formed as Seq2Seq generation. It is available to perform multitask pretraining on multimodal and uni-modal data to endow the model with comprehensive capabilities. Specifically, we share the identical schema across all tasks, while we specify handcrafted instructions for discrimination [9]. For cross-modal representation learning, we design $5$ tasks, including visual grounding (VG), grounded captioning (GC), image-text matching (ITM), image captioning (IC), and visual question answering (VQA). For VG, the model learns to generate location tokens specifying the region position $\langle x_{1},y_{1},x_{2},y_{2}\rangle$ based on the input of the image $x^{i}$ and the instruction “Which region does the text $x^{t}$ describe?” where $x^{t}$ refers to the region caption. GC is an inverse task of VG. The model learns to generate a description based on the input image $x^{i}$ and the instruction “What does the region describe? region: $\langle x_{1},y_{1},x_{2},y_{2}\rangle$”. For ITM, we use each original image-text pair as the positive sample and construct a new one as the negative by pairing the image with a randomly substituted caption. The model learns to discriminate whether the given image and text are paired by learning to generate “Yes” or “No” based on the input image $x^{i}$ and the instruction “Does the image describe $x^{t}$?”. As to image captioning, this task can naturally adapt to the sequence-to-sequence format. The model learns to generate the caption based on the given image and the instruction “What does the image describe?”. For VQA, we send the image and the question as the input and require the model to learn to generate correct answers. For uni-modal representation learning, we design $2$ tasks for vision and $1$ task for language, respectively. The model is pretrained with image infilling and object detection for vision representation learning. Recent advances in generative self-supervised learning for computer vision show that masked image modeling is an effective pretraining task [36, 37]. In practice, we mask the middle part of the images as the input. The model learns to generate the sparse codes for the central part of the image based on the corrupted input and the specified instruction “What is the image in the middle part?”. We additionally add object detection to pretraining following [44]. The model learns to generate human-annotated object representations, i.e., the sequence of object position and label, based on the input image and the text “What are the objects in the image?” as the instruction. Both tasks strengthen the representation learning on both pixel and object levels. For language representation learning, following the practice of [31], we pretrain the unified model on plain text data with text infilling. In this way, we unify multiple modalities and multiple tasks to a single model and pretraining paradigm. OFA is pretrained jointly with those tasks and data. Thus, it can perform different tasks concerning natural language, vision, and cross-modality. 
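To illustrate the shared schema described above, a few of the instruction-formatted training examples can be written as plain (image, instruction, target) triples; the helper below is only a sketch, and the function name and placeholder image identifier are assumptions, while the instruction strings follow the handcrafted instructions quoted in this section.

```python
def make_examples(image, caption, region_tokens):
    """Build instruction-formatted Seq2Seq examples for a few pretraining tasks.

    `region_tokens` is a string of four location tokens, e.g.
    "<loc_12> <loc_40> <loc_500> <loc_620>", describing a box in `image`.
    """
    return [
        # Visual grounding: text in, location tokens out.
        (image, f"Which region does the text {caption} describe?", region_tokens),
        # Grounded captioning: location tokens in, text out.
        (image, f"What does the region describe? region: {region_tokens}", caption),
        # Image-text matching: binary answer generated as text ("Yes" / "No").
        (image, f"Does the image describe {caption}?", "Yes"),
        # Image captioning.
        (image, "What does the image describe?", caption),
    ]

# examples = make_examples("img_0001.jpg", "a dog catching a frisbee",
#                          "<loc_12> <loc_40> <loc_500> <loc_620>")
```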
### 3.3 Pretraining Datasets

We construct pretraining datasets by incorporating Vision & Language data (i.e., image-text pairs), Vision data (i.e., raw image data, object-labeled data), and Language data (i.e., plain texts). For replication, we only use datasets that are publicly available. We carefully filter our pretraining data and exclude images that appear in the validation and test sets of downstream tasks to avoid data leakage. We provide more details about pretraining datasets in Appendix A.1.

### 3.4 Training & Inference

We optimize the model with the cross-entropy loss. Given an input $x$, an instruction $s$ and an output $y$, we train OFA by minimizing $\mathcal{L}=-\sum_{i=1}^{|y|}\log P_{\theta}(y_{i}|y_{<i},x,s)$, where $\theta$ refers to the model parameters. For inference, we apply decoding strategies such as beam search to enhance the quality of generation. However, this paradigm has several problems in classification tasks: (1) optimizing over the entire vocabulary is unnecessary and inefficient; (2) the model may generate invalid labels outside the closed label set during inference. To overcome these issues, we introduce a search strategy based on a prefix tree (Trie) [68]. Experimental results show that the Trie-based search can enhance the performance of OFA on classification tasks. See Appendix B for more details.

### 3.5 Scaling Models

Table 1: Detailed hyperparameters of OFA model configuration. We list the configuration for OFA of $5$ different sizes.

| Model | #Param. | Backbone | Hidden size | Intermediate size | #Heads | #Enc. layers | #Dec. layers |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $\text{OFA}\rm_{Tiny}$ | 33M | ResNet50 | 256 | 1024 | 4 | 4 | 4 |
| $\text{OFA}\rm_{Medium}$ | 93M | ResNet101 | 512 | 2048 | 8 | 4 | 4 |
| $\text{OFA}\rm_{Base}$ | 182M | ResNet101 | 768 | 3072 | 12 | 6 | 6 |
| $\text{OFA}\rm_{Large}$ | 472M | ResNet152 | 1024 | 4096 | 16 | 12 | 12 |
| $\text{OFA}\rm_{Huge}$ | 930M | ResNet152 | 1280 | 5120 | 16 | 24 | 12 |

In order to investigate how OFA models of different sizes perform in downstream tasks, we have developed $5$ versions of OFA, scaling from $33$M to $940$M parameters, and we list their detailed hyperparameters in Table 1. To be more specific, we have built basic models of $\rm Base$ and $\rm Large$ sizes, $\text{OFA}\rm_{Base}$ and $\text{OFA}\rm_{Large}$. As our network configuration is similar to BART [31], their sizes are similar to those of $\text{BART}\rm_{Base}$ and $\text{BART}\rm_{Large}$. Additionally, we have developed OFA of a larger size, which we name $\text{OFA}\rm_{Huge}$, or simply OFA when not otherwise specified in the tables. Its size is comparable to that of $\text{SimVLM}\rm_{Huge}$ or $\text{ViT}\rm_{Huge}$. To investigate whether smaller OFA models can still reach satisfactory performance, we have developed $\text{OFA}\rm_{Medium}$ and $\text{OFA}\rm_{Tiny}$, which are only around half and less than $20\%$ as large as $\text{OFA}\rm_{Base}$, respectively.
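Before turning to the experiments, the snippet below gives a concrete reading of the Seq2Seq training objective from Section 3.4, $\mathcal{L}=-\sum_{i}\log P_{\theta}(y_{i}|y_{<i},x,s)$, under teacher forcing; it is a minimal PyTorch sketch, and the `model` call signature (consuming the encoder input $x,s$ and the shifted target prefix) and the pad token id are assumptions rather than the actual OFA interface.

```python
import torch
import torch.nn.functional as F

def seq2seq_loss(model, input_ids, instruction_ids, target_ids, pad_id=1):
    """Cross-entropy over the target sequence with teacher forcing.

    `model` is assumed to return logits of shape [batch, tgt_len, vocab] given the
    encoder input (x concatenated with the instruction s) and the decoder prefix y_<i.
    `pad_id` is an assumed padding token id.
    """
    decoder_input = target_ids[:, :-1]            # y_<i
    labels = target_ids[:, 1:]                    # y_i
    logits = model(encoder_input=torch.cat([input_ids, instruction_ids], dim=1),
                   decoder_input=decoder_input)   # [B, T, V]
    # Averaged over non-padding target tokens; up to normalization this is the
    # negative log-likelihood sum in the formula above.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=pad_id,
    )
```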
## 4 Experiments

This section provides experimental details and analyses to demonstrate our model’s effectiveness. See Appendix A for implementation details.

### 4.1 Results on Cross-modal Tasks

Table 2: Experimental results on cross-modal understanding tasks including VQA and visual entailment. Note that we report the best results from the previous SOTAs; specifically, SimVLM is a huge-size model comparable to ViT-Huge pretrained on 1.8B image-text pairs, and Florence is built with CoSwin-H and RoBERTa and pretrained on 900M image-text pairs.

| Model | VQA test-dev | VQA test-std | SNLI-VE dev | SNLI-VE test |
| --- | --- | --- | --- | --- |
| UNITER [14] | 73.8 | 74.0 | 79.4 | 79.4 |
| OSCAR [15] | 73.6 | 73.8 | - | - |
| VILLA [16] | 74.7 | 74.9 | 80.2 | 80.0 |
| VL-T5 [56] | - | 70.3 | - | - |
| VinVL [17] | 76.5 | 76.6 | - | - |
| UNIMO [46] | 75.0 | 75.3 | 81.1 | 80.6 |
| ALBEF [69] | 75.8 | 76.0 | 80.8 | 80.9 |
| METER [70] | 77.7 | 77.6 | 80.9 | 81.2 |
| VLMo [48] | 79.9 | 80.0 | - | - |
| SimVLM [22] | 80.0 | 80.3 | 86.2 | 86.3 |
| Florence [23] | 80.2 | 80.4 | - | - |
| $\text{OFA}\rm_{Tiny}$ | 70.3 | 70.4 | 85.3 | 85.2 |
| $\text{OFA}\rm_{Medium}$ | 75.4 | 75.5 | 86.6 | 87.0 |
| $\text{OFA}\rm_{Base}$ | 78.0 | 78.1 | 89.3 | 89.2 |
| $\text{OFA}\rm_{Large}$ | 80.3 | 80.5 | 90.3 | 90.2 |
| OFA | 82.0 | 82.0 | 91.0 | 91.2 |

Table 3: Experimental results on MSCOCO Image Captioning. We report the results on the Karpathy test split. Note that SimVLM and LEMON are huge-size models.

| Model | BLEU@4 (CE) | METEOR (CE) | CIDEr (CE) | SPICE (CE) | BLEU@4 (CIDEr opt.) | METEOR (CIDEr opt.) | CIDEr (CIDEr opt.) | SPICE (CIDEr opt.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VL-T5 [56] | 34.5 | 28.7 | 116.5 | 21.9 | - | - | - | - |
| OSCAR [15] | 37.4 | 30.7 | 127.8 | 23.5 | 41.7 | 30.6 | 140.0 | 24.5 |
| UNICORN [57] | 35.8 | 28.4 | 119.1 | 21.5 | - | - | - | - |
| VinVL [17] | 38.5 | 30.4 | 130.8 | 23.4 | 41.0 | 31.1 | 140.9 | 25.2 |
| UNIMO [46] | 39.6 | - | 127.7 | - | - | - | - | - |
| LEMON [71] | 41.5 | 30.8 | 139.1 | 24.1 | 42.6 | 31.4 | 145.5 | 25.5 |
| SimVLM [22] | 40.6 | 33.7 | 143.3 | 25.4 | - | - | - | - |
| $\text{OFA}\rm_{Tiny}$ | 35.9 | 28.1 | 119.0 | 21.6 | 38.1 | 29.2 | 128.7 | 23.1 |
| $\text{OFA}\rm_{Medium}$ | 39.1 | 30.0 | 130.4 | 23.2 | 41.4 | 30.8 | 140.7 | 24.8 |
| $\text{OFA}\rm_{Base}$ | 41.0 | 30.9 | 138.2 | 24.2 | 42.8 | 31.7 | 146.7 | 25.8 |
| $\text{OFA}\rm_{Large}$ | 42.4 | 31.5 | 142.2 | 24.5 | 43.6 | 32.2 | 150.7 | 26.2 |
| OFA | 43.9 | 31.8 | 145.3 | 24.8 | 44.9 | 32.5 | 154.9 | 26.6 |

Table 4: Experimental results on the $3$ datasets of referring expression comprehension, namely RefCOCO, RefCOCO+, and RefCOCOg. We report Acc@0.5 on the different test splits of the datasets.

| Model | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-u | RefCOCOg test-u |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VL-T5 [56] | - | - | - | - | - | - | - | 71.3 |
| UNITER [14] | 81.41 | 87.04 | 74.17 | 75.90 | 81.45 | 66.70 | 74.86 | 75.77 |
| VILLA [16] | 82.39 | 87.48 | 74.84 | 76.17 | 81.54 | 66.84 | 76.18 | 76.71 |
| MDETR [72] | 86.75 | 89.58 | 81.41 | 79.52 | 84.09 | 70.62 | 81.64 | 80.89 |
| UNICORN [57] | 88.29 | 90.42 | 83.06 | 80.30 | 85.05 | 71.88 | 83.44 | 83.93 |
| $\text{OFA}\rm_{Tiny}$ | 80.20 | 84.07 | 75.00 | 68.22 | 75.13 | 57.66 | 72.02 | 69.74 |
| $\text{OFA}\rm_{Medium}$ | 85.34 | 87.68 | 77.92 | 76.09 | 83.04 | 66.25 | 78.76 | 78.58 |
| $\text{OFA}\rm_{Base}$ | 88.48 | 90.67 | 83.30 | 81.39 | 87.15 | 74.29 | 82.29 | 82.31 |
| $\text{OFA}\rm_{Large}$ | 90.05 | 92.93 | 85.26 | 85.80 | 89.87 | 79.22 | 85.89 | 86.55 |
| OFA | 92.04 | 94.03 | 88.44 | 87.86 | 91.70 | 80.71 | 88.07 | 88.78 |

We evaluate our models on different cross-modal downstream tasks, covering cross-modal understanding and generation. Specifically, we conduct experiments on multimodal understanding datasets, including VQAv2 for visual question answering and SNLI-VE [73] for visual entailment, and on multimodal generation, including MSCOCO Image Caption [74] for image captioning, RefCOCO / RefCOCO+ / RefCOCOg [75, 76] for referring expression comprehension (as this task can be viewed as bounding box generation), and MSCOCO Image Caption for text-to-image generation. More details are provided in Appendix A.3.

Table 2 presents the performance of OFA and baseline models on VQA and SNLI-VE. In general, OFA achieves the best performance in both tasks, with $82.0$ on the VQA test-std set and $91.2$ on the SNLI-VE test set. For smaller-size models, $\text{OFA}\rm_{Large}$ can outperform the recent SOTAs, e.g., VLMo and SimVLM, and $\text{OFA}\rm_{Base}$ can beat the SOTAs preceding those two models in both tasks. This demonstrates that OFA can achieve superior performance on cross-modal understanding tasks and that scaling up OFA brings significant improvements, reflecting the strong potential of large-scale pretrained models.
Table 3 presents the performance of OFA and baseline models on the MSCOCO image captioning dataset. We report the results on the Karpathy test split, and we demonstrate the performance of models trained with cross-entropy optimization and additionally with CIDEr optimization based on reinforcement learning. In comparison with the previous SOTA $\text{SimVLM}\rm_{Huge}$ for cross-entropy optimization, OFA outperforms it by around $2$ points in the CIDEr evaluation. For CIDEr optimization, OFA of the $3$ sizes all outperform the huge-size LEMON, and OFA sets a new SOTA of $154.9$ CIDEr score. By May 31, 2022, the single-model OFA had topped the MSCOCO Image Caption Leaderboard (https://competitions.codalab.org/competitions/3221#results).

To evaluate the capability of visual grounding, we conduct experiments on RefCOCO, RefCOCO+, and RefCOCOg. Since we unify locations into the vocabulary, visual grounding can be viewed as a sequence generation task. As there is only one target for each query, we limit the generation length to $4$ in order to generate a bounding box $\langle x_{1},y_{1},x_{2},y_{2}\rangle$. Experimental results in Table 4 show that OFA reaches SOTA performance on the $3$ datasets. Compared with the previous SOTA UNICORN [57], OFA achieves significant improvement, with gains of $3.61$, $6.65$ and $4.85$ points on the testA sets of RefCOCO and RefCOCO+ as well as the test-u set of RefCOCOg.

Figure 3: Qualitative comparison with state-of-the-art models for the text-to-image generation task. We present more qualitative examples of text-to-image generation for better demonstration in Appendix C.

Text-to-image generation is a challenging task even for pretrained models. As we pretrain OFA with the image-infilling task, i.e., recovering masked patches by generating the corresponding codes [36], OFA is already able to generate image codes. We thus directly finetune OFA on the MSCOCO Image Caption dataset for text-to-code generation. At the inference stage, we additionally transform the generated codes to an image with the code decoder. Specifically, we use the codes from VQGAN [54] following [52]. Experimental results show that OFA outperforms the baselines in all the metrics. Note that increasing the sampling size during inference is expected to bring clear improvements on FID and IS. Compared with DALLE [50], CogView [51] and NÜWA [52], whose sampling sizes are $512$, $60$ and $60$, respectively, OFA outperforms these SOTA methods on FID and IS with a much smaller sampling size of $24$. This illustrates that OFA has learned better correspondence among the query text, the image, and the image codes.

Table 5: Experimental results on text-to-image generation. Models are evaluated on FID, CLIPSIM, and IS scores. OFA outperforms the baselines, including the concurrent SOTA NÜWA. We report the results of $\text{OFA}_{\rm Large}$. Note that GLIDE has an additional $1.5B$ parameters for upsampling on top of its $3.5B$ parameters.

| Model | FID$\downarrow$ | CLIPSIM$\uparrow$ | IS$\uparrow$ |
| --- | --- | --- | --- |
| DALLE [50] | 27.5 | - | 17.9 |
| CogView [51] | 27.1 | 33.3 | 18.2 |
| GLIDE [77] | 12.2 | - | - |
| Unifying [78] | 29.9 | 30.9 | - |
| NÜWA [52] | 12.9 | 34.3 | 27.2 |
| OFA | 10.5 | 34.4 | 31.1 |

We compare OFA with CogView and GLIDE on generation quality with normal and counterfactual queries (for more implementation details, please refer to Appendix A.3). Normal queries describe existing things in the real world, while counterfactual queries refer to those describing things that could only exist in our imagination.
For normal queries, both CogView and OFA generate images semantically consistent with the given texts, in comparison with GLIDE. The generated examples from our model can provide more sophisticated details of objects, say the horse and the double-decker bus. For counterfactual queries, we find that OFA is the only one that can generate the three imaginary scenes, which indicates its imaginative power based on its strong capability to align text to the image. See Appendix C for more qualitative examples.

### 4.2 Results on Uni-modal Tasks

As the design of OFA unifies different modalities, we evaluate its performance on uni-modal tasks, namely tasks of natural language and computer vision. For natural language tasks, we evaluate OFA on $6$ tasks of the GLUE benchmark [79] for natural language understanding and on Gigaword abstractive summarization [80] for natural language generation. For computer vision, we evaluate OFA on the classic ImageNet-1K [81] dataset for image classification. More details are provided in Appendix A.3.

Table 6: Experimental results on the GLUE benchmark datasets [79]. For comparison, we list the performance of multimodal pretrained models as well as the recent SOTA models that were pretrained on natural language data only. Following [28], we finetune RTE and MRPC starting from the checkpoint finetuned on MNLI.

| Model | SST-2 | RTE | MRPC | QQP | MNLI | QNLI |
| --- | --- | --- | --- | --- | --- | --- |
| *Multimodal Pretrained Baseline Models* | | | | | | |
| VisualBERT [38] | 89.4 | 56.6 | 71.9 | 89.4 | 81.6 | 87.0 |
| UNITER [14] | 89.7 | 55.6 | 69.3 | 89.2 | 80.9 | 86.0 |
| VL-BERT [8] | 89.8 | 55.7 | 70.6 | 89.0 | 81.2 | 86.3 |
| VilBERT [13] | 90.4 | 53.7 | 69.0 | 88.6 | 79.9 | 83.8 |
| LXMERT [40] | 90.2 | 57.2 | 69.8 | 75.3 | 80.4 | 84.2 |
| Uni-Perceiver [61] | 90.2 | 64.3 | 86.6 | 87.1 | 81.7 | 89.9 |
| SimVLM [22] | 90.9 | 63.9 | 75.2 | 90.4 | 83.4 | 88.6 |
| FLAVA [60] | 90.9 | 57.8 | 81.4 | 90.4 | 80.3 | 87.3 |
| UNIMO [46] | 96.8 | - | - | - | 89.8 | - |
| *Natural-Language-Pretrained SOTA Models* | | | | | | |
| BERT [2] | 93.2 | 70.4 | 88.0 | 91.3 | 86.6 | 92.3 |
| RoBERTa [28] | 96.4 | 86.6 | 90.9 | 92.2 | 90.2 | 93.9 |
| XLNet [25] | 97.0 | 85.9 | 90.8 | 92.3 | 90.8 | 94.9 |
| ELECTRA [82] | 96.9 | 88.0 | 90.8 | 92.4 | 90.9 | 95.0 |
| DeBERTa [83] | 96.8 | 88.3 | 91.9 | 92.3 | 91.1 | 95.3 |
| *Ours* | | | | | | |
| OFA | 96.6 | 91.0 | 91.7 | 92.5 | 90.2 | 94.8 |

Table 7: Experimental results on Gigaword abstractive summarization. We report performance on the ROUGE evaluation [84].

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| BERTSHARE [85] | 38.13 | 19.81 | 35.62 |
| MASS [86] | 38.73 | 19.71 | 35.96 |
| UniLM [29] | 38.45 | 19.45 | 35.75 |
| PEGASUS [87] | 39.12 | 19.86 | 36.24 |
| ProphetNet [88] | 39.55 | 20.27 | 36.57 |
| UNIMO [46] | 39.71 | 20.37 | 36.88 |
| OFA | 39.81 | 20.66 | 37.11 |

As OFA has been pretrained on plain text data, it can be directly transferred to natural language downstream tasks. Natural language generation is essentially a sequence-to-sequence generation task, and for natural language understanding, typically text classification, we regard the tasks as generation tasks where labels are essentially word sequences. Additionally, for each task, we design a manual instruction to indicate to the model what type of question it should answer. We list our instruction design in Appendix A.3. We demonstrate that even a unified multimodal pretrained model can achieve highly competitive performance in natural language tasks. Specifically, in the evaluation of natural language understanding, OFA surpasses multimodal pretrained models by large margins in all tasks. In comparison with the state-of-the-art natural language pretrained models, including RoBERTa [28], XLNet [25], ELECTRA [82], and DeBERTa [83], OFA reaches a comparable performance.
In the evaluation of natural language generation, OFA even reaches a new state-of-the-art performance on the Gigaword dataset.

Also, OFA can reach a competitive performance in image classification. Table 8 shows the performance of OFA on image classification. $\text{OFA}\rm_{Large}$ achieves higher accuracy than previous backbone models such as EfficientNet-B7 [89] and ViT-L [6]. We also compare OFA with self-supervised pretraining models based on contrastive learning and masked image modeling. OFA outperforms contrastive-based models such as SimCLR [32] and MoCo-v3 [33, 35] with a similar number of parameters. Compared with pretrained models based on masked image modeling, e.g., BEiT-L [36] and MAE-L [37], OFA can achieve similar performance. These results in both natural language and vision tasks indicate that a unified multimodal pretrained model is not only effective in multimodal tasks but also capable of tackling uni-modal tasks, and in the future, it might be sufficient for such a model to solve complex tasks concerning different modality combinations.

Table 8: ImageNet-1K finetuning results. All the listed models do not use extra labeled image classification samples during training for fair comparison. We report the results of $\text{OFA}\rm_{Large}$.

| Model | Top-1 Acc. |
| --- | --- |
| EfficientNet-B7 [89] | 84.3 |
| ViT-L/16 [6] | 82.5 |
| DINO [90] | 82.8 |
| SimCLR v2 [32] | 82.9 |
| MoCo v3 [35] | 84.1 |
| BEiT384-L/16 [36] | 86.3 |
| MAE-L/16 [37] | 85.9 |
| OFA | 85.6 |

### 4.3 Zero-shot Learning & Task Transfer

The instruction-guided pretraining enables OFA to perform zero-shot inference. Following Uni-Perceiver [61], we evaluate our model on the $6$ tasks of the GLUE benchmark, including single-sentence classification and sentence-pair classification. Table 9 shows that OFA generally outperforms Uni-Perceiver. However, both models do not achieve satisfactory performance in sentence-pair classification (with $\text{Acc}.<60\%$). We hypothesize that the absence of sentence-pair data in the pretraining dataset accounts for this. Also, we find that model performance is highly sensitive to the design of instructions. To obtain the best result, one should search for a proper instruction template, possibly from a large pool of candidates. A slight change to the manual prompts or model parameters may drastically influence model performance, which indicates that this behavior is not yet robust. We leave this issue to future work.

We observe that the model can transfer to unseen tasks well with new task instructions. We design a new task called grounded question answering and present examples in Figure 4. In this scenario, given a question about a certain region on the image, the model should provide a correct answer. We find that the model can achieve a satisfactory performance on this new task, which reflects its strong transferability. Besides, OFA can solve tasks with out-of-domain input data. For example, OFA without finetuning achieves satisfactory performance in VQA on out-of-domain images. Examples are demonstrated in Figure 5. OFA can also perform accurate visual grounding on out-of-domain images, e.g., anime pictures, synthetic images, etc., and we demonstrate more examples in Figure 11 in Appendix C.

Table 9: Zero-shot performance on $6$ GLUE tasks and SNLI-VE.

| Model | SST-2 (Acc.) | RTE (Acc.) | MRPC (F1) | QQP (F1) | QNLI (Acc.) | MNLI (Acc.) | SNLI-VE (Acc., dev/test) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Uni-Perceiver | 70.6 | 55.6 | 76.1 | 53.6 | 51.0 | 49.6 | - |
| $\text{OFA}_{\rm Base}$ | 71.6 | 56.7 | 79.5 | 54.0 | 51.4 | 37.3 | 49.71 / 49.18 |

Figure 4: Qualitative results on the unseen task of grounded QA.
We design a new task called grounded question answering, where the model should answer a question about a certain region in the image. More samples are provided in Figure 10 in Appendix C.

Figure 5: Qualitative results on unseen-domain VQA. During pretraining, only real-world photographs are used for VQA. We present cases of VQA on out-of-domain images, i.e., iconic and sci-fi images, and demonstrate OFA's capability of transferring to unseen domains. More samples are provided in Figure 9 in Appendix C.

### 4.4 Ablation on Multitask Pretraining

Thanks to the unified framework, OFA has been pretrained on multiple tasks and is thus endowed with comprehensive capabilities. However, the effect of each task remains unexplored. We verify their effects on multiple downstream tasks, including image captioning, VQA, image classification, and text-to-image generation.

We first evaluate how uni-modal pretraining tasks influence the performance in both cross-modal and uni-modal tasks. Table 10 demonstrates our experimental results. We observe some interesting phenomena about the effects of uni-modal pretraining tasks. Text infilling brings improvements on image captioning ($+0.8$ CIDEr) and VQA ($+0.46$ Acc.). Natural language pretraining improves the contextualized representation of language and thus enhances performance in cross-modal tasks. However, we notice that the language pretraining task may degrade the performance in image classification, leading to a decrease on ImageNet-1K ($-1.0$ Acc.). Also, it is interesting to find that it does not encourage improvement in text-to-image generation ($-0.1$ CLIPSIM). This may be attributed to the simplicity of the text in this task, which suggests that an improved representation of language has little effect on the performance. As to image infilling, it significantly improves the performance in image classification ($+1.0$ Acc.) and text-to-image generation ($+0.6$ CLIPSIM). Learning to recover images is an effective self-supervised task for image representation, and it also encourages the decoder’s ability to generate image codes. However, it hurts the performance in image captioning and VQA. Both tasks require a strong capability in generating texts, and the decoder’s learning of image generation naturally brings performance degradation in captioning ($-0.7$ CIDEr) and VQA ($-0.3$ Acc.).

Table 10: Ablation results of OFA. All models are pretrained for $250k$ steps. w/o ground. represents the removal of both the visual grounding and grounded captioning tasks. Note that all models are only finetuned with the cross-entropy loss in image captioning.

| Model | Caption (CIDEr) | VQA (Test-dev) | ImageNet (Top-1 Acc.) | Image Generation (FID / CLIPSIM / IS) |
| --- | --- | --- | --- | --- |
| $\text{OFA}\rm_{Base}$ | 135.6 | 76.0 | 82.2 | 20.8 / 31.6 / 21.5 |
| w/o text infill. | 134.8 | 75.6 | 83.2 | 20.3 / 31.7 / 21.8 |
| w/o image infill. | 136.3 | 76.3 | 81.8 | 23.2 / 31.0 / 20.0 |
| w/o det. | 133.3 | 75.4 | 81.4 | 20.9 / 31.5 / 21.6 |
| w/o ground. | 134.2 | 75.5 | 82.0 | 21.2 / 31.5 / 21.5 |

Furthermore, we evaluate how the multimodal tasks impact the performance. Previous studies have provided evidence of the contribution of conventional pretraining tasks, e.g., MLM, MOC, ITM, VQA, image captioning, etc. [14, 17]. However, they leave out other tasks, e.g., detection and visual grounding & grounded captioning. We conduct experiments on these tasks and find that tasks predicting regions are crucial to multimodal tasks, with a performance increase in image captioning ($+2.3$ CIDEr & $+1.4$ CIDEr) and VQA ($+0.6$ Acc. & $+0.5$ Acc.).
This suggests that detection and visual grounding & grounded captioning help the model grasp fine-grained alignments between vision and language. Region information contributes little to text-to-image generation ($+0.1$ CLIPSIM & $+0.1$ CLIPSIM), as this task requires far less text-region alignment information. We are surprised to find that detection can improve the performance in visual understanding ($+0.8$ Acc.). This indicates that incorporating region information might be essential to visual understanding, especially on images with complex objects.

## 5 Conclusion

In this work, we propose OFA, a Task-Agnostic and Modality-Agnostic framework supporting Task Comprehensiveness. OFA achieves unification in architecture, tasks, and modalities, and is thus capable of multimodal & uni-modal understanding and generation without additional task-specific layers or designs. Our experiments show that OFA creates new SOTAs in a series of tasks, including image captioning, VQA, visual entailment, and referring expression comprehension. OFA also demonstrates performance comparable to language / vision pretrained SOTA models in uni-modal understanding and generation tasks, e.g., GLUE, abstractive summarization, and image classification. We provide a further analysis to demonstrate its capability in zero-shot learning and domain & task transfer, and we also verify the effectiveness of the pretraining tasks. In the future, we will continue exploring the issues discovered in this work. We will also endeavor to find a reasonable solution for building an omni-model essentially generalizable to the complex real world.

## Acknowledgments

We would like to thank Jie Zhang, Yong Li, Jiamang Wang, Shao Yuan, and Zheng Cao for their support to this project, and we would like to thank Guangxiang Zhao and Fei Sun for their insightful comments on our paper.

## References

* [1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS 2017, pages 5998–6008, 2017.
* [2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, NAACL-HLT 2019, pages 4171–4186. Association for Computational Linguistics, 2019.
* [3] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
* [4] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
* [5] Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862, 2019.
* [6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [7] Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver: General perception with iterative attention. arXiv preprint arXiv:2103.03206, 2021.
* [8] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. In International Conference on Learning Representations, 2019. * [9] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. * [10] Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021. * [11] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. PMLR, 2019. * [12] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. * [13] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019. * [14] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In ECCV, 2020. * [15] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020. * [16] Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. Large-scale adversarial training for vision-and-language representation learning. ArXiv, abs/2006.06195, 2020. * [17] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5575–5584, 2021. * [18] Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, et al. M6: A chinese multimodal pretrainer. arXiv preprint arXiv:2103.00823, 2021. * [19] Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie Tang, Jingren Zhou, and Hongxia Yang. M6-ufc: Unifying multi-modal controls for conditional image synthesis. arXiv preprint arXiv:2105.14211, 2021. * [20] An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. Exploring sparse expert models and beyond. arXiv preprint arXiv:2105.15082, 2021. * [21] Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, et al. M6-10t: A sharing-delinking paradigm for efficient multi-trillion parameter pretraining. arXiv preprint arXiv:2110.03888, 2021. * [22] Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. ArXiv, abs/2108.10904, 2021. * [23] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel C. F. 
Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. ArXiv, abs/2111.11432, 2021. * [24] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/ languageunsupervised/language understanding paper. pdf, 2018. * [25] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS 2019, pages 5754–5764, 2019. * [26] Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. ERNIE: enhanced representation through knowledge integration. CoRR, abs/1904.09223, 2019. * [27] Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE 2.0: A continual pre-training framework for language understanding. CoRR, abs/1907.12412, 2019. * [28] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019. * [29] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and generation. In NeurIPS 2019, pages 13042–13054, 2019. * [30] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. * [31] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL 2020, July 2020. * [32] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020. * [33] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020. * [34] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020. * [35] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750–15758, 2021. * [36] Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021. * [37] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021. * [38] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 
Visualbert: A simple and performant baseline for vision and language. ArXiv, abs/1908.03557, 2019. * [39] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and VQA. In AAAI 2020, pages 13041–13049, 2020. * [40] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, 2019. * [41] Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. CoRR, abs/1908.06066, 2019. * [42] Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, and Hongxia Yang. Interbert: Vision-and-language interaction for multi-modal pretraining. arXiv preprint arXiv:2003.13198, 2020. * [43] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10437–10446, 2020. * [44] Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, and Fei Huang. E2e-vlp: End-to-end vision-language pre-training enhanced by visual learning. arXiv preprint arXiv:2106.01804, 2021. * [45] Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3208–3216, 2021. * [46] Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, ACL/IJCNLP 2021, pages 2592–2607. Association for Computational Linguistics, 2021. * [47] Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. ArXiv, abs/2004.00849, 2020. * [48] Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. ArXiv, abs/2111.02358, 2021. * [49] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, ICML 2021, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR, 2021. * [50] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092, 2021. * [51] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. arXiv preprint arXiv:2105.13290, 2021. * [52] Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. N$\backslash$" uwa: Visual synthesis pre-training for neural visual world creation. arXiv preprint arXiv:2111.12417, 2021. * [53] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 
Neural discrete representation learning. In NIPS, 2017. * [54] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12873–12883, 2021. * [55] Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. One model to learn them all. arXiv preprint arXiv:1706.05137, 2017. * [56] Jaemin Cho, Jie Lei, Haochen Tan, and Mohit Bansal. Unifying vision-and-language tasks via text generation. In ICML, 2021. * [57] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. Crossing the format boundary of text and boxes: Towards unified vision-language modeling. ArXiv, abs/2111.12085, 2021. * [58] Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. Perceiver io: A general architecture for structured inputs & outputs. arXiv preprint arXiv:2107.14795, 2021. * [59] Ronghang Hu and Amanpreet Singh. Unit: Multimodal multitask learning with a unified transformer. arXiv preprint arXiv:2102.10772, 2021. * [60] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. arXiv preprint arXiv:2112.04482, 2021. * [61] Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Xiaogang Wang, Hongsheng Li, Xiaohua Wang, and Jifeng Dai. Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. arXiv preprint arXiv:2112.01522, 2021. * [62] Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. arXiv preprint arXiv:2106.04803, 2021. * [63] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, 2016. * [64] Ting Chen, Saurabh Saxena, Lala Li, David J Fleet, and Geoffrey Hinton. Pix2seq: A language modeling framework for object detection. arXiv preprint arXiv:2109.10852, 2021. * [65] Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. * [66] Sam Shleifer, Jason Weston, and Myle Ott. Normformer: Improved transformer pretraining with extra normalization. arXiv preprint arXiv:2110.09456, 2021. * [67] Guolin Ke, Di He, and Tie-Yan Liu. Rethinking positional encoding in language pre-training. In International Conference on Learning Representations, 2020. * [68] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2009. * [69] Junnan Li, Ramprasaath R Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. * [70] Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Nanyun Peng, Zicheng Liu, and Michael Zeng. An empirical study of training end-to-end vision-and-language transformers. ArXiv, abs/2111.02387, 2021. * [71] Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 
Scaling up vision-language pre-training for image captioning. CoRR, abs/2111.12233, 2021. * [72] Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. Mdetr - modulated detection for end-to-end multi-modal understanding. ArXiv, abs/2104.12763, 2021. * [73] Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706, 2019. * [74] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. * [75] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In European Conference on Computer Vision, pages 69–85. Springer, 2016. * [76] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 11–20, 2016. * [77] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021. * [78] Yupan Huang, Hongwei Xue, Bei Liu, and Yutong Lu. Unifying multimodal transformer for bi-directional image and text generation. In Proceedings of the 29th ACM International Conference on Multimedia, pages 1138–1147, 2021. * [79] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. * [80] Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, 2015. * [81] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009. * [82] Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: pre-training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020. OpenReview.net, 2020. * [83] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: decoding-enhanced bert with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021. * [84] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain, July 2004\. Association for Computational Linguistics. * [85] Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264–280, 2020. * [86] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: masked sequence to sequence pre-training for language generation. In ICML 2019, pages 5926–5936, 2019. * [87] Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. 
In International Conference on Machine Learning, pages 11328–11339. PMLR, 2020. * [88] Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2401–2410, 2020. * [89] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019. * [90] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021. * [91] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558–3568, 2021. * [92] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL 2018, pages 2556–2565, 2018. * [93] Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS 2011, pages 1143–1151, 2011. * [94] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017. * [95] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913, 2017. * [96] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR 2019, pages 6700–6709, 2019. * [97] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64–73, 2016. * [98] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4. International Journal of Computer Vision, 128(7):1956–1981, 2020\. * [99] Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8430–8439, 2019. * [100] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. * [101] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR 2016, pages 770–778, 2016. * [102] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR 2019, 2019. 
* [103] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016. * [104] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318, 2002. * [105] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72, 2005. * [106] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575, 2015. * [107] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In European conference on computer vision, pages 382–398. Springer, 2016. * [108] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137, 2015. * [109] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. * [110] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in neural information processing systems, 29:2234–2242, 2016. * [111] Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7008–7024, 2017. * [112] Guangxiang Zhao, Wenkai Yang, Xuancheng Ren, Lei Li, and Xu Sun. Well-classified examples are underestimated in classification with deep neural networks. CoRR, abs/2110.06537, 2021. * [113] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702–703, 2020. * [114] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13001–13008, 2020. * [115] Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. * [116] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 6022–6031. IEEE, 2019. * [117] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. 
arXiv preprint arXiv:2111.02114, 2021.

## Appendix A Implementation Details

### A.1 Pretraining Datasets

We construct pretraining datasets by incorporating Vision & Language data (i.e., image-text pairs), Vision data (i.e., raw image data, object-labeled data), and Language data (i.e., plain texts). For replication, we only use pretraining datasets that are publicly available. We carefully filter our pretraining data and exclude images that appear in the validation and test sets of downstream tasks to avoid data leakage. The statistics on the pretraining datasets are listed in Table 11.

#### Cross-modal Data

For vision & language pretraining, we mainly apply image-text pairs, including image-caption pairs, image-QA pairs, and image-region pairs, as the pretraining data. For the pretraining tasks of image captioning and image-text matching, we collect Conceptual Caption 12M (CC12M) [91], Conceptual Captions (CC3M) [92], SBU [93], MSCOCO image captions (COCO) [74], and Visual Genome Captions (VG Captions) [94]. Specifically, the part of the data from VG requires some additional processing. As the texts in VG Captions describe local regions on the images, we retrieve regions with an area larger than $16,384$ pixels and construct region-caption pairs. For visual question answering, we collect VQAv2 [95], VG-QA [94], as well as GQA [96]. VQAv2 is a visual question answering dataset with real-world photographs from COCO. VG-QA is also a visual question answering dataset with real-world photographs from VG. The questions of VG-QA are related to specific regions on the images. GQA is a large VQA dataset featuring compositional questions. The images of GQA are also collected from VG. For visual grounding and grounded captioning, we collect data from RefCOCO [75], RefCOCO+ [75], RefCOCOg [76] and VG Captions. Additional processing is applied to VG Captions for this task. Specifically, we use the data of VG that contains regions with an area smaller than $16,384$ pixels for visual grounding, in order to encourage the model to grasp fine-grained alignments between vision and language.

#### Uni-modal Data

Uni-modal data includes vision and language data. Vision data consists of raw images for image infilling and object-labeled images for object detection. For image infilling, we collect raw images from OpenImages, YFCC100M [97] and ImageNet-21K [81], and exclude annotations. Thus the model is unable to access labels in the pretraining stage. For object detection, we collect OpenImages [98], Object365 [99], VG, and COCO. Language data consists of plain texts, i.e., passages consisting of sentences. We use around 140GB of data from Pile [100] to leverage its diversity. Specifically, we extract natural language data and apply preprocessing methods, including truncation to a length of $512$.

Table 11: Statistics on the datasets of pretraining tasks. “#Image” denotes the number of distinct images, and “#Sample” denotes the number of samples. *For language data, we report its storage following the previous studies [2, 28].
| Type | Pretraining Task | Source | #Image | #Sample |
| --- | --- | --- | --- | --- |
| Vision & Language | Image Captioning, Image-Text Matching | CC12M, CC3M, SBU, COCO, VG-Cap | 14.78M | 15.25M |
| Vision & Language | Visual Question Answering | VQAv2, VG-QA, GQA | 178K | 2.92M |
| Vision & Language | Visual Grounding, Grounded Captioning | RefCOCO, RefCOCO+, RefCOCOg, VG-Cap | 131K | 3.20M |
| Vision | Detection | OpenImages, Object365, VG, COCO | 2.98M | 3.00M |
| Vision | Image Infilling | OpenImages, YFCC100M, ImageNet-21K | 36.27M | - |
| Language | Masked Language Modeling | Pile (Filtered) | - | 140GB* |

### A.2 Pretraining Details

For the image processing, we first resize and crop the images into different resolutions, $256\times 256$ for $\text{OFA}\rm_{Tiny}$ and $\text{OFA}\rm_{Medium}$, $384\times 384$ for $\text{OFA}\rm_{Base}$, and $480\times 480$ for $\text{OFA}\rm_{Large}$ and $\text{OFA}\rm_{Huge}$, with a fixed patch size of $16\times 16$. Since training $\text{OFA}\rm_{Large}$ and $\text{OFA}\rm_{Huge}$ is time- and computation-consuming, we first train them with images at the resolutions of $384\times 384$ and $256\times 256$, and continue pretraining with images at the resolution of $480\times 480$. For each patch, we obtain its feature vector with the first three blocks of ResNet [101]. The ResNet module is jointly trained along with the Transformer module. Note that through extensive experiments we find that randomly sampling patches [47] does not bring additional benefits in our scenario.

For the text processing, we tokenize the texts with the same BPE tokenizer [63] as BART [31]. The maximum text sequence length of both the encoder and the decoder is set to $256$. We share parameters between the embedding and the decoder softmax output layer.

From our preliminary experiments, we find that the initialization of the Transformer plays an important role. For $\text{OFA}\rm_{Base}$ and $\text{OFA}\rm_{Large}$, we initialize the Transformer with most of the weights of $\text{BART}\rm_{Base}$ and $\text{BART}\rm_{Large}$, considering the slight differences between the OFA Transformer and BART described in Sec. 3.1. For OFA of the other sizes, we pretrain language models with the same pretraining strategy as BART and use the pretrained weights to initialize the Transformer in OFA.

We use the AdamW [102] optimizer with $(\beta_{1},\beta_{2})=(0.9,0.999)$ and $\epsilon=1e\text{-}8$ to pretrain our models. We set the peak learning rate to $2e\text{-}4$, and apply a scheduler with linear decay and a warmup ratio of $0.01$ to control the learning rate. For regularization, we set dropout to $0.1$ and use weight decay of $0.01$. We employ stochastic depth [103] with a $0.1$ rate (applied to the encoder and decoder except for the convolution blocks). We mix all the pretraining data within each batch, which contains $2,048$ vision & language samples, $256$ object detection samples, $256$ image-only samples and $512$ text-only samples. All models are pretrained for at least $300K$ steps except the models used for the ablation study.
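As a compact summary of the optimization setup above (AdamW with $(\beta_{1},\beta_{2})=(0.9,0.999)$, $\epsilon=1e\text{-}8$, peak learning rate $2e\text{-}4$, warmup ratio $0.01$, linear decay, weight decay $0.01$), the sketch below builds the optimizer and schedule in PyTorch; the total step count and the single parameter group are simplifying assumptions.

```python
import torch

def build_optimizer_and_scheduler(model, total_steps=300_000,
                                  peak_lr=2e-4, warmup_ratio=0.01, weight_decay=0.01):
    """AdamW with linear warmup to the peak LR followed by linear decay to zero."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                                  betas=(0.9, 0.999), eps=1e-8,
                                  weight_decay=weight_decay)
    warmup_steps = int(total_steps * warmup_ratio)

    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)        # linear warmup
        # linear decay over the remaining steps
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler  # call scheduler.step() once per training step
```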
### A.3 Details of Downstream Tasks

We verify the capability of OFA on various downstream tasks in both finetuning and zero-shot settings. We design various task-specific instructions to transfer the knowledge learned from pretraining to downstream tasks effectively. The instructions for the different tasks are listed in Table 12. For finetuning, if not specified, the input image resolution is set to $480\times 480$, and the other hyper-parameters remain the same as for pretraining. The experimental details of the different downstream tasks, including both multimodal and uni-modal tasks, are listed below:

#### Image Captioning

Image captioning is a standard vision & language task that requires models to generate an appropriate and fluent caption for an image. We adopt the most widely used MSCOCO Image Caption dataset [74] to evaluate the multi-modal generation capability of OFA. We report BLEU-4 [104], METEOR [105], CIDEr [106], and SPICE [107] scores on the Karpathy test split [108]. Following the previous standard practice, we first finetune OFA with the cross-entropy loss for $2$ epochs with a batch size of $128$ and a learning rate of $1e-5$, with label smoothing set to $0.1$. We then finetune the model with CIDEr optimization for $3$ epochs with a batch size of $64$, and disable dropout and stochastic depth. We report the scores of both stages.

#### Visual Question Answering

Visual question answering (VQA) is a cross-modal task that requires the model to answer a question given an image. Previous works such as VLMo [48] or SimVLM [22] define VQA as a classification task. They use a linear output layer to predict the probability of each candidate answer from a given set. In contrast with these studies, to adapt the generative OFA model to the VQA benchmark, we use the Trie-based search strategy mentioned in Sec. 3.4 to ensure that the answer generated by OFA is constrained to the candidate set (see the sketch after Table 12). We evaluate our model against the other baselines on the commonly used VQAv2 dataset [95]. Accuracy scores on both the test-dev and test-std sets are reported. The OFA models of all the reported sizes are finetuned for $40,000$ steps with a batch size of $512$. The learning rate is $5e-5$ with label smoothing of $0.1$. When finetuning $\text{OFA}\rm_{Large}$ and $\text{OFA}\rm_{Huge}$, we increase the image resolution from $480$ to $640$. Linear interpolation of the image absolute positional embedding proposed in [6] is employed when transferring the pretrained OFA to VQA finetuning. During Trie-based searching, we constrain the generated answers to the most frequent $3,129$ answer candidates. Exponential moving average (EMA) with a decay rate of $0.9999$ is employed in finetuning.

Table 12: Instructions for downstream tasks.

| Task | Dataset | Instruction | Target |
| --- | --- | --- | --- |
| Image Captioning | COCO | $\rm\left[\textbf{Image}\right]$ What does the image describe? | {Caption} |
| Visual Question Answering | VQA | $\rm\left[\textbf{Image}\right]$ {Question} | {Answer} |
| Visual Entailment | SNLI-VE | $\rm\left[\textbf{Image}\right]$ Can image and text1 “{Text1}” imply text2 “{Text2}”? | Yes/No/Maybe |
| Referring Expression Comprehension | RefCOCO, RefCOCO+, RefCOCOg | $\rm\left[\textbf{Image}\right]$ Which region does the text “{Text}” describe? | {Location} |
| Image Generation | COCO | What is the complete image? caption: {Caption} | {Image} |
| Image Classification | ImageNet-1K | $\rm\left[\textbf{Image}\right]$ What does the image describe? | {Label} |
| Single-Sentence Classification | SST-2 | Is the sentiment of text “{Text}” positive or negative? | Positive/Negative |
| Sentence-Pair Classification | RTE | Can text1 “{Text1}” imply text2 “{Text2}”? | Yes/No |
| Sentence-Pair Classification | MRPC | Does text1 “{Text1}” and text2 “{Text2}” have the same semantics? | Yes/No |
| Sentence-Pair Classification | QQP | Is question “{Question1}” and question “{Question2}” equivalent? | Yes/No |
| Sentence-Pair Classification | MNLI | Can text1 “{Text1}” imply text2 “{Text2}”? | Yes/No/Maybe |
| Sentence-Pair Classification | QNLI | Does “{Text}” contain the answer to question “{Question}”? | Yes/No |
| Text Summarization | Gigaword | What is the summary of article “{Article}”? | {Summary} |
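The VQA paragraph above constrains generation to the $3,129$ candidate answers with Trie-based search (Sec. 3.4 and Appendix B). The sketch below shows a minimal prefix tree over tokenized labels and the set of allowed next tokens for a given prefix; the token ids in the toy usage are made up, and masking the logits of all other tokens to $-\infty$ is left to the surrounding decoding loop.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # token id -> TrieNode
        self.is_end = False

def build_trie(candidate_label_token_ids):
    """Build a prefix tree over the tokenized candidate labels (e.g., the VQA answer set)."""
    root = TrieNode()
    for token_ids in candidate_label_token_ids:
        node = root
        for tok in token_ids:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root

def allowed_next_tokens(root, prefix_token_ids):
    """Return the token ids that may follow the generated prefix; all others get logit -inf."""
    node = root
    for tok in prefix_token_ids:
        if tok not in node.children:
            return set()     # the prefix is not a valid label prefix
        node = node.children[tok]
    return set(node.children.keys())

# Toy usage with made-up token ids for the labels "blue sky", "blue ocean", "green":
trie = build_trie([[3, 7], [3, 9], [5]])
print(allowed_next_tokens(trie, []))   # {3, 5}
print(allowed_next_tokens(trie, [3]))  # {7, 9}
```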
#### Visual Entailment

Visual entailment requires the model to evaluate how the given image and text are semantically correlated, i.e., entailment, neutral, or contradiction. We perform experiments on the SNLI-VE dataset [73]. The image premise, text premise and text hypothesis are fed to the encoder, and the decoder generates the appropriate labels. To transfer the knowledge learned during pretraining to this task, we convert the labels entailment/neutral/contradiction to yes/maybe/no. We also use the Trie-based search strategy to constrain the generated labels to the candidate set. We report accuracy on both the dev and test sets. The OFA model is finetuned for $6$ epochs with a learning rate of $2e-5$ and a batch size of $256$.

#### Referring Expression Comprehension

Referring expression comprehension requires models to locate an image region described by a language query. Different from the approach taken by most previous methods [13, 14], which rank a set of candidate bounding boxes detected by a pretrained object detector, our method directly predicts the best matching bounding box without any proposals. We perform experiments on RefCOCO [75], RefCOCO+ [75], and RefCOCOg [76]. Consistent with the other downstream tasks, we formulate referring expression comprehension as a conditional sequence generation task. In detail, given an image and a language query, OFA generates the box sequence (e.g., $\langle x_{1},y_{1},x_{2},y_{2}\rangle$) in an autoregressive manner. We report the standard metric [email protected] on the validation and test sets. For finetuning, the input image resolution is set to $512\times 512$. We finetune the OFA model on each dataset for about $10$ epochs with a batch size of $128$. The learning rate is $3e-5$ with label smoothing of $0.1$. Each query corresponds to only one image region, so we limit the maximum generated length to $4$ during inference.

#### Image Generation

Following the same setting as [52], we train our model on the MS COCO train split and evaluate it on the validation split by randomly sampling $30,000$ images. We use Fréchet Inception Distance (FID) [109] and Inception Score (IS) [110] to evaluate the quality of the images. Following the previous studies [78, 52], we also compute the CLIP Similarity Score (CLIPSIM) to evaluate the semantic similarity between the query text and the generated images. During finetuning, OFA learns to generate the image code sequence according to the given text query only. The model is first finetuned with cross-entropy and then with CLIPSIM optimization following [111, 78]. In the first stage, we finetune the OFA model for about $50$ epochs with a batch size of $512$ and a learning rate of $1e-3$. In the second stage, the model is finetuned for an extra $5,000$ steps with a batch size of $32$ and a learning rate of $1e-6$. During the evaluation, we sample $24$ images with a resolution of $256\times 256$ for each query and choose the best one using the pretrained CLIP model [49]. For the case study, we compare OFA with CogView and GLIDE. CogView provides an API website (https://wudao.aminer.cn/CogView/index.html). Note that this API samples 8 images at a resolution of $512\times 512$ for each query. We select the first of the generated images and resize it to a resolution of $256\times 256$. GLIDE provides a Colab notebook (https://colab.research.google.com/drive/1q6tJ58UKod1eCOkbaUNGzF3K5BbXlB5m). Note that the only publicly available GLIDE model is of base size ($\sim$385M).
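The CLIP-based selection step described above (sampling 24 candidate images per query and keeping the best match) can be sketched roughly as follows with the publicly released CLIP package. This is an illustrative reconstruction of the reranking idea, not the OFA evaluation code, and the model variant chosen here is arbitrary.

```python
# Illustrative sketch of CLIP-based reranking of generated images (not OFA code).
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def pick_best_image(image_paths, query_text):
    """Return the path of the generated image whose CLIP embedding best matches the query."""
    images = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)
    text = clip.tokenize([query_text]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    scores = (img_feat @ txt_feat.T).squeeze(-1)  # cosine similarity per image
    return image_paths[scores.argmax().item()]
```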
#### Image Classification

We provide finetuning results on ImageNet-1K [81] following recent studies in self-supervised learning for computer vision. During finetuning and inference, a Trie-based search strategy is employed to constrain the generated text to the set of $1,000$ candidate labels. We finetune OFA for $32$ epochs with a batch size of $256$. The learning rate is $5e-5$, and the label smoothing ratio is $0.1$. The encouraging loss proposed in [112] is employed with the hyperparameter LE set to $0.75$. Following [36], we use the same random resized cropping, random flipping, RandAug [113] and random erasing [114] transformations as data augmentation strategies. Mixup [115] and CutMix [116] are applied to each batch with an overall probability of $0.5$, with alpha set to $0.8$ and $1.0$, respectively. To adapt the mixed soft targets of Mixup and CutMix to the generation paradigm during finetuning, we run the decoder twice, once with each of the target sequences to be mixed, and sum the two losses weighted by the mixing ratio.

#### Natural Language Understanding

To verify the natural language understanding ability of OFA, we select $6$ language understanding tasks from the GLUE benchmark [79], including both single-sentence classification tasks and sentence-pair classification tasks. To adapt to sentence-pair classification, previous models [2, 28] usually use segment embeddings to distinguish different sentences. Unlike those models, OFA can be applied to sentence-pair classification tasks by constructing appropriate instructions, without introducing additional segment embeddings. For the finetuning hyper-parameters, we tune the number of training epochs over $\{5,7,10\}$, the learning rate over $\{3e-5,5e-5,6e-5,7e-5,1e-4\}$, the batch size over $\{32,64,128\}$, the weight decay over $\{0.01,0.05\}$, and the dropout rate over $\{0.0,0.1\}$. We report the best performance on the development set for each task.

#### Natural Language Generation

We verify the natural language generation ability of OFA on the Gigaword dataset [80]. We report ROUGE-1/ROUGE-2/ROUGE-L to evaluate the generation results following [80]. We finetune the OFA models for $6$ epochs with a batch size of $512$. The learning rate is $1e-4$ with label smoothing of $0.1$, and the maximum input text sequence length is set to $512$. During inference, we set the length penalty to $0.7$ and the beam size to $6$, and limit the maximum generated length to 32.

## Appendix B Trie-based Search

This section describes how to use Trie-based search to improve model performance on downstream classification tasks. When dealing with classification tasks, we first construct a Trie whose nodes are annotated with tokens from the candidate label set. During finetuning, the model computes the log-probabilities of the target tokens based on their positions in the Trie. As shown in Figure 6, when computing the log-probabilities of the target token "sky", we only consider tokens in {"sky", "ocean"} and forcefully set the logits for all invalid tokens to $-\infty$. During inference, we constrain the generated labels to the candidate set. As shown in Table 13, the Trie-based search strategy boosts the performance of OFA on various downstream classification tasks.

Figure 6: Example of Trie-based search where the constraint labels are "blue sky", "blue ocean" and "green". When computing the log-prob of the token "sky", we only consider tokens in {"sky", "ocean"} and forcefully set the logits for all invalid tokens to $-\infty$.
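To make the mechanism concrete, below is a minimal, self-contained sketch of Trie-constrained logit masking. The class and function names are illustrative only and do not correspond to the actual OFA implementation; in practice the masking is applied to the decoder's output logits at every generation step.

```python
# Illustrative sketch of Trie-based constrained decoding (not the OFA code).
import math

class TrieNode:
    def __init__(self):
        self.children = {}  # token -> TrieNode

def build_trie(label_token_sequences):
    """Build a Trie from tokenized candidate labels,
    e.g. [["blue", "sky"], ["blue", "ocean"], ["green"]]."""
    root = TrieNode()
    for tokens in label_token_sequences:
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
    return root

def mask_logits(logits, trie_root, prefix, vocab):
    """Set the logits of tokens that cannot follow `prefix` in the Trie to -inf."""
    node = trie_root
    for tok in prefix:
        node = node.children[tok]
    allowed = set(node.children)  # e.g. {"sky", "ocean"} after prefix ["blue"]
    return [l if vocab[i] in allowed else -math.inf for i, l in enumerate(logits)]

vocab = ["blue", "sky", "ocean", "green", "red"]
trie = build_trie([["blue", "sky"], ["blue", "ocean"], ["green"]])
print(mask_logits([0.1, 0.5, 0.3, 0.2, 0.9], trie, prefix=["blue"], vocab=vocab))
```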
Table 13: Ablation results of Trie-based search. The removal of Trie-based search degrades the performance on downstream tasks. Note that the baseline $\text{OFA}\rm_{Base}$ is only pre-trained for 250k steps, which is also used in Table 10.

Model | VQA Test-dev Acc. | SNLI-VE Dev Acc. | ImageNet Top-1 Acc. | MRPC F1 | QQP F1
---|---|---|---|---|---
$\text{OFA}\rm_{Base}$ | 76.03 | 89.2 | 82.2 | 90.6 | 88.4
w/o Trie | 75.86 (-0.17) | 89.0 (-0.2) | 81.9 (-0.3) | 90.1 (-0.5) | 88.2 (-0.2)

## Appendix C Qualitative Examples

This section provides more qualitative examples of multiple tasks, including text-to-image generation, open-domain VQA, grounded question answering, and open-domain visual grounding, generated by OFA. We hope these examples help readers gain a better sense of OFA's capabilities.

Figure 7: Examples of text-to-image generation. For better demonstration, we continue finetuning OFA on a subset of LAION-400M [117].

Figure 8: Examples of text-to-image generation.

Figure 9: More samples of the VQA task on unseen domains. The answers are generated by the pretrained OFA without finetuning. The datasets used in the VQA pretraining task only contain real-world photographs. We present more cases of the VQA task on out-of-domain (non-photographic) images and demonstrate the capability of transferring OFA to these unseen domains.

Figure 10: Samples of the unseen grounded question answering task. In this task, the model should answer a question about a particular region in the image. This task is unseen in pretraining. We demonstrate that directly transferring the pretrained OFA to this new task without finetuning works well.

Figure 11: Samples of the visual grounding task generated by OFA for various unseen domains: (a) anime (the corresponding animations are Pokemon and One Piece); (b) synthetic images with attribute combinations.
# CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection

Ramin Nabati, Hairong Qi University of Tennessee Knoxville {rnabati<EMAIL_ADDRESS>

###### Abstract The perception system in autonomous vehicles is responsible for detecting and tracking the surrounding objects. This is usually done by taking advantage of several sensing modalities to increase robustness and accuracy, which makes sensor fusion a crucial part of the perception system. In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem using a novel frustum-based method to associate the radar detections to their corresponding object’s center point. The associated radar detections are used to generate radar-based feature maps to complement the image features, and regress to object properties such as depth, rotation and velocity. We evaluate CenterFusion on the challenging nuScenes dataset, where it improves the overall nuScenes Detection Score (NDS) of the state-of-the-art camera-based algorithm by more than 12%. We further show that CenterFusion significantly improves the velocity estimation accuracy without using any additional temporal information. The code is available at https://github.com/mrnabati/CenterFusion.

## 1 Introduction

Autonomous vehicles are usually equipped with different types of sensors to take advantage of their complementary characteristics. Using multiple sensor modalities increases robustness and accuracy, but also introduces new challenges in designing the perception system. Sensor fusion is one of these challenges, which has motivated many studies in 2D and 3D object detection [4, 10, 14, 19], semantic segmentation [33, 16] and object tracking [1, 7] in recent years. Most of the recent sensor fusion methods focus on exploiting LiDAR and camera for 3D object detection. LiDARs use the time of flight of laser light pulses to calculate the distance to surrounding objects. LiDARs provide accurate 3D measurements at close range, but the resulting point cloud becomes sparse at long range, reducing the system’s ability to accurately detect far-away objects. Cameras provide rich appearance features, but are not a good source of information for depth estimation. These complementary features have made LiDAR-camera sensor fusion a topic of interest in recent years. This combination has been proven to achieve high accuracy in 3D object detection for many applications including autonomous driving, but it has its limitations. Cameras and LiDARs are both sensitive to adverse weather conditions (e.g. snow, fog, rain), which can significantly reduce their field of view and sensing capabilities. Additionally, LiDARs and cameras are not capable of detecting objects’ velocity without using temporal information. Estimating objects’ velocity is a crucial requirement for collision avoidance in many scenarios, and relying on temporal information might not be a feasible solution in time-critical situations.

Figure 1: CenterFusion network architecture. Preliminary 3D boxes are first obtained using the image features extracted by the backbone. The frustum association module uses the preliminary boxes to associate radar detections to objects and generate radar feature maps.
The image and radar feature maps are then concatenated and used to refine the preliminary detections by recalculating depth and rotation as well as estimating objects’ velocity and attributes.

For many years, radars have been used in vehicles for Advanced Driving Assistance System (ADAS) applications such as collision avoidance and Adaptive Cruise Control (ACC). Compared to LiDARs and cameras, radars are very robust to adverse weather conditions and are capable of detecting objects at very long range (up to 200 meters for automotive radars). Radars use the Doppler effect to accurately estimate the velocities of all detected objects, without requiring any temporal information. Additionally, compared to LiDARs, radar point clouds require less processing before they can be used as object detection results. These features, together with their lower cost compared to LiDARs, have made radars a popular sensor in autonomous driving applications.

Despite radar’s popularity in the automotive industry, few studies have focused on fusing radar data with other sensors. One reason for this is the fact that there are not many datasets containing radar data for autonomous driving applications, which makes conducting research in this area difficult. Additionally, due to inherent differences between LiDAR and radar point clouds, applying or adapting existing LiDAR-based algorithms to radar point clouds proves to be extremely difficult. Radar point clouds are significantly sparser than their LiDAR counterparts, making them infeasible to use for extracting objects’ geometry information. Aggregating multiple radar sweeps increases the density of the points, but also introduces delay into the system. Moreover, although radar point clouds are usually represented as points in the 3D coordinate system, the reported vertical measurements of the points are usually inaccurate or even non-existent, as most automotive radars only report the distance and azimuth angle to objects.

In order to effectively combine multiple sensing modalities, a variety of sensor fusion schemes have been developed [8] taking advantage of the hierarchical feature representation in neural networks. In an early fusion approach, the raw or pre-processed sensory data from different sensor modalities are fused together. With this approach, the network learns a joint representation from the sensing modalities. Early fusion methods are usually sensitive to spatial or temporal misalignment of the data [8]. On the other hand, a late fusion approach combines the data from different modalities at the decision level, and provides more flexibility for introducing new sensing modalities to the network. However, a late fusion approach does not exploit the full potential of the available sensing modalities, as it does not acquire the intermediate features obtained by learning a joint representation. A compromise between the early and late fusion approaches is referred to as middle fusion. It extracts features from different modalities individually and combines them at an intermediate stage, enabling the network to learn joint representations and creating a balance between sensitivity and flexibility. We propose CenterFusion, a middle-fusion approach to exploit radar and camera data for 3D object detection. CenterFusion focuses on associating radar detections to preliminary detection results obtained from the image, then generates radar feature maps and uses them in addition to image features to accurately estimate 3D bounding boxes for objects.
Particularly, we generate preliminary 3D detections using a key point detection network, and propose a novel frustum-based radar association method to accurately associate radar detections to their corresponding objects in the 3D space. These radar detections are then mapped to the image plane and used to create feature maps to complement the image-based features. Finally, the fused features are used to accurately estimate objects’ 3D properties such as depth, rotation and velocity. The network architecture for CenterFusion is shown in Fig. 1. We evaluate CenterFusion on the challenging nuScenes [2] dataset, where it outperforms all previous camera-based object detection methods in the 3D object detection benchmark. We also show that exploiting radar information significantly improves velocity estimation for objects without using any temporal information. ## 2 Related Work ### 2.1 Single-modality Methods Monocular 3D object detection methods use a single camera to estimate 3D bounding boxes for objects. Many studies have been reported, taking different approaches to extract the depth information from monocular images. 3D RCNN [11] uses Fast R-CNN [9] with an additional head and 3D projection. It also uses a collection of CAD models to learn class-specific shape priors for objects. Deep3DBox [17] regresses a set of 3D object properties using a convolutional neural network first, then uses the geometric constraints of 2D bounding boxes to produce a 3D bounding box for the object. CenterNet [34] takes a different approach and uses a keypoint detection network to find objects’ center point on the image. Other object properties such as 3D dimension and location are obtained by regression using only the image features at the object’s center point. LiDARs have been widely used for 3D object detection and tracking in autonomous driving applications in recent years. The majority of LiDAR-based methods either use 3D voxels [12, 35] or 2D projections [13, 5, 29, 31] for point cloud representation. Voxel-based methods are usually slow as a result of the voxel grid’s high dimensionality, and projection-based methods might suffer from large variances in object shapes and sizes depending on the projection plane. PointRCNN [25] directly operates on raw point clouds and generates 3D object proposals in a bottom-up manner using point cloud segmentation. These proposals are refined in the second stage to generate the final detection boxes. Figure 2: Difference between actual and radial velocity. For target A, velocity in the vehicle coordinate system and the radial velocity are the same ($v^{A}$). For target B on the other hand, radial velocity ($v_{r}$) as reported by the radar is different from the actual velocity of the object ($v^{B}$) in the vehicle coordinate system. ### 2.2 Fusion-based Methods Most existing sensor fusion methods focus on the LiDAR and camera fusion problem. MV3D [4] extracts features from the front view and Bird’s Eye View (BEV) representations of the LiDAR data, in addition to the RGB image. The features obtained from the LiDAR’s BEV are then used to generate 3D object proposals, and a deep fusion network is used to combine features from each view and predict the object class and box orientations. PointFusion [28] processes the image and LiDAR data using a CNN and a PointNet model respectively, and then generate 3D object proposals using the extracted features. 
Frustum PointNet [23] directly operates on the raw point clouds obtained from an RGB-D camera and uses the RGB image and a 2D object detector to localize objects in the point cloud.

Few studies have focused on fusing radars with other sensors for autonomous driving applications. RadarNet [30] fuses radar and LiDAR data for 3D object detection. It uses an early fusion mechanism to learn joint representations from the two sensors, and a late-fusion mechanism to exploit radar’s radial velocity evidence and improve the estimated object velocity. In [3], Chadwick _et al_. project radar detections to the image plane and use them to boost the object detection accuracy for distant objects. In [20], the authors use radar detections to generate 3D object proposals first, then project them to the image plane to perform joint 2D object detection and depth estimation. CRF-Net [22] also projects radar detections to the image plane, but represents them as vertical lines where the pixel values correspond to the depth of each detection point. The image data is then augmented with the radar information and used in a convolutional network to perform 2D object detection.

Figure 3: Frustum association. An object detected using the image features (left), generating the RoI frustum based on the object’s 3D bounding box (middle), and the BEV of the RoI frustum showing radar detections inside the frustum (right). $\delta$ is used to increase the frustum size in the testing phase. $\hat{d}$ is the ground truth depth in the training phase and the estimated object depth in the testing phase.

Figure 4: Expanding radar points to 3D pillars (top image). Directly mapping the pillars to the image and replacing with radar depth information results in poor association with objects’ centers and many overlapping depth values (middle image). Frustum association accurately maps the radar detections to the center of objects and minimizes overlapping (bottom image). Radar detections are only associated to objects with a valid ground truth or detection box, and only if all or part of the radar detection pillar is inside the box. Frustum association also prevents associating radar detections caused by background objects such as buildings to foreground objects, as seen in the case of the pedestrians on the right-hand side of the image.

## 3 Preliminary

### 3.1 Radar Point Cloud

Radars are active sensors that transmit radio waves to sense the environment and measure the reflected waves to determine the location and velocity of objects. Automotive radars usually report the detected objects as 2D points in BEV, providing the azimuth angle and radial distance to the object. For every detection, the radar also reports the instantaneous velocity of the object in the radial direction. This radial velocity does not necessarily match the object’s actual velocity vector in its direction of movement. Fig. 2 illustrates the difference between the radial velocity as reported by the radar and the actual velocity of the object in the vehicle’s coordinate system. We represent each radar detection as a 3D point in the egocentric coordinate system, and parameterize it as $P=(x,y,z,v_{x},v_{y})$, where $(x,y,z)$ is the position and $(v_{x},v_{y})$ are the $x$ and $y$ components of the reported radial velocity of the object. The radial velocity is compensated for the ego vehicle’s motion. For every scene, we aggregate 3 sweeps of the radar point cloud (detections within the past 0.25 seconds).
The nuScenes dataset provides the calibration parameters needed for mapping the radar point clouds from the radar coordinate system to the egocentric and camera coordinate systems.

### 3.2 CenterNet

CenterNet [34] represents the state-of-the-art in 3D object detection using a single camera. It takes an image $I\in\mathbb{R}^{W\times H\times 3}$ as input and generates a keypoint heatmap $\hat{Y}\in[0,1]^{\frac{W}{R}\times\frac{H}{R}\times C}$ as output, where $W$ and $H$ are the image width and height, $R$ is the downsampling ratio and $C$ is the number of object categories. A prediction of $\hat{Y}_{x,y,c}=1$ as the output indicates a detected object of class $c$ centered at position $(x,y)$ on the image. The ground-truth heatmap $Y\in[0,1]^{\frac{W}{R}\times\frac{H}{R}\times C}$ is generated from the ground-truth 2D bounding boxes using a Gaussian kernel. For each bounding box center point $p_{i}\in\mathbb{R}^{2}$ of class $c$ in the image, a Gaussian heatmap is generated on $Y_{:,:,c}$. The final value of $Y$ for class $c$ at position $q\in\mathbb{R}^{2}$ is defined as [34]:

$Y_{qc}=\max_{i}\exp(-\frac{(p_{i}-q)^{2}}{2\sigma_{i}^{2}})$ (1)

where $\sigma_{i}$ is a size-adaptive standard deviation, controlling the size of the heatmap for every object based on its size. A fully convolutional encoder-decoder network is used to predict $\hat{Y}$. To generate 3D bounding boxes, separate network heads are used to regress the object’s depth, dimensions and orientation directly from the detected center points. Depth is calculated as an additional output channel $\hat{D}\in[0,1]^{\frac{W}{R}\times\frac{H}{R}}$ after applying the inverse sigmoidal transformation used in Eigen _et al_. [6] to the original depth domain. The object dimensions are directly regressed to their absolute values in meters as three output channels $\hat{\Gamma}\in[0,1]^{\frac{W}{R}\times\frac{H}{R}\times 3}$. Orientation is encoded as two bins with 4 scalars in each bin, following the orientation representation in Mousavian _et al_. [18]. For each center point, a local offset is also predicted to compensate for the discretization error caused by the output strides in the backbone network [34]. Given the annotated objects $\{p_{0},p_{1},...\}$ in an image, the training objective is defined as below based on the focal loss [15]:

$L_{k}=\frac{1}{N}\sum_{xyc}\begin{cases}(1-\hat{Y}_{xyc})^{\alpha}\log(\hat{Y}_{xyc})&\text{if }Y_{xyc}=1\\ (1-Y_{xyc})^{\beta}(\hat{Y}_{xyc})^{\alpha}\log(1-\hat{Y}_{xyc})&\text{otherwise}\end{cases},$

where $N$ is the number of objects, $Y\in[0,1]^{\frac{W}{R}\times\frac{H}{R}\times C}$ is the annotated objects’ ground-truth heatmap and $\alpha$ and $\beta$ are focal loss hyperparameters.
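As a concrete illustration of the ground-truth heatmap construction in Eq. (1), a minimal numpy sketch is given below. The size-adaptive choice of $\sigma_{i}$ is left as an input here; the actual CenterNet implementation derives it from the bounding box size.

```python
# Illustrative sketch of the ground-truth center heatmap in Eq. (1);
# not the original CenterNet/CenterFusion code.
import numpy as np

def gaussian_heatmap(centers, sigmas, height, width):
    """centers: list of (x, y) box centers in output-map coordinates;
    sigmas: one size-adaptive standard deviation per object."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    Y = np.zeros((height, width), dtype=np.float32)
    for (cx, cy), sigma in zip(centers, sigmas):
        dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
        # Element-wise maximum over objects, as in Eq. (1).
        Y = np.maximum(Y, np.exp(-dist2 / (2.0 * sigma ** 2)))
    return Y

heatmap = gaussian_heatmap(centers=[(30, 40), (80, 20)], sigmas=[4.0, 2.5],
                           height=112, width=200)
print(heatmap.shape, heatmap.max())
```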
## 4 CenterFusion

In this section, we present our approach to radar and camera sensor fusion for 3D object detection. The overall CenterFusion architecture is shown in Fig. 1. We adopt [34] as our center-based object detection network to detect objects’ center points on the image plane, and regress to other object properties such as 3D location, orientation and dimensions. We propose a middle-fusion mechanism that associates radar detections to their corresponding object’s center point and exploits both radar and image features to improve the preliminary detections by re-estimating their depth, velocity, rotation and attributes. The key to our fusion mechanism is the accurate association of radar detections to objects.

The center point object detection network generates a heat map for every object category in the image. The peaks in the heat map represent possible center points for objects, and the image features at those locations are used to estimate other object properties. To exploit the radar information in this setting, radar-based features need to be mapped to the center of their corresponding object on the image, which requires an accurate association between the radar detections and the objects in the scene.

### 4.1 Center Point Detection

We adopt the CenterNet [34] detection network for generating preliminary detections on the image. The image features are first extracted using a fully convolutional encoder-decoder backbone network. We follow CenterNet [34] and use a modified version of the Deep Layer Aggregation (DLA) network [32] as the backbone. The extracted image features are then used to predict object center points on the image, as well as the object 2D size (width and height), center offset, 3D dimensions, depth and rotation. These values are predicted by the primary regression heads as shown in Fig. 1. Each primary regression head consists of a $3\times 3$ convolutional layer with 256 channels and a $1\times 1$ convolutional layer to generate the desired output. This provides an accurate 2D bounding box as well as a preliminary 3D bounding box for every detected object in the scene.

### 4.2 Radar Association

The center point detection network only uses the image features at the center of each object to regress to all other object properties. To fully exploit radar data in this process, we first need to associate the radar detections to their corresponding object on the image plane. To accomplish this, a naïve approach would be mapping each radar detection point to the image plane and associating it to an object if the point is mapped inside the 2D bounding box of that object. This is not a very robust solution, as there is not a one-to-one mapping between radar detections and objects in the image; many objects in the scene generate multiple radar detections, and there are also radar detections that do not correspond to any object. Additionally, because the $z$ dimension of the radar detection is not accurate (or does not exist at all), the mapped radar detection might end up outside the 2D bounding box of its corresponding object. Finally, radar detections obtained from occluded objects would map to the same general area in the image, which makes differentiating them in the 2D image plane difficult, if possible at all.

Frustum Association Mechanism: We develop a frustum association method that uses the object’s 2D bounding box as well as its estimated depth and size to create a 3D Region of Interest (RoI) frustum for the object. Having an accurate 2D bounding box for an object, we create a frustum for that object as shown in Fig. 3. This significantly narrows down the radar detections that need to be checked for association, as any point outside this frustum can be ignored. We then use the estimated object depth, dimensions and rotation to create a RoI around the object, to further filter out radar detections that are not associated with this object. If there are multiple radar detections inside this RoI, we take the closest point as the radar detection corresponding to this object. In the training phase, we use the object’s 3D ground truth bounding box to create a tight RoI frustum and associate radar detections to the object. In the test phase, the RoI frustum is calculated using the object’s estimated 3D bounding box as explained before. In this case, we use a parameter $\delta$ to control the size of the RoI frustum as shown in Fig. 3. This is to account for inaccuracy in the estimated depth values, as the depth of the object at this stage is solely determined using the image-based features. Enlarging the frustum using this parameter increases the chance of including the corresponding radar detections inside the frustum even if the estimated depth is slightly off. The value of $\delta$ should be carefully selected, as a large RoI frustum can include radar detections of nearby objects. The RoI frustum approach makes associating overlapping objects effortless, as objects are separated in the 3D space and would have separate RoI frustums. It also eliminates the multi-detection association problem, as only the closest radar detection inside the RoI frustum is associated to the object. It does not, however, help with the inaccurate $z$ dimension problem, as radar detections might be outside the RoI frustum of their corresponding object due to their inaccurate height information.
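A simplified sketch of this association step is given below: radar points are filtered by the image-plane bounding box and by a depth interval around the preliminary depth estimate, and the closest remaining point is kept. The rotated 3D RoI geometry and camera projection are omitted, and all function and parameter names are illustrative rather than taken from the released CenterFusion code.

```python
# Illustrative sketch of frustum association (not the CenterFusion code).
# Radar points are assumed to already be projected onto the image plane and
# given as rows of [u, v, depth].
import numpy as np

def associate_radar(points, bbox2d, est_depth, obj_length, delta=0.0):
    """points: (N, 3) array of [u, v, depth]; bbox2d: (x1, y1, x2, y2);
    est_depth: preliminary object depth; obj_length: estimated object extent
    along the viewing ray; delta: frustum enlargement ratio used at test time.
    Returns the closest associated radar point, or None."""
    x1, y1, x2, y2 = bbox2d
    half = 0.5 * obj_length * (1.0 + delta)
    in_box = (points[:, 0] >= x1) & (points[:, 0] <= x2) & \
             (points[:, 1] >= y1) & (points[:, 1] <= y2)
    in_depth = np.abs(points[:, 2] - est_depth) <= half
    candidates = points[in_box & in_depth]
    if len(candidates) == 0:
        return None
    # Only the closest radar detection inside the frustum is associated.
    return candidates[np.argmin(candidates[:, 2])]

pts = np.array([[120.0, 200.0, 18.4], [125.0, 210.0, 25.0], [400.0, 100.0, 18.0]])
print(associate_radar(pts, bbox2d=(100, 180, 160, 260),
                      est_depth=19.0, obj_length=4.5, delta=0.2))
```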
Pillar Expansion: To address the inaccurate height information problem, we introduce a radar point cloud preprocessing step called pillar expansion, where each radar point is expanded to a fixed-size pillar, as illustrated in Fig. 4. Pillars create a better representation for the physical objects detected by the radar, as these detections are now associated with a dimension in the 3D space. Having this new representation, we simply consider a radar detection to be inside a frustum if all or part of its corresponding pillar is inside the frustum, as shown in Fig. 1.

### 4.3 Radar Feature Extraction

After associating radar detections to their corresponding objects, we use the depth and velocity of the radar detections to create complementary features for the image. In particular, for every radar detection associated with an object, we generate three heatmap channels centered at and inside the object’s 2D bounding box, as shown in Fig. 4. The width and height of the heatmaps are proportional to the object’s 2D bounding box, and are controlled by a parameter $\alpha$. The heatmap values are the normalized object depth ($d$) and also the $x$ and $y$ components of the radial velocity ($v_{x}$ and $v_{y}$) in the egocentric coordinate system:

$F_{x,y,i}^{j}=\frac{1}{M_{i}}\begin{cases}f_{i}&\text{if }|x-c_{x}^{j}|\leq\alpha w^{j}\text{ and }|y-c_{y}^{j}|\leq\alpha h^{j}\\ 0&\text{otherwise}\end{cases},$

where $i\in\{1,2,3\}$ is the feature map channel, $M_{i}$ is a normalizing factor, $f_{i}$ is the feature value ($d$, $v_{x}$ or $v_{y}$), $c_{x}^{j}$ and $c_{y}^{j}$ are the $x$ and $y$ coordinates of the $j$th object’s center point on the image and $w^{j}$ and $h^{j}$ are the width and height of the $j$th object’s 2D bounding box. If two objects have overlapping heatmap areas, the one with the smaller depth value dominates, as only the closest object is fully visible in the image. The generated heatmaps are then concatenated to the image features as extra channels. These features are used as inputs to the secondary regression heads to recalculate the object’s depth and rotation, as well as velocity and attributes. The velocity regression head estimates the $x$ and $y$ components of the object’s actual velocity in the vehicle coordinate system.
The attribute regression head estimates different attributes for different object classes, such as moving or parked for the Car class and standing or sitting for the Pedestrian class. The secondary regression heads consist of three convolutional layers with $3\times 3$ kernels followed by a $1\times 1$ convolutional layer to generate the desired output. The extra convolutional layers compared to the primary regression heads help with learning higher level features from the radar feature maps. The last step is decoding the regression head results into 3D bounding boxes. The box decoder block uses the estimated depth, velocity, rotation, and attributes from the secondary regression heads, and takes the other object properties from the primary heads. ## 5 Implementation Details We use the pre-trained CenterNet [34] network with the DLA [32] backbone as our object detection network. DLA uses iterative deep aggregation layers to increase the resolution of feature maps. CenterNet compares its performance using different backbone architectures, with the Hourglass network [21] performing better than others. We choose to use the DLA network as it takes significantly less time to train while providing a reasonable performance. We directly use the released CenterNet model that is trained for 140 epochs on the nuScenes dataset. This model by default does not provide velocity and attribute predictions. We train the velocity and attribute heads for 30 epochs, and use the resulting model as our baseline. The secondary regression heads in our network are added on top of the CenterNet backbone network, and are trained using the image and radar features for an additional 60 epochs with a batch size of 26 on two Nvidia P5000 GPUs. During both training and testing, we reduce the image resolution from the original 1600$\times$900 pixels to 800$\times$450 pixels. Data augmentation is used during training, with random right-left flipping (with a probability of 0.5) and random shifting (from 0 to 20 percent of image size). The same augmentations are also applied to the radar point cloud in reference to the camera coordinate system. We do not apply any scaling augmentation as it changes the 3D measurements. At testing time, we only use flip test augmentation where an image and its flipped version are fed into the network and the average of the network outputs is used for decoding the 3D bounding boxes. We do not use the multi-scale test augmentation as used by CenterNet. The pillar size is set to $[0.2,0.2,1.5]$ meters in the $[x,y,z]$ directions and $\delta$ is set to increase the length of the RoI frustum by 20% in the radial direction at test time. We use the L1 loss for most of the regression heads, with the exception of the center point heat map head which uses the focal loss and the attributes regression head that uses the Binary Cross Entropy (BCE) loss. Table 1: Performance comparison for 3D object detection on nuScenes dataset. mATE, mASE, mAOE, mAVE and mAAE stand for average translation, scale, orientation, velocity and attribute errors respectively. $\uparrow$ indicates that higher is better and $\downarrow$ indicates that lower is better. ”C”, ”R” and ”L” specify camera, radar and LIDAR modalities respectively. 
Method | Dataset | C | R | L | NDS $\uparrow$ | mAP $\uparrow$ | mATE $\downarrow$ | mASE $\downarrow$ | mAOE $\downarrow$ | mAVE $\downarrow$ | mAAE $\downarrow$
---|---|---|---|---|---|---|---|---|---|---|---
InfoFocus [27] | test | | | ✓ | 0.395 | 0.395 | 0.363 | 0.265 | 1.132 | 1.000 | 0.395
OFT [24] | test | ✓ | | | 0.212 | 0.126 | 0.820 | 0.360 | 0.850 | 1.730 | 0.480
MonoDIS [26] | test | ✓ | | | 0.384 | 0.304 | 0.738 | 0.263 | 0.546 | 1.533 | 0.134
CenterNet (HGLS) [34] | test | ✓ | | | 0.400 | 0.338 | 0.658 | 0.255 | 0.629 | 1.629 | 0.142
Ours (DLA) | test | ✓ | ✓ | | 0.449 | 0.326 | 0.631 | 0.261 | 0.516 | 0.614 | 0.115
CenterNet (DLA) [34] | val | ✓ | | | 0.328 | 0.306 | 0.716 | 0.264 | 0.609 | 1.426 | 0.658
Ours (DLA) | val | ✓ | ✓ | | 0.453 | 0.332 | 0.649 | 0.263 | 0.535 | 0.540 | 0.142

Table 2: Per-class performance comparison for 3D object detection on the nuScenes dataset. All per-class values are mAP $\uparrow$.

Method | Dataset | C | R | L | Car | Truck | Bus | Trailer | Const. | Pedest. | Motor. | Bicycle | Traff. | Barrier
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
InfoFocus [27] | test | | | ✓ | 0.779 | 0.314 | 0.448 | 0.373 | 0.107 | 0.634 | 0.290 | 0.061 | 0.465 | 0.478
MonoDIS [26] | test | ✓ | | | 0.478 | 0.220 | 0.188 | 0.176 | 0.074 | 0.370 | 0.290 | 0.245 | 0.487 | 0.511
CenterNet (HGLS) [34] | test | ✓ | | | 0.536 | 0.270 | 0.248 | 0.251 | 0.086 | 0.375 | 0.291 | 0.207 | 0.583 | 0.533
Ours (DLA) | test | ✓ | ✓ | | 0.509 | 0.258 | 0.234 | 0.235 | 0.077 | 0.370 | 0.314 | 0.201 | 0.575 | 0.484
CenterNet (DLA) [34] | val | ✓ | | | 0.484 | 0.231 | 0.340 | 0.131 | 0.035 | 0.377 | 0.249 | 0.234 | 0.550 | 0.456
Ours (DLA) | val | ✓ | ✓ | | 0.524 | 0.265 | 0.362 | 0.154 | 0.055 | 0.389 | 0.305 | 0.229 | 0.563 | 0.470

## 6 Results

We compare our radar and camera fusion network with the state-of-the-art camera-based models on the nuScenes benchmark, and also with a LiDAR-based method. Table 1 shows the results on both the test and validation splits of the nuScenes dataset. We compare with OFT [24], MonoDIS [26] and CenterNet [34], which are camera-based 3D object detection networks, as well as InfoFocus [27], which is a LiDAR-based method. As seen in Table 1, CenterFusion outperforms all other methods in the nuScenes NDS score, which is a weighted sum of the mAP and the error metrics. On the test dataset, CenterFusion shows a 12.25% and 16.9% relative increase in the NDS score compared to CenterNet and MonoDIS respectively. The LiDAR-based method InfoFocus shows a better performance in the mAP score compared to the other methods, but is significantly outperformed by CenterFusion in the orientation, velocity and attribute error metrics. While CenterNet with the Hourglass [21] backbone network results in a better mAP score compared to CenterFusion (1.2% difference) on the test split, the results on the validation split show that CenterFusion outperforms CenterNet by 2.6% when both networks use the same DLA [32] backbone. The validation set results also show CenterFusion improving over CenterNet in all the other metrics. CenterFusion shows a relative gain of 38.1% in the NDS score and a relative improvement of 62.1% in the velocity error metric compared to CenterNet, which demonstrates the effectiveness of using radar features. Table 2 compares the per-class mAP results for both the test and validation splits. While CenterNet with an Hourglass backbone has a higher mAP than CenterFusion for most classes in the test set, it is outperformed by CenterFusion on the validation set where the DLA backbone is used for both methods.
The most improved classes on the validation set are the motorcycle and car classes, with 5.6% and 4.0% absolute mAP increases respectively. Fig. 5 demonstrates the 3D object detection results in both the camera view and BEV. It shows the detection results from CenterFusion (rows 1 & 2) and CenterNet (rows 3 & 4) for 4 different scenes. The radar point clouds are also shown in the CenterFusion BEV results. Compared to CenterNet, the results from CenterFusion show a better fit for the 3D boxes in most cases, especially for objects at a larger distance, such as the far vehicle in the second scene. Additionally, the velocity vectors estimated by CenterFusion show a significant improvement compared to the CenterNet results, as seen in the second and third scenes.

Table 3: Overall ablation study on the nuScenes validation set. Improvement percentages in each row are relative to the baseline method. (PE: Pillar Expansion, FA: Frustum Association, FT: Flip Test)

Method | Cam | Rad | PE | FA | FT | NDS $\uparrow$ | mAP $\uparrow$ | mATE $\downarrow$ | mASE $\downarrow$ | mAOE $\downarrow$ | mAVE $\downarrow$ | mAAE $\downarrow$
---|---|---|---|---|---|---|---|---|---|---|---|---
Baseline | ✓ | - | - | - | - | 0.328 | 0.306 | 0.716 | 0.264 | 0.609 | 1.426 | 0.658
Ours | ✓ | ✓ | ✓ | - | - | +15.4% | +1.0% | -2.0% | +1.1% | -4.4% | -13.1% | -68.6%
Ours | ✓ | ✓ | - | ✓ | - | +25.9% | +2.0% | -2.8% | +1.0% | -7.4% | -48.1% | -75.9%
Ours | ✓ | ✓ | ✓ | ✓ | - | +34.5% | +4.3% | -5.3% | +1.1% | -10.0% | -61.9% | -78.0%
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | +37.8% | +8.4% | -9.4% | -0.5% | -11.6% | -62.0% | -78.3%

Table 4: Class-based ablation study results on the nuScenes validation set.

Method | Cam | Rad | PE | FA | FT | Car | Truck | Bus | Trailer | Const. | Pedest. | Motor. | Bicycle | Traff. | Barrier
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Baseline | ✓ | - | - | - | - | 48.4 | 23.1 | 34.0 | 13.1 | 3.5 | 37.7 | 24.9 | 23.4 | 55.0 | 45.6
Ours | ✓ | ✓ | ✓ | - | - | +0.6 | +0.7 | -2.1 | +0.9 | +0.6 | +0.9 | +1.9 | -2.5 | +0.1 | +0.8
Ours | ✓ | ✓ | - | ✓ | - | +1.0 | +1.0 | -2.1 | +0.9 | +0.9 | 0.0 | +2.1 | -1.9 | +0.2 | +0.8
Ours | ✓ | ✓ | ✓ | ✓ | - | +2.8 | +2.1 | -1.2 | +1.4 | +1.1 | +0.1 | +3.8 | -1.1 | +0.4 | +0.8
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | +4.1 | +3.4 | +2.7 | +1.8 | +1.8 | +1.2 | +5.5 | -0.7 | +1.3 | +1.5

Figure 5: Qualitative results from CenterFusion (rows 1 & 2) and CenterNet (rows 3 & 4) in the camera view and BEV. In the BEV plots, detection boxes are shown in cyan and ground truth boxes in red. The radar point cloud is shown in green. Red and blue arrows on objects show the ground truth and predicted velocity vectors, respectively.

## 7 Ablation Study

We validate the effectiveness of our fusion approach by conducting an ablation study on the nuScenes validation set. We use the CenterNet model as our baseline, and study the effect of pillar expansion, frustum association and flip testing on the detection results. Table 3 shows the overall detection results of the ablation study. In the first experiment, we only apply pillar expansion to the radar point clouds, mapping the 3D pillars to the image plane and obtaining their equivalent 2D bounding boxes. These boxes are then filled with the depth and velocity values of their corresponding radar detections and used as the radar feature maps, as shown in Fig. 4. According to Table 3, this simple association method results in a 15.4% relative improvement in the NDS score and a 1.0% absolute improvement in the mAP compared to the baseline.
In the next experiment we only use the frustum association method by directly applying it on the radar point clouds without converting them to pillars first. This improves the NDS score by 25.9% relatively and mAP by 2.0%. Applying both pillar expansion and frustum association results in a relative 35.5% and absolute 4.3% improvement on the NDS and mAP scores respectively. Flip testing adds another 3.3% improvement on the NDS score and 3.9% on the mAP, resulting in a total of 37.8% and 8.4% improvement on NDS and mAP compared to the baseline method. Table 4 shows the per-class contribution of each step on the mAP. According to the results, both pillar expansion and frustum association steps have contributed to the improvement of mAP in most object classes. The only class that has not improved from the baseline is the bicycle class, in which the CenterNet mAP score is better than CenterFusion by 0.5%. ## 8 Conclusion In summary, we proposed a new radar and camera fusion algorithm called CenterFusion, to exploit radar information for robust 3D object detection. CenterFusion accurately associates radar detections to objects on the image using a frustum-based association method, and creates radar-based feature maps to complement the image features in a middle-fusion approach. Our frustum association method uses preliminary detection results to generate a RoI frustum around objects in 3D space, and maps the radar detection to the center of objects on the image. We also used a pillar expansion method to compensate for the inaccuracy in radar detections’ height information, by converting radar points to fixed-size pillars in the 3D space. We evaluated our proposed method on the challenging nuScenes 3D detection benchmark, where CenterFusion outperformed the state-of-the-art camera-based object detection methods. ## References * [1] A. Asvadi, P. Girão, P. Peixoto, and U. Nunes. 3d object tracking using rgb and lidar data. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pages 1255–1260, 2016. * [2] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027, 2019. * [3] Simon Chadwick, Will Maddern, and Paul Newman. Distant Vehicle Detection Using Radar and Vision. arXiv:1901.10951 [cs], May 2019. * [4] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. * [5] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-View 3D Object Detection Network for Autonomous Driving. arXiv:1611.07759 [cs], June 2017. * [6] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pages 2366–2374, 2014. * [7] Yongkun Fang, Huijing Zhao, Hongbin Zha, Xijun Zhao, and Wen Yao. Camera and lidar fusion for on-road vehicle tracking with reinforcement learning. In 2019 IEEE Intelligent Vehicles Symposium (IV), pages 1723–1730. IEEE, 2019. * [8] Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, and Klaus Dietmayer. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. arXiv:1902.07830 [cs], Feb. 2020. 
* [9] Ross Girshick. Fast r-cnn. 2015 IEEE International Conference on Computer Vision (ICCV), Dec 2015. * [10] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven L. Waslander. Joint 3d proposal generation and object detection from view aggregation. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2018. * [11] Abhijit Kundu, Yin Li, and James M. Rehg. 3D-RCNN: Instance-Level 3D Object Reconstruction via Render-and-Compare. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3559–3568, Salt Lake City, UT, USA, June 2018\. IEEE. * [12] Bo Li. 3D Fully Convolutional Network for Vehicle Detection in Point Cloud. arXiv:1611.08069 [cs], Jan. 2017. * [13] Bo Li, Tianlei Zhang, and Tian Xia. Vehicle Detection from 3D Lidar Using Fully Convolutional Network. arXiv:1608.07916 [cs], Aug. 2016. * [14] Ming Liang, Bin Yang, Yun Chen, Rui Hu, and Raquel Urtasun. Multi-task multi-sensor fusion for 3d object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. * [15] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal Loss for Dense Object Detection. arXiv:1708.02002 [cs], Feb. 2018. * [16] Gregory P. Meyer, Jake Charland, Darshan Hegde, Ankit Laddha, and Carlos Vallespi-Gonzalez. Sensor fusion for joint 3d object detection and semantic segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jun 2019. * [17] Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Kosecka. 3D Bounding Box Estimation Using Deep Learning and Geometry. arXiv:1612.00496 [cs], Apr. 2017. * [18] Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Kosecka. 3d bounding box estimation using deep learning and geometry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7074–7082, 2017. * [19] Ramin Nabati and Hairong Qi. RRPN: Radar region proposal network for object detection in autonomous vehicles. 2019 IEEE International Conference on Image Processing (ICIP), Sep 2019. * [20] Ramin Nabati and Hairong Qi. Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles. arXiv:2009.08428 [cs], Sept. 2020. * [21] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked Hourglass Networks for Human Pose Estimation. arXiv:1603.06937 [cs], July 2016. * [22] Felix Nobis, Maximilian Geisslinger, Markus Weber, Johannes Betz, and Markus Lienkamp. A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection. In 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF), pages 1–7, Oct. 2019. * [23] Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J. Guibas. Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv:1711.08488 [cs], Apr. 2018. * [24] Thomas Roddick, Alex Kendall, and Roberto Cipolla. Orthographic feature transform for monocular 3d object detection. arXiv preprint arXiv:1811.08188, 2018. * [25] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. arXiv:1812.04244 [cs], May 2019. * [26] Andrea Simonelli, Samuel Rota Rota Bulò, Lorenzo Porzi, Manuel López-Antequera, and Peter Kontschieder. Disentangling Monocular 3D Object Detection. arXiv:1905.12365 [cs], May 2019. * [27] Jun Wang, Shiyi Lan, Mingfei Gao, and Larry S Davis. Infofocus: 3d object detection for autonomous driving with dynamic information modeling. arXiv preprint arXiv:2007.08556, 2020. 
* [28] Danfei Xu, Dragomir Anguelov, and Ashesh Jain. PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation. arXiv:1711.10871 [cs], Aug. 2018. * [29] Yan Yan, Yuxing Mao, and Bo Li. SECOND: Sparsely Embedded Convolutional Detection. Sensors, 18(10):3337, Oct. 2018. * [30] Bin Yang, Runsheng Guo, Ming Liang, Sergio Casas, and Raquel Urtasun. RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects. arXiv:2007.14366 [cs], July 2020. * [31] Bin Yang, Wenjie Luo, and Raquel Urtasun. PIXOR: Real-time 3D Object Detection from Point Clouds. arXiv:1902.06326 [cs], Mar. 2019. * [32] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep Layer Aggregation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2403–2412, Salt Lake City, UT, June 2018. IEEE. * [33] R. Zhang, S. A. Candra, K. Vetter, and A. Zakhor. Sensor fusion for semantic segmentation of urban scenes. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 1850–1857, 2015. * [34] Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as points. arXiv preprint arXiv:1904.07850, 2019. * [35] Yin Zhou and Oncel Tuzel. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. arXiv:1711.06396 [cs], Nov. 2017.
# Evaluating Gradient Inversion Attacks and Defenses in Federated Learning

Yangsibo Huang Princeton University Princeton, NJ 08540 <EMAIL_ADDRESS> &Samyak Gupta Princeton University Princeton, NJ 08540 <EMAIL_ADDRESS> &Zhao Song Adobe Research San Jose, CA 95110 <EMAIL_ADDRESS> &Kai Li Princeton University Princeton, NJ 08540 <EMAIL_ADDRESS> &Sanjeev Arora Princeton University Princeton, NJ 08540 <EMAIL_ADDRESS>

###### Abstract The gradient inversion attack (or input recovery from gradients) is an emerging threat to the security and privacy preservation of Federated learning, whereby malicious eavesdroppers or participants in the protocol can (partially) recover the clients’ private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup. Relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs of privacy leakage and data utility of these defense methods, and find that combining them in an appropriate manner makes the attack less effective, even under the original strong assumptions. We also estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss, as summarized in a list of potential strategies. Our code is available at: https://github.com/Princeton-SysML/GradAttack.

## 1 Introduction

Federated learning (McMahan et al., 2016; Kairouz et al., 2021) is a framework that allows multiple clients in a distributed environment to collaboratively train a neural network model at a central server, without moving their data to the central server. At every training step, each client computes a model update (i.e., a gradient) on its local data using the latest copy of the global model, and then sends the gradient to the central server. The server aggregates these updates (typically by averaging) to construct a global model, and then sends the new model parameters to all clients. By allowing clients to participate in training without directly sharing their data, such protocols align better with data privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) (Act, 1996), the California Consumer Privacy Act (CCPA) (Legislature, 2018), and the General Data Protection Regulation (Commission, 2018). While sharing gradients was thought to leak little information about the client’s private data, recent papers (Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020; Yin et al., 2021) developed a “gradient inversion attack” by which an attacker eavesdropping on a client’s communications with the server can begin to reconstruct the client’s private data. The attacker can also be a malicious participant in the Federated Learning scheme, including an honest-but-curious server who wishes to reconstruct private data of clients, or an honest-but-curious client who wishes to reconstruct private data of other clients. These attacks have been shown to work with batch sizes of only up to $100$, but even so they have created doubts about the level of privacy ensured in Federated Learning. The current paper seeks to evaluate the risks and suggest ways to minimize them. Several defenses against gradient inversion attacks have been proposed.
These include perturbing gradients (Zhu et al., 2019; Wei et al., 2020) and using transformation for training data that clients can apply on the fly (Zhang et al., 2018a; Huang et al., 2020). More traditional cryptographic ideas including secure aggregation (Bonawitz et al., 2016) or homomorphic encryption (Phong et al., 2018) for the gradients can also be used and presumably stop any eavesdropping attacks completely. They will not be studied here due to their special setups and overhead. We are not aware of a prior systematic evaluation of the level of risk arising from current attacks and the level of security provided by various defenses, as well as the trade-off (if any) between test accuracy, computation overhead, and privacy risks. The paper makes two main contributions. First, we draw attention to two strong assumptions that a current gradient inversion attack (Geiping et al., 2020) implicitly makes. We show that by nullifying these assumptions, the performance of the attack drops significantly and can only work for low- resolution images. The findings are explored in Section 3 and already imply some more secure configurations in Federated Learning (Section 6). Second, we summarize various defenses (Section 4) and systematically evaluate (Section 5) some of their performance of defending against a state-of-the-art gradient inversion attack, and present their data utility and privacy leakage trade-offs. We estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. We also experimentally demonstrate the feasibility and effectiveness of combined defenses. Our findings are summarized as strategies to further improve Federated Learning’s security against gradient inversion attacks (Section 6). In Appendix B, we provide theoretical insights for mechanism of each evaluated defense. ## 2 Gradient Inversion Attacks Previous studies have shown the feasibility of recovering input from gradient (i.e. gradient inversion) for image classification tasks, by formulating it as an optimization problem: given a neural network with parameters $\theta$, and the gradient $\nabla_{\theta}\mathcal{L}_{\theta}(x^{*},y^{*})$ computed with a private data batch $(x^{*},y^{*})\in\mathbb{R}^{b\times d}\times\mathbb{R}^{b}$ ($b,d$ being the batch size, image size), the attacker tries to recover $x\in\mathbb{R}^{b\times d}$, an approximation of $x^{*}$: $\arg\min_{x}\mathcal{L}_{\rm grad}(x;\theta,\nabla_{\theta}\mathcal{L}_{\theta}(x^{*},y^{*}))+\alpha\mathcal{R}_{\rm aux}(x)$ (1) The optimization goal consists of two parts: $\mathcal{L}_{\rm grad}(x;\theta,\nabla_{\theta}\mathcal{L}_{\theta}(x^{*},y^{*}))$ enforces matching of the gradient of recovered batch $x$ with the provided gradients $\mathcal{L}_{\theta}(x^{*},y^{*})$, and $\mathcal{R}_{\rm aux}(x)$ regularizes the recovered image based on image prior(s). (Phong et al., 2017) brings theoretical insights on this task by proving that such reconstruction is possible with a single-layer neural network. (Zhu et al., 2019) is the first to show that accurate pixel-level reconstruction is practical for a maximum batch size of 8. Their formulation uses $\ell_{2}$-distance as $\mathcal{L}_{\rm grad}(\cdot,\cdot)$ but no regularization term $\mathcal{R}_{\rm aux}(x)$. The approach works for low- resolution CIFAR datasets (Krizhevsky et al., 2009), with simple neural networks with sigmoid activations, but cannot scale up to high-resolution images, or larger models with ReLU activations. 
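To make the optimization in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of a gradient-matching attack, using the $\ell_{2}$ distance as $\mathcal{L}_{\rm grad}$ and total variation as $\mathcal{R}_{\rm aux}$. It is an illustrative reconstruction rather than the implementation of any cited attack, and it omits details such as label recovery, restarts, and regularization schedules.

```python
# Illustrative sketch of a gradient-matching (gradient inversion) attack;
# not the implementation of any of the cited papers.
import torch
import torch.nn.functional as F

def invert_gradients(model, target_grads, labels, input_shape,
                     steps=1000, lr=0.1, tv_weight=1e-4):
    """Optimize a dummy batch x so that its gradients match target_grads."""
    x = torch.randn(input_shape, requires_grad=True)  # dummy data
    optimizer = torch.optim.Adam([x], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), labels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # l2 distance between dummy gradients and the observed gradients
        grad_loss = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        # simple total-variation prior on the recovered images
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (grad_loss + tv_weight * tv).backward()
        optimizer.step()
    return x.detach()
```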
A follow-up (Zhao et al., 2020) proposes a simple approach to extract the ground-truth labels from the gradient, which improves the attack but still cannot overcome its limitations. With a careful choice of $\mathcal{L}_{\rm grad}$ and $\mathcal{R}_{\rm aux}(x)$, (Geiping et al., 2020) substantially improves the attack and succeeds in recovering a single ImageNet (Deng et al., 2009) image from the gradient: their approach uses cosine distance as $\mathcal{L}_{\rm grad}$, and the total variation as $\mathcal{R}_{\rm aux}(x)$. Their approach is able to reconstruct low-resolution images with a maximum batch size of 100 or a single high-resolution image. Based on (Geiping et al., 2020), (Wei et al., 2020) analyzes how different configurations in the training may affect attack effectiveness. A more recent work (Yin et al., 2021) further improves the attack on high-resolution images, by introducing to $\mathcal{R}_{\rm aux}(x)$ a new image prior term based on batch normalization (Ioffe and Szegedy, 2015) statistics, and a regularization term which enforces consistency across multiple attack trials. An orthogonal line of work (Zhu and Blaschko, 2021) proposes to formulate the gradient inversion attack as a closed-form recursive procedure, instead of an optimization problem. However, their implementation can recover only low-resolution images under the setting where the batch size is 1. ## 3 Strong Assumptions Made by SOTA Attacks ### 3.1 The state-of-the-art attacks Two recent attacks (Geiping et al., 2020; Yin et al., 2021) achieve the best recovery results. Our analysis focuses on the former as the implementation of the latter is not available at the time of writing this paper. We plan to include the analysis for the latter attack in the final version of this paper if its implementation becomes available. (Geiping et al., 2020)’s attack optimizes the following objective function: $\arg\min_{x}1-\frac{\left\langle\nabla_{\theta}\mathcal{L}_{\theta}(x,y),\nabla_{\theta}\mathcal{L}_{\theta}\left(x^{*},y^{*}\right)\right\rangle}{\left\|\nabla_{\theta}\mathcal{L}_{\theta}(x,y)\right\|\left\|\nabla_{\theta}\mathcal{L}_{\theta}\left(x^{*},y^{*}\right)\right\|}+\alpha_{\rm TV}\mathcal{R}_{\rm TV}(x)$ (2) where $\langle\cdot,\cdot\rangle$ is the inner product between vectors, and $\mathcal{R}_{\rm TV}(\cdot)$ is the total variation of images. We notice that Geiping et al. have made two strong assumptions (Section 3.2). Changing setups to invalidate those assumptions will substantially weaken the attacks (Section 3.3). We also summarize whether other attacks have made similar assumptions in Table 1. ### 3.2 Strong assumptions We find that previous gradient inversion attacks have made different assumptions about whether the attacker knows BatchNorm statistics or private labels, as shown in Table 1. Note that (Geiping et al., 2020)’s attack makes both strong assumptions. #### Assumption 1: Knowing BatchNorm statistics. Batch normalization (BatchNorm) (Ioffe and Szegedy, 2015) is a technique for training neural networks that normalizes the inputs to a layer for every mini-batch. It behaves differently during training and evaluation. Assume the model has $L$ batch normalization layers. Given $x^{*}$, a batch of input images, we use $x^{*}_{l}$ to denote the input features to the $l$-th BatchNorm layer, where $l\in[L]$.
During training, the $l$-th BatchNorm layer normalizes $x^{*}_{l}$ based on the batch’s mean ${\rm mean}(x^{*}_{l})$ and variance ${\rm var}(x^{*}_{l})$, and keeps a running estimate of mean and variance of all training data points, denoted by $\mu_{l}$ and $\sigma^{2}_{l}$. During inference, $\\{\mu_{l}\\}_{l=1}^{L}$ and $\\{\sigma^{2}_{l}\\}_{l=1}^{L}$ are used to normalize test images. In the following descriptions, we leave out $\\{\cdot\\}_{l=1}^{L}$ for simplicity (i.e. use $\mu,\sigma^{2}$ to denote $\\{\mu_{l}\\}_{l=1}^{L},\\{\sigma^{2}_{l}\\}_{l=1}^{L}$, and ${\rm mean}(x^{*})$, ${\rm var}(x^{*})$ to denote $\\{{\rm mean}(x^{*}_{l})\\}_{l=1}^{L}$, $\\{{\rm var}(x^{*}_{l})\\}_{l=1}^{L}$). We notice that (Geiping et al., 2020)’s implementation111The official implementation of (Geiping et al., 2020): https://github.com/JonasGeiping/invertinggradients. assumes that BatchNorm statistics of the private batch, i.e., ${\rm mean}(x^{*})$, ${\rm var}(x^{*})$, are jointly provided with the gradient. Knowing BatchNorm statistics would enable the attacker to apply the same batch normalization used by the private batch on his recovered batch, to achieve a better reconstruction. This implicitly increases the power of the attacker, as sharing private BatchNorm statistics are not necessary in Federated learning (Andreux et al., 2020; Li et al., 2021). Note that this assumption may be realistic in some settings: 1) the neural network is shallow, thus does not require using BatchNorm layers, or 2) the neural network is deep, but adapts approaches that normalize batch inputs with a fixed mean and variance (as alternative to BatchNorm), e.g. Fixup initialization (Zhang et al., 2019). #### Assumption 2: Knowing or able to infer private labels. Private labels are not intended to be shared in Federated learning, but knowing them would improve the attack. (Zhao et al., 2020) finds that label information of a single private image can be inferred from the gradient (see Section 3.3 for details). Based on this, (Geiping et al., 2020) assumes the attacker knows private labels (see remark at the end of Section 4 in their paper). However, this assumption may not hold true when multiple images in a batch share the same label, as we will show in the next section. Assumptions | (Zhu et al., 2019) | (Zhao et al., 2020) | (Geiping et al., 2020) | (Yin et al., 2021) ---|---|---|---|--- Knowing BN statistics | N/A† | N/A† | Yes | Yes∗ Knowing private labels | No | No‡ | Yes | No‡ Table 1: Assumptions of gradient inversion attacks. †: its evaluation uses a simple model without a BatchNorm layer; ‡: it proposes a method to infer private labels, which works when images in a batch have unique labels (see Section 3.3); ∗: although the paper discusses a setting where BatchNorm statistics are unknown, its main results assume knowing BatchNorm statistics. ### 3.3 Re-evaluation under relaxed assumptions We re-evaluate the performance of the gradient inversion attack in settings where two assumptions above are relaxed. For each relaxation, we re-design the attack (if needed) based on the knowledge that the attacker has. #### Relaxation 1: Not knowing BatchNorm statistics. We refer to the previous threat model as $\rm BN_{exact}$, where the attacker knows exact BatchNorm statistics of the private batch. We consider a more realistic threat model where these statistics are not exposed, and re-design the attack based on it. Threat model. 
In each training step, the client normalizes its private batch $x^{*}$ using the batch’s mean ${\rm mean}(x^{*})$ and variance ${\rm var}(x^{*})$, keeps the running estimate of mean and variance locally as in (Li et al., 2021), and shares the gradient. The client releases the final aggregated mean $\mu$, and aggregated variance $\sigma^{2}$ of all training data points at the end of training. As before, the attacker has access to the model and the gradient during training. Re-design A: $\rm BN_{proxy}$, attacker naively uses $\mu$ and $\sigma^{2}$. A simple idea is that the attacker uses $(\mu,\sigma^{2})$ as the proxy for $({\rm mean}(x^{*}),{\rm var}(x^{*}))$, and uses them to normalize $x$, his guesses of the private batch. Other operations of the gradient inversion attack remain the same as before. However, Figure 1.d and 1.h show poor-quality reconstructions with this re-design. Re-design B: $\rm BN_{infer}$, attacker infers $({\rm mean}(x^{*}),{\rm var}(x^{*}))$ based on $(\mu,\sigma^{2})$. A more reasonable attacker will try to infer $({\rm mean}(x^{*}),{\rm var}(x^{*}))$ while updating $x$, his guesses of the private batch, and use $({\rm mean}(x),{\rm var}(x))$ to normalize the batch. In this case, $(\mu,\sigma^{2})$ could be used as a prior on the BatchNorm statistics to regularize the recovery, as suggested in (Yin et al., 2021): $\arg\min_{x}1-\frac{\left\langle\nabla_{\theta}\mathcal{L}_{\theta}(x,y),\nabla_{\theta}\mathcal{L}_{\theta}\left(x^{*},y^{*}\right)\right\rangle}{\left\|\nabla_{\theta}\mathcal{L}_{\theta}(x,y)\right\|\left\|\nabla_{\theta}\mathcal{L}_{\theta}\left(x^{*},y^{*}\right)\right\|}+\alpha_{\rm TV}\mathcal{R}_{\rm TV}(x)+\alpha_{\rm BN}\mathcal{R}_{\rm BN}(x)$ (3) where $\mathcal{R}_{\mathrm{BN}}(x)=\sum_{l}\|{\rm mean}(x_{l})-\mu_{l}\|_{2}+\sum_{l}\|{\rm var}(x_{l})-\sigma_{l}^{2}\|_{2}$. We tune $\alpha_{\rm BN}$ and present the best result in Figure 1.c and 1.g (see results of different $\alpha_{\rm BN}$’s in Appendix A). As shown, for a batch of low-resolution images, $\rm BN_{infer}$ gives a much better reconstruction result than $\rm BN_{proxy}$, but still cannot recover some details of the private batch when compared with $\rm BN_{exact}$. The result for a single high-resolution image is worse: the attacker fails to return a recognizable reconstruction with $\rm BN_{infer}$. This suggests that not having access to BatchNorm statistics of the private batch already weakens the state-of-the-art gradient inversion attack. (a) Original (b) $\rm BN_{exact}$ (c) $\rm BN_{infer}$ (d) $\rm BN_{proxy}$ (e) Original (f) $\rm BN_{exact}$ (g) $\rm BN_{infer}$ (h) $\rm BN_{proxy}$ Figure 1: Attacking a batch of 16 low-resolution images from CIFAR-10 (a-d) and a single high-resolution image from ImageNet (e-h) with different knowledge of BatchNorm statistics. Attack is weakened when BatchNorm statistics are not available (c, d versus b, and g, h versus f). See Appendix A for more examples and quantitative results. #### Relaxation 2: Not knowing private labels. (Zhao et al., 2020) notes that label information of a single private image can be computed analytically from the gradients of the layer immediately before the output layer. (Yin et al., 2021) further extends this method to support recovery of labels for a batch of images. However, if multiple images in the private batch belong to the same label, neither approach can tell how many images belong to that label, let alone which subset of images belong to that label.
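The label-inference step just described can be sketched as follows for the single-image case. With cross-entropy loss and batch size 1, the gradient row of the final classification layer’s weights for the true class is the only one with a negative sum (assuming a non-negative penultimate activation such as ReLU); the function below is our simplified reconstruction of that observation, not the code of (Zhao et al., 2020), and the ambiguity when several images in a batch share a label is exactly the limitation discussed above.

```python
import torch

def infer_label_from_fc_grad(fc_weight_grad):
    """Single-image label inference sketch: `fc_weight_grad` has shape
    (num_classes, hidden_dim); the true class is the row whose entries
    (and hence whose sum) are negative."""
    row_sums = fc_weight_grad.sum(dim=1)   # one value per class
    return torch.argmin(row_sums).item()   # most negative row -> inferred label
```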
Figure 2a demonstrates that with CIFAR-10, for batches of various sizes it is possible for many of the training samples to have the same label, and that the distribution of labels is not uniform; hence, inferring labels becomes harder and the attack would be weakened. In Figure 2b we evaluate the worst case for an attacker in this setting by comparing recoveries where the batch labels are simultaneously reconstructed alongside the training samples. (a) Distribution of labels in a batch (b) Reconstructions with and without private labels Figure 2: Attack is weakened when private labels are not available. (a) shows that for CIFAR-10, when the batch size is large, many images in the batch belong to the same class, which essentially weakens label restoration (Zhao et al., 2020; Yin et al., 2021). (b) visualizes a reconstructed batch of 16 images with and without private labels known. The quality of the reconstruction drops without knowledge of private labels. ## 4 Defenses Against the Gradient Inversion Attack Several defense ideas have been proposed to mitigate the risks of gradient inversion. ### 4.1 Encrypt gradients Cryptography-based approaches encrypt gradients to prevent gradient inversion. (Bonawitz et al., 2016) presents a secure aggregation protocol for Federated learning by computing the sum of gradient vectors based on secret sharing (Shamir, 1979). (Phong et al., 2018) proposes using homomorphic encryption to encrypt the gradients before sending them. These approaches require special setup and can be costly to implement. Moreover, with a secure aggregation protocol, an honest-but-curious server can still launch the gradient inversion attack on the summed gradient vector. Similarly, an honest-but-curious client can launch the gradient inversion attack on the model returned by the server to reconstruct other clients’ private data, even with homomorphic encryption. As alternatives, two other types of defensive mechanisms have been proposed to mitigate the risks of attacks on plain-text gradients. ### 4.2 Perturbing gradients #### Gradient pruning. When proposing the first practical gradient inversion attack, (Zhu et al., 2019) also suggests a defense by setting gradients of small magnitudes to zero (i.e., gradient pruning). Based on their attack, they demonstrate that pruning more than $70\%$ of the gradients would make the recovered images no longer visually recognizable. However, the suggested pruning ratio is determined based on weaker attacks, and may not remain safe against the state-of-the-art attack. #### Adding noise to gradient. Motivated by DPSGD (Abadi et al., 2016), which adds noise to gradients to achieve differential privacy (Dwork, 2009; Dwork and Roth, 2014), (Zhu et al., 2019; Wei et al., 2020) also suggest defending by adding Gaussian or Laplacian noise to the gradient. They show that a successful defense requires adding noise at a level high enough that accuracy drops by more than $30\%$ on CIFAR-10 tasks. Recent works (Papernot et al., 2020; Tramèr and Boneh, 2021) suggest using better pre-training techniques and a large batch size (e.g., $4,096$) to achieve better accuracy for DPSGD training. Since most DPSGD implementations for natural image classification tasks (Abadi et al., 2016; Papernot et al., 2020; Tramèr and Boneh, 2021) use a pre-training and fine-tuning pipeline, it is hard to fairly compare with other defense methods that apply directly when training the model from scratch. Thus, we leave the comparison with DPSGD to future work.
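A minimal sketch of the two gradient-perturbation defenses above is given below; the pruning ratio and noise scale are placeholder values, and the helper names are ours rather than any reference implementation.

```python
import torch

def prune_gradient(grad, p=0.9):
    """Gradient pruning: zero out the fraction p of entries with the smallest magnitude."""
    flat = grad.flatten().abs()
    k = int(p * flat.numel())
    if k == 0:
        return grad.clone()
    threshold = flat.kthvalue(k).values          # k-th smallest magnitude
    return torch.where(grad.abs() <= threshold, torch.zeros_like(grad), grad)

def perturb_gradient_with_noise(grad, std=1e-2):
    """Additive Gaussian noise on the gradient, in the spirit of DPSGD-style defenses."""
    return grad + torch.randn_like(grad) * std
```

In a Federated learning client, either function would be applied to each parameter’s gradient just before it is shared with the server.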
### 4.3 Weak encryption of inputs (i.e. encoding inputs) #### MixUp. MixUp data augmentation (Zhang et al., 2018a) trains neural networks on composite images created via linear combination of image pairs. It has been shown to improve the generalization of the neural network and stabilizes the training. Recent work also suggests that MixUp increases the model’s robustness to adversarial examples (Pang et al., 2020; Lamb et al., 2019). #### InstaHide. Inspired by MixUp, (Huang et al., 2020) proposes InstaHide as a light-weight instance-encoding scheme for private distributed learning. To encode an image $x\in{\mathbb{R}}^{d}$ from a private dataset, InstaHide first picks $k-1$ other images $s_{2},s_{3},\ldots,s_{k}$ from that private dataset, or a large public dataset, and $k$ random nonnegative coefficients $\\{\lambda_{i}\\}_{i=1}^{k}$ that sum to $1$, and creates a composite image $\lambda_{1}x+\sum_{i=2}^{k}\lambda_{i}s_{i}$ ($k$ is typically small, e.g., $4$). A composite label is also created using the same set of coefficients.222Only the labels of examples from the private dataset will get combined. See (Huang et al., 2020) for details. Then it adds another layer of security: pick a random sign-flipping pattern $\sigma\in\\{-1,1\\}^{d}$ and output the encryption $\tilde{x}=\sigma\circ(\lambda_{1}x+\sum_{i=2}^{k}\lambda_{i}s_{i})$, where $\circ$ is coordinate-wise multiplication of vectors. The neural network is then trained on encoded images, which look like random pixel vectors to the human eye and yet lead to good classification accuracy ($<6\%$ accuracy loss on CIFAR-10, CIFAR-100, and ImageNet). Recently, (Carlini et al., 2020) gives an attack to recover private images of a small dataset, when the InstaHide encodings are revealed to the attacker (not in a Federated learning setting). Their first step is to train a neural network on a public dataset for similarity annotation, to infer whether a pair of InstaHide encodings contain the same private image. With the inferred similarities of all pairs of encodings, the attacker then runs a combinatorial algorithm (cubic time in size of private dataset) to cluster all encodings based on their original private images, and finally uses a regression algorithm (with the help of composite labels) to recover the private images. Neither (Huang et al., 2020) or (Carlini et al., 2020) has evaluated their defense or attack in the Federated learning setting, where the attacker observes gradients of the encoded images instead of original encoded images. This necessitates the systematic evaluation in our next section. ## 5 Evaluation of defenses The main goal of our experiments is to understand the trade-offs between data utility (accuracy) and securely defending the state-of-the-art gradient inversion attack even in its strongest setting, without any relaxation of its implicit assumptions. Specifically, we grant the attacker the knowledge of 1) BatchNorm statistics of the private batch, and 2) labels of the private batch. We vary key parameters for each defense, and evaluate their performance in terms of the test accuracy, computation overhead, and privacy risks (Section 5.2). We then investigate the feasibility of combining defenses (Section 5.3). We also estimate the computation cost of end-to-end recovery of a single image under evaluated defenses (Section 5.4). 
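Before turning to the setup, the following sketch illustrates the Intra-InstaHide encoding described in Section 4.3; MixUp corresponds to the same mixing step without the final sign flip. The Dirichlet-based rejection sampling of the coefficients and all names here are assumptions of this illustration, not the reference implementation of (Huang et al., 2020).

```python
import torch

def instahide_encode(x, mix_pool, k=4, coef_cap=0.65):
    """Encode one private image x (C, H, W) by mixing it with k-1 images drawn
    from `mix_pool` and applying a random pixel-wise sign flip."""
    idx = torch.randperm(mix_pool.shape[0])[: k - 1]
    imgs = torch.cat([x.unsqueeze(0), mix_pool[idx]], dim=0)       # the k images to mix
    while True:  # nonnegative coefficients summing to 1, each at most coef_cap
        lam = torch.distributions.Dirichlet(torch.ones(k)).sample()
        if lam.max() <= coef_cap:
            break
    mixed = (lam.view(-1, 1, 1, 1) * imgs).sum(dim=0)
    sign = torch.rand_like(mixed).round() * 2 - 1                  # random +/-1 mask
    return sign * mixed, lam, idx
```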
As the feasibility of the state-of-the-art attack (Geiping et al., 2020) on a batch of high-resolution images remains elusive when its implicit assumptions no longer hold (see Figure 1), we focus our evaluation on low-resolution images to understand whether current attacks can be mitigated. ### 5.1 Experimental setup #### Key parameters of defenses. We evaluate the following defenses on the CIFAR-10 dataset (Krizhevsky et al., 2009) with the ResNet-18 architecture (He et al., 2016). * • GradPrune (gradient pruning): gradient pruning sets gradients of small magnitudes to zero. We vary the pruning ratio $p$ in $\\{0.5,0.7,0.9,0.95,0.99,0.999\\}$. * • MixUp: MixUp encodes a private image by linearly combining it with $k-1$ other images from the training set. Following (Huang et al., 2020), we vary $k$ in $\\{4,6\\}$, and set the upper bound of a single coefficient to $0.65$ (coefficients sum to $1$). * • Intra-InstaHide: InstaHide (Huang et al., 2020) proposes two versions: Inter-InstaHide and Intra-InstaHide. The only difference is that at the mixup step, Inter-InstaHide mixes up an image with images from a public dataset, whereas Intra-InstaHide only mixes with private images. Both versions apply a random sign-flipping pattern on each mixed image. We evaluate Intra-InstaHide in our experiments, which is a weaker version of InstaHide. Similar to the evaluation of MixUp, we vary $k$ in $\\{4,6\\}$, and set the upper bound of a single coefficient to $0.65$. Note that InstaHide flips signs of pixels in the image, which destroys the total variation prior. However, the absolute values of adjacent pixels should still be close. Therefore, for the InstaHide defense, we apply the total variation regularizer on $|x|$, i.e., taking the absolute value of each pixel in the reconstruction. We train the ResNet-18 architecture on CIFAR-10 using different defenses, and launch the attack. We provide more details of the experiments in Appendix A. #### The attack. We use a subset of 50 CIFAR-10 images to evaluate the attack performance. Note that attacking MixUp and InstaHide involves another step to decode private images from the encoded images. We apply (Carlini et al., 2020)’s attack here as the decode step, where the attacker needs to eavesdrop on $T$ epochs of training, instead of a single training step. We set $T=20$ in our evaluation. We also grant the attacker the strongest power for the decode step to evaluate the upper bound of privacy leakage. Given a MixUp or Intra-InstaHide image which encodes $k$ private images, we assume the attacker knows: 1. The indices of the $k$ images in the private dataset. In a realistic scenario, the attacker of (Carlini et al., 2020) would need to train a neural network to detect similarity of encodings, and run a combinatorial algorithm to solve an approximation of this mapping. 2. The mixing coefficients for each of the $k$ private images. In real Federated learning, this information is not available. #### Hyper-parameters of the attack. The attack minimizes the objective function given in Eq. 3. We search $\alpha_{\rm TV}$ in $\\{0,0.001,0.005,0.01,0.05,0.1,0.5\\}$ for all defenses, and apply the best choice for each defense: $0.05$ for GradPrune, $0.1$ for MixUp, and $0.01$ for Intra-InstaHide. We apply $\alpha_{\rm BN}=0.001$ for all defenses after searching it in $\\{0,0.0005,0.001,0.01,0.05,0.01\\}$. We optimize the attack for $10,000$ iterations using Adam (Kingma and Ba, 2015), with initial learning rate $0.1$.
We decay the learning rate by a factor of $0.1$ at $3/8,5/8,7/8$ of the optimization. #### Batch size of the attack. (Zhu et al., 2019; Geiping et al., 2020) have shown that a small batch size is important for the success of the attack. We intentionally evaluate the attack with three small batch sizes to test the upper bound of privacy leakage, including the minimum (and unrealistic) batch size 1, and two small but realistic batch sizes, 16 and 32. #### Metrics for reconstruction quality. We visualize reconstructions obtained under different defenses. Following (Yin et al., 2021), we also use the learned perceptual image patch similarity (LPIPS) score (Zhang et al., 2018b) to measure mismatch between reconstruction and original images: higher values suggest more mismatch (less privacy leakage). Figure 3: Reconstruction results under different defenses with batch size being 1, 16 and 32. When batch size is 32, combining gradient pruning and Intra-InstaHide makes the reconstruction almost unrecognizable (the last column). See Figure 7 in Appendix A for the full version. | None | GradPrune ($p$) | MixUp ($k$) | Intra-InstaHide ($k$) | GradPrune ($p=0.9$) ---|---|---|---|---|--- | \+ MixUp | \+ Intra-InstaHide Parameter | - | 0.5 | 0.7 | 0.9 | 0.95 | 0.99 | 0.999 | 4 | 6 | 4 | 6 | $k=4$ | $k=4$ Test Acc. | 93.37 | 93.19 | 93.01 | 90.57 | 89.92 | 88.61 | 83.58 | 92.31 | 90.41 | 90.04 | 88.20 | 91.37 | 86.10 Time (train) | $1\times$ | $1.04\times$ | $1.06\times$ | $1.06\times$ | $1.10\times$ Attack batch size $=1$ Avg. LPIPS $\downarrow$ | 0.19 | 0.19 | 0.22 | 0.35 | 0.42 | 0.52 | 0.52 | 0.34 | 0.46 | 0.58 | 0.61 | 0.41 | 0.60 Best LPIPS $\downarrow$ | 0.02 | 0.02 | 0.05 | 0.14 | 0.22 | 0.32 | 0.36 | 0.12 | 0.25 | 0.41 | 0.42 | 0.21 | 0.43 (LPIPS std.) | 0.16 | 0.17 | 0.16 | 0.13 | 0.11 | 0.08 | 0.06 | 0.08 | 0.07 | 0.06 | 0.09 | 0.07 | 0.09 Attack batch size $=16$ Avg. LPIPS $\downarrow$ | 0.45 | 0.46 | 0.47 | 0.51 | 0.55 | 0.58 | 0.61 | 0.34 | 0.31 | 0.62 | 0.63 | 0.46 | 0.68 Best LPIPS $\downarrow$ | 0.18 | 0.19 | 0.19 | 0.31 | 0.43 | 0.47 | 0.51 | 0.11 | 0.13 | 0.41 | 0.44 | 0.22 | 0.54 (LPIPS std.) | 0.12 | 0.12 | 0.11 | 0.07 | 0.05 | 0.04 | 0.03 | 0.09 | 0.09 | 0.08 | 0.08 | 0.10 | 0.07 Attack batch size $=32$ Avg. LPIPS $\downarrow$ | 0.45 | 0.46 | 0.48 | 0.52 | 0.54 | 0.58 | 0.63 | 0.50 | 0.49 | 0.69 | 0.69 | 0.62 | 0.73 Best LPIPS $\downarrow$ | 0.18 | 0.18 | 0.22 | 0.31 | 0.43 | 0.48 | 0.54 | 0.31 | 0.28 | 0.56 | 0.56 | 0.37 | 0.65 (LPIPS std.) | 0.11 | 0.11 | 0.09 | 0.07 | 0.05 | 0.04 | 0.04 | 0.10 | 0.10 | 0.06 | 0.07 | 0.10 | 0.05 (a) (b) Table 2: Utility-security trade-off of different defenses. We train the ResNet-18 model on the whole CIFAR-10 dataset, and report the averaged test accuracy and running time of 5 independent runs. We evaluate the attack on a subset of 50 CIFAR-10 images, and report the LPIPS score ($\downarrow$: lower values suggest more privacy leakage). We mark the least-leakage defense measured by the metric in green. ### 5.2 Performance of defense methods We summarize the performance of each defense in Table 2, and visualize reconstructed images in Figure 3. We report the averaged and the best results for the metric of reconstruction quality, as a proxy for average-case and worst-case privacy leakage. #### No defense. Without any defense, when batch size is 1, the attack can recover images well from the gradient. Increasing the batch size makes it difficult to recover well, but the recovered images are visually similar to the originals (see Figure 3). 
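For reference, the reconstruction-quality numbers in Table 2 can be reproduced with the publicly released LPIPS implementation of (Zhang et al., 2018b); the sketch below assumes the `lpips` PyPI package and images scaled to $[0,1]$, and the reported best-case value is the minimum over the evaluation subset (the most leaked image).

```python
import lpips   # pip install lpips
import torch

loss_fn = lpips.LPIPS(net='alex')      # expects (N, 3, H, W) tensors scaled to [-1, 1]

def reconstruction_lpips(recovered, original):
    """Return (average, best-case) LPIPS over a batch of reconstructions."""
    with torch.no_grad():
        d = loss_fn(recovered * 2 - 1, original * 2 - 1)   # rescale from [0, 1]
    return d.mean().item(), d.min().item()
```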
#### Gradient pruning (GradPrune). Figure 3 shows that as the pruning ratio $p$ increases, there are more artifacts in the reconstructions. However, the reconstructions are still recognizable even when the pruning ratio $p=0.9$, thus the previous suggestion of using $p=0.7$ by (Zhu et al., 2019) is no longer safe against the state-of- the-art attack. Our results suggest that, for CIFAR-10, defending the strongest attack with gradient pruning may require the pruning ratio $p\geq 0.999$. As a trade-off, such a high pruning ratio would introduce an accuracy loss of around $10\%$ (see Table 2). #### MixUp. MixUp introduces a small computational overhead to training. MixUp with $k=4$ only has a minor impact (~$2\%$) on test accuracy, but it is not sufficient to defend the gradient inversion attack (see Figure 3). Increasing $k$ from $4$ to $6$ slightly reduces the leakage, however, the reconstruction is still highly recognizable. This suggests that MixUp alone may not be a practical defense against the state-of-the-art gradient inversion attack. #### Intra-InstaHide. Intra-InstaHide with $k=4$ incurs an extra ~$2\%$ accuracy loss compared with MixUp, but it achieves better defense performance: when batch size is 32, there are obvious artifacts and color shift in the reconstruction (see Figure 3). However, with batch size 32, Intra-InstaHide alone also cannot defend the state-of-the-art gradient inversion, as structures of private images are still vaguely identifiable in reconstructions. Appendix A provides the whole reconstructed dataset under MixUp and Intra- InstaHide. ### 5.3 Performance of combined defenses We notice that two types of defenses (i.e perturbing gradient and encoding inputs) are complementary to each other, which motivates an evaluation of combining gradient pruning with MixUp or Intra-InstaHide. As shown in Figure 3, when the batch size is 32, combining Intra-InstaHide ($k=4$) with gradient pruning ($p=0.9$) makes the reconstruction almost unrecognizable. The combined defense yields a higher LPIPS score than using gradient pruning with $p=0.999$, but introduces a smaller accuracy loss (~$7\%$ compared with a no-defense pipeline). Note that our evaluation uses the strongest attack and relatively small batch sizes. As shown in Appendix A, invalidating assumptions in Section 3 or increasing the batch size may hinder the attack with an even weaker defense (e.g. with a lower $p$, or smaller $k$), which gives a better accuracy. ### 5.4 Time estimate for end-to-end recovery of a single image Table 3 shows time estimates for the end-to-end recovery of a single image in a Federated learning setting with GradPrune or InstaHide defense. We do not estimate for MixUp since it has been shown to be a weak defense (see Section 5.2). Our time estimates consider three fairly small dataset sizes. The largest size in our estimate is a small fraction of a dataset of ImageNet scale. We consider a client holds a dataset of $N$ private images and participates in Federated learning, which trains a ResNet-18 model with batch size $b=128$. Assumes that the resolution of the client’s data is $32\times 32\times 3$. If the attacker uses a single NVIDIA GeForce RTX 2080 Ti GPU as his computation resource, and runs gradient inversion with 10,000 iterations of optimization, then $t$, the running time for attacking a single batch is ~$0.25$ GPU hours (batch size $b$ has little impact on the attack’s running time, but a larger $b$ makes the attack less effective). #### Non-defended and gradient pruning. 
Recovering a single image in a non-defended pipeline (or a pipeline that applies gradient pruning alone as the defense) only requires the attacker to invert the gradient of a single training step, which takes time $t$. #### InstaHide. When InstaHide is applied, the current attack (Carlini et al., 2020) suggests that recovering a single image would involve recovering the whole dataset first. As discussed in Section 4, Carlini et al.’s attack consists of two steps: 1) recover InstaHide images from the gradients of $T$ epochs. This would take $(NT/b)\times t$ GPU hours. 2) Run the decode attack (Carlini et al., 2020) on InstaHide images to recover the private dataset, which involves: 2a. Train a neural network to detect similarity in recovered InstaHide images. Assume that training the network requires at least $n$ recovered InstaHide images; then collecting these images by running gradient inversion would take $(n/b)\times t$ GPU hours. The training takes $10$ GPU hours according to (Carlini et al., 2020), so training the similarity network would take $(n/b)\times t+10$ GPU hours in total. 2b. Run the combinatorial algorithm to recover the original images. The running time of this step has been shown to be at least quadratic in $m$, the number of InstaHide encodings (Chen et al., 2021). This step takes $1/6$ GPU hours with $m=5\times 10^{3}$. Therefore for $m=NT$, the running time is at least $1/6\times(\frac{NT}{5\times 10^{3}})^{2}$ GPU hours. In total, an attack on InstaHide in this real-world setting would take $(NT/b)\times t+(n/b)\times t+10+1/6\times(\frac{NT}{5\times 10^{3}})^{2}$ GPU hours. We use $T=50$ (used by (Carlini et al., 2020)) and $n=10,000$, and give estimates in Table 3. As shown, when InstaHide is applied on a small dataset ($N=5,000$), the end-to-end recovery of a single image takes $>3,000\times$ longer than in a no-defense pipeline or GradPrune pipeline; when InstaHide is applied on a larger dataset ($N=500,000$), the computation cost for end-to-end recovery is enormous.
Size of client’s dataset ($N$) | No defense | GradPrune | InstaHide
---|---|---|---
5,000 | 0.25 | 0.25 | 934.48
50,000 | 0.25 | 0.25 | 46,579.01 ($\approx$ 5.5 GPU years)
500,000 | 0.25 | 0.25 | 4,215,524.32 ($\approx$ 493.4 GPU years)
Table 3: Time estimates (NVIDIA GeForce RTX 2080 Ti GPU hours) of recovering a single image from the client’s dataset using the state-of-the-art gradient inversion attack (Geiping et al., 2020) under different defenses. We assume the image resolution of the client’s data is $32\times 32\times 3$. ## 6 Conclusions This paper first points out that some state-of-the-art gradient inversion attacks have made strong assumptions about knowing BatchNorm statistics and private labels. Relaxing such assumptions can significantly weaken these attacks. The paper then reports the performance of a set of proposed defenses against gradient inversion attacks, and estimates the computation cost of an end-to-end recovery of a single image for different dataset sizes. Our evaluation shows that InstaHide without mixing with data from a public dataset, combined with gradient pruning, can defend against the state-of-the-art attack, and the estimated time to recover a single image from a medium-size client dataset (e.g. of 500,000 images) is enormous. Based on our evaluation of the attack by (Geiping et al., 2020) and multiple defenses for plain-text gradients, we have the following observations: * • Using BatchNorm layers in the network while not sharing BatchNorm statistics of the private batch during Federated learning weakens the attack.
We have demonstrated in Section 3 that exposing BatchNorm statistics to the attacker significantly improves the quality of gradient inversion. So a more secure configuration of Federated Learning would be to use BatchNorm layers, but do not share BatchNorm statistics in training, which has been shown feasible in (Andreux et al., 2020; Li et al., 2021). * • Using a large batch size weakens the attack; a batch size smaller than 32 is not safe. We have shown that a larger batch size hinders the attack by making it harder to guess the private labels (Section 3) and to recover the private images even with correct private labels (Section 5). Our experiments suggest that even with some weak defenses applied, a batch size smaller than 32 is not safe against the strongest gradient inversion attack. * • Combining multiple defenses may achieve a better utility-privacy trade-off. In our experiment, for a batch size of 32, combining InstaHide ($k=4$) with gradient pruning ($p=0.9$) achieves the best utility-privacy trade-off, by making the reconstruction almost unrecognizable at a cost of ~$7\%$ accuracy loss (using InstaHide also makes the end-to-end recovery of a single image more computationally expensive). Best parameters would vary for different deep learning tasks, but we strongly encourage Federated learning participants to explore the possibility of combining multiple defensive mechanisms, instead of only using one of them. We hope to extend our work by including evaluation of defenses for high- resolution images, the attack by (Yin et al., 2021) (when its implementation becomes available), and more defense mechanisms including those rely on adding noise to gradients. ## Acknowledgments This project is supported in part by Ma Huateng Foundation, Schmidt Foundation, NSF, Simons Foundation, ONR and DARPA/SRC. Yangsibo Huang and Samyak Gupta are supported in part by the Princeton Graduate Fellowship. We would like to thank Quanzheng Li, Xiaoxiao Li, Hongxu Yin and Aoxiao Zhong for helpful discussions, and members of Kai Li’s and Sanjeev Arora’s research groups for comments on early versions of the work. ## References * Abadi et al. [2016] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_ , pages 308–318, 2016. * Act [1996] Accountability Act. Health insurance portability and accountability act of 1996. _Public law_ , 104:191, 1996. * Andreux et al. [2020] Mathieu Andreux, Jean Ogier du Terrail, Constance Beguier, and Eric W Tramel. Siloed federated learning for multi-centric histopathology datasets. In _Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning_ , pages 129–139. Springer, 2020. * Bonawitz et al. [2016] K. A. Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for federated learning on user-held data. In _NIPS Workshop on Private Multi-Party Machine Learning_ , 2016\. * Carlini et al. [2020] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, and Florian Tramer. An attack on instahide: Is private learning possible with instance encoding? In _IEEE Symposium on Security and Privacy_ , 2020. * Chen et al. [2021] Sitan Chen, Xiaoxiao Li, Zhao Song, and Danyang Zhuo. 
On instahide, phase retrieval, and sparse matrix factorization. In _ICLR_ , 2021. * Cohen et al. [2019] Michael B Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. In _Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC)_ , 2019. * Commission [2018] European Commission. 2018 reform of eu data protection rules. https://gdpr-info.eu/, 2018. * Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_ , 2009. * Deng [2012] Li Deng. The mnist database of handwritten digit images for machine learning research. _IEEE Signal Processing Magazine_ , 29(6):141–142, 2012. * Dwork [2009] Cynthia Dwork. The differential privacy frontier. In _Theory of Cryptography Conference (TCC)_ , pages 496–502, 2009\. * Dwork and Roth [2014] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. _Foundations and Trends in Theoretical Computer Science_ , 9(3–4):211–407, 2014. * Geiping et al. [2020] Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients–how easy is it to break privacy in federated learning? In _NeurIPS_ , 2020. * He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _CVPR_ , 2016. * Huang et al. [2020] Yangsibo Huang, Zhao Song, Kai Li, and Sanjeev Arora. Instahide: Instance-hiding schemes for private distributed learning. In _ICML_ , 2020. * Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _ICML_ , 2015. * Kairouz et al. [2021] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. _Foundations and Trends in Machine Learning_ , 14(1–2):1–210, 2021. * Kingma and Ba [2015] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR_ , 2015. * Krizhevsky et al. [2009] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009\. * Lamb et al. [2019] Alex Lamb, Vikas Verma, Juho Kannala, and Yoshua Bengio. Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy. In _Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security_ , pages 95–103, 2019. * Legislature [2018] California State Legislature. California consumer privacy act. https://oag.ca.gov/privacy/ccpa, 2018. * Li et al. [2021] Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, and Qi Dou. Fedbn: Federated learning on non-iid features via local batch normalization. In _ICLR_ , 2021. * McMahan et al. [2016] H Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al. Communication-efficient learning of deep networks from decentralized data. In _Artificial Intelligence and Statistics (AISTATS)_ , pages 1273–1282, 2016. * Pang et al. [2020] Tianyu Pang, Kun Xu, and Jun Zhu. Mixup inference: Better exploiting mixup to defend adversarial attacks. In _ICLR_ , 2020. * Papernot et al. [2020] Nicolas Papernot, Steve Chien, Shuang Song, Abhradeep Thakurta, and Ulfar Erlingsson. Making the shoe fit: Architectures, initializations, and tuning for learning with privacy, 2020. URL https://openreview.net/forum?id=rJg851rYwH. * Phong et al. 
[2017] Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. Privacy-preserving deep learning: Revisited and enhanced. In _ICATIS_ , pages 100–110, 2017. * Phong et al. [2018] Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. Privacy-preserving deep learning via additively homomorphic encryption. _IEEE Transactions on Information Forensics and Security_ , 2018. * Shamir [1979] Adi Shamir. How to share a secret. _Communications of the ACM_ , 22(11):612–613, 1979. * Tramèr and Boneh [2021] Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In _ICLR_ , 2021. * Wei et al. [2020] Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, and Yanzhao Wu. A framework for evaluating gradient leakage attacks in federated learning. _arXiv preprint arXiv:2004.10397_ , 2020. * Yin et al. [2021] Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M Alvarez, Jan Kautz, and Pavlo Molchanov. See through gradients: Image batch recovery via gradinversion. _arXiv preprint arXiv:2104.07586_ , 2021. * Zhang et al. [2018a] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In _ICLR_ , 2018a. * Zhang et al. [2019] Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In _ICLR_ , 2019. * Zhang et al. [2018b] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_ , 2018b. * Zhao et al. [2020] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. idlg: Improved deep leakage from gradients. _arXiv preprint arXiv:2001.02610_ , 2020. * Zhu and Blaschko [2021] Junyi Zhu and Matthew Blaschko. R-gap: Recursive gradient attack on privacy. In _ICLR_ , 2021. * Zhu et al. [2019] Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. In _NeurIPS_ , 2019. ## Appendix A Experimental details and more results We run all the experiments on Nvidia RTX 2080 Ti GPUs and V100 GPUs. Table 4 summarizes the set of images used in each figure or table in the main paper. Figure/Table | Comments ---|--- Figure 1a | We’ve tuned hyperparams for the attack (see Appendix A.1) and carried out evaluations on the whole CIFAR-subset. The first sampled batch of size 16 from CIFAR-subset was used in Figure 1a to demonstrate the quality of recovery for low-resolution images when BatchNorm statistics are not assumed to be known. Figure 1b | We’ve tuned hyperparams for the attack (see Appendix A.1) and carried out evaluations on the whole ImageNet-subset. The best-reconstructed image in ImageNet-subset was used in Figure 1b to demonstrate the quality of recovery for high-resolution images when BatchNorm statistics are not assumed to be known. Figure 2a | Percentages of class labels per batch were evaluated over the entire CIFAR10 dataset, for a random seed. Figure 2b | The first sampled batch of size 16 was used in Figure 2b to demonstrate the quality of recovery when labels are not assumed to be known. Table 2 and Figure 3 | We’ve tuned hyperparams for the attack and carried out evaluations on the whole CIFAR-subset. Table 2 summarizes the performance of the attack on the whole CIFAR-subset and Figure 3 shows example images. Table 4: Summary of experimental testbed for each evaluation. ### A.1 Hyper-parameters #### Training. For all experiments, we train ResNet-18 for 200 epochs, with a batch size of 128. 
We use SGD with momentum 0.9 as the optimizer. The initial learning rate is set to 0.1 by default, except for gradient pruning with $p=0.99$ and $p=0.999$. where we set the initial learning rate to 0.02. We decay the learning rate by a factor of 0.1 every 50 epochs. #### The attack. We report the performance under different $\alpha_{\rm TV}$’s (Figure 4) and $\alpha_{\rm BN}$’s (Figure 5). (a) Original (b) $\alpha_{\rm TV}$=0 (c) $\alpha_{\rm TV}$=1e-3 (d) $\alpha_{\rm TV}$=5e-3 (e) $\alpha_{\rm TV}$=1e-2 (f) $\alpha_{\rm TV}$=5e-2 (g) $\alpha_{\rm TV}$=1e-1 (h) $\alpha_{\rm TV}$=5e-1 Figure 4: Attacking a single CIFAR-10 images in $\rm BN_{exact}$ setting, with different coefficients for the total variation regularizer ($\alpha_{\rm TV}$’s). $\alpha_{\rm TV}$=1e-2 gives the best reconstruction. (a) Original (b) $\alpha_{\rm BN}$=0 (c) $\alpha_{\rm BN}$=5e-4 (d) $\alpha_{\rm BN}$=1e-3 (e) $\alpha_{\rm BN}$=5e-3 (f) $\alpha_{\rm BN}$=1e-2 Figure 5: Attacking a batch of 16 CIFAR-10 images in $\rm BN_{infer}$ setting, with different coefficients for the BatchNorm regularizer ($\alpha_{\rm BN}$’s). $\alpha_{\rm TV}$=1e-3 gives the best reconstruction. ### A.2 Details and more results for Section 3 #### Attacking a single ImageNet image. We launched the attack on ImageNet using the objective function in Eq. 3, where $\alpha_{\rm TV}=0.1$, $\alpha_{\rm BN}=0.001$. We run the attack for 24,000 iterations using Adam optimizer, with initial learning rate 0.1, and decay the learning rate by a factor of $0.1$ at $3/8,5/8,7/8$ of training. We rerun the attack 5 times and present the best results measured by LPIPS in Figure 1. #### Qualitative and quantitative results for a more realistic attack. We also present results of a more realistic attack in Table 5 and Figure 6, where the attacker does not know BatchNorm statistics but knows the private labels. We assume the private labels to be known in this evaluation, because for those batches whose distribution of labels is uniform, the restoration of labels should still be quite accurate [Yin et al., 2021]. As shown, in the evaluated setting, the attack is no longer effective when the batch size is 32 and Intra-InstaHide with $k=4$ is applied. The accuracy loss to stop the realistic attack is only around $3\%$ (compared to around $7\%$ to stop the strongest attack) . (a) Figure 6: Reconstruction results under different defenses for a more realistic setting (when the attacker knows private labels but does not know BatchNorm statistics). We also present the results for the strongest attack from Figure 3 for comparison. Using Intra-InstaHide with $k=4$ and batch size 32 seems to stop the realistic attack. | None | GradPrune ($p$) | MixUp ($k$) | Intra-InstaHide ($k$) | GradPrune ($p=0.9$) ---|---|---|---|---|--- | \+ MixUp | \+ Intra-InstaHide Parameter | - | 0.5 | 0.7 | 0.9 | 0.95 | 0.99 | 0.999 | 4 | 6 | 4 | 6 | $k=4$ | $k=4$ Test Acc. | 93.37 | 93.19 | 93.01 | 90.57 | 89.92 | 88.61 | 83.58 | 92.31 | 90.41 | 90.04 | 88.20 | 91.37 | 86.10 Time (train) | $1\times$ | $1.04\times$ | $1.06\times$ | $1.06\times$ | $1.10\times$ Attack batch size $=16$, the strongest attack Avg. LPIPS $\downarrow$ | 0.41 | 0.41 | 0.42 | 0.46 | 0.48 | 0.50 | 0.55 | 0.50 | 0.49 | 0.69 | 0.69 | 0.62 | 0.73 Best LPIPS $\downarrow$ | 0.21 | 0.22 | 0.27 | 0.29 | 0.30 | 0.29 | 0.48 | 0.31 | 0.28 | 0.56 | 0.56 | 0.37 | 0.65 (LPIPS std.) 
| 0.09 | 0.08 | 0.07 | 0.06 | 0.06 | 0.06 | 0.04 | 0.10 | 0.10 | 0.06 | 0.07 | 0.10 | 0.05 Attack batch size $=16$, attacker knows private labels but does not know BatchNorm statistics Avg. LPIPS $\downarrow$ | 0.49 | 0.51 | 0.48 | 0.51 | 0.52 | 0.56 | 0.60 | 0.71 | 0.71 | 0.75 | 0.75 | 0.74 | 0.74 Best LPIPS $\downarrow$ | 0.30 | 0.33 | 0.31 | 0.33 | 0.34 | 0.39 | 0.44 | 0.48 | 0.53 | 0.65 | 0.63 | 0.61 | 0.63 (LPIPS std.) | 0.08 | 0.09 | 0.08 | 0.08 | 0.07 | 0.07 | 0.05 | 0.08 | 0.07 | 0.04 | 0.05 | 0.08 | 0.05 Attack batch size $=32$, the strongest attack Avg. LPIPS $\downarrow$ | 0.45 | 0.46 | 0.48 | 0.52 | 0.54 | 0.58 | 0.63 | 0.50 | 0.49 | 0.69 | 0.69 | 0.62 | 0.73 Best LPIPS $\downarrow$ | 0.18 | 0.18 | 0.22 | 0.31 | 0.43 | 0.48 | 0.54 | 0.31 | 0.28 | 0.56 | 0.56 | 0.37 | 0.65 (LPIPS std.) | 0.11 | 0.11 | 0.09 | 0.07 | 0.05 | 0.04 | 0.04 | 0.10 | 0.10 | 0.06 | 0.07 | 0.10 | 0.05 Attack batch size $=32$, attacker knows private labels but does not know BatchNorm statistics Avg. LPIPS $\downarrow$ | 0.48 | 0.50 | 0.53 | 0.53 | 0.55 | 0.60 | 0.63 | 0.73 | 0.72 | 0.76 | 0.76 | 0.76 | 0.77 Best LPIPS $\downarrow$ | 0.29 | 0.32 | 0.32 | 0.31 | 0.40 | 0.41 | 0.55 | 0.63 | 0.60 | 0.68 | 0.63 | 0.66 | 0.65 (LPIPS std.) | 0.08 | 0.07 | 0.07 | 0.08 | 0.08 | 0.06 | 0.04 | 0.06 | 0.06 | 0.04 | 0.05 | 0.06 | 0.05 Table 5: Utility-security trade-off of different defenses for a more realistic setting (when the attacker knows private labels but does not know BatchNorm statistics). We also present the results for the strongest attack from Table 2 for comparison. We evaluate the attack on 50 CIFAR-10 images and report the LPIPS score ($\downarrow$: lower values suggest more privacy leakage). We mark the least-leakage defense measured by the metric in green. ### A.3 More results for the strongest attack #### Full version of Figure 3. Figure 7 provides more examples for reconstructed images by the strongest attack under different defenses and batch sizes. (a) Batch size $=1$ (b) Batch size $=16$ (c) Batch size $=32$ Figure 7: Reconstruction results under different defenses with batch size 1 (a), 16 (b) and 32 (c). Full version of Figure 3. #### Results with MNIST dataset. We’ve repeated our main evaluation of defenses and attacks (Table 2) on MNIST dataset [Deng, 2012] with a simple 6-layer ConvNet model. Note that the simple ConvNet does not contain BatchNorm layers. We evaluate the following defenses on the MNIST dataset with a 6-layer ConvNet architecture against the strongest attack (private labels known): * • GradPrune (gradient pruning): gradient pruning sets gradients of small magnitudes to zero. We vary the pruning ratio $p$ in {0.5, 0.7, 0.9, 0.95, 0.99, 0.999, 0.9999}. * • MixUp: we vary $k$ in {4,6}, and set the upper bound of a single coefficient to 0.65 (coefficients sum to 1). * • Intra-InstaHide: we vary $k$ in {4,6}, and set the upper bound of a single coefficient to 0.65 (coefficients sum to 1). * • A combination of GradPrune and MixUp/Intra-InstaHide. We run the evaluation against the strongest attack and batch size 1 to estimate the upper bound of privacy leakage. Specifically, we assume the attacker knows private labels, as well as the indices of mixed images and mixing coefficients for MixUp and Intra-InstaHide. Figure 8: Reconstruction results of MNIST digits under different defenses with the strongest atttack and batch size 1. For MNIST with a simple 6-layer ConvNet, defending the strongest attack with gradient pruning may require the pruning ratio $p\geq 0.9999$. 
MixUp with $k=4$ or $k=6$ is not sufficient to defend against the gradient inversion attack. Combining MixUp ($k=4$) with gradient pruning ($p=0.99$) improves the defense; however, the reconstructed digits are still highly recognizable. Intra-InstaHide alone ($k=4$ or $k=6$) gives somewhat better defense performance than MixUp and GradPrune. Combining InstaHide ($k=4$) with gradient pruning ($p=0.99$) further improves the defense and makes the reconstruction almost unrecognizable. ### A.4 More results for encoding-based defenses We visualize the whole reconstructed dataset under MixUp and Intra-InstaHide defenses with different batch sizes in Figures 10, 11 and 12. Sample results of the original and the reconstructed batches are provided in Figure 9. Figure 9: Original and reconstructed batches of 16 images under MixUp and Intra-InstaHide defenses. We visualize both the original and the absolute images for the Intra-InstaHide defense. Intra-InstaHide makes pixel-wise matching harder. (a) Original (b) MixUp, $k$=4 (c) MixUp, $k$=6 (d) MixUp+GradPrune, $k$=4, $p$=0.9 (e) Original (f) InstaHide, $k$=4 (g) InstaHide, $k$=6 (h) InstaHide+GradPrune, $k$=4, $p$=0.9 Figure 10: Reconstructed dataset under MixUp and Intra-InstaHide against the strongest attack (batch size is 1). (a) Original (b) MixUp, $k$=4 (c) MixUp, $k$=6 (d) MixUp+GradPrune, $k$=4, $p$=0.9 (e) Original (f) InstaHide, $k$=4 (g) InstaHide, $k$=6 (h) InstaHide+GradPrune, $k$=4, $p$=0.9 Figure 11: Reconstructed dataset under MixUp and Intra-InstaHide against the strongest attack (batch size is 16). (a) Original (b) MixUp, $k$=4 (c) MixUp, $k$=6 (d) MixUp+GradPrune, $k$=4, $p$=0.9 (e) Original (f) InstaHide, $k$=4 (g) InstaHide, $k$=6 (h) InstaHide+GradPrune, $k$=4, $p$=0.9 Figure 12: Reconstructed dataset under MixUp and Intra-InstaHide against the strongest attack (batch size is 32). ## Appendix B Theoretical Insights for Defenses’ Working Mechanism In this section, we provide some theoretical insights for the mechanism of each defense. ### B.1 Gradient pruning Gradient pruning is a non-oblivious case of applying sketching techniques [Cohen et al., 2019] to compress the gradient vector. Usually, if we only observe the vector after sketching, it is hard to recover the original vector unless certain assumptions about the vector itself and the sketching technique have been made. Therefore, gradient pruning prevents the attacker from seeing the original gradient and makes the inversion harder. ### B.2 MixUp and InstaHide Intuitively, MixUp and InstaHide’s working mechanism may come from mixing $k$ images in a single encoded image, which appears to be similar to multiplying the batch size by a factor of $k$, and thus makes the attack less effective. In Section B.3, we provide a theoretical analysis for this intuition, by showing that mixing $k$ images and using a batch size of $k$ are essentially similar, with any neural network that has a fully connected layer as its first layer. Another layer of security of InstaHide seems to come from applying random sign-flipping on the mixed images. As mentioned in Section 5.1, for an InstaHide-encoded image $x\in\operatorname{{\mathbb{R}}}^{d}$, we apply the total variation regularizer on $|x|$ instead of $x$, which pushes the gap between the absolute values of adjacent pixels (i.e., $||x_{j}|-|x_{j+1}||$) to be small. However, having $||x_{j}|-|x_{j+1}||=\delta$ for some small $\delta<10^{-4}$ does not imply that $|x_{j}-x_{j+1}|=\delta$; in fact, $|x_{j}-x_{j+1}|$ can be as large as $1-\delta$.
Therefore, the random sign flipping operation in InstaHide could potentially make the total variation image prior less effective in some sense (see Figure 9). ### B.3 Property of gradient in a small batch The goal of this section is to present the following result. ###### Lemma B.1. Given a neural network with ReLU activation functions, each row of the gradient of the first-layer weights is a linear combination of the input images, i.e., $\displaystyle(\frac{\partial{\cal L}(W)}{\partial W_{1}})_{i}=\sum_{j=1}^{b}\alpha_{i,j}x_{j}^{\top}$ where $b$ is the number of images in a small batch and $x_{1},\cdots,x_{b}\in\operatorname{{\mathbb{R}}}^{d}$ are the images in that small batch. In Sections B.4 and B.5, we show that the above observation holds for one- and two-hidden-layer neural networks. In Section B.6, we generalize it to multi-layer neural networks. The standard batched $k$-vector sum can be defined as follows: ###### Definition B.2. Given a database $X$, a list of vectors $x_{1},\cdots,x_{n}$. Given a list of observations $y_{1},\cdots,y_{m}$ where for each $j\in[m]$ there is a set $S_{j}$ such that $y_{j}=\sum_{i\in S_{j}}x_{i}$ and $|S_{j}|=b$. We can observe $y_{1},\cdots,y_{m}$ but have no access to the database; the goal is to recover $S_{j}$ and the vectors $x_{i}$ being used, for each $j$. The above definition is a mathematical abstraction of the MixUp recovery/attack. It can be further generalized to InstaHide, if we only observe the $|y_{j}|$. We also want to remark that in the above definition, we simplify the formulation by using coefficients $1$ for all vectors. It can be easily generalized to settings where random coefficients are assigned to vectors in the database for MixUp/InstaHide. Using Lemma B.1, we notice that ###### Lemma B.3. Under the condition of Lemma B.1, given a list of observed gradients, the problem of recovering images is also a batched vector-sum problem. Thus, the gradient attack is essentially a variation of the MixUp/InstaHide attack. ### B.4 One Hidden Layer We consider a one-hidden-layer ReLU-activated neural network with $m$ neurons in the hidden layer: $\displaystyle f(x)=a^{\top}\phi(Wx)$ where $a\in\operatorname{{\mathbb{R}}}^{m}$ and $W\in\operatorname{{\mathbb{R}}}^{m\times d}$. We define the objective function $L$ as follows: $\displaystyle L(W)=\frac{1}{2}\sum_{i=1}^{n}(y_{i}-f(W,x_{i},a))^{2}$ We can compute the gradient of $L$ with respect to $w_{r}$: $\displaystyle\frac{\partial L(W)}{\partial w_{r}}=\sum_{i=1}^{n}(f(W,x_{i},a)-y_{i})a_{r}x_{i}{\bf 1}_{\langle w_{r},x_{i}\rangle\geq 0}$ Let $\widetilde{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i}$; then $\displaystyle\frac{\partial L(W)}{\partial w_{r}}=(f(W,\widetilde{x},a)-y)a_{r}\widetilde{x}{\bf 1}_{\langle w_{r},\widetilde{x}\rangle\geq 0}$ Another version is $\displaystyle\frac{\partial L(W)}{\partial w_{r}}=\sum_{i=1}^{n}(f(W,x_{i},a)-y_{i})\cdot\Big{(}a_{r}\widetilde{x}{\bf 1}_{\langle w_{r},\widetilde{x}\rangle\geq 0}\Big{)}$ ### B.5 Two Hidden Layers Suppose $a\in\operatorname{{\mathbb{R}}}^{m}$, $V\in\operatorname{{\mathbb{R}}}^{m\times d},W\in\operatorname{{\mathbb{R}}}^{m\times m}$. The neural network is defined as $f:\operatorname{{\mathbb{R}}}^{d}\rightarrow\operatorname{{\mathbb{R}}}$; here we slightly deviate from the standard setting and assume the input dimension is $m$, in order to capture the general setting.
$\displaystyle f(x)=a^{\top}\phi(W\phi(Vx))$ Consider the mean square loss $\displaystyle L(W,V,a)=\frac{1}{2}\sum_{i=1}^{n}|f(x_{i})-y_{i}|^{2}$ The gradient with respect to $W$ is $\displaystyle\frac{\partial L(W,V,a)}{\partial W}=\sum_{i=1}^{n}(f(x_{i})-y_{i})\underbrace{\mathrm{diag}\\{\phi^{\prime}(W\phi(Vx_{i}))\\}}_{m\times m}\underbrace{a}_{m\times 1}\underbrace{\phi(Vx_{i})^{\top}}_{1\times m}$ and the gradient with respect to $V$ is $\displaystyle\frac{\partial L(W,V,a)}{\partial V}=\sum_{i=1}^{n}(f(x_{i})-y_{i})\underbrace{\mathrm{diag}\\{\phi^{\prime}(Vx_{i})\\}}_{m\times m}\underbrace{W^{\top}}_{m\times m}\underbrace{\mathrm{diag}\\{\phi^{\prime}(W\phi(Vx_{i}))\\}}_{m\times m}\underbrace{a}_{m\times 1}\underbrace{x_{i}^{\top}}_{1\times d}$ ### B.6 The multi-layers case The following multiple layer neural network definition is standard in literature. Consider a $L$ layer neural network with one vector $a\in\operatorname{{\mathbb{R}}}^{m_{L}}$ and $L$ matrices $W_{L}\in\operatorname{{\mathbb{R}}}^{m_{L}\times m_{L-1}}$, $\cdots$, $W_{2}\in\operatorname{{\mathbb{R}}}^{m_{2}\times m_{1}}$ and $W_{1}\in\operatorname{{\mathbb{R}}}^{m_{1}\times m_{0}}$. Let $m_{0}=d$. In order to write gradient in an elegant way, we define some artificial variables as follows $\displaystyle g_{i,1}=$ $\displaystyle~{}W_{1}x_{i},$ $\displaystyle h_{i,1}=\phi(W_{1}x_{i}),$ $\displaystyle\in\operatorname{{\mathbb{R}}}^{m_{1}}$ $\displaystyle\forall i\in[n]$ $\displaystyle g_{i,\ell}=$ $\displaystyle~{}W_{\ell}h_{i,\ell-1},$ $\displaystyle h_{i,\ell}=\phi(W_{\ell}h_{i,\ell-1}),$ $\displaystyle\in\operatorname{{\mathbb{R}}}^{m_{\ell}}$ $\displaystyle\forall i\in[n],\forall\ell\in\\{2,3,\cdots,L\\}$ (4) $\displaystyle D_{i,1}=$ $\displaystyle~{}\text{diag}\big{(}\phi^{\prime}(W_{1}x_{i})\big{)},$ $\displaystyle\in\operatorname{{\mathbb{R}}}^{m_{1}\times m_{1}}$ $\displaystyle\forall i\in[n]$ $\displaystyle D_{i,\ell}=$ $\displaystyle~{}\text{diag}\big{(}\phi^{\prime}(W_{\ell}h_{i,\ell-1})\big{)},$ $\displaystyle\in\operatorname{{\mathbb{R}}}^{m_{\ell}\times m_{\ell}}$ $\displaystyle\forall i\in[n],\forall\ell\in\\{2,3,\cdots,L\\}$ where $\phi(\cdot)$ is the activation function and $\phi^{\prime}(\cdot)$ is the derivative of activation function. Let $f:\operatorname{{\mathbb{R}}}^{m_{0}}\rightarrow\operatorname{{\mathbb{R}}}$ denote neural network function: $\displaystyle f(W,x)=a^{\top}\phi(W_{L}(\phi(\cdots\phi(W_{1}x))))$ Thus using definition of $f$ and $h$, we have $\displaystyle f(W,x_{i})=a^{\top}h_{i,L},~{}~{}~{}\in\operatorname{{\mathbb{R}}},~{}~{}~{}\forall i\in[n]$ Given $n$ input data points $(x_{1},y_{1}),(x_{2},y_{2}),\cdots(x_{n},y_{n})\in\operatorname{{\mathbb{R}}}^{d}\times\operatorname{{\mathbb{R}}}$. 
We define the objective function $\mathcal{L}$ as follows: $\displaystyle\mathcal{L}(W)=\frac{1}{2}\sum_{i=1}^{n}(y_{i}-f(W,x_{i}))^{2}.$ We can compute the gradient of ${\cal L}$ with respect to $W_{\ell}\in\operatorname{{\mathbb{R}}}^{m_{\ell}\times m_{\ell-1}}$, for all $\ell\geq 2$: $\displaystyle\frac{\partial\mathcal{L}(W)}{\partial W_{\ell}}=\sum_{i=1}^{n}(f(W,x_{i})-y_{i})\underbrace{D_{i,\ell}}_{m_{\ell}\times m_{\ell}}\left(\prod_{k=\ell+1}^{L}\underbrace{W_{k}^{\top}}_{m_{k-1}\times m_{k}}\underbrace{D_{i,k}}_{m_{k}\times m_{k}}\right)\underbrace{a}_{m_{L}\times 1}\underbrace{h_{i,\ell-1}^{\top}}_{1\times m_{\ell-1}}$ (5) Note that the gradient for $W_{1}\in\operatorname{{\mathbb{R}}}^{m_{1}\times m_{0}}$ (recall that $m_{0}=d$) is slightly different and cannot be written in the same general form. It is given by $\displaystyle\frac{\partial\mathcal{L}(W)}{\partial W_{1}}=\sum_{i=1}^{n}(f(W,x_{i})-y_{i})\underbrace{D_{i,1}}_{m_{1}\times m_{1}}\left(\prod_{k=2}^{L}\underbrace{W_{k}^{\top}}_{m_{k-1}\times m_{k}}\underbrace{D_{i,k}}_{m_{k}\times m_{k}}\right)\underbrace{a}_{m_{L}\times 1}\underbrace{x_{i}^{\top}}_{1\times m_{0}}$ (6)
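As a quick sanity check on Lemma B.1 and Equation (6), the following self-contained sketch builds a one-hidden-layer ReLU network, forms the first-layer gradient analytically, and verifies that every row of that gradient lies in the span of the batch images. The dimensions, random data, and helper names are illustrative assumptions rather than part of the original setup.

```python
import numpy as np

rng = np.random.default_rng(0)
b, d, m = 4, 32, 16           # batch size, input dimension, hidden width

X = rng.normal(size=(b, d))   # the b images in the small batch
y = rng.normal(size=b)        # scalar labels
W = rng.normal(size=(m, d))   # first-layer weights
a = rng.normal(size=m)        # output weights

# Forward pass and residuals for L(W) = 1/2 * sum_i (f(x_i) - y_i)^2
pre = X @ W.T                 # (b, m) pre-activations <w_r, x_i>
f = np.maximum(pre, 0.0) @ a  # (b,) network outputs
resid = f - y

# Analytic gradient: dL/dw_r = sum_i (f(x_i) - y_i) * a_r * 1[<w_r, x_i> >= 0] * x_i
coeff = resid[:, None] * (pre >= 0) * a[None, :]   # coeff[i, r] is the scalar multiplying x_i in row r
grad_W = coeff.T @ X                                # (m, d); row r = sum_i coeff[i, r] * x_i

# Lemma B.1: each row of grad_W is a linear combination of x_1, ..., x_b.
# Recover the coefficients by least squares and confirm the reconstruction is exact.
alphas, *_ = np.linalg.lstsq(X.T, grad_W.T, rcond=None)
print(np.allclose((X.T @ alphas).T, grad_W, atol=1e-10))   # expected: True
```

The same check carries over to deeper networks via Equation (6), since the per-image factor multiplying $x_{i}^{\top}$ there again plays the role of the coefficients $\alpha_{i,j}$.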
# On the Baire class of n-Dimensional Boundary Functions Connor Paul Wilson 530 Church Street Ann Arbor, MI 48109<EMAIL_ADDRESS> ###### Abstract. We show an extension of a theorem of Kaczynski to boundary functions in $n$-dimensional space. Let $H$ denote the upper half-plane, and let $X$ denote its frontier, the $x$-axis. Suppose that $f$ is a function mapping $H$ into some metric space $Y.$ If $E$ is any subset of $X,$ we will say that a function $\varphi:E\rightarrow Y$ is a boundary function for $f$ if and only if for each $x\in E$ there exists an arc $\gamma$ at $x$ such that $\lim_{z\rightarrow x\atop z\in\gamma}f(z)=\varphi(x)$ ## 1\. Introduction ### 1.1. Preliminaries and Notation Let $H$ denote the upper half-plane, and let $X$ denote its frontier, the $x$-axis. If $x\in X,$ then by an arc at $x$ we mean a simple arc $\gamma$ with one endpoint at $x$ such that $\gamma-\\{x\\}\subseteq H.$ Suppose that $f$ is a function mapping $H$ into some metric space $Y.$ If $E$ is any subset of $X,$ we will say that a function $\varphi:E\rightarrow Y$ is a boundary function for $f$ if and only if for each $x\in E$ there exists an arc $\gamma$ at $x$ such that $\lim_{z\rightarrow x\atop z\in\gamma}f(z)=\varphi(x)$ We also define Baire classes as Kaczynski does in [1]: a function $f:M\rightarrow Y$ is said to be of Baire class $0(M,Y)$ if and only if it is continuous; if $\xi$ is an ordinal number greater than or equal to $1,$ then $f$ is said to be of Baire class $\xi(M,Y)$ if and only if there exists a sequence of functions $\left\\{f_{n}\right\\}_{n=1}^{\infty}$ mapping $M$ into $Y,$ $f_{n}$ being of Baire class $\eta_{n}(M,Y)$ for some $\eta_{n}<\xi$, such that $f_{n}\rightarrow f$ pointwise. ## 2\. Boundary functions for discontinuous functions ###### Theorem 2.1. Let $Y$ be a separable arc-wise connected metric space, with $f:H\rightarrow Y$ a function of Baire class $\xi(H,Y)$ where $\xi\geq 1,$ $E$ a subset of $X$, and $\varphi:E\rightarrow Y$ a boundary function for $f$. Then $\varphi$ is of Baire class $\xi+1(E,Y).$ ###### Proof. Let $U$ be an open subset of $Y$ and set $V=Y-\operatorname{clos}(U)$. Set $A=\varphi^{-1}(U),\ B=\varphi^{-1}(V),\ C=A\cup B.$ Notice that $A$ and $B$ are clearly disjoint.
$\forall x\in C$, choose an arc $\gamma_{x}$ at $x$ such that $\lim_{z\rightarrow x\atop z\in\gamma_{x}}f(z)=\varphi(x)$ with $\gamma_{x}\subseteq\\{z:|z-x|\leq 1\\}$ and $\begin{cases}\gamma_{x}-\\{x\\}\subseteq f^{-1}(U)&\text{ if }x\in A\\\ \gamma_{x}-\\{x\\}\subseteq f^{-1}(V)&\text{ if }x\in B\end{cases}$ Note once again that the intersection is empty, $\gamma_{x}\cap\gamma_{y}=\emptyset$, for $x\in A$ and $y\in B.$ Let us say that $\gamma_{x}$ meets $\gamma_{y}$ in $\operatorname{clos}(H_{n})$ provided that the two arcs have respective subarcs $\gamma_{x}^{\prime}$ and $\gamma_{y}^{\prime}$ with $x\in\gamma_{x}^{\prime}\subseteq\operatorname{clos}(H_{n})$ and $y\in\gamma_{y}^{\prime}\subseteq\operatorname{clos}(H_{n})$, such that $\gamma_{x}^{\prime}\cap\gamma_{y}^{\prime}\neq\varnothing.$ Let: $L_{a}:=\\{x\in A:\forall n\ \exists y\in C,\ y\neq x,\text{ such that }\gamma_{y}\text{ meets }\gamma_{x}\text{ in }\operatorname{clos}(H_{n})\\}$ $L_{b}:=\\{x\in B:\forall n\ \exists y\in C,\ y\neq x,\text{ such that }\gamma_{y}\text{ meets }\gamma_{x}\text{ in }\operatorname{clos}(H_{n})\\}$ $M_{a}:=\\{x\in A:\exists n\text{ such that }\gamma_{x}\text{ meets no }\gamma_{y}\text{ in }\operatorname{clos}(H_{n})\\}$ $M_{b}:=\\{x\in B:\exists n\text{ such that }\gamma_{x}\text{ meets no }\gamma_{y}\text{ in }\operatorname{clos}(H_{n})\\}$ $L=L_{a}\cup L_{b}$ $M=M_{a}\cup M_{b}$ The sets $L_{a},L_{b},M_{a},$ and $M_{b}$ are pairwise disjoint, and $A=L_{a}\cup M_{a},$ $B=L_{b}\cup M_{b}.$ For each $x\in M$, let $n(x)\in\mathbb{Z}^{+}$ be chosen so that $\gamma_{x}$ meets no $\gamma_{y}$ with $y\neq x$ in $\operatorname{clos}(H_{n(x)})$; for $n\geq n(x)$, the no-meeting property in $\operatorname{clos}(H_{n})$ is then immediate. Moreover, take $K_{n}:=\\{x\in C:\gamma_{x}\text{ meets }X_{n},\text{ and if }x\in M,\ n\geq n(x)\\}$ It is clear that $K_{n}\subseteq K_{n+1}$ for every $n$, and that $C=\bigcup_{n=1}^{\infty}K_{n}.$ Theorem 2.1 then follows from the work of Kaczynski [1] and the following lemma. ###### Lemma 2.2. Let $Y$ be a separable arc-wise connected metric space, $E$ any metric space, and $\varphi:E\rightarrow Y$ a function such that for every open set $U\subseteq Y$ there exists a set $T\in P^{\xi+1}(E)$ with $\varphi^{-1}(U)\subseteq T\subseteq\varphi^{-1}(\operatorname{clos}(U))$. Then, for $\xi\geq 2,$ $\varphi$ is of Baire class $\xi(E,Y).$ ###### Proof. Let $\mathcal{B}$ be a countable base for $Y,$ and suppose $W$ is some open subset of $Y.$ Let: $\mathcal{A}(W)=\\{U\in\mathcal{B}:\operatorname{clos}(U)\subseteq W\\}$ Following Kaczynski, we have: $W=\bigcup_{U\in\mathcal{A}(W)}U=\bigcup_{U\in\mathcal{A}(W)}\operatorname{clos}(U).$ $\forall U\in\mathcal{B},$ let $T(U)\in P^{\xi+1}(E)$ be chosen so that $\varphi^{-1}(U)\subseteq T(U)\subseteq\varphi^{-1}(\operatorname{clos}(U)).$ Thus we have: $\displaystyle\varphi^{-1}(W)$ $\displaystyle=\bigcup_{U\in\mathcal{A}(W)}\varphi^{-1}(U)\subseteq\bigcup_{U\in\mathcal{A}(W)}T(U)$ $\displaystyle\subseteq\bigcup_{U\in\mathcal{A}(W)}\varphi^{-1}(\operatorname{clos}(U))=\varphi^{-1}(W).$ Therefore $\varphi^{-1}(W)=\bigcup_{U\in\mathcal{A}(W)}T(U)$, and since $P^{\xi+1}(E)$ is closed under countable unions, we have $\varphi^{-1}(W)\in P^{\xi+1}(E);$ hence $\varphi$ is of Baire class $\xi(E,Y).$ ∎ Returning to the theorem: from the above we have $\varphi^{-1}(U)=A\subseteq T\cap E\subseteq E-B=E-\varphi^{-1}(V)=\varphi^{-1}(\operatorname{clos}(U))$ for some $T\in P^{\xi+2}(X)$, and $T\cap E\in P^{\xi+2}(E)$, which by the above lemma gives that $\varphi$ is of Baire class $\xi+1(E,Y).$ ∎ ## 3\. Sets of curvilinear convergence for $\mathbb{R}^{3}$
Let $f:H\rightarrow Y$ be of Baire class $\xi(H,Y)$, and $\varphi:E\rightarrow Y$ a boundary function of $f.$ Let us define a function to analyze the properties of $M_{a},$ following Kaczynski. Take $\pi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}$ such that $\pi(\langle x,y,z\rangle)=\left\|\langle x,y\rangle\right\|_{2}.$ If $\left\|\langle x,y\rangle\right\|_{2}\in M\cap K_{n},$ then let us define $p_{n}(\left\|\langle x,y\rangle\right\|_{2})$ as the first point of $X_{n}$ reached along $\gamma_{x}$ starting from $x.$ It is thus clear, by Kaczynski, that the function $\pi(p_{n}(\left\|\langle x,y\rangle\right\|_{2}))$ is strictly increasing on $M\cap K_{n}.$ Thus, by the above logic and Lemma 2.2, we can show the following theorem: ###### Theorem 3.1. Let $Y$ be a separable arc-wise connected metric space in $\mathbb{R}^{n}$, with $f:H\rightarrow Y$ a function of Baire class $\xi+n-1(H,Y)$ where $\xi\geq 1,$ $E$ a subset of $X$, and $\varphi:E\rightarrow Y$ a boundary function for $f$. Then $\varphi$ is of Baire class $\xi+n(E,Y).$ Although this does not resolve the fourth open problem of Kaczynski’s work, it does provide an extension of one of its theorems, and is nonetheless valuable to the field. ## References * [1] Kaczynski, Theodore J., Boundary functions, Doctoral Dissertation, University of Michigan (1967).
# Active Learning for Video Classification with Frame Level Queries (This research was supported in part by the National Science Foundation under Grant Number 2143424.) Debanjan Goswami Department of Computer Science Florida State University Shayok Chakraborty Department of Computer Science Florida State University ###### Abstract Deep learning algorithms have pushed the boundaries of computer vision research and have depicted commendable performance in a variety of applications. However, training a robust deep neural network necessitates a large amount of labeled training data, acquiring which involves significant time and human effort. This problem is even more serious for an application like video classification, where a human annotator has to watch an entire video end-to-end to furnish a label. Active learning algorithms automatically identify the most informative samples from large amounts of unlabeled data; this tremendously reduces the human annotation effort in inducing a machine learning model, as only the few samples that are identified by the algorithm need to be labeled manually. In this paper, we propose a novel active learning framework for video classification, with the goal of further reducing the labeling onus on the human annotators. Our framework identifies a batch of exemplar videos, together with a set of informative frames for each video; the human annotator needs to merely review the frames and provide a label for each video. This involves much less manual work than watching the complete video to come up with a label. We formulate a criterion based on uncertainty and diversity to identify the informative videos and exploit representative sampling techniques to extract a set of exemplar frames from each video. To the best of our knowledge, this is the first research effort to develop an active learning framework for video classification, where the annotators need to inspect only a few frames to produce a label, rather than watching the end-to-end video. Our extensive empirical analyses corroborate the potential of our method to substantially reduce human annotation effort in applications like video classification, where annotating a single data instance can be extremely tedious. ###### Index Terms: active learning, video classification, deep learning ## I Introduction With the widespread deployment of modern sensors and cameras, images and videos have become ubiquitous. This has encouraged the development of video classification algorithms to analyze their semantic content for various applications, such as search, summarization, security and surveillance among others. Deep neural networks (CNN and LSTM architectures) have depicted commendable performance in this field [1]. Common methods include obtaining global video-level descriptors using CNN architectures [2], processing videos at two spatial resolutions: a low-resolution context stream and a high-resolution fovea stream [3], and fusion techniques to integrate data representations at the frame level and video level [4], among others. However, for all these models to work reliably, a large amount of labeled training data is essential, and gathering it is an expensive process in terms of time, labor and human expertise. Thus, an algorithm to reduce the human labeling effort is of immense importance in video classification applications.
Figure 1: (a) Conventional AL query for video classification, where the human annotator has to watch a video end-to-end to provide a label. (b) Proposed active query and annotation mechanism, where we actively select a batch of videos, together with a subset of frames from each video; the human annotator merely needs to inspect the frames and provide a label for the video. Active Learning (AL) is a machine learning paradigm, where the goal is to automatically identify the salient and exemplar samples from large amounts of redundant data [5]. This tremendously reduces the human annotation effort in inducing a machine learning model, since the human expert only has to label the samples queried by the algorithm. Further, since the model gets trained on the exemplar samples from the data population, it typically depicts better generalization performance than a model where the training data is selected at random. This is an extremely relevant paradigm in today’s world, where an enormous amount of digital data is being generated, but there is a serious dearth of human labor to annotate the data to induce learning models. AL has been successfully used in a variety of applications, including computer vision [6], text analysis [7], computational biology [8] and medical diagnosis [9] among others. Active learning is particularly relevant in the context of deep learning, in order to reduce human annotation effort in training the data-hungry deep neural networks [10]. Designing an AL algorithm for a video classification application requires the human annotator to meticulously watch each queried video end-to-end in order to furnish a label (we use the terms annotators, oracles, labelers and users synonymously in this paper). This is an extremely time-consuming and laborious process; the annotators may get bored and fatigued quickly and lose interest in the task. This necessitates specialized and more user-friendly query and annotation mechanisms, to utilize the available human labor more efficiently. In this paper, we propose a novel active learning algorithm to address this challenging and practical problem. Our algorithm identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator merely has to review the queried frames and furnish a label for each video. This is illustrated in Figure 1. Providing such feedback is significantly less time-consuming and burdensome than watching an end-to-end video. We formulate an optimization problem based on an uncertainty and diversity based criterion to identify a batch of informative videos, and exploit representative sampling techniques to select a subset of exemplar frames from each. To our knowledge, this is the first active learning framework for video classification which poses label queries based on a set of exemplar frames, rather than the complete video. We hope this research will motivate the development of AL algorithms with other novel annotation mechanisms, with the goal of further reducing the labeling burden on human oracles in a video classification application. The rest of the paper is organized as follows: we present a survey of related research in Section II, our active sampling framework is detailed in Section III, the results of our empirical studies are presented in Section IV, and we conclude with discussions in Section V.
## II Related Work In this section, we present an overview of active learning in general, followed by a survey of AL for video classification. Active Learning: AL has received significant research attention in the machine learning community. Uncertainty sampling is by far the most common strategy for active learning, where unlabeled samples with the highest classification uncertainties are queried for their labels. The uncertainty of an unlabeled sample can be computed by its entropy [11], its distance from the separating hyperplane in the feature space for SVM classifiers [7], the disagreement among a committee of classifiers regarding the label of the sample [12], the expected error reduction of the future learner [13] and so on. Submodular optimization techniques have also been exploited for active data sampling [14, 15]. The growing success and popularity of deep learning has motivated research in the field of deep active learning (DAL), where the goal is to select informative unlabeled samples to efficiently train a deep neural network [10]. A task agnostic AL framework was proposed by Yoo and Kweon [6] that incorporated a loss prediction module in the network architecture, to predict the loss value of an unlabeled sample and query samples accordingly. A DAL framework based on core-set selection was proposed by Sener and Savarese [16], which selected a batch of samples such that the deep model trained on the selected samples depicts similar performance to that trained on the whole dataset. DAL has also been studied in conjunction with neural architecture search [17], which queries samples for labeling and simultaneously searches for the best neural architectures on-the-fly. A novel training loss function for DAL was proposed by Shui et al., where active sample selection and training of the network parameters were achieved through alternating optimization [18]. Deep active learning techniques based on adversarial training have depicted particularly impressive performance [19, 20, 21, 22]. Active learning has also been studied in conjunction with other learning paradigms such as transfer learning [23], reinforcement learning [24] etc. Moreover, the idea of identifying an informative set of samples for human inspection has been extended to other problem domains, such as matrix completion [25], video summarization [26] and feature selection [27] among others. Recently, there have been efforts to design AL systems with novel query and annotation mechanisms, with the goal of further reducing the labeling burden on human annotators. Joshi et al. [28] proposed a binary query mechanism which queried an unlabeled sample together with a potential class label, and the user had to provide a binary answer as to whether the queried unlabeled sample belonged to the selected class or not. Along similar lines, Biswas and Jacobs proposed an AL algorithm for clustering, which queried a pair of samples and the oracles needed to specify whether or not the samples in a pair correspond to the same cluster [29]. Xiong et al. [30] proposed a triplet query framework to learn approximate distance metrics for a nearest neighbor classifier; the algorithm queried unlabeled data triplets $(x_{i},x_{j},x_{k})$ and posed the question whether instance $x_{i}$ was more similar to $x_{j}$ than to $x_{k}$. Qian et al.
[31] proposed an active learning algorithm where the query strategy was to place an ordering (or partial ordering) on the similarity of the neighbors of the selected unlabeled sample, rather than querying its actual label. Active Learning for Video Classification: While AL has been extensively studied for image recognition [6, 32, 22, 33], it is much less explored for video classification. Similar to image classification, uncertainty sampling (using metrics like entropy and error reduction) is a popular AL query strategy for video recognition [34, 35]. Yan et al. [36] proposed a multi-class AL framework for video classification using expected error reduction. Since the estimation of the posterior probability distribution $P(y|x)$ may be unreliable due to the lack of sufficient training data, simple heuristics were also proposed to simplify the sample selection strategies. Another approach, developed in the context of SVMs, queried a set of samples which can produce the maximum expected reduction in the SVM objective [37, 38]. Bandla and Grauman [39] used AL to train an action detector for videos, which selected the video that was expected to maximally reduce the entropy among all unlabeled videos. The core idea was to use the current trained detector to extract relevant portions in the video where the action of interest occurs, so that the video segment outside the interval does not introduce noise in the entropy computation. However, this method is specifically designed to actively learn an action detector from videos. Active contrastive learning has also been explored for learning audio-visual representations from unlabeled videos [40]. All these methods require the human annotator to watch an unlabeled video end-to-end in order to provide a label, which may be extremely time-consuming and arduous. In contrast, our framework identifies a subset of exemplar frames, and the human labeler has to label a video by merely reviewing the frames, which is a much more efficient annotation strategy. Our method is applicable to any type of videos and does not make any assumptions about the contents of the video. Other related efforts include AL for video tracking [41], video description [42], video recommendation [43] and video segmentation [44]. However, these methods attempt to solve a different problem than video classification, which is the focus of this paper. We now describe our framework. ## III Proposed Framework Consider an active learning problem for video classification, where we are given a labeled training set $L$ and an unlabeled set $U$, with $|L|\ll|U|$. Each data sample $x$ in $L$ and $U$ is a video. Let $w$ be the deep neural network trained on $L$ and $C$ be the number of classes in the data. Our objective is two-fold: $(i)$ select a batch $B$ containing $b$ unlabeled videos so that the model trained on $L\cup B$ has maximum generalization capability; $(ii)$ however, we are not allowed to show an entire video to a human annotator and ask for its label; we are required to select a subset of $k$ exemplar frames from each queried video, so that only those can be shown to an annotator for labeling the video. Both these objectives are critical in improving the generalization capability of the deep model. The first objective ensures that the salient videos are selected from the unlabeled set for active query. The second objective ensures that the most representative frames are selected from each video for query.
This is important, as otherwise, the annotator may not be confident enough to provide a label or may provide an incorrect label, both of which will result in a wastage of query budget and degrade the performance of the model. In the following sections, we discuss our active sampling strategies for sampling videos and frames. ### III-A Active Video Sampling We quantified the utility of a batch of $b$ videos and selected a batch furnishing the maximal utility. The informativeness and diversity metrics were used to compute the utility of a batch of videos in this research. An active learning framework driven by these conditions ensures that the video samples in the batch augment useful knowledge to the underlying deep neural network, and that there is high diversity (minimum redundancy) of information among the samples in the batch. These conditions have been used in previous active learning research [45]. Computing informativeness: The informativeness of an unlabeled video sample $x_{i}$ was computed as the uncertainty of the deep model $w$ in predicting a label for $x_{i}$. The Shannon entropy was used to compute the prediction uncertainty: $e(x_{i})=-\sum_{y=1}^{C}P(y|x_{i},w)\log P(y|x_{i},w)$ (1) Computing diversity: We computed a diversity matrix $R\in\Re^{|U|\times|U|}$ where $R(i,j)$ denotes the diversity between videos $x_{i}$ and $x_{j}$ in the unlabeled set. We used the kernelized distance on the deep feature representations to compute the diversity between a pair of videos in this research: $R(i,j)=K(x_{i},x_{j})$ (2) where $K(\cdot,\cdot)$ denotes the distance in the Reproducing Kernel Hilbert Space (RKHS) [46]. #### III-A1 Active Video Selection By definition, all the entries in $e$ and $R$ are non-negative, that is, $e_{i}\geq 0$ and $R(i,j)\geq 0,\forall i,j$. Given $e$ and $R$, our objective is to select a batch of videos with high uncertainties (given by the entries in $e$) and high mutual diversity (given by the entries in $R$). We define a binary selection vector $z\in\Re^{|U|\times 1}$ where $z_{i}$ denotes whether the unlabeled video $x_{i}$ will be selected in the batch $(z_{i}=1)$ or not $(z_{i}=0)$. Our batch selection task (with batch size $b$) can thus be posed as the following NP-hard integer quadratic programming (IQP) problem: $\max_{z}e^{T}z+\mu z^{T}Rz$ $\text{s.t.}\hskip 7.22743ptz_{i}\in\\{0,1\\},\forall i\hskip 7.22743pt\text{and}\hskip 7.22743pt\sum_{i=1}^{|U|}z_{i}=b$ (3) where $\mu$ is a weight parameter governing the relative importance of the two terms. The binary integer constraints on $z$ allow us to combine $e$ and $R$ into a single matrix $Q\in\Re^{|U|\times|U|}$ and express the optimization problem as follows: $\max_{z}z^{T}Qz$ $\text{s.t.}\hskip 7.22743ptz_{i}\in\\{0,1\\},\forall i\hskip 7.22743pt\text{and}\hskip 7.22743pt\sum_{i=1}^{|U|}z_{i}=b$ (4) where the matrix $Q$ is constructed as follows: $Q(i,j)=\begin{cases}\mu R(i,j),&\text{if}\hskip 2.168pti\neq j\\\ e(i),&\text{if}\hskip 2.168pti=j\end{cases}$ (5) The binary integer constraints on the variable $z$ make the IQP in Equation (4) NP-hard. We used the Iterative Truncated Power algorithm [47] to solve this optimization problem. #### III-A2 The Iterative Truncated Power Algorithm This algorithm was originally proposed in the context of the sparse eigenvalue and the densest $k$-subgraph problems. It attempts to solve an optimization problem similar to that in Equation (4). The algorithm starts with an initial solution $z_{0}$ and then generates a sequence of solutions $z_{1},z_{2},\ldots$.
The solution $z_{t}$ at iteration $t$ is obtained by multiplying the solution $z_{t-1}$ at iteration $(t-1)$ by the matrix $Q$ and then truncating all the entries to $0$, except the $b$ largest entries. The process is repeated until convergence. The algorithm is guaranteed to converge monotonically for a positive semi-definite (psd) matrix $Q$. When the matrix $Q$ is not psd, the algorithm can be run on the shifted quadratic function (with a positive scalar added to the diagonal elements) to guarantee a monotonic convergence [47]. The algorithm is computationally efficient and converges fast. It benefits from a good starting point. In our empirical studies, the initial solution $z_{0}$ was taken as the indicator vector corresponding to the $b$ largest column sums of the matrix $Q$, as it produced competitive results in our preliminary experiments. The pseudo-code for our active video sampling algorithm is presented in Algorithm 1.
Algorithm 1 The Proposed Active Video Sampling Algorithm
0: Training set $L$, unlabeled set $U$, batch size $b$ and weight parameter $\mu$
1: Train a deep neural network $w$ on the training set $L$
2: Compute the entropy vector $e$ (Equation (1)) and the diversity matrix $R$ (Equation (2))
3: Compute the matrix $Q$, as described in Equation (5)
4: Derive the initial solution $z_{0}$ as the indicator vector containing the $b$ largest column sums of the matrix $Q$
5: t = 1
6: repeat
7: Compute $\widehat{z_{t}}=Qz_{t-1}$
8: Identify $F_{t}$ as the index set of $\widehat{z_{t}}$ with top $b$ values
9: Set $z_{t}$ to be $1$ on the index set $F_{t}$ and $0$ otherwise
10: t = t + 1
11: until Convergence
12: Select a batch of $b$ unlabeled videos based on the final solution $z_{t}$
#### III-A3 Computational Considerations Computing the diversity matrix $R$ involves quadratic complexity. We first note that $R$ needs to be computed only once in our framework, before the start of the AL iterations. As the unlabeled videos get queried through AL, we can keep deleting the corresponding rows and columns in $R$ to derive the new diversity matrix. Moreover, random projection algorithms can be used to speed up computations. The theory of random projections states that, if we have a point cloud in a high dimensional space, the points may be projected into a suitable lower-dimensional space such that the distances between the points are approximately preserved [48]. A data matrix $A\in\Re^{N\times D}$ in the $D$ dimensional space is multiplied by a random projection matrix $X\in\Re^{D\times d}$ $(d\ll D)$ to obtain a projected matrix $B\in\Re^{N\times d}$ in the lower dimensional space $d$: $B=AX$ [49]. This can be used to substantially reduce the computational overhead, as distance computations are more efficient in the low dimensional space. We will explore this as part of future research. ### III-B Active Frame Sampling Once we select $b$ videos from the unlabeled set, our next task is to identify a subset of $k$ frames from each of these videos; we exploited representative sampling techniques for this purpose. These techniques identify the exemplar data points which well-represent a given dataset. In particular, the coreset algorithm selects a subset of points such that a model trained over the selected subset is maximally similar to that trained on the whole dataset. For the sake of completeness, we discuss the main ideas here and request interested readers to refer to [16] for further details.
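Before turning to the coreset details, the video-selection step just described (Equations (1)-(5) together with Algorithm 1) can be summarized in a short sketch. This is not the authors' implementation: the function name, the Gaussian-kernel bandwidth, and the convergence test are illustrative assumptions, and the model's softmax outputs and deep features are taken as precomputed inputs.

```python
import numpy as np

def select_videos(probs, feats, b, mu=0.01, gamma=1.0, n_iter=50):
    """Sketch of Algorithm 1: pick b unlabeled videos via truncated power iteration.

    probs : (n, C) predicted class probabilities for each unlabeled video
    feats : (n, p) deep feature vector for each unlabeled video
    """
    n = probs.shape[0]

    # Eq. (1): entropy of the model's prediction for each video
    e = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Eq. (2): pairwise diversity as an RKHS distance under a Gaussian kernel,
    # ||phi(x_i) - phi(x_j)||^2 = 2 - 2 * k_rbf(x_i, x_j)
    sq = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    R = 2.0 - 2.0 * np.exp(-gamma * sq)

    # Eq. (5): entropy on the diagonal, scaled diversity off the diagonal
    Q = mu * R
    np.fill_diagonal(Q, e)

    # Initial solution: indicator of the b largest column sums of Q
    z = np.zeros(n)
    z[np.argsort(-Q.sum(axis=0))[:b]] = 1.0

    # Truncated power iterations: keep the b largest entries of Q z as the new support
    for _ in range(n_iter):
        support = np.argsort(-(Q @ z))[:b]
        z_new = np.zeros(n)
        z_new[support] = 1.0
        if np.array_equal(z_new, z):
            break
        z = z_new
    return np.flatnonzero(z)   # indices of the b selected videos
```

In this sketch, a call such as select_videos(probs, feats, b=25, mu=0.01) would return the indices of the videos to send for frame-level annotation.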
Coreset poses the subset selection problem as: $\min_{s:|s|=k}\Bigg{|}\frac{1}{n}\sum_{i\in[n]}l(x_{i},y_{i},A_{i})-\frac{1}{|s|}\sum_{j\in s}l(x_{j},y_{j},A_{j})\Bigg{|}$ (6) where $(x_{i},y_{i})$ denotes a training sample and its label, and $A_{i}$ denotes a learning algorithm which outputs a set of parameters by minimizing a loss function $l(\cdot,\cdot,\cdot)$ on a given labeled set $i$. Informally, given a budget $k$, the goal is to select a set of samples $s$, such that the model trained on $s$ depicts similar performance as the model trained on the whole dataset with $n$ samples. This function cannot be directly optimized, as the labels of the samples in the unlabeled set are unknown. An upper bound of this function was derived and the problem of active sampling was shown to be equivalent to the $k$-center problem (also called the min-max facility location problem) [50]. The objective of this problem is to select $k$ center points from $n$ samples, such that the largest distance between a data point and its nearest center is minimized. Formally, this can be posed as follows: $\min_{s:|s|=k}\max_{i}\min_{j\in s}\Delta(x_{i},x_{j})$ (7) This problem is NP-Hard [51]. However, a greedy algorithm, as detailed in Algorithm 2, is guaranteed to produce a solution $s$ such that $\max_{i}\min_{j\in s}\Delta(x_{i},x_{j})\leq 2\times OPT$, where $OPT$ is the optimal solution. We used this algorithm to select a subset of $k$ frames from each of the queried videos. As evident from the formulation, our method does not make any assumptions about the contents of the video, and is applicable to any type of video.
Algorithm 2 The Active Frame Sampling Algorithm [16]
0: A video with $n$ frames ($x_{1},x_{2},\ldots,x_{n}$) and frame budget $k$
1: Initialize $s=\emptyset$
2: repeat
3: $u=\operatorname*{arg\,max}_{i\in[n]\backslash s}\min_{j\in s}\Delta(x_{i},x_{j})$
4: $s=s\cup\\{u\\}$
5: until $|s|=k$
6: Select $k$ frames from the video based on the set $s$
## IV Experiments and Results ### IV-A Datasets We used the UCF-101 [52] and the Kinetics datasets [53] to study the performance of our algorithm. Both these datasets contain videos of humans performing a variety of actions, captured under unconstrained, real-world conditions, and are extensively used to study the performance of video classification algorithms. We used data from $5$ classes at random from each dataset for our experiments. ### IV-B Oracle Simulation All the publicly available video datasets contain annotations for the complete videos; we did not find any datasets which contain annotations based on a subset of frames. Also, different active sampling algorithms will select different subsets of frames, and it is challenging to obtain annotations from a human labeler for every possible subset of frames for a given video, to conduct experiments. We therefore used a deep neural network to simulate the human labeling oracle in our empirical studies. The oracle model was trained on a completely different subset of the data. No information about the oracle model was used in the design and development of our active learning algorithm. During AL, when a video sample was selected for query, the selected frames were passed as an input to the trained oracle model and its prediction entropy on the sample was computed. If the entropy exceeded a particular threshold $\tau_{oracle}$, the oracle was assumed to be not confident enough to produce a label, and no label was returned; otherwise, the oracle returned the predicted label (which may be correct or incorrect).
These were done to appropriately mimic a real-world data annotation setup with a human annotator. ### IV-C Implementation Details Base Model: We used a CNN-RNN architecture in our experiments, where InceptionV3 pretrained on the ImageNet-1k dataset was used as the feature extractor and a GRU network as the decoder (https://keras.io/examples/vision/video_classification/). The input frames were scaled and normalized to a fixed input size of $224\times 224$ pixels and fed into the Convolutional Neural Network (CNN). The features extracted were fed into a $5$-layer GRU network which consists of $2$ GRU layers and $1$ fully connected layer with one dropout layer. The $2$ GRU layers had $20$ and $12$ neurons, while the first fully connected layer had $8$ neurons with the ReLU activation function. We used the Adam optimizer with a learning rate of $0.001$, momentum of $0.99$, batch size of $32$, and the network was trained for $20$ epochs in each active learning iteration. Oracle Model: We used a similar CNN-RNN architecture as the oracle model. However, for the oracle model, the $2$ GRU layers of the GRU network had $40$ and $16$ neurons. We used the Adam optimizer with a learning rate of $0.001$ for the UCF dataset and $0.01$ for the Kinetics dataset, momentum of $0.99$, batch size of $64$, and the network was trained for $30$ epochs. As part of future research, we plan to study the performance of our framework with other architectures for the oracle model, and also conduct experiments with real people as annotators. ### IV-D Experimental Setup Each dataset was split into $5$ parts: $(i)$ an initial training set $L$; $(ii)$ unlabeled set $U$; $(iii)$ test set $T$; $(iv)$ training set $L_{oracle}$ to train the oracle model; and $(v)$ test set $T_{oracle}$ to test the oracle model and compute the entropy threshold $\tau_{oracle}$. The number of samples (videos) in each of these sets, together with the accuracy of the oracle model ($A_{oracle}$) for each dataset, are depicted in Table I. We note that a better trained oracle could have potentially improved the performance of our algorithm; however, we wanted to validate our algorithm in a challenging real-world setup, where the annotators can abstain from labeling samples and can also provide incorrect labels. We therefore used an oracle model with moderate accuracy ($\approx 70-75\%$) in our empirical studies.

Dataset | L | U | T | $L_{oracle}$ | $T_{oracle}$ | $A_{oracle}$
---|---|---|---|---|---|---
UCF | 250 | 320 | 150 | 697 | 185 | $75.61\%$
Kinetics | 500 | 750 | 300 | 584 | 211 | $71.34\%$

TABLE I: Dataset split used in our experiments, together with the accuracy of the oracle model for the two datasets.

Figure 2: AL performance comparison on (a) UCF and (b) Kinetics. The $x$-axis denotes the iteration number and the $y$-axis denotes the accuracy on the test set. Best viewed in color.

UCF Dataset: Video budget $b=25$, Frame Budget $k=100$
Methods | Total Videos Queried | Correct (%) | Incorrect (%) | Discarded (%)
---|---|---|---|---
RR | 250 | $54.4\pm 1.38$ | $42.4\pm 1.6$ | $3.2\pm 0.8$
ER | 250 | $55.2\pm 1.05$ | $40.4\pm 1.05$ | $4.4\pm 0.69$
EK | 250 | $53.06\pm 1.40$ | $30.93\pm 2.20$ | $16\pm 0.8$
Proposed | 250 | $58.66\pm 1.8$ | $31.06\pm 0.61$ | $10.26\pm 2.41$

TABLE II: Performance of the oracle on the UCF dataset.
Kinetics Dataset: Video budget $b=25$, Frame Budget $k=100$
Methods | Total Videos Queried | Correct (%) | Incorrect (%) | Discarded (%)
---|---|---|---|---
RR | 250 | $67.33\pm 1.40$ | $23.33\pm 1.40$ | $9.33\pm 1.40$
ER | 250 | $64\pm 1.2$ | $23.86\pm 2.83$ | $12.13\pm 4.02$
EK | 250 | $64.66\pm 1.28$ | $25.06\pm 3.02$ | $10.26\pm 4.31$
Proposed | 250 | $66\pm 1.44$ | $22.93\pm 1.51$ | $11.06\pm 1.28$

TABLE III: Performance of the oracle on the Kinetics dataset.

The oracle model was trained on $L_{oracle}$; each sample in $T_{oracle}$ was then passed as an input to the trained oracle and the prediction entropy was noted. The $50^{th}$ percentile of the prediction entropy distribution was taken as the entropy threshold $\tau_{oracle}$; during the AL iterations, if the entropy of any queried video exceeded this threshold, the oracle was assumed to abstain from labeling. The base model was first trained on the set $L$. In each AL iteration, each algorithm queried $b$ videos from the set $U$, and $k$ frames from each of the $b$ videos. The $k$ frames of each video were then passed as an input to the oracle model. Based on its prediction entropy on the sample, the oracle may or may not furnish a label for a given unlabeled video sample. If the oracle did not furnish a label for a given video, it was discarded. The other unlabeled videos (which were labeled by the oracle), together with the returned labels, were then appended to the training set, the base model was updated, and its accuracy was computed on the test set. The process was repeated for $10$ iterations, which was taken as the stopping criterion in this work. All the results were averaged over $3$ runs (with different training, unlabeled and test sets) to rule out the effects of randomness. The video budget $b$ was taken as $25$ and the frame budget $k$ as $100$ in each AL iteration. The weight parameter $\mu$ in Equation (3) was taken as $0.01$ and a Gaussian kernel was used to compute the diversity matrix in Equation (2). ### IV-E Comparison Baselines As mentioned in Section II, existing AL techniques for video classification query the complete videos for annotation and the labels obtained are assumed to be always correct. In our framework, the labeling oracle may refuse to provide a label to a queried video and may also provide an incorrect label. This is a more challenging and real-world scenario. It would thus not be fair to compare our method against the existing techniques. We used three comparison baselines in this work: $(i)$ Random-Random (RR), where we selected a batch of $b$ videos at random and a subset of $k$ frames from each video at random; $(ii)$ Entropy-Random (ER), where the $b$ videos with the highest classification entropies were queried and $k$ frames were queried from each at random; and $(iii)$ Entropy-kmeans (EK), where $b$ videos were first selected using entropy sampling; $k$-means clustering was then performed and the $k$ frames corresponding to the $k$ cluster centroids were selected for query from each video.

Figure 3: Study of the number of queried frames per video, for frame budgets (a) $k=10$, (b) $k=20$, (c) $k=50$ and (d) $k=100$. The result with $k=100$ (default setting) is the same as in Figure 2(a) and is included here for the sake of completeness. Best viewed in color.

Figure 4: Study of the number of queried videos in each AL iteration, for video budgets (a) $b=15$, (b) $b=20$, (c) $b=25$ and (d) $b=30$.
The result with $b=25$ (default setting) is the same as in Figure 2(a) and is included here for the sake of completeness. Best viewed in color. ### IV-F Active Learning Performance The AL performance results are depicted in Figure 2. In each figure, the $x$-axis represents the iteration number, and the $y$-axis denotes the accuracy on the test set. The proposed method comprehensively outperforms the RR method on both datasets. The ER method depicts random fluctuations in the test accuracy over the AL iterations; our method, on the other hand, depicts a steadier growth in the test accuracy. The EK method depicts the best performance among the baselines, but is not as good as our method. Our method outperforms EK in most of the AL iterations across both the datasets. It also attains the highest accuracy after $10$ AL iterations for both datasets. We can conclude the following: $(i)$ our video selection criterion based on uncertainty and diversity identifies the most informative videos in the unlabeled set; and $(ii)$ our frame selection criterion based on representative sampling selects a subset of exemplar frames from each queried video, so that a large percentage of them can be correctly annotated by the oracle, which enriches the quality of our training data. As a result, our method augments maximal useful information to the deep neural network, which boosts its generalization capability. These results consistently corroborate the potential of our framework in substantially reducing the human annotation effort in real-world video classification applications, where labeling a single sample involves significant time and human labor. The performance of the oracle model is reported in Tables II and III for the UCF and Kinetics datasets, respectively. A total of $250$ videos were queried from these datasets ($25$ videos in each of the $10$ AL iterations). The tables show the percentage of these videos that were correctly annotated, incorrectly annotated and discarded by the labeling oracle. For the UCF dataset, and for the proposed method, the oracle correctly annotated $58.66\%$ of the queried videos (the highest among all the methods). This shows that representative sampling through coreset is an effective strategy to identify the exemplar frames from a queried video, which have a high probability of receiving the correct label from the oracle, and augmenting useful information to the training set. For the Kinetics dataset, $66\%$ of the videos queried by our method were correctly annotated by the oracle, whereas $67.33\%$ of the videos queried by Random Sampling were annotated correctly by the oracle. However, we note that having a high percentage of unlabeled videos correctly labeled by the oracle does not necessarily mean that the useful samples are being queried. For instance, it is easy to select a batch of videos which do not have much useful content and are easy to label, and get a high percentage of them correctly labeled by the oracle. However, these videos, even if correctly labeled, will not augment much useful information to the training set, as they are devoid of any useful content. Even though RR depicts a slightly higher percentage of correctly labeled samples than our method in Table III, its generalization accuracy is much worse than that of our method, as evident from Figure 2(b).
The key challenge is to query a set of informative videos and get a high percentage of them correctly labeled by the oracle; both of these are crucial in improving the generalization capability of the model over time. The results in Figure 2 jointly capture both these aspects, and show that our method outperforms the baselines. ### IV-G Effect of the Number of Queried Frames per Video In this experiment, we studied the effect of the frame budget $k$ (number of frames allowed to be selected from a queried video) on the AL performance. The results on the UCF dataset, with frame budgets $10$, $20$, $50$ and $100$, are presented in Figure 3. Our method depicts impressive performance across different frame budgets. For frame budgets $20$, $50$ and $100$, our framework attains the highest test accuracy after $10$ AL iterations. Note that querying a smaller number of frames from a video lessens the labeling burden on the oracle, as the oracle has to review an even smaller number of frames to furnish a label. These results show the promise and potential of our technique to further reduce human annotation effort in a video classification application. ### IV-H Effect of the Number of Queried Videos The goal of this experiment was to study the effect of the video budget $b$ (number of videos queried in each AL iteration) on the AL performance. The results on the UCF dataset with $b=15,20,25$ and $30$ are shown in Figure 4. Our framework once again surpasses the baselines across the different budgets. These results are important from the standpoint of a real-world application, where the batch size is governed by the time, manpower and other resources available in a given application, and is therefore different for different applications. ## V Conclusion and Future Work The goal of this research was to devise an efficient annotation mechanism to reduce human annotation effort in video classification applications, where annotating a single data instance is extremely tedious. Our framework identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator has to produce a label for each video just by reviewing the subset of frames, instead of watching the complete video end-to-end. To the best of our knowledge, this is the first research effort to develop an AL technique for video classification with this kind of a query and annotation mechanism. Our empirical results validated the promise and potential of our framework to drastically reduce human annotation effort in training a deep neural network for video classification. We hope this research will motivate the development of AL algorithms with other annotation mechanisms, with the goal of further reducing the human annotation effort in video classification. As part of future work, we plan to validate the performance of our algorithm on other applications where the data has a temporal nature. For instance, the proposed query mechanism will also be very relevant in a text classification application to identify informative text snippets, so that a human annotator can furnish a label by reviewing only the snippets, rather than reading the document end-to-end. We will also study the performance of our framework with different sizes of the data splits, as outlined in Table I. ## References * [1] V. Sharma, M. Gupta, A. Kumar, and D. Mishra, “Video processing using deep learning techniques: A systematic literature review,” _IEEE Access_ , vol. 9, 2021. * [2] J. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R.
Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2015. * [3] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2014\. * [4] H. Tian, Y. Tao, S. Pouyanfar, S. Chen, and M. Shyu, “Multimodal deep representation learning for video classification,” _World Wide Web_ , vol. 22, no. 3, pp. 1325 – 1341, 2019. * [5] B. Settles, “Active learning literature survey,” in _Technical Report: University of Wisconsin-Madison_ , 2010. * [6] D. Yoo and I. Kweon, “Learning loss for active learning,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019. * [7] S. Tong and D. Koller, “Support vector machine active learning with applications to text classification,” _Journal of Machine Learning Research (JMLR)_ , vol. 2, pp. 45–66, 2001. * [8] H. Osmanbeyoglu, J. Wehner, J. Carbonell, and M. Ganapathiraju, “Active machine learning for transmembrane helix prediction,” _BMC Bioinformatics_ , vol. 11, no. 1, 2010. * [9] M. Gorriz, A. Carlier, E. Faure, and X. G. i Nieto, “Cost-effective active learning for melanoma segmentation,” in _Neural Information processing Systems (NeurIPS) Workshop_ , 2017. * [10] P. Ren, Y. Xiao, X. Chang, P. Huang, Z. Li, B. Gupta, X. Chen, and X. Wang, “A survey of deep active learning,” _ACM Computing Surveys_ , vol. 54, no. 9, 2021. * [11] A. Holub, P. Perona, and M. Burl, “Entropy-based active learning for object recognition,” in _IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR-W)_ , 2008. * [12] Y. Freund, S. Seung, E. Shamir, and N. Tishby, “Selective sampling using the query by committee algorithm,” _Machine Learning_ , vol. 28, no. 2-3, pp. 133–168, 1997. * [13] W. Fu, M. Wang, S. Hao, and X. Wu, “Scalable active learning by approximated error reduction,” in _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2018. * [14] K. Wei, R. Iyer, and J. Bilmes, “Submodularity in data subset selection and active learning,” in _International Conference on Machine Learning (ICML)_ , 2015. * [15] K. Fujii and H. Kashima, “Budgeted stream-based active learning via adaptive submodular maximization,” in _Neural Information Processing Systems (NeurIPS)_ , 2016. * [16] O. Sener and S. Savarese, “Active learning for convolutional neural networks: A core-set approach,” in _International Conference on Learning Representations (ICLR)_ , 2018. * [17] Y. Geifman and R. El-Yaniv, “Deep active learning with a neural architecture search,” in _Neural Information Processing Systems (NeurIPS)_ , 2019. * [18] C. Shui, F. Zhou, C. Gagne, and B. Wang, “Deep active learning: Unified and principled method for query and training,” in _International Conference on Artificial Intelligence and Statistics (AISTATS)_ , 2020. * [19] M. Ducoffe and F. Precioso, “Adversarial active learning for deep networks: a margin based approach,” in _International Conference on Machine Learning (ICML)_ , 2018. * [20] C. Mayer and R. Timofte, “Adversarial sampling for active learning,” in _IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2020\. * [21] B. Zhang, L. Li, S. Yang, S. Wang, Z. Zha, and Q. Huang, “State-relabeling adversarial active learning,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020. * [22] S. Sinha, S. 
Ebrahimi, and T. Darrell, “Variational adversarial active learning,” in _IEEE International Conference on Computer Vision (ICCV)_ , 2019. * [23] R. Chattopadhyay, W. Fan, I. Davidson, S. Panchanathan, and J. Ye, “Joint transfer and batch-mode active learning,” in _International Conference on Machine Learning (ICML)_ , 2013. * [24] D. Krueger, J. Leike, O. Evans, and J. Salvatier, “Active reinforcement learning: Observing rewards at a cost,” in _Neural Information Processing Systems (NeurIPS) Workshop_ , 2016. * [25] N. Ruchansky, M. Crovella, and E. Terzi, “Matrix completion with queries,” in _ACM Conference on Knowledge Discovery and Data Mining (KDD)_ , 2015. * [26] A. Molino, X. Boix, J. Lim, and A. Tan, “Active video summarization: Customized summaries via on-line interaction with the user,” in _Association for the Advancement of Artificial Intelligence (AAAI)_ , 2017\. * [27] H. Shim, S. Hwang, and E. Yang, “Joint active feature acquisition and classification with variable-size set encoding,” in _Neural Information Processing Systems (NeurIPS)_ , 2018. * [28] A. Joshi, F. Porikli, and N. Papanikolopoulos, “Breaking the interactive bottleneck in multi-class classification with active selection and binary feedback,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2010. * [29] A. Biswas and D. Jacobs, “Active image clustering: Seeking constraints from humans to complement algorithms,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2012. * [30] S. Xiong, Y. Pei, R. Rosales, and X. Fern, “Active learning from relative comparisons,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 27, no. 12, 2015. * [31] B. Qian, X. Wang, F. Wang, H. Li, J. Ye, and I. Davidson, “Active learning from relative queries,” in _International Joint Conference on Artificial Intelligence (IJCAI)_ , 2013. * [32] A. Bhattacharya and S. Chakraborty, “Active learning with n-ary queries for image recognition,” in _IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2019. * [33] A. Joshi, F. Porikli, and N. Papanikolopoulos, “Scalable active learning for multiclass image classification,” _IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)_ , vol. 34, no. 11, pp. 2259 – 2273, 2012. * [34] T. Sabata, P. Pulc, and M. Holena, “Semi-supervised and active learning in video scene classification from statistical features,” in _Workshop at the European Conference on Machine Learning (ECML)_ , 2018. * [35] S. Sivaraman and M. Trivedi, “A general active-learning framework for on-road vehicle recognition and tracking,” _IEEE Transactions on Intelligent Transportation Systems (TITS)_ , vol. 11, no. 2, pp. 267 – 276, 2010. * [36] R. Yan, J. Yang, and A. Hauptmann, “Automatically labeling video data using multi-class active learning,” in _IEEE International Conference on Computer Vision (ICCV)_ , 2003. * [37] S. Vijayanarasimhan, P. Jain, and K. Grauman, “Far-sighted active learning on a budget for image and video recognition,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2010. * [38] L. Zhao, G. Sukthankar, and R. Sukthankar, “Robust active learning using crowdsourced annotations for activity recognition,” in _Workshop at the AAAI Conference on Artificial Intelligence_ , 2011. * [39] S. Bandla and K. Grauman, “Active learning of an action detector from untrimmed videos,” in _IEEE International Conference on Computer Vision (ICCV)_ , 2013. * [40] S. Ma, Z. Zeng, D. McDuff, and Y. 
Song, “Active contrastive learning of audio-visual video representations,” in _International Conference on Learning Representations (ICLR)_ , 2021. * [41] S. Behpour, “Active learning in video tracking,” in _arXiv:1912.12557_ , 2020\. * [42] D. Chan, S. Vijayanarasimhan, D. Ross, and J. Canny, “Active learning for video description with cluster-regularized ensemble ranking,” in _Asian Conference on Computer Vision (ACCV)_ , 2020. * [43] J. Cai, J. Tang, Q. Chen, Y. Hu, X. Wang, and S. Huang, “Multi-view active learning for video recommendation,” in _International Joint Conference on Artificial Intelligence (IJCAI)_ , 2019. * [44] A. Fathi, M. Balcan, X. Ren, and J. Rehg, “Combining self training and active learning for video segmentation,” in _British Machine Vision Conference (BMVC)_ , 2011. * [45] D. Shen, J. Zhang, J. Su, G. Zhou, and C. Tan, “Multi-criteria based active learning for named entity recognition,” in _Association for Computational Linguistics (ACL)_ , 2004. * [46] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet, “Universality, characteristic kernels and rkhs embedding of measures,” _Journal of Machine Learning Research (JMLR)_ , vol. 12, 2011. * [47] X. Yuan and T. Zhang, “Truncated power method for sparse eigenvalue problems,” _Journal of Machine Learning Research (JMLR)_ , vol. 14, pp. 899 – 925, 2013. * [48] W. Johnson and J. Lindenstrauss, “Extensions of lipschitz mappings into a hilbert space,” in _Conference in Modern Analysis and Probability_ , 1984\. * [49] S. Vempala, “The random projection method,” in _Americal Mathematical Society_ , 2004. * [50] R. Farahani and M. Hekmatfar, _Facility Location: Concepts, Models, Algorithms and Case Studies_. Physica-Verlag HD, 2009. * [51] W. Cook, W. Cunningham, W. Pulleyblank, and A. Schrijver, _Combinatorial Optimization_. Springer, 1998. * [52] K. Soomro, A. Zamir, and M. Shah, “Ucf 101: A dataset of 101 human action classes from videos in the wild,” in _Techical Report, UCF_ , 2012. * [53] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The kinetics human action video dataset,” in _arXiv:1705.06950_ , 2017.
# Pressure Tuning of Berry Curvature in CrGeTe3 G. Scharf Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 69978, Israel B. Hen Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 69978, Israel P. M. Sarte Materials Department, University of California, Santa Barbara, California, 93106, USA B. R. Ortiz Materials Department, University of California, Santa Barbara, California, 93106, USA G. Kh. Rozenberg Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 69978, Israel S. D. Wilson Materials Department, University of California, Santa Barbara, California, 93106, USA A. Ron Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 69978, Israel ###### Abstract The integrated Berry curvature is a geometric property that has dramatic implications for material properties. This study investigates the integrated Berry curvature, alongside scattering mechanisms, through their contributions to the anomalous Hall effect in CrGeTe3. The anomalous Hall effect is absent in the insulating phase of CrGeTe3 and evolves in a dome-like fashion as pressure is applied. The dome’s edges are characterized by Fermi surface deformations, manifested as mixed electron and hole transport. We discuss the possibility that in CrGeTe3 the integrated Berry curvature is tuned by the application of hydrostatic pressure due to its relation to the Fermi surface deformations. For electrons in solids, the Berry phase is a geometric property of the band structure that has dramatic implications for materials’ properties [1]. It is acquired when a system is subject to a cyclic adiabatic transformation in its parameter space, and it is determined by the integrated Berry curvature. As a band-structure property, one may conjecture that dramatic changes to the Fermi surface will result in considerable changes to the Berry curvature and thus may result in variation of its integrated value. An extreme case of such a change would be the metal-insulator transition [2], where in the insulating state there is no Fermi surface, and in the metallic state a Fermi surface forms, which may host a nonzero integrated Berry curvature. Such effects were recently observed and understood theoretically in graphene moiré superlattices, where Berry curvature features were tuned by varying the electric displacement field and carrier density [3, 4]. A common manifestation of the Berry phase in electronic transport properties is the anomalous Hall effect (AHE). The AHE is an additional contribution to the transverse resistivity ($\rho_{xy}$) on top of the ordinary Hall effect. It occurs in materials where time-reversal symmetry is broken in the presence of spin-orbit interaction [5]. As such, it can be measured through $\rho_{xy}$ as a function of the magnetic field, and it serves as a superb probe for investigating the integrated Berry curvature of electrons in solids. We chose CrGeTe3, a ferromagnetic insulator undergoing a metal-insulator transition with the application of hydrostatic pressure, as a laboratory for investigating the evolution of the integrated Berry curvature when the Fermi surface is strongly deformed. CrGeTe3 is a layered ferromagnetic insulator with a Curie temperature (TCurie) of $\mathrm{\sim 67\ K}$ [6] which has recently attracted a lot of attention.
Inelastic neutron scattering suggests that CrGeTe3 is a topological magnonic insulator [7], and it is predicted to sustain ferromagnetism down to the 2D limit [8, 9, 10]. Additionally, short-range fluctuations seem to play an important role above the transition temperature [11, 12], in great similarity to the closely related compound CrSiTe3 [13, 14, 15]. Application of hydrostatic pressure changes TCurie of CrGeTe3. Up to $\mathrm{4.5\ GPa}$, the Curie temperature decreases as pressure is applied [16, 2]. Above $\mathrm{4.5\ GPa}$, TCurie increases dramatically, rising from $\mathrm{\sim 54\ K}$ to $\mathrm{\sim 250\ K}$ at $\mathrm{9.1\ GPa}$ [2]. Within this pressure range, CrGeTe3 undergoes a metal-insulator transition at $\mathrm{\sim 6\ GPa}$ [2]. The coexistence of time-reversal symmetry breaking with the metal-insulator transition in a material with a high-Z element (Te) makes CrGeTe3 an ideal candidate to search for an evolution of the Berry curvature as the system is tuned through the metal-insulator transition. In this letter, we demonstrate that the various contributions to the AHE in CrGeTe3 can be tuned by hydrostatic pressure; at certain pressures the AHE persists above $\mathrm{300\ K}$, suggesting a possible enhancement of TCurie. Measurements of the AHE over a wide range of hydrostatic pressures at $\mathrm{T=2\ K}$ reveal a dome-like behavior that onsets at the metal-insulator transition and is quenched towards higher pressures. The AHE dome coincides with the pressure range where Fermi surface deformations are observed through the ordinary Hall effect. We discuss a scenario where the AHE dome emerges due to the appearance of a nonzero integrated Berry curvature associated with the observed Fermi surface deformation. ## I Methods To create the CrGeTe3 crystals, Cr powder (99.95%, alfa), Ge powder (99.9999%), and Te lump (99.999+%) were sealed in a fused silica ampule at an approximate ratio of 1:1:8 Cr:Ge:Te. Fluxes were heated to 900°C at a rate of 200°C/hr, soaked at 900°C for 24 h, and then slowly cooled down to 550°C at a rate of 2°C/h. The resulting fluxes were centrifuged at 550°C to remove molten Te from the crystals, after which thin platelets of dimensions 1 mm x 2 mm x 0.1 mm were mechanically isolated. The pressure was exerted on the samples using miniature diamond anvil cells (DACs) [17], with diamond anvil culets of $\mathrm{300\ \mu m}$. A rhenium gasket was drilled, then filled and covered with a powder layer of 75% Al2O3 and 25% NaCl for electrical insulation. Two pressure cells were loaded with $\mathrm{\sim 5\ \mu m}$ thick CrGeTe3 flakes, placed on top of the insulating layer that functions as a pressure-transmitting medium. A $\mathrm{\sim 5\ \mu m}$ thick Pt foil was cut into triangular pieces and placed in contact with the CrGeTe3 flakes, allowing electrical transport measurements at elevated pressures in the van der Pauw geometry. As such, $\rho_{xx}$ and $\rho_{xy}$ are inferred from the measured resistance up to a factor of order unity due to uncertainties in the sample geometry and thickness, which are inevitable inside a DAC. In addition, ruby fragments were placed between the Pt leads for pressure determination [18]. The samples were compressed in steps of $\mathrm{2-4\ GPa}$ and then cooled down from ambient temperature to $\mathrm{2\ K}$.
## II Results and Discussion Figure 1(a) shows that gradual application of pressure results in a significant drop to the sample resistance, which begins to saturate at pressures of $\mathrm{\sim 6\ GPa}$ where a metal-insulator transition occurs in agreement with Ref. [19, 2] (see sections S1 and S4 in the supplementary material for resistivity versus temperature plots). TCurie as a function of pressure from our AHE at high temperatures, is also shown in Figure 1(a) and will be discussed later in this section. We note that the pressure in which we observe the metal-insulator transition, and the dependence of Curie temperatures on pressure, are consistent with previous reports, indicating the similarity in sample quality and pressure conditions. The inset of Figure 1(a) shows the Hall coefficient as a function of pressure at $\mathrm{T=2\ K}$, which exhibits a dome-like behavior as a function of pressure. In the metallic state (at $\mathrm{6<P<14.5\ GPa}$), the negative sign of the Hall coefficient indicates that transport is electron-dominated at all temperatures, as it is demonstrated for $\mathrm{10.6\ GPa}$ in Figure 1(b). The full data set is available in section S5. Figure 1(c) shows that at the edges of the dome, both electrons and holes contribute to transport, as can be seen by the flattening and sign change of the Hall slope as a function of temperature. We note that a similar behavior also occurs at other pressures in the vicinity of $\mathrm{14\ GPa}$, as shown in supplementary sections S5 and S6. These most likely originate from hole-like Te 5p and electron-like Cr 3d bands. It should be mentioned that measurements of the Hall effect were infeasible at pressures below $\mathrm{3.2\ GPa}$ due to the large longitudinal resistivity relative to the magnitude of the Hall effect (supplementary material section S6). Figure 1: The left axis of the panel (a) shows a significant decrease in the samples’ resistance as pressure is applied due to the metal-insulator transition. The Blue and red points represent measurements from cells 1 and 2, respectively. The values are scaled by a single geometric factor of order unity, which is used throughout the manuscript for any longitudinal resistivity measurement. On the right axis, we show the Curie temperature as a function of applied pressure. The cyan dots represent measurements from this work based on the AHE, which extend the pressure range covered in Ref. [2], shown in green. The uncertainties in the value of TCurie are estimated as the interval between our sampling points, and the pressure uncertainties are estimated to be about $\mathrm{0.5\ GPa}$. The arrows signify that the value of $\mathrm{300\ K}$ is a lower bound for the Curie point, as it is the highest temperature in which data was taken. The inset shows the Hall coefficient as a function of applied pressure at $\mathrm{2\ K}$ extracted from a linear fit in the field range between $\mathrm{4\ kOe}$ and $\mathrm{10\ kOe}$. The Blue and the red points are measurements of $\rho_{xy}$ from the first and second cells, respectively. A single geometric factor of order unity was used to scale the values of $\rho_{xy}$ here and throughout the manuscript. Panels (b) and (c) show the antisymmetrized Hall measurements at pressures of $\mathrm{10.6\ GPa}$ and $\mathrm{3.2\ GPa}$ at different temperatures as a function of the magnetic field at the range which was used to calculate the Hall coefficient. 
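As a concrete illustration of the steps just described (antisymmetrizing the measured $\mathrm{R_{xy}(H)}$ and extracting the Hall coefficient from a linear fit between $\mathrm{4\ kOe}$ and $\mathrm{10\ kOe}$), the following minimal sketch shows the arithmetic. It is not the authors' analysis code, and the $\mathrm{R_{xy}(H)}$ trace it uses is synthetic, with arbitrary units.

```python
import numpy as np

# Minimal sketch of the antisymmetrization and linear-fit steps described above.
# Not the authors' analysis code; the R_xy(H) trace below is synthetic and the
# numbers are placeholders with arbitrary units.
H = np.linspace(-10e3, 10e3, 201)                        # field in Oe, symmetric grid
R_xy_raw = 2e-4 * H + 0.05 * np.tanh(H / 500.0) + 0.3    # synthetic: ordinary Hall + AHE + R_xx pickup

# Antisymmetrize to remove the symmetric R_xx component intermixed into R_xy.
R_xy = 0.5 * (R_xy_raw - R_xy_raw[::-1])                 # relies on the grid being symmetric about H = 0

# Hall slope from a linear fit between 4 kOe and 10 kOe, as in the inset of Figure 1(a).
mask = (H >= 4e3) & (H <= 10e3)
slope, intercept = np.polyfit(H[mask], R_xy[mask], 1)
print(f"Hall slope (ordinary Hall coefficient up to geometry): {slope:.3e}")
print(f"zero-field intercept (anomalous Hall contribution):    {intercept:.3e}")
```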
Above $\mathrm{5.6\ GPa}$, when CrGeTe3 enters the metallic state, a significant AHE signal is observed. Figure 2(a) shows a characteristic behavior of the AHE in the intermediate pressure regime. Here, the AHE is strongest at low temperatures and monotonically weakens as the temperature increases. From this data, it is clear that the AHE persists to much higher temperatures than the Curie temperature (TCurie) under ambient pressure ($\mathrm{\sim 67\ K}$), which we interpret as an increase of TCurie. We note that the AHE can also occur in paramagnetic materials [20, 21], and therefore its presence at high temperatures does not necessarily signify enhancement of TCurie. However, we find this scenario less likely since the trend observed in our measurements is a smooth continuation of the trend observed by magnetometry measurements [2] shown in Figure 1(a). Figure 2(b) shows measurements of the AHE at $\mathrm{13.5\ GPa}$, characteristic of the high-pressure regime between $\mathrm{13.5}$ and $\mathrm{17.6\ GPa}$. At low temperatures (blue curves), the AHE is completely absent from measurements of $\rho_{xy}$, as can be seen by the absence of the steep low-field AHE slope. As the temperature increases, the AHE gradually appears and is enhanced at elevated temperatures. We note that the AHE does not decay even at room temperature, continuing the trend observed in Ref. [2], thus possibly indicating that TCurie in CrGeTe3 surpasses room temperature in this pressure range. Figure 2: Measurements of the Hall effect in sample 1 at different temperatures at pressures of $\mathrm{10.6\ GPa}$ (panel (a)) and $\mathrm{13.5\ GPa}$ (panel (b)). The steep slopes at low fields are due to the AHE. The qualitative differences in the evolution of the AHE with temperature suggest that different mechanisms are at play in different pressure regimes. To disentangle the contributions to the AHE, we separate the anomalous Hall resistivity $\rho_{AHE}$ into intrinsic and extrinsic sources and follow Hou $et\ al.$ [22], who further distinguish between extrinsic scattering originating from static (temperature-independent) and dynamic (temperature-dependent) scattering mechanisms. At low temperatures, quasiparticles are frozen, and there are no dynamic scattering events. Therefore, $\rho_{AHE}(T=0)$ is a static effect, either intrinsic with geometric origins or extrinsic emanating from disorder. At higher temperatures, quasiparticles are thermally activated and dynamic extrinsic scattering mechanisms may contribute to the AHE. We now interpret our results in light of these distinctions in the three pressure regimes. $\rho_{AHE}$ is extracted using a procedure similar to Ref. [23], detailed in supplementary section S3. In the insulating state, the values of the longitudinal resistance ($\mathrm{R_{xx}}$) intermixed in the measurements of $\mathrm{R_{xy}}$ are dominant for fields smaller than $\mathrm{0.2\ T}$ (see section S6 in the supplementary material). Therefore, our measurements are insensitive to small anomalous Hall signals in this pressure range. Figure 3 shows $\rho_{AHE}$ as a function of temperature for various pressures above $\mathrm{5.6\ GPa}$. In the intermediate pressure regime, at low temperatures, $\rho_{AHE}\neq 0$. This suggests that the pressure tuning into the metallic state activates either an intrinsic contribution to the AHE or a static extrinsic scattering mechanism. The signal decays as the temperature increases, and its disappearance marks the Curie temperature.
In the high-pressure regime ($\mathrm{P>13.5\ GPa}$), $\rho_{AHE}=0$ at low temperatures and smoothly increases as the temperature increases, persisting up to room temperature, above which we could not heat our DACs. The absence of the AHE indicates either a perfect cancellation of the contributions from the various scattering mechanisms, i.e., their sum is zero, or the complete nullification of all of them. Perfect cancellation typically occurs when two mechanisms contribute to $\rho_{AHE}$ with opposite signs, usually at a specific temperature, as seen, for example, in Refs. [24, 25, 26, 27]. In contrast, in CrGeTe3 at $\mathrm{P>13\ GPa}$, $\rho_{AHE}=0$ over a wide temperature range (over $\mathrm{150\ K}$ at $\mathrm{17.6\ GPa}$) rather than crossing zero at a particular temperature. Perfect accidental cancellation of various scattering mechanisms over such a wide temperature range is unlikely. Therefore, we deduce that for $\mathrm{P>13\ GPa}$, all scattering mechanisms are negligible at low temperatures, and the behavior shown in Figure 3 is dominated by scattering off of thermally activated quasi-particles. Figure 3: $\rho_{AHE}$ as a function of temperature for various pressures, measured in sample 2. A similar plot for sample 1 is shown in S2 in the supplementary material. In the intermediate pressure regime, between the metal-insulator transition and $\mathrm{13\ GPa}$, $\rho_{AHE}\neq 0$ at low temperatures and decays smoothly as the temperature increases. In contrast, in the high-pressure regime, at low temperatures $\rho_{AHE}=0$, and it increases as the temperature increases. In Figure 4, we plot $\rho_{AHE}$ at $\mathrm{2\ K}$ as a function of the pressure, which exhibits a dome-like shape starting at the metal-insulator transition and ending around $\mathrm{13\ GPa}$. To the best of our knowledge, this behavior has not been observed in the past. Typically, ferromagnets exhibit a monotonic behavior of the AHE as a function of pressure, as seen, for example, in $\mathrm{CeAlSi}$ [28] and $\mathrm{Co_{3}Sn_{2}S_{2}}$ [29], where in the former, the AHE is generated by skew scattering and in the latter by the intrinsic Berry phase. To understand this behavior, we look at the relation between $\rho_{AHE}$ and $\rho_{xx}$, which at low temperatures, in the absence of dynamic scattering, simplifies to [22]: $\rho_{AHE}=\alpha\rho_{xx}+\beta_{0}\rho_{xx}^{2}$ (1) where $\alpha$ represents contributions from skew scattering, and $\beta_{0}$ is a mixture of intrinsic and static side-jump mechanisms. In our experiment, we tune $\rho_{xx}$ by changing the hydrostatic pressure P. The inset to Figure 4 shows $\rho_{AHE}$ as a function of $\rho_{xx}(P)$ at a constant temperature $\mathrm{T=2\ K}$, where a clear non-parabolic hysteretic behavior is observed. This means that the application of pressure changes not only $\rho_{xx}$ but also $\alpha$ and $\beta_{0}$. Transport measurements alone cannot disentangle the contributions of the intrinsic and the static extrinsic mechanisms to the AHE. However, in CrGeTe3, the AHE dome onsets and ends in regimes where mixed transport of electrons and holes is observed (Figure 1(a) inset), indicative of Fermi surface deformations. We suggest that the Fermi surface deformations result in the appearance of nonzero integrated Berry curvature, similar to what has been observed in graphene moiré superlattices [3, 4].
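To make the low-temperature decomposition of equation (1) concrete, the following minimal sketch (not the authors' analysis code; the data arrays are synthetic placeholders and the units are arbitrary) fits $\alpha$ and $\beta_{0}$ by linear least squares from pairs of $\rho_{xx}$ and $\rho_{AHE}$ values taken at a series of pressures. A large misfit, or hysteresis between compression and decompression, would signal that $\alpha$ and $\beta_{0}$ themselves vary with pressure, as argued above.

```python
import numpy as np

# Minimal illustrative sketch, not the authors' analysis code.
# Fit the static relation of equation (1),
#     rho_AHE = alpha * rho_xx + beta0 * rho_xx**2,
# by linear least squares in the two unknowns (alpha, beta0).
# The "data" below are synthetic placeholders, only so that the sketch runs.
rng = np.random.default_rng(0)
rho_xx = np.linspace(0.2, 2.0, 12)                                        # arbitrary units
rho_ahe = 0.03 * rho_xx + 0.05 * rho_xx**2 + 0.005 * rng.normal(size=rho_xx.size)

A = np.column_stack([rho_xx, rho_xx**2])                                  # design matrix, no constant term
(alpha, beta0), *_ = np.linalg.lstsq(A, rho_ahe, rcond=None)

residual = rho_ahe - A @ np.array([alpha, beta0])
print(f"alpha (skew scattering)               = {alpha:.4f}")
print(f"beta0 (intrinsic + static side jump)  = {beta0:.4f}")
print(f"rms deviation from equation (1)       = {np.sqrt(np.mean(residual**2)):.4f}")
```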
Generally, the intrinsic contribution to the AHE depends on the integrated Berry curvature over all occupied states [1]. Therefore, it can be tuned either by changing the Fermi level or by changing the band structure in a way that modifies the Berry curvature. A toy-model calculation, based on Ref. [30], in which the integrated Berry curvature is tuned by changing the Fermi energy, can be found in Ref. [1]. Similar phenomena were observed experimentally and understood theoretically in graphene moiré superlattices, where both the band structure and the Fermi level were changed [3, 4]. We suggest that in CrGeTe3, at the metal-insulator transition, the effects of a nonzero integrated Berry curvature appear, contributing to $\rho_{AHE}$ at low temperatures. As pressure increases, those effects get stronger due to the hybridization of the Te and Cr bands, marking the rising part of the AHE dome. As the pressure further increases, the AHE weakens and goes to zero at the point where mixed electron/hole transport is observed again. We note that at the end of the AHE dome, structural anomalies were observed by x-ray diffraction as nonmonotonic behavior of the $\angle$Te-Cr-Te angle [19]. In light of the nonmonotonic behavior of atomic positions, one may not be surprised by the nonmonotonic behavior we report in the electronic properties of CrGeTe3. Additionally, Dong and coauthors [31] observed a kink in the axial ratio at $\mathrm{14\ GPa}$ in CrGeTe3 and claim it is indicative of an isostructural phase transition, which is possibly related to the mixed electron-hole transport we observe. It is worth noting that the behavior depicted in Figure 4 can also be explained through changes in the skew-scattering and static side-jump contributions to the AHE as a function of the pressure. However, the fact that the AHE dome onsets at the metal-insulator transition and ends where another Fermi surface deformation is observed strengthens our belief that the observed dome-like behavior of the AHE at low temperatures is due to changes in the integrated Berry curvature tuned by the application of hydrostatic pressure. Figure 4: $\rho_{AHE}$ measured at $\mathrm{2\ K}$ as a function of applied pressure. The black line is a guide to the eye. The inset shows $\rho_{AHE}$, measured at $\mathrm{2\ K}$, as a function of the longitudinal resistivity $\rho_{xx}$, showing a hysteretic behavior that deviates from the parabolic relation in equation (1). The blue and the red points are from the first and second cells, respectively. Their resistivity values are scaled by a single geometric factor of order unity, as was previously mentioned in Figure 1. In summary, we have measured the AHE in CrGeTe3 as a function of applied hydrostatic pressure and temperature. We suggest that the application of hydrostatic pressure to CrGeTe3 results in the tuning of the intrinsic contribution to the AHE, which originates from a nonzero integrated Berry curvature in proximity to the metal-insulator transition and a Fermi surface deformation. We also found that at elevated pressures, the AHE appears at temperatures above $\mathrm{300\ K}$, suggestive of a possible enhancement of TCurie, continuing and agreeing with the trend observed by Bhoi $et\ al.$ [2]. ###### Acknowledgements. A. R. acknowledges support from the Zuckerman Foundation and the Israel Science Foundation (Grant No. 1017/20). S.D.W., B.R.O., and P.S.
gratefully acknowledge support via the UC Santa Barbara NSF Quantum Foundry funded via the Q-AMASE-i program under award DMR-1906325. G. Kh. R. acknowledges the Israel Science Foundation (Grants No. 1748/20). G. S. thanks Shay Sandik, Itai Silber, and Gal Tuvia for the help with the cryogenic equipment. ## References * Xiao _et al._ [2010] D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010). * Bhoi _et al._ [2021] D. Bhoi, J. Gouchi, N. Hiraoka, Y. Zhang, N. Ogita, T. Hasegawa, K. Kitagawa, H. Takagi, K. H. Kim, and Y. Uwatoko, Physical Review Letters 127, 217203 (2021). * Sinha _et al._ [2022] S. Sinha, P. C. Adak, A. Chakraborty, K. Das, K. Debnath, L. V. Sangani, K. Watanabe, T. Taniguchi, U. V. Waghmare, A. Agarwal, _et al._ , Nature Physics 18, 765 (2022). * Kuiri _et al._ [2022] M. Kuiri, C. Coleman, Z. Gao, A. Vishnuradhan, K. Watanabe, T. Taniguchi, J. Zhu, A. H. MacDonald, and J. Folk, Nature Communications 13, 6468 (2022). * Nagaosa _et al._ [2010] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Reviews of modern physics 82, 1539 (2010). * Carteaux _et al._ [1995] V. Carteaux, D. Brunet, G. Ouvrard, and G. Andre, Journal of Physics: Condensed Matter 7, 69 (1995). * Zhu _et al._ [2021] F. Zhu, L. Zhang, X. Wang, F. J. Dos Santos, J. Song, T. Mueller, K. Schmalzl, W. F. Schmidt, A. Ivanov, J. T. Park, _et al._ , Science advances 7, eabi7532 (2021). * Li and Yang [2014] X. Li and J. Yang, Journal of Materials Chemistry C 2, 7071 (2014). * Xu _et al._ [2018] C. Xu, J. Feng, H. Xiang, and L. Bellaiche, npj Computational Materials 4, 1 (2018). * Zhuang _et al._ [2015] H. L. Zhuang, Y. Xie, P. Kent, and P. Ganesh, Physical Review B 92, 035407 (2015). * Chen _et al._ [2022] L. Chen, C. Mao, J.-H. Chung, M. B. Stone, A. I. Kolesnikov, X. Wang, N. Murai, B. Gao, O. Delaire, and P. Dai, Nature communications 13, 1 (2022). * Lin _et al._ [2017] G. Lin, H. Zhuang, X. Luo, B. Liu, F. Chen, J. Yan, Y. Sun, J. Zhou, W. Lu, P. Tong, _et al._ , Physical Review B 95, 245212 (2017). * Ron _et al._ [2019] A. Ron, E. Zoghlin, L. Balents, S. Wilson, and D. Hsieh, Nature communications 10, 1 (2019). * Ron _et al._ [2020] A. Ron, S. Chaudhary, G. Zhang, H. Ning, E. Zoghlin, S. Wilson, R. Averitt, G. Refael, and D. Hsieh, Physical Review Letters 125, 197203 (2020). * Williams _et al._ [2015] T. J. Williams, A. A. Aczel, M. D. Lumsden, S. E. Nagler, M. B. Stone, J.-Q. Yan, and D. Mandrus, Physical Review B 92, 144404 (2015). * Sakurai _et al._ [2021] T. Sakurai, B. Rubrecht, L. Corredor, R. Takehara, M. Yasutani, J. Zeisner, A. Alfonsov, S. Selter, S. Aswartham, A. Wolter, _et al._ , Physical Review B 103, 024404 (2021). * Sterer _et al._ [1990] E. Sterer, M. Pasternak, and R. Taylor, Review of scientific instruments 61, 1117 (1990). * Dewaele _et al._ [2008] A. Dewaele, M. Torrent, P. Loubeyre, and M. Mezouar, Phys. Rev. B 78, 104102 (2008). * Yu _et al._ [2019] Z. Yu, W. Xia, K. Xu, M. Xu, H. Wang, X. Wang, N. Yu, Z. Zou, J. Zhao, L. Wang, _et al._ , The Journal of Physical Chemistry C 123, 13885 (2019). * Maryenko _et al._ [2017] D. Maryenko, A. Mishchenko, M. Bahramy, A. Ernst, J. Falson, Y. Kozuka, A. Tsukazaki, N. Nagaosa, and M. Kawasaki, Nature communications 8, 1 (2017). * Culcer _et al._ [2003] D. Culcer, A. MacDonald, and Q. Niu, Phys. Rev. B 68, 045327 (2003). * Hou _et al._ [2015] D. Hou, G. Su, Y. Tian, X. Jin, S. A. Yang, and Q. Niu, Physical review letters 114, 217203 (2015). * Liu _et al._ [2018] E. Liu, Y. Sun, N. Kumar, L. Muechler, A. Sun, L. Jiao, S.-Y. 
Yang, D. Liu, A. Liang, Q. Xu, _et al._ , Nature physics 14, 1125 (2018). * Fang _et al._ [2003] Z. Fang, N. Nagaosa, K. S. Takahashi, A. Asamitsu, R. Mathieu, T. Ogasawara, H. Yamada, M. Kawasaki, Y. Tokura, and K. Terakura, Science 302, 92 (2003). * Pureur _et al._ [2004] P. Pureur, F. W. Fabris, J. Schaf, and I. A. Campbell, Europhysics Letters 67, 123 (2004). * Zeng _et al._ [2006] C. Zeng, Y. Yao, Q. Niu, and H. H. Weitering, Phys. Rev. Lett. 96, 037204 (2006). * Haham _et al._ [2011] N. Haham, Y. Shperber, M. Schultz, N. Naftalis, E. Shimshoni, J. W. Reiner, and L. Klein, Physical Review B 84, 174439 (2011). * Piva _et al._ [2023] M. M. Piva, J. C. Souza, V. Brousseau-Couture, S. Sorn, K. R. Pakuszewski, J. K. John, C. Adriano, M. Côté, P. G. Pagliuso, A. Paramekanti, and M. Nicklas, Phys. Rev. Res. 5, 013068 (2023). * Liu _et al._ [2020] Z. Liu, T. Zhang, S. Xu, P. Yang, Q. Wang, H. Lei, Y. Sui, Y. Uwatoko, B. Wang, H. Weng, _et al._ , Physical Review Materials 4, 044203 (2020). * Onoda _et al._ [2006] S. Onoda, N. Sugimoto, and N. Nagaosa, Phys. Rev. Lett. 97, 126602 (2006). * Dong _et al._ [2020] E. Dong, B. Liu, Q. Dong, X. Shi, X. Ma, R. Liu, X. Zhu, X. Luo, X. Li, Y. Li, _et al._ , Physica B: Condensed Matter 595, 412344 (2020). ## III Supplemental material ## Appendix S2 - S1 - The MIT - R(T) at different pressures The longitudinal resistance as a function of temperature, normalized at $\mathrm{9.5\ K}$, was measured at different pressures in the first cell. The change, upon application of pressure, of the low-temperature behavior of the resistance from decreasing to increasing with temperature indicates a metal-insulator transition driven by the application of pressure on CrGeTe3. Figure S1: The longitudinal resistance as a function of temperature, normalized at $\mathrm{9.5\ K}$, measured at different pressures in the first cell. The change of the behavior from decreasing to increasing with temperature upon application of pressure indicates a MIT driven by the application of pressure on CrGeTe3. ## Appendix S3 - S2 - The AHE measured in the first cell Here we present our measurements of $\rho_{AHE}$ as a function of temperature for the various pressures measured in the first sample. As was also observed in the second sample (see the main text), at pressures below $\mathrm{13\ GPa}$, $\rho_{AHE}\neq 0$ at low temperatures and decays smoothly as the temperature increases. In contrast, for $\mathrm{P>13\ GPa}$, at low temperatures $\rho_{AHE}=0$, and it increases as the temperature increases. Figure S2: $\rho_{AHE}$ as a function of temperature for the various pressures measured for sample 1. ## Appendix S4 - S3 - The extraction of $\rho_{AHE}$ from the measurements $\rho_{AHE}$ at a given temperature and pressure is extracted from the measurements by measuring $\mathrm{R_{xy}(H)}$ and antisymmetrizing it. This results in graphs as shown in section S6. Then we perform a linear fit to the high-field regime ($\mathrm{4\ kOe<H}$), and the intercept of the fit with the y-axis is the AHE resistance ($\mathrm{R_{AHE}}$) (see Fig. S3). Finally, by multiplying $\mathrm{R_{AHE}}$ by the width of the sample, we get $\rho_{AHE}$. Figure S3: Here we show an example of how we extracted the AHE resistance ($\mathrm{R_{AHE}}$) for each pressure at different temperatures.
The figure displays the antisymmetrization of the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H, measured at several different temperatures for the first sample in the metallic state at a pressure of $\mathrm{13.5\ GPa}$. The solid lines are the linear fits to the high-field regime ($\mathrm{4\ kOe<H}$) at each temperature, and the big dots represent the intercept of each fit with the y-axis. The intercept of each fit is $\mathrm{R_{AHE}}$ measured at that temperature. ## Appendix S5 - S4 - $\rho_{xx}$ at different pressures and temperatures In Figure S5, we present the longitudinal resistivity as a function of temperature in the metallic state, measured on both samples. As can be seen, they show very similar behavior: all of them increase monotonically with temperature and have similar values. As such, going back to our measurements of the AHE as a function of the temperature (see Figures 2 and S2), the observed behaviors cannot be explained just by the scaling of $\rho_{AHE}$ with $\rho_{xx}$. First, the scaling of $\rho_{xx}$ cannot explain the change in the behavior of the AHE between the intermediate pressure regime ($\mathrm{5.6\ GPa<P<13\ GPa}$) and the high-pressure regime ($\mathrm{13\ GPa<P}$). Second, it cannot explain why at $\mathrm{13.5\ GPa}$ the AHE is stronger than at higher pressures in the first sample, and thus probably not in the second cell either. Finally, going back to the low-temperature behavior of the AHE as shown in Figure 4, the values of $\rho_{xx}$ at low temperatures (shown in log-scale in Figure S6) cannot explain the dome-like behavior of the AHE. Figure S4: The longitudinal resistivity as a function of temperature at different pressures presented in log-scale, on the left in the first sample and on the right in the second sample. Figure S5: The longitudinal resistivity as a function of temperature at different pressures in the metallic state, on the left in the first sample and on the right in the second sample. Figure S6: The longitudinal resistivity at low temperature (2 K) as a function of the pressure in log-scale. The blue and the red points are from the first and second cells, respectively. Their resistivity values are scaled by a single geometric factor of order unity which was used throughout the manuscript for each longitudinal measurement. ## Appendix S6 - S5 - The Hall slopes measured at different pressures and temperatures Here, we present the Hall slopes as a function of temperature, measured at different pressures. In most measurements, the Hall slope is negative, meaning that although there is a mix of electrons and holes at all pressures, at most pressures we can treat the transport as due to electron-like charge carriers. However, at $\mathrm{3.2\ GPa}$ and at $\mathrm{14.5\ GPa}$, there is a change in the sign of the Hall slope, indicating that at these pressures both holes and electrons contribute to the transport, with temperature-dependent contributions, which can be seen in Figure 1(b) in the text as well. At these pressures, we cannot treat the transport as dominated by a single charge carrier. The fact that the Hall slope changes sign as a function of the temperature at those pressures but not at others might indicate changes in the band structure of CrGeTe3, which may result in a change in the integrated Berry curvature. Figure S7: The Hall slopes as a function of temperature, measured at $\mathrm{0.87\ GPa}$ and at $\mathrm{3.2\ GPa}$.
Figure S8: The Hall slopes as a function of temperature, measured at $\mathrm{0.87\ GPa}$, $\mathrm{3.2\ GPa}$, and $\mathrm{5.6\ GPa}$. Figure S9: The Hall slopes as a function of temperature, measured at different pressures. The dots represent data from the first cell, and the Xs denote measurements from the second cell. ## Appendix S7 - S6 - Raw Data measurements of $\mathrm{R_{xy}}$ and the resulted anti-symmetric plots for all pressures and cells Here we present measurements of $\mathrm{R_{xy}}$ as a function of the applied field H at various pressures and temperatures, both in their raw form and after undergoing antisymmetrization. When there is significant mixing of $\mathrm{R_{xx}}$ and $\mathrm{R_{xy}}$ in the measurements, it is reflected in the raw data, which appears neither symmetric nor antisymmetric. This effect has been observed multiple times, particularly in the low-pressure regime before the metal-insulator transition. Figure S10: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{7.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S11: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{9.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S12: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{10.6\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S13: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{11.7\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S14: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{13.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S15: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the first sample in the metallic state at a pressure of $\mathrm{14.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S16: The left panel displays raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the insulating state at a pressure of $\mathrm{0.87\ GPa}$. The right panel presents the same data after antisymmetrization. The presence of significant intermixing between $\mathrm{R_{xx}}$ and $\mathrm{R_{xy}}$ can be easily identified by the absence of symmetry or antisymmetry in the raw data. Figure S17: The left panel displays raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the insulating state at a pressure of $\mathrm{3.2\ GPa}$. The right panel presents the same data after antisymmetrization. The presence of significant intermixing between $\mathrm{R_{xx}}$ and $\mathrm{R_{xy}}$ can be easily identified by the absence of symmetry or antisymmetry in the raw data. Figure S18: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{5.6\ GPa}$. 
The right panel shows the same data after antisymmetrization. Figure S19: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{7.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S20: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{8.9\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S21: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{10.8\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S22: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{13.5\ GPa}$. The right panel shows the same data after antisymmetrization. Figure S23: The left panel displays the raw data of $\mathrm{R_{xy}}$ as a function of the applied field H for the second sample in the metallic state at a pressure of $\mathrm{17.6\ GPa}$. The right panel shows the same data after antisymmetrization.
# Newsalyze: Enabling News Consumers to Understand Media Bias Felix Hamborg1, Anastasia Zhukova2, Karsten Donnay3,4, Bela Gipp2 1Dept. of Computer and Information Science, University of Konstanz, Germany, <EMAIL_ADDRESS>2Data & Knowledge Engineering Group, University of Wuppertal, Wuppertal, Germany<EMAIL_ADDRESS>3Dept. of Political Science, University of Zurich, Switzerland<EMAIL_ADDRESS>4Dept. of Politics and Public Administration, University of Konstanz, Germany, <EMAIL_ADDRESS> (2020) ###### Abstract. News is a central source of information for individuals to inform themselves on current topics. Knowing a news article’s slant and authenticity is of crucial importance in times of “fake news,” news bots, and centralization of media ownership. We introduce _Newsalyze_ , a bias-aware news reader focusing on a subtle, yet powerful form of media bias, named bias by word choice and labeling (WCL). WCL bias can alter the assessment of entities reported in the news, e.g., “freedom fighters” vs. “terrorists.” At the core of the analysis is a neural model that uses a news-adapted BERT language model to determine target-dependent sentiment, a high-level effect of WCL bias. While the analysis currently focuses on only this form of bias, the visualizations already reveal patterns of bias when contrasting articles (overview) and in-text instances of bias (article view). This is a preprint. The accepted manuscript can be found at: https://doi.org/10.1145/3383583.3398561 ## 1\. Introduction and Related Work People rely on the news to inform themselves on current topics and events. Especially news articles, which the public commonly deems most trustworthy (Urban, 1999), are a central part of individual and societal opinion formation and decision making. Media bias, e.g., slanted or biased news coverage, thus can have severe effects on democratic processes (Meyrowitz, 1986). A subtle, yet powerful form of media bias is bias by word choice and labeling (WCL), which occurs when news authors sway readers’ perception of persons, actions, or other semantic concepts by using different terms or phrases to refer to the concepts, e.g., “undocumented immigrant” vs. “illegal alien.” Previous works have struggled to automatically identify WCL bias (Balahur et al., 2013; Godbole et al., 2007; Hamborg et al., 2019a), mainly due to its implicitness, subjectivity, and high context dependence (Card et al., 2015), requiring actual understanding of the text at hand. However, the advent of deep learning and language models, such as BERT, has led to a significant leap towards natural language understanding (NLU), thereby strongly improving the performance in many tasks traditionally deemed difficult (Wang et al., 2018). To our knowledge, there is no news reader that enables bias comparison of articles reporting on the same topic and exploration of bias instances within an article. More importantly, no bias-related approaches leverage the most recent advancements in NLU, which could help to significantly improve the detection performance of biases that could not be addressed well before.
We propose _Newsalyze_ , a news reader that analyzes and visualizes WCL bias in news articles. The prototype currently focuses on visualizing a high-level effect of WCL bias and determines whether a target, i.e., a semantic concept, is portrayed positively or negatively within a sentence. ## 2\. System and User’s Workflow The system performs a five-task workflow (cf. (Hamborg et al., 2019a, b)): article gathering, preprocessing, target concept analysis, frame identification, and visualization. For article gathering, we crawl and extract news articles, currently for a given set of user-defined URLs (Hamborg et al., 2017) for each topic. We then perform state-of-the-art NLP preprocessing using Stanford CoreNLP. Target concept analysis finds and resolves semantic concepts, such as persons or countries, across each topic’s articles, going beyond regular coreference resolution by also finding broadly or abstractly defined as well as contrarily mentioned concepts, such as “freedom fighters” vs. “terrorists” (Hamborg et al., 2019a). Frame identification determines how concepts are portrayed in their mentions, e.g., ranging from sentiment polarity (positive or negative) to fine-grained framing effects, e.g., whether a person is portrayed as being “competent”, “weak” or “aggressive” (Hamborg et al., 2019a). Identifying frames is a challenging task for human coders (Card et al., 2015) as well as for previous automated approaches, which either yield mixed results if aiming to find universally valid frames (Hamborg et al., 2019a) or are specialized to only one or a few topics (Greussing and Boomgaarden, 2017). Thus, we currently focus on targeted sentiment, which is a high-level effect of WCL bias but also a universal perception dimension. To achieve state-of-the-art performance in target-dependent sentiment classification (TSC) on news articles, we use NewsTSC, a BERT-based neural model (Hamborg, 2020). Lastly, the system visualizes the identified instances of WCL bias using two visualizations, which follow the “overview first, details on demand” mantra (Shneiderman, 1996). First, an overview, similar to the overview offered by news aggregators such as Google News, shows current topics and for each topic a selection of articles reporting on it. Newsalyze’s overview enables users to efficiently compare how articles portray the topic’s most important concepts: beside each article snippet, the visualization shows a histogram representing the article’s normalized sentiment towards the topic’s most frequent concepts. Figure 2 shows histograms of two articles reporting on the Iran deal topic published by HuffPost (left-slanted outlet) and Breitbart (right) in April 2018. Second, an article-view helps users to understand WCL bias while reading an article, e.g., by visually highlighting concept mentions according to the bias categories identified for them at the sentence level. Figure 1 shows an excerpt of the left-slanted article. Figure 1. Newsalyze’s article view highlights mentions of semantic concepts, such as persons, according to their target-dependent sentiment, a high-level effect of bias by word choice and labeling (green: positive, red: negative). Using the overview, users can quickly understand current topics. In contrast to common news aggregators, the overview is bias-aware: its framing histogram shown beside each article snippet enables users to quickly compare how important actors are portrayed across the topic.
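To illustrate how sentence-level target-dependent sentiment can be rolled up into the per-article framing histograms shown in the overview, here is a minimal conceptual sketch. It is not the Newsalyze implementation: the `score_target_sentiment` stub merely stands in for the BERT-based NewsTSC model, and the example sentences, concepts, and scoring heuristic are invented for illustration only.

```python
# Conceptual sketch of the overview's framing histogram; not the Newsalyze code.
# `score_target_sentiment` is a stand-in stub for a target-dependent sentiment
# classifier such as NewsTSC and only exists to make the sketch runnable.
def score_target_sentiment(sentence: str, mention: str) -> float:
    positives, negatives = {"praised", "heroic"}, {"slammed", "violated"}
    words = set(sentence.lower().split())
    return float(len(words & positives) - len(words & negatives))

def framing_histogram(article_sentences, concept_mentions):
    """Aggregate per-concept sentiment for one article.

    article_sentences: list of sentences in the article.
    concept_mentions:  dict mapping concept -> list of (sentence_index, mention_text).
    Returns a dict mapping concept -> (mention frequency, mean polarity).
    """
    hist = {}
    for concept, mentions in concept_mentions.items():
        scores = [score_target_sentiment(article_sentences[i], m) for i, m in mentions]
        if scores:
            hist[concept] = (len(scores), sum(scores) / len(scores))
    return hist

# Invented toy example, for illustration only.
sentences = ["Trump slammed the deal on Tuesday.", "Diplomats praised the agreement."]
mentions = {"Trump": [(0, "Trump")], "Iran deal": [(0, "the deal"), (1, "the agreement")]}
print(framing_histogram(sentences, mentions))
```

In the overview, one such histogram per article (frequency as bar height, mean polarity as color) is what lets readers compare at a glance how different outlets portray the same actors.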
For example, Figure 2 shows aggregated polarities of Trump and the other most frequent NEs of the Iran deal topic. The visual comparison immediately reveals that Trump is portrayed rather negatively in the left outlet but strongly positively in the right outlet. In common news aggregators, users would have to read whole articles to come to this conclusion. Lastly, the article-view aids users in understanding bias simply while reading the article, because, for example, phrases of WCL bias are visually highlighted. Figure 2. Framing histograms of a topic’s most frequent semantic concepts, shown for a left-slanted (L) and a right-slanted (R) article. Each bar’s height represents the frequency of its concept, the color aggregated positive (green) or negative (red) sentiment of the concept. ## 3\. Conclusion and Future Work Newsalyze is the first bias-aware news reader that supports the full news consumption process, from getting an overview of current topics to reading articles. By contrasting how a topic’s actors are portrayed by each article, users can efficiently get an overview not only of the topic but also of the slant of each article. Afterward, when reading an article of interest, users are aided in seeing bias with the help of in-text bias markers. The system currently analyzes and visualizes a high-level effect of bias by word choice and labeling (WCL), i.e., target-dependent sentiment. In the future, we plan to devise and train a neural model to additionally classify more fine-grained perception dimensions, e.g., framing effects such as whether a person is portrayed as competent or incompetent. We also plan to classify causes of the identified WCL instances, e.g., the use of emotional language (see Figure 1). We hope that in the future systems such as Newsalyze will help people to become aware of bias conveniently during their daily news consumption. The recently increased interest in this topic, not only in research communities but also in society, emphasizes the issue’s importance. ###### Acknowledgements. The work described in this paper is partially funded by the WIN program of the Heidelberg Academy of Sciences and Humanities, financed by the Ministry of Science, Research and the Arts of the State of Baden-Württemberg, Germany. ## References * Balahur et al. (2013) Alexandra Balahur, Ralf Steinberger, Mijail Kabadjov, Vanni Zavarella, Erik Van Der Goot, Matina Halkia, Bruno Pouliquen, and Jenya Belyaeva. 2013. Sentiment analysis in the news. _arXiv preprint arXiv:1309.6202_ (2013). * Card et al. (2015) Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015\. The Media Frames Corpus: Annotations of Frames Across Issues. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_. Association for Computational Linguistics, Stroudsburg, PA, USA, 438–444. https://doi.org/10.3115/v1/P15-2072 * Godbole et al. (2007) Namrata Godbole, Manja Srinivasaiah, and Steven Skiena. 2007\. Large-Scale Sentiment Analysis for News and Blogs. _ICWSM_ 7, 21 (2007), 219–222. * Greussing and Boomgaarden (2017) Esther Greussing and Hajo G. Boomgaarden. 2017. Shifting the refugee narrative? An automated frame analysis of Europe’s 2015 refugee crisis. _Journal of Ethnic and Migration Studies_ 43, 11 (8 2017), 1749–1774.
https://doi.org/10.1080/1369183X.2017.1282813 * Hamborg (2020) Felix Hamborg. 2020\. Media Bias, the Social Sciences, and NLP: Automating Frame Analyses to Identify Bias by Word Choice and Labeling. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL): Student Research Workshop_. Association for Computational Linguistics, 1–9. * Hamborg et al. (2017) Felix Hamborg, Norman Meuschke, Corinna Breitinger, and Bela Gipp. 2017. news-please: A Generic News Crawler and Extractor. In _Proceedings of the 15th International Symposium of Information Science_. Verlag Werner Hülsbusch, 218–223. * Hamborg et al. (2019a) Felix Hamborg, Anastasia Zhukova, and Bela Gipp. 2019a. Automated Identification of Media Bias by Word Choice and Labeling in News Articles. In _2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL)_. IEEE, Urbana-Champaign, IL, USA, 196–205. https://doi.org/10.1109/JCDL.2019.00036 * Hamborg et al. (2019b) Felix Hamborg, Anastasia Zhukova, and Bela Gipp. 2019b. Illegal Aliens or Undocumented Immigrants? Towards the Automated Identification of Bias by Word Choice and Labeling. In _Proceedings of the iConference 2019_. Springer, Cham, Washington, DC, USA, 179–187. https://doi.org/10.1007/978-3-030-15742-5{_}17 * Meyrowitz (1986) Joshua Meyrowitz. 1986\. _No sense of place: The impact of electronic media on social behavior_. Oxford University Press. * Shneiderman (1996) B. Shneiderman. 1996\. The eyes have it: a task by data type taxonomy for information visualizations. _Proceedings 1996 IEEE Symposium on Visual Languages_ (1996), 336–343. https://doi.org/10.1109/VL.1996.545307 * Urban (1999) Christine D Urban. 1999\. _Examining Our Credibility: Perspectives of the Public and the Press_. Asne Foundation. 1–108 pages. * Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018\. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_. Association for Computational Linguistics, Stroudsburg, PA, USA, 353–355. https://doi.org/10.18653/v1/W18-5446
# Random Fully Connected Neural Networks as Perturbatively Solvable Hierarchies Boris Hanin111BH gratefully acknowledges support by the NSF through DMS-2143754, DMS-1855684, and DMS-2133806 as well support from an ONR MURI on Foundations of Deep Learning. Department of Operations Research and Financial Engineering Princeton University <EMAIL_ADDRESS> ###### Abstract This article considers fully connected neural networks with Gaussian random weights and biases as well as $L$ hidden layers, each of width proportional to a large parameter $n$. For polynomially bounded non-linearities we give sharp estimates in powers of $1/n$ for the joint cumulants of the network output and its derivatives. Moreover, we show that network cumulants form a perturbatively solvable hierarchy in powers of $1/n$ in that $k$-th order cumulants in one layer have recursions that depend to leading order in $1/n$ only on $j$-th order cumulants at the previous layer with $j\leq k$. By solving a variety of such recursions, however, we find that the depth-to-width ratio $L/n$ plays the role of an effective network depth, controlling both the scale of fluctuations at individual neurons and the size of inter-neuron correlations. Thus, while the cumulant recursions we derive form a hierarchy in powers of $1/n$, contributions of order $1/n^{k}$ often grow like $L^{k}$ and are hence non-negligible at positive $L/n$. We use this to study a somewhat simplified version of the exploding and vanishing gradient problem, proving that this particular variant occurs if and only if $L/n$ is large. Several key ideas in this article were first developed at a physics level of rigor in a recent monograph of Daniel A. Roberts, Sho Yaida, and the author. This article not only makes these ideas mathematically precise but also significantly extends them, opening the way to obtaining corrections to all orders in $1/n$. ## 1 Introduction We live in an era of big data and cheap computation. This has led to remarkable progress in domains ranging from self-driving cars [35] to automatic drug discovery [33] and machine translation [8]. Underpinning many of these exciting practical developments is a class of computational models called neural networks. While they were originally developed in the 1940’s and 1950’s [26, 54], the complexity of state-of-the-art neural nets is unprecedented. And yet, despite their empirical utility, a theoretical understanding of how they work and how to make them better is nascent. In fact, it is sometimes said that neural networks are simply too complicated to allow for a rigorous understanding of their key features. This article adds to the growing body of literature to the contrary. Namely, in the simplest important setting of fully connected networks, we develop a flexible set of probabilistic tools for studying correlation functions of random fully connected neural networks at finite width. A random fully connected neural network, defined precisely in §2.1, is a random field whose law is determined by a fixed and typically non-linear function $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ as well as two structural parameters: a depth $L\in\mathbb{N}$ and a width $n\in\mathbb{N}$. For a fixed depth $L$, in the limit of infinite width $n\rightarrow\infty$ random neural networks converge in distribution to Gaussian processes (see Theorem 2.2). To study finite width effects we therefore consider the higher cumulants for the distribution of the outputs. 
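For readers who want the objects pinned down, the joint cumulants referred to throughout can be taken to be the standard ones; the following is a textbook reminder rather than notation introduced by this article. For random variables $X_{1},\ldots,X_{k}$ with sufficiently many finite moments,

```latex
% Standard definition of joint cumulants (textbook reminder, not new notation):
\kappa\left(X_{1},\ldots,X_{k}\right)
  \;=\;
  \left.\partial_{t_{1}}\cdots\partial_{t_{k}}\,
  \log \mathbb{E}\!\left[\exp\!\left(t_{1}X_{1}+\cdots+t_{k}X_{k}\right)\right]
  \right|_{t_{1}=\cdots=t_{k}=0},
% so that, e.g., \kappa(X)=\mathbb{E}[X] and \kappa(X,X)=\mathrm{Var}(X).
```

Since all joint cumulants of order at least three vanish for jointly Gaussian variables, the higher cumulants of the network output quantify its departure from the infinite-width Gaussian process limit.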
Our approach is recursive in the network depth $L$ and perturbative in the reciprocal $1/n$ of the network width. Our main results are: * • We give in Theorem 3.1 sharp estimates in powers of $1/n$ for the size of all joint cumulants of network outputs and their derivatives. These can be viewed as quantitative refinements of the central limit theorem at fixed $L$ and large $n$ (cf. Theorems 2.2 and 3.2). * • We derive exact recursions with respect to depth that describe cumulants at one layer in terms of cumulants at the previous layer (see e.g. Corollary 3.4 and Proposition 11.2). These recursions take the form of perturbatively solvable hierarchies in the sense that the recursion for the $k$-th cumulant at layer $\ell+1$ involves, at leading order in $1/n$, only the $j$-th cumulants at layer $\ell$ with $j\leq k$ (see Corollaries 3.3 and 3.4). This is distinct from but similar in spirit to the remarkable article [29], which derived a dynamical perturbative hierarchy for the so-called neural tangent kernel. As in our work, the perturbative parameter is $1/n$. However, the emphasis in [29] is primarily on training dynamics while ours is on understanding the effect of depth. * • Solving some special cases of the cumulant recursions to leading order in $1/n$ reveals that it is the depth-to-width ratio $L/n$, which we term the effective network depth, rather than the apparent depth $L$, that is a more informative measure of neural network depth and complexity. Mathematically, this suggests a non-trivial double scaling limit for random neural networks in which $n,L\rightarrow\infty\qquad\text{and}\qquad L/n\rightarrow\xi\in[0,\infty).$ For non-linear networks this scaling limit has only started to be considered [17, 21, 53, 40]. Even in the very special case of a product of $L$ iid random $n\times n$ matrices (sometimes called deep linear networks) the simultaneous large $n,L$ regime has revealed a range of interesting and not fully understood properties (see e.g. references in [1, 2] as well as [16, 21, 22, 42]). In contrast to the $\xi=0$ regime typically considered in previous work on neural networks (cf. e.g. [12, 13, 32, 41]), we show that when $\xi>0$ our double scaling limit is capable of exhibiting non-Gaussian and non-linear effects. We find the following effects to leading order in $1/n$: * – We prove in Corollary 4.1 that for any non-linearity $\sigma$ from the $K_{*}=0$ universality class (defined in §4.2.2) and for $k=2,3,4$, the $2k^{th}$ cumulants of the output of a random neural network with non-linearity $\sigma$ grow like $(L/n)^{k-1}$. This implies, for instance, that both the correlations between neurons and the fluctuations of a single neuron grow like the effective depth $L/n$ at large $L,n$ (see Remark 4.2 and just after Theorem 4.4). Since the components of the output of a depth $0$ neural network are simply iid Gaussians, we see that the effective depth $L/n$ can be thought of as a measure of how close a random fully connected network is to a Gaussian process. Related questions were considered in [4, 15]. * – We show in Theorems 4.4 and 4.5 that, for random neural networks initialized as in practice (see §4.1), the variance of the gradient of the network output with respect to either its input or a trainable parameter in its first layer grows like $L/n$ at large $L$. As explained in §4.4, this gives the first mathematical characterization for fairly general fully connected networks of the so-called exploding and vanishing gradient problem (EVGP).
Previously, this problem was solved rigorously in the important special case of random ReLU networks [17, 21] and solved at a physics level of rigor by a somewhat different but related set of ideas in the monograph [53] of Roberts, Yaida, and the author. The remainder of the introduction is structured as follows. We begin by giving in §2.1 the precise definition of random fully connected neural networks. We then formulate and motivate the main question taken up in this article in §2.2. ## 2 Background and Motivation ### 2.1 What is a (Random) Fully Connected Neural Network? Neural networks are parameterized families of functions. The simplest kind of networks are called fully connected. Each such network is specified by its architecture, which consists of an input dimension $n_{0}\in\mathbb{Z}_{+}$, a network depth $L\in\mathbb{Z}_{+}$, hidden layer widths $n_{1},\ldots,n_{L}\in\mathbb{Z}_{+}$, an output dimension $n_{L+1}\in\mathbb{Z}_{+}$, and a non-linearity $\sigma:\mathbb{R}\rightarrow\mathbb{R}$. The functions computed by a fully connected network with a given architecture are all maps from $\mathbb{R}^{n_{0}}$ to $\mathbb{R}^{n_{L+1}}$ that associate to each network input $x_{\alpha}\in\mathbb{R}^{n_{0}}$ an output $z_{\alpha}^{{}_{(L+1)}}\in\mathbb{R}^{n_{L+1}}$ through a sequence of intermediate representations $z_{\alpha}^{{}_{(\ell)}}\in\mathbb{R}^{n_{\ell}}$ as follows: $z_{\alpha}^{(\ell+1)}:=\begin{cases}b^{(\ell+1)}+W^{(\ell+1)}\sigma(z_{\alpha}^{(\ell)}),&\quad\ell\geq 1\\\ b^{(1)}+W^{(1)}x_{\alpha},&\quad\ell=0\end{cases},\qquad W^{(\ell+1)}\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}},\,b^{(\ell+1)}\in\mathbb{R}^{n_{\ell+1}}.$ (2.1) In this recursion, the univariate function $\sigma$ applied to a vector $z_{\alpha}^{{}_{(\ell)}}\in\mathbb{R}^{n_{\ell}}$ is short-hand for applying it separately to each component. The entries of the matrices $W^{(\ell)}$ and the components of the vectors $b^{(\ell)}$ are called the weights and biases in layer $\ell$, respectively. One typically refers to $z_{\alpha}^{(\ell)}=\left(z_{1;\alpha}^{(\ell)},\ldots,z_{n_{\ell};\alpha}^{(\ell)}\right)\in\mathbb{R}^{n_{\ell}}$ as the vector of pre-activations at layer $\ell$ corresponding to the input $x_{\alpha}$. The most popular choices of $\sigma$ in practice include $\mathrm{ReLU}(t):=\max\left\\{0,t\right\\}$ as well the hyperbolic tangent and their variations. We will analyze these cases in detail later (see §4.2), but for our general results make only the following mild assumption ###### Assumption 2.1. There exists $r\geq 1$ so that the $r$-th derivative of $\sigma$ exists almost everywhere and grows at most polynomially: $\exists k\geq 1\text{ s.t. }\mathrm{sup}_{x\in\mathbb{R}}\left|(1+\left|x\right|)^{-k}\frac{d^{r}}{dx^{r}}\sigma(x)\right|<\infty.$ The primary objects of study in this article are random fully connected neural networks, obtained by choosing network weights and biases to be independent centered Gaussians: $W_{ij}^{(\ell)}\sim\mathcal{N}(0,C_{W}/n_{\ell-1}),\qquad b_{i}^{(\ell)}\sim\mathcal{N}(0,C_{b})\qquad\text{independent}.$ (2.2) Here $C_{b}\geq 0,C_{W}>0$ are fixed constants. The $1/n_{\ell-1}$ scaling in the weight variance ensures that the moments of the outputs $z_{\alpha}^{{}_{(L+1)}}$ remain uniformly bounded as $n_{1},\ldots,n_{L},\rightarrow\infty$ (see e.g. Theorem 2.2 and (3.6)). 
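As a concrete reference point for the definitions (2.1) and (2.2), the following minimal NumPy sketch draws one network from this distribution and evaluates its pre-activations; the architecture, the values of $C_{b},C_{W}$, and the choice $\sigma=\tanh$ are illustrative and not prescribed by the text.

```python
import numpy as np

# Minimal sketch of the recursion (2.1) with Gaussian initialization (2.2).
# The architecture, C_b, C_W, and sigma = tanh below are illustrative choices.
def sample_network(widths, C_b=0.0, C_W=1.0, rng=None):
    """widths = (n_0, n_1, ..., n_{L+1}); returns a list of (W, b) pairs, one per layer."""
    rng = np.random.default_rng(rng)
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, np.sqrt(C_W / n_in), size=(n_out, n_in))   # W^(l)_{ij} ~ N(0, C_W / n_{l-1})
        b = rng.normal(0.0, np.sqrt(C_b), size=n_out) if C_b > 0 else np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x, sigma=np.tanh):
    """Return the pre-activations z^(1), ..., z^(L+1) for input x, following Eq. (2.1)."""
    zs = []
    z = params[0][1] + params[0][0] @ x            # z^(1) = b^(1) + W^(1) x
    zs.append(z)
    for W, b in params[1:]:
        z = b + W @ sigma(z)                       # z^(l+1) = b^(l+1) + W^(l+1) sigma(z^(l))
        zs.append(z)
    return zs

# Example: L = 3 hidden layers of width 64, input dimension 10, scalar output.
params = sample_network((10, 64, 64, 64, 1), C_b=0.1, C_W=2.0, rng=0)
x = np.ones(10)
print(forward(params, x)[-1])                      # one sample of z^(L+1) at the input x
```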
While we carry out a substantial portion of our analysis for any choice of $C_{b},C_{W}$, as we will see in §4.1, there often exist distinguished $\sigma$-dependent settings of $C_{b},C_{W}$ for which random neural networks at infinite width $n_{1},\ldots,n_{L}\rightarrow\infty$ are well-behaved at large depth $L$. ### 2.2 Statement and Motivation for Questions Addressed in this Article The main problem we take up in the present article is to characterize the finite-dimensional distributions of a random neural network $x_{\alpha}\mapsto z_{\alpha}^{{}_{(L+1)}}$ (and its derivatives with respect to $x_{\alpha}$) in the regime where the input dimension $n_{0}$ is arbitrary, with the inputs $x_{\alpha}$ satisfying $\frac{1}{n_{0}}\left|\left|x_{\alpha}\right|\right|_{2}^{2}<\infty,$ the output dimension $n_{L+1}$ is fixed, and the hidden layer widths $n_{\ell}$ are large but finite: $\exists c,C>0\text{ s.t. }\qquad cn\leq n_{1},\ldots,n_{L}\leq Cn,\qquad\qquad n\gg 1.$ (2.3) Our approach will be to describe the random field $x_{\alpha}\mapsto z_{\alpha}^{{}_{(L+1)}}$ perturbatively in $1/n$ and recursively in $L$. Before proceeding to the technical statements of our results, we pause to address below the following motivational questions: * • Why study random neural networks? * • Why treat $1/n$ as a perturbative parameter? * • Why study networks at finite width? * • What role does $\sigma$ play? The primary use of a neural network with a fixed architecture is to find a setting of its parameters $W^{{(\ell)}},b^{{(\ell)}}$ giving rise to a mapping $x_{\alpha}\mapsto z_{\alpha}^{{}_{(L+1)}}$ as in (2.1) that matches a dataset $\left\\{(x_{i},f(x_{i}))\right\\}$ of values for some otherwise unknown function $f:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{L+1}}$. The optimization procedure for finding these weights and biases is almost always some variant of gradient descent starting with $W^{{(\ell)}},b^{{(\ell)}}$ drawn at random from the distribution (2.2). Thus, studying the properties of random neural networks gives direct insights into the starting conditions for neural network optimization. For instance, understanding the behavior of neural networks at the start of training gives a principled way to set optimization hyperparameters (e.g. the variances $C_{b},C_{W}$ and the step size used for gradient descent). We refer the interested reader to our discussion of criticality in §4 as well as to §3.2 in [53] and the articles [25, 43, 52, 67] for more on this point. Next, to address why $1/n$ can reasonably be treated perturbatively, we recall that networks used in practice often have a very large number of parameters. This is reflected in the fact that both the layer widths $n_{\ell}$ and the network depth $L$ are often big. Thus, it is sensible to first understand various limits of neural networks in which the number of parameters tends to infinity. The most well-studied regime of this type (though not the only option, cf. e.g. [45, 55, 57, 58, 64]) is the infinite width limit, also known as the NTK regime. By definition, this regime is accessed by fixing the depth $L$, the input and output dimensions $n_{0},n_{L+1}$, the non-linearity $\sigma$, and the initialization scheme (2.2), and considering the limit when $n_{1},\ldots,n_{L}\rightarrow\infty$. From the random matrix theory point of view this is the free probability regime.
In view of the relation (2.3) the NTK regime is obtained by taking $n\rightarrow\infty$ at fixed $L$ and has two salient features:

* • At the start of training, neural networks converge to a Gaussian process (see Theorem 2.2 below) [36, 44, 48, 49, 62, 63, 65].
* • For the purposes of optimization of a squared loss, the network can be replaced by its linearization at the start of training (see [9, 12, 32, 37, 41]).

Taken together, these two points show that at least at infinite width and finite depth, it is the structure of the network at initialization that determines not only the start of training but really the entire training trajectory. However, the infinite width limit is too rigid to capture the ability of real-world networks to learn data-dependent features (see e.g. [20, 29, 53, 64]). Only finite width networks can capture these effects! Since the starting point for our analysis of random neural networks at finite width is their infinite width behavior, we take this opportunity to record the following result about random neural networks in the NTK regime.

###### Theorem 2.2 (Random Networks at fixed $L$ and Infinite Width are Gaussian Processes).

Fix a non-negative integer $r\geq 0$, and suppose $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ is $r$-times differentiable and that its $r$-th derivative is polynomially bounded: $\exists k\geq 1\text{ s.t. }\sup_{x\in\mathbb{R}}\left|(1+\left|x\right|)^{-k}\frac{d^{r}}{dx^{r}}\sigma(x)\right|<\infty.$ Then the finite-dimensional distributions of the stochastic process $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}$ and its derivatives of order up to $r$ converge to those of a centered Gaussian process with $n_{L+1}$ iid components. The limiting covariance of each component $K_{\alpha\beta}^{(L+1)}:=\lim_{n_{1},\ldots,n_{L}\rightarrow\infty}\mathrm{Cov}\left(z_{i;\alpha}^{(L+1)},z_{i;\beta}^{(L+1)}\right),\qquad x_{\alpha},x_{\beta}\in\mathbb{R}^{n_{0}},$ satisfies the recursion $\displaystyle K_{\alpha\beta}^{(\ell+1)}=\begin{cases}C_{b}+C_{W}\left\langle\sigma(z_{\alpha})\sigma(z_{\beta})\right\rangle_{K^{(\ell)}},&\quad\ell\geq 1\\\ C_{b}+\frac{C_{W}}{n_{0}}\sum_{j=1}^{n_{0}}x_{j;\alpha}x_{j;\beta},&\quad\ell=0\end{cases}.$ (2.4) In the statement of Theorem 2.2 and henceforth we reserve the symbol $\left\langle f(z_{\alpha},z_{\beta})\right\rangle_{\kappa}$ to denote the expectation of $f(z_{\alpha},z_{\beta})$ with respect to the Gaussian distribution $\left(z_{\alpha},z_{\beta}\right)\sim\mathcal{N}\left(0,\left(\begin{array}[]{cc}\kappa_{\alpha\alpha}&\kappa_{\alpha\beta}\\\ \kappa_{\alpha\beta}&\kappa_{\beta\beta}\end{array}\right)\right),$ where $\kappa_{\alpha\beta}=\kappa(x_{\alpha},x_{\beta})$ is a given covariance function. The conclusion in Theorem 2.2 is not new, having been obtained many times and under a variety of different assumptions (including for more general architectures) [19, 36, 44, 51, 63]. We refer the interested reader to [19] for a discussion of prior work and note only that convergence of the derivatives of the field $z_{\alpha}^{(L+1)}$ to its Gaussian limit does not seem to have been previously considered. We give a short proof that includes convergence of derivatives along the lines of the arguments in [19, 36] in Appendix §A. Let us remark that the relation between $K^{(\ell+1)}$ and $K^{(\ell)}$ supplied by the recursion (2.4) is in general non-linear due to the presence of $\sigma$ and depends crucially on the weight and bias variances $C_{W},C_{b}$.
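The recursion (2.4) is straightforward to iterate numerically once the Gaussian expectations $\left\langle\cdot\right\rangle_{K}$ are approximated by quadrature. The sketch below does this for the diagonal entries $K_{\alpha\alpha}^{(\ell)}$ (the case $\alpha=\beta$ of (2.4)) using Gauss-Hermite nodes; the non-linearity, the constants $C_{W},C_{b}$, and the number of quadrature points are illustrative choices.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(80)

def gauss_mean(f, K):
    """<f(z)>_K for z ~ N(0, K), approximated by Gauss-Hermite quadrature."""
    return float(np.dot(weights, f(np.sqrt(2.0 * max(K, 0.0)) * nodes)) / np.sqrt(np.pi))

def variance_recursion(x_sq_norm_over_n0, L, sigma, C_W, C_b):
    """Iterate K^{(l+1)}_{aa} = C_b + C_W <sigma^2(z)>_{K^{(l)}_{aa}}, the alpha = beta case of (2.4)."""
    K = C_b + C_W * x_sq_norm_over_n0            # K^{(1)}_{aa}
    Ks = [K]
    for _ in range(L):
        Ks.append(C_b + C_W * gauss_mean(lambda z: sigma(z) ** 2, Ks[-1]))
    return Ks

# tanh with C_W = 1, C_b = 0 (the critical tuning discussed later in Section 4.2.2): K decays slowly to 0
print(variance_recursion(1.0, 50, np.tanh, C_W=1.0, C_b=0.0)[::10])
# tanh with a non-critical C_W: K instead converges quickly to a positive fixed value
print(variance_recursion(1.0, 50, np.tanh, C_W=2.0, C_b=0.0)[::10])
```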
Given $\sigma$, finding a choice of $C_{b},C_{W}$ so that this recursion is well-behaved (e.g. not exponentially growing or decaying) at large $\ell$ is an important practical matter [30, 36, 50]. We explain this in §4.1, where we also point out that the different possible large $\ell$ limits suggest the existence of universality classes of random neural networks. Finally, as we alluded to above, the tremendous simplification of random neural networks in the NTK regime comes at a steep cost in terms of the descriptive power of the resulting model in the sense that such models cannot learn data-dependent features. We again refer the reader to articles such as [10, 45, 55, 57, 58, 64] in which a different initialization scheme leads to infinite width limits capable of feature learning. Since we do not treat feature learning in this article, we will not elaborate further on this point, referring the interested reader to [9, 20, 29, 53, 60, 64] instead. Finally, before formulating the new results in this article, we point the interested reader to a final tranche of physics-style references such as [3, 11, 14, 34, 38, 46, 47, 56], which analyze a variety of questions related to either very wide or infinitely wide networks.

## 3 Results

In this section we give precise formulations of our results on random neural networks. We start in §3.1 by stating a structural result, Theorem 3.1, on the size of the joint cumulants of a random neural network and its derivatives in powers of $1/n$. We then extend this order of magnitude estimate to obtain in Theorem 3.2 a general prescription for obtaining series expansions in powers of $1/n$ for expectation values of functions of a random depth $\ell+1$ neural network in terms of those of a random depth $\ell$ network. An application of Theorem 3.2 yields explicit recursions for the $4^{th},6^{th},8^{th}$ joint cumulants for components $z_{i;\alpha}^{(\ell)}$ of the vector of pre-activations at layer $\ell$ corresponding to a fixed network input $x_{\alpha}.$ These recursions are recorded in Corollary 3.4. See §5 for an overview of the proof of Theorems 3.1 and 3.2 as well as Corollary 3.4. After stating these results, we describe in §4 the important notion of criticality in random neural networks, which corresponds to choosing values of the weight and bias variance parameters $C_{b},C_{W}$ in (2.2) depending on $\sigma$ in such a way that the infinite width covariance $K_{\alpha\beta}^{(\ell)}$ (and in fact the higher cumulants at finite width as well) is well-behaved at large $\ell$. The variety of resulting large $\ell$ behaviors determines universality classes of random neural networks, as we explain in §4.2. We then turn in §4.2.1 and §4.2.2 to solving the $2k$-th cumulant recursions for $k=1,2,3,4$ from Corollary 3.4 in random networks tuned to criticality with non-linearities either from the universality class of ReLU or of tanh. We will see that, at large network depth $L$ and leading order in $1/n$, these cumulants depend only on the effective network depth $L/n$. We formalize our belief that this is a universal phenomenon in Conjecture 4.3. We then solve in §4.4 a simplified version of the so-called exploding and vanishing gradient problem (EVGP), which concerns the empirical variance of parameter gradients in a random neural network.
More precisely, for non-linearities in the $K_{*}=0$ universality class we prove (see Theorem 4.5) that, to leading order in $1/n$, the empirical variance of network gradients over weights in the first layer grows linearly in the effective network depth $L/n$. These results depend on Theorem 4.4, which gives for non-linearities in the $K_{*}=0$ universality class asymptotics at large $\ell$ for joint fourth cumulants between the values and partial derivatives of the output of a random neural network.

### 3.1 Cumulants of Random Neural Networks at Finite Width

Since the field $z_{\alpha}^{(L+1)}$ is Gaussian in the infinite $n$ limit (see Theorem 2.2), it is natural to measure perturbations around this regime by considering the behavior of the cumulants of $z_{\alpha}^{(L+1)}$ and its derivatives. Let us therefore agree that, given random variables $X_{1},\ldots,X_{k}$ with finite moments defined on the same probability space, we will denote their mixed cumulant by $\kappa\left(X_{1},\ldots,X_{k}\right):=i^{k}\frac{\partial^{k}}{\partial t_{1}\cdots\partial t_{k}}\bigg{|}_{t=0}\log\mathbb{E}\left[\exp\left[-i(t_{1}X_{1}+\cdots+t_{k}X_{k})\right]\right].$ (3.1) Thus, for example, $\kappa(X_{1})=\mathbb{E}\left[X_{1}\right]$ and $\kappa(X_{1},X_{2})=\mathrm{Cov}(X_{1},X_{2}).$ We refer the reader to §6.1 for background on cumulants. Our first result, Theorem 3.1, gives estimates on the order in $1/n$ of the cumulants of $z_{\alpha}^{(L+1)}$ and its derivatives. To state it, we fix a finite collection $\left\\{x_{\alpha},\,\alpha\in\mathcal{A}\right\\}\subseteq\mathbb{R}^{n_{0}}$ of $\left|\mathcal{A}\right|$ distinct network inputs. Moreover, we fix a collection of $p$ directional derivatives: $D=\left(d_{1},\ldots,d_{p}\right),\qquad d_{j}:=\nabla_{v_{j}}=\sum_{i=1}^{n_{0}}v_{ij}\partial_{x_{i}}.$ (3.2) For any multi-index $J=\left(j_{1},\ldots,j_{p}\right)\in\mathbb{N}^{p}$ we set $D_{\alpha}^{J}:=d_{1}^{j_{1}}\cdots d_{p}^{j_{p}}\bigg{|}_{x=x_{\alpha}}$ for the corresponding differential operator of order $\left|J\right|:=j_{1}+\cdots+j_{p}.$

###### Theorem 3.1 (order of magnitude for cumulants of random neural networks).

Fix $r,L\geq 1$ and suppose that $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ satisfies Assumption 2.1 with this value of $r$. Suppose further that one of the following two conditions holds:

* • $\sigma$ is smooth
* • the limiting covariance matrix $\left(\lim_{n\rightarrow\infty}\mathrm{Cov}\left(D_{\alpha_{1}}^{J_{1}}z_{1;\alpha_{1}}^{(\ell)},D_{\alpha_{2}}^{J_{2}}z_{1;\alpha_{2}}^{(\ell)}\right)\right)_{\begin{subarray}{c}\left|J_{1}\right|,\left|J_{2}\right|\leq r\\\ \alpha_{1},\alpha_{2}\in\mathcal{A}\end{subarray}}$ (3.3) of derivatives of order at most $r$ in the directional derivatives $d_{1},\ldots,d_{p}$ of the scalar field $z_{1;\alpha}^{(\ell)}$ is strictly positive definite in the infinite width limit for all $\ell\leq L$.

Then, for each $k,\ell\geq 1$, as $n\rightarrow\infty$ $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell)},\ldots,D_{\alpha_{k}}^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell)}\right)=\begin{cases}0,&\quad k\text{ odd}\\\ O(n^{-\frac{k}{2}+1}),&\quad k\text{ even}\end{cases},$ (3.4) where the implicit constant in the error term depends on $k$, the inputs $x_{\alpha_{1}},\ldots,x_{\alpha_{k}},$ the multi-indices $J_{1},\ldots,J_{k}$, the weight and bias variances $C_{b},C_{W},$ the non-linearity $\sigma$, and the layer index $\ell$. We prove Theorem 3.1 in §7.
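The $k=4$ case of (3.4) can also be probed directly by Monte Carlo. The sketch below estimates the normalized fourth cumulant of a single output component at a fixed input for several widths; at fixed depth it should shrink roughly like $1/n$. The ReLU non-linearity, the value $C_{W}=2$, and the specific widths and sample sizes are arbitrary illustrative choices, and the estimates carry Monte Carlo noise.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def sample_outputs(n, L, num_samples, n0=5, C_W=2.0, sigma=relu, seed=0):
    """Monte Carlo samples of one output component z^{(L+1)}_{1;alpha} of a random network
    with L hidden layers of width n (C_b = 0 for brevity)."""
    rng = np.random.default_rng(seed)
    x = np.ones(n0)
    out = np.empty(num_samples)
    for s in range(num_samples):
        z = rng.normal(0.0, np.sqrt(C_W / n0), size=(n, n0)) @ x           # z^{(1)}
        for _ in range(L - 1):
            z = rng.normal(0.0, np.sqrt(C_W / n), size=(n, n)) @ sigma(z)  # z^{(2)}, ..., z^{(L)}
        out[s] = rng.normal(0.0, np.sqrt(C_W / n), size=n) @ sigma(z)      # z^{(L+1)}, one neuron
    return out

# The fourth cumulant, normalized by the squared variance, should shrink roughly like 1/n
# at fixed depth, in line with the k = 4 case of (3.4).
for n in (32, 64, 128):
    z = sample_outputs(n, L=8, num_samples=20000)
    var = z.var()
    k4 = np.mean(z ** 4) - 3 * var ** 2
    print(f"n = {n:4d}   normalized fourth cumulant ~ {k4 / var ** 2:.3f}")
```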
At a physics level of rigor, Theorem 3.1 with $k=4$ and no derivatives was already derived in the breakthrough work of Yaida [61]. In fact, Yaida’s original article went much further: it obtained a recursive formula with respect to $\ell$ for the fourth cumulant $\kappa(z_{i_{1};\alpha_{1}}^{{}_{(\ell)}},\ldots,z_{i_{4};\alpha_{4}}^{{}_{(\ell)}})$ at layer $\ell$ in terms of the second and fourth cumulants at layer $\ell-1$. This is analogous to the recursion (2.4) for the infinite width covariance $K_{\alpha_{1}\alpha_{2}}^{{}_{(\ell)}}$. This theme was then picked up and significantly expanded upon in the physics monograph [53], which computes, among other things, at order $1/n$ the leading corrections to the field $z_{\alpha}^{{}_{(\ell)}}$ and its derivatives with respect to both $x_{\alpha}$ and model parameters. We will reproduce some of these recursions and obtain new ones of a similar flavor below. Compared to this prior work the main novelty of Theorem 3.1 is two-fold. First, it gives sharp estimates in powers of $1/n$ for cumulants of all orders (the sharpness can already be seen when $\ell=2$). Second, it treats in a uniform way the cumulants for not only the values but also all derivatives of $z_{\alpha}^{{}_{(\ell)}}$. In order to put Theorem 3.1 into context, we take this opportunity to make an important remark. Namely, it is no accident that the order of magnitude $O(n^{-k+1})$ for the $2k$-th cumulant in Theorem 3.1 is the same as that of the $k$-th cumulant of an average of $n$ iid random variables. To see why, let us denote by $\mathcal{F}^{(\ell)}$ the sigma algebra generated by the weights and biases in layers up to and including $\ell$. Since we’ve assumed weights and biases to be Gaussian and independent for different neurons, we find that conditional on $\mathcal{F}^{(\ell)}$ the vectors $z_{i;\mathcal{A}}^{(\ell+1)}:=\left(z_{i;\alpha}^{(\ell+1)},\,\alpha\in\mathcal{A}\right)$ are independent centered Gaussians with the covariance $\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}:=\mathrm{Cov}\left(z_{i;\alpha}^{(\ell+1)},z_{i;\alpha^{\prime}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right)=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\sigma\left(z_{j;\alpha}^{(\ell)}\right)\sigma\left(z_{j;\alpha^{\prime}}^{(\ell)}\right).$ (3.5) Put another way, we have the following equality in distribution: $\left(z_{i;\alpha}^{(\ell+1)}\right)_{i=1}^{n_{\ell+1}}~{}\stackrel{{\scriptstyle d}}{{=}}~{}\left(\left(\Sigma^{(\ell)}\right)^{1/2}Z_{i}\right)_{i=1}^{n_{\ell+1}},$ (3.6) where $Z_{i}$ are i.i.d. standard Gaussians and are independent of the random covariance matrix $\Sigma^{(\ell)}:=\left(\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}\right)_{\begin{subarray}{c}\alpha,\alpha^{\prime}\in\mathcal{A}\end{subarray}}.$ Inspecting (3.5) shows this conditional covariance has the structure of an average of $n_{\ell}$ identically distributed random matrices, each depending only on the pre-activations $z_{i;\mathcal{A}}^{{}_{(\ell)}}$ of a single neuron at layer $\ell$. We will refer to such random variables as collective observables. The key point is that while the $z_{i;\mathcal{A}}^{{}_{(\ell)}}$ are not independent for different $i$ at finite width, we will show that they are sufficiently weakly correlated that the cumulants of $\Sigma_{\alpha\alpha^{\prime}}^{{}_{(\ell)}}$ have the same order in $n$ as if they were exactly independent (see Theorem 7.3 and Lemma 7.5). 
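The conditional Gaussian representation (3.5)–(3.6) just described is easy to see numerically. In the sketch below (one hidden layer, two inputs, a single layer-two neuron; all sizes and names are illustrative) we sample $z^{(2)}$ once directly and once by first forming $\Sigma^{(1)}$ and then drawing $(\Sigma^{(1)})^{1/2}Z$; the two empirical covariance matrices agree, and both estimate $\mathbb{E}[\Sigma^{(1)}]$.

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n1, num_samples = 4, 512, 50000
C_W, C_b = 2.0, 0.0
relu = lambda t: np.maximum(t, 0.0)
xa, xb = rng.normal(size=n0), rng.normal(size=n0)      # two inputs, A = {alpha, beta}

z_direct = np.empty((num_samples, 2))                   # (z^{(2)}_{1;alpha}, z^{(2)}_{1;beta})
z_mixture = np.empty((num_samples, 2))
for s in range(num_samples):
    # layer-1 pre-activations for both inputs (shared weights)
    W1 = rng.normal(0.0, np.sqrt(C_W / n0), size=(n1, n0))
    za, zb = W1 @ xa, W1 @ xb
    # (3.5): conditional covariance of a layer-2 neuron given layer 1
    Sig = C_b + (C_W / n1) * np.array([[relu(za) @ relu(za), relu(za) @ relu(zb)],
                                       [relu(zb) @ relu(za), relu(zb) @ relu(zb)]])
    # direct sampling of one layer-2 neuron ...
    w2 = rng.normal(0.0, np.sqrt(C_W / n1), size=n1)
    z_direct[s] = [w2 @ relu(za), w2 @ relu(zb)]
    # ... versus the conditional-Gaussian representation (3.6)
    z_mixture[s] = np.linalg.cholesky(Sig + 1e-12 * np.eye(2)) @ rng.normal(size=2)

print(np.cov(z_direct.T))     # the two empirical covariance matrices should agree
print(np.cov(z_mixture.T))    # (both estimate E[Sigma^{(1)}])
```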
This reasoning applies equally well to derivatives of $z_{i;\alpha}^{(\ell+1)}$ and explains the order of magnitude of the estimates in Theorem 3.1. Collective observables such as $\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}$ are a convenient book-keeping device for studying the full distribution of $D^{\leq r}z_{i;\mathcal{A}}^{(\ell+1)}:=\left(D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)},\quad\alpha\in\mathcal{A},\,J\in\mathbb{N}^{p},\,\left|J\right|\leq r\right).$ Indeed, the conditional Gaussian structure (3.6) means the cumulants of $D^{\leq r}z_{i;\mathcal{A}}^{(\ell+1)}$ are easily expressible in terms of those of $D_{\alpha}^{J}D_{\alpha^{\prime}}^{J^{\prime}}\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}$. For example, an application of Wick's Theorem and the multi-linearity of cumulants reveals that the fourth cumulant $\displaystyle\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},D_{\alpha_{2}}^{J_{2}}z_{i_{2};\alpha_{2}}^{(\ell+1)},D_{\alpha_{3}}^{J_{3}}z_{i_{3};\alpha_{3}}^{(\ell+1)},D_{\alpha_{4}}^{J_{4}}z_{i_{4};\alpha_{4}}^{(\ell+1)}\right)$ equals $\delta_{i_{1}i_{2}}\delta_{i_{3}i_{4}}\kappa_{(\alpha_{1}\alpha_{2})(\alpha_{3}\alpha_{4})}^{(J_{1}J_{2})(J_{3}J_{4}),(\ell)}+\delta_{i_{1}i_{3}}\delta_{i_{2}i_{4}}\kappa_{(\alpha_{1}\alpha_{3})(\alpha_{2}\alpha_{4})}^{(J_{1}J_{3})(J_{2}J_{4}),(\ell)}+\delta_{i_{1}i_{4}}\delta_{i_{2}i_{3}}\kappa_{(\alpha_{1}\alpha_{4})(\alpha_{2}\alpha_{3})}^{(J_{1}J_{4})(J_{2}J_{3}),(\ell)},$ where we've abbreviated $\kappa_{(\alpha_{1}\alpha_{1}^{\prime}),\ldots,(\alpha_{k}\alpha_{k}^{\prime})}^{(J_{1}J_{1}^{\prime}),\ldots,(J_{k}J_{k}^{\prime}),(\ell)}:=\kappa\left(D_{\alpha_{1}}^{J_{1}}D_{\alpha_{1}^{\prime}}^{J_{1}^{\prime}}\Sigma_{\alpha_{1}\alpha_{1}^{\prime}}^{(\ell)},\ldots,D_{\alpha_{k}}^{J_{k}}D_{\alpha_{k}^{\prime}}^{J_{k}^{\prime}}\Sigma_{\alpha_{k}\alpha_{k}^{\prime}}^{(\ell)}\right).$ (3.7) The general pattern is that odd mixed cumulants in the $z_{\alpha}^{(\ell)}$ and its derivatives vanish by symmetry and $2k$-th cumulants are finite sums of cumulants $\kappa_{(\alpha_{1}\alpha_{1}^{\prime}),\ldots,(\alpha_{k}\alpha_{k}^{\prime})}^{(J_{1}J_{1}^{\prime}),\ldots,(J_{k}J_{k}^{\prime}),(\ell)}$. In Corollary 3.4 and Proposition 10.2 below we will obtain a range of recursions for these cumulants and thereby implicitly for the cumulants of the original field $z_{\alpha}^{(\ell)}$ as well. To obtain such recursions note that, at first glance, the estimate (3.4) only gives the order of magnitude in powers of $1/n$ of the cumulants $\kappa(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell)},\ldots,D_{\alpha_{k}}^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell)})$ but does not seem to provide information about their structural dependence on the remaining model parameters $C_{b},C_{W},\sigma,\ell$. However, such information can be obtained by combining (3.4) with an additional argument. To state the result, let us agree to write $\kappa_{\alpha_{1}\alpha_{2}}^{(\ell)}:=\mathrm{Cov}\left(z_{i;\alpha_{1}}^{(\ell)},z_{i;\alpha_{2}}^{(\ell)}\right),\qquad\qquad\kappa_{\alpha_{1}\alpha_{2}}^{J_{1}J_{2},(\ell)}:=D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\kappa_{\alpha_{1}\alpha_{2}}^{(\ell)}.$ (3.8) Physicists might refer to $\kappa_{\alpha_{1}\alpha_{2}}^{(\ell)}$ as a dressed two-point function.
Moreover, let us denote by $\left\langle\cdot\right\rangle_{\kappa^{(\ell)}}$ the expectation with respect to a collection of centered jointly Gaussian random vectors $D_{\mathcal{A}}^{\leq r}z_{i}=\left(D_{\alpha}^{J}z_{i;\alpha},\,\alpha\in\mathcal{A},\,\left|J\right|\leq r\right)$ with the same covariance $\mathrm{Cov}\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}},\,D_{\alpha_{2}}^{J_{2}}z_{i_{2};\alpha_{2}}\right):=\mathrm{Cov}\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell)},\,D_{\alpha_{2}}^{J_{2}}z_{i_{2};\alpha_{2}}^{(\ell)}\right)=\delta_{i_{1}i_{2}}\kappa_{\alpha_{1}\alpha_{2}}^{J_{1}J_{2},(\ell)}$ as the true vectors of derivatives $D_{\mathcal{A}}^{\leq r}z_{i;\mathcal{A}}^{{}_{(\ell)}}$ in each component separately but zero covariance for different $i$. ###### Theorem 3.2 (perturbative expansions of expectations of observables at finite width). Fix $r\geq 0$ and suppose that $f$ is both a continuous function and a tempered distribution. Then for any $q_{*}\geq 0$ we have $\displaystyle\mathbb{E}\left[f\left(D^{\leq r}z_{1,\mathcal{A}}^{(\ell+1)},\ldots,D^{\leq r}z_{m,\mathcal{A}}^{(\ell+1)}\right)\right]$ $\displaystyle\quad=\sum_{q=0}^{2q_{*}}\frac{(-1)^{q}}{2^{q}q!}\bigg{\langle}\mathbb{E}\bigg{[}\bigg{(}\sum_{\begin{subarray}{c}\left|J\right|,\left|J^{\prime}\right|\leq r\\\ \alpha,\alpha^{\prime}\in\mathcal{A}\end{subarray}}\Delta_{\alpha\alpha^{\prime}}^{JJ^{\prime},(\ell)}\sum_{j=1}^{m}\partial_{D_{\alpha}^{J}z_{j;\alpha}}\partial_{D_{\alpha^{\prime}}^{J^{\prime}}z_{j;\alpha^{\prime}}}\bigg{)}^{q}\bigg{]}f\left(D_{\mathcal{A}}^{\leq r}z_{1},\ldots,D_{\mathcal{A}}^{\leq r}z_{m}\right)\bigg{\rangle}_{\kappa^{(\ell+1)}}$ $\displaystyle\quad+O(n^{-q_{*}-1}),$ (3.9) where the sum is over multi-indices $J,J^{\prime}\in\mathbb{N}^{p}$ of order at most $r$, we’ve set $\Delta_{\alpha\alpha^{\prime}}^{JJ^{\prime},(\ell)}:=D_{\alpha}^{J}D_{\alpha^{\prime}}^{J^{\prime}}\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}-\mathbb{E}\left[D_{\alpha}^{J}D_{\alpha^{\prime}}^{J^{\prime}}\Sigma_{\alpha\alpha^{\prime}}^{(\ell)}\right],$ (3.10) and the derivatives $\partial_{D_{\alpha}^{J}z_{j;\alpha}}$ are interpreted in the weak sense if $f$ is not differentiable. We prove Theorem 3.2 in §9 and give the main idea of the proof in §5. By substituting various polynomials in $z_{\alpha}^{{}_{(\ell+1)}}$ and its derivatives for $f$ into the perturbative expansion (3.9), it is now straightforward in principle to deduce recursions for the cumulants $\kappa_{(\alpha_{1}\alpha_{1}^{\prime}),\ldots,(\alpha_{k}\alpha_{k}^{\prime})}^{{}_{(J_{1}J_{1}^{\prime}),\ldots,(J_{k}J_{k}^{\prime}),(\ell+1)}}$ at layer $\ell+1$ in terms of objects of the same type at layer $\ell$. In particular, we have the following ###### Corollary 3.3 (cumulants in random neural networks form a hierarchy to leading order in $1/n$). 
With the assumptions of Theorem 3.1, the mixed cumulant $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},\ldots,D_{\alpha_{2k}}^{J_{2k}}z_{i_{2k};\alpha_{2k}}^{(\ell+1)}\right)$ equals $\sum_{\begin{subarray}{c}j\leq k\\\ J_{i}^{\prime},\,i=1,\ldots,2j\\\ \left|J_{1}^{\prime}\right|+\cdots+\left|J_{2j}^{\prime}\right|\\\ \quad\leq\left|J_{1}\right|+\cdots+\left|J_{2k}\right|\end{subarray}}C\left(J_{i}^{\prime},K_{\alpha_{i}\alpha_{i^{\prime}}}^{(\ell)},\,i,i^{\prime}=1,\ldots,2j\right)~{}\kappa\left(D_{\alpha_{1}}^{J_{1}^{\prime}}z_{1;\alpha_{1}}^{(\ell)},\ldots,D_{\alpha_{2j}}^{J_{2j}^{\prime}}z_{2j;\alpha_{2j}}^{(\ell)}\right)+O\left(n^{-k}\right),$ (3.11) where the sum is over multi-indices $J_{i}^{\prime}$, the constants $C(J_{i}^{\prime},K_{\alpha_{i}\alpha_{j}}^{(\ell)},\,i,j=1,\ldots,2k)$ depend only on the multi-indices $J_{i}^{\prime}$ and the infinite width covariance $K^{(\ell)}$, while the implicit constant in the error term depends on $k$, the inputs $x_{\alpha_{1}},\ldots,x_{\alpha_{k}},$ the multi-indices $J_{1},\ldots,J_{k}$, the weight and bias variances $C_{b},C_{W},$ the non-linearity $\sigma$, and the layer index $\ell$. We do not know how to efficiently compute the coefficients $C$ in the recursion (3.11) for arbitrary $k$. Instead, we will compute them by hand for small values of $k$ in the two settings:

* • We fix a single input $x_{\alpha}\in\mathbb{R}^{n_{0}}$ and obtain a recursion for the cumulants: $\kappa_{2k;\alpha}^{(\ell)}:=\kappa_{(\alpha\alpha)\cdots(\alpha\alpha)}^{(00)\cdots(00),(\ell)}=\frac{1}{(2k-1)!!}\kappa\bigg{(}\underbrace{z_{i;\alpha}^{(\ell+1)},\ldots,z_{i;\alpha}^{(\ell+1)}}_{2k\text{ times}}\bigg{)}$ (3.12) when $k=2,3,4$ (the pairs $(\alpha\alpha)$ and $(00)$ both appear $k$ times). See Corollary 3.4.
* • We again fix a single input $x_{\alpha}\in\mathbb{R}^{n_{0}}$ and consider any two partial derivatives $d_{j}:=\sum_{i=1}^{n_{0}}c_{i,j}\partial_{x_{i}}\bigg{|}_{x=x_{\alpha}},\qquad j=1,2.$ We obtain recursions for $\displaystyle\kappa_{(i_{1}i_{1}^{\prime})(i_{2}i_{2}^{\prime});\alpha}^{(\ell)}:=\kappa\left(d_{i_{1}}d_{i_{1}^{\prime}}\Delta_{\alpha\alpha}^{(\ell)},d_{i_{2}}d_{i_{2}^{\prime}}\Delta_{\alpha\alpha}^{(\ell)}\right).$ (3.13) These cumulants are the building blocks for understanding the fourth mixed cumulants of $z_{i;\alpha}^{(\ell+1)},d_{1}z_{i;\alpha}^{(\ell+1)},d_{2}z_{i;\alpha}^{(\ell+1)}$. For example, $\kappa\left(d_{1}z_{i;\alpha}^{(\ell+1)},d_{1}z_{i;\alpha}^{(\ell+1)},d_{2}z_{i;\alpha}^{(\ell+1)},d_{2}z_{i;\alpha}^{(\ell+1)}\right)=\kappa_{(11)(22);\alpha}^{(\ell)}+2\kappa_{(12)(12);\alpha}^{(\ell)}.$ This will be done in the course of proving Theorem 4.5. We refer the interested reader to Proposition 10.2 for the precise recursions and to Proposition 11.2 for their solutions for non-linearities from the $K_{*}=0$ universality class (including $\tanh$).

In order to facilitate a compact form for the recursions described in the first bullet point, let us write $T_{i,j;\alpha}^{(\ell)}:=C_{W}^{j}\left\langle\partial_{z}^{i}\left\\{\left(\sigma^{2}(z)-\left\langle\sigma^{2}(z)\right\rangle_{K_{\alpha\alpha}^{(\ell)}}\right)^{j}\right\\}\right\rangle_{K_{\alpha\alpha}^{(\ell)}},$ (3.14) with the derivatives interpreted in the weak sense if $\sigma$ is not sufficiently smooth and where we remind the reader of our standing notation $\left\langle f(z)\right\rangle_{K}=\int_{-\infty}^{\infty}f(zK^{1/2})e^{-\frac{z^{2}}{2}}\frac{dz}{\sqrt{2\pi}},\qquad K\geq 0.$

###### Corollary 3.4.
Fix $r\geq 1$ and suppose that $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ satisfies Assumption 2.1 with this value of $r$. Consider a depth $L$ random neural network with input dimension $n_{0},$ hidden layer widths $n_{1},\ldots,n_{L}$, output dimension $n_{L+1}$ and non-linearity $\sigma$. Fix $x_{\alpha}\in\mathbb{R}^{n_{0}}$ and define $\chi_{||;\alpha}^{(\ell)}:=\frac{1}{2}T_{2,1;\alpha}^{(\ell)}=\frac{C_{W}}{2}\left\langle\partial_{z}^{2}\sigma(z)^{2}\right\rangle_{K_{\alpha\alpha}^{(\ell)}},$ where the second derivative is interpreted in the weak sense if $\sigma$ is not twice differentiable. For each $\ell=1,\ldots,L$, in the notation of (3.12), the fourth cumulant satisfies $\displaystyle\kappa_{4;\alpha}^{(\ell+1)}$ $\displaystyle=\frac{T_{0,2;\alpha}^{(\ell)}}{n_{\ell}}+\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}\kappa_{4;\alpha}^{(\ell)}+O(n^{-2}).$ (3.15) Further, the $6$-th cumulant satisfies $\displaystyle\kappa_{6;\alpha}^{(\ell+1)}$ $\displaystyle=\frac{T_{0,3;\alpha}^{(\ell)}}{n_{\ell}^{2}}+\frac{3T_{2,2;\alpha}^{(\ell)}}{2n_{\ell}}\chi_{||;\alpha}^{(\ell)}\kappa_{4;\alpha}^{(\ell)}-\frac{3T_{4,1;\alpha}^{(\ell)}}{8}\left(\chi_{||;\alpha}^{(\ell)}\kappa_{4;\alpha}^{(\ell)}\right)^{2}+\left(\chi_{||;\alpha}^{(\ell)}\right)^{3}\kappa_{6;\alpha}^{(\ell)}+O(n^{-3}).$ (3.16) Finally, the $8$-th cumulant satisfies: $\displaystyle\kappa_{8;\alpha}^{(\ell+1)}$ $\displaystyle=\frac{1}{n_{\ell}^{3}}\left(T_{0,4;\alpha}^{(\ell)}-3\left(T_{0,2;\alpha}^{(\ell)}\right)^{2}\right)$ $\displaystyle+\frac{1}{n_{\ell}^{2}}\left[2T_{2,3;\alpha}^{(\ell)}\chi_{||;\alpha}^{(\ell)}-12T_{0,2;\alpha}^{(\ell)}\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}+\frac{3}{2}\left(T_{2,2;\alpha}^{(\ell)}\right)^{2}-\frac{3}{2}T_{4,1;\alpha}^{(\ell)}T_{0,2;\alpha}^{(\ell)}\right]\kappa_{4;\alpha}^{(\ell)}$ $\displaystyle-\frac{1}{n_{\ell}}\left[2T_{2,2;\alpha}^{(\ell)}T_{4,1;\alpha}^{(\ell)}\chi_{||;\alpha}^{(\ell)}-\frac{1}{2}T_{4,2;\alpha}^{(\ell)}\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}+\left(\chi_{||;\alpha}^{(\ell)}\right)^{4}\right]\left(\kappa_{4;\alpha}^{(\ell)}\right)^{2}$ $\displaystyle+\frac{1}{n_{\ell}}\left[5T_{0,2;\alpha}^{(\ell)}T_{4,1;\alpha}^{(\ell)}\chi_{||;\alpha}^{(\ell)}+12T_{2,2;\alpha}^{(\ell)}\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}\right]\kappa_{6;\alpha}^{(\ell)}$ $\displaystyle+\frac{3}{32}\left(T_{4,1;\alpha}^{(\ell)}\right)^{2}\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}\left(\kappa_{4;\alpha}^{(\ell)}\right)^{3}-\frac{1}{2}\left(\chi_{||;\alpha}^{(\ell)}\right)^{3}T_{4,1;\alpha}^{(\ell)}\kappa_{4;\alpha}^{(\ell)}\kappa_{6;\alpha}^{(\ell)}$ $\displaystyle+\left(\chi_{||;\alpha}^{(\ell)}\right)^{4}\kappa_{8;\alpha}^{(\ell)}+O(n^{-4}).$ (3.17) The initial condition for the recursions (3.15)-(3.17) is that $\kappa_{2k;\alpha}^{(1)}=0$ for all $k\geq 2$.

###### Remark 3.5.

Note that for $k=2,3,4$, we therefore see that to leading order in $1/n$ the recursion for $\kappa_{2k;\alpha}^{(\ell+1)}$ depends only on $\kappa_{2j;\alpha}^{(\ell)}$ for $j\leq k$. This allows us to interpret (3.15) - (3.17) as forming the start of a hierarchy in powers of $1/n$ describing the cumulants of the output of a random neural network. Let us briefly compare Corollary 3.4 to results in prior work: * • In the special case when $\sigma$ is $1$-homogeneous (i.e. is linear, ReLU, leaky ReLU, etc., see (4.5)), the full distribution of a neuron pre-activation $z_{i;\alpha}^{(\ell)}$ can be worked out in closed form.
Namely, as we explain in §4.2.1 and Appendix C, it has the same distribution as a Gaussian with an independent random variance given by a product of independent weighted chi-squared random variables. This was first pointed out in [17, 21] and described in the language of special functions (namely Meijer G functions) in [66]. For such non-linearities obtaining the recursions (3.15)-(3.17) is not new.

* • The breakthrough work of Yaida [61] was the first to obtain, at a physics level of rigor, the recursion (3.15) and probe its solutions at large $\ell$.
* • The ideas of Yaida in [61] then seeded the development, in the monograph [53] of Roberts, Yaida, and the author, of a much richer analysis, producing at a physics level of rigor many recursions similar in flavor to (3.15)-(3.17) that describe the behavior of objects such as network derivatives at the start of training, the NTK at the start of training, and even the change in the NTK and the resulting output of a trained network. Many of these results go far beyond what we are currently capable of doing mathematically.
* • The analysis in [53] never required studying cumulants $\kappa_{2k;\alpha}^{(\ell)}$ for $k\geq 3$, and while the techniques developed there can certainly be used to obtain the recursions (3.16) and (3.17) we take a rather different approach in this article that produces such recursions more directly.

The functional $\chi_{||;\alpha}^{(\ell)}$ plays a fundamental role in the recursive description of random neural networks supplied by Corollary 3.4, whose proof is in §9. In the following section we explain a principled procedure, called tuning to criticality, that reveals its origin (as well as that of a similar object we denote $\chi_{\perp;\alpha}^{(\ell)}$) and explains how to choose $C_{b},C_{W}$ so that these functionals are approximately equal to $1$ at large $\ell$. As we will see, such a choice will ensure that the recursions in Corollary 3.4 and their infinite width counterpart (2.4) have well-behaved solutions at large $\ell$. We will then return in §4.2.1 and §4.2.2 to solving the recursions from Corollary 3.4 in random networks tuned to criticality (see (4.7) and Corollary 4.1).

## 4 Criticality and Universality in Random Neural Networks at Large Depth

In the previous section we presented two kinds of results about the structure of random neural networks at large but finite width. The first, Theorem 3.1, concerned the order of magnitude for cumulants of the output of such a random network and its derivatives. The second, Theorem 3.2 and Corollary 3.4, spelled out recursions with respect to the layer index $\ell$ that describe, to leading order in $1/n$, network cumulants at layer $\ell+1$ in terms of those at layer $\ell$. Our purpose in forthcoming sections is to analyze these recursions at large $\ell$ and to apply this analysis to obtain results about the structure of gradients in deep fully connected networks. Before doing this, we must take a step back and ask: for which $\sigma,C_{b},C_{W}$ is the recursion (2.4) describing the infinite width covariance $K^{(\ell)}$ well-behaved at large $\ell$? In §4.1, we recall a more or less canonical answer to this question whose roots are in the early articles [51, 52] and that was recently spelled out in the generality presented here in [53]. This procedure, called tuning to criticality, prescribes combinations of $\sigma,C_{b},C_{W}$ for which $K^{(\ell)}$ is indeed well-behaved at large $\ell$.
As we shall see below, the term criticality is meant to be evocative of its use in the analysis of 2d systems in statistical mechanics in that tuning to criticality consists of choosing $C_{b},C_{W}$ so that the infinite width covariance function $K^{(\ell)}$ is as close to constant as a function of $\ell$ as possible. At a high level, there are two reasons to ask that $K^{(\ell)}$ be slowly varying as a function of $\ell$. First, it arguably only makes sense to study perturbative corrections in $1/n$ recursively in $\ell$ if the limiting $n\rightarrow\infty$ covariance structure does not change too rapidly between consecutive layers. Second, and perhaps more importantly, as explained and thoroughly validated in [50, 52], deep fully connected networks (without residual connections [24], batch normalization [31], etc.) are numerically stable enough for gradient-based optimization to succeed only if they are tuned to criticality. We discuss in §4.2 how considerations underlying criticality naturally give rise to a notion of universality classes for random neural networks. Even the correct definition of universality is still not fully understood. Unlike in random matrix theory, universality for random neural networks depends not on the statistics of the individual weights and biases (though this is also an interesting direction to consider, e.g. [19]) but rather on the effect of the non-linearity $\sigma$ on the behavior of the infinite width covariance $K^{(\ell)}$ at large values of the depth $\ell$. Before giving the details, we take this opportunity to emphasize, as we have elsewhere, that the definitions of criticality and universality, the approach to solving the recursions for $\kappa_{2k;\alpha}^{(\ell)}$ from Corollary 3.4, and the resulting lessons learned about the role of the effective network depth $L/n$ closely follow the ideas developed in the monograph [53]. Though we pursue them in a somewhat different way, the author would nonetheless like to acknowledge that his co-authors on that book, Dan Roberts and Sho Yaida, deserve significant credit.

### 4.1 Tuning to Criticality

As originally explained in [51, 52] and recently spelled out in a definitive way in [53], tuning a neural network to criticality means seeking choices of $(C_{b},C_{W})$ that lead to critical fixed points of the form $(K_{*},K_{*},K_{*})$ for the recursion (2.4), viewed as a dynamical system describing $(K_{\alpha\alpha}^{(\ell)},K_{\beta\beta}^{(\ell)},K_{\alpha\beta}^{(\ell)})$ with time parameter $\ell$. Specifically, criticality requires
$\displaystyle\exists K_{*}\geq 0\qquad\text{s.t.}\qquad$ $\displaystyle K_{*}=C_{b}+C_{W}\left\langle\sigma^{2}(z)\right\rangle_{K_{*}}$ ($*$)
$\displaystyle\forall\ell\geq 1\qquad$ $\displaystyle\frac{\partial K_{\alpha\alpha}^{(\ell)}}{\partial K_{\alpha\alpha}^{(1)}}\bigg{|}_{K_{\alpha\alpha}^{(1)}=K_{*}}=1$ ($||$)
$\displaystyle\forall\ell\geq 1\qquad$ $\displaystyle\frac{\partial K_{\alpha\beta}^{(\ell)}}{\partial K_{\alpha\beta}^{(1)}}\bigg{|}_{K_{\alpha\alpha}^{(1)}=K_{\beta\beta}^{(1)}=K_{\alpha\beta}^{(1)}=K_{*}}=1,$ ($\perp$)
where $K_{\alpha\beta}^{(1)}=C_{b}+C_{W}K_{\alpha\beta}^{(0)},\qquad K_{\alpha\beta}^{(0)}:=\frac{1}{n_{0}}\sum_{j=1}^{n_{0}}x_{j;\alpha}x_{j;\beta},\qquad x_{\alpha},x_{\beta}\in\mathbb{R}^{n_{0}}.$ The intuitive meaning of these conditions is as follows.
Due to Theorem 2.2, the first guarantees the existence of a fixed point $K_{*}$ for the recursion $K_{\alpha\alpha}^{(\ell+1)}=C_{b}+C_{W}\left\langle\sigma^{2}(z)\right\rangle_{K_{\alpha\alpha}^{(\ell)}}$ (4.1) of the infinite width variance. In particular, ($*$) implies $K_{\alpha\alpha}^{(1)}=C_{b}+\frac{C_{W}}{n_{0}}\left|\left|x_{\alpha}\right|\right|^{2}=K_{*}\qquad\Longrightarrow\qquad K_{\alpha\alpha}^{(\ell)}=\lim_{n\rightarrow\infty}\mathrm{Var}\left[z_{i;\alpha}^{(\ell)}\right]=K_{*}\quad\forall\ell\geq 1.$ Thus, if a network is tuned to criticality, there is a critical radius $K_{\text{crit}}^{2}:=n_{0}C_{W}^{-1}(K_{*}-C_{b})$ such that for inputs $x_{\alpha}$ on the sphere of radius $K_{\text{crit}}$ the variance of $z_{i;\alpha}^{(\ell)}$ is independent of $\ell$ in the infinite width limit. In non-critical networks, we expect this variance to either grow or decay exponentially in $\ell$, leading to numerical instabilities. The second condition ($||$) considers the infinite width limit of the variance of $z_{i;\alpha}^{(\ell)}$ for an input $x_{\alpha}$ for which $K_{\alpha\alpha}^{(1)}$ is close to $K_{*}$. Specifically, condition ($||$) requires for all $\ell\geq 1$ that $K_{\alpha\alpha}^{(1)}=\mathrm{Var}[z_{i;\alpha}^{(1)}]=K_{*}+\delta K\qquad\Longrightarrow\qquad K_{\alpha\alpha}^{(\ell)}=\lim_{n\rightarrow\infty}\mathrm{Var}\left[z_{i;\alpha}^{(\ell)}\right]=K_{*}+\delta K+o(\delta K).$ This guarantees that the fixed point $K_{*}$ of the recursion (4.1) is critical and hence that for inputs near the sphere of radius $K_{\text{crit}}$ the variance of the resulting pre-activations $z_{i;\alpha}^{(\ell)}$ is approximately constant in $\ell$ at large $n$. The final condition ($\perp$) considers instead the covariance between two inputs on the sphere of radius $K_{\text{crit}}$. Namely, given two nearby network inputs $x_{\alpha},x_{\beta}\in\mathbb{R}^{n_{0}}$ with $K_{\alpha\alpha}^{(1)}=K_{\beta\beta}^{(1)}=K_{*},\qquad K_{\alpha\beta}^{(1)}=C_{b}+\frac{C_{W}}{n_{0}}\sum_{j=1}^{n_{0}}x_{j;\alpha}x_{j;\beta}=K_{*}-\delta K,$ the third condition asks that $K_{\alpha\beta}^{(\ell)}=\lim_{n\rightarrow\infty}\mathrm{Cov}\left(z_{i;\alpha}^{(\ell)},z_{i;\beta}^{(\ell)}\right)=K_{*}-\delta K+o(\delta K),\quad\forall\ell.$ This ensures that the covariance between pre-activations $z_{i;\alpha}^{(\ell)}$ and $z_{i;\beta}^{(\ell)}$ corresponding to two nearby inputs on the $K_{\text{crit}}$-sphere is approximately independent of $\ell$ at large $n$. A simple computation directly from the recursion (2.4) shows that $\displaystyle\chi_{||}(K)$ $\displaystyle:=\frac{\partial K_{\alpha\alpha}^{(\ell+1)}}{\partial K_{\alpha\alpha}^{(\ell)}}\bigg{|}_{K_{\alpha\alpha}^{(\ell)}=K}=\frac{C_{W}}{2}\left\langle\partial_{z}^{2}(\sigma^{2}(z))\right\rangle_{K}$ (4.2) $\displaystyle\chi_{\perp}(K)$ $\displaystyle:=\frac{\partial K_{\alpha\beta}^{(\ell+1)}}{\partial K_{\alpha\beta}^{(\ell)}}\bigg{|}_{K_{\alpha\alpha}^{(\ell)}=K_{\beta\beta}^{(\ell)}=K_{\alpha\beta}^{(\ell)}=K}=C_{W}\left\langle(\partial_{z}\sigma(z))^{2}\right\rangle_{K}.$ (4.3) Hence, all together, tuning to criticality requires
$\boxed{K_{*}\geq 0\text{ s.t. }K_{*}=C_{b}+C_{W}\left\langle\sigma^{2}(z)\right\rangle_{K_{*}}\qquad\text{and}\qquad\chi_{||}(K_{*})=\chi_{\perp}(K_{*})=1.}$ (4.4)

### 4.2 Universality Classes of Random Neural Networks: Two Examples In Search of a General Definition

We now turn to discussing the notion of universality classes for random neural networks. To start, recall from Theorem 2.2 that the behavior at large depth $\ell$ of random fully connected neural networks at infinite width is completely specified by the asymptotics of the limiting covariance function $K^{(\ell)}$. Observe, moreover, that the coefficients in the recursions for $k=2,3,4$ of the cumulants $\kappa_{2k;\alpha}^{(\ell+1)}$ from Corollary 3.4, which by Theorem 3.1 determine the behavior of random neural networks at finite width to the first four orders in $1/n$, are completely determined by $\sigma$, the infinite width covariance $K^{(\ell)}$, and cumulants $\kappa_{2j;\alpha}^{(\ell)},\,j\leq k$. It is therefore in terms of the large $\ell$ behavior of $K^{(\ell)}$ that we should hope to define universality classes of random neural networks at large depth. At present it is not clear what the correct general definition of such a universality class should be. We content ourselves instead with studying two important classes of examples.

#### 4.2.1 The Universality Class of ReLU

The most popular non-linearities used in practice are positively homogeneous of degree $1$, i.e. have the form $\sigma(t)=(a_{-}{\bf 1}_{\left\\{t<0\right\\}}+a_{+}{\bf 1}_{\left\\{t>0\right\\}})t,\qquad a_{-},a_{+}\in\mathbb{R},\quad a_{-}\neq a_{+},\quad a_{-}^{2}+a_{+}^{2}\neq 0.$ (4.5) Such non-linearities include the ReLU ($a_{-}=0,a_{+}=1$) and the leaky ReLU ($a_{-}\in(0,1),a_{+}=1$). A direct computation, left to the reader, shows that criticality is achieved if and only if $K_{*}\geq 0\text{ is arbitrary}\quad\text{and}\quad C_{b}=0,\,C_{W}=\frac{2}{a_{+}^{2}+a_{-}^{2}}.$ Thus, the first property of the ReLU universality class is that setting $(C_{b},C_{W})=(0,2/(a_{+}^{2}+a_{-}^{2}))$ allows all non-negative $K_{*}$ to satisfy ($*$). In fact, at criticality, a simple symmetrization argument shows that the variance of neuron pre-activations is preserved exactly even at finite width $\mathrm{Var}\left[z_{i;\alpha}^{(\ell)}\right]=\mathrm{Var}\left[z_{i;\alpha}^{(1)}\right]=\frac{C_{W}}{n_{0}}\left|\left|x_{\alpha}\right|\right|^{2}\qquad\forall\ell,n_{0},\ldots,n_{\ell}\geq 1,\,x_{\alpha}\in\mathbb{R}^{n_{0}}$ (4.6) and, relatedly, that we have $\chi_{||;\alpha}^{(\ell)}:=\chi_{||}(K_{\alpha\alpha}^{(\ell)})=1=\chi_{\perp}(K_{\alpha\alpha}^{(\ell)})=:\chi_{\perp;\alpha}^{(\ell)},\qquad\forall\ell\geq 1,\,x_{\alpha}\in\mathbb{R}^{n_{0}}.$ The remarkable property (4.6) is much stronger than the criticality condition $(*)$, which requires only that this condition holds for some value of $n_{0}^{-1}\left|\left|x_{\alpha}\right|\right|^{2}$ and only in the limit when $n\rightarrow\infty$. It implies that the cumulant recursions from Corollary 3.4 for $1$-homogeneous non-linearities have constant coefficients and are therefore particularly simple to solve.
For instance, we find at leading order in $1/n$ $\displaystyle\kappa_{4;\alpha}^{(\ell+1)}$ $\displaystyle=\frac{C_{W}^{2}}{n_{\ell}}\left[\left\langle\sigma(z)^{4}\right\rangle_{K_{\alpha\alpha}^{(\ell)}}-\left\langle\sigma(z)^{2}\right\rangle_{K_{\alpha\alpha}^{(\ell)}}^{2}\right]+\left(\chi_{||;\alpha}^{(\ell)}\right)^{2}\kappa_{4;\alpha}^{(\ell)}$ $\displaystyle=\left(\frac{2}{(a_{+}^{2}+a_{-}^{2})n_{0}}\left|\left|x_{\alpha}\right|\right|^{2}\right)^{2}\left(6\frac{a_{+}^{4}+a_{-}^{4}}{(a_{+}^{2}+a_{-}^{2})^{2}}-1\right)\sum_{\ell^{\prime}=1}^{\ell}\frac{1}{n_{\ell^{\prime}}},$ which shows that while $\kappa_{4;\alpha}^{(\ell)}$ is suppressed by one power of $1/n$ relative to the infinite width variance $K_{\alpha\alpha}^{(\ell)}$, it also grows one order faster in $\ell$. This illustrates an important and general theme: depth amplifies finite width effects. It is the effective depth $\ell/n$ of neurons at layer $\ell$ that measures the distance to the infinite width Gaussian regime. Moreover, in the special setting of $1$-homogeneous non-linearities there is a simple method for obtaining the full distribution of the pre-activation vector $z_{\alpha}^{(\ell)}$ at a single input at any finite values of $n_{0},\ldots,n_{\ell}$. This was first pointed out in [17, 21, 66] and is briefly reviewed in Appendix C. A key takeaway is that if we take the hidden layer widths $n_{1}=\cdots=n_{L}=n$, then we have the following convergence in distribution to a product of independent normal and log-normal random variables: $\lim_{\begin{subarray}{c}n,L\rightarrow\infty\\\ L/n\rightarrow\xi\in[0,\infty)\end{subarray}}z_{i;\alpha}^{(L)}\quad\stackrel{{\scriptstyle d}}{{=}}\quad\left(\frac{2\left|\left|x_{\alpha}\right|\right|^{2}}{(a_{+}^{2}+a_{-}^{2})n_{0}}\right)^{1/2}Z_{1}\exp\left[-\mu(\xi,a_{+},a_{-})+\sigma(\xi,a_{+},a_{-})Z_{2}\right],$ (4.7) where $\mu(\xi,a_{+},a_{-})=\sigma^{2}(\xi,a_{+},a_{-}):=\frac{\xi}{4}\left(6\frac{a_{+}^{4}+a_{-}^{4}}{(a_{+}^{2}+a_{-}^{2})^{2}}-1\right),\quad Z_{1},Z_{2}\sim\mathcal{N}(0,1)\text{ iid}.$ The convergence (4.7) reveals that for a fixed input the distribution of the output of a random network with $1$-homogeneous non-linearities at large depth and width depends in a simple way on the limiting effective network depth $\xi$. This bolsters the claim that they are all part of the same universality class. It also means that increasing the network depth $L$ drives the network away from the infinite width Gaussian behavior observed at $\xi=0$ and that the outputs of deep and wide networks are not well-approximated by a Gaussian at all, unless $\xi$ is infinitesimal, in which case the log-normal term $\exp\left[-\mu(\xi,a_{+},a_{-})+\sigma(\xi,a_{+},a_{-})Z_{2}\right]$ is negligible. Prior work [20, 21, 23] of the author shows that when $\sigma=\mathrm{ReLU}$ (or any other $1$-homogeneous non-linearity), the distribution at large $n,L$ of not only the network output $z_{i;\alpha}^{(L+1)}$ but also its derivatives with respect to inputs $x_{\alpha}$ and model parameters (e.g. weights and biases) depends only on the effective depth $L/n.$ We will return to this theme in our discussion of exploding and vanishing gradients (Theorem 4.5). Finally, we note that it has also been observed that log-normal random variables describe the structure of gradients in residual networks, even during/after training [39].
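A quick Monte Carlo sketch makes this discussion tangible for a critically tuned ReLU network. Reading off the constants in (4.7) for ReLU ($\mu=\sigma^{2}=5\xi/4$), the ratio $\mathbb{E}[z^{4}]/\mathbb{E}[z^{2}]^{2}$ of a single output component should be close to $3e^{5\xi}$, while (4.6) pins down $\mathbb{E}[z^{2}]$ exactly. The widths, depth, and sample size below are arbitrary, and the agreement is only up to Monte Carlo error and $O(1/n)$ corrections (including layer-counting conventions), so the printed numbers are an illustration rather than a test.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)
rng = np.random.default_rng(0)

n0, n, L, num_samples = 8, 128, 16, 40000     # effective depth xi = L/n = 0.125
C_W = 2.0                                     # ReLU criticality: C_b = 0, C_W = 2
x = rng.normal(size=n0)
K = C_W * (x @ x) / n0                        # the exact variance per (4.6)

z = np.empty(num_samples)
for s in range(num_samples):
    h = rng.normal(0.0, np.sqrt(C_W / n0), size=(n, n0)) @ x
    for _ in range(L - 1):
        h = rng.normal(0.0, np.sqrt(C_W / n), size=(n, n)) @ relu(h)
    z[s] = rng.normal(0.0, np.sqrt(C_W / n), size=n) @ relu(h)

xi = L / n
print("E[z^2]          :", np.mean(z ** 2), "   exact value K =", K)
print("E[z^4]/E[z^2]^2 :", np.mean(z ** 4) / np.mean(z ** 2) ** 2,
      "   log-normal prediction 3*exp(5*xi) =", 3 * np.exp(5 * xi))
```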
To complete our discussion of the ReLU universality class, we make two final remarks. First, a direct computation (reviewed briefly in Proposition C.1 of Appendix C) shows that at criticality for any non-zero inputs $x_{\alpha_{1}},x_{\alpha_{2}}\in\mathbb{R}^{n_{0}}$ with the same norm we have $\lim_{n\rightarrow\infty}\mathrm{Corr}\left(z_{i;\alpha_{1}}^{(\ell)},z_{i;\alpha_{2}}^{(\ell)}\right)=1-\frac{2(a_{+}-a_{-})^{2}}{3\pi(a_{+}^{2}+a_{-}^{2})}\ell^{-2}(1+o(1)).$ (4.8) The power law exponent $2$ that appears in this estimate is common to all $1$-homogeneous non-linearities and is another reason to believe they fall into the same universality class. In contrast, this exponent equals one for non-linearities in the $K_{*}=0$ universality class presented below. The estimate (4.8) suggests that in order to define a double scaling limit $n,L\rightarrow\infty$ and $L/n\rightarrow\xi$ in which the entire field $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}$ is non-degenerate (rather than just its value at a single input) one must rescale distances in the input space to prevent the collapse of correlations otherwise guaranteed in (4.8). We leave this as an interesting direction for future work.

#### 4.2.2 The Universality Class of Hyperbolic Tangent

The second class of non-linearities we study is what [53] termed the $K_{*}=0$ universality class, which we take to mean non-linearities $\sigma$ such that

* • $\sigma$ is a smooth, odd function satisfying Assumption 2.1.
* • $\sigma$ satisfies $\sigma_{1}\sigma_{3}<0,\qquad\sigma_{j}:=\frac{1}{j!}\frac{d^{j}}{dt^{j}}\bigg{|}_{t=0}\sigma(t).$ (4.9)
* • $K_{*}=0$ is the unique fixed point of equation ($*$).
* • At criticality, for every non-zero network input $x_{\alpha}\in\mathbb{R}^{n_{0}}$ and each $\delta\in(0,1)$ we have as $L\rightarrow\infty$ that $K_{\alpha\alpha}^{(L)}=\frac{1}{aL}\left(1+O(L^{-1+\delta})\right),$ (4.10) where the implicit constant depends on $\delta$ and $x_{\alpha}$ and we've set $a:=-6\frac{\sigma_{3}}{\sigma_{1}}.$ This specific value of $a$, which is positive by (4.9), is the only possible candidate for decay of the form (4.10) that is consistent with the recursion (2.4).

Some remarks are in order. First, if $K_{*}=0$ is the unique fixed point for ($*$), then a simple computation shows that criticality is achieved if and only if $K_{*}=0,\qquad C_{b}=0,\qquad C_{W}=\sigma_{1}^{-2}.$ (4.11) Next, our definition of the $K_{*}=0$ universality class does not make apparent whether it is empty. As we will see in Proposition B.1, however, the $K_{*}=0$ universality class is in fact quite large and contains for example the hyperbolic tangent and more generally any non-linearity that is $\tanh$-like in the sense that it is smooth with $\sigma_{1}\neq 0$, has the opposite sign from its second derivative $\text{for almost every }z,~{}\mathrm{sgn}\left(\sigma(z)\sigma^{\prime\prime}(z)\right)=-1,$ is sub-linear $\forall z\in\mathbb{R}\,\left|\sigma(z)\right|\leq\left|\sigma_{1}z\right|,$ and is controlled by its first few non-zero Taylor series coefficients at $0$: $\exists C\geq 0\text{ s.t. }\forall z\geq 0,\quad\sigma_{1}z+\sigma_{3}z^{3}\leq\sigma(z)\leq\sigma_{1}z+\sigma_{3}z^{3}+Cz^{4}.$ Further, by definition, for the $K_{*}=0$ universality class, the infinite width variance $K_{\alpha\alpha}^{(\ell)}$ of neuron pre-activations $z_{i;\alpha}^{(\ell)}$ is qualitatively different from that of $1$-homogeneous non-linearities. Indeed, $K_{\alpha\alpha}^{(L)}$ depends on $L$, decaying polynomially to $0$.
Moreover, at large $L$, the value of $K_{\alpha\alpha}^{(L)}$ is independent of the initial condition $K_{\alpha\alpha}^{(0)}$ to leading order in $L$. As a final remark let us point out that searching for non-linearities $\sigma$ so that $K_{*}=0$ at criticality is quite natural. Indeed, for any $\sigma$ that is twice differentiable, we have $\chi_{||}(K)=\chi_{\perp}(K)+C_{W}\left\langle\sigma(z)\sigma^{\prime\prime}(z)\right\rangle_{K}.$ Hence, if $K>0$, then $\chi_{||}(K)=1,\,\chi_{\perp}(K)=1\qquad\Longrightarrow\qquad\left\langle\sigma(z)\sigma^{\prime\prime}(z)\right\rangle_{K}=0.$ But if $\sigma$ is a sigmoidal function such as $\tanh$, then $\sigma(z)\sigma^{\prime\prime}(z)<0$ for all $z\neq 0$. Hence, $\left\langle\sigma(z)\sigma^{\prime\prime}(z)\right\rangle_{K}=0$ can only occur when $K=0$. As in the monograph [53], let us now probe the role of network depth by studying the large $L$ behavior of the cumulants $\kappa_{2k;\alpha}^{(L)},\,k=2,3,4,$ in networks with non-linearities from the $K_{*}=0$ universality class tuned to criticality. Note that in (4.10) the limiting behavior of the variance $K_{\alpha\alpha}^{(L)}$ depends (mildly) on the non-linearity $\sigma$ in terms of its first few Taylor coefficients at $0$. As we are about to see, however, the behavior of the higher cumulants $\kappa_{2k;\alpha}^{(L)},\,k=2,3,4,$ when normalized by the appropriate power of $K_{\alpha\alpha}^{(L)}$, is independent of $\sigma$ at leading order in $n$ and $L$ and depends only on universal constants and the effective network depth $L/n$.

###### Corollary 4.1.

Suppose $\sigma$ is a non-linearity from the $K_{*}=0$ universality class, and consider a depth $L$ random neural network with input dimension $n_{0}$, output dimension $n_{L+1}$, hidden layer widths satisfying $n_{1},\ldots,n_{L}=n\gg 1,$ and non-linearity $\sigma$ that has been tuned to criticality as in (4.11). Write $\xi=L/n$ and define the normalized cumulants $\widehat{\kappa}_{2k;\alpha}^{(L)}:=\frac{\kappa_{2k;\alpha}^{(L)}}{(K_{2;\alpha}^{(L)})^{k}}.$ We have for $k=2,3,4$ that $\displaystyle\widehat{\kappa}_{2k;\alpha}^{(L)}$ $\displaystyle=C_{2k}\xi^{k-1}\left(1+O(L^{-1})\right)+O(n^{-k}),$ (4.12) where $C_{4}=\frac{2}{3},\qquad C_{6}=\frac{28}{15},\qquad C_{8}=\frac{8756}{315}$ are positive universal constants. The implicit constants in error terms $O(L^{-1})$ depend on $\sigma,C_{b},C_{W}$ and constants in $O(n^{-j}),\,j=2,3,4$ may depend in addition on $L$.

###### Remark 4.2.

Combining estimate (4.12) for $k=2$ with the definition (4.10) of the $K_{*}=0$ universality class and (3.12) yields that in the setting of Corollary 3.4 we have to first order in $1/n$ and to leading order in $1/L$ that $\displaystyle\widehat{\mathrm{Var}}\left[\left(z_{1;\alpha}^{(\ell+1)}\right)^{2}\right]$ $\displaystyle=2(1+\xi),\qquad\mathrm{Corr}\left(\left(z_{1;\alpha}^{(\ell+1)}\right)^{2},\,\left(z_{2;\alpha}^{(\ell+1)}\right)^{2}\right)=\frac{2}{3}\xi,\qquad\xi:=\frac{L}{n},$ where $\widehat{\mathrm{Var}}[X]=\mathrm{Var}[X]/\mathbb{E}\left[X\right]^{2}$. Thus, both the fluctuations of a single neuron pre-activation and the correlation between different neurons are controlled to first order in $1/n,1/L$ by the effective network depth $\xi$. We prove Corollary 4.1 in §11.2. The formula (4.12) was derived in [61] for $k=2$ at a physics level of rigor.
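The $k=2$ case of Corollary 4.1 can be checked against a direct numerical iteration of the recursions themselves. The sketch below runs the infinite-width recursion (4.1) together with the leading-order $\kappa_{4}$ recursion (3.15) for $\tanh$ tuned to criticality ($C_{b}=0$, $C_{W}=1$), evaluating the Gaussian expectations in (3.14) by Gauss-Hermite quadrature and using $\partial_{z}^{2}\tanh^{2}(z)=2(1-\tanh^{2}z)(1-3\tanh^{2}z)$. The printed ratio $\kappa_{4;\alpha}/K_{\alpha\alpha}^{2}$ should approach $\tfrac{2}{3}L/n$ as $L$ grows; the depths, widths, and quadrature order are illustrative choices, and layer-counting conventions are respected only to leading order.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(100)
def gmean(f, K):
    """<f(z)>_K for z ~ N(0, K) via Gauss-Hermite quadrature."""
    return float(np.dot(weights, f(np.sqrt(2.0 * K) * nodes)) / np.sqrt(np.pi))

def kappa4_vs_prediction(K1, L, n, C_W=1.0, C_b=0.0):
    K, k4 = K1, 0.0                                          # K^{(1)}_{aa} and kappa_4^{(1)} = 0
    for _ in range(L):
        t2 = lambda z: np.tanh(z) ** 2
        m2 = gmean(t2, K)
        T02 = C_W ** 2 * gmean(lambda z: (t2(z) - m2) ** 2, K)           # T_{0,2;a} of (3.14)
        chi = C_W * gmean(lambda z: (1 - t2(z)) * (1 - 3 * t2(z)), K)    # chi_par = (C_W/2)<d^2_z tanh^2>_K
        k4 = T02 / n + chi ** 2 * k4                         # the recursion (3.15), leading order
        K = C_b + C_W * m2                                   # the recursion (4.1)
    return k4 / K ** 2, (2.0 / 3.0) * L / n

for L, n in [(50, 500), (200, 2000), (800, 8000)]:           # effective depth L/n = 0.1 in each case
    print(L, n, kappa4_vs_prediction(K1=1.0, L=L, n=n))
```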
Moreover, Corollary 4.1 represents a partial analog of the convergence result (4.7), which lends credence to the following

###### Conjecture 4.3 (Existence of Double Scaling Limit for Random Neural Networks).

Consider a random depth $L$ neural network with input dimension $n_{0}$, hidden layer widths $n_{1},\ldots,n_{L}=n\gg 1,$ output dimension $n_{L+1}$ and non-linearity $\sigma$. Suppose further that this network is tuned to criticality in the sense that (4.4) is satisfied. Fix a non-zero network input $x_{\alpha}\in\mathbb{R}^{n_{0}}$ and write $\xi=L/n$. For each $k\geq 1$ there exists $C_{2k}>0$ depending on the universality class of $\sigma$ so that $\frac{\kappa_{2k;\alpha}^{(L)}}{\left(K_{2;\alpha}^{(L)}\right)^{k}}=C_{2k}\xi^{k-1}+O\left(\xi^{k}\right).$ Moreover, for each $\xi\in[0,\infty)$ there exists a probability distribution $\mathbb{P}_{\xi,\sigma}$ on $\mathbb{R}$, depending only on $\xi$ and $\sigma$, such that in the double scaling limit $n,L\rightarrow\infty,\qquad\frac{L}{n}\rightarrow\xi,$ the random variable $z_{i;\alpha}^{(\ell)}$ converges in distribution to a random variable with law $\mathbb{P}_{\xi,\sigma}$.

### 4.3 Gradients in Random Neural Networks

We presented in §3 and §4 a range of results about random fully connected neural networks. The high-level takeaway was that such networks are succinctly described when $n,L$ are large by the effective network depth $\xi=L/n$. We saw that when $\xi=0$ the network outputs are a Gaussian process (Theorem 2.2). When $\xi>0$, however, a different picture emerges. Namely, the higher order cumulants of the network output grow like powers of $\xi$ (see Corollary 3.4 and Conjecture 4.3). In this section we continue to consider a random neural network $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}$ and study for any $\ell$ the partial derivatives $\partial_{x_{j;\alpha}}z_{i;\alpha}^{(\ell)},\qquad j=0,\ldots,n_{0},$ where by definition when $j=0$ we set $\partial_{x_{0;\alpha}}z_{i;\alpha}^{(\ell)}:=z_{i;\alpha}^{(\ell)}.$ In §10.2–§10.4 we obtain recursions with respect to $\ell$ for both the infinite width covariances $K_{(ij)}^{(\ell)}:=\lim_{n\rightarrow\infty}\mathrm{Cov}\left(\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{i;\alpha}},\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{j;\alpha}}\right),\quad i,j=1,2.$ and the fourth cumulants $\displaystyle\kappa_{(ii)(jj)}^{(\ell)}$ $\displaystyle:=(1-\frac{2}{3}\delta_{ij})\kappa\left(\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{i;\alpha}},\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{i;\alpha}},\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{j;\alpha}},\frac{\partial z_{i;\alpha}^{(\ell+1)}}{\partial x_{j;\alpha}}\right),\qquad i,j=0,1,2.$ $\displaystyle=\kappa\left(\frac{\partial z_{1;\alpha}^{(\ell+1)}}{\partial x_{i;\alpha}},\frac{\partial z_{1;\alpha}^{(\ell+1)}}{\partial x_{i;\alpha}},\frac{\partial z_{2;\alpha}^{(\ell+1)}}{\partial x_{j;\alpha}},\frac{\partial z_{2;\alpha}^{(\ell+1)}}{\partial x_{j;\alpha}}\right).$ These recursions are valid for any non-linearity and any choice of $C_{b},C_{W}$. Then, in §11.2 we solve these recursions in the context of a random neural network with a non-linearity from the $K_{*}=0$ universality class tuned to criticality. We record several of these solutions in the following result.

###### Theorem 4.4 (Second and Fourth Joint Cumulants for Values and Derivatives of Random Neural Networks).
Fix $r\geq 1$ and suppose that $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ satisfies Assumption 2.1 with this value of $r$. Fix also an $x_{\alpha}\neq 0\in\mathbb{R}^{n_{0}}$. Consider a random neural network $z_{\alpha}^{{}_{(L+1)}}$ with input dimension $n_{0}$, output dimension $n_{L+1}$, hidden layer widths $n_{1},\ldots,n_{L}=n$, and non-linearity $\sigma$ belonging to the $K_{*}=0$ universality class that has been tuned to criticality as in (4.11). We have for any $\delta\in(0,1)$ $\displaystyle K_{(00)}^{(\ell+1)}$ $\displaystyle=\frac{1}{a\ell}+O_{\delta}(\ell^{-2+\delta})$ (4.13) $\displaystyle K_{(10)}^{(\ell+1)}$ $\displaystyle=\frac{C_{W}e^{-2\gamma}x_{1;\alpha}}{n_{0}\ell^{2}}\left(1+O(\ell^{-1})\right)$ (4.14) $\displaystyle K_{(11)}^{(\ell+1)}$ $\displaystyle=\frac{C_{W}e^{-\gamma}}{n_{0}\ell}\left(1+O(\ell^{-1})\right)+O\left(\frac{(x_{1;\alpha})^{2}}{n_{0}^{2}\ell^{3}}\right)$ (4.15) $\displaystyle K_{(12)}^{(\ell+1)}$ $\displaystyle=O\left(\frac{x_{1;\alpha}x_{2;\alpha}}{n_{0}^{2}\ell^{3}}\right),$ (4.16) where $\gamma$ is the Euler-Mascheroni constant and the implicit constant in the $O_{\delta}(\ell^{-2+\delta})$ error depends on $\delta$. Moreover, denoting by $\xi=L/n$ the effective network depth, we have $\displaystyle\frac{\kappa_{(11)(00)}^{(\ell+1)}}{K_{(11)}^{(\ell)}K_{(00)}^{(\ell)}}$ $\displaystyle=-\frac{1}{3}\xi(1+O(\ell^{-1}))+O(n^{-2})$ (4.17) $\displaystyle\frac{\kappa_{(11)(11)}^{(\ell+1)}}{(K_{(11)}^{(\ell)})^{2}}$ $\displaystyle=\frac{8}{3}\xi(1+O(\ell^{-1}))+O(n^{-2})$ (4.18) $\displaystyle\frac{\kappa_{(11)(22)}^{(\ell+1)}}{K_{(11)}^{(\ell)}K_{(22)}^{(\ell)}}$ $\displaystyle=\frac{2}{3}\xi(1+O(\ell^{-1}))+O(n^{-2}),$ (4.19) where the implicit constants in the $O(\ell^{-1})$ error terms depend only on $x_{\alpha},\sigma$ and the implicit constants in the $O(n^{-2})$ error terms may depend in addition on $\ell$. Theorem 4.4 reveals several interesting phenomena. First, we find from (4.13) and (4.14) that at infinite width values and derivatives become uncorrelated at large $L$: $\lim_{L\rightarrow\infty}\lim_{n\rightarrow\infty}\mathrm{Corr}\left(z_{1;\alpha}^{(L+1)},\,\partial_{x_{1;\alpha}}z_{1;\alpha}^{(L+1)}\right)=0.$ Moreover, at finite width, the effective network depth $\xi$ controls the distribution of gradients both in terms of their fluctuations at a single neuron and their correlations between neurons. For instance, to leading order in $n,L$: $\displaystyle\frac{\mathrm{Var}\left[\left(\partial_{x_{1;\alpha}}z_{1;\alpha}^{(L+1)}\right)^{2}\right]}{\mathbb{E}\left[\left(\partial_{x_{1;\alpha}}z_{1;\alpha}^{(L+1)}\right)^{2}\right]}$ $\displaystyle=\frac{8}{3}\xi,\qquad\mathrm{Corr}\left(\left(\frac{\partial z_{1;\alpha}^{(L+1)}}{\partial x_{1;\alpha}}\right)^{2},\left(\frac{\partial z_{1;\alpha}^{(L+1)}}{\partial x_{2;\alpha}}\right)^{2}\right)=\frac{2}{3}\xi.$ In the next section, we will apply such estimates to understand a simple version of the so-called exploding and vanishing gradient problem. ### 4.4 Exploding and Vanishing Gradient Problem for First Layer Weights On its face, Theorem 4.4 concerns only derivatives of $z_{i;\alpha}^{{}_{(L+1)}}$ with respect to the network input $x_{\alpha}$. However, it is the derivatives of $z_{i;\alpha}^{{}_{(L+1)}}$ with respect to the model parameters (weights and biases) that are arguably more important. 
To explain this more precisely, consider a fully connected neural network $x_{\alpha}\mapsto z_{\alpha}^{{}_{(L+1)}}$ with input dimension $n_{0}$, output dimension $n_{L+1}$, and hidden layer widths $n_{1},\ldots,n_{L}$. For any $m\in\mathbb{N}$ let us set $[m]:=\left\\{1,\ldots,m\right\\}$ and denote by $\theta=\left(\theta_{\mu},\,\mu\in[\\#\text{params}]\right)=\left(W_{ij}^{(\ell)},b_{i}^{(\ell)},\,\ell\in[L+1],\,i\in[n_{\ell}],\,j\in[n_{\ell-1}]\right)$ the vector of its trainable parameters, where we’ve set $\\#\text{params}:=\\#\text{weights and biases}=\sum_{\ell=1}^{L+1}n_{\ell}\left(1+n_{\ell-1}\right).$ As we briefly discussed in §2.2 the typical use case for a network is to optimize its parameters $\theta$ by some variant of gradient descent $\theta_{\mu}(t+1)=\theta_{\mu}(t)-\eta\partial_{\mu}\mathcal{L}(\theta(t)),\qquad\mu\in[\\#\text{params}]$ (4.20) starting from a random initialization $\theta(0)$ drawn from the distribution (2.2). The goal of this procedure is to minimize a loss $\mathcal{L}(\theta)=\mathcal{L}(z_{\mathcal{A}_{\text{train}}}^{(L+1)}(\theta)),\qquad z_{\mathcal{A}_{\text{train}}}^{(L+1)}:=\left(z_{\alpha}^{(L+1)},\,\alpha\in\mathcal{A}_{\text{train}}\right),$ which often depends only on the network output and measures for each $\theta$ how well the resulting neural net function $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}(\theta)$ matches given training data $\left\\{(x_{\alpha},f(x_{\alpha})),\,\alpha\in\mathcal{A}_{\text{train}}\right\\}$ for a function $f:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{L+1}}$. For example, we might have $\mathcal{L}(\theta)=\frac{1}{\left|\mathcal{A}_{\text{train}}\right|}\sum_{\alpha\in\mathcal{A}_{\text{train}}}\left|\left|f(x_{\alpha})-z_{\alpha}^{(L+1)}\right|\right|_{2}^{2},$ though many different losses are used in practice. The parameter $\eta$ is called the learning rate, or step size. An important and well-known [5, 17, 27] numerical stability issue that comes up in the course of using a first order method such as (4.20) to optimize the parameters of a deep neural network is called the exploding and vanishing gradient problem (EVGP). Informally, the EVGP occurs when the update (4.20) is numerically unstable: $\mathrm{EVGP}\qquad\longleftrightarrow\qquad\text{for many parameters }\theta_{\mu}\quad\frac{\left|\eta\partial_{\mu}\mathcal{L}(\theta)\right|}{\left|\theta_{\mu}\right|}\approx 0\text{ or }\infty.$ The presence of the EVGP causes optimization to break down since if $\left|\partial_{\mu}\mathcal{L}(\theta)\right|/\left|\theta_{\mu}\right|\approx 0\text{ or }\infty$, then the relative change in the parameter $\theta_{\mu}$ is either too small to be useful or so large that it amounts to a large random step in the space of parameters and therefore is unlikely to decrease the loss. It has long been known that sufficiently deep fully connected neural networks are prone to suffer from the EVGP [5, 27, 28]. Indeed, many important practical innovations such as residual connections [24] were invented in part to address such numerical issues, though by now they are seen as useful for many other reasons. 
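For concreteness, here is a minimal NumPy sketch of the training procedure described above: the gradient descent update (4.20) applied to the mean squared error loss over a small training set. It is illustrative only; the architecture, data, learning rate, and the simplified i.i.d. initialization (a stand-in for (2.2)) are arbitrary choices, and the gradient is computed by finite differences rather than backpropagation purely for transparency.

```python
import numpy as np

rng = np.random.default_rng(2)

def network(theta, X, widths):
    """Evaluate z^(L+1) for a batch of inputs X (shape n_0 x batch) of a fully
    connected tanh network whose parameters are packed into the flat vector theta."""
    z, pos = X, 0
    for l, (fan_in, fan_out) in enumerate(zip(widths[:-1], widths[1:])):
        W = theta[pos:pos + fan_out * fan_in].reshape(fan_out, fan_in)
        pos += fan_out * fan_in
        b = theta[pos:pos + fan_out]
        pos += fan_out
        z = b[:, None] + W @ (z if l == 0 else np.tanh(z))
    return z

def loss(theta, X, Y, widths):
    """Empirical mean squared error over the training set, as in the display above."""
    return np.mean(np.sum((network(theta, X, widths) - Y) ** 2, axis=0))

def grad(theta, X, Y, widths, h=1e-6):
    """Central finite-difference gradient of the loss (transparent, if slow)."""
    g = np.zeros_like(theta)
    for mu in range(theta.size):
        e = np.zeros_like(theta)
        e[mu] = h
        g[mu] = (loss(theta + e, X, Y, widths) - loss(theta - e, X, Y, widths)) / (2 * h)
    return g

widths = [2, 8, 8, 1]                               # n_0 = 2, two hidden layers, n_3 = 1
n_params = sum(o * (i + 1) for i, o in zip(widths[:-1], widths[1:]))
theta = 0.5 * rng.standard_normal(n_params)         # simplified stand-in for the init (2.2)
X = rng.standard_normal((2, 16))                    # |A_train| = 16 inputs
Y = np.sin(X[:1]) * X[1:]                           # toy target function f

eta = 0.02                                          # the learning rate (step size)
print("initial loss:", loss(theta, X, Y, widths))
for t in range(300):                                # the gradient descent update (4.20)
    theta = theta - eta * grad(theta, X, Y, widths)
print("final loss:  ", loss(theta, X, Y, widths))
```

Running this, the printed loss should decrease over the 300 steps. The EVGP concerns what happens to the individual gradients $\partial_{\mu}\mathcal{L}$ entering this update when the effective network depth $L/n$ is no longer small.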
To see why deep enough neural nets tend to have numerically unstable gradients, note that since the loss only depends on the network outputs on a finite training set $z_{\mathcal{A}_{\text{train}}}^{{}_{(L+1)}}$, we may use the chain rule to write $\partial_{\mu}\mathcal{L}(\theta)=\sum_{\alpha\in\mathcal{A}_{\text{train}}}J_{\theta_{\mu}}z_{\alpha}^{(L+1)}J_{z_{\alpha}^{(L+1)}}\mathcal{L}(z_{\mathcal{A}_{\text{train}}}^{(L+1)}),$ where for any function $f$ we denote by $J_{x}f(x)$ its Jacobian with respect to $x.$ Moreover, suppose that $\theta_{\mu}$ is a weight or bias in layer $\ell_{0}$. Then, again by the chain rule, the Jacobian of the network output with respect to $\theta_{\mu}$ equals $J_{\theta_{\mu}}z_{\alpha}^{(L+1)}=J_{\theta_{\mu}}z_{\alpha}^{(\ell_{0})}J_{z_{\alpha}^{(\ell_{0})}}z_{\alpha}^{(L+1)}.$ Both of these terms may have large fluctuations. Perhaps most importantly, the second term can be rewritten as a product $J_{z_{\alpha}^{(\ell_{0})}}z_{\alpha}^{(L+1)}=\prod_{\ell=\ell_{0}}^{L}J_{z_{\alpha}^{(\ell)}}z_{\alpha}^{(\ell+1)},$ of $L-\ell_{0}+1$ layer-to-layer Jacobian matrices (which are random at the start of training), with each term having the following explicit representation: $J_{z_{\alpha}^{(\ell)}}z_{\alpha}^{(\ell+1)}=W^{(\ell+1)}D^{(\ell)},\qquad D^{(\ell)}:=\mathrm{Diag}\left(\sigma^{\prime}\left(z_{i;\alpha}^{(\ell)}\right),\,i=1,\ldots,n_{\ell}\right).$ Except in the simple case when $\sigma$ is the identity and we require $W^{{(\ell+1)}}$ to be orthogonal, the top singular value of each layer-to-layer Jacobian will tend to deviate from $1$, causing the norms of their products to either grow or decay exponentially with $L$. This poor conditioning for large matrix products is a key culprit behind the EVGP. Often, though not always, the gradients $\partial_{\mu}\mathcal{L}$ have the largest fluctuations at the start of training. Thus, the EVGP is primarily a property of random neural networks, and the techniques in this article can be used to shed light on when it occurs. At the start of training the entries of the Jacobian $J_{z_{\alpha}^{(\ell_{0})}}z_{\alpha}^{{}_{(L+1)}}$ are precisely the derivatives of a random neural network with depth $L-\ell_{0}+1$ with respect to its input. One can therefore hope to use Theorems 3.1 and 3.2 to address the following more precise formulation of the EVGP: $\displaystyle\text{EVGP}\quad\longleftrightarrow\quad$ for a typical random draw of weights and biases the size of the relative Jacobians of the network output $\displaystyle\qquad\qquad\qquad\qquad\qquad|J_{\theta_{\mu}}z_{j;\alpha}^{(L+1)}|/\left|\theta_{\mu}\right|$ $\displaystyle\text{has a large variance over $\mu$ (i.e. over network parameters)}.$ This characterization of the EVGP says in essence that if we can choose only a single learning rate $\eta$ for a group of otherwise indistinguishable parameters, such as the weights in a single layer, then it makes sense to set $\eta=\left(\text{average relative gradient }|J_{\theta_{\mu}}z_{j;\alpha}^{(L+1)}|/\left|\theta_{\mu}\right|\right)^{-1}.$ The EVGP will then occur when the sizes of the relative gradients have a large variance across parameters in a typical random draw of weights and biases, so that for some parameters our setting of $\eta$ is too small whereas for others it is too large. We will leave the study of the EVGP in this general form to future work and consider here a special case; a small numerical illustration of the instability just described is given in the sketch below.
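The sketch is a minimal NumPy illustration only: the ReLU non-linearity, the widths and depth, the absence of biases, and the values of $C_{W}$ are arbitrary choices made for the example and are not the setting of Theorem 4.5. It accumulates the product of layer-to-layer Jacobians $W^{(\ell+1)}D^{(\ell)}$ along a single forward pass and reports the logarithm of its spectral norm for a sub-critical, a critical, and a super-critical choice of $C_{W}$ (for ReLU the critical value is $C_{W}=2$).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_jacobian_norm(C_W, n0=10, n=300, L=100):
    """One random bias-free ReLU network z^(l+1) = W^(l+1) sigma(z^(l)) with
    W_ij ~ N(0, C_W / fan_in).  Accumulates the input-output Jacobian
    J^(l+1) = W^(l+1) D^(l) J^(l) along the forward pass and returns the log
    of its largest singular value at the deepest layer."""
    widths = [n0] + [n] * L
    z = rng.standard_normal(n0)
    J = np.eye(n0)
    for l in range(L):
        W = rng.standard_normal((widths[l + 1], widths[l])) * np.sqrt(C_W / widths[l])
        if l == 0:
            a, D = z, np.ones_like(z)          # the first layer acts on the raw input
        else:
            a, D = np.maximum(z, 0.0), (z > 0).astype(float)
        J = W @ (D[:, None] * J)               # chain rule: J^(l+1) = W^(l+1) D^(l) J^(l)
        z = W @ a
    return np.log(np.linalg.norm(J, 2))

for C_W in (1.5, 2.0, 2.5):                    # C_W = 2 is critical for ReLU
    vals = [log_jacobian_norm(C_W) for _ in range(5)]
    print(f"C_W = {C_W}:  mean log ||J|| over 5 draws = {np.mean(vals):+.1f}")
```

One should see a strongly negative log-norm for $C_{W}=1.5$, a strongly positive one for $C_{W}=2.5$, and a value of moderate size at the critical $C_{W}=2$, in line with the exponential growth or decay described above.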
Specifically, we compute in Theorem 4.5 below the average of the empirical variance of the squared gradients $|J_{\theta_{\mu}}z_{j;\alpha}^{{}_{(L+1)}}|^{2}$ when $\theta_{\mu}=W_{ij}^{{}_{(1)}}$ varies over all weights in the first layer. By construction, the typical size of $\left|\theta_{\mu}\right|$ is the same for all such $\mu$. The EVGP will thus occur for weights in the first layer if and only if the empirical fluctuations over $\mu$ of the raw Jacobians $|J_{{W_{ij}^{(1)}}}z_{j;\alpha}^{{}_{(L+1)}}|$ are large. To state the precise result we fix $x_{\alpha}\neq 0\in\mathbb{R}^{n_{0}}$ and $q\in[n_{L+1}]$. We then write $\mathrm{Grad~{}Mean}^{(1)}:=\frac{1}{n_{0}n_{1}}\sum_{\begin{subarray}{c}1\leq i\leq n_{1}\\\ 1\leq j\leq n_{0}\end{subarray}}\left(\partial_{W_{ij}^{(1)}}z_{q;\alpha}^{(L+1)}\right)^{2}$ for the empirical mean of the squared gradients of $z_{q;\alpha}^{(L+1)}$ with respect to the weights $W_{ij}^{{}_{(1)}}$ in the first layer and analogously $\mathrm{Grad~{}Var}^{(1)}:=\frac{1}{n_{0}n_{1}}\sum_{\begin{subarray}{c}1\leq i\leq n_{1}\\\ 1\leq j\leq n_{0}\end{subarray}}\left(\partial_{W_{ij}^{(1)}}z_{q;\alpha}^{(L+1)}\right)^{4}-\left(\mathrm{Grad~{}Mean}^{(1)}\right)^{2}$ for the empirical variance of the squared gradients. ###### Theorem 4.5 (EVGP for First Layer Weights in Fully Connected Networks). Let $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}$ be a random neural network with input dimension $n_{0}$, output dimension $n_{L+1}$, hidden layer widths $n_{1},\ldots,n_{L}=n\gg 1,$ and non-linearity $\sigma$. Suppose also that * • $\sigma$ satisfies Assumption 2.1 * • $\sigma$ belongs to the $K_{*}=0$ universality class in the sense described in §4.1 as well as §4.2.2. * • $C_{b},C_{W}$ are tuned to criticality in the sense of (4.11). Then, for any non-zero $x_{\alpha}\in\mathbb{R}^{n_{0}}$ we have $\frac{\mathbb{E}\left[\mathrm{Grad~{}Var}^{(1)}\right]}{\mathbb{E}\left[\mathrm{Grad~{}Mean}^{(1)}\right]^{2}}=C_{\sigma,\alpha}\bigg{(}1+\frac{8}{3}\xi+O(L^{-1})+O(n^{-1})+O_{L}(n^{-2})\bigg{)},$ where $\xi=\frac{L}{n}$ is the effective network depth and $C_{\sigma,\alpha}:=3\frac{\left\langle(\sigma^{\prime})^{4}\right\rangle_{K_{\alpha\alpha}^{(1)}}}{\left\langle(\sigma^{\prime})^{2}\right\rangle_{K_{\alpha\alpha}^{(1)}}^{2}}\frac{n_{0}^{-1}\left|\left|x_{\alpha}\right|\right|_{4}^{4}}{(n_{0}^{-1}\left|\left|x_{\alpha}\right|\right|_{2}^{2})^{2}}-1$ is a strictly positive constant depending only on $x_{\alpha},\sigma$; the implicit constants in the error terms $O(n^{-1}),O(L^{-1})$ depend only on $x_{\alpha},\sigma$, whereas the implicit constant in the error term $O_{L}(n^{-2})$ depends on $x_{\alpha},\sigma,L$. ###### Remark 4.6. Theorem 4.5 shows that, at least to leading order in $1/n$, the exploding and vanishing gradient problem occurs for first layer weights in a critically tuned random fully connected network with non-linearity from the $K_{*}=0$ universality class if and only if the effective network depth $L/n$ is large. We prove Theorem 4.5 in §11. To put this result into context, let us make several remarks. First, the analog of Theorem 4.5 for $1-$homogeneous non-linearities was derived in prior work [17, 21]. As we’ve alluded to before, the behavior of a random fully connected neural network with such non-linearities can be studied by more specialized methods that reveal not only the leading order dependence in $1/n$ but actually the full dependence on $L/n$ plus errors of size $L/n^{2}$ (see Appendix C).
The conclusion, just as for non-linearities from the $K_{*}=0$ universality class, is that the EVGP occurs if and only if $L/n$ is large. Second, Theorem 4.5 is the first time the EVGP has been characterized mathematically in random fully connected networks for a broad class of non-linearities beyond those that are $1-$homogeneous. However, as throughout, the author would like to acknowledge the prior work [53] of Roberts, Yaida, and the author, which also studies the EVGP and shows, at a physics level of rigor, that the variance over random initialization of a single squared gradient $(\partial_{\mu}z_{q;\alpha}^{{}_{(L+1)}})^{2}$ also scales like $L/n$ at large $L$. This is conceptually different from showing that in a typical initialization the empirical variance of $(\partial_{\mu}z_{q;\alpha}^{{}_{(L+1)}})^{2}$ over $\mu$ in the first layer is large, which is what Theorem 4.5 establishes. Nonetheless, the conclusions are essentially the same and the basic techniques used to derive the results are similar. ## 5 Overview of Proofs In this section, we present the essential idea for how we analyze a random fully connected neural network $x_{\alpha}\mapsto z_{\alpha}^{(L+1)}$ at finite width. Our approach is based on the following structural properties: * • The sequence of fields $z_{\alpha}^{(\ell)}$ is a Markov Chain with respect to $\ell$. * • Conditional on the sigma algebra $\mathcal{F}^{(\ell)}$ defined by $z_{\alpha}^{(\ell)}$, the field $z_{\alpha}^{(\ell+1)}$ is a Gaussian field with independent components $z_{i;\alpha}^{(\ell+1)}$. See Lemma 7.1. * • The variance of each component $z_{i;\alpha}^{(\ell+1)}$ depends on $z_{\alpha}^{(\ell)}$ only through random variables of the form $\mathcal{O}_{f}^{(\ell)}:=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}}f(z_{j;\alpha}^{(\ell)}),$ which we refer to as collective observables. See (3.6). * • Collective observables are self-averaging to a similar extent as if the random variables $f(z_{j;\alpha}^{(\ell)})$ were independent, in the sense that for any $q\geq 0$, we have $\mathbb{E}\left[\left(\mathcal{O}_{f}^{(\ell)}-\mathbb{E}\left[\mathcal{O}_{f}^{(\ell)}\right]\right)^{q}\right]=O_{q}\left(n^{-\lceil\frac{q}{2}\rceil}\right).$ (5.1) See Theorem 7.3 and Lemma 7.5. Let us briefly explain, mostly dispensing with rigor, how these four ideas come together to obtain a recursive description of the distribution of the field $z_{\alpha}^{(\ell+1)}$ in terms of that of $z_{\alpha}^{(\ell)}$. To keep the notation to a minimum, we fix a network input $x_{\alpha}$ and focus on describing the joint distribution of $z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m$. Extensions to multiple inputs and derivatives proceed along very similar lines. Denoting by $\xi=\left(\xi_{1},\ldots,\xi_{m}\right)$ dual variables, consider the characteristic function $p^{(\ell+1)}(\xi):=\mathbb{E}\left[\exp\left[-i\sum_{i=1}^{m}\xi_{i}z_{i;\alpha}^{(\ell+1)}\right]\right].$ Conditioning on $z_{\alpha}^{(\ell)}$ and using (3.6) allows us to write $p^{(\ell+1)}(\xi)=\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Sigma_{\alpha\alpha}^{(\ell)}\right]\right],$ where we remind the reader that $\Sigma_{\alpha\alpha}^{(\ell)}=\mathrm{Var}\left[z_{i;\alpha}^{(\ell+1)}~{}\big{|}~{}\mathcal{F}^{(\ell)}\right]=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\sigma(z_{j;\alpha}^{(\ell)})^{2}$ is a collective observable at the previous layer.
Writing $\kappa_{\alpha\alpha}^{(\ell)}:=\mathbb{E}\left[\Sigma_{\alpha\alpha}^{(\ell)}\right],\qquad\Delta_{\alpha\alpha}^{(\ell)}:=\Sigma_{\alpha\alpha}^{(\ell)}-\mathbb{E}\left[\Sigma_{\alpha\alpha}^{(\ell)}\right],$ we find $p^{(\ell+1)}(\xi):=\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Delta_{\alpha\alpha}^{(\ell)}\right]\right]\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\kappa_{\alpha\alpha}^{(\ell)}\right].$ The second term is precisely the characteristic function of a centered $m$-dimensional Gaussian with iid components of variance $\kappa_{\alpha\alpha}^{(\ell)}$. Moreover, at least heuristically, the first term can be written $\mathbb{E}\left[\exp\left[-\frac{1}{2}\left|\left|\xi\right|\right|^{2}\Delta_{\alpha\alpha}^{(\ell)}\right]\right]=\sum_{q\geq 0}\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)}\right)^{q}\right]\frac{(-1)^{q}}{2^{q}q!}\left|\left|\xi\right|\right|^{2q}.$ The concentration estimates (5.1) ensure that this series converges. Moreover, since the Fourier transform turns polynomials into derivatives we have $-\left|\left|\xi\right|\right|^{2}=\text{ Laplacian in the variables }z_{i;\alpha}^{(\ell+1)}.$ Hence, we obtain for any reasonable test function $f$ that $\mathbb{E}\left[f(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\right]=\sum_{q=0}^{\infty}\frac{1}{2^{q}q!}\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)}\right)^{q}\right]\left\langle\left(\sum_{i=1}^{m}\partial_{z_{i;\alpha}}^{2}\right)^{q}f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}},$ where $(z_{i;\alpha},\,i=1,\ldots,m)$ is a vector of iid centered Gaussians with variance $\kappa_{\alpha\alpha}^{(\ell)}$. The concentration estimates (5.1) ensure that this expression is a power series in $1/n$. In particular, $\displaystyle\mathbb{E}\left[f(z_{i;\alpha}^{(\ell+1)},\,i=1,\ldots,m)\right]$ $\displaystyle=\left\langle f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}$ (5.2) $\displaystyle+\frac{\mathbb{E}\left[(\Delta_{\alpha\alpha}^{(\ell)})^{2}\right]}{8}\left\langle\left(\sum_{i=1}^{m}\partial_{z_{i;\alpha}}^{2}\right)^{2}f(z_{i;\alpha},\,i=1,\ldots,m)\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}+O(n^{-2}).$ This is the essence of Theorem 3.2. 
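As a quick sanity check on the self-averaging estimate (5.1) driving this expansion, the following minimal NumPy sketch (illustrative only; the tanh non-linearity, the input, and the parameter values are arbitrary choices) estimates the second and third centered moments of the collective observable $\Sigma_{\alpha\alpha}^{(1)}$ for a one-hidden-layer network, sampling $z_{j;\alpha}^{(1)}$ directly from their i.i.d. Gaussian law.

```python
import numpy as np

rng = np.random.default_rng(1)

def centered_moments_of_Sigma(n, n0=5, C_b=0.0, C_W=1.0, trials=100_000):
    """Monte Carlo for the layer-1 collective observable of a tanh network at a
    fixed input with ||x||^2 = 1:
        z_j^(1) iid ~ N(0, C_b + C_W ||x||^2 / n_0),
        Sigma^(1) = C_b + (C_W / n) * sum_j tanh(z_j^(1))^2.
    Returns E[Delta^2] and E[Delta^3] with Delta = Sigma^(1) - E[Sigma^(1)]."""
    K1 = C_b + C_W / n0                     # variance of each z_j^(1)
    z1 = rng.standard_normal((trials, n)) * np.sqrt(K1)
    Sigma = C_b + C_W * np.mean(np.tanh(z1) ** 2, axis=1)
    Delta = Sigma - Sigma.mean()
    return np.mean(Delta ** 2), np.mean(Delta ** 3)

for n in (50, 100, 200):
    m2, m3 = centered_moments_of_Sigma(n)
    print(f"n={n:4d}   n*E[Delta^2]={n * m2:.5f}   n^2*E[Delta^3]={n ** 2 * m3:.6f}")
```

Both rescaled columns should be roughly constant in $n$ (up to Monte Carlo noise), matching the $q=2$ and $q=3$ cases of (5.1), where the exponent is $\lceil q/2\rceil$.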
To derive usable recursions for cumulants of $z_{i;\alpha}^{(\ell+1)}$, note for instance that, in the notation of Corollary 3.4, $\kappa_{4;\alpha}^{(\ell+1)}:=\frac{1}{3}\kappa\left(z_{i;\alpha}^{(\ell+1)},z_{i;\alpha}^{(\ell+1)},z_{i;\alpha}^{(\ell+1)},z_{i;\alpha}^{(\ell+1)}\right)=\mathbb{E}\left[(\Delta_{\alpha\alpha}^{(\ell)})^{2}\right].$ Writing $X_{j}:=\sigma(z_{j;\alpha}^{(\ell)})^{2}-\mathbb{E}\left[\sigma(z_{j;\alpha}^{(\ell)})^{2}\right]$ we thus have $\kappa_{4;\alpha}^{(\ell+1)}=\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)}\right)^{2}\right]=\frac{C_{W}^{2}}{n_{\ell}}\mathbb{E}\left[X_{1}^{2}\right]+C_{W}^{2}\left(1-n_{\ell}^{-1}\right)\mathbb{E}\left[X_{1}X_{2}\right].$ Applying the expansion (5.2) to both these terms and a bit of algebra already yields $\displaystyle\kappa_{4;\alpha}^{(\ell+1)}$ $\displaystyle=\mathbb{E}\left[\left(\Delta_{\alpha\alpha}^{(\ell)}\right)^{2}\right]$ $\displaystyle=\frac{C_{W}^{2}}{n_{\ell}}\left(\left\langle\sigma^{4}\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}-\left\langle\sigma^{2}\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}^{2}\right)$ $\displaystyle+C_{W}^{2}\left(1-n_{\ell}^{-1}\right)\left(\left(\left\langle\sigma^{2}\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}-\mathbb{E}\left[\sigma(z_{1;\alpha}^{(\ell)})^{2}\right]\right)^{2}+\frac{1}{4}\left\langle\partial^{2}\sigma^{2}\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}^{2}\kappa_{4;\alpha}^{(\ell)}\right)+O(n^{-2}).$ A short argument supplied in §9 shows that $\left\langle\sigma^{2}\right\rangle_{\kappa_{\alpha\alpha}^{(\ell)}}=\mathbb{E}\left[\sigma(z_{1;\alpha}^{(\ell)})^{2}\right]+O(n^{-1})$ and that we may replace $\kappa_{\alpha\alpha}^{(\ell)}$ by its infinite width limit $K_{\alpha\alpha}^{(\ell)}$ in all remaining expectations at the cost of an $O(n^{-1})$ error. This already yields the recursion (3.15) of Corollary 3.4.
More precisely if $\left\\{X_{i,j},\,1\leq j\leq k,\,i\leq T_{j}\right\\}$ are random variables with finite moments defined on the same probability space, then $\kappa\left(\sum_{i_{1}=1}^{T_{1}}a_{i_{1},1}X_{i_{1},1},\ldots,\sum_{i_{k}=1}^{T_{k}}a_{i_{k},k}X_{i_{k},k}\right)=\sum_{i_{1}=1}^{T_{1}}\cdots\sum_{i_{k}=1}^{T_{k}}a_{i_{1},1}\cdots a_{i_{k},k}\kappa\left(X_{i_{1},1},\ldots,X_{i_{k},k}\right)$ (6.3) for any $a_{i,j}\in\mathbb{R}$. 4. 4. Suppose $X=\left(X_{1},\ldots,X_{j}\right)\sim\mathcal{N}(0,\Sigma)$ is a centered Gaussian with covariance $\Sigma$. Then for any $i_{1},\ldots,i_{k}\in[j]$ $\kappa\left(X_{i_{1}},\ldots,X_{i_{k}}\right)=\begin{cases}\Sigma_{i_{1}i_{2}},&\quad k=2\\\ 0,&\quad\text{otherwise}\end{cases}.$ (6.4) 5. 5. Moments are polynomials in cumulants. Specifically, suppose $X=\left(X_{1},\ldots,X_{k}\right)$ is a random vector with finite moments of all orders. Then, $\mathbb{E}\left[X_{1}\cdots X_{k}\right]=\sum_{\pi=\left(\pi_{1},\ldots,\pi_{b}\right)}\prod_{a=1}^{b}\kappa\left(X_{\pi_{a}}\right),$ (6.5) where the sum is over all partitions of $[k]$ and for each $a\in[b]$ we’ve written $X_{\pi_{a}}:=\left(X_{i},\,i\in\pi_{a}\right).$ 6. 6. Cumulants are polynomials in moments. Specifically, $\kappa\left(X_{1},\ldots,X_{k}\right)=\sum_{\pi=\left(\pi_{1},\ldots,\pi_{b}\right)}(-1)^{b-1}(b-1)!\prod_{a=1}^{b}\mathbb{E}\left[\prod_{i\in\pi_{a}}X_{i}\right],$ (6.6) where the sum is over all partitions of $[k]$ and for each $a\in[b]$ we’ve written $X_{\pi_{a}}:=\left(X_{i},\,i\in\pi_{a}\right).$ ### 6.2 Lemmata In this section, we collect two simple auxiliary results that we will need. The first is a Lemma for Solving Certain Recursions. ###### Lemma 6.2. Fix $C_{1},C_{2},\psi>0$ satisfying $C_{2}\geq 1,\quad\psi\neq C_{2}+1$ as well as $*\in\left\\{\leq,\,\geq\right\\}$. Suppose also that for each $\ell\geq 0$ we have $a_{\ell+1}~{}*~{}\xi_{\ell}+(1-\zeta_{\ell})a_{\ell},\qquad\zeta_{\ell}\in[0,1]$ (6.7) with $a_{0}\in\mathbb{R}$ given and that there exist $C_{1}^{\prime},C_{2}^{\prime}>0$ so that $\left|\xi_{\ell}-C_{1}\ell^{-\psi}\right|\leq C_{1}^{\prime}\ell^{-1-\psi},\qquad\left|\zeta_{\ell}-C_{2}\ell^{-1}\right|\leq C_{2}^{\prime}\ell^{-2}.$ Then $a_{\ell+1}*\frac{\ell^{1-\psi}}{1-\psi+C_{2}}\left(1+O(\ell^{-1})\right)+e^{-C_{2}\gamma}\ell^{-C_{2}}a_{0}\left(1+O(\ell^{-1})\right)$ (6.8) where $\gamma$ is the Euler-Mascheroni constant and the implied constants depend only $C_{1},C_{2},C_{1}^{\prime},C_{2}^{\prime}.$ ###### Proof. By unfolding the recursion (6.7) we find $\displaystyle a_{\ell+1}~{}*~{}\sum_{\ell^{\prime}=1}^{\ell}\xi_{\ell^{\prime}}\prod_{\ell^{\prime\prime}=\ell^{\prime}+1}^{\ell}\left(1-\zeta_{\ell^{\prime\prime}}\right)+a_{0}\prod_{\ell^{\prime\prime}=0}^{\ell}(1-\zeta_{\ell^{\prime\prime}}).$ We have $\displaystyle\prod_{\ell^{\prime\prime}=1}^{\ell}(1-\zeta_{\ell^{\prime\prime}})$ $\displaystyle=\exp\left[\sum_{\ell^{\prime\prime}=1}^{\ell}\log\left(1-C_{2}(\ell^{\prime\prime})^{-1}+O(\ell^{-2})\right)\right]$ $\displaystyle=\exp\left[O(\ell^{-1})+\sum_{\ell^{\prime\prime}=1}^{\ell}-C_{2}(\ell^{\prime\prime})^{-1}\right]$ $\displaystyle=\exp\left[O(\ell^{-1})-C_{2}\log(\ell)-C_{2}\gamma\right]$ $\displaystyle=e^{-C_{2}\gamma}\ell^{-C_{2}}\left(1+O(\ell^{-1})\right).$ This gives the second term in (6.8). 
For the first term, we write $\displaystyle\sum_{\ell^{\prime}=1}^{\ell}\xi_{\ell^{\prime}}\prod_{\ell^{\prime\prime}=\ell^{\prime}+1}^{\ell}\left(1-\zeta_{\ell^{\prime\prime}}\right)$ $\displaystyle=\sum_{\ell^{\prime}=1}^{\ell}\xi_{\ell^{\prime}}\exp\left[\sum_{\ell^{\prime\prime}=\ell^{\prime}+1}^{\ell}\log\left(1-\zeta_{\ell^{\prime\prime}}\right)\right]$ $\displaystyle=\sum_{\ell^{\prime}=1}^{\ell}\xi_{\ell^{\prime}}\exp\left[\sum_{\ell^{\prime\prime}=\ell^{\prime}+1}^{\ell}-C_{2}(\ell^{\prime\prime})^{-1}+O((\ell^{\prime\prime})^{-2})\right]$ $\displaystyle=\sum_{\ell^{\prime}=1}^{\ell}C_{1}(\ell^{\prime})^{-\psi}(1+O((\ell^{\prime})^{-1}))\exp\left[-C_{2}\log\left(\frac{\ell}{\ell^{\prime}}\right)+O((\ell^{\prime})^{-1})\right]$ $\displaystyle=\ell^{-C_{2}}\sum_{\ell^{\prime}=1}^{\ell}C_{1}(\ell^{\prime})^{-\psi+C_{2}}(1+O((\ell^{\prime})^{-1}))$ $\displaystyle=\frac{C_{1}}{1+C_{2}-\psi}\ell^{1-\psi}\left(1+O(\ell^{-1})\right).$ This completes the proof of (6.8). ∎ The second result we will need is the following simple Lemma about Gaussian Integrals. ###### Lemma 6.3. Fix $r\geq 1$, an $r\times r$ matrix $\Sigma$ and a measurable function $g:\mathbb{R}^{r}\rightarrow\mathbb{R}$ that is polynomially bounded: $\exists k\geq 1\text{ s.t. }\sup_{x\in\mathbb{R}^{r}}\left|\left(1+\left|\left|x\right|\right|\right)^{-k}g(x)\right|<\infty.$ If $X$ is a standard Gaussian random vector in $\mathbb{R}^{r}$, then the function $\Sigma\mapsto\mathbb{E}\left[g\left(\Sigma^{1/2}X\right)\right]$ (6.9) is smooth on the open set of strictly positive definite $r\times r$ matrices. Further, if $g$ is a smooth function and each of its derivatives is polynomially bounded, then the map (6.9) extends to a smooth function on the closed set of positive semi-definite matrices and, in particular, $\frac{\partial}{\partial\Sigma_{ij}}\mathbb{E}\left[g\left(\Sigma^{1/2}X\right)\right]=\mathbb{E}\left[(\partial_{i}\partial_{j}g)(\Sigma^{1/2}X)\right].$ (6.10) ###### Proof. On the open set of strictly positive definite matrices, the Gaussian density $\Sigma\mapsto\exp\left[-\frac{1}{2}x^{T}\Sigma^{-1}x-\frac{1}{2}\log\det(2\pi\Sigma)\right]$ is a smooth function of $\Sigma$ with derivatives that are polynomials in $x$ and the entries of $\Sigma,\Sigma^{-1}$. The assumption that $g$ is polynomially bounded shows that we may differentiate under the integral sign and see that $\mathbb{E}\left[g(\Sigma^{1/2}X)\right]=\int_{\mathbb{R}^{r}}g(x)\exp\left[-\frac{1}{2}x^{T}\Sigma^{-1}x-\frac{1}{2}\log\det(2\pi\Sigma)\right]dx$ is indeed a smooth function of $\Sigma$. Suppose instead that $g$ is a smooth function and that its derivatives are all polynomially bounded. Suppose first that $g$ is in fact a Schwartz function.
Then, writing $\widehat{g}$ for its Fourier transform we have $\mathbb{E}\left[g(\Sigma^{1/2}X)\right]=\int_{\mathbb{R}^{r}}\widehat{g}(\xi)\exp\left[-\frac{1}{2}\xi^{T}\Sigma\xi\right]d\xi.$ Since $\widehat{g}$ is also Schwartz, we may differentiate under the integral sign to obtain $\frac{\partial}{\partial\Sigma_{ij}}\mathbb{E}\left[g(\Sigma^{1/2}X)\right]=-\int_{\mathbb{R}^{r}}\xi_{i}\xi_{j}\widehat{g}(\xi)\exp\left[-\frac{1}{2}\xi^{T}\Sigma\xi\right]d\xi=\mathbb{E}\left[\partial_{x_{i}}\partial_{x_{j}}\bigg{|}_{x=\Sigma^{1/2}X}g(x)\right].$ (6.11) Finally, if $g$ is not Schwartz but is smooth with all derivatives being polynomially bounded, we consider the convolution $g_{\epsilon}(x):=(g*\psi_{\epsilon})(x),\qquad\psi_{\epsilon}(y)=\exp\left[-\frac{\left|\left|y\right|\right|^{2}}{2\epsilon}-\frac{1}{2}\log(2\pi\epsilon)\right].$ Then, $g_{\epsilon}$ is Schwartz for all $\epsilon>0$. Moreover, note that $g_{\epsilon}(\Sigma^{1/2}x)$ is also polynomially bounded for any PSD matrix $\Sigma.$ Specifically, for any fixed PSD matrix $\Sigma_{0}$ we have for any $k\geq 1$ $\displaystyle\sup_{\epsilon\in[0,1]}\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\\\ \Sigma\text{ PSD}\end{subarray}}\sup_{x\in\mathbb{R}^{r}}\left|(1+\left|\left|x\right|\right|)^{-k}g_{\epsilon}(\Sigma^{1/2}x)\right|$ $\displaystyle\qquad=\sup_{\epsilon\in[0,1]}\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\\\ \Sigma\text{ PSD}\end{subarray}}\sup_{x\in\mathbb{R}^{r}}\left|(1+\left|\left|x\right|\right|)^{-k}\int_{\mathbb{R}^{r}}g(\Sigma^{1/2}(x-y))\psi_{\epsilon}(y)dy\right|$ $\displaystyle\qquad\leq\sup_{\epsilon\in[0,1]}\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\\\ \Sigma\text{ PSD}\end{subarray}}\sup_{x\in\mathbb{R}^{r}}\left\\{(1+\left|\left|x\right|\right|)^{-k}\int_{\mathbb{R}^{r}}\left(1+\left|\left|\Sigma^{1/2}(x-y)\right|\right|^{k}\right)\psi_{\epsilon}(y)dy\right\\}$ $\displaystyle\qquad<\infty,$ (6.12) Note that there exists $K>0$ depending only $k,r,\Sigma_{0}$ so that $\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\end{subarray}}\left|\left|\Sigma^{1/2}(x-y)\right|\right|^{k}\leq K\left(1+\left|\left|\Sigma_{0}^{1/2}\right|\right|\right)^{k}(\left|\left|x\right|\right|^{k}+\left|\left|y\right|\right|^{k}).$ Hence, since $\sup_{\epsilon\in[0,1]}\int_{\mathbb{R}^{r}}\left|\left|y\right|\right|^{k}\psi_{\epsilon}(y)dy<\infty$ we find that $\displaystyle\sup_{\epsilon\in[0,1]}\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\\\ \Sigma\text{ PSD}\end{subarray}}\sup_{x\in\mathbb{R}^{r}}\left|(1+\left|\left|x\right|\right|)^{-k}g_{\epsilon}(\Sigma^{1/2}x)\right|<\infty.$ (6.13) The estimate above allows us to use dominate convergence to see that for any PSD $\Sigma$ $\mathbb{E}\left[g(\Sigma^{1/2}X)\right]=\lim_{\epsilon\rightarrow 0}\mathbb{E}\left[g_{\epsilon}(\Sigma^{1/2}X)\right].$ (6.14) To complete the proof we note that $g_{\epsilon}$ and $\partial_{i}\partial_{j}g_{\epsilon}$ are both Schwartz for any positive $\epsilon$. Moreover, $\partial_{i}\partial_{j}\partial_{k}\partial_{m}g_{\epsilon}$ satisfies (6.13). 
Hence, we conclude by applying (6.11) that for any PSD matrix $\Sigma_{0}$ there exists $C>0$ so that $\displaystyle\sup_{\begin{subarray}{c}\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1\\\ \Sigma\text{ PSD}\end{subarray}}\sup_{\epsilon\in[0,1]}\frac{\left|\mathbb{E}\left[g_{\epsilon}(\Sigma^{1/2}X)\right]-\mathbb{E}\left[g_{\epsilon}(\Sigma_{0}^{1/2}X)\right]-\sum_{i,j=1}^{r}\mathbb{E}\left[(\partial_{i}\partial_{j}g_{\epsilon})(\Sigma_{0}^{1/2}X)\right]\left(\Sigma-\Sigma_{0}\right)_{ij}\right|}{\left|\left|\Sigma-\Sigma_{0}\right|\right|^{2}}$ $\displaystyle\qquad\leq\sup_{\epsilon\in[0,1]}\sup_{\left|\left|\Sigma-\Sigma_{0}\right|\right|\leq 1}\sum_{i,j,k,m=1,\ldots,r}\left|\mathbb{E}\left[(\partial_{i}\partial_{j}\partial_{k}\partial_{m})g_{\epsilon}(\Sigma^{1/2}X)\right]\right|$ $\displaystyle\qquad\leq C.$ Thus, if $\Sigma-\Sigma_{0}/\left|\left|\Sigma-\Sigma_{0}\right|\right|\rightarrow\Sigma_{1}$, we find by applying (6.14) to $\partial_{i}\partial_{j}g$ that $\displaystyle\lim_{\Sigma\rightarrow\Sigma_{0}}\frac{\mathbb{E}\left[g(\Sigma^{1/2}X)\right]-\mathbb{E}\left[g(\Sigma_{0}^{1/2}X)\right]}{\left|\left|\Sigma-\Sigma_{0}\right|\right|}$ $\displaystyle=\lim_{\Sigma\rightarrow\Sigma_{0}}\lim_{\epsilon\rightarrow 0}\frac{\mathbb{E}\left[g_{\epsilon}(\Sigma^{1/2}X)\right]-\mathbb{E}\left[g_{\epsilon}(\Sigma_{0}^{1/2}X)\right]}{\left|\left|\Sigma-\Sigma_{0}\right|\right|}$ $\displaystyle=\lim_{\Sigma\rightarrow\Sigma_{0}}\lim_{\epsilon\rightarrow 0}\left\\{\sum_{i,j=1}^{r}\mathbb{E}\left[(\partial_{i}\partial_{j}g_{\epsilon}(\Sigma_{0}^{1/2}X))\right]\frac{(\Sigma-\Sigma_{0})_{ij}}{\left|\left|\Sigma-\Sigma_{0}\right|\right|}\right\\}$ $\displaystyle=\sum_{i,j=1}^{r}\mathbb{E}\left[\partial_{i}\partial_{j}g(\Sigma_{0}^{1/2}X)\right](\Sigma_{1})_{ij}.$ This shows that (6.10) holds for any $g$ that is smooth, completing the proof of Lemma 6.3. ∎ ## 7 Proof of Theorem 3.1 Let us first recall the notation. We fix $r\geq 1$ and assume that $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ satisfies assumption 2.1 with this value of $r$. We also fix a finite collection $x_{\mathcal{A}}=\left\\{x_{\alpha},\alpha\in\mathcal{A}\right\\}\subseteq\mathbb{R}^{n_{0}}$ of distinct network inputs and $p$ directional derivatives $d_{1},\ldots,d_{p}$ as in (3.2). We denote by $N(p,r)=\\#\left\\{J=\left(j_{1},\ldots,j_{p}\right)\in\mathbb{N}^{p}~{}|~{}j_{1}+\cdots+j_{p}\leq r\right\\},$ which computes the number of possible derivatives of order at most $r$ in the $p$ directional derivatives $d_{j}$. We also denote by $\mathcal{F}^{(\ell)}$ the sigma algebra generated by the weights and biases in layers up to and including $\ell$. The starting point for proving Theorem 3.1 is the following simple but fundamental observation. ###### Lemma 7.1. For each $\ell\geq 0$, conditional on $\mathcal{F}^{(\ell)}$, $\left\\{\left(D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)},\,\alpha\in\mathcal{A},\,\left|J\right|\leq r\right)\right\\}_{i=1}^{n_{\ell+1}}$ is a collection of $n_{\ell+1}$ iid centered Gaussians of dimension $N(p,r)$. ###### Proof. 
The defining recursion (2.1) of a fully connected network yields for each $\alpha,J$ $\displaystyle D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)}$ $\displaystyle=D_{\alpha}^{J}\left\\{b_{i}^{(\ell+1)}+\sum_{j=1}^{n_{\ell}}W_{ij}^{(\ell+1)}\sigma\left(z_{j;\alpha}^{(\ell)}\right)\right\\}=\delta_{\left|J\right|=0}b_{i}^{(\ell+1)}+\sum_{j=1}^{n_{\ell}}W_{ij}^{(\ell+1)}D_{\alpha}^{J}\sigma\left(z_{j;\alpha}^{(\ell)}\right).$ (7.1) Note that $D_{\alpha}^{J}\sigma(z_{j;\alpha}^{{}_{(\ell)}})$ are measurable with respect to $\mathcal{F}^{(\ell)}$. The conclusion now follows since the weights $W_{ij}^{{}_{(\ell+1)}},\,j=1\ldots,n_{\ell}$ and bias $b_{i}^{{}_{(\ell+1)}}$ are centered Gaussians and are independent for different $i$. ∎ Thus, the structure of $z_{\alpha}^{{}_{(\ell+1)}}$ and its derivatives is always that of a Gaussian field after conditioning on $\mathcal{F}^{{}_{(\ell)}}$. To ease the notation in what comes given $f:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R}$, let us abbreviate $f\left(z_{j;\mathcal{A}}^{(\ell)}\right):=f\left(D_{\alpha}^{J}z_{j;\alpha}^{(\ell)},\,\alpha\in\mathcal{A},\,\left|J\right|\leq r\right),\qquad j\in[n_{\ell}].$ Next, we remind the reader that given $f:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R}$, which is measurable and polynomially bounded, the corresponding collective observable $\mathcal{O}_{f}^{(\ell)}$ at layer $\ell$ is $\mathcal{O}_{f}^{(\ell)}=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}}f\left(z_{j;\mathcal{A}}^{(\ell)}\right)$ and that the statement (A.3) in Proposition A.1 ensures $\sup_{n\geq 1}\mathbb{E}\left[\left|\mathcal{O}_{f}^{(\ell)}\right|\right]<\infty.$ (7.2) Recall also our notation for the conditional covariance $\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}:=\mathrm{Cov}\left(z_{i;\alpha_{1}}^{(\ell+1)},z_{i;\alpha_{2}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right)=C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\sigma\left(z_{j;\alpha_{1}}^{(\ell)}\right)\sigma\left(z_{j;\alpha_{2}}^{(\ell)}\right)$ and note that both it and its derivatives $\displaystyle D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}$ $\displaystyle=\mathrm{Cov}\left(D_{\alpha_{1}}^{J_{1}}z_{i;\alpha_{1}}^{(\ell+1)},D_{\alpha_{2}}^{J_{2}}z_{i;\alpha_{2}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right)$ $\displaystyle=D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\left(C_{b}+\frac{C_{W}}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\sigma\left(z_{j;\alpha_{1}}^{(\ell)}\right)\sigma\left(z_{j;\alpha_{2}}^{(\ell)}\right)\right)$ are collective observables at layer $\ell$. Our first application of Lemma 7.1 is the following reduction of the study of cumulants of $D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)}$ to the cumulants of certain collective observables. ###### Proposition 7.2. 
Fix $k,\ell\geq 1$ and $p$-dimensional multi-indices $J_{1},\ldots,J_{k}$ with $\left|J_{i}\right|\leq r.$ If $k$ is odd, then $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},\ldots,D_{\alpha_{k}}^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell+1)}\right)=0$ In contrast, if $k$ is even $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},\ldots,D_{\alpha_{k}}^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell+1)}\right)~{}=~{}\text{ finite sums of }\kappa\left(\mathcal{O}_{f_{1}}^{(\ell)},\ldots,\mathcal{O}_{f_{k/2}}^{(\ell)}\right),$ where $\mathcal{O}_{f_{j}}^{(\ell)}$ are collective observables of the form $D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)},\qquad\left|J_{1}\right|,\left|J_{2}\right|\leq r.$ (7.3) ###### Proof. Using (6.1) and recalling that $\mathcal{F}^{(\ell)}$ is the sigma algebra generated by weights and biases in layers up to and including $\ell$, we have that $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},\ldots,D_{\alpha_{k}}^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell+1)}\right)$ equals $\displaystyle\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\end{subarray}}\kappa\left(\kappa\left(\left(D^{J}z^{(\ell+1)}\right)_{\pi_{1}}\big{|}\mathcal{F}^{(\ell)}\right),\ldots,\kappa\left(\left(D^{J}z^{(\ell+1)}\right)_{\pi_{B}}\big{|}\mathcal{F}^{(\ell)}\right)\right),$ (7.4) where the sum is over partitions $\pi$ of $[k]$ and for $b=1,\ldots,B$ we’ve abbreviated $\left(D^{J}z^{(\ell+1)}\right)_{\pi_{b}}:=\left(D_{\alpha_{t}}^{J_{t}}z_{i_{t};\alpha_{t}}^{(\ell+1)},\quad t\in\pi_{b}\right).$ By Lemma 7.1, $\\{(D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)},\,\alpha\in\mathcal{A},\,\left|J\right|\leq d),\,i=1,\ldots,n_{\ell+1}\\}$ are iid centered Gaussians conditional on $\mathcal{F}^{(\ell)}$. Hence, by the properties (6.2) and (6.3) and (6.4) from Proposition 6.1, in the sum over partitions above, a term is non-zero only if $\forall b\in[B],\qquad\left|\pi_{b}\right|=2\quad\text{and}\quad i_{\pi_{b}(1)}=i_{\pi_{b}(2)}$ This proves that $\kappa\left(D^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},\ldots,D^{J_{k}}z_{i_{k};\alpha_{k}}^{(\ell+1)}\right)$ vanishes if $k$ is odd. To treat the case when $k$ is even observe that by (7.1) $\displaystyle\kappa\left(D^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},D^{J_{2}}z_{i_{2};\alpha_{2}}^{(\ell+1)}\big{|}\mathcal{F}^{(\ell)}\right)$ $\displaystyle=\delta_{i_{1}i_{2}}D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}.$ Substituting this into (7.4) completes the proof. ∎ When $k=2$, Proposition 7.2 and our assumption (2.1) shows that for each $\ell\geq 1$, any $i_{1},i_{2}\in[n_{\ell+1}],\,\alpha\in\mathcal{A},$ and multi-indices $J_{1},J_{2}$ of order at most $d$, there exists a polynomially bounded function $f:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},d)}\rightarrow\mathbb{R}$ for which $\kappa\left(D_{\alpha_{1}}^{J_{1}}z_{i_{1};\alpha_{1}}^{(\ell+1)},D_{\alpha_{2}}^{J_{2}}z_{i_{2};\alpha_{2}}^{(\ell+1)}\right)=\mathbb{E}\left[\mathcal{O}_{f}^{(\ell)}\right]$ In light of (7.2) this proves Theorem 3.1 when $k=2$. 
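The $k=2$ identity just obtained, namely that the covariance of two (derivatives of) outputs equals the expectation of a collective observable at the previous layer, is easy to probe numerically. The following minimal NumPy sketch (illustrative only; the tanh non-linearity, the widths, and the two inputs are arbitrary choices) compares the Monte Carlo covariance of a one-hidden-layer network's output at two inputs $x_{1},x_{2}$ with $\mathbb{E}\big[\Sigma^{(1)}_{\alpha_{1}\alpha_{2}}\big]$.

```python
import numpy as np

rng = np.random.default_rng(3)

def covariance_check(n=50, n0=4, C_b=0.1, C_W=1.5, trials=100_000):
    """One-hidden-layer tanh network evaluated at two fixed inputs x_1, x_2.
    Compares Cov(z^(2)(x_1), z^(2)(x_2)) estimated by Monte Carlo with
    E[Sigma^(1)_{12}] = C_b + C_W * E[tanh(z^(1)(x_1)) tanh(z^(1)(x_2))]."""
    x1, x2 = rng.standard_normal(n0), rng.standard_normal(n0)
    # exact joint law of (z_j^(1)(x_1), z_j^(1)(x_2)): centered Gaussian with
    # covariance K_ab = C_b + (C_W / n_0) <x_a, x_b>, iid over neurons j
    K = C_b + (C_W / n0) * np.array([[x1 @ x1, x1 @ x2], [x1 @ x2, x2 @ x2]])
    A = np.linalg.cholesky(K)
    z1 = rng.standard_normal((trials, n, 2)) @ A.T
    s = np.tanh(z1)
    Sigma12 = C_b + C_W * np.mean(s[..., 0] * s[..., 1], axis=1)
    # output layer: a single neuron z^(2) = b^(2) + W^(2) tanh(z^(1))
    W2 = rng.standard_normal((trials, n)) * np.sqrt(C_W / n)
    b2 = rng.standard_normal(trials) * np.sqrt(C_b)
    out = b2[:, None] + np.einsum("tj,tjd->td", W2, s)
    return np.cov(out[:, 0], out[:, 1])[0, 1], Sigma12.mean()

emp, pred = covariance_check()
print(f"Monte Carlo Cov(z(x1), z(x2)) = {emp:.4f}    E[Sigma^(1)_12] = {pred:.4f}")
```

The two printed numbers should agree up to Monte Carlo error (of order $10^{-2}$ with these settings).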
Further, since the cumulant of $2$ or more random variables is shift-invariant, we may assume for $k\geq 3$ that the collective observables $D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}$ in Proposition 7.2 are replaced by their zero mean versions: $\Delta_{\alpha_{1}\alpha_{2}}^{J_{1},J_{2},(\ell)}:=D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}-\mathbb{E}\left[D_{\alpha_{1}}^{J_{1}}D_{\alpha_{2}}^{J_{2}}\Sigma_{\alpha_{1}\alpha_{2}}^{(\ell)}\right].$ (7.5) Hence, Theorem 3.1 is a special case of the following result. ###### Theorem 7.3. Fix $k,m\geq 1$. Consider any $m$-tuple $F=\left(f_{1},\ldots,f_{m}\right)$ consisting of measurable, functions $f_{i}:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R},\quad i=1,\ldots,m\qquad$ that are polynomially bounded and satisfy $\mathbb{E}\left[\mathcal{O}_{f_{i}}^{(\ell)}\right]=\mathbb{E}\left[f_{i}\left(z_{1;\mathcal{A}}^{(\ell)}\right)\right]=0,\qquad i=1,\ldots,m.$ Define the $m-$tuple of collective observables $\overrightarrow{\mathcal{O}}_{F}^{(\ell)}:=\left(\mathcal{O}_{f_{i}}^{(\ell)},\,i=1,\ldots,m\right).$ Consider further any measurable polynomially bounded functions $g_{j}:\mathbb{R}^{m}\rightarrow\mathbb{R},\quad j=1,\ldots,k.$ which are smooth in a neighborhood of $0$. If $f_{i}$ and $\sigma$ are in fact smooth, then, for every $\ell\geq 1$ $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(g_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell)}\right),\ldots,g_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell)}\right)\right)\right|<\infty$ (7.6) Moreover, (7.6) holds without the assumption that $f_{i},\sigma$ are smooth provided that for each $\ell$ the vector of iterated directional derivatives $(D_{\alpha}^{J}z_{i;\alpha}^{(\ell)},\,\left|J\right|\leq r,\alpha\in\mathcal{A})$ of order at most $r$ is non-degenerate in the sense of (3.3). ###### Proof. Our starting point is a reduction of Theorem 7.3 to the case when $g_{j}$ are polynomials. This is related to a technique called the delta method in some parts of the mathematical statistics literature [59]. ###### Proposition 7.4 (Polynomials are Enough for Theorem 7.3). Fix $m\geq 1$ and suppose that for each $n\geq 1$ we have an $m-$tuple $X_{n}=\left(X_{n,1},\ldots,X_{n,m}\right)$ of mean $0$ random variables that possess bounded moments of all orders: $\sup_{n\geq 1}\left|\mathbb{E}\left[X_{n,1}^{q_{1}}\cdots X_{n,m}^{q_{m}}\right]\right|<\infty,\qquad\forall\,q_{1},\ldots,q_{m}\geq 0.$ (7.7) Suppose for any given polynomials $p_{1},\ldots,p_{k}$ in $m$ variables we have $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(p_{1}(X_{n}),\ldots,p_{k}(X_{n})\right)\right|<\infty.$ (7.8) Then, for any measurable, polynomially bounded functions $g_{j}:\mathbb{R}^{m}\rightarrow\mathbb{R},\,j=1,\ldots,k,$ which are smooth in some fixed neighborhood of $0$ $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(g_{1}\left(X_{n}\right),\ldots,g_{k}\left(X_{n}\right)\right)\right|<\infty.$ (7.9) ###### Proof. We begin with the following simple Lemma, which allows us to translate between the cumulants bounds (7.8) and high probability bounds. ###### Lemma 7.5. For any $q\geq 1$ $\sup_{n\geq 1}\sup_{1\leq i\leq m}\left|n^{\lceil\frac{q}{2}\rceil}\mathbb{E}\left[X_{i,n}^{q}\right]\right|<\infty.$ ###### Proof. 
We have by property (6.5) from Proposition 6.1 that $\displaystyle\mathbb{E}\left[X_{i;n}^{q}\right]=\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\\\ \pi\in P(m)\end{subarray}}\prod_{b=1}^{B}\kappa\big{(}{\underbrace{X_{i;n},\ldots,X_{i;n}}_{\left|\pi_{b}\right|\text{ times}}}\big{)}.$ Since by assumption $X_{i;n}$ has mean $0$, we have $\kappa(X_{i;n})=\mathbb{E}\left[X_{i;n}\right]=0.$ Thus, the only partitions $\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\in S(m)$ that give rise to non-zero terms in the expression above must have $B\leq\lfloor\frac{q}{2}\rfloor$. Moreover, for any such partition, we have $\left\lceil\frac{q}{2}\right\rceil=q-\left\lfloor\frac{q}{2}\right\rfloor=-\left\lfloor\frac{q}{2}\right\rfloor+\sum_{b=1}^{B}\left|\pi_{b}\right|\leq\sum_{b=1}^{B}\left(\left|\pi_{b}\right|-1\right).$ Hence, we find $\displaystyle\sup_{n\geq 1}\left|n^{\lceil\frac{q}{2}\rceil}\mathbb{E}\left[X_{i;n}^{q}\right]\right|$ $\displaystyle\leq\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\\\ \pi\in P(m),\,\left|\pi_{b}\right|\geq 2\end{subarray}}\prod_{b=1}^{B}\sup_{n\geq 1}\left|n^{\left|\pi_{b}\right|-1}\kappa\big{(}{\underbrace{X_{i;n},\ldots,X_{i;n}}_{\left|\pi_{b}\right|\text{ times}}}\big{)}\right|<\infty,$ where the final inequality follows from the assumption (7.8). ∎ Applying Markov’s inequality and Lemma 7.5 shows that for any $q\geq 1$ we have $\sup_{n\geq 1}n^{q}\mathbb{P}\left(S_{n}^{c}\right)<\infty,\qquad S_{n}:=\left\\{\left|X_{i;n}\right|\leq n^{-1/4},\qquad i=1,\ldots,m\right\\}.$ (7.10) This localization estimate allows us to replace each $g_{i}$ by its Taylor expansion around $0$. Indeed, note that $\kappa\left(g_{1}(X_{n}),\ldots,g_{k}(X_{n})\right)=P\left(\mathbb{E}\left[g_{1}(X_{n})^{q_{1}}\cdots g_{k}(X_{n})^{q_{k}}\right],\quad q_{1}+\cdots+q_{k}\leq k\right)$ for some universal polynomial $P$ evaluated at the mixed moments of $X_{n}$ (the formula for this polynomial is given in (6.6) but is not important). Moreover, using the growth assumption (7.7) on $X$ and the fact that $g_{i}$ are polynomially bounded we find that $\sup_{n\geq 1}\mathbb{E}\left[g(X_{n})^{q_{1}}\cdots g_{k}(X_{n})^{q_{k}}\right]<\infty,\qquad\forall\,q_{1},\ldots,q_{k}\geq 1.$ (7.11) This, in combination with the localization estimate (7.10) applied with $q=k-1$ yields $\kappa\left(g_{1}(X_{n}),\ldots,g_{k}(X_{n})\right)=P\left(\mathbb{E}\left[{\bf 1}_{S_{n}}g_{1}(X_{n})^{q_{1}}\cdots g_{k}(X_{n})^{q_{m}}\right],\quad q_{1}+\cdots+q_{m}\leq k\right)+O(n^{-k+1}).$ Note that for $n$ sufficiently large, on the event $S_{n}$, the argument $X_{n}$ is any fixed neighborhood of $0$. Hence, we may write $g_{j}(X_{n})=p_{j}(X_{n})+O(n^{-k+1}),$ where $p_{j}$ represents the $q-$th order Taylor expansion of $g_{j}$ around $0$ with $q$ sufficiently large and the constant in the error term is uniformly bounded. This yields $\kappa\left(g_{1}(X_{n}),\ldots,g_{k}(X_{n})\right)=P\left(\mathbb{E}\left[{\bf 1}_{S_{n}}p_{1}(X_{n})^{q_{1}}\cdots p_{k}(X_{n})^{q_{m}}\right],\quad q_{1}+\cdots+q_{m}\leq k\right)+O(n^{-k+1}).$ Finally, using the mixed moment estimates (7.7) and the localization estimate (7.10), we conclude $\displaystyle\kappa\left(g_{1}(X_{n}),\ldots,g_{k}(X_{n})\right)$ $\displaystyle=P\left(\mathbb{E}\left[p_{1}(X_{n})^{q_{1}}\cdots p_{k}(X_{n})^{q_{m}}\right],\quad q_{1}+\cdots+q_{m}\leq k\right)+O(n^{-k+1})$ $\displaystyle=\kappa\left(p_{1}(X_{n}),\ldots,p_{k}(X_{n})\right)+O(n^{-k+1}).$ Recalling (7.8) completes the proof. 
∎ Proposition 7.4 shows that, in establishing the conclusion (7.6) of Theorem 7.3, it is sufficient to assume that $g_{j}$ are polynomials. The remainder of the proof of Theorem 7.3 is by induction on $\ell$, starting with $\ell=1$. In view of Proposition 7.4, the following result establishes the base case. ###### Proposition 7.6 (Base Case: Theorem 7.3 holds for polynomials at $\ell=1$). Fix $k,m\geq 1$ and suppose $f_{i},\,i=1,\ldots,m$ are as in the statement of Theorem 7.3. Then, if $p_{1},\ldots,p_{k}$ are any polynomials in $m$ variables, we have $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)\right|<\infty.$ ###### Proof. Since cumulants are multi-linear, we may and shall assume that $p_{a}$ are monomials: $p_{a}(x)=x^{Q^{(a)}}:=x_{1}^{q_{1}^{(a)}}\cdots x_{m}^{q_{m}^{(a)}},\qquad x=\left(x_{1},\ldots,x_{m}\right),\quad Q^{(a)}=\left(q_{1}^{(a)},\ldots,q_{m}^{(a)}\right).$ (7.12) Recall that $\mathcal{O}_{f_{i}}^{(1)}:=n_{1}^{-1}\sum_{j=1}^{n_{1}}f_{i}\left(z_{j;\mathcal{A}}^{(1)}\right).$ Therefore, writing $q^{(a)}:=q_{1}^{(a)}+\cdots+q_{m}^{(a)}$ we find $\displaystyle p_{a}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)=n_{1}^{-q^{(a)}}\sum_{J^{(a)}}f_{J^{(a)}},\qquad f_{J^{(a)}}:=\prod_{i=1}^{m}\prod_{q=1}^{q_{i}^{(a)}}f_{i}\big{(}z_{j_{q;i}^{(a)};\mathcal{A}}^{(1)}\big{)},$ where the sum is over tuples of multi-indices $J^{(a)}=\left(J_{1}^{(a)},\ldots,J_{m}^{(a)}\right),\qquad J_{i}^{(a)}=\left(j_{q;i}^{(a)}\in[n_{1}],\,i\in[m],\,q\in[q_{i}^{(a)}]\right).$ (7.13) Hence, using that cumulants are multi-linear (see (6.3)), we obtain $\displaystyle\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)=n_{1}^{-(q^{(1)}+\cdots+q^{(k)})}\sum_{\begin{subarray}{c}J^{(1)},\ldots,J^{(k)}\end{subarray}}\kappa\left(f_{J^{(1)}},\ldots,f_{J^{(k)}}\right),$ where the sum extends over ordered collections $\left(J^{(a)},\,1\leq a\leq k\right)$ of multi-indices as in (7.13). The expression on the right hand side can be interpreted as an average. Namely, we can think of the indices $j_{q;i}^{(a)}\in[n_{1}]$ are chosen uniformly from $[n_{1}]$ and independently for all $i,q,a$. Writing $\mathcal{E}$ for the average with respect to this distribution, we obtain $\displaystyle\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)=\mathcal{E}\left[\kappa\left(f_{J^{(1)}},\ldots,f_{J^{(k)}}\right)\right].$ Our goal is to show that this average is small. To quantify this, let us associate to each collection $\left(J^{(a)},\,a\in[k]\right)$ a graph $\mathcal{G}\left(J^{(a)},\,a\in[k]\right)=\left([k],\,E\left(J^{(a)},\,a\in[k]\right)\right),$ (7.14) with vertex set $[k]$ and edge set defined by $(a,a^{\prime})\in\mathcal{E}\left(J^{(a)},\,a\in[k]\right)\quad\Longleftrightarrow\quad\exists i,i^{\prime}\in[m],\,q\in[q_{i}^{(a)}],\,q^{\prime}\in[q_{i^{\prime}}^{(a^{\prime})}]\text{ s.t. 
}j_{q;i}^{(a)}=j_{q^{\prime};i^{\prime}}^{(a^{\prime})}.$ The key point is that in light of the vanishing property (6.2) of cumulants and the fact that neurons at layer $1$ are independent $\mathcal{G}\left(J^{(a)},\,a\in[k]\right)\text{ disconnected }\quad\Longrightarrow\quad\kappa\left(f_{J^{(1)}}^{(1)},\ldots,f_{J^{(k)}}^{(1)}\right)=0.$ Hence, $\displaystyle\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)=\mathcal{E}\left[{\bf 1}_{\left\\{\mathcal{G}\left(J^{(a)},\,a\in[k]\right)\text{ connected}\right\\}}\kappa\left(f_{J^{(1)}}^{(1)},\ldots,f_{J^{(k)}}^{(1)}\right)\right].$ Since $f_{i}$ are assumed to be polynomially bounded and the distribution of the neuron pre-activations $z_{i;\alpha}^{{}_{(1)}}$ is that of centered Gaussians with covariance $\mathrm{Cov}\left(z_{i_{1};\alpha_{1}}^{{}_{(1)}},z_{i_{2};\alpha_{2}}^{{}_{(1)}}\right)=\delta_{i_{1}i_{2}}\left(C_{b}+\frac{C_{W}}{n_{0}}\sum_{j=1}^{n_{0}}x_{j;\alpha_{1}}x_{j;\alpha_{2}}\right),$ we have for any fixed $k$ that $\sup_{n\geq 1}\sup_{J^{(1)},\ldots,J^{(k)}}\left|\kappa\left(f_{J^{(1)}}^{(1)},\ldots,f_{J^{(k)}}^{(1)}\right)\right|<\infty.$ Hence, $\displaystyle\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)=O\left(\mathcal{P}\left(\mathcal{G}\left(J^{(a)},\,a\in[k]\right)\text{ connected}\right)\right),$ where $\mathcal{P}$ is the probability measure associated to our random draw of $J^{(1)},\ldots,J^{(k)}$. To complete the proof, note that since $m,q_{i}^{(a)}$ are fixed, by a simple union bound, we obtain $\displaystyle\mathcal{P}\left(\mathcal{G}\left(J^{(a)},\,a\in[k^{\prime}]\right)\text{ connected}~{}\bigg{|}~{}\mathcal{G}\left(J^{(a)},\,a\in[k^{\prime}-1]\right)\text{ connected}\right)=O(n^{-1}).$ Hence, $\displaystyle\mathcal{P}\left(\mathcal{G}\left(J^{(a)},\,a\in[k]\right)\text{ connected}\right)$ $\displaystyle\qquad=\prod_{k^{\prime}=2}^{k}\mathcal{P}\left(\mathcal{G}\left(J^{(a)},\,a\in[k^{\prime}]\right)\text{ connected}~{}\bigg{|}~{}\mathcal{G}\left(J^{(a)},\,a\in[k^{\prime}-1]\right)\text{ connected}\right)$ $\displaystyle\qquad=O(n^{-k+1}).$ (7.15) Thus, $\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(1)}\right)\right)=O(n^{-k+1}),$ as desired. ∎ Propositions 7.4 and 7.6 together show that the conclusion (7.6) of Theorem 7.3 holds at layer $1$. In conjunction with Proposition 7.4, the following result establishes that if the conclusion (7.6) of Theorem 7.3 holds at some layer $\ell\geq 1$ then it also holds at layer $\ell+1$. This will complete the proof of Theorem 7.3 by induction. ###### Proposition 7.7 (Inductive Step: Reducing polynomial cumulants in layer $\ell+1$ to smooth cumulants in layer $\ell$). Fix $\ell\geq 1$. Case 1: Suppose that $\sigma$ is smooth.
Assume that for any collection $F^{\prime}=\left(f_{i}^{\prime}:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R},\,i=1,\ldots,m\right)$ of smooth and polynomially bounded functions and any $g_{j}$ as in the statement of Theorem 7.3 the conclusion (7.6) of Theorem 7.3 holds at layer $\ell$: $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(g_{1}\left(\overrightarrow{\mathcal{O}}_{F^{\prime}}^{(\ell)}\right),\ldots,g_{k}\left(\overrightarrow{\mathcal{O}}_{F^{\prime}}^{(\ell)}\right)\right)\right|<\infty.$ Then, if $p_{1},\ldots,p_{k}$ are any polynomials in $m$ variables, and $F=\left(f_{i},\,i=1,\ldots,m\right)$ is an arbitrary collection of smooth and polynomially bounded functions $f_{i}:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R}$, then $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right)\right)\right|<\infty.$ Case 2: Suppose $\sigma$ is not smooth but satisfies Assumption 2.1 and that $(D_{\alpha}^{J}z_{i;\alpha}^{(\ell)},\,\alpha\in\mathcal{A},\,\left|J\right|\leq r)$ is non-degenerate in the infinite width in the sense of (3.3). Assume that for any collection $F^{\prime}=\left(f_{i}^{\prime}:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},r)}\rightarrow\mathbb{R},\,i=1,\ldots,m\right)$ of measurable and polynomially bounded functions and any $g_{j}$ as in the statement of Theorem 7.3 the conclusion (7.6) of Theorem 7.3 holds at layer $\ell$: $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(g_{1}\left(\overrightarrow{\mathcal{O}}_{F^{\prime}}^{(\ell)}\right),\ldots,g_{k}\left(\overrightarrow{\mathcal{O}}_{F^{\prime}}^{(\ell)}\right)\right)\right|<\infty.$ Then, if $p_{1},\ldots,p_{k}$ are any polynomials in $m$ variables, and $F=\left(f_{i},\,i=1,\ldots,m\right)$ is an arbitrary collection of measurable and polynomially bounded functions $f_{i}:\mathbb{R}^{\left|\mathcal{A}\right|\times N(n_{0},d)}\rightarrow\mathbb{R}$, then $\sup_{n\geq 1}\left|n^{k-1}\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right)\right)\right|<\infty.$ ###### Proof. The proof of Proposition 7.7 is similar but somewhat more involved than that of Proposition 7.6. Moreover, the two cases are proved in essentially the same way, except that we will employ the different cases in Lemma 6.3. We give the details in the case when $\sigma$ is smooth and indicate where the proof is modified slightly to handle the non-smooth case. To start, as in the proof of Proposition 7.6, note that since cumulants are multi-linear (see (6.3)), it is enough to assume that $p_{j}$ are monomials. 
Thus, borrowing the notation from the proof of Proposition 7.6 (see starting (7.12)), we find $\displaystyle\kappa\left(p_{1}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right),\ldots,p_{k}\left(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}\right)\right)=n_{\ell+1}^{-(q^{(1)}+\cdots+q^{(a)})}\sum_{\begin{subarray}{c}J^{(1)},\ldots,J^{(k)}\end{subarray}}\kappa\left(f_{J^{(1)}}^{(\ell+1)},\ldots,f_{J^{(k)}}^{(\ell+1)}\right),$ where $f_{J^{(a)}}^{(\ell+1)}:=\prod_{i=1}^{m}\prod_{q=1}^{q_{i}^{(a)}}f_{j_{\alpha}^{(a)}}^{(\ell+1)},\qquad f_{j}^{(\ell+1)}:=f\left(z_{j;\mathcal{A}}^{(\ell+1)}\right).$ Note that, as in Proposition A.1, the polynomially bounded assumption on $f_{j}$ and the non-linearity $\sigma$ together with the Gaussianity of weights and biases show that $\sup_{n\geq 1}\left|\kappa\left(f_{J^{(1)}}^{(\ell+1)},\ldots,f_{J^{(k)}}^{(\ell+1)}\right)\right|<\infty.$ (7.16) Using the law of total cumulance (6.1), we find that $\kappa\left(p_{1}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}),\ldots,p_{k}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)})\right)$ equals $\displaystyle\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\end{subarray}}n_{\ell+1}^{-(q^{(1)}+\cdots+q^{(a)})}\sum_{\begin{subarray}{c}J^{(1)},\ldots,J^{(k)}\end{subarray}}\kappa\left(\kappa\left(f_{J^{(\pi_{1})}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right),\ldots,\kappa\left(f_{J^{(\pi_{B})}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right)\right),$ where $\pi$ is any partition of $[k]$ and $f_{J^{(\pi_{b})}}:=\left(f_{J^{(a)}},\,a\in\pi_{b}\right).$ Just as in the proof of Proposition 7.6, we may interpret the sum over $J^{(1)},\ldots,J^{(k)}$ as an average over the distribution in which $j_{q;i}^{(a)}$ are drawn iid uniformly on $[n_{\ell+1}]$. Writing $\mathcal{E}$ for averages with respect to this distribution yields $\kappa\left(p_{1}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}),\ldots,p_{k}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)})\right)=\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{b}\right)\end{subarray}}\mathcal{E}\left[\kappa\left(\kappa\left(f_{J^{(\pi_{1})}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right),\ldots,\kappa\left(f_{J^{(\pi_{b})}}^{(\ell+1)}~{}|~{}\mathcal{F}^{(\ell)}\right)\right)\right]$ As in (7.14), we may associate to each collection $J^{(\pi_{t})}$ the graph $\mathcal{G}\left(J^{(\pi_{t})}\right)$. Recall that by Lemma 7.1, the neurons pre-activations $D_{\alpha}^{J}z_{i;\alpha}^{(\ell+1)}$ in layer $\ell+1$ are independent for different $i$ conditional on $\mathcal{F}^{(\ell)}$. Hence, in view of the vanishing property (6.2) of cumulants, we obtain that $\kappa\left(p_{1}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)}),\ldots,p_{k}(\overrightarrow{\mathcal{O}}_{F}^{(\ell+1)})\right)$ equals $\displaystyle\sum_{\begin{subarray}{c}\pi=\left(\pi_{1},\ldots,\pi_{B}\right)\end{subarray}}\mathcal{E}\left[{\bf 1}_{\left\\{\mathcal{G}\left(J^{(\pi_{b})}\right)\text{ connected }\forall
# Mid-infrared Single-photon Detection Using Superconducting NbTiN Nanowires with Sub-15 ps Time Resolution in a Gifford-McMahon Cryocooler Jin Chang Optics Research Group, ImPhys Department, Faculty of Applied Sciences, Delft University of Technology, Delft 2628 CJ, The Netherlands. Single Quantum B.V., Delft 2628 CJ, The Netherlands. _Corresponding <EMAIL_ADDRESS>Johannes W. N. Los Single Quantum B.V., Delft 2628 CJ, The Netherlands. Ronan Gourgues Single Quantum B.V., Delft 2628 CJ, The Netherlands. Stephan Steinhauer KTH Royal Institute of Technology, Department of Applied Physics, Albanova University Centre, Roslagstullsbacken 21, 106 91 Stockholm, Sweden S. N. Dorenbos Single Quantum B.V., Delft 2628 CJ, The Netherlands. Silvania F. Pereira Optics Research Group, ImPhys Department, Faculty of Applied Sciences, Delft University of Technology, Delft 2628 CJ, The Netherlands. H. Paul Urbach Optics Research Group, ImPhys Department, Faculty of Applied Sciences, Delft University of Technology, Delft 2628 CJ, The Netherlands. Val Zwiller KTH Royal Institute of Technology, Department of Applied Physics, Albanova University Centre, Roslagstullsbacken 21, 106 91 Stockholm, Sweden Iman Esmaeil Zadeh Optics Research Group, ImPhys Department, Faculty of Applied Sciences, Delft University of Technology, Delft 2628 CJ, The Netherlands. ###### Abstract Shortly after their inception [1], superconducting nanowire single-photon detectors (SNSPDs) became the leading quantum light detection technology [2]. With the capability of detecting single-photons with near-unity efficiency [3, 4, 5], high time resolution [6, 7], low dark count rate [8], and fast recovery time [9], SNSPDs outperform conventional single-photon detection techniques. However, detecting lower energy single-photons (<0.8 eV) with high efficiency and low timing jitter has remained a challenge. To achieve unity internal efficiency at mid-infrared wavelengths, previous works [10, 11] used amorphous superconducting materials with low energy gaps at the expense of reduced time resolution (close to a nanosecond [12]), and by operating them in complex mK dilution refrigerators. In this work, we provide an alternative approach with SNSPDs fabricated from 5-9.5 nm thick NbTiN superconducting films and devices operated in conventional Gifford-McMahon (GM) cryocoolers. By optimizing the superconducting film deposition process, film thickness and nanowire design, our fiber-coupled devices achieved > 70% system detection efficiency (SDE) at 2 µm and sub-15 ps timing jitter. Furthermore, detectors from the same batch demonstrated unity internal detection efficiency at 3 µm and 80% internal efficiency at 4 µm, paving the road for an efficient mid-infrared single- photon detection technology with unparalleled time resolution and without mK cooling requirements. We also systematically studied the dark count rates (DCRs) of our detectors coupled to different types of mid-infrared optical fibers and black-body radiation filters. This offers insight into the trade- off between bandwidth and dark count rates for mid-infrared SNSPDs. To conclude, this paper significantly extends the working wavelength range for SNSPDs made from polycrystalline NbTiN to 1.5-4 µm, and we expect quantum optics experiments and applications in the mid-infrared range to benefit from this far-reaching technology. ## Introduction Detecting light at the single photon level has enabled novel scientific and industrial applications in recent decades [2]. 
Specifically, near- and mid- infrared detection are crucial for areas such as infrared fluorescence and spectroscopy [13, 14, 15], semiconductor and industrial production monitoring [16, 17], planetary soil studies [18], remote light detection and ranging [19] as well as two-photon entanglement and interference [20] experiments. However, since photon energy is inversely proportional to wavelength, detecting long wavelength photons is intrinsically more challenging than detecting shorter wavelength photons. Generally, Si-based detectors can be used for infrared detection but suffer from a low cut-off wavelength, typically around 1.1 µm [21], making them inefficient for long wavelength photon detection. Si:Sb based impurity band conduction detectors show mid-infrared light detection capability but not at the single-photon level [22]. Similarly, narrow-bandgap photoconductive semiconductors, like HgCdTe, InAs and InGaAs detectors suffer from low efficiency, large dark counts and poor time resolution [23]. In contrast, SNSPDs have high detection efficiency [3, 4, 5], high detection rates [24], low dark count rates (DCR) [8], unprecedented temporal resolution [7, 6] and thus outperform traditional infrared single-photon detectors. In 2001, NbN-based SNSPDs were first demonstrated by detecting 810 nm single- photons [1]. Subsequently, SNSPDs fabricated on different platforms were explored and developed [2]. Although high system detection efficiencies have been realized and reported for the UV [25], visible [26] and near- infrared/telecom [27, 3, 4, 5], detecting single-photons beyond 1550 nm with high efficiency and time resolution has remained a challenge [28]. Early works showed amorphous WSi based SNSPDs could be used for mid-infrared detection. However, these studies employed 4-6 nm thin superconducting films, resulting in low critical currents which is detrimental to the timing jitter. The reported temporal resolution was close to the nanosecond scale [12]. Also, these devices must be operated at sub-Kelvin temperatures, requiring complex dilution refrigerators. NbN-based SNSPDs with ultra-narrow line widths showed sensitivity up to 5 µm (saturated internal efficiency until 2.7 µm) [29]. A consequence of squeezing the nanowire width to around 30 nm makes fabrication challenging and degrades the detectors’ time resolution with the reduced critical current. Alternatively, our previous work [30] showed that by optimizing the stoichiometry of polycrystalline NbTiN film during reactive magnetron co- sputtering deposition, it is possible to make SNSPDs with strongly saturated efficiency plateaus in the near-infrared region at 2.8 K operating temperature, and also high performance at visible wavelengths up to 7 K [31]. Also, relatively thick NbTiN superconducting films were used [6, 32] to improve our detectors’ optical absorption and critical current, therefore enhancing efficiency and time resolution. Building on our previous results, in this work we made SNSPDs from 5, 6.5, 7.5 and 9.5 nm thick NbTiN films with different nanowire designs. First, by characterizing our SNSPDs using flood illumination, we optimized the meander design in terms of internal detection efficiency. Encouraged by our initial characterization results, we fabricated fiber-coupled SNSPDs and achieved a system detection efficiency of >70% at 2 µm in Gifford-McMahon (GM) cryo-coolers (2.4-2.8 K). Broadband detectors were also demonstrated with >50% SDE over the entire 1550-2000 nm range with sub-15 ps timing jitter. 
Furthermore, devices made from 7.5 and 6.5 nm films showed unity internal detection efficiency at 3 µm and 80% internal efficiency at 4 µm. We also systematically studied the DCRs originating from the detector itself (intrinsic DCRs) and from black-body radiation delivered by different types of fibers, as well as coated fibers as a technique to reduce the DCR. These results offer a comprehensive understanding of the origin of dark counts in mid-infrared SNSPD systems.

## 1 SNSPD Fabrication and Measurement Setup

Similar to [30], we deposited superconducting NbTiN films by a reactive magnetron co-sputtering deposition process. The stoichiometry of the films was controlled by adjusting the sputtering powers on the Ti and Nb targets. Film thickness was determined by a calibrated crystal microbalance, and SNSPDs were fabricated as described in [6]. Our fabricated detectors were either tested under flood illumination (figure 1 (a)), or etched into a key-hole die shape and packaged using a standard ferrule and mating sleeves approach [33] (figure 1 (b)). This coupling method guarantees automatic alignment between detector and optical fibers for accurate system efficiency measurements. Both the flood illumination and fiber-coupled setups are shown in figure 1 (c).

Figure 1: Illustration of (a) detector flood illumination, (b) a fiber-coupled detector, and (c) schematic of the efficiency measurement set-up.

As shown in figure 1 (c), we employ a near-infrared tunable laser (JGR-TLS5, 1260-1650 nm) and mid-infrared CW lasers with different wavelengths (2000 and 2700 nm lasers from Thorlabs, 3001 and 4013 nm lasers from Nanoplus) as input photon sources. Single-mode optical fibers are used to couple the light to the first fiber-to-fiber coupler (containing neutral density filters and a polarizer). A beam splitter is then used to create a reference arm, with the majority of the power coupled to a calibrated power meter $P_{1}$. The remaining low-power signal beam is sent to the second fiber-to-fiber coupler, also containing a polarizer and neutral density filters. A polarization controller is used to tune the polarization state of the light after the second fiber-to-fiber coupler. After recording the optical power emerging from the polarization controller with power meter $P_{2}$, the light is guided to the system to carry out either flood illumination or fiber-coupled measurements. For flood illumination measurements, the input light is heavily attenuated to the single-photon regime. For fiber-coupled device measurements we use the following procedure: we first set the ratio $P_{1}/P_{2}$ to 50 dB by placing ND (neutral density) filters in both fiber-to-fiber couplers, and then add additional ND filters to the first coupler to reach $P_{1}$ = 10 nW. In this way, the input photon flux can be back-calculated. For example, 10 nW with 50 dB attenuation at 2000 nm corresponds to an input photon flux of $1.006\times 10^{6}$ photons per second. More measurement details can be found in our previous work [4].
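As a quick sanity check of this back-calculation, the quoted flux follows directly from the stated numbers (10 nW reference power, 50 dB attenuation, 2000 nm wavelength) and standard values of $h$ and $c$; this is only an illustrative verification, with small rounding differences expected:

$E_{\text{photon}}=\frac{hc}{\lambda}=\frac{(6.626\times 10^{-34}~\text{J\,s})(2.998\times 10^{8}~\text{m/s})}{2.0\times 10^{-6}~\text{m}}\approx 9.93\times 10^{-20}~\text{J}\approx 0.62~\text{eV},$

$P_{\text{det}}=10~\text{nW}\times 10^{-50/10}=1.0\times 10^{-13}~\text{W},\qquad \Phi=\frac{P_{\text{det}}}{E_{\text{photon}}}\approx 1.0\times 10^{6}~\text{photons/s},$

in agreement with the quoted $1.006\times 10^{6}$ photons per second.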
## 2 Characterization of SNSPDs with Flood Illumination

In this work, SNSPDs with different nanowire widths (40/60/80 nm) and diameters (8/9/10 µm) were fabricated from 5-9.5 nm thick NbTiN films. For example, 60-120-r4 refers to a meandering nanowire design with 60 nm wide lines, a pitch of 120 nm, and a 4 µm radius (see inset in figure 3 (b)).

Figure 2: Internal efficiency measurements of SNSPDs fabricated from (a) 9.5 nm and (b) 7.5 nm thick NbTiN films.

As shown in figure 2 (a), at 1550 and 1625 nm, the 40 nm wide device (yellow), the 60 nm wide device (green) and the 80 nm wide device (red) all showed saturated internal efficiencies. When the laser wavelength was increased to 2000 nm, the 40 and 60 nm wide devices (yellow and green) maintained saturated internal efficiencies, while the 80 nm wide device (red curve) did not reach unity internal efficiency. To obtain a stronger saturation of the internal efficiency, one possible solution is to make narrower lines. However, with narrower line widths (<40 nm) the nanofabrication patterning, development, and etching become more critical. This will, in general, affect the fabrication yield. Alternatively, we sputtered 7.5 nm thick NbTiN films and made detectors with the same designs and nanofabrication process. As shown in figure 2 (b), by using 7.5 nm thick NbTiN film, all devices with line widths ranging from 40-80 nm showed unity internal efficiency at 2000 nm. Detailed performance of devices based on 9.5 and 7.5 nm films is summarized in table 1. A thinner film leads to lower critical currents (for the same meander design), so the timing jitter is higher because the output pulse has a lower signal-to-noise ratio [34]. The rise time (time interval for the signal to go from 20% to 80% of the pulse amplitude) of the devices on 7.5 nm NbTiN was longer than for the 9.5 nm devices, and the dead-time (width of the pulse at 1/e of the amplitude) was also slightly longer; this can be explained by the fact that devices made from thinner films have a higher kinetic inductance.

Meander Structure | $I_{c}$ (µA) 9.5/7.5 nm | Rise-time (ps) 9.5/7.5 nm | Dead-time (ns) 9.5/7.5 nm | Jitter (ps) 9.5/7.5 nm
---|---|---|---|---
80-160-r4.5 | 25.0/18.4 | 350/375 | 9.3/10.6 | 30/44
60-120-r4 | 18.2/14 | 325/335 | 11.6/12.6 | 40/45
40-80-r5 | 8.40/6 | 400/425 | 38.4/49.2 | 93/97

Table 1: Flood illumination measurement results of 9.5 and 7.5 nm NbTiN based SNSPDs.

Figure 3: (a) Photon counting rate (PCR) curves at 2700 nm of 60-120-r4 detectors from 7.5 nm (green) and 9.5 nm (blue) film, (b) PCR curves at 3001 nm of a 60-120-r4 detector (purple) and a 1.1$\times$9 µm detector (red) from 7.5 nm film, (c) PCR curves at 4013 nm of a 40-120-r5 detector from 6.5 nm film, and (d) statistics of device yield made from films with different thicknesses.

The above results show that saturated internal efficiency up to 2000 nm can be obtained with detectors made from 9.5/7.5 nm thick films. In order to explore the internal saturation limit, we carried out longer-wavelength flood illumination measurements at 2700 and 3001 nm for a number of selected detectors. Figure 3 (a) shows detectors with the 60-120-r4 meander design from both 9.5/7.5 nm films at 2700 nm. Both detectors reach unity internal efficiency, and the detector from the 7.5 nm film (dark green curve) shows a more strongly saturated internal efficiency than the detector from the 9.5 nm film (light blue curve). This is because reducing the thickness of the superconducting film reduces the superconducting energy gap, so for the same input photon energy it is easier to break the superconducting state and form a resistive region [35]. Since the measurements at 2700 nm indicate that detectors made from 7.5 nm films are still promising for detecting single photons beyond 2700 nm, we fabricated two types of SNSPDs from a 7.5 nm NbTiN film and measured their detection performance at 3001 nm.
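The thickness dependence of the rise and dead times in Table 1 is consistent with the standard kinetic-inductance picture of SNSPD readout. As a rough, textbook-level guide (not derived in this work), with $\ell$, $w$ and $d$ denoting nanowire length, width and film thickness, $\mathcal{L}_{\square}$ the sheet kinetic inductance, and $R_{\text{load}}$ the readout impedance (assumed $\approx 50~\Omega$):

$\tau_{\text{reset}}\approx\frac{L_{k}}{R_{\text{load}}},\qquad L_{k}=\mathcal{L}_{\square}\,\frac{\ell}{w},\qquad \mathcal{L}_{\square}\propto\frac{1}{d},$

so, for a fixed meander geometry, a thinner film yields a larger kinetic inductance $L_{k}$ and hence slower rise and recovery. Likewise, the lower critical current of thinner devices reduces the pulse amplitude and slew rate, so the electrical-noise contribution to the timing jitter grows roughly as $1/I_{c}$, in line with the signal-to-noise argument of [34].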
As shown in figure 3 (b), a ’large’ meandering nanowire detector design of 60-120-r4 (purple), and a ’small’ detector design with line-width 60 nm, filling factor 50%, $1.1\times 9$ µm (red) were employed. Both detectors show saturated internal efficiency at 3001 nm and the smaller detector shows superior saturated internal efficiency over the larger one. According to a previous study [36], the performance of SNSPDs is influenced by the inhomogeneity of the superconducting film. Since the total length of the small detector ($\sim 90$ µm) is more than 3 times shorter than the large ($\sim 418$ µm), less inhomogeneity can be expected and better detection performance is observed. This shows that by reducing SNSPD’s total length, better detection performance can be potentially achieved. In previous works [11], the best performing device was a single 10-µm-long line. As a consequence the active area is smaller, which can be increased by using different detector architectures, for example multi-pixel [37] or interleaved nanowire designs [38]. Finally, we evaluated detectors made from even thinner films (5 and 6.5 nm). In figure 3 (c), we demonstrate that a detector (40-120-r5) from 6.5 nm film achieves 80% internal efficiency at 4013 nm (determined using a sigmoid curve fitting). This represents the state-of-the-art mid-infrared polycrystalline material based SNSPDs. To get a better understanding of the film thickness on the detector performance, we created an overview in figure 3 (d). We present the statistics of 32 fabricated SNSPDs from 4 different films. It is clear that 5 and 6.5 nm films suffer from low yield. The detectors made from the 5 nm films do not work well, because of their low critical current (1-2 µA). The non-working detectors from the 6.5 nm film did not show unity internal efficiency at 1550 nm possibly caused by lower film homogeneity of the thin film [6]. In contrast, 7.5 and 9.5 nm films show higher yield but detectors from the 9.5 nm film start to show decreased internal efficiency in the mid- infrared compared to detectors from the 7.5 nm film. Here, we suggest two practical solutions to solve the trade-off between film thickness and performance for future mid-infrared SNSPDs study: Bias-assisted sputtering can be applied to improve the critical current of SNSPDs [39] and post-processing treatment (for example, helium ion irradiation [40]) can enhance the internal efficiency of SNSPDs made from thicker films. ## 3 Measurements of Fiber-coupled SNSPDs For most quantum optics experiments and applications, a fiber-coupled detector/system is preferred because of mature fiber optics technology and instruments. Figure 4: Fiber-coupled SNSPDs measurements of (a) SDE of detector #1 at 1310, 1625, 1550 and 2000 nm, (b) timing jitter of detector #1, (c) SDE of detector #2 at 2001 nm, and (d) DCRs of detector #2 under different conditions: no fiber (green), low-pass coated fiber (turquoise), UHNA fiber (red), and mid-IR fiber (purple) plugged in. The previous section provides evidence that both 7.5 and 9.5 nm NbTiN superconducting films are suitable for making mid-infrared SNSPDs in terms of good yield and internal efficiency while a reduced thickness (5-6.5 nm) leads to fewer working devices. Thus, we fabricated fiber-coupled SNSPDs from 7.5 nm thick NbTiN film. Similar to [4], the NbTiN films were initially deposited on SiO2 grown by thermal oxidation process and nanowire meanders were eventually located on top of a Au/SiO2 membrane acting as optical cavity. 
After packaging and wire bonding, detectors were mounted in a closed-cycle cryocooler with a base temperature of 2.4-2.8 K and coupled to single-mode fibers. Afterwards, lasers with different wavelengths were used for system detection efficiency (SDE) measurements as described in the previous section. Figure 4 (a) shows the performance of detector #1 (60 nm line width) made from a 7.5 nm NbTiN film. The SDEs at 1310, 1550, 1625 and 2000 nm are 50%, 60%, 61%, and 63%, respectively. The inset in figure 4 (a) shows detector #1’s photon counting rate (PCR) curves at several wavelengths. Besides a high SDE, high timing resolution is also highly desirable for many applications, for example, LiDAR [19], fluorescence microscopy and spectroscopy [14, 13]. The instrument response function (IRF) of detector #1 was characterized with a ps-pulsed laser (4.2 ps pulse width at 1064 nm wavelength) and a fast oscilloscope (4 GHz bandwidth, 40 GHz sampling rate) as described in [4]. As shown in figure 4 (b), using a low-noise cryogenic amplifier operated at 40 K, the IRF of this device shows a Gaussian-shaped histogram. After fitting, we obtain a timing jitter of 14.3$\pm$0.1 ps (full width at half maximum, FWHM). Compared to previously reported values for mid-infrared SNSPDs [12], we improved the time resolution by nearly two orders of magnitude. The inset in figure 4 (b) shows the pulse trace of detector #1. It shows a dead-time of 11.6 ns, indicating good performance at high count rates [41]. Similarly, in figure 4 (c), we show the 2000 nm SDE measurement of detector #2, which is made from another 7.5 nm NbTiN film but has a slightly higher meander filling factor (approximately 10% higher). The benefit of a higher filling factor is an increased optical absorption, which means that, if the internal efficiency is saturated, a higher SDE can be achieved compared to a similar device with a lower filling factor. As can be seen, detector #2 shows a well saturated internal efficiency (when $I_{b}\geqslant 0.8I_{c}$). After sigmoid fitting (to improve the efficiency estimation accuracy), we obtained a peak SDE of over 70% at 2000 nm. Besides achieving a high system detection efficiency, suppressing the high dark count rates of mid-infrared SNSPDs is a major challenge. Previous work [19] showed that the DCRs of mid-infrared SNSPDs are typically on the order of $10^{4}$ Hz without additional filters. In figure 4 (d), we systematically studied the DCRs of detector #2 in four different schemes: DCR without any fiber connected to the detector (green curve), DCR with an end-face coated SM2000 fiber (fiber operating wavelength 1.7-2.3 µm, cyan curve), DCR with an ultra-high-NA fiber without coating (fiber operating wavelength 1.5-2 µm, red curve), and DCR with a mid-infrared ZrF4 fiber without coating (fiber operating wavelength 2.3-4.1 µm, purple curve). As a result, at $I_{b}=0.8I_{c}$, the DCRs of detector #2 for the above-mentioned four schemes are on the order of $10^{1}$ Hz, $10^{2}$ Hz, $10^{5}$ Hz and $10^{6}$ Hz, respectively. It is clear that the DCR of detector #2 when coupled to an end-face coated fiber is 3 orders of magnitude lower than when coupled to the ultra-high-NA fiber without coating. By using this fiber end-facet coating (low-pass filter), the DCR of detector #2 is below 240 Hz when it reaches unity internal efficiency at 2000 nm. In contrast, when detector #2 was connected to the mid-infrared ZrF4 fiber for SDE measurements at 3-4 µm, the detector showed a DCR of over 2.78 MHz at a bias of $0.8I_{c}$ and started latching [42].
This prevents further SDE measurements of detector #2 at 3-4 µm. To solve this issue, low-pass filters should be employed before the detectors, either by using a fiber end-face coating or by adding cold filtering stages inside the cryostat [43].

## 4 Discussion and Conclusion

In the past, amorphous materials were mainly used for mid-infrared single-photon detection, motivated by the intuition that their superconducting energy gap (0.59-0.61 meV for WSi [44]) is lower than that of polycrystalline materials (2.46 meV for NbN [45]). This work shows that NbTiN (polycrystalline) based SNSPDs can also achieve high mid-infrared single-photon detection efficiency while maintaining unprecedented time resolution. Furthermore, given that the energy of a single photon even at 10 µm wavelength (123.9 meV) is still significantly larger than both materials’ superconducting energy gap, other physical properties of the superconducting materials need to be investigated to enhance SNSPDs’ mid-infrared detection response. Besides improving the internal detection efficiency, reducing the dark count rates is another outstanding challenge for mid-infrared SNSPD systems. As shown in this work, room-temperature black-body radiation alone, delivered to the detector by a ZrF4 fiber, led to a DCR of >$10^{6}$ Hz. To overcome this issue, either extra cryogenic filters need to be added before the detectors or the entire experiment has to be performed at cryogenic temperatures. In conclusion, we demonstrated SNSPDs made from magnetron co-sputtered NbTiN superconducting films (5-9.5 nm) with unity internal efficiency at 3 µm and 80% internal efficiency at 4013 nm when operated in closed-cycle Gifford-McMahon coolers (2.4-2.8 K). Our fiber-coupled device achieves over 70% system detection efficiency at 2 µm and >50% system detection efficiency from 1300 to 2000 nm with sub-15 ps time resolution. By employing an end-facet coated fiber, the dark count rate of mid-infrared SNSPDs was reduced by 3 orders of magnitude compared to uncoated single-mode fibers. The DCR when coupled to a mid-infrared ZrF4 fiber was also studied, which offers valuable information for building mid-infrared fiber-coupled SNSPD systems in the future. To the best of our knowledge, the detectors presented in this work have the best system detection efficiency and temporal resolution among the mid-infrared SNSPDs reported so far, and NbTiN is a solid choice for making mid-infrared SNSPDs without mK dilution refrigerators.

## Acknowledgements

J.C. acknowledges the China Scholarship Council (CSC, No. 201603170247). I.E.Z., V.Z., and Single Quantum B.V. acknowledge support from the ATTRACT project funded by the EC under Grant Agreement 777222. R.B.M.G. acknowledges support by the European Commission via the Marie-Sklodowska Curie action Phonsi (H2020-MSCA-ITN-642656). S.N.D., S.S., V.Z. and Single Quantum B.V. acknowledge EU FET-Open project funding (No. 899580). V.Z. acknowledges funding from the Knut and Alice Wallenberg Foundation Grant “Quantum Sensors”, and support from the Swedish Research Council (VR) through the VR Grant for International Recruitment of Leading Researchers (Ref 2013-7152) and Research Environment Grant (Ref 2016-06122).

## References

* [1] Gol’Tsman, G. _et al._ Picosecond superconducting single-photon optical detector. _Applied Physics Letters_ 79, 705–707 (2001).
* [2] Esmaeil Zadeh, I. _et al._ Superconducting nanowire single-photon detectors: A perspective on evolution, state-of-the-art, future developments, and applications.
_Applied Physics Letters_ 118, 190502 (2021). * [3] Hu, P. _et al._ Detecting single infrared photons toward optimal system detection efficiency. _Optics Express_ 28, 36884–36891 (2020). * [4] Chang, J. _et al._ Detecting telecom single photons with 99.5- 2.07+ 0.5% system detection efficiency and high time resolution. _APL Photonics_ 6, 036114 (2021). * [5] Reddy, D. V., Nerem, R. R., Nam, S. W., Mirin, R. P. & Verma, V. B. Superconducting nanowire single-photon detectors with 98% system detection efficiency at 1550 nm. _Optica_ 7, 1649–1653 (2020). * [6] Esmaeil Zadeh, I. _et al._ Efficient single-photon detection with 7.7 ps time resolution for photon-correlation measurements. _ACS Photonics_ 7, 1780–1787 (2020). * [7] Korzh, B. _et al._ Demonstration of sub-3 ps temporal resolution with a superconducting nanowire single-photon detector. _Nature Photonics_ 14, 250–255 (2020). * [8] Shibata, H., Shimizu, K., Takesue, H. & Tokura, Y. Ultimate low system dark-count rate for superconducting nanowire single-photon detector. _Optics Letters_ 40, 3428–3431 (2015). * [9] Münzberg, J. _et al._ Superconducting nanowire single-photon detector implemented in a 2D photonic crystal cavity. _Optica_ 5, 658–665 (2018). * [10] Chen, Q. _et al._ Mid-infrared single photon detector with superconductor Mo80Si20 nanowire. _arXiv preprint arXiv:2011.06699_ (2020). * [11] Verma, V. _et al._ Single-photon detection in the mid-infrared up to 10 µm wavelength using tungsten silicide superconducting nanowire detectors. _APL Photonics_ 6, 056101 (2021). * [12] Chen, L. _et al._ Ultra-sensitive mid-infrared emission spectrometer with sub-ns temporal resolution. _Optics Express_ 26, 14859–14868 (2018). * [13] Verma, V. _et al._ Towards single-photon spectroscopy in the mid-infrared using superconducting nanowire single-photon detectors. In _Advanced Photon Counting Techniques XIII_ , vol. 10978, 109780N (International Society for Optics and Photonics, 2019). * [14] Chen, L. _et al._ Mid-infrared laser-induced fluorescence with nanosecond time resolution using a superconducting nanowire single-photon detector: New technology for molecular science. _Accounts of Chemical Research_ 50, 1400–1409 (2017). * [15] Elsinger, L. _et al._ Integration of colloidal pbs/cds quantum dots with plasmonic antennas and superconducting detectors on a silicon nitride photonic platform. _Nano Letters_ 19, 5452–5458 (2019). * [16] Mc Manus, M. _et al._ Pica: Backside failure analysis of cmos circuits using picosecond imaging circuit analysis. _Microelectronics Reliability_ 40, 1353–1358 (2000). * [17] Willer, U., Saraji, M., Khorsandi, A., Geiser, P. & Schade, W. Near-and mid-infrared laser monitoring of industrial processes, environment and security applications. _Optics and Lasers in Engineering_ 44, 699–710 (2006). * [18] Sprague, A. _et al._ Mercury: Mid-infrared (3–13.5 $\mu$m) observations show heterogeneous composition, presence of intermediate and basic soil types, and pyroxene. _Meteoritics & Planetary Science_ 37, 1255–1268 (2002). * [19] Taylor, G. G. _et al._ Photon counting lidar at 2.3 µm wavelength with superconducting nanowires. _Optics Express_ 27, 38147–38158 (2019). * [20] Prabhakar, S. _et al._ Two-photon quantum interference and entanglement at 2.1 µm. _Science Advances_ 6, eaay5195 (2020). * [21] Tan, C. L. & Mohseni, H. Emerging technologies for high performance infrared detectors. _Nanophotonics_ 7, 169–197 (2018). * [22] Huffman, J. E., Crouse, A., Halleck, B., Downes, T. & Herter, T. L. 
Si: Sb blocked impurity band detectors for infrared astronomy. _Journal of Applied Physics_ 72, 273–275 (1992). * [23] Rogalski, A. Next decade in infrared detectors. In _Electro-Optical and Infrared Systems: Technology and Applications XIV_ , vol. 10433, 104330L (International Society for Optics and Photonics, 2017). * [24] Zhang, W. _et al._ A 16-pixel interleaved superconducting nanowire single-photon detector array with a maximum count rate exceeding 1.5 ghz. _IEEE Transactions on Applied Superconductivity_ 29, 1–4 (2019). * [25] Wollman, E. E. _et al._ Uv superconducting nanowire single-photon detectors with high efficiency, low noise, and 4 k operating temperature. _Optics Express_ 25, 26792–26801 (2017). * [26] Chang, J. _et al._ Multimode-fiber-coupled superconducting nanowire single-photon detectors with high detection efficiency and time resolution. _Applied Optics_ 58, 9803–9807 (2019). * [27] Le Jeannic, H. _et al._ High-efficiency wsi superconducting nanowire single-photon detectors for quantum state engineering in the near infrared. _Optics Letters_ 41, 5341–5344 (2016). * [28] Taylor, G. G., Morozov, D. & Hadfield, R. H. Mid-infrared photon counting with superconducting nanowires. In _Quantum Optics and Photon Counting 2021_ , vol. 11771, 1177106 (International Society for Optics and Photonics, 2021). * [29] Marsili, F. _et al._ Efficient single photon detection from 500 nm to 5 µm wavelength. _Nano Letters_ 12, 4799–4804 (2012). * [30] Zichi, J. _et al._ Optimizing the stoichiometry of ultrathin nbtin films for high-performance superconducting nanowire single-photon detectors. _Optics Express_ 27, 26579–26587 (2019). * [31] Gourgues, R. _et al._ Superconducting nanowire single photon detectors operating at temperature from 4 to 7 k. _Optics Express_ 27, 24601–24609 (2019). * [32] Chang, J., Zadeh, I. E., Los, J. W., Zichi, J. & Zwiller, V. Superconducting nanowire single photon detector with high efficiency and time resolution for multimode fiber coupling. In _CLEO: QELS_Fundamental Science_ , FF1A–2 (Optical Society of America, 2019). * [33] Miller, A. J. _et al._ Compact cryogenic self-aligning fiber-to-detector coupling with losses below one percent. _Optics express_ 19, 9102–9110 (2011). * [34] You, L. _et al._ Jitter analysis of a superconducting nanowire single photon detector. _AIP Advances_ 3, 072135 (2013). * [35] Hofherr, M. _et al._ Intrinsic detection efficiency of superconducting nanowire single-photon detectors with different thicknesses. _Journal of Applied Physics_ 108, 014507 (2010). * [36] Cheng, Y., Gu, C. & Hu, X. Inhomogeneity-induced timing jitter of superconducting nanowire single-photon detectors. _Applied Physics Letters_ 111, 062604 (2017). * [37] Wollman, E. E. _et al._ Kilopixel array of superconducting nanowire single-photon detectors. _Optics Express_ 27, 35279–35289 (2019). * [38] Huang, J. _et al._ High speed superconducting nanowire single-photon detector with nine interleaved nanowires. _Superconductor Science and Technology_ 31, 074001 (2018). * [39] Dane, A. E. _et al._ Bias sputtered nbn and superconducting nanowire devices. _Applied Physics Letters_ 111, 122601 (2017). * [40] Zhang, W. _et al._ Saturating intrinsic detection efficiency of superconducting nanowire single-photon detectors via defect engineering. _Physical Review Applied_ 12, 044040 (2019). * [41] Esmaeil Zadeh, I. _et al._ Single-photon detectors combining high efficiency, high detection rates, and ultra-high timing resolution. _APL Photonics_ 2, 111301 (2017). 
* [42] Annunziata, A. J. _et al._ Reset dynamics and latching in niobium superconducting nanowire single-photon detectors. _Journal of Applied Physics_ 108, 084507 (2010). * [43] Shibata, H., Fukao, K., Kirigane, N., Karimoto, S. & Yamamoto, H. Snspd with ultimate low system dark count rate using various cold filters. _IEEE Transactions on Applied Superconductivity_ 27, 1–4 (2016). * [44] Zhang, X. _et al._ Characteristics of superconducting tungsten silicide WxSi1-x for single photon detection. _Physical Review B_ 94, 174509 (2016). * [45] Antonova, E., Dzhuraev, D., Motulevich, G. & Sukhov, V. Superconducting energy gap in niobium nitride. _Zhurnal Ehksperimental’noj i Teoreticheskoj Fiziki_ 80, 2426–2429 (1981).
# The Future of Hybrid Meetings Marios Constantinides Nokia Bell LabsCambridgeUnited Kingdom <EMAIL_ADDRESS>and Daniele Quercia Nokia Bell LabsCambridgeUnited Kingdom<EMAIL_ADDRESS> (2022) ###### Abstract. Meetings are typically considered to be the fuel of an organization’s productivity—a place where employees discuss ideas and make collective decisions. However, it is no secret that meetings are also often perceived as wasteful vacuums, depleting employee morale and productivity, likely due to the fact that current technologies fall short in fully supporting physical or virtual meeting experience. In this position paper, we discuss the three key elements that make a meeting successful (i.e., execution, psychological safety, and physical comfort), and present new tools for hybrid meetings that incorporate those elements. As past research has focused on supporting meeting execution (the first element), we set the roadmap for future research on the two other elements: on psychological safety by articulating how new technologies could make meeting useful for all participants, ensure all participants give and receive appropriate levels of attention, and enable all participants to feel and make others feel comfortable; and on physical comfort by dwelling on how new technologies could make the meeting experience comfortable by integrating all human senses. We also discuss the potential danger of these technologies inadvertently becoming surveillance tools. workplace, meetings, psychological safety, physical comfort ††journalyear: 2022††copyright: rightsretained††conference: 2022 Symposium on Human-Computer Interaction for Work; June 8–9, 2022; Durham, NH, USA††booktitle: 2022 Symposium on Human-Computer Interaction for Work (CHIWORK), June 8–9, 2022, Durham, NH, USA††price: 15.00††doi: 10.1145/3533406.3533415††isbn: 978-1-4503-9655-4/22/06††ccs: Human-centered computing Collaborative and social computing systems and tools ## 1\. Introduction While meeting tools, to a great extent, have increasingly simplified and augmented the ways meetings are conducted, they still fall short in supporting the nuanced experience of offline meetings. To see why, consider, a virtual all-hands meeting, where one would expect a one-to-many form of communication. One could imagine that not everyone would feel comfortable sharing their video streams in such a setting and, as such, the meeting host might miss the opportunity to “read the room” (i.e., interpret non-verbal cues that are generally expressed in offline settings). The interplay between offline and online worlds has slowly but surely produced a new breed of meetings: _the hybrid meeting_. This type of meeting typically involves a mixture of in- person and remote attendees: remote attendees join the meeting via a virtual meeting platform (e.g., Zoom), and in-person attendees sit together in the typical meeting room. Meeting experience is therefore shaped not only by the participants’ interactions but also by their diverse physical environments. In this position paper, we: * • Discuss the three key elements that make a meeting successful (Section 2). * • Present recent research that incorporates those three elements (Section 3). * • Set the roadmap for future research, focusing on the two elements out of the three that are often neglected in the literature—psychological safety and physical comfort (Section 4). * • Discuss the potential danger of future meeting technologies inadvertently becoming surveillance tools (Section 5). ## 2\. 
What makes meetings successful

Figure 1. Three key elements that make a meeting successful along with key research questions for future research: (a) _Execution_ : whether the meeting had a clear purpose, structure, and resulted in actionable items; (b) _Psychological safety_ : whether participants felt listened to and were motivated to contribute during the meeting; and (c) _Physical comfort_ : whether participants felt physically comfortable by, say, being in an environment with good air quality and sufficient natural light. Most innovations in meeting technologies have focused on execution, and future work should focus on the two other aspects - psychological safety and physical comfort. A sample of key research questions is reported in this infographic.

A meeting’s perceived experience has been evaluated through standardized questionnaires or ad-hoc post-meeting surveys. For example, the Event Performance Indices (EPI) (Kerns, 2015) is a six-item questionnaire that focuses on a meeting’s overall performance results and, in turn, measures meeting effectiveness. The questionnaire prompts participants to rate their satisfaction, including, for example, whether the meeting was worth the time investment and whether participants were personally motivated. In the fields of Management and Organizational Science, meeting productivity has been directly linked with a meeting’s agenda, structure, and purpose (Lent, 2015; Cohen, 2016; Schwarz, 2016). Inclusiveness (Axtell, 2016, 2017), dominance (Romano and Nunamaker, 2001), peripheral activities (Niemantsverdriet and Erickson, 2017), and psychological safety (Grenny, 2017; Axtell, 2018) are aspects that have also been found to influence meeting experience and productivity. The physical comfort of the workplace environment is an additional factor that makes up the list (Alavi et al., 2017). For example, a recent survey found that light and outdoor views were among the most popular perks employees craved (Meister, 2018), whereas stuffy offices with stale air tended to reduce productivity (Allen et al., 2016; Allen, 2017; Tom Y. Chang and Neidell, 2016). More recently, to capture the factors that generally make a meeting successful, Constantinides et al. (Constantinides et al., 2020) designed a 28-question survey and administered it to 363 individuals, whose answers were then statistically analyzed. The survey covered an array of themes, previously identified in the Organization and Management Science literature, including a meeting’s psychological experience (e.g., contribution, balance, turn-taking, attention), its structure, and its content (e.g., agenda, physical environment, use of technology). The survey responses were analyzed using a Principal Component Analysis, with 11 items of the 28-question survey explaining 62% of the total variance. These 11 items loaded on three factors - the three factors that make a meeting successful:

_Execution._ Whether participants ultimately “got things done”—more specifically, whether the meeting had a clear purpose, structure, and resulted in actionable items.

_Psychological Safety._ Whether participants felt listened to and were motivated to contribute during the meeting.

_Physical Comfort._ Whether participants felt physically comfortable by, for example, being in a meeting room with good air quality and sufficient natural light.

## 3\.
Current Support For The Three Factors Determining Success Next, we discuss recent research aiming at supporting these three key elements in either physical or virtual meetings. ### 3.1. Execution Execution refers to whether the meeting had a clear purpose, structure, and resulted in actionable items. Besides the mainstream communication tools of the likes of MS Teams, WebEx, Skype, Zoom, just to name a few, a large body of scientific work has been focused on supporting meeting execution through contextual information, text, audio, and video support. For example, one work focused on detecting key decisions in dialogues (Kim and Rudin, 2014), while another on generating an “action items” list from these dialogues (McGregor and Tang, 2017). Using an agenda planning technique, Garcia et al. (Garcia et al., 2004) developed a tool that allows meeting participants to vote for agendas items. As the balance of conversational turn-taking is important for group performance (Woolley et al., 2010), technologies were developed to create awareness. This was done by highlighting salient moments and visualizing participants’ contributions (Kim et al., 2008). Content recording was also a focus for technologies supporting execution. NoteLook (Chiu et al., 1999) exploited video streams to support note taking. Catchup (Tucker et al., 2010) automatically identified the gist of what was missed in a meeting, allowing people to join the meeting late and still participate effectively. Video Threads (Barksdale et al., 2012) supported asynchronous video sharing for geographically distributed teams. Finally, Banerjee et al.’s playback system allowed for revisiting a recorded meeting (Banerjee et al., 2005). ### 3.2. Psychological Safety Psychological safety refers to whether participants felt listened to and were motivated to contribute during the meeting. As Edmondson described it, psychological safety refers to “the absence of interpersonal fear that allows people to speak up with work-relevant content” (Edmondson, 1999). In face-to- face interactions (e.g., during a social encounter or a meeting), we primarily read faces through visual cues and read bodies through a multi-sensory integration. All these cues, to a great extent, allow us to understand the dynamics in a conversation (e.g., whether one feels listened to and receives the appropriate attention from his/her peers). In virtual interactions, however, these natural cues are often lost, flattening the psychological and social experience. Furthermore, current meeting technologies need to deliver this experience in an attention-enhancing way as opposed to the current attention-depleting way: currently, in a virtual meeting, attendees need to pay attention not only to what is being said, but also to peripheral modes of communication (e.g., text messages, raising of virtual hands). To bring the offline meeting experience into the virtual world, recent research developed an app called MeetCues. This captures three types of cues, aiming at augmenting psychological safety in hybrid meetings. First, the MeetCues app allows meeting attendees to provide real-time feedback (Aseniero et al., 2020). The app enabled attendees to react on what was being discussed by tagging key points and action items. These virtual crowdsourced tags were translated into an emoji cloud visualization, which allowed attendees to infer the overall atmosphere. 
In this way, these crowdsourced cues served as indicators of moments during which, for example, participants agreed with what was being said or whether further clarification was needed, cultivating a safe environment for contributions. However, these crowdsourced feedbacks did not fully capture the multi-sensory integration that we are attuned to in physical meetings. A case in point is the almost universally acceptable non-verbal cue of nodding (Peleckis and Peleckienė, 2015). In face-to-face interactions, it is natural to observe bodily expressions to understand, for example, whether one agrees on what is being said (nodding), or a clarification is needed. However, the picture is different in virtual meetings. Participants of virtual meetings who often turn off their cameras, leave the speaker staring at a sea of black squares, feeling psychologically disoriented. That is why the second type of cues MeetCues captured was body movements. It did so by integrating wearable devices with the MeetCues app, which captured participants’ heart rates, head and hand movements, and changes in postures (Choi et al., 2021; Park et al., 2020). These body cues were predictive of the meeting’s vibrancy and multi-tasking activities, and ultimately of the meeting’s success; these cues were even more predictive than the meeting’s emotional content derived from the meeting’s transcript. For example, head movements such as nodding served as a proxy for (dis)agreement, while changes in postures served as a proxy for (dis)comfort. In particular, these two body cues metrics proved useful in helping attendees infer the levels of psychological safety “in the room”. The third type of cues MeetCues captured was types of conversations (Choi et al., 2020) (e.g., a heated discussion resulting in a conflict eventually being resolved, a supportive conversation, an exchange of knowledge). By analyzing more than four thousand minutes of conversations from eighty-five real-world meetings, Zhou et al. (Zhou et al., 2021b) found that conversation types were more predictive of meeting success than traditional voice and text analytics. By monitoring these conversations during a meeting (or after it), one could potentially measure specific aspects of organizational productivity, and proactively take actions for improvement. To see how, consider conflict resolution. Having new real-time ways of marking indicators of, for example, conflicts, not only would increase attendees’ awareness but would also help cultivate an approach of conflict resolution. ### 3.3. Physical Comfort Physical comfort refers to whether participants felt physically comfortable by, for example, be in a meeting room with good air quality and sufficient natural light. To see what we mean by the word ‘comfort’, let us draw a parallel with architecture. In that area, comfort mainly describes four main qualities that a good building must possess: visually pleasurable, thermally livable, acoustically tolerant, and respiratory humane (in terms of air quality) (Alavi et al., 2017). To paraphrase comfort in the meeting context, recent research has initially focused on the specific aspects of good air quality and natural light. A team of researchers, in particular, developed miniaturized devices, called Geckos, which are fitted with cheap-to-produce sensors capturing light, temperature, and the presence of a broad range of gases such as volatile organic compounds (VOCs). 
They integrated Geckos with the MeetCues app (Aseniero et al., 2020), and created a new indoor environmental sensing infrastructure ComFeel (Constantinides et al., 2020). At the end of each meeting, the MeetCues app asks each participant: _(a)_ whether the meeting had a clear purpose and structure, and resulted in a list of actionable points, and _(b)_ whether each participant felt listened to and was motivated to contribute. To explore the extent to which air quality alone determined whether a meeting was perceived to be productive or not, they deployed ComFeel in a corporate office and gathered data from 29 meetings in different rooms. As one expects, productive meetings were those in which participants felt safe to contribute: the probability of a meeting being productive increased by 35% for each standard deviation increase in psychological safety. Surprisingly, the results for air quality were dramatic too. The productivity probability increased by as much as 25% for each standard deviation increase in room pleasantness. Furthermore, among all the sensors, the air quality one was the most important: indeed, room pleasantness was achieved through improved temperature (with a relative contribution of 25%), lighting (30%), and air quality (45%). These results suggest that, if a meeting takes place in a stuffy conference room, even if it is run well, people will still struggle to pay attention. To fix that, one just needs to do a handful of things, from manually or automatically adjusting ventilation, lighting, temperature, which could increase a meeting’s productivity by a considerable extent. ## 4\. The three factors in Future Hybrid Meetings Most innovations in meeting technologies have focused on supporting physical or virtual meetings independently, and they did so by extensively focusing on how to support meeting execution. That is why future work should focus on the two other overlooked aspects (Figure 1): on psychological safety (Section §4.1) by making meetings useful for all participants, ensuring all participants give and receive appropriate levels of attention, and enabling all participants to feel and make others feel comfortable; and on physical comfort (Section §4.2) by integrating all human senses, and, as such, making the meeting sensory experience pleasurable. At the same time though, future work should also consider how these new technologies could inadvertently become surveillance tools (Section §5). ### 4.1. Rethinking Psychological Safety The design space of meeting technologies could be well enriched with features that fully support all three facets of psychological safety (Constantinides et al., 2020): _(a)_ making the meeting useful and satisfying for all participants, _(b)_ ensuring all participants give and receive appropriate levels of attention, and _(c)_ enabling all participants to feel and make others feel comfortable. First, to make a meeting useful and satisfying, existing tools already provide analytics in the form of summaries or real-time feedback (Aseniero et al., 2020). However, these analytics could be enriched with aspects that are hard to quantify such as the types of conversations happening in a meeting (Zhou et al., 2021b), nuanced emotional states (Zhou et al., 2022), or other language markers that are hidden in conversations (e.g., presence of stress, engagement in emphatic conversations) (Scepanovic et al., 2020; Robertson et al., 2019; Zhou et al., 2021a). 
Second, as turn-taking and balance in conversation help to cultivate a safe environment for contribution (Woolley et al., 2010), future meeting tools need to ensure that all attendees give and receive the appropriate levels of attention. For example, some meeting technologies already provide indicators of “speaking up” time, and often do so through attention-depleting notifications. However, future technologies must ensure attention-enhancing ways of notifying users. As Weiser and Brown argued in their seminal work _The Coming Age of Calm Technology_ , “the information display should move easily from the periphery of users’ attention to the centre, and back” (Weiser and Brown, 1997). Future design elements need to smoothly capture the user’s attention only when necessary, while calmly remaining in the user’s periphery most of the time. For example, while current meeting tools allow various modes of communication (e.g., voice, text) and notifications (e.g., emojis) in a virtual meeting, personalized or timely delivery of those notifications could remove unnecessary information overload. Imagine a scenario in which five attendees need to focus on the speaker, yet one of them types a question and another raises his/her virtual hand. If these notifications were delivered straight away, they may well overload the speaker and distract the audience. More work needs to go into identifying the right moment to deliver a notification, and the best person in a group to deliver it to, allowing for personalized delivery. Third, it is also important for meeting technologies to facilitate an inclusive environment where all attendees feel comfortable, and can make others feel comfortable. One way of achieving this is to create social interactions that scaffold learning, and then use computational methods to weave these interactions into the fabric of meeting tools. For example, as demonstrated in massive open online courses (MOOCs) (Quintana et al., 2018), future meeting technologies could coach people in skills that foster inclusion and diversity, and collaboratively help them reflect on their behavior. Another way of achieving inclusivity is through broadening the design space. Design elements such as emojis allow, to a great extent, attendees to non-verbally communicate and express their emotional state; this can be seen as a way of developing trust or empathy for others. Future meeting technologies could also borrow concepts from biophilic design (Wilson, 1984) and embrace new types of visual cues (Qin et al., 2020) (e.g., the use of different symbols, imagery, and artificial artifacts), thus making additional layers of non-verbal communication available to users.

### 4.2. Rethinking Physical Comfort

Aligned with the Human-Building Interaction vision (Alavi et al., 2019) (an emerging area aiming at unifying HCI research in the built environment), physical comfort goes well beyond air quality, integrating all the human senses. To begin with, let’s take sight. Could we make the physical environment visually pleasurable? Recent studies showed that adjusting light conditions in a room could serve as a stress-reducing intervention, or even as biofeedback relaxation training (Ren et al., 2019; Yu et al., 2018). Moving to hearing. Could we design ubiquitous technologies that facilitate meetings anywhere and everywhere?
Consider a knowledge worker who attends meetings from a variety of places: from the office to home to a cafeteria to even an autonomous car (Kun et al., 2020). How could future technologies support smooth transition between different places, and provide a pleasurable sound experience? Smart earable devices can be one of the answers wherein an array of sensors could be embedded in them to control not only what sound the end user is hearing, but also how that sound is delivered in a specific physical setting (Kawsar et al., 2018). While addressing sight and hearing is to some extent easier, smell and touch require future research efforts. Which odour is more relaxing and energizing during a meeting, and, to what extent, certain odour could be adapted as a meeting unfolds to facilitate better execution? How could we stimulate the sense of touch to increase physical comfort during a meeting? For example, as most of white collar jobs require employees to spend long hours sitting, previous studies explored the design of ergonomic chairs that would increase physical comfort levels. While these studies typically employ various objective methods to capture physical comfort such as pressure sensors measurement (Zemp et al., 2015) or an array of sensors including, for example, temperature, gas, illuminance, VOC, CO2 (Zhong et al., 2020), new sensing modalities such as Electromyography (EMG) could prove useful to fully incorporate the sense of touch in the design space of these chairs. ## 5\. The dangers of meeting technologies While these technologies hold the promise of enabling employees to be productive, report after report has highlighted the outcries of workplace AI- based solutions being biased and unfair, and lacking transparency and accountability. During the COVID-19 pandemic, systems were being used to analyze footage from security cameras in workplace to detect when employees are not complying with social distancing rules111https://www.ft.com/content/58bdc9cd-18cc-44f6-bc9b-8ca4ac598fc8; while there is a handful of good intentions behind such a technology, the very same technology could be used for tracking employees’ movements, or time away from desk. As we move towards a future likely ruled by big data and powerful AI algorithms, important questions arise relating to the psychological impacts of surveillance, data governance, leadership and organizational culture, and compliance with ethical and moral concerns. The historian and philosopher Yuval Noah Harari argued that digital platforms such as those that allow us to work from remote need to follow three basic rules222https://www.ft.com/content/f1b30f2c-84aa-4595-84f2-7816796d6841 to protect us from “digital dictatorships”. First, any data collection on people should be used to help people rather than to manipulate, control, or harm them. In the meetings context, this translates into providing analytics that help employees reflect on their experience rather than causing them to receive low performance review in the event of a meeting not being executed well. Second, surveillance must always go both ways. That is, whenever an organization increases surveillance of individuals, at the same time, organizational accountability needs to increase. If organizations could establish processes to monitor their workforce, they may well establish processes to audit their own actions as well. Third, data should not be concentrated in a single entity, not least because data monopolies are the recipe for dictatorship. 
In the workplace context, this translates into newly created divisions in a company that oversee data collection and, ideally, ensure that data about their workers is in the hands of the workers themselves. In a global, modernized workplace, new meeting tools are likely to be developed, facilitating and augmenting the experience of hybrid meetings. As a result of the COVID-19 pandemic, fully remote or hybrid meetings removed physical barriers and often contributed to improved work-life balance (Rudnicka et al., 2020). On the downside, tools for supporting hybrid meetings may inadvertently become surveillance tools, compromising employees’ privacy (Constantinides and Quercia, 2022). To unpack the AI ethics of workplace technologies, Constantinides and Quercia (Constantinides and Quercia, 2022) conducted a crowdsourcing study to understand how employees judge such technologies and determine which ones are desirable, and why. They considered 16 workplace technologies that track productivity based on diverse inputs (e.g., tracking audio conversations during virtual meetings, tracking text messages in collaboration tools), and asked crowd-workers to judge these scenarios along five moral dimensions. They found that workplace technologies were judged harshly depending on three aspects (heuristics) participants used to assess the scenarios. In order of increasing importance, these aspects reflected whether a scenario: 1) was not currently supported by existing technologies (_hard to adopt_); 2) interfered with current ways of working (_intrusive_); and, more importantly, 3) was not fit for tracking productivity or infringed on individual rights (_harmful_). Tracking eye movements in virtual meetings and the visited websites in remote work, despite being possible, were considered to be “in the way” of getting the job done (they were easy to adopt but intrusive). By contrast, tracking text messages in collaboration tools such as Slack was considered not to interfere with work (unobtrusive). Finally, tracking audio conversations in virtual meetings was considered to be possible (easy to adopt) and not to interfere with work (unobtrusive), yet it was considered to be harmful, as it entailed tracking not only whether a meeting took place but also its content, causing a loss of control. The above heuristics offer a guide on how workplace technologies are likely to be morally judged. Having a technology that is easy to implement and does not interfere with work is not necessarily a technology that should be deployed. _Tracking facial expressions_ (even beyond the nefarious uses - of dubious effectiveness - of inferring political orientation or sexual preferences (Wang and Kosinski, 2018)) is possible and could be done in seamless ways (e.g., with existing off-the-shelf cameras), yet it would still be considered harmful and unethical. _Tracking eye movements_, _task completion_, or _typing behavior_ was considered a proxy for focus (harmless) yet intrusive, as it would “get in the way”. _Tracking social media use in remote work_ was considered not only intrusive but also harmful, as it infringes on privacy rights. On a very pragmatic level, there are a handful of reasons why organizations opt for employee surveillance (e.g., maintaining productivity, monitoring resources used, protecting the organization from legal liabilities). Critics, however, rightly argue that there is a fine line between what organizations could be monitoring and what they should be monitoring.
If this line is crossed, it will have consequences on employees, affecting their well-being, work culture, and productivity (Ball, 2010). If future meeting tools incorporate any kind of employee monitoring, they need to preserve individual rights, including that of privacy. New meetings tools need to also ensure non-discrimination, while promoting inclusivity, and, at the same time, provide explainable and understandable outputs (e.g., the decision upon which the system decided that a meeting was successful or not). Hybrid meetings also create asymmetries of interactions stemming from the social and cultural contexts. As Saatci et al.(Saatçi et al., 2019) found, the most challenging asymmetry is the diverse experience between co-located and remote meeting participants. Remote participants often feel isolated, while co-located participants dominate the interaction. Differences in language and accent, cultural behaviors, digital literacy, physical location, and the boundaries between work and personal life (Rudnicka et al., 2020) also contribute to interaction asymmetries. To overcome such challenges, new meetings tools should focus on making meetings more inclusive for everyone by maximizing psychological safety and optimizing physical comfort. ###### Acknowledgements. We thank those who actively supported this research at Nokia Bell Labs; in particular, Sagar Joglekar, Bon Adriel Aseniero, and Jun-Ho Choi for their active role in the development of the meeting companion app MeetCues; and Mark Clougherty, Sean Kennedy, Michael Eggleston, and Marcus Weldon for their guidance during the development. We also thank Nigel Oseland for sharing his research at the intersection of environmental psychology and workplace strategy. ## References * (1) * Alavi et al. (2019) Hamed S Alavi, Elizabeth F Churchill, Mikael Wiberg, Denis Lalanne, Peter Dalsgaard, Ava Fatah gen Schieck, and Yvonne Rogers. 2019. Introduction to Human-Building Interaction (HBI) Interfacing HCI with Architecture and Urban Design. _ACM Transactions on Computer-Human Interaction (TOCHI)_ 26, 2 (2019). https://doi.org/10.1145/3309714 * Alavi et al. (2017) Hamed S Alavi, Himanshu Verma, Michael Papinutto, and Denis Lalanne. 2017. Comfort: A Coordinate of User Experience in Interactive Built Environments. In _IFIP Conference on Human-Computer Interaction_. Springer, 247–257. https://doi.org/10.1007/978-3-319-67687-6_16 * Allen (2017) Joseph G. Allen. 2017\. Stale Office Air Is Making You Less Productive. _Harvard Business Review_ (2017). https://hbr.org/2017/03/research-stale-office-air-is-making-you-less-productive * Allen et al. (2016) Joseph G Allen, Piers MacNaughton, Usha Satish, Suresh Santanam, Jose Vallarino, and John D Spengler. 2016. Associations of Cognitive Function Scores with Carbon Dioxide, Ventilation, and Volatile Organic Compound Exposures in Office Workers: A Controlled Exposure Study of Green and Conventional Office Environments. _Environmental Health Perspectives_ 124, 6 (2016), 805–812. https://doi.org/10.1289/ehp.1510037 * Aseniero et al. (2020) Bon Adriel Aseniero, Marios Constantinides, Sagar Joglekar, Ke Zhou, and Daniele Quercia. 2020\. MeetCues: Supporting Online Meetings Experience. In _Proceedings of the IEEE Visualization Conference (VIS)_. IEEE, 236–240. https://doi.org/10.1109/VIS47514.2020.00054 * Axtell (2016) Paul Axtell. 2016\. 6 Reasons to Get Better at Leading Meetings. _Harvard Business Review_ (2016). 
https://hbr.org/2016/12/6-reasons-to-get-better-at-leading-meetings * Axtell (2017) Paul Axtell. 2017\. How to Design Meetings Your Team Will Want to Attend. _Harvard Business Review_ (2017). https://hbr.org/2017/04/how-to-design-meetings-your-team-will-want-to-attend * Axtell (2018) Paul Axtell. 2018\. How to Respond When You’re Put on the Spot in a Meeting. _Harvard Business Review_ (2018). https://hbr.org/2017/12/how-to-save-a-meeting-thats-gotten-tense * Ball (2010) Kirstie Ball. 2010\. Workplace Surveillance: An Overview. _Labor History_ 51, 1 (2010), 87–106. https://doi.org/10.1080/00236561003654776 * Banerjee et al. (2005) Satanjeev Banerjee, Carolyn Rose, and Alexander I Rudnicky. 2005. The Necessity of a Meeting Recording and Playback System, and the Benefit of Topic–level Annotations to Meeting Browsing. In _IFIP Conference on Human-Computer Interaction_. Springer, 643–656. https://doi.org/10.1007/11555261_52 * Barksdale et al. (2012) Jeremy Barksdale, Kori Inkpen, Mary Czerwinski, Aaron Hoff, Paul Johns, Asta Roseway, and Gina Venolia. 2012. Video Threads: Asynchronous Video Sharing for Temporally Distributed Teams. In _Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW)_. ACM, 1101–1104. https://doi.org/10.1145/2145204.2145367 * Chiu et al. (1999) Patrick Chiu, Ashutosh Kapuskar, Sarah Reitmeier, and Lynn Wilcox. 1999. NoteLook: Taking Notes in Meetings with Digital Video and Ink. In _Proceedings of ACM International Conference on Multimedia_. ACM, 149–158. https://doi.org/10.1145/319463.319483 * Choi et al. (2021) Jun-Ho Choi, Marios Constantinides, Sagar Joglekar, and Daniele Quercia. 2021. KAIROS: Talking Heads and Moving Bodies for Successful Meetings. In _Proceedings of the International Workshop on Mobile Computing Systems and Applications (HotMobile)_. 1–7. https://doi.org/10.1145/3446382.3448361 * Choi et al. (2020) Minje Choi, Luca Maria Aiello, Krisztián Zsolt Varga, and Daniele Quercia. 2020. Ten Social Dimensions of Conversations and Relationships. In _Proceedings of The Web Conference (WWW)_. 1514–1525. https://doi.org/10.1145/3366423.3380224 * Cohen (2016) Jordan Cohen. 2016\. Use Subtle Cues to Encourage Better Meetings. _Harvard Business Review_ (2016). https://hbr.org/2016/09/use-subtle-cues-to-encourage-better-meetings * Constantinides and Quercia (2022) Marios Constantinides and Daniele Quercia. 2022. Good Intentions, Bad Inventions: How Employees Judge Pervasive Technologies in the Workplace. _arXiv_ (2022). * Constantinides et al. (2020) Marios Constantinides, Sanja Šćepanović, Daniele Quercia, Hongwei Li, Ugo Sassi, and Michael Eggleston. 2020. ComFeel: Productivity is a Matter of the Senses Too. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)_ 4, 4 (2020), 1–21. https://doi.org/10.1145/3432234 * Edmondson (1999) Amy Edmondson. 1999\. Psychological Safety and Learning Behavior in Work Teams. _Administrative Science Quarterly_ 44, 2 (1999), 350–383. https://doi.org/10.2307/2666999 * Garcia et al. (2004) AC Bicharra Garcia, John Kunz, and Martin Fischer. 2004\. Cutting to the Chase: Improving Meeting Effectiveness by Focusing on the Agenda. In _Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW)_ , Vol. 6. ACM, 346–349. https://doi.org/10.1145/1031607.1031664 * Grenny (2017) Joseph Grenny. 2017\. How to Save a Meeting That’s Gotten Tense. _Harvard Business Review_ (2017). * Kawsar et al. 
(2018) Fahim Kawsar, Chulhong Min, Akhil Mathur, and Allesandro Montanari. 2018. Earables for Personal-scale Behavior Analytics. _IEEE Pervasive Computing_ 17, 3 (2018), 83–89. https://doi.org/10.1109/MPRV.2018.03367740 * Kerns (2015) Ira Kerns. 2015\. _The 6 Post-Event Survey Questions That Will Reveal Your Meeting’s Effectiveness_. https://www.meetingsnet.com/corporate-meetings/6-post-event-survey-questions-will-reveal-your-meeting-s-effectiveness * Kim and Rudin (2014) Been Kim and Cynthia Rudin. 2014. Learning About Meetings. _Data Mining and Knowledge Discovery_ 28, 5-6 (2014), 1134–1157. https://doi.org/10.1007/s10618-014-0348-z * Kim et al. (2008) Taemie Kim, Agnes Chang, Lindsey Holland, and Alex Sandy Pentland. 2008. Meeting Mediator: Enhancing Group Collaboration Using Sociometric Feedback. In _Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW)_. ACM, 457–466. https://doi.org/10.1145/1460563.1460636 * Kun et al. (2020) Andrew L Kun, Orit Shaer, Raffaella Sadun, Linda Ng Boyle, and John D Lee. 2020. The Future of Work and Play: From Automated Vehicles to Working from Home. _New Future of Work_ (2020). https://par.nsf.gov/biblio/10186623 * Lent (2015) Richard M Lent. 2015\. _Leading Great Meetings: How to Structure Yours for Success_. Meeting for Results. * McGregor and Tang (2017) Moira McGregor and John C Tang. 2017. More to Meetings: Challenges in Using Speech-based Technology to Support Meetings. In _Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW)_. ACM, 2208–2220. https://doi.org/10.1145/2998181.2998335 * Meister (2018) Jeanne C. Meister. 2018\. The #1 Office Perk? Natural Light. _Harvard Business Review_ (2018). * Niemantsverdriet and Erickson (2017) Karin Niemantsverdriet and Thomas Erickson. 2017. Recurring Meetings: An Experiential Account of Repeating Meetings in a Large Organization. _Proceedings of the ACM on Human-Computer Interaction_ 1, CSCW (2017), 84. https://doi.org/10.1145/3134719 * Park et al. (2020) Sungkyu Park, Marios Constantinides, Luca Maria Aiello, Daniele Quercia, and Paul Van Gent. 2020\. WellBeat: A Framework for Tracking Daily Well-Being Using Smartwatches. _IEEE Internet Computing_ 24, 5 (2020), 10–17. https://doi.org/10.1109/MIC.2020.3017867 * Peleckis and Peleckienė (2015) Kestutis Peleckis and Valentina Peleckienė. 2015. Nonverbal Communication in Business Negotiations and Business Meetings. _International Letters of Social and Humanistic Sciences_ 62 (2015), 62–72. https://www.learntechlib.org/p/176672/ * Qin et al. (2020) Chao Ying Qin, Marios Constantinides, Luca Maria Aiello, and Daniele Quercia. 2020. HeartBees: Visualizing Crowd Affects. In _Proceedings of the IEEE VIS Arts Program (VISAP)_. IEEE, 1–8. https://doi.org/10.1109/VISAP51628.2020.00007 * Quintana et al. (2018) Rebecca M Quintana, Christopher Brooks, Cinzia Villanucci Smothers, Yuanru Tan, Zheng Yao, and Chinmay Kulkarni. 2018. Mentor Academy: Engaging Global Learners in the Creation of Data Science Problems for MOOCs. International Society of the Learning Sciences. https://doi.org/10.22318/cscl2018.1415 * Ren et al. (2019) Xipei Ren, Bin Yu, Yuan Lu, Biyong Zhang, Jun Hu, and Aarnout Brombacher. 2019\. LightSit: An Unobtrusive Health-promoting System for Relaxation and Fitness Microbreaks at Work. _Sensors_ 19, 9 (2019), 2162. https://doi.org/10.3390/s19092162 * Robertson et al. (2019) Alexander Robertson, Luca Maria Aiello, and Daniele Quercia. 2019. 
The Language of Dialogue is Complex. In _Proceedings of the International AAAI Conference on Web and Social Media (ICWSM)_ , Vol. 13. 428–439. https://ojs.aaai.org/index.php/ICWSM/article/view/3241 * Romano and Nunamaker (2001) Nicholas C Romano and Jay F Nunamaker. 2001. Meeting Analysis: Findings from Research and Practice. In _Proceedings of the Annual Hawaii International Conference on System Sciences_. IEEE, 13–pp. https://doi.org/10.1109/HICSS.2001.926253 * Rudnicka et al. (2020) Anna Rudnicka, Joseph W Newbold, Dave Cook, Marta E Cecchinato, Sandy Gould, and Anna L Cox. 2020\. Eworklife: Developing Effective Strategies for Remote Working During the COVID-19 Pandemic. In _Eworklife: developing effective strategies for remote working during the COVID-19 pandemic_. The New Future of Work Online Symposium. * Saatçi et al. (2019) Banu Saatçi, Roman Rädle, Sean Rintel, Kenton O’Hara, and Clemens Nylandsted Klokmose. 2019\. Hybrid meetings in the modern workplace: stories of success and failure. In _International Conference on Collaboration and Technology_. Springer, 45–61. * Scepanovic et al. (2020) Sanja Scepanovic, Enrique Martin-Lopez, Daniele Quercia, and Khan Baykaner. 2020. Extracting Medical Entities From Social Media. In _Proceedings of the ACM Conference on Health, Inference, and Learning (CHIL)_. 170–181. https://doi.org/10.1145/3368555.3384467 * Schwarz (2016) Roger Schwarz. 2016\. 8 Ground Rules for Great Meetings. _Harvard Business Review_ (2016). https://hbr.org/2016/06/8-ground-rules-for-great-meetings * Tom Y. Chang and Neidell (2016) Tal Gross Tom Y. Chang, Joshua Graff Zivin and Matthew Neidell. 2016. Air Pollution Is Making Office Workers Less Productive. _Harvard Business Review_ (2016). https://hbr.org/2016/09/air-pollution-is-making-office-workers-less-productive * Tucker et al. (2010) Simon Tucker, Ofer Bergman, Anand Ramamoorthy, and Steve Whittaker. 2010. Catchup: A Useful Application of Time-travel in Meetings. In _Proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW)_. ACM, 99–102. https://doi.org/10.1145/1718918.1718937 * Wang and Kosinski (2018) Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. _Journal of Personality and Social Psychology_ 114, 2 (2018), 246. https://doi.org/10.1037/pspa0000098 * Weiser and Brown (1997) Mark Weiser and John Seely Brown. 1997. The Coming Age of Calm Technology. In _Beyond Calculation_. Springer, 75–85. https://doi.org/10.1007/978-1-4612-0685-9_6 * Wilson (1984) Edward O Wilson. 1984\. _Biophilia_. Harvard University Press. https://doi.org/10.4159/9780674045231 * Woolley et al. (2010) Anita Williams Woolley, Christopher F Chabris, Alex Pentland, Nada Hashmi, and Thomas W Malone. 2010\. Evidence for a Collective Intelligence Factor in the Performance of Human Groups. _Science_ 330, 6004 (2010), 686–688. https://doi.org/10.1126/science.1193147 * Yu et al. (2018) Bin Yu, Jun Hu, Mathias Funk, and Loe Feijs. 2018\. DeLight: Biofeedback Through Ambient Light for Stress Intervention and Relaxation Assistance. _Personal and Ubiquitous Computing_ 22, 4 (2018), 787–805. https://doi.org/10.1007/s00779-018-1141-6 * Zemp et al. (2015) Roland Zemp, William R Taylor, and Silvio Lorenzetti. 2015\. Are Pressure Measurements Effective in the Assessment of Office Chair Comfort/Discomfort? A Review. _Applied Ergonomics_ 48 (2015), 273–282. https://doi.org/10.1016/j.apergo.2014.12.010 * Zhong et al. 
(2020) Sailin Zhong, Hamed S Alavi, and Denis Lalanne. 2020\. Hilo-wear: Exploring Wearable Interaction with Indoor Air Quality Forecast. In _Proceedings of the ACM CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI)_. 1–8. https://doi.org/10.1145/3334480.3382813 * Zhou et al. (2021a) Ke Zhou, Luca Maria Aiello, Sanja Scepanovic, Daniele Quercia, and Sara Konrath. 2021a. The Language of Situational Empathy. _Proceedings of the ACM on Human-Computer Interaction_ 5, CSCW1 (2021), 1–19. https://doi.org/10.1145/3449087 * Zhou et al. (2021b) Ke Zhou, Marios Constantinides, Luca Maria Aiello, Sagar Joglekar, and Daniele Quercia. 2021b. The Role of Different Types of Conversations for Meeting Success. _IEEE Pervasive Computing_ 20, 4 (2021), 35–42. https://doi.org/10.1109/MPRV.2021.3115879 * Zhou et al. (2022) Ke Zhou, Marios Constantinides, Sagar Joglekar, and Daniele Quercia. 2022. Predicting Meeting Success With Nuanced Emotions. _IEEE Pervasive Computing_ (2022). https://doi.org/10.1109/MPRV.2022.3145047
# End-to-End Anti-Backdoor Learning on Images and Time Series Yujing Jiang1, Xingjun Ma2, Sarah Monazam Erfani1, Yige Li3, James Bailey1 1Faculty of Engineering and Information Technology, The University of Melbourne <EMAIL_ADDRESS>sarah.erfani<EMAIL_ADDRESS>2School of Computer Science, Fudan University <EMAIL_ADDRESS>3School of Computer Science and Technology, Xidian University <EMAIL_ADDRESS> ###### Abstract Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security. These attacks manipulate model behavior by embedding a hidden trigger during the training phase, allowing unauthorized control over the model’s output during inference time. Although numerous defenses exist for image classification models, there is a conspicuous absence of defenses tailored for time series data, as well as an end-to-end solution capable of training clean models on poisoned data. To address this gap, this paper builds upon Anti-Backdoor Learning (ABL) and introduces an innovative method, End-to- End Anti-Backdoor Learning (E2ABL), for robust training against backdoor attacks. Unlike the original ABL, which employs a two-stage training procedure, E2ABL accomplishes end-to-end training through an additional classification head linked to the shallow layers of a Deep Neural Network (DNN). This secondary head actively identifies potential backdoor triggers, allowing the model to dynamically cleanse these samples and their corresponding labels during training. Our experiments reveal that E2ABL significantly improves on existing defenses and is effective against a broad range of backdoor attacks in both image and time series domains. ## Introduction Deep learning has achieved remarkable performance in computer vision tasks such as object detection [1], motion tracking [2], and autonomous driving [3], as well as time series analysis in fields like finance [4], smart manufacturing [5], and healthcare [6]. With the increasing deployment of Deep Neural Networks (DNNs) in real-world applications, their vulnerability to backdoor attacks has become a significant concern. Backdoor attacks occur either by poisoning a few training samples with a trigger pattern [7] or by manipulating the training procedure [8], thereby implanting a backdoor in the DNN model. The compromised model learns a strong correlation between the trigger pattern and a chosen backdoor label. At inference time, it predicts correct labels on clean inputs while exhibiting a systematic bias towards the backdoor label in the presence of the trigger. This issue is especially serious as these technologies are being deployed in safety-critical applications. Therefore, developing defenses against such backdoor attacks is becoming a critical necessity. Backdoor attacks generally have two primary objectives: 1) high effectiveness, i.e., high attack success rate (ASR), and 2) high stealthiness, i.e., maintaining a high clean accuracy (CA) while visually undetectable. High effectiveness allows the attacker to manipulate the model’s prediction in a more precise manner, while high stealthiness ensures that the attacks cannot be easily detected by rudimentary filtering or manual inspection. Stealthiness also involves designing subtle trigger patterns that do not impact the model’s clean performance (performance on clean data), making detection even more challenging. Several works [7, 8, 9, 10, 11, 12] have studied backdoor attacks on both image and time series data. 
As more and more DNNs are being trained and employed in different types of real-world applications, defending against malicious and stealthy backdoor attacks on different tasks and data modalities has become an imperative task. This work takes the first attempt to design one single defense method that could work for two data modalities, i.e., images and time series. The reason why we chose images and time series is that both modalities are continuous (unlike discrete texts), their classification tasks are well-studied, and there exist multiple backdoor attacks for both types of data. However, current defense methods are mostly tailored for image data and have not been well studied for time series data. It is thus unclear whether defenses developed against image backdoor attacks are suitable for time series backdoor attacks. Anti-Backdoor Learning (ABL) [13] is a robust training method that was initially introduced to train clean models on poisoned data. It has demonstrated promising results against a diverse set of backdoor attacks on image datasets. Nonetheless, ABL has some limitations. One notable limitation is its two-stage training process. In the first stage, the model undergoes a training phase for a specified number of epochs, following which a subset of suspected backdoor samples is isolated. The model then enters a secondary training phase aimed at “unlearning” these potentially harmful patterns. Each of these stages demands distinct training objectives, thereby complicating the process and potentially reducing its efficiency. In this paper, we propose an End-to-End Anti-Backdoor Learning (E2ABL) training method that is capable of training a clean model on a poisoned dataset in an end-to-end manner. Our approach eliminates the need to change the training objectives or extend training epochs. Specifically, we introduce a secondary classification head attached to the shallow layers of the DNN model. This second head traps potential backdoor samples and corrects their labels. The second head is specifically designed to be sensitive to backdoor correlations and samples. This ensures backdoor samples are captured in the shallow layers, safeguarding the primary head and steering the model training toward a more secure and trustworthy direction. To summarize, our main contributions are: * • We introduce End-to-End Anti-Backdoor Learning (E2ABL), a novel end-to-end robust training method that can train backdoor-free models on backdoored datasets. E2ABL works effectively for both image and time series data, and to the best of our knowledge, is the first backdoor defense method for time series models. * • E2ABL proposes a novel strategy of using a second head to safeguard the learning of the main network against backdoor attacks. The second head is designed to learn, capture, and rectify backdoored samples in real-time, and is trained concurrently with the main head to neutralize the impact of backdoor attacks. * • Through extensive empirical evaluations, we demonstrate that E2ABL can serve as an effective backdoor defense method against a broad spectrum of backdoor attacks on both image and time series data. Further, models trained using our E2ABL consistently outperform those trained by other defense methods, exhibiting higher clean accuracy (CA) and lower attack success rate (ASR). ## Related Work In this section, we provide a brief overview of the existing literature focusing on both backdoor attacks and defenses. 
### Backdoor Attack #### Image Attacks Backdoor attacks optimize two primary objectives including attack effectiveness, and stealthiness. These objectives are fulfilled by optimizing metrics such as the attack success rate and clean accuracy, while also focusing on the design of increasingly subtle and inconspicuous trigger patterns. Additionally, efforts are made to minimize the rate at which training samples are poisoned. The seminal work in this domain, BadNets [7], laid the foundation for backdoor attacks on Deep Neural Networks (DNNs) by introducing a simplistic checkerboard pattern affixed to the lower-right corner of a clean image. In the wake of this pioneering study, subsequent research has ventured into more sophisticated techniques, such as the integration of blended backgrounds [14], the inclusion of natural reflections [8], and the utilization of imperceptible noise [8, 9, 10, 15]. There are also works utilizing adversarial patterns [16] and sample-wise patterns [17, 18] as backdoor attack methods. Remarkably, some attacks have even demonstrated the ability to reverse-engineer training data without requiring access to the original dataset [19]. Moreover, clean-label attacks, which insert triggers without altering the actual class labels, have gained attention in recent research [16, 20, 21, 22, 23]. Many of these methods achieve considerable attack success rates while contaminating less than 10% of the training dataset, some being effective at a surprisingly low poisoning rate of 0.1%. These trends underscore the stealthy and evasive nature of backdoor attacks, thereby accentuating the urgent need for robust anti-backdoor learning mechanisms. It is worth noting that the majority of these attacks and subsequent studies have been primarily focused on image data and image classification models. #### Time Series Attacks The field of backdoor attacks on time series data is still in its nascent stage. One of the pioneering works in this domain is by [24], which transformed time series data into 2D images. This transformation allowed them to apply existing backdoor attack techniques originally designed for image data. Building on this initial exploration, [11] introduced TimeTrojan, a specialized backdoor attack tailored for DNN-based time series classifiers. TimeTrojan utilizes a multi-objective optimization framework, enabling it to establish strong and stealthy links between the hidden trigger and the target label, thereby making the attack more effective and less detectable. More recently, the research landscape has seen the advent of a Generative Adversarial Network (GAN)-based approach presented by [12]. This method overlays a unique, sample-specific trigger on each compromised time series data point. The GAN-based approach not only elevates the level of stealthiness but also enhances the natural appearance of the poisoned data, further complicating detection efforts. In addition to data poisoning-based backdoor attacks, it is worth mentioning another category of attacks that directly target the model’s parameters [25, 26]. These parameter-based attacks can be executed independently or in conjunction with data poisoning-based strategies, thereby presenting a multi- faceted threat landscape. However, the primary focus of this paper remains on countermeasures against data poisoning-based backdoor attacks. 
The exploration of defenses against model parameter manipulation-based backdoor attacks constitutes an avenue for our future work, given its distinct set of challenges and implications. ### Backdoor Defenses (On Image Data) While a large number of defense methods have been proposed to combat backdoor attacks on image data, there is a noticeable absence of techniques specifically tailored for time series data. Prominent existing defenses, such as Mode Connectivity Repair (MCR) [27], Neural Attention Distillation (NAD) [28], Adversarial Neuron Pruning (ANP) [29], and Reconstructive Neural Pruning (RNP) [30] are principally engineered to counter the harmful influence of backdoor triggers in image-based neural networks. These methods largely neglect the unique characteristics and vulnerabilities inherent to time series models. Moreover, earlier defense strategies like Fine-Pruning [31] have been found to be less effective in the face of evolving, more sophisticated backdoor attacks [8, 32]. In response to these challenges, [13] introduced the concept of Anti-Backdoor Learning (ABL) where the goal is to train clean models directly on poisoned data. Their proposed ABL method consists of two distinctive stages. In the first stage, the target model undergoes initial training for several epochs, following which a limited number of samples with the lowest loss values are isolated as backdoor samples. The second stage involves fine-tuning the model in conjunction with maximizing the model’s loss on the isolated backdoor samples. By employing different training objectives in the two stages, ABL diverges from standard training procedures which are mostly end-to-end training with one single loss. Despite these advancements in the image domain, the time series domain is notably under-researched. Particularly, there are no established defense methods specifically designed to counter backdoor attacks on time series models, highlighting an urgent gap in the current literature. In this paper, we advocate the concept of ABL and propose a novel End-to-End Anti-Backdoor Learning (E2ABL) method that works for both images and time series. The second head attached to the main network in E2ABL serves as an innovative mechanism that is capable of real-time identification, capture, and rectification of backdoor samples, thereby protecting the integrity of the learning process conducted by the main network. ## End-to-End Anti-Backdoor Learning This section presents our innovative E2ABL method, which features two key advancements: a dual-head model architecture and a true class recovery mechanism. We also outline the threat model, formulation, and motivation for E2ABL. ### Threat Model In this study, we focus on classification tasks involving both image and time series data. The methodology could potentially be extended to other types of data and applications, such as natural language processing or anomaly detection. We adopt a classic data poisoning-based threat model where the adversary can poison the training data by injecting backdoor trigger patterns into a few clean samples. The backdoor-poisoned dataset is then used by the defender to train a target DNN model. The defender has full control over the training process but has no prior knowledge of the poisoning statistics, including the existence of an attack, the number of poisoned samples, the trigger pattern(s), or the backdoor target label. 
The defender’s objective is to train a clean, backdoor-free model from a potentially poisoned dataset, aiming to attain clean performance equivalent to that of models trained on clean data. This scenario embodies a robust training setting in which backdoor mitigation or elimination strategies, including those devised in different settings, can still be applied effectively. ### Problem Formulation Consider a benign training set comprising $N$ independently and identically distributed samples, denoted as ${\mathcal{D}}=\\{({\bm{x}}_{i},y_{i})\\}_{i=1}^{N}$. In this dataset, each ${\bm{x}}_{i}$ represents a single training sample, and $y_{i}$ is its corresponding ground truth label. A classification model, represented as $f_{\theta}$, learns the function $f_{\theta}:{\mathcal{X}}\rightarrow{\mathcal{Y}}$, which maps the input space to the label space. This learning process is usually achieved by minimizing the empirical error, as shown below: $\mathcal{L}=\mathbb{E}_{({\bm{x}},y)\sim{\mathcal{D}}}[\ell(f_{\theta}({\bm{x}}),y)],$ (1) where $\ell(\cdot)$ denotes the loss function, such as the widely used cross-entropy loss. A backdoor adversary manipulates a portion of the benign dataset ${\mathcal{D}}$, creating a backdoor-poisoned subset ${\mathcal{D}}_{p}$, while leaving the remaining dataset ${\mathcal{D}}_{c}$ intact. This forms a compromised dataset denoted as ${\mathcal{D}}^{\prime}={\mathcal{D}}_{p}\cup{\mathcal{D}}_{c}$. The poisoned samples are created through the operation ${\bm{x}}_{i}^{\prime}={\bm{x}}_{i}\odot{\bm{p}}_{i}$, where ${\bm{p}}_{i}$ denotes a backdoor trigger pattern, which can either be specific to each sample or consistent across samples (in this case, ${\bm{p}}_{1}={\bm{p}}_{2}=\cdots={\bm{p}}_{|{\mathcal{D}}_{p}|}$). The operator $\odot$ denotes an element-wise modification process, such as addition, subtraction, or replacement. Typically, all compromised samples within ${\mathcal{D}}_{p}$ share a common backdoor target label $y^{\prime}$. Training a model on the (potentially) manipulated dataset ${\mathcal{D}}^{\prime}$ can be considered a dual-task learning problem involving: 1) a “clean task” focusing on the clean subset of data ${\mathcal{D}}_{c}$, and 2) a “backdoor task” concentrating on the poisoned subset of data ${\mathcal{D}}_{p}$. In standard (unsecured) training, the model is trained on both the clean and poisoned data by minimizing the following empirical error: ${\mathcal{L}}=\underbrace{{\mathbb{E}}_{({\bm{x}},y)\sim{\mathcal{D}}_{c}}[\ell(f_{\theta}({\bm{x}}),y)]}_{\textup{clean task}}+\underbrace{{\mathbb{E}}_{({\bm{x}}^{\prime},y^{\prime})\sim{\mathcal{D}}_{p}}[\ell(f_{\theta}({\bm{x}}^{\prime}),y^{\prime})]}_{\textup{backdoor task}}.$ (2) The outcome of the above optimization is a backdoored model, denoted as $f_{\theta}^{\prime}$, which consistently outputs the backdoor target label whenever the trigger pattern appears, i.e., $f_{\theta}^{\prime}({\bm{x}}\odot{\bm{p}})=y^{\prime}$. To inhibit the learning of backdoor samples, ABL employs its first stage to segregate clean samples into $\hat{{\mathcal{D}}}_{c}$ and possibly poisoned samples into $\hat{{\mathcal{D}}}_{p}$. 
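Before turning to ABL’s second stage, the data-poisoning operation above can be made concrete with a short sketch. The following Python (PyTorch) snippet applies a BadNets-style patch trigger to a randomly chosen fraction of an image batch and flips the corresponding labels to a single target class; the white corner patch, the 10% poisoning rate, and the target label are illustrative assumptions, not the exact attack configurations used in our experiments.

```python
import torch

def poison_dataset(images, labels, poison_rate=0.1, target_label=0, patch_size=3):
    """Create a BadNets-style poisoned copy of (images, labels).

    images: float tensor of shape (N, C, H, W) with values in [0, 1]
    labels: long tensor of shape (N,)
    Returns the poisoned tensors and a boolean mask marking poisoned indices.
    """
    images = images.clone()
    labels = labels.clone()
    n = images.size(0)
    num_poison = int(poison_rate * n)
    poison_idx = torch.randperm(n)[:num_poison]

    # Stamp a white square trigger at the lower-right corner (x' = x "modified by" p).
    images[poison_idx, :, -patch_size:, -patch_size:] = 1.0
    # All poisoned samples share one adversary-chosen backdoor target label y'.
    labels[poison_idx] = target_label

    mask = torch.zeros(n, dtype=torch.bool)
    mask[poison_idx] = True
    return images, labels, mask


# Example: poison 10% of a toy CIFAR-10-sized batch.
x = torch.rand(512, 3, 32, 32)
y = torch.randint(0, 10, (512,))
x_poisoned, y_poisoned, is_poisoned = poison_dataset(x, y)
print(is_poisoned.sum().item(), "samples carry the trigger")
```

Training a classifier with the standard cross-entropy loss on such a mixed batch corresponds exactly to the dual-task objective in Equation (2).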
In its second stage, ABL then seeks to minimize the model’s error on $\hat{{\mathcal{D}}}_{c}$ while maximizing its error on $\hat{{\mathcal{D}}}_{p}$ by minimizing the following objective: ${\mathcal{L}}=\underbrace{{\mathbb{E}}_{({\bm{x}},y)\sim{\hat{\mathcal{D}}}_{c}}[\ell(f_{\theta}({\bm{x}}),y)]}_{\textup{clean training}}-\underbrace{{\mathbb{E}}_{({\bm{x}}^{\prime},y^{\prime})\sim{\hat{\mathcal{D}}}_{p}}[\ell(f_{\theta}({\bm{x}}^{\prime}),y^{\prime})]}_{\textup{backdoor unlearning}}.$ (3) Notably, the opposite optimization direction on $\hat{{\mathcal{D}}}_{p}$ can help unlearn the backdoor trigger from the model. Following ABL, we next introduce our E2ABL method, which leverages an enhanced optimization objective with the integration of a second classification head. ### Proposed E2ABL Method ##### Motivation As discovered in ABL [13], a strong correlation exists between the trigger pattern and the backdoor label. The more potent the attack, the stronger the correlation becomes, which in turn allows the backdoor samples to be learned more quickly by the model. This indicates that the backdoor task, as defined in Equation (2), is generally less complex than the clean task. This suggests that a simpler task could be efficiently learned by either a smaller network or the shallow layers of a deeper network. Such insight motivates us to add a second classification head to the shallow layers of the target model, specifically designed to learn and trap the backdoor samples. The placement of the second head in a ResNet-34 model [33] is illustrated in Figure 1. Leveraging the second head, our E2ABL methodology trains a model by minimizing the following empirical error: $\displaystyle{\mathcal{L}}$ $\displaystyle=\underbrace{{\mathbb{E}}_{({\bm{x}},y)\sim\hat{{\mathcal{D}}}_{c}}[\ell(h_{1}({\bm{x}}),y)]}_{\textup{main head: clean learning}}+\underbrace{{\mathbb{E}}_{({\bm{x}}^{\prime},y^{\prime})\sim\hat{{\mathcal{D}}}_{p}}[\ell(h_{2}({\bm{x}}^{\prime}),y^{\prime})]}_{\textup{second head: backdoor learning}}$ $\displaystyle+\underbrace{{\mathbb{E}}_{({\bm{x}}^{\prime},y_{\textup{rectified}})\sim{{\mathcal{D}}}^{*}}[\ell(h_{1}({\bm{x}}^{\prime}),y_{\textup{rectified}})]}_{\textup{main head: backdoor recovery}},$ (4) where $h_{1}(\cdot)$ and $h_{2}(\cdot)$ denote the primary and second heads of the network $f_{\theta}$, respectively. The detected clean and poisoned samples are denoted as $\hat{{\mathcal{D}}}_{c}$ and $\hat{{\mathcal{D}}}_{p}$ (maintaining $\hat{{\mathcal{D}}}_{c}\cap\hat{{\mathcal{D}}}_{p}=\emptyset$ and ${\mathcal{D}}^{\prime}=\hat{{\mathcal{D}}}_{c}\cup\hat{{\mathcal{D}}}_{p}$), and we use ${\mathcal{D}}^{*}$ to denote samples from the detected poison set $\hat{{\mathcal{D}}}_{p}$ with corrected class labels. In our design, the second head plays a crucial role in enhancing the robustness of the training process, but it is removed when training is completed, i.e., only the network and the main head are used as the final model ($f=h_{1}$). A noteworthy distinction from the ABL objective in Equation (3) is that the second head’s objective is to minimize the error on $\hat{{\mathcal{D}}}_{p}$ (as opposed to maximizing it), so that backdoor samples are effectively detected and trapped in the second head. Moreover, E2ABL utilizes a specialized detection and recovery strategy for both $\hat{{\mathcal{D}}}_{c}$ and $\hat{{\mathcal{D}}}_{p}$. 
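To illustrate how the three terms in Equation (4) are combined, the sketch below assembles the E2ABL loss for one training step. It is a minimal Python (PyTorch) illustration under the assumption that the dual-head model returns a pair of logits (main head $h_1$, second head $h_2$) and that the three mini-batches are drawn from $\hat{{\mathcal{D}}}_{c}$, $\hat{{\mathcal{D}}}_{p}$, and ${\mathcal{D}}^{*}$, respectively; it is not our full training pipeline.

```python
import torch.nn.functional as F

def e2abl_loss(model, clean_batch, poison_batch, rectified_batch):
    """One-step sketch of the E2ABL objective in Equation (4).

    `model(x)` is assumed to return (main_logits, second_logits),
    i.e., the outputs of the main head h1 and the second head h2.
    """
    x_c, y_c = clean_batch        # samples believed to be clean (D_hat_c)
    x_p, y_p = poison_batch       # samples believed to be poisoned (D_hat_p)
    x_r, y_r = rectified_batch    # poisoned samples with corrected labels (D*)

    # Main head: clean learning on D_hat_c.
    main_clean, _ = model(x_c)
    loss_clean = F.cross_entropy(main_clean, y_c)

    # Second head: deliberately *learns* the backdoor on D_hat_p to trap it.
    _, second_poison = model(x_p)
    loss_trap = F.cross_entropy(second_poison, y_p)

    # Main head: backdoor recovery, trained on relabeled samples from D*.
    main_rect, _ = model(x_r)
    loss_recover = F.cross_entropy(main_rect, y_r)

    return loss_clean + loss_trap + loss_recover
```

Note that, in contrast to the unlearning term in Equation (3), the second term here is added rather than subtracted: the second head deliberately fits the suspected backdoor samples so that they are trapped away from the main head.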
The detection and recovery strategy is premised on the drop rate of the training loss for detecting backdoor samples and on the subsequent recovery of their true classes; the specifics are described in the following subsections. Figure 1: The second classification head attached to ResNet-34. #### Backdoor Sample Detection Through empirical observation, we note that several convolutional channels within the shallow layers are closely tied to the backdoor trigger. These channels consistently produce specific features for nearly every backdoored input, even for dynamic backdoor attacks that utilize sample-wise trigger patterns. The deep layers of the model then amplify these features, prompting the model to predict the backdoor class. More significantly, due to their short-cut nature [34], these trigger-related features can be rapidly and adequately learned even by very shallow networks [35, 36]. In light of this, E2ABL utilizes an additional classification head to capture the backdoor features at the shallow layers and trap them there to safeguard the main head during the training procedure. Specifically, the second head is designed to learn backdoor features at several shallow layers and consists of two new convolutional layers and one fully connected (FC) layer, as depicted in Figure 1. The final output of this second head aligns with that of the main head, producing class probabilities. The pathway from the input to the output of the second head operates as a shallow model, adept at learning the backdoor features. Prior to training the main network or executing backdoor sample detection, the second head first undergoes a self-training phase (lasting only a few epochs) on the entire dataset ${\mathcal{D}}^{\prime}$ as a warm-up. Subsequently, it partitions all samples into two subsets, $\hat{{\mathcal{D}}}_{c}$ and $\hat{{\mathcal{D}}}_{p}$, based on the loss reduction during the warm-up phase. Samples that exhibit the most precipitous decline in loss are allocated to the poison subset, $\hat{{\mathcal{D}}}_{p}$. The metric used to bifurcate the training data is defined as follows: $\Delta\ell=\sum_{i=2}^{n}\ \frac{\mathcal{L}_{i}-\mathcal{L}_{i-1}}{(i-1)^{2}},$ (5) where $\mathcal{L}_{i}$ is the historical loss value in the $i^{th}$ epoch. Following the poisoning rate (less than 20%) assumption made in ABL, we segregate the top 20% of training samples exhibiting the most significant loss drops $\Delta\ell$ into $\hat{{\mathcal{D}}}_{p}$ (this percentage is discussed further in the ablation studies), while the remainder is retained in $\hat{{\mathcal{D}}}_{c}$. These two subsets form the basis for training the main and second heads as detailed in Equation (4). This detection process is performed at the conclusion of each training epoch, subsequent to the initial warm-up phase. Our backdoor sample detection strategy as described above is notably simpler than the loss-restricted training and filtering strategy employed in ABL. Unlike ABL, which strives to accurately identify the backdoor samples at this stage, our method partitions the dataset into two broad subsets. While it is likely that the detected poison subset $\hat{{\mathcal{D}}}_{p}$ will encompass the majority of the backdoor samples (as some clean “easy-to-learn” samples could also exhibit exceptionally large loss drops, as demonstrated in [13]), we cannot fully separate the backdoor samples from the rest at this stage. 
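As an illustration, the loss-drop score in Equation (5) and the resulting clean/poison partition can be computed from a per-sample loss history as in the Python sketch below. The 20% split matches the default discussed above; treating the steepest loss decline (i.e., the most negative score) as the most suspicious is our reading of the sign convention, and the per-epoch loss history of the second head is assumed to be recorded elsewhere in the training loop.

```python
import torch

def loss_drop_scores(loss_history):
    """Equation (5): weighted sum of epoch-to-epoch loss changes per sample.

    loss_history: tensor of shape (num_epochs, num_samples) holding the
    second head's loss for every training sample at each epoch.
    """
    num_epochs = loss_history.size(0)
    scores = torch.zeros(loss_history.size(1))
    for i in range(1, num_epochs):          # code index i maps to epoch i+1 in the paper
        scores += (loss_history[i] - loss_history[i - 1]) / float(i ** 2)
    return scores


def partition_by_loss_drop(loss_history, poison_fraction=0.2):
    """Split sample indices into suspected-poison (largest loss drops) and clean."""
    scores = loss_drop_scores(loss_history)
    num_poison = int(poison_fraction * scores.numel())
    # Most negative score = steepest loss decline = most suspicious (assumed convention).
    order = torch.argsort(scores)           # ascending
    poison_idx = order[:num_poison]         # D_hat_p
    clean_idx = order[num_poison:]          # D_hat_c
    return clean_idx, poison_idx
```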
We next introduce the second key technique of E2ABL, true class recovery, which makes it more effective than ABL. #### True Class Recovery This operation purifies the labels of certain samples in the detected poison subset $\hat{{\mathcal{D}}}_{p}$ and incorporates these corrected samples into an additional subset, ${\mathcal{D}}^{*}$. This additional subset is then used to train the main head in conjunction with $\hat{{\mathcal{D}}}_{c}$. Backdoor attacks are commonly tied to a specific label deliberately chosen by the adversary. This particular label is referred to as the backdoor label (or class). Intuitively, if we manage to successfully identify and modify this backdoor label, we can effectively break the correlation between the trigger pattern and the backdoor label, thereby mitigating the attack’s impact. Furthermore, through empirical experiments, we discovered that the true label of a backdoor sample can be recovered using the output of the main head. As the main head remains unaffected by the attack during training, it allows us to recover the authentic identity of the sample, ensuring the model learns the correct information. Intuitively, samples in $\hat{{\mathcal{D}}}_{p}$ with the largest loss drop, such as the top 1% of all training samples, are most likely to be backdoor samples. We take their predominant label prediction (the one with the highest softmax value) from the second head as the backdoor label. This detection strategy is built upon two assumptions: 1) backdoor samples have the most significant loss drop, and 2) the second head is designed to be highly skilled in learning the backdoor. Following this phase, samples with the most notable loss drop are relabeled based on the main head’s prediction and incorporated into ${\mathcal{D}}^{*}$. The entire training procedure of E2ABL is described in Algorithm 1. Throughout the training process, the second head functions as a backdoor supervisor: it segregates the input samples into clean and backdoor subsets, identifies the most suspicious samples within the backdoor subset and corrects their labels, and discovers samples from the backdoor subset that are not particularly suspicious (those bearing a non-backdoor label). These operations are all contingent on the predictions made by the second head. In summary, both heads are concurrently trained on the corresponding subsets defined in Equation (4). These subsets are dynamically updated at the end of each epoch, after the warm-up of the second head. Algorithm 1 Training Procedure of E2ABL Input: ${\mathcal{D}}^{\prime}$: backdoor-poisoned dataset; $h_{1}(\cdot)$, $h_{2}(\cdot)$: the main and second head; $\ell_{\text{CE}}(\cdot)$: cross-entropy loss; $\hat{{\mathcal{D}}}_{c}$: detected clean subset; $\hat{{\mathcal{D}}}_{p}$: detected poison subset; ${\mathcal{D}}^{*}$: samples with corrected labels. Output: $h_{1}(\cdot)$ Hyper-parameters: $T_{\text{warm\\_start}}$: warm start epochs for the second head; $T_{\text{training}}$: total training epochs; $\gamma_{c}$, $\gamma_{p}$: the clean and poison percentages of samples for detection; $y_{\textup{rectified}}$: the non-backdoor class with the maximum probability. 
1: for $i$ in [1, …, $T_{\text{warm\\_start}}$ ] do 2: # Warm up the second head 3: $h_{2}\leftarrow\operatorname*{arg\,min}_{h_{2}}\mathbb{E}_{({\bm{x}},y)\sim{\mathcal{D}}^{\prime}}[\ell_{\text{CE}}(h_{2}({\bm{x}}),y)]$ 4: end for 5: for $i$ in [1, …, $T_{\text{training}}$] do 6: # Detect backdoor samples and fix their labels 7: $\hat{{\mathcal{D}}}_{c}$ $\leftarrow$ for $\gamma_{c}\%$ low loss drop samples in ${\mathcal{D}}^{\prime}$ w.r.t. $h_{2}(\cdot)$ 8: $\hat{{\mathcal{D}}}_{p}$ $\leftarrow$ for $\gamma_{p}\%$ high loss drop samples in ${\mathcal{D}}^{\prime}$ w.r.t. $h_{2}(\cdot)$ 9: ${\mathcal{D}}^{*}$ $\leftarrow(\scalebox{0.85}{${\mathcal{D}}^{\prime}\setminus\hat{{\mathcal{D}}}_{c}$})$ with corrected labels $y_{\textup{rectified}}$ 10: # Update the two heads 11: $h_{1}\leftarrow\operatorname*{arg\,min}_{h_{1}}\mathbb{E}_{({\bm{x}},y)\sim\hat{{\mathcal{D}}}_{c}}[\ell_{\text{CE}}(h_{1}({\bm{x}}),y)]$ 12: $h_{1}\leftarrow\operatorname*{arg\,min}_{h_{1}}\mathbb{E}_{({\bm{x}}^{\prime},y_{\textup{rectified}})\sim{\mathcal{D}}^{*}}[\ell_{\text{CE}}(h_{1}({\bm{x}}^{\prime}),y_{\textup{rectified}})]$ 13: $h_{2}\leftarrow\operatorname*{arg\,min}_{h_{2}}\mathbb{E}_{({\bm{x}}^{\prime},y^{\prime})\sim\hat{{\mathcal{D}}}_{p}}[\ell_{\text{CE}}(h_{2}({\bm{x}}^{\prime}),y^{\prime}]$ 14: end for 15: return $h_{1}$ ## Experiments TABLE I: The attack success rate (ASR) (lower is better) and the clean accuracy (CA) (higher is better) of 4 backdoor defense methods against state- of-the-art backdoor attacks on both image and time series datasets. ‘None’ means no attack. Dataset | Attack | No Defense | FP | NAD | ABL | E2ABL (Ours) ---|---|---|---|---|---|--- ASR | CA | ASR | CA | ASR | CA | ASR | CA | ASR | CA CIFAR-10 | None | N/A | 89.32% | N/A | 86.07% | N/A | 87.43% | N/A | 88.04% | N/A | 89.39% BadNets | 100.0% | 87.51% | 99.87% | 82.90% | 3.48% | 84.11% | 3.18% | 86.44% | 0.17% | 87.96% Blend | 100.0% | 85.64% | 86.40% | 82.16% | 4.97% | 83.11% | 16.85% | 84.93% | 8.95% | 85.21% Trojan | 100.0% | 88.77% | 65.17% | 82.46% | 16.43% | 76.59% | 3.45% | 87.38% | 1.87% | 88.26% Dynamic | 100.0% | 86.40% | 87.63% | 82.48% | 31.59% | 73.14% | 18.83% | 85.93% | 12.15% | 86.22% CL | 99.81% | 84.11% | 51.94% | 82.16% | 14.95% | 81.14% | 0.00% | 89.05% | 0.18% | 89.11% SIG | 99.45% | 84.58% | 74.81% | 83.04% | 2.37% | 82.18% | 0.08% | 88.44% | 0.25% | 88.92% LBA | 99.02% | 82.89% | 56.72% | 81.19% | 10.07% | 78.28% | 0.12% | 81.26% | 0.03% | 82.34% CBA | 89.14% | 85.71% | 75.94% | 81.32% | 34.94% | 81.12% | 29.28% | 84.75% | 25.64% | 85.27% DFST | 99.55% | 84.92% | 78.47% | 81.67% | 35.01% | 79.39% | 5.47% | 81.14% | 3.21% | 81.95% Average | 98.55% | 85.61% | 75.22% | 82.15% | 17.09% | 79.90% | 8.58% | 85.48% | 5.83% | 86.14% GTSRB | None | N/A | 97.91% | N/A | 90.48% | N/A | 95.64% | N/A | 96.78% | N/A | 97.95% BadNets | 100.0% | 97.50% | 99.40% | 88.12% | 0.22% | 89.62% | 0.05% | 96.42% | 0.09% | 96.89% Blend | 100.0% | 96.12% | 99.18% | 87.34% | 7.54% | 93.16% | 25.81% | 93.27% | 12.18% | 93.95% Trojan | 99.84% | 96.74% | 93.41% | 85.72% | 0.46% | 90.55% | 0.43% | 95.24% | 0.27% | 95.68% Dynamic | 100.0% | 97.13% | 99.82% | 85.16% | 69.64% | 79.15% | 6.48% | 95.87% | 5.69% | 96.23% SIG | 96.58% | 97.02% | 81.04% | 86.43% | 4.97% | 90.42% | 5.45% | 96.41% | 4.78% | 96.79% Average | 99.28% | 96.90% | 94.57% | 86.55% | 16.57% | 88.58% | 7.64% | 95.44% | 4.60% | 95.91% ArabicDigits | None | N/A | 86.24% | N/A | 82.15% | N/A | 83.44% | N/A | 84.95% | N/A | 86.10% TT-FGSM | 83.43% | 72.27% | 21.14% | 62.78% 
| 7.63% | 69.48% | 0.04% | 83.56% | 0.13% | 84.32% TT-DE | 96.06% | 69.12% | 42.83% | 60.46% | 26.82% | 68.17% | 4.16% | 81.83% | 2.54% | 82.85% TSBA-B | 97.70% | 83.49% | 63.22% | 62.15% | 57.63% | 71.27% | 24.11% | 79.19% | 16.89% | 81.52% Average | 92.40% | 74.96% | 42.40% | 61.80% | 30.69% | 69.64% | 9.44% | 81.53% | 6.52% | 82.90% ECG5000 | None | N/A | 99.60% | N/A | 95.50% | N/A | 96.90% | N/A | 97.40% | N/A | 99.40% TT-FGSM | 76.10% | 88.20% | 20.00% | 64.00% | 8.10% | 72.60% | 0.00% | 92.90% | 0.00% | 96.40% TT-DE | 98.20% | 86.40% | 29.40% | 63.10% | 18.20% | 70.40% | 1.90% | 92.60% | 0.60% | 96.00% TSBA-B | 98.70% | 98.10% | 58.70% | 63.50% | 45.20% | 74.60% | 10.80% | 91.70% | 8.60% | 94.80% Average | 91.00% | 90.90% | 36.03% | 63.53% | 23.83% | 72.53% | 4.23% | 92.40% | 3.07% | 95.73% UWave | None | N/A | 92.47% | N/A | 86.43% | N/A | 88.96% | N/A | 90.17% | N/A | 92.54% TT-FGSM | 87.15% | 81.10% | 14.49% | 72.37% | 5.62% | 76.14% | 0.37% | 88.72% | 0.69% | 91.55% TT-DE | 96.64% | 78.12% | 28.41% | 71.68% | 11.27% | 74.40% | 3.32% | 86.62% | 1.73% | 91.15% TSBA-B | 94.13% | 89.67% | 54.73% | 73.69% | 48.49% | 77.13% | 15.34% | 84.76% | 14.64% | 89.27% Average | 92.64% | 82.96% | 32.54% | 72.58% | 21.79% | 75.89% | 6.34% | 86.70% | 5.69% | 90.66% ### Attack Configurations Our analysis encompasses 9 backdoor attacks on image datasets, including CIFAR-10 [37] and GTSRB [38] datasets. The evaluated attacks consist of 4 classic backdoor attacks, including BadNets [7], Blend [14], Trojan [19], and Dynamic [17]; 2 clean-label backdoor attacks, including Clean-Label attack (CL) [21] and Sinusoidal signal attack (SIG) [39]; and 3 feature-space backdoor attacks, including Latent Backdoor Attack (LBA) [32], Composite Backdoor Attack (CBA) [40], and DFST [10]. Furthermore, we tested our E2ABL against 3 time series backdoor attacks, including TimeTrojan-FGSM [11], TimeTrojan-DE [11], and TSBA-B [12], using 3 multivariate signal datasets. These datasets include ArabicDigits, ECG5000, and UWave, all sourced from the MTS Archive [41]. All the tested image and time series-based attacks adopt a consistent poison rate of 10%, with all other training parameters being set as per their original configurations. It is worth noting that several attacks, including CL, LBA, CBA, and DFST, could not be reproduced on certain image datasets such as GTSRB. Consequently, those experiments have been excluded from our reported results. ### Defense and Training Details We assess our E2ABL in comparison with 3 representative defense methods: Fine- pruning (FP) [31], Neural Attention Distillation (NAD) [28], and Anti-backdoor Learning (ABL) [13]. Given that ResNet has been demonstrated to be an effective baseline method for time series classification, as supported by [42, 12], we utilize ResNet-34 as the backbone model for all poisoned datasets in each attack scenario, on both image and time series data. For FP, we prune the final convolutional layer of each model until the CA falls below the minimum CA under the no-defense condition. In terms of NAD, we follow the standard distillation procedure, which necessitates fine-tuning the backdoored student network for 10 epochs with a 5% clean data subset. Regarding the original ABL defense, we train the model for 20 epochs, applying a learning rate of 0.1 on CIFAR-10 and 0.01 on GTSRB prior to the turning epoch. We followed the ABL work and set the backdoor isolation and unlearning rate ($\gamma_{p}$) as 1%. 
After successfully segregating 1% of the potential backdoor samples, we proceed to fine-tune the model for 60 additional epochs with all training samples to restore the model’s clean accuracy. We then carry out backdoor unlearning with the 1% isolated backdoor samples, applying a learning rate of 0.0001 for 20 epochs. As for our E2ABL defense, we follow the training procedure outlined in Algorithm 1. We initially train the second head on the full training dataset for 2 epochs with a learning rate of 0.1 and monitor the loss reduction for each training sample. The subsets defined in Equation (4) are dynamically updated based on the weighted sum of loss reductions specified in Equation (5), with the thresholds $\gamma_{c}$ and $\gamma_{p}$ set to 80% and 1%, respectively. The E2ABL model undergoes a total of 60 epochs of training, inclusive of the 2 warm-up epochs, with a learning rate of 0.01 for the main head and 0.005 for the second head. ### Effectiveness of E2ABL Defense #### Comparison to Existing Defenses Table I shows that our E2ABL achieves the best clean accuracy among all backdoor defense techniques while maintaining an exceptionally low attack success rate (ASR) against state-of-the-art backdoor attacks. In the case of clean-label attacks, E2ABL fully recovers clean accuracy while maintaining an almost negligible ASR. On average, our E2ABL outperforms the original ABL by a margin of 2.76% in terms of lower ASR and 0.66% in terms of higher clean accuracy (CA) across all 9 experiments on the image datasets. For time series, compared to the original ABL, our E2ABL achieves a lower ASR by 1.58% and a higher CA by 2.89% on average across all time series experiments. Previous research has shown that applying backdoor defense methods to clean training datasets can negatively impact the clean accuracy of the final model [13]. However, compared to existing defenses, our E2ABL achieves an even higher CA when the training data is completely clean, as shown in the ‘None’ rows for each dataset in Table I. Interestingly, on certain datasets such as CIFAR-10 and UWave, the models trained by our E2ABL exhibit higher CAs than those trained using standard training procedures. For instance, when the training data is clean, a standard model achieves a CA of 89.32%, but a model trained with our E2ABL reaches a CA of 89.39%. This phenomenon may be attributed to the exclusion of those “easy-to-learn” samples, which could have a negative impact on the model’s overall performance. This underlines a unique advantage of our defense method in real-world scenarios where the presence of a backdoor attack remains uncertain. Note that our defense requires no prior knowledge of the attack; instead, it uses an additional backdoor detection head (i.e., the second head) to determine whether there are any backdoor samples in the training set and to recover the potential backdoor label. #### Effectiveness with Different Subset Sizes We also study the correlation between the isolation size (the ratio between the isolated clean subset and the full set, $\gamma_{c}$) and the performance of our E2ABL. We test E2ABL with the clean subset size varying from 50% to 90% against all nine state-of-the-art backdoor attacks on the CIFAR-10 dataset and show the ASR and CA results in Figure 2. Figure 2 shows that, with a larger clean subset, more training samples are used to train the main head, resulting in moderately higher CA. 
However, achieving a perfect separation between backdoor and clean samples is not feasible; increasing the clean subset size reduces the precision of clean sample detection, so poisoned samples have a greater chance of being mixed into the clean subset, causing a significant rise in ASR (i.e., worse performance). We also find that even with a few ($<5$) poisoned samples mixed into the clean subset, a noticeable increase in ASR is observed in the final model. This also indicates the importance of our proposed strategy, which first performs a less accurate but more secure clean-vs-poison isolation and then gradually refines the samples in the poison subset $\hat{{\mathcal{D}}}_{p}$ to improve the clean performance. Figure 2: Performance of E2ABL regarding ASR and CA with different isolation rates ($\gamma_{c}$) on CIFAR-10. ### Detection and Recovery Performances The E2ABL procedure can distinguish between clean and backdoor samples with high precision. Samples present in $\hat{{\mathcal{D}}}_{c}$ are classified as clean samples. However, not all samples in $\hat{{\mathcal{D}}}_{p}$ should be considered backdoor samples. Conceptually, we could designate the 1% of samples in $\hat{{\mathcal{D}}}_{p}$ that have the largest loss drop rate $\Delta\ell$ as the “most probable” backdoor samples. In this subsection, we delve into the precision of these two subsets to offer further insights into their detection performance. TABLE II: Performance of E2ABL in differentiating between clean and backdoor samples, and restoring true labels. The second column shows the precision of identifying backdoor-infected samples within the subset characterized by the top 1% of loss reductions. The results are computed at the 10th epoch, following the warm start of the second head. Attack | Precision (Clean $\hat{{\mathcal{D}}}_{c}$) | Precision (Backdoor, top 1%) | Recall (Backdoor $\hat{{\mathcal{D}}}_{p}$) | Precision (True Label) (1) BadNets | 100% | 99.8% | 100% | 83.8% (2) Blend | 99.0% | 98.4% | 99.6% | 76.2% (3) Trojan | 100% | 99.6% | 100% | 72.0% (4) Dynamic | 98.4% | 95.2% | 94.4% | 88.6% (5) CL | 100% | 100% | 100% | 0% (6) SIG | 100% | 99.6% | 100% | 0% (7) LBA | 100% | 99.8% | 100% | 68.4% (8) CBA | 97.2% | 96.4% | 91.6% | 76.8% (9) DFST | 98.8% | 98.6% | 99.8% | 63.0% #### Precision of the Detected Clean and Backdoor Samples As demonstrated in Table II, our E2ABL exhibits high precision in separating clean and backdoor samples based on the loss drops captured by the second head. However, it is worth noting that stronger backdoor attacks, such as Dynamic and CBA, result in lower detection precision. This means that a small fraction of backdoor samples may not fall within the 1% of samples in $\hat{{\mathcal{D}}}_{p}$ that have the most substantial loss drops. Furthermore, some are not even captured in the $\hat{{\mathcal{D}}}_{p}$ set (which contains the top 20% of samples based on loss drops), as illustrated by the third column in Table II. This phenomenon explains the higher (worse) ASR observed in some of the experiments and serves as an area where E2ABL could be further refined. Generally speaking, our method highlights an effective technique for segregating backdoor samples, thereby allowing E2ABL to train clean models on potentially compromised training data. 
#### Precision of the Recovered True Class Labels Our E2ABL method introduces a dynamic recovery of the true class labels of poisoned training samples during the training process to recover certain poisoned samples in $\hat{{\mathcal{D}}}_{p}$ to enhance the clean accuracy of the main head. As shown in the last column of Table II, these recovered labels present high precision for dirty-label and feature-based attacks. Note that as long as the sample is not a backdoor sample, its loss value with respect to the backdoor label at the second head will be notably high, as the second head does not have the capacity to sufficiently learn the clean task. In the case of clean-label attacks (such as CL and SIG), the backdoor-poisoned samples at the second head will point to the adversary-chosen target. Accordingly, the poisoned samples will be corrected to different (although might be incorrect) class labels other than the backdoor target. This disrupts the correlation between the trigger pattern and the backdoor label, making it more challenging for the main head to learn and recognize the backdoor. ## Ablation Studies TABLE III: Ablation studies of E2ABL on CIFAR-10. The full names of the attacks are in Table II. The $\Delta$CA and $\Delta$ASR are calculated based on the E2ABL results in Table I. | Unlearn Top 1% | With No Control | Use Two Models ---|---|---|--- | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA (1) | +1.01% | -3.54% | +7.55% | -0.62% | +0.93% | +0.13% (2) | +1.64% | -5.41% | +14.64% | -4.13% | +2.16% | +0.64% (3) | +0.83% | -2.55% | +9.62% | -1.59% | +1.14% | +0.20% (4) | +0.96% | -2.94% | +12.25% | -1.41% | +0.85% | +0.35% (5) | +0.56% | -0.94% | +2.36% | -2.15% | +1.02% | -0.06% (6) | +1.12% | -1.15% | +4.17% | +0.05% | +0.81% | -0.81% (7) | +0.98% | -5.42% | +7.42% | +0.11% | +1.23% | -0.67% (8) | +0.51% | -4.73% | +6.39% | -1.04% | +0.79% | +0.13% (9) | +1.26% | -5.10% | +8.59% | -2.98% | +1.34% | +0.29% avg. | +0.99% | -3.53% | +8.11% | -1.53% | +1.14% | +0.02% ### E2ABL Without Label Correction To achieve higher CA and lower ASR, our E2ABL attempts to rectify the labels of the detected backdoor samples, specifically targeting the 1% of samples in $\hat{\mathcal{D}}_{p}$ with the largest loss drops. These corrected samples are subsequently included in the subset $\mathcal{D}^{*}$, enabling the main head to learn the clean task from them. To explore alternative approaches to managing the detected backdoor samples, we conducted two experiments without using the corrected samples. First, instead of training the main head with $\mathcal{D}^{*}$, we applied the unlearning operation of the top 1% high loss-drop samples utilizing the original ABL method (using negative cross- entropy loss defined with respect to the backdoor label). Additionally, we conducted an experiment that entails training the main head without the corrected samples in $\mathcal{D}^{*}$. As presented in the first two columns of Table III, training the main head with the “rectified” samples in $\mathcal{D}^{*}$ leads to improvements in both ASR and CA. In contrast, when no backdoor control (namely backdoor unlearning and true class recovery) is applied in the main head’s training, the ASR increases dramatically, and CA declines against the majority of attacks. This indicates the significance of the true class recovery step in our E2ABL method, emphasizing its role in enhancing accuracy and robustness. 
In most cases, the proposed backdoor recovery method not only significantly reduces ASR but also boosts CA, resulting from the recovered true labels. ### A Second Head or a Second Model? Our E2ABL methodology incorporates a secondary head that is attached to the shallow layers of the DNN, aimed at detecting and rectifying backdoor samples. This approach is based on the assumption that the backdoor task is substantially more straightforward than the clean task. Such an implementation will naturally lead to the question: “Can a second model achieve the same result?” The most significant difference between employing a second model as opposed to a second head lies in whether they share the same shallow layer weights with the main network. In essence, without this weight sharing, attaching a second head to the DNN model is equivalent to using two separate DNN models. To further explore this concept, we conducted an experiment utilizing an alternative two-model setting. In this setting, one smaller model is exclusively trained to differentiate between clean and backdoor subsets, mirroring the function of the second head in E2ABL. Concurrently, a full ResNet-34 model is trained following the same procedure as $h_{1}$, as outlined in Algorithm 1 of the manuscript. The results, presented in the third column of Table III, reveal that the two-model design can only marginally enhance the CA by 0.02% with an average 1.14% decline in ASR. These findings illuminate that utilizing separate weights might compromise the secondary model’s proficiency in detecting and isolating backdoor samples. The underlying cause of this limitation is that the two models are not learned synchronously, and thus their learning pace may vary. The shared layer design in E2ABL ensures a coordinated learning process, maximizing both detection efficiency and correction effectiveness. This illustrates the advantages of employing a second head in comparison to a separate two-model approach. ### Different Isolation and Recovery Rates TABLE IV: Performance of E2ABL under different isolation and recovery rates ($\gamma_{rec}$, $\gamma_{iso}$): $\gamma_{iso}$ is the isolation rate, $\gamma_{rec}$ is the recovery rate. The $\Delta$CA and $\Delta$ASR are calculated w.r.t. the result of our default experiment setting with ($\gamma_{rec}$, $\gamma_{iso}$) = (1%, 80%) shown in Table I. $\gamma$ | $(1\%,70\%)$ | $(2\%,80\%)$ | $(5\%,80\%)$ ---|---|---|--- | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA (1) | +1.05% | +0.74% | -0.02% | -0.21% | -0.06% | -0.36% (2) | +3.68% | +0.92% | -2.94% | -0.42% | -3.17% | -0.96% (3) | +1.22% | +0.47% | -0.32% | -0.09% | -0.79% | -0.17% (4) | +4.11% | +0.39% | -3.25% | -0.47% | -7.12% | -0.41% (5) | +0.84% | +0.28% | -0.02% | -0.10% | -0.08% | -0.05% (6) | +0.59% | +0.37% | -0.10% | -0.15% | -0.17% | -0.23% (7) | +0.63% | +1.01% | +0.01% | +0.08% | -0.01% | +0.00% (8) | +3.89% | +0.87% | -4.58% | -0.27% | -8.12% | -0.95% (9) | +1.20% | +0.59% | -0.89% | -0.39% | -1.27% | -0.62% avg. | +1.91% | +0.63% | -1.35% | -0.22% | -2.31% | -0.42% We perform experiments employing three distinct sets of hyperparameters: isolation rate ($\gamma_{p}$) and recovery rate ($\gamma_{c}$). The findings, as detailed in Table IV, indicate that our proposed method demonstrates robustness in different hyperparameter settings. The following conclusions can be derived: 1) A higher recovery rate (increasing from $1\%$ to $5\%$) can further diminish ASR, while the CA is primarily preserved. 
This suggests that calibration of the recovery rate has a limited adverse effect on the system's overall accuracy. 2) Conversely, a lower isolation rate (decreasing from $80\%$ to $70\%$) leads to an improvement in CA, though it causes a marginal increase in ASR of less than $2\%$. Importantly, the overall ASR still remains minimal, indicating that the method's ability to defend against attacks is maintained even when the isolation rate is reduced. These observations underscore the robustness of E2ABL, confirming that adjustments to these hyperparameters have a controlled impact on the system's performance for both image and time series data, thereby offering flexibility in tuning according to specific requirements.

### Model Behavior Between the Two Heads

Figure 3: Behavior of dual-head learning over training epochs. The experiments are performed using the CIFAR-10 dataset, incorporating Trojan attacks.

To clearly depict the training behavior of the dual-head model, we carried out supplementary experiments without the cold start (which lasts for 2 epochs). The comparative results are presented in the first row of Figure 3. We also plot the precision of clean data and backdoor isolation in the secondary head. The following conclusions can be derived: 1) The cold start significantly enhances the second head's ability to effectively segregate backdoor samples. As our E2ABL model is designed to train clean models on poisoned data, only the clean data is channeled into the main head for training, primarily after the removal of the majority of backdoor samples. Simultaneously, these backdoor samples are utilized in the unlearning process. 2) Without the cold start, the final ASR is roughly 6% higher. This outcome is attributed to the exposure of backdoor samples in the early stages of main head training, which proves challenging to eliminate in the later stages of backdoor unlearning. Conversely, the cold start first empowers the second head to distinguish between clean and backdoor samples, thereby effectively supporting the subsequent clean training process of the main head. 3) As demonstrated in the bottom row of the figure, the precision of backdoor isolation converges more rapidly than that of clean isolation, reaching nearly 98% in just 2 epochs, whereas clean isolation takes nearly 10 epochs to converge. This observation also indicates that the backdoor task is easier to learn than the clean task, primarily attributable to the higher loss values incorporated during training.

### Where To Attach the Second Head?

In this work, we introduce a secondary classification head integrated into the shallow layers of the DNN. Functioning as a trap for backdoor samples, this secondary head plays a dual role: it 1) detects these deceptive samples and 2) concurrently corrects their labels. Specifically designed to be sensitive to the presence of backdoors, the secondary head identifies backdoor samples and strives to recover their true labels. By confining the backdoor samples within the shallow layers, this approach protects the primary head, guiding model training towards a more secure and trustworthy trajectory. In our experiment, utilizing ResNet-34 as the backbone model, the secondary head is constructed of two convolutional layers attached to the end of the second convolutional stage, as illustrated in Figure 1; a minimal sketch of this construction is given below.
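The snippet below sketches one way to realize such a dual-head backbone in PyTorch. It is an illustrative sketch rather than the exact released architecture: the second head here uses two small convolutional layers followed by a pooled linear classifier, the 128-channel input size simply reflects the output of ResNet-34's second stage, and the precise layer widths of E2ABL's head are an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class DualHeadResNet(nn.Module):
    """Sketch of a dual-head model: a small second head attached to the
    end of the second convolutional stage of ResNet-34 (sizes assumed)."""

    def __init__(self, num_classes=10):
        super().__init__()
        net = resnet34(num_classes=num_classes)
        # Shared shallow layers, then the remainder of the backbone (main head h1).
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                                  net.layer1, net.layer2)
        self.deep = nn.Sequential(net.layer3, net.layer4, net.avgpool)
        self.fc_main = net.fc
        # Second head h2: two conv layers + classifier on the 128-channel
        # feature map produced by layer2 (illustrative widths).
        self.head2 = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        feat = self.stem(x)                        # shared shallow features
        main_logits = self.fc_main(torch.flatten(self.deep(feat), 1))
        backdoor_logits = self.head2(feat)         # backdoor "trap" head
        return main_logits, backdoor_logits
```

The two outputs can then be supervised with the main-task and trap objectives described earlier.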
It is worth noting that the specific attachment point of the second head and its size (including the number of convolutional layers and the number of neurons in fully connected layers) require further investigation. This assessment can lead to a deeper understanding of the second head, contributing to a more resilient model. Figure 4: A generalized view of attaching an additional classification head to any DNN models. As depicted in Figure 4, the entire dual-head model can be understood from the following perspective. First, the early convolutional layers are responsible for extracting high-level feature representations from the input sample. In the case of a backdoor-poisoned sample, these feature representations embody both clean features (which infer the true class of the given input) and backdoor features (which lead the model to misclassify the sample into the target class). Subsequently, these mixed feature representations serve as inputs for both the main head and the secondary head. Intriguingly, the two heads are trained with opposing objectives: the main head aims for backdoor- free classification, while the secondary head is sensitized to detect backdoor features. The variations in the attachment point of the secondary head control the depth of the feature representation generated for the latter tasks. To investigate the corresponding consequences of varying the attachment point of the secondary head, we conducted controlled experiments with the ResNet-34 model. The location of attachment was systematically altered, ranging from the initial convolutional group (after Block 1) to a position immediately prior to the FC layer (after Block 4). This experimental design allowed us to explore the effects of the secondary head’s placement on the model’s overall performance and behavior, contributing to our understanding of its optimal integration. TABLE V: Ablation studies of E2ABL on the attachment point of the secondary head. The full names of the attacks can be found in Table 2 of the manuscript. The $\Delta$CA and $\Delta$ASR are calculated based on the CIFAR-10 results in Table 1 of the manuscript (attachment point is after Block 2). | After Block 1 | After Block 3 | After Block 4 ---|---|---|--- | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA | $\Delta$ASR | $\Delta$CA (1) | +1.23% | +0.61% | -0.09% | -1.63% | +0.98% | -6.14% (2) | +1.54% | +0.69% | -0.04% | -2.57% | +1.12% | -7.18% (3) | +2.46% | -0.92% | +0.17% | -3.68% | -0.14% | -5.12% (4) | +3.17% | +0.80% | -0.20% | -6.40% | +0.68% | -9.19% (5) | +0.64% | -0.56% | -0.34% | -6.77% | +1.63% | -13.10% (6) | +0.96% | +0.75% | -0.16% | -4.69% | +1.32% | -12.57% (7) | +1.15% | -0.64% | +0.31% | -5.93% | +0.65% | -8.16% (8) | +0.99% | -0.61% | +0.07% | -7.11% | +0.84% | -11.24% (9) | +1.10% | +0.73% | -0.14% | -5.74% | +0.79% | -10.64% avg. | +1.47% | +0.54% | -0.05% | -4.95% | +0.87% | -9.26% The results, as presented in Table V, lead to the following conclusion: attaching the secondary head to the output of deeper convolutional groups yields a lower ASR (which is favorable from a defense perspective), but also results in a lower CA (indicating worse performance in prediction). However, when the secondary head is affixed to deeper convolutional groups (such as after Block 4), the dual-head model exhibits a noticeable decline in performance across both metrics, leading to unintended negative consequences. 
This decline in performance may be attributed to the convolutional layers picking up an excessive number of backdoor features, while more benign features are overshadowed or ignored. As a result, the secondary head's capacity to protect the main head in terms of backdoor robustness becomes limited. The current dual-head configuration contains $1.2\times$ the parameters and incurs $1.25\times$ the run-time of the plain ResNet-34 model. In our supplementary experiments with alternative backbone models such as ResNet-18 and ResNet-50, we observed consistent trends related to the placement of the secondary head. Generally speaking, our dual-head model tends to achieve an optimal balance between ASR and CA when the secondary head is attached around the midpoint of the convolutional layers.

## Conclusion

In this paper, we proposed the End-to-End Anti-Backdoor Learning (E2ABL) methodology, a simple but innovative technique specifically engineered to train models that remain clean even when exposed to potentially backdoor-poisoned training data. The E2ABL approach, by connecting a second head to the shallow layers of a Deep Neural Network (DNN), serves as a backdoor supervisor that learns, detects, and segregates backdoor samples. E2ABL also incorporates a partitioning mechanism to distinguish clean samples from potentially poisoned ones, thus creating a subset of backdoor samples. It then employs a novel, dynamic true class recovery process to rectify the labels of a certain proportion of samples within the poisoned subset. Through extensive experiments on both image and time series data, we have demonstrated E2ABL's effectiveness in defending against 9 backdoor attacks. It can train clean and reliable models even when confronted with sophisticated backdoor attacks. This work presents an actionable solution for safety-critical industries seeking to train models devoid of backdoor vulnerabilities using real-world datasets. While there are still many open problems, this work makes a first attempt toward a single unified defense for multiple tasks and data modalities, and can thus serve as a useful baseline for future research.
# ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles Haoqin Tu, Bowen Yang, Xianfeng Zhao State Key Laboratory of Information Security, Institute of Information Engineering, School of Cyber Security, University of Chinese Academy of Sciences <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Automatically generating textual content with desired attributes is an ambitious task that people have pursued long. Existing works have made a series of progress in incorporating unimodal controls into language models (LMs), whereas how to generate controllable sentences with multimodal signals and high efficiency remains an open question. To tackle the puzzle, we propose a new paradigm of zero-shot controllable text generation with multimodal signals (ZeroGen). Specifically, ZeroGen leverages controls of text and image successively from token-level to sentence-level and maps them into a unified probability space at decoding, which customizes the LM outputs by weighted addition without extra training. To achieve better inter-modal trade-offs, we further introduce an effective dynamic weighting mechanism to regulate all control weights. Moreover, we conduct substantial experiments to probe the relationship of being in-depth or in-width between signals from distinct modalities. Encouraging empirical results on three downstream tasks show that ZeroGen not only outperforms its counterparts on captioning tasks by a large margin but also shows great potential in multimodal news generation with a higher degree of control. Our code will be released at https://github.com/ImKeTT/ZeroGen. Figure 1: Traditional CTG only has unimodal guidance (up), while our ZeroGen follows Multimodal CTG (down) that incorporates multimodal controls to generate relevant texts. We mark words/sentences that are relevant to the textual control or visual control. ## 1 Introduction Large-scale pre-trained models (PTMs) have recently achieved great success and become a milestone in the field of AI. Owing to their sophisticated pre- training objectives and huge model parameters, PTMs can benefit a variety of downstream tasks just like Oracles. In the domain of language, pre-trained language models (PLMs) have become a cornerstone of versatile generation tasks including controllable text generation (CTG). By controlling the presence of certain linguistic attributes, these PLMs can be trained to generate texts with desired aspects such as length, and topic Kikuchi et al. (2016); Ficler and Goldberg (2017). Conventional approaches usually construct a conditional LM with supervision (e.g., by fine-tuning), which is unscalable due to the combinatorially numerous conceivable compositions and the lack of annotated data Keskar et al. (2019); Liu et al. (2022). Most recent studies have begun to look into “plug-and-play” (PnP) solutions. Those techniques plug in arbitrary restrictions to guide the generation of desired sentences with PLMs and little training expenses. And the control signals of this paradigm are typically limited to unimodal domains, such as provided keywords or topics Dathathri et al. (2019); Pascual et al. (2021); Yang and Klein (2021); Tu et al. (2022). Rapidly, the PnP fashion has been adopted to bridge multimodal knowledge, recent works have introduced pre-trained multimodal models like CLIP Radford et al. (2021) into cross-modal tasks with vision-only controls such as captioning. These approaches obtained exceptional performances with minimal or no task-oriented training Su et al. 
(2022a); Tewel et al. (2022); Nukrai et al. (2022). On the one hand, meaningful interactions between human speakers often necessitate real-world experiences Bisk et al. (2020), and the text-only instruction alone may not be sufficient to fulfill such communication purpose Harnad (1990). As a result, using unimodal controls for CTG may conflict between how to reliably regulate current PLMs and real-world scenarios (e.g., multimodal controlled news generation in Figure 1). On the other hand, unlike some keyword-guided PnP works Pascual et al. (2021); Gu et al. (2022), models that incorporate visual guidance into language generation insert constant controls at LM decoding instead of considering the dynamic nature of such process, which may lead to task under-performance Su et al. (2022a); Tewel et al. (2022). To overcome those shortcomings, we take a step further to extend the current unimodal PnP paradigm into a multimodal setting and propose ZeroGen. To accomplish multimodal CTG task, we are aware that inputs from different domains affect different granularities of presences in texts. As shown in Figure 1, while textual control steers generated news to the science topic by presenting related keywords, visual control provides more abundant ambient information by producing sentence descriptions. In order to plug in multimodal signals, we propose to unify controls into the LM output probability using token- or sentence-level similarity with several Oracles. Specifically, we first regard the textual guidance as the token-level similarity between keywords and the LM vocabulary from a textual Oracle before decoding, then we incorporate such guidance to LM outputs by weighted addition at generation. For visual guidance, we use a multimodal score Su et al. (2022a) based on sentence-level probability determined by a multimodal Oracle. Finally, we employ beam search to find the token with the highest score at each step. To adapt to the dynamic nature of LM decoding and further promote model performance, we provide a dynamic weighting mechanism on the word-level that can not only enhance visual information expression but also maintain output fluency. We conduct three tasks (image captioning, stylized captioning, and controllable news generation) with ZeroGen. We explore the relationship between textual and visual control being either vertical or lateral. Specifically, in two captioning tasks, textual objects of the image extend the visual signal as a complement (vertical extension). For news generation, a collection of positive or negative words are used to embody generated news a specific sentiment (lateral extension). The effectiveness of our approach in providing better captions and easily controlled news is demonstrated by results on both automatic metrics and human evaluations. Contributions. (1) We explore the task of multimodal controllable text generation under zero-shot setting and propose ZeroGen that utilizes token- and sentence-level multimodal guidance to fulfill this task. (2) We present a dynamic weighting scheme on the word-level that can be applied to different modalities and boost the fluency and controllability of generated texts. (3) Extensive experiments on two captioning tasks and the controllable news generation task not only justify the effectiveness of ZeroGen but also investigate the relationship between different types of modal controls. Figure 2: Workflow of ZeroGen at decoding step $t$. 
Through multiple LM output changing stages, ZeroGen is essentially a decoding scheme that finds a word related to both textual ($\textbf{C}_{T}$) and visual control ($\textbf{C}_{V}$) at each step. It then feeds the word back to the base LM for the future conditional generation. ## 2 Related Work #### Efficient Image Captioning. The prerequisite of supervised captioning for a large amount of paired image- text data is unrealistic in real-life scenarios. Various attempts have been made to reduce the dependence on large paired image-text data. For example, some works Anderson et al. (2018); Laina et al. (2019); Chen et al. (2020); Honda et al. (2021) have sought to incorporate objects from given images into model training. Despite their efficiency in comparing supervised methods, they still need to be trained with partial cross-modal guidance as supervision. CLIP Radford et al. (2021) as a milestone for vision-language alignment has shown impressive zero-shot capabilities on various multimodal generation tasks. For example, Tewel et al. (2022) proposed the first zero-shot captioning model with CLIP and a base LM (i.e., GPT-2). It constantly updates the model’s transformer cache under the direction of CLIP guidance decoding. Nevertheless, it still demands gradient computation and optimization during generation, introducing additional generation overhead. Su et al. (2022a) proposed MAGIC that utilizes a token decoding score based on CLIP to produce plausible captions without task-specified training. Most recently, Nukrai et al. (2022) employs text-only training with gaussian noises parameterized by a few images to connect CLIP and the base LM textual embedding. Still, Nukrai et al. (2022) requires a small amount of external visual knowledge during training. As for our model, ZeroGen expands MAGIC with additional capabilities to facilitate multimodal guided generation with dynamic weighting, supporting several downstream applications while keeping its ability to transfer to different base LMs. Most recently, Zeng et al. (2023) proposed to employ sample-based sequential polishing during language decoding to produce plausible and fluent captions. #### PnP Controllable Text Generation. To avoid excessive training costs from fine-tuning PLMs into CTG tasks, researchers have turned their attention to specialized training-free methods such as the “plug-and-play” (PnP) framework by Dathathri et al. (2019). This framework can be used along an existing generative LM (the base LM) with minimum or no training procedure between PnP components and the base LM. In comparison to conventional methods, these PnP approaches typically follow two aspects. In-model guidance approaches including “prompt tuning” Lester et al. (2021) that either aim at optimizing the input prompts and additional parameters that are fed into the base LM Houlsby et al. (2019); Li and Liang (2021); Lester et al. (2021) or seek to alter certain hidden representations that are not model input or output layers, by plugging a trainable model into the middle of the base LM Dathathri et al. (2019); Duan et al. (2020); Mai et al. (2020); Tu et al. (2022). Out-model guidance techniques, on the contrary, focus on building controllable language models that only modify the output probabilities from the base LMs at inference time Krause et al. (2021); Pascual et al. (2021); Yang and Klein (2021); Liu et al. (2021a); Krause et al. (2021). And our ZeroGen belongs to the last category that only imposes control signals at LM decoding. 
## 3 ZeroGen Methodology

For the multimodal CTG task, we formally define it as follows: given the visual control $\textbf{C}_{V}$ (i.e., an image) and $N$ representative words from a topic or an image as the textual control $\textbf{C}_{T}=\{C_{T_{1}},...,C_{T_{N}}\}$, we aim to generate a textual output $\textbf{X}=\{x_{1},x_{2},...\}$ that satisfies both control aspects simultaneously. ZeroGen operates in the output probability space of the base LM. As shown in Figure 2, at decoding step $t$, it first adjusts the original LM output probability $p_{\text{LM}_{t}}$ to $p^{\prime}_{\text{LM}_{t}}$ following the token-level textual guidance derived from keyword-vocabulary similarities; it then performs word searching on $p^{\prime}_{\text{LM}_{t}}$ using a sentence-level multimodal scoring function and beam search. Note that, instead of recomputing the token similarity at every step as in Pascual et al. (2021), we compute it only once before decoding and turn it into the overall textual control with several selection options. Finally, a word-level dynamic weighting scheme is applied to regulate both control weights at every generation step.

### 3.1 Token-level Textual Guidance

Since the appearance of keywords from a certain topic can drive sentences in that direction, we consider the token-level similarity between LM tokens and the keywords in $\textbf{C}_{T}$ as the textual guidance. To avoid additional computational costs, we unify the textual control into probability space through a set of cosine similarities, computed before decoding, between each word $C_{T_{n}}\in\textbf{C}_{T}$ and the full base LM vocabulary $\textbf{V}\in\mathbb{R}^{V}$. These word similarities are obtained using the textual Oracle $\phi_{T}$ (e.g., pre-trained word embeddings):

$p(\textbf{V},\textbf{C}_{T})=\left\{\cos\left(\phi_{T}(\textbf{V}),\phi_{T}(C_{T_{n}})\right)\right\}_{n=1}^{N},$

where $p(\textbf{V},\textbf{C}_{T})\in\mathbb{R}^{N\times V}$ and $V$ is the vocabulary size. To fully utilize all the given keywords, we explore three selection methods at time $t$ when $N>1$ to obtain the overall textual control $p_{t}(\textbf{C}_{T})\in\mathbb{R}^{V}$:

#### Step-wise Random (SR): this option provides changing controls throughout generation. At each step, we sample one keyword-vocabulary similarity uniformly from $p(\textbf{V},\textbf{C}_{T})$ as the textual guidance.

#### Mean Pooling (MP): an intuitive way to consider all textual information is to average the guiding similarities w.r.t. V across the distinct keywords.

#### Word-wise Max (WM): for every token $w$ in V, we choose the keyword in $\textbf{C}_{T}$ most similar to $w$ (i.e., with the highest cosine similarity score) to compute its guiding probability, and compose all these highest similarities together as $p_{t}(\textbf{C}_{T})$.

Once the overall textual control $p_{t}(\textbf{C}_{T})\in\mathbb{R}^{V}$ is obtained through this selection, we introduce it into $p_{\text{LM}_{t}}$ as a control bias through a simple weighted addition: $p^{\prime}_{\text{LM}_{t}}=p_{\text{LM}_{t}}+\alpha\times p_{t}(\textbf{C}_{T})$.

### 3.2 Sentence-level Visual Guidance

An image carries more general and higher-level global information than a single word. As discussed in Sec. 1, we thus consider the sentence-level similarity between the generated text and the visual control $\textbf{C}_{V}$ as the visual guidance. We employ a scoring function $S_{t}$ for word $w\in\textbf{V}$ at the $t$-th step with weighted visual guidance as in Su et al.
(2022a), and use beam search for generation:

$S_{t}\left(w,\textbf{C}_{V}\mid x_{<t},W_{t}^{(k)}\right)=\begin{cases}p^{\prime}_{\text{LM}_{t}}(w\mid x_{<t})+\beta\times\dfrac{e^{p_{\phi_{M}}\left([x_{<t};w],\textbf{C}_{V}\right)}}{\sum_{z\in W_{t}^{(k)}}e^{p_{\phi_{M}}\left([x_{<t};z],\textbf{C}_{V}\right)}},&\text{if }w\in W_{t}^{(k)},\\ -\infty,&\text{otherwise}.\end{cases}$

Here $[x_{<t};w]$ denotes appending $w$ to the text generated before step $t$, and $W_{t}^{(k)}$ is the search beam consisting of the words with the $k$ highest probabilities in $p^{\prime}_{\text{LM}_{t}}$. In detail, we bridge texts and $\textbf{C}_{V}$ using the multimodal Oracle $\phi_{M}$ (e.g., CLIP) and compute their similarity: $p_{\phi_{M}}([x_{<t};w],\textbf{C}_{V})=\cos(\phi_{M}([x_{<t};w]),\phi_{M}(\textbf{C}_{V}))$. Our final goal is to find $x_{t}=\arg\max_{w\in\textbf{V}}S_{t}(w,\textbf{C}_{V}\mid x_{<t},W_{t}^{(k)})$.

### 3.3 Multimodal Dynamic Weighting

To further boost model performance and adapt the model to different generation steps, a novel dynamic weighting mechanism is proposed to achieve step-wise adjustment of the multimodal weights. Concretely, we replace $\alpha,\beta$ with dynamic $\alpha_{t},\beta_{t}$, respectively. The design follows two principles: (1) it is necessary to seek a balance between the textual control (which shifts the LM output probability) and the original LM modeling to avoid inconsistent outputs; (2) during generation, visually relevant words ought to be encouraged, while irrelevant ones are penalized. Since the smallest comprehensible output of an LM is one word, we apply this framework at the word level.

#### Dynamic $\boldsymbol{\alpha_{t}}$. To maintain the original language modeling ability while making the most of the provided textual guidance, we re-scale the textual control using a step-wise weighting calibration that incorporates the original LM output confidence $p_{\text{LM}_{t}}$. Specifically, we compute the average probability of the $\hat{N}\in[1,N]$ keywords in $\textbf{C}_{T}$ under the unshifted LM output as the $t$-th textual control weight:

$D_{T}=\sum_{n=1}^{\hat{N}}\frac{p_{\text{LM}_{t}}\left(C_{T_{n}}\mid x_{<t}\right)}{\hat{N}},\qquad\alpha_{t}=\min\left(\frac{D_{T}}{\lambda},\hat{\alpha}\right).$

If $D_{T}$ is high, keywords from $\textbf{C}_{T}$ are encouraged to appear. Since these are exactly the steps at which the unshifted base LM is already confident about producing such words, we avoid jeopardizing output fluency while generating controlled texts.

#### Dynamic $\boldsymbol{\beta_{t}}$. To reward the generation steps where all words in $W_{t}^{(k)}$ are highly associated with the knowledge in $\textbf{C}_{V}$ and penalize those that are not, we employ the average word-level similarity between the current candidate words and the visual control:

$D_{V}=\sum_{w\in W_{t}^{(k)}}\frac{p\left(w,\textbf{C}_{V}\right)}{k},\qquad\beta_{t}=\min\left(\frac{D_{V}}{\lambda},\hat{\beta}\right).$

If $D_{V}$ is high, the words in $W_{t}^{(k)}$ are relevant to $\textbf{C}_{V}$ and should be expressed with a higher chance. Inspired by Gu et al. (2022), $\lambda$ in this framework serves as a threshold that amplifies the control signal if $D_{V}$ or $D_{T}$ is larger than it, and vice versa. Meanwhile, $\hat{\alpha},\hat{\beta}$ are the weighting upper bounds.
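Putting Secs. 3.1-3.3 together, one decoding step of ZeroGen can be sketched as follows. This is an illustrative sketch under assumptions of ours: the control keywords are treated as single vocabulary tokens, the CLIP Oracle is abstracted as a user-supplied scoring callable, the word-level similarity $p(w,\textbf{C}_{V})$ used for $\beta_{t}$ is approximated by the same candidate-sentence score for brevity, and the default weights and beam size are illustrative (only $\lambda=0.2$ comes from the paper).

```python
import torch

def zerogen_step(p_lm, p_ct, keyword_ids, clip_score_fn, prefix_ids,
                 k=5, alpha_hat=2.0, beta_hat=2.0, lam=0.2):
    """One ZeroGen decoding step (sketch, not the released implementation).

    p_lm          : (V,) unshifted base-LM probabilities p_{LM_t}
    p_ct          : (V,) pre-computed textual guidance p_t(C_T)
    keyword_ids   : LongTensor of vocabulary ids of the control keywords
    clip_score_fn : callable taking k candidate sequences and returning a
                    (k,) tensor of image-text similarities (CLIP Oracle stand-in)
    prefix_ids    : list of already generated token ids x_{<t}
    """
    # Dynamic alpha_t: average unshifted-LM confidence on the keywords.
    d_t = p_lm[keyword_ids].mean()
    alpha_t = torch.clamp(d_t / lam, max=alpha_hat)

    # Token-level textual guidance: shift the LM distribution (Sec. 3.1).
    p_shift = p_lm + alpha_t * p_ct

    # Candidate beam: top-k tokens of the shifted distribution.
    topk_p, topk_ids = p_shift.topk(k)

    # Sentence-level visual guidance: CLIP similarity of each extended
    # candidate sentence with the control image (Sec. 3.2).
    sims = clip_score_fn([prefix_ids + [int(w)] for w in topk_ids])

    # Dynamic beta_t: average image relevance of the current beam.
    d_v = sims.mean()
    beta_t = torch.clamp(d_v / lam, max=beta_hat)

    # Final score: shifted LM probability plus beta_t * softmax of CLIP sims.
    scores = topk_p + beta_t * torch.softmax(sims, dim=-1)
    return int(topk_ids[scores.argmax()])
```

In practice, the selected token id is appended to `prefix_ids` and fed back to the base LM for the next step.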
Model | MS-COCO | Flickr30k | Speed ---|---|---|--- B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ Weakly Supervised Approaches IC-SME Laina et al. (2019) | - | 6.5 | 12.9 | 35.1 | 22.7 | - | - | 7.9 | 13.0 | 32.8 | 9.9 | - | - S2S-GCC Honda et al. (2021) | 50.4 | 7.6 | 13.5 | 37.3 | 31.8 | 8.4 | - | - | - | - | - | - | - CapDec Nukrai et al. (2022) | 69.2 | 26.4 | 25.1 | 51.9 | 91.8 | - | 55.5 | 17.7 | 20.0 | 43.9 | 39.1 | - | - Unsupervised Approaches CLIPRe | 39.5 | 4.9 | 11.4 | 29.0 | 13.6 | 5.3 | 38.5 | 5.2 | 11.6 | 27.6 | 10.0 | 5.7 | - ZeroCap Tewel et al. (2022) | 49.8 | 7.0 | 15.4 | 31.8 | 34.5 | 9.2 | 44.7 | 5.4 | 11.8 | 27.3 | 16.8 | 6.2 | 1.0 $\times$ MAGIC Su et al. (2022a) | 56.5 | 12.4 | 17.3 | 39.6 | 48.3 | 11.2 | 43.3 | 6.8 | 12.3 | 30.8 | 20.5 | 6.8 | 26.6 $\times$ ConZIC Zeng et al. (2023) | - | 1.3 | 11.5 | - | 12.8 | 5.2 | - | - | - | - | - | - | - DeCap Li et al. (2023) | - | 8.9 | 17.5 | - | 50.6 | 13.1 | - | - | - | - | - | - | - ZeroGen | 59.4 | 15.5 | 18.7 | 42.3 | 55.4 | 12.1 | 54.9 | 13.1 | 15.2 | 37.4 | 26.4 | 8.3 | 16.4 $\times$ -TDW | 58.9 | 15.2 | 18.4 | 41.8 | 54.4 | 11.9 | 54.1 | 12.8 | 14.7 | 36.8 | 24.5 | 7.7 | 16.5 $\times$ -T | 58.6 | 14.7 | 17.4 | 41.3 | 51.7 | 11.8 | 53.3 | 11.9 | 14.3 | 36.2 | 24.1 | 7.5 | 18.6 $\times$ -VDW | 57.0 | 12.6 | 17.6 | 39.7 | 49.7 | 11.6 | 49.2 | 6.4 | 14.1 | 32.4 | 22.9 | 7.7 | 22.5 $\times$ -DW | 57.0 | 12.6 | 17.6 | 39.7 | 49.7 | 11.6 | 47.7 | 7.1 | 13.8 | 32.3 | 21.9 | 7.6 | 21.6 $\times$ Table 1: Captioning results of ZeroGen with only 1 object as $\textbf{C}_{T}$ (i.e., $N=1$) on MS-COCO and Flickr30k. ZeroGen outperforms most baselines with tolerable efficiency sacrifice. T, TDW/VDW, DW represent textual control, textual/visual dynamic weighting and two dynamic weighting schemes combined respectively. Model | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ ---|---|---|---|---|--- ZeroGen ($N=1$) | 59.4 | 15.5 | 18.7 | 42.3 | 55.4 ZeroGen ($N=2$) | 60.1 | 15.6 | 18.5 | 42.3 | 55.9 ZeroGen ($N=3$) | 60.4 | 15.6 | 18.6 | 42.3 | 56.5 ZeroGen ($N=4$) | 60.5 | 15.7 | 18.7 | 42.4 | 57.0 ZeroGen ($N=5$) | 60.6 | 15.8 | 18.7 | 42.4 | 57.1 Table 2: Captioning results of ZeroGen on MS-COCO with varied size $N$ for textual control $\textbf{C}_{T}$. Model | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ ---|---|---|---|---|--- ZeroGen SR | 59.6 | 15.3 | 18.4 | 42.1 | 55.5 ZeroGen MP | 59.9 | 15.2 | 18.3 | 42.0 | 55.2 ZeroGen WM | 60.6 | 15.8 | 18.7 | 42.4 | 57.1 Table 3: Captioning results of ZeroGen with $N=5$ for $\textbf{C}_{T}$ on MS- COCO with three $p_{t}(\textbf{C}_{T})$ options. ## 4 General Implementations and Baselines #### General Implementations. We take SimCTG Su et al. (2022b) as our base LM and first fine-tune it on every dataset with text-only data like previous works. Since ZeroGen follows the zero-shot paradigm, it can leverage any off-the-shelf LM and empower it a pair of eyes. For Oracles, we employ GloVe Pennington et al. (2014) as the textual Oracle $\phi_{T}$ and CLIP Radford et al. (2021) as the multimodal Oracle $\phi_{M}$. The $\hat{N}$ for $\alpha_{t}$ is $N$ itself on two captioning tasks, while $\hat{N}=2$ on controllable news generation task through ablation study. The amplifying factor $\lambda$ is $0.2$ throughout the paper. See Appendix A for full model details. 
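For reference, the keyword-vocabulary similarities of Sec. 3.1 can be pre-computed once before decoding roughly as follows. The sketch assumes the textual Oracle is available as an embedding matrix aligned with the LM vocabulary (GloVe vectors in our setup); function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def textual_guidance(vocab_emb, keyword_emb, mode="WM"):
    """Pre-compute the token-level textual control p_t(C_T) (sketch).

    vocab_emb   : (V, d) textual-Oracle embeddings of the LM vocabulary
    keyword_emb : (N, d) embeddings of the N control keywords
    mode        : "SR" (step-wise random), "MP" (mean pooling) or
                  "WM" (word-wise max), as defined in Sec. 3.1
    Returns a (V,) guidance vector added to the LM output probabilities.
    """
    # Cosine similarities between every keyword and every vocabulary token:
    # p(V, C_T) in the paper, shape (N, V).
    sim = F.normalize(keyword_emb, dim=-1) @ F.normalize(vocab_emb, dim=-1).T

    if mode == "SR":    # sample one keyword's similarity row per step
        return sim[torch.randint(sim.size(0), (1,))].squeeze(0)
    if mode == "MP":    # average over all keywords
        return sim.mean(dim=0)
    return sim.max(dim=0).values  # "WM": most similar keyword per token
```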
#### Baseline Models. For the image captioning task, we select both weakly supervised and unsupervised methods as our baselines, (1) IC-SME Laina et al. (2019), S2S-GCC Honda et al. (2021), and CapDec Nukrai et al. (2022) are three weakly supervised approaches, the former two adapt neural network modules to align visual features with pseudo captions, CapDec introduces CLIP guidance and few images in training. (2) CLIPRe, ZeroCap Tewel et al. (2022), and MAGIC Su et al. (2022a) are three zero-shot methods, which follow the retrieval manner, CLIP-guided gradient update, and decoding scheme respectively. For fair comparisons, we use the same base LM as ours for ZeroCap and MAGIC. In stylized captioning, MemCap Zhao et al. (2020) is additionally considered. Model | FlickrStyle10k Romantic | FlickrStyle10k Humorous ---|---|--- B@1 $\uparrow$ | B@3 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ | B@1 $\uparrow$ | B@3 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ MemCap Zhao et al. (2020) | 21.2 | 4.8 | 8.4 | - | 22.4 | - | 19.9 | 4.3 | 7.4 | - | 19.4 | - ZeroCap Tewel et al. (2022) | 19.3 | 2.7 | 7.6 | 16.5 | 14.9 | 7.0 | 18.4 | 2.7 | 7.7 | 16.5 | 15.6 | 7.7 MAGIC Su et al. (2022a) | 23.3 | 4.9 | 8.6 | 21.7 | 24.4 | 8.6 | 23.7 | 5.2 | 9.0 | 21.2 | 27.8 | 10.1 CapDec∗ Nukrai et al. (2022) | 21.4 | 5.0 | 9.6 | - | 26.9 | - | 24.9 | 4.3 | 10.2 | - | 34.1 | - ConZIC Zeng et al. (2023) | - | 1.2 | 6.1 | - | - | - | - | 1.2 | 6.1 | - | - | - ZeroGen | 24.4 | 5.5 | 9.2 | 22.3 | 27.3 | 9.8 | 24.2 | 5.6 | 9.6 | 22.0 | 30.5 | 11.2 -TDW | 23.5 | 5.4 | 8.7 | 21.9 | 26.1 | 9.0 | 24.2 | 5.6 | 9.6 | 21.9 | 30.5 | 11.2 -T | 23.3 | 4.9 | 8.6 | 21.8 | 24.7 | 8.6 | 23.7 | 5.2 | 9.1 | 21.2 | 28.3 | 10.2 -VDW | 24.0 | 5.5 | 9.0 | 22.1 | 26.9 | 9.4 | 24.1 | 5.6 | 9.5 | 21.9 | 30.2 | 11.1 -DW | 23.4 | 5.1 | 8.7 | 21.8 | 25.0 | 9.0 | 23.8 | 5.3 | 9.1 | 21.4 | 29.1 | 10.3 Table 4: Stylized captioning results on two subsets of FlickrStyle10k with $N=1$. * meas CapDec is a weakly supervised method that requires additional visual knowledge from several images during training. For controllable news generation task, we take MAGIC and MAGIC+PPLM as two baseline models. Specifically, MAGIC+PPLM is the combination of two existing PnP works that take image and keywords as input respectively. PPLM Dathathri et al. (2019) is the first controllable PnP LM that requires gradient descents of model hidden states at decoding time. More details and code links of baselines are available in Appendix A.5. ## 5 Experiments and Analysis ### 5.1 Image Captioning #### Dataset and Metrics. We conduct experiments on MS-COCO and Flickr30k using Karpathy split Karpathy and Fei-Fei (2015). For the visual control, we take images for captioning task as $\textbf{C}_{V}$. For the textual control, we take textual objects of the corresponding image as $\textbf{C}_{T}$.111Textual objects are extracted from each picture ahead of the generation using a pre-trained DETR Carion et al. (2020). We use five relevance-based metrics for evaluation: BLEU-1 (B@1), BLEU-4 (B@4) Papineni et al. (2002), METEOR (M) Denkowski and Lavie (2014), ROUGE-L (R-L) Lin and Och (2004), CIDEr Vedantam et al. (2015), and SPICE Anderson et al. (2016). Besides, we also compare the decoding speed of ZeroGen against baselines.222We measure the model’s decoding speed on the same machine with one NVIDIA GeForce 3090 GPU sequentially. #### Main Results. 
Since both modal controls aim to enhance the model’s ability to understand image content and to generate better captions, we consider $\textbf{C}_{T}$ to be a vertical augmentation (or a complement) of $\textbf{C}_{V}$ in this task. From results in Table 1, we can draw the following conclusions, (1) the proposed ZeroGen model consistently outperforms unsupervised baselines and most of the weakly supervised methods (except CapDec) by a great margin, demonstrating the superiority of the proposed method. (2) Textual guidance as a vertical augmentation of the visual guidance, provides extra information about an image, thus promoting the model performance by more than 2 absolute points in CIDEr on both datasets (comparing model -T). (3) Both dynamic weighting techniques help strengthen the model’s capacity, especially VDW, we ascribe this situation to its direct optimization of certain token appearances that are recognized in the image. (4) However, ZeroGen falls short in efficiency comparing MAGIC, but still largely outperforms ZeroCap. This is because additional computations are required for multimodal controls and dynamic weightings, but there is no need for gradient calculation in our model like ZeroCap. We also make a series of cross-domain evaluations in Appendix B.3, which further verifies the robustness of ZeroGen across various domains. #### Number of Objects in $\boldsymbol{\textbf{C}_{T}}$. Is the more objects the better? To answer this question, we conduct an experiment over MS-COCO with varied numbers of objects from the image (size $N$ in $\textbf{C}_{T}$) using word-wise max (WM) for $p(\textbf{C}_{T})$ selection. In Table 2, we can observe that, as the increase of object number, our model generally performs better on most metrics, which verifies that more textual object guidance brings more information for captioning task. Similar results also prove the answer on Flickr30k as shown in Appendix B.1. #### $\boldsymbol{p(\textbf{C}_{T})}$ Selection Method. In Table 3, the most effective method for $p(\textbf{C}_{T})$ selection is word-wise max (WM). It is attributed to WM’s highlight of textual objects with all their relevant tokens in the vocabulary. While mean pooling (MP) also takes all given keywords into consideration, by presenting them equally, it may introduce biases in token similarity calculation and output controlling. Hence, we use WM for the rest experiments. Model | Positive | Negative | Speed $\uparrow$ ---|---|---|--- D-2 $\uparrow$ | D-4 $\uparrow$ | C-S $\uparrow$ | $\Delta\text{Acc}$ $\uparrow$ | PPL $\downarrow$ | D-2 $\uparrow$ | D-4 $\uparrow$ | C-S $\uparrow$ | $\Delta\text{Acc}$ $\uparrow$ | PPL $\downarrow$ Human∗ | 96.25 | 96.98 | 23.36 | 0.00 | 14.59 | 96.25 | 96.98 | 23.36 | 0.00 | 14.59 | - $\textsc{MAGIC}^{*}$ Su et al. (2022a) | 95.62 | 95.92 | 20.07 | - | 10.01 | 95.62 | 95.92 | 20.07 | - | 10.01 | 25.0 $\times$ +PPLM Dathathri et al. 
(2019) | 74.22 | 81.44 | 20.44 | 11.00 | 29.07 | 74.47 | 83.66 | 20.79 | 18.76 | 27.32 | 1.0 $\times$ ZeroGen | 72.04 | 79.32 | 18.11 | 22.12 | 12.22 | 76.42 | 83.01 | 19.11 | 31.75 | 13.04 | 9.8 $\times$ -TDW | 71.87 | 78.90 | 18.08 | 21.87 | 11.75 | 76.29 | 82.52 | 19.14 | 29.88 | 12.53 | 10.4 $\times$ -VDW | 75.44 | 82.06 | 17.56 | 20.50 | 11.62 | 77.80 | 83.84 | 18.20 | 29.63 | 12.62 | 11.7 $\times$ -DW | 81.70 | 86.38 | 17.22 | 19.00 | 12.62 | 77.73 | 83.60 | 18.19 | 29.13 | 12.13 | 12.4 $\times$ -T∗ | 95.27 | 95.80 | 21.19 | - | 10.84 | 95.27 | 95.80 | 21.19 | - | 10.84 | 17.9 $\times$ ZeroGen w/ obj | 81.56 | 87.93 | 19.42 | 16.37 | 12.93 | 82.23 | 87.93 | 19.66 | 29.76 | 13.30 | 9.8 $\times$ Table 5: Results of controllable news generation on VisNews. With $\textbf{C}_{T}$ controlling the sentiment, we regard the textual and the visual control as vertical elements in this task. Methods with * cannot be controlled w.r.t. sentiment. Model | Positive | Negative ---|---|--- Flue.$\uparrow$ | Relv.$\uparrow$ | Sent.$\uparrow$/$\downarrow$ | Flue.$\uparrow$ | Relv.$\uparrow$ | Sent.$\uparrow$/$\downarrow$ MAGIC | 3.37 | 2.77 | 28.7/22.0 | 3.85 | 3.13 | 46.0/14.7 +PPLM | 2.24 | 2.85 | 34.0/7.3 | 3.12 | 3.11 | 52.0/10.7 ZeroGen | 3.38 | 2.94 | 80.0/10.7 | 3.80 | 2.85 | 84.7/6.0 Table 6: Human evaluation results. Sent. scores are percentages of news that obeys/disobeys given sentiment. ### 5.2 Stylized Captioning To explore the sufficiency of our model to adapt to different styles, such as “romantic” or “humorous”. We follow Nukrai et al. (2022) to conduct stylized- text fine-tuning in base LM for stylized captioning. #### Dataset and Metrics. In this task we still take textual objects from images as $\textbf{C}_{T}$ and we follow the exact experimental setting in previous works Zhao et al. (2020); Nukrai et al. (2022) on FlickrStyle10k dataset Gan et al. (2017). As for metrics, we take the same ones in Sec. 5.1. Refer to Appendix A.3 for more detailed settings. #### Main Results. Table 4 shows quantitative results of the task, (1) ZeroGen outperforms most baselines on two stylized data, including weakly supervised CapDec on Romantic and MemCap with task-oriented training. (2) While it under-performs CapDec on some metrics over Humorous data, our method produces more fluent and plausible captions with consistently higher B@3 scores. (3) From two stylized sets, textual guidance takes a large credit to boost the model performance (comparing model -T), verifying the effectiveness of the proposed multimodal guidance in ZeroGen. ### 5.3 Controllable News Generation Textual guidance can not only serve as a complement to visual knowledge but can also be a lateral extension. In this task, we assign textual control for news sentiment guidance and visual control for image-relevant news generation. #### Dataset and Metrics. We conduct experiments on VisNews Liu et al. (2021b). We fine-tune the base LM (i.e., SimCTG) on news data with the news title as an input prompt. We follow Dathathri et al. (2019) to obtain word lists for two types of sentiment guidance respectively. We take four aspects for evaluation, diversity: Distinct-2 (D-2) and Distinct-4 (D-4) Li et al. (2016). Image-text relevance: CLIP score (C-S) is the image-text similarity calculated by a CLIP. Control degree: $\Delta\text{Acc}$ (%) evaluates the accuracy gain between generated sentences and human written news.333Human written news in the test set consists of $62.88\%$ positive content and $37.12\%$ negative content. 
Fluency: perplexity (PPL) measures the model output confidence. For human evaluation, we take Fluency (Flue.) for content fluency, Relevance (Relv.) for image/title-news relevance, and Sentiment (Sent.) to measure sentiment control. We strictly obey a double-blind procedure, where the three annotators know nothing about the models. We sample 100 instances for every model. (Details of the automatic metrics and human evaluation settings are in Appendix A.4 and Appendix D, respectively.)

#### Main Results.

From the results in Table 5, we can draw the following conclusions: (1) ZeroGen has the highest classification accuracy gain and competitive CLIP scores among all presented statistics, proving that the proposed method can successfully produce controllable outputs under both modal supervisions. However, our model generally reduces diversity, which we consider a trade-off. (2) Introducing dynamic weighting enhances the overall model performance. While VDW strengthens the connection between the given image and the generated news content, yielding higher CLIP scores (C-S), TDW makes the output more recognizable w.r.t. the sentiment control without sacrificing content diversity. These findings validate the versatility and efficacy of the dynamic weighting mechanism even when the functional domains of the controls (i.e., $\textbf{C}_{T},\textbf{C}_{V}$) are not complementary. (3) External controls reduce our model's output confidence, giving slightly higher PPL than MAGIC, yet ZeroGen still largely outperforms MAGIC+PPLM, the only controllable counterpart, on PPL. Also, ZeroGen without parts of the dynamic weighting (e.g., -VDW) still outperforms MAGIC+PPLM on both controllability and diversity metrics. (4) ZeroGen requires no task-oriented training and is therefore nearly 10 times faster at decoding than MAGIC+PPLM. We also present human evaluation results in Table 6, which further verify the findings above. (The average Fleiss's Kappa Fleiss and Cohen (1973) is 0.28, indicating that the three annotators reached a fair agreement.)

| | | $\hat{\alpha}=5.0$ | $\hat{\alpha}=8.0$ | $\hat{\alpha}=10.0$ |
|---|---|---|---|---|
| Pos. | D-2 $\uparrow$ | 91.60 | 72.04 | 52.37 |
| | D-4 $\uparrow$ | 95.00 | 79.32 | 60.00 |
| | C-S $\uparrow$ | 19.96 | 18.11 | 17.19 |
| | $\Delta$Acc $\uparrow$ | 10.75 | 22.12 | 26.62 |
| | PPL $\downarrow$ | 11.76 | 12.22 | 10.28 |
| Neg. | D-2 $\uparrow$ | 92.95 | 76.42 | 52.09 |
| | D-4 $\uparrow$ | 95.87 | 83.01 | 60.07 |
| | C-S $\uparrow$ | 21.59 | 19.11 | 18.81 |
| | $\Delta$Acc $\uparrow$ | 11.26 | 31.75 | 51.26 |
| | PPL $\downarrow$ | 11.71 | 13.04 | 11.52 |

Table 7: Effect of different $\hat{\alpha}$ on the news generation task.

#### Effect of $\boldsymbol{\alpha}$ Upper Bound.

$\textbf{C}_{T}$ is the only source of sentiment manipulation in this task, and the upper bound $\hat{\alpha}$ on its weight decides how distinguishable an output sentence is w.r.t. sentiment. As Table 7 shows, with the increase of $\hat{\alpha}$, both control accuracy and fluency benefit significantly. However, the diversity of the texts and the image relevance indicator fall precipitously. This phenomenon can be explained as follows: stronger sentiment guidance may make the model inclined to express only the desired sentiment, trading away image-related imagination as well as diversity. At the user end, we can tweak $\hat{\alpha}$ according to the task to fit different situations.

#### $\boldsymbol{\textbf{C}_{T}}$ Plays Two Roles.

Now we have examined the function of $\textbf{C}_{T}$ as a complement (captioning) or an additional control element (news generation).
We wonder whether $\textbf{C}_{T}$ can play both roles well at the same time. We conduct experiments using ZeroGen with both objects from the image and sentiment words as the textual guidance; this variant is marked "w/ obj". Though ZeroGen w/ obj reaches a higher CLIP score (a gain that can also be obtained simply by adjusting $\hat{\alpha}$, as mentioned earlier), its control accuracy drops and its PPL rises compared with the variants without textual-object guidance. That is to say, $\textbf{C}_{T}$ may struggle to play both roles, as a complement and as a lateral extension of the image, at the same time.

Figure 3: An example of news from varied models. We highlight Positive and Negative words respectively.

#### Case Analysis.

We exhibit an example of generated controllable news in Figure 3. As the image shows, Culture Secretary Jeremy Hunt is giving a talk. All methods are able to produce image/title-relevant sentences, but MAGIC+PPLM generates some false evidence, such as recognizing Jeremy Hunt as the "leader of Conservative and Nationalist Labour groups". Besides, our ZeroGen produces more diverse and controllable words, such as "good" and "benefits" for positive and "deadbeat" and "lose" for negative, while MAGIC+PPLM fails to fulfill the control aspect. More cases are exhibited in Appendix C.

## 6 Conclusion

In this paper, we present ZeroGen, a paradigm of zero-shot controllable text generation with multimodal signals. We explicitly separate visual control and textual control into sentence-level and token-level guidance, and we use two Oracles to unify the control signals into the LM output probability space. A dynamic weighting mechanism is applied to adapt to all multimodal controls, which further boosts the model's generation ability. Three tasks, from captioning to controllable news generation, justify the effectiveness of ZeroGen and help us explore the relationship between distinct signals. By providing multimodal knowledge, we demonstrate that LMs without task-specific training can achieve impressive performance in multimodal tasks across different setups and domains.

## 7 Limitations

Although ZeroGen successfully achieves zero-shot controllable text generation, our technique is still subject to a few limitations to be addressed in follow-up work. (1) There is still a large gap between weakly and fully supervised methods. We believe the rich semantic information contained in the large-scale pre-trained Oracles we employed can further narrow it. (2) The diversity in our controllable news generation task is insufficient. Since this is a widespread problem in the field of zero-shot research, we plan to alleviate the issue by incorporating more diverse language decoding schemes Xu et al. (2022) or partially training parameters in the model, such as adapters Houlsby et al. (2019). (3) The existence of spurious correlations Tu et al. (2020); Chai et al. (2022) in bad cases (as shown in Appendix C) is non-negligible; one of our future directions is to handle it by introducing causal inference Pearl (2009).

## 8 Ethics Statement

We are well aware that text generation technologies may be abused to create deceptive, harmful, or objectionable content. For our ZeroGen, we can conduct experiments on detoxification datasets Gehman et al. (2020) to make it a useful tool for combating hate speech and eliminating harmful information in PLMs.
As we are considering components to make our method more robust and effective in multimodal controllable tasks, we believe it is meaningful and beneficial to progress research on controllable text generation. ## References * Anderson et al. (2016) Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. In _European conference on computer vision_ , pages 382–398. Springer. * Anderson et al. (2018) Peter Anderson, Stephen Gould, and Mark Johnson. 2018. Partially-supervised image captioning. _Advances in Neural Information Processing Systems_ , 31. * Bisk et al. (2020) Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, et al. 2020. Experience grounds language. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8718–8735. * Carion et al. (2020) Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In _European conference on computer vision_ , pages 213–229. Springer. * Chai et al. (2022) Junyi Chai, Reid Pryzant, Victor Ye Dong, Konstantin Golobokov, Chenguang Zhu, and Yi Liu. 2022. Fast: Improving controllability for text generation with feedback aware self-training. _arXiv preprint arXiv:2210.03167_. * Chen et al. (2020) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_ , pages 1597–1607. PMLR. * Dathathri et al. (2019) Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. _arXiv preprint arXiv:1912.02164_. * Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In _Proceedings of the ninth workshop on statistical machine translation_ , pages 376–380. * Duan et al. (2020) Yuguang Duan, Canwen Xu, Jiaxin Pei, Jialong Han, and Chenliang Li. 2020. Pre-train and plug-in: Flexible conditional text generation with variational auto-encoders. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 253–262. * Ficler and Goldberg (2017) Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. _EMNLP 2017_ , page 94. * Fleiss and Cohen (1973) Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. _Educational and psychological measurement_ , 33(3):613–619. * Gan et al. (2017) Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 3137–3146. * Gehman et al. (2020) Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020\. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 3356–3369. * Gu et al. (2022) Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, and Bing Qin. 2022\. 
Improving controllable text generation with position-aware weighted decoding. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pages 3449–3467. * Harnad (1990) Stevan Harnad. 1990. The symbol grounding problem. _Physica D: Nonlinear Phenomena_ , 42(1-3):335–346. * Hartmann et al. (2022) Jochen Hartmann, Mark Heitmann, Christian Siebert, and Christina Schamp. 2022. More than a feeling: Accuracy and application of sentiment analysis. _International Journal of Research in Marketing_. * Honda et al. (2021) Ukyo Honda, Yoshitaka Ushiku, Atsushi Hashimoto, Taro Watanabe, and Yuji Matsumoto. 2021. Removing word-level spurious alignment between images and pseudo-captions in unsupervised image captioning. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 3692–3702. * Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_ , pages 2790–2799. PMLR. * Karpathy and Fei-Fei (2015) Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 3128–3137. * Keskar et al. (2019) Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. _arXiv preprint arXiv:1909.05858_. * Kikuchi et al. (2016) Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 1328–1338. * Krause et al. (2021) Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. Gedi: Generative discriminator guided sequence generation. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 4929–4952. * Laina et al. (2019) Iro Laina, Christian Rupprecht, and Nassir Navab. 2019. Towards unsupervised image captioning with shared multimodal embeddings. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 7414–7424. * Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. _arXiv preprint arXiv:2104.08691_. * Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B Dolan. 2016\. A diversity-promoting objective function for neural conversation models. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 110–119. * Li et al. (2023) Wei Li, Linchao Zhu, Longyin Wen, and Yi Yang. 2023. Decap: Decoding clip latents for zero-shot captioning via text-only training. _arXiv preprint arXiv:2303.03032_. * Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4582–4597. * Lin and Och (2004) Chin-Yew Lin and Franz Josef Och. 
2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In _Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)_ , pages 605–612. * Liu et al. (2021a) Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A Smith, and Yejin Choi. 2021a. Dexperts: Decoding-time controlled text generation with experts and anti-experts. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 6691–6706. * Liu et al. (2021b) Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. 2021b. Visual news: Benchmark and challenges in news image captioning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021_ , pages 6761–6771. Association for Computational Linguistics. * Liu et al. (2022) Guangyi Liu, Zeyu Feng, Yuan Gao, Zichao Yang, Xiaodan Liang, Junwei Bao, Xiaodong He, Shuguang Cui, Zhen Li, and Zhiting Hu. 2022. Composable text controls in latent space with odes. _arXiv preprint arXiv:2208.00638_. * Mai et al. (2020) Florian Mai, Nikolaos Pappas, Ivan Montero, Noah A Smith, and James Henderson. 2020\. Plug and play autoencoders for conditional text generation. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6076–6092. * Nukrai et al. (2022) David Nukrai, Ron Mokady, and Amir Globerson. 2022. Text-only training for image captioning using noise-injected clip. _arXiv preprint arXiv:2211.00575_. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_ , pages 311–318. * Pascual et al. (2021) Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-and-play method for controlled text generation. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 3973–3997. * Pearl (2009) Judea Pearl. 2009. Causal inference in statistics: An overview. _Statistics surveys_ , 3:96–146. * Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , pages 1532–1543. * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_ , pages 8748–8763. PMLR. * Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9. * Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 conference on empirical methods in natural language processing_ , pages 1631–1642. * Su et al. 
(2022a) Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier. 2022a. Language models can see: Plugging visual controls in text generation. _arXiv preprint arXiv:2205.02655_. * Su et al. (2022b) Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022b. A contrastive framework for neural text generation. _arXiv preprint arXiv:2202.06417_. * Tewel et al. (2022) Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. 2022. Zerocap: Zero-shot image-to-text generation for visual-semantic arithmetic. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 17918–17928. * Tu et al. (2022) Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Siyu Zhang, and Yongfeng Huang. 2022\. Pcae: A framework of plug-in conditional auto-encoder for controllable text generation. _Knowledge-Based Systems_ , 256:109766. * Tu et al. (2020) Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. _Transactions of the Association for Computational Linguistics_ , 8:621–633. * Vedantam et al. (2015) Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 4566–4575. * Xu et al. (2022) Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. 2022. Learning to break the loop: Analyzing and mitigating repetitions for neural text generation. _arXiv preprint arXiv:2206.02369_. * Yang and Klein (2021) Kevin Yang and Dan Klein. 2021. Fudge: Controlled text generation with future discriminators. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 3511–3535. * Zeng et al. (2023) Zequn Zeng, Hao Zhang, Zhengjue Wang, Ruiying Lu, Dongsheng Wang, and Bo Chen. 2023\. Conzic: Controllable zero-shot image captioning by sampling-based polishing. _arXiv preprint arXiv:2303.02437_. * Zhao et al. (2020) Wentian Zhao, Xinxiao Wu, and Xiaoxun Zhang. 2020. Memcap: Memorizing style knowledge for image captioning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 12984–12992. ## Appendix A Implementation Details ### A.1 General Model Details For the base LM, we use SimCTG Su et al. (2022b), which is extended from a pre-trained GPT-2 Radford et al. (2019) model. SimCTG essentially consists of contrastive training and contrastive searching. As for training period, it introduces a $\mathcal{L}_{\mathrm{CL}}$ term to learn discriminative and isotropic token representations: $\displaystyle\mathcal{L}_{\mathrm{CL}}$ $\displaystyle=\frac{1}{V\times(V-1)}\sum_{i=1}^{V}\sum_{j=1,j\neq i}^{V}\max\\{0,$ (1) $\displaystyle\rho-s\left(h_{x_{i}},h_{x_{i}}\right)+s\left(h_{x_{i}},h_{x_{j}}\right)\\},$ where $V$ is the vocabulary size, function $s(\cdot,\cdot)$ is the similarity function, $h_{x_{i}}$ is the LM hidden state of token $x_{i}$ and $\rho$ is a pre-defined margin. The final training objective of a LM turns into: $\mathcal{L}_{\text{SimCTG}}=\mathcal{L}_{\mathrm{MLE}}+\mathcal{L}_{\mathrm{CL}},$ (2) with $\mathcal{L}_{\mathrm{MLE}}$ to be the vanilla MLE objective of LM. 
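For concreteness, the following is a minimal PyTorch-style sketch of the contrastive term in Eq. (1); it is an illustration rather than the released SimCTG implementation, and it assumes that $s(\cdot,\cdot)$ is cosine similarity over the LM hidden states being summed over (so that $s(h_{x_{i}},h_{x_{i}})=1$), with $\rho$ defaulting to the margin of 0.5 used for our fine-tuning in Sec. A.3.

```python
import torch
import torch.nn.functional as F

def simctg_contrastive_loss(hidden_states: torch.Tensor, rho: float = 0.5) -> torch.Tensor:
    """Contrastive term of Eq. (1) for one sequence of LM hidden states (shape: L x d)."""
    h = F.normalize(hidden_states, dim=-1)      # unit-normalize so s(.,.) is cosine similarity
    sim = h @ h.t()                             # sim[i, j] = s(h_{x_i}, h_{x_j})
    n = sim.size(0)
    self_sim = sim.diag().unsqueeze(1)          # s(h_{x_i}, h_{x_i}) = 1 after normalization
    margin = torch.clamp(rho - self_sim + sim, min=0.0)
    mask = ~torch.eye(n, dtype=torch.bool, device=sim.device)  # drop the j = i terms
    return margin[mask].sum() / (n * (n - 1))
```

In the full objective of Eq. (2), this term is simply added to the MLE loss of the base LM.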
As for contrastive decoding, at decoding time $t$, the token to be generated is formalized as: $\displaystyle S_{\text{SimCTG}}(x_{<t})=(1-\eta)\times\underbrace{p_{\theta}\left(w\mid x_{<t}\right)}_{\text{model confidence}}-$ $\displaystyle\eta\times\underbrace{\left(\max\left\\{s\left(h_{w},h_{x_{j}}\right):1\leq j\leq t-1\right\\}\right)}_{\text{degeneration penalty}}$ $\displaystyle x_{t}=\underset{w\in W_{t}^{(k)}}{\arg\max}S_{\text{SimCTG}}(x_{<t})$ where $\eta$ is a parameter to balance generation diversity and consistency. Then for our ZeroGen, we have the following decoding objective based on the shifted LM output $p^{\prime}_{\text{LM}_{t}}$: $\displaystyle x_{t}$ $\displaystyle=\underset{w\in W_{t}^{(k)}}{\arg\max}\\{S_{\text{SimCTG}}(x_{<t})$ $\displaystyle+\beta_{t}\times S_{t}(w,\textbf{C}_{V}\mid x_{<t})\\},$ here $W_{t}^{(k)}$ is grouped from $p^{\prime}_{\text{LM}_{t}}$ and $S_{t}$ is the MAGIC score we introduced in Sec. 3.2. When we apply ZeroGen, there are several parameters should be decided in advance: $k$ in $W_{t}^{(k)}$, $\eta$ for contrastive decoding, $\beta,\alpha,\hat{\beta},\hat{\alpha}$ for dynamic weighting mechanism. We present detailed parameters in Table 9 to aid reproducibility. We also present the workflow of our system in Algorithm 1. We implement all the experiments on the same machine with one NVIDIA GeForce RTX 3090 GPU with 24G memory and one Intel 3.70GHz i9-10900K CPU. We will release the code of all methods (including baselines) and datasets processing once the paper is accepted. Algorithm 1 ZeroGen Input: Visual control: $\textbf{C}_{V}$, textual control: $\textbf{C}_{T}$ Output: Generated content $\textbf{X}=[x_{1},x_{2},...]$ 1:initialize V; //LM vocabulary 2:initialize $\phi_{M},\phi_{T}$; //oracles 3:initialize $\hat{\beta},\hat{\alpha},k,\lambda$; //hyper params 4:compute $p(\textbf{V},\textbf{C}_{T})$ using $\phi_{T}$; 5:$x_{0}\longleftarrow\texttt{[BOS]},\textbf{X}\longleftarrow\text{[}x_{0}\text{]},t\longleftarrow 0$; 6:while $x_{t}\not=\texttt{[EOS]}$ do 7: $t\leftarrow t+1$; 8: compute $p_{\text{LM}_{t}}$ using base LM and $x_{<t}$; 9: compute $p_{t}(\textbf{C}_{T})$ using $p(\textbf{V},\textbf{C}_{T})$; 10: $D_{T}\leftarrow\sum_{n}p_{\text{LM}_{t}}(C_{T_{n}}\mid x_{<t})/N$; 11: $\alpha_{t}\leftarrow\min(D_{T}/\lambda,\hat{\alpha})$; 12: $p^{\prime}_{\text{LM}_{t}}\leftarrow p_{\text{LM}_{t}}+\alpha_{t}\times p_{t}(\textbf{C}_{T})$; 13: $D_{V}\leftarrow\sum_{w\in W_{t}^{(k)}}p_{\phi_{M}}(w_{t},\textbf{C}_{V})/k$; 14: $\beta_{t}\leftarrow\min(D_{V}/\lambda,\hat{\beta})$; 15: compute $S_{t}(w,\textbf{C}_{V}\mid x_{<t})$ using $\phi_{M}$; 16: $x_{t}\leftarrow\arg\max_{w}S_{t}(w,\textbf{C}_{V}\mid x_{<t})$; 17: add $x_{t}$ to content X; 18:return generated content X; Dataset | Train | Val | Test | # Voc | # Len ---|---|---|---|---|--- F10k Humor | 6,000 | 1,000 | 1,000 | 7,186 | 14.07 F10k Romantic | 6,000 | 1,000 | 1,000 | 6,434 | 14.55 VisNews | 13,098 | 200 | 800 | 23,274 | 136.20 Table 8: Detailed statistics of data employed in our tasks. # Voc and # Len represent the vocabulary size and average sentence length of current dataset. F10k represents the Flickr10k dataset. 
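To complement Algorithm 1, the snippet below sketches one decoding step with the dynamic weighting mechanism (lines 10-16) combined with the decoding objective stated above. It is an illustrative sketch rather than the released code: the arguments (e.g., `magic_scores`, `simctg_scores`, `control_token_ids`) stand for hypothetical pre-computed tensors, and each control word in $\textbf{C}_{T}$ is assumed to map to a single token id.

```python
import torch

def zerogen_step(p_lm, topic_prior, control_token_ids,
                 magic_scores, simctg_scores,
                 alpha_hat, beta_hat, lam, k):
    """One ZeroGen decoding step (cf. Algorithm 1, lines 10-16).

    p_lm:              (|V|,) next-token distribution p_LM_t from the base LM (line 8, precomputed)
    topic_prior:       (|V|,) textual prior p_t(C_T) over the vocabulary (line 9, precomputed)
    control_token_ids: token ids of the N control words in C_T
    magic_scores:      (|V|,) image-conditioned relevance scores; reused here for both D_V and S_t
    simctg_scores:     (|V|,) contrastive-search scores S_SimCTG(x_<t)
    Returns the id of the selected token x_t.
    """
    # Lines 10-11: textual dynamic weight alpha_t.
    d_t = p_lm[control_token_ids].mean()
    alpha_t = torch.clamp(d_t / lam, max=alpha_hat)

    # Line 12: shifted LM distribution p'_LM_t.
    p_shift = p_lm + alpha_t * topic_prior

    # Candidate set W_t^(k), grouped from p'_LM_t.
    cand = torch.topk(p_shift, k).indices

    # Lines 13-14: visual dynamic weight beta_t over the candidates.
    d_v = magic_scores[cand].mean()
    beta_t = torch.clamp(d_v / lam, max=beta_hat)

    # Lines 15-16: final score and argmax over W_t^(k).
    score = simctg_scores[cand] + beta_t * magic_scores[cand]
    return int(cand[torch.argmax(score)])
```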
# Params / Data | MS-COCO | Flickr30k | F10k Romantic | F10k Humor | VisNews ---|---|---|---|---|--- $k$ (int) | 45 | 25 | 45 | 45 | 5 $\eta$ (float) | 0.10 | 0.10 | 0.10 | 0.10 | 0.65 $\alpha$ (float) | 1.0 | 2.0 | 1.0 | 1.0 | 8.0 $\beta$ (float) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 $\hat{\alpha}$ (float) | 2.5 | 2.0 | 3.0 | 2.5 | 8.0 $\hat{\beta}$ (float) | 1.0 | 0.5 | 0.5 | 0.5 | 0.5 $N$ (int) | 1$\sim$5 | 1$\sim$5 | 1 | 1 | 40

Table 9: Detailed parameters of ZeroGen for different tasks.

### A.2 Image Captioning Details

For both the MS-COCO and Flickr30k datasets, we take the Karpathy split Karpathy and Fei-Fei (2015) and use the train and valid sets for base LM training and the test set for the task evaluation. In detail, the MS-COCO dataset is under a Creative Commons Attribution 4.0 License and is publicly available at https://cocodataset.org, while Flickr30k is under a Creative Commons Public Domain Dedication License and is publicly available at https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset. For the base LM, we load the publicly available pre-trained language model weights (https://huggingface.co/cambridgeltl/magic_flickr30k and https://huggingface.co/cambridgeltl/magic_mscoco) and set the maximum decoding length to 16 for this task.

### A.3 Stylized Captioning Details

For the stylized captioning and controllable news generation tasks, the Flickr10k Stylized dataset was introduced by Gan et al. (2017) and is under an unknown license; it can be publicly downloaded from https://zhegan27.github.io/Papers/FlickrStyle_v0.9.zip. On the model side, we follow Zhao et al. (2020) and randomly sample 6,000 instances from the original corpus as training data and 1,000 as test data. Their detailed statistics are shown in Table 8. Following Nukrai et al. (2022), we fine-tune two base LMs on the two training sets respectively to achieve stylized outputs. For the base LM fine-tuning, we use SimCTG with 0.5 as the margin $\rho$ and 1e-5 as the learning rate, and train the model until the loss no longer decreases on the valid set. We set the maximum sentence length to 128 for base LM training and 25 for decoding.

### A.4 Controllable News Generation Details

For the dataset and metrics, we conduct experiments based on VisNews Liu et al. (2021b). This dataset is under an unknown license and can be acquired by asking the authors directly (https://github.com/FuxiaoLiu/VisualNews-Repository). Specifically, for the image-text data, we sampled 13,000, 200, and 800 image-news pairs from the original VisNews dataset as the train, valid, and test sets. We use the train and valid sets for base LM (i.e., SimCTG) fine-tuning with the news title as an input prompt. The test set is employed for the final evaluation. Details are shown in Table 8. The maximum training news length is 200, and the maximum decoding length is set to 130.

Figure 4: Classification accuracy (Prob) and CLIP-score (C-S) with varied topic word number $N$.

For the word bags of the two sentiments, we follow Dathathri et al. (2019) and obtain the word lists of “happiness” and “negative words” for positive and negative sentiment guidance, respectively (the word lists are downloaded from www.enchantedlearning.com/wordlist). As for evaluation, we use a CLIP model that is different from the one that guides multimodal generation in ZeroGen to compute the CLIP score (C-S). For $\Delta\text{Acc}$ (%), we take the pre-trained SiEBERT model Hartmann et al. (2022), which has SOTA performance on the SST-2 dataset Socher et al. (2013), as the classifier.
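For reference, the CLIP score can be computed as in the following sketch. This is an illustration of the metric rather than our evaluation script, and the checkpoint name is a hypothetical choice (the only requirement stated above is that the evaluation CLIP model differs from the one used during generation).

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-large-patch14"        # hypothetical evaluation checkpoint
model = CLIPModel.from_pretrained(ckpt).eval()
processor = CLIPProcessor.from_pretrained(ckpt)

def clip_score(image_path: str, text: str) -> float:
    """Cosine similarity between CLIP image and text embeddings (the C-S metric, up to scaling)."""
    inputs = processor(text=[text], images=Image.open(image_path).convert("RGB"),
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())
```

$\Delta\text{Acc}$ is obtained analogously by running the generated news through the sentiment classifier and comparing the predicted label with the target sentiment.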
From Table 9, we can notice that we select the topic number $N$ to be 40 in this task, meaning at every decoding time, we input 40 sentiment words as textual control $\textbf{C}_{T}$. We conduct ablation experiments w.r.t. topic number $N$ and the classification accuracy as well as CLIP score in Figure 4. From the picture, we can figure out that, as the increase of $N$, the control degree gradually gets higher with better accuracy while the image-news relevance declines with lower C-S. This is because more words from one sentiment as $\textbf{C}_{T}$ make the model more focused on sentiment control but sacrifice some image-related text generative ability. Model | MS-COCO | Flickr30k ---|---|--- B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ ZeroGen ($N=1$) | 59.4 | 15.5 | 18.7 | 42.3 | 55.4 | 12.1 | 54.9 | 13.1 | 15.2 | 37.4 | 26.4 | 8.3 ZeroGen ($N=2$) | 60.1 | 15.6 | 18.5 | 42.3 | 55.9 | 11.9 | 55.3 | 13.3 | 15.4 | 37.7 | 27.5 | 8.3 ZeroGen ($N=3$) | 60.4 | 15.6 | 18.6 | 42.3 | 56.5 | 12.1 | 55.0 | 13.1 | 15.2 | 37.4 | 27.1 | 8.2 ZeroGen ($N=4$) | 60.5 | 15.7 | 18.7 | 42.4 | 57.0 | 12.1 | 54.5 | 13.1 | 15.2 | 37.5 | 27.2 | 8.3 ZeroGen ($N=5$) | 60.6 | 15.8 | 18.7 | 42.4 | 57.1 | 12.1 | 55.0 | 13.0 | 15.2 | 37.5 | 27.3 | 8.2 Table 10: Caption results of ZeroGen with only varied number of objects as textual control for each generation. Model | MS-COCO | Flickr30k ---|---|--- B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ ZeroGen SR | 59.6 | 15.3 | 18.4 | 42.1 | 55.5 | 11.8 | 54.8 | 13.1 | 15.0 | 37.2 | 26.7 | 8.1 ZeroGen MP | 59.9 | 15.2 | 18.3 | 42.0 | 55.2 | 11.8 | 54.8 | 13.2 | 15.1 | 37.5 | 26.7 | 8.1 ZeroGen WM | 60.6 | 15.8 | 18.7 | 42.4 | 57.1 | 12.1 | 55.3 | 13.3 | 15.4 | 37.7 | 27.5 | 8.3 Table 11: Caption results of ZeroGen with only three different $p_{t}(\textbf{C}_{T})$ selection methods. WM, MP, SR represent Word-wise Max, Mean Pooling and Step-wise Random in Sec. 3.1 respectively. Model | MS-COCO $\implies$ Flickr30k | Flickr30k $\implies$ MS-COCO ---|---|--- B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ | B@1 $\uparrow$ | B@4 $\uparrow$ | M $\uparrow$ | R-L $\uparrow$ | CIDEr $\uparrow$ | SPICE $\uparrow$ ZeroCap | 49.2 | 6.2 | 11.9 | 29.3 | 18.3 | - | 46.3 | 6.0 | 13.7 | 30.1 | 27.3 | - MAGIC | 46.4 | 6.2 | 12.2 | 31.3 | 17.5 | 5.9 | 41.4 | 5.2 | 12.5 | 30.7 | 18.3 | 5.7 ZeroGen | 50.5 | 8.1 | 13.1 | 34.5 | 17.3 | 6.0 | 46.9 | 7.6 | 14.0 | 34.4 | 26.1 | 6.8 -TDW | 50.1 | 8.0 | 12.7 | 34.0 | 17.0 | 5.8 | 46.2 | 7.1 | 13.5 | 33.7 | 23.9 | 6.2 -T | 49.3 | 8.1 | 12.5 | 33.7 | 16.7 | 5.8 | 45.1 | 6.7 | 13.1 | 33.3 | 22.4 | 5.9 -VDW | 49.3 | 7.2 | 13.0 | 33.5 | 18.5 | 6.2 | 43.8 | 6.2 | 13.5 | 32.6 | 24.5 | 6.3 -DW | 48.2 | 7.1 | 12.5 | 32.8 | 17.4 | 5.9 | 43.7 | 6.2 | 13.5 | 32.6 | 24.4 | 6.3 Table 12: Cross-domain results on two image-caption datasets MS-COCO and Flickr30k. ### A.5 Baseline Model Details For IC-SME, S2S-GCC and CapDec results, we directly take them from their respect paper. For ZeroCap, we take their official implementation from https://github.com/YoadTew/zero-shot-image-to-text and use its default parameter setting for captioning tasks. 
For MAGIC, we take the official code from https://github.com/yxuansu/MAGIC to reproduce the results. For the MAGIC+PPLM implementation, we consider two codebases: the official PPLM code from https://github.com/uber-research/PPLM and a simpler version of PPLM from https://github.com/hit-scma/CAT-PAW. We add the MAGIC process at the decoding stage of PPLM to obtain MAGIC+PPLM, and we provide a minimal reproducible implementation of it in this anonymous repository: https://anonymous.4open.science/r/Pplm_Magic-3E15. For positive sentiment control, we set the number of iterations for each LM hidden-state update to 5 with a step size of 0.03, and use 15 iterations for negative control. We therefore use (15+5)/2=10 iterations for the decoding-time measurement in Sec. 5.3. We use the same SimCTG model as in ZeroGen on VisNews for MAGIC+PPLM generation. We set the maximum decoding length to 130 and $k=5$ for MAGIC searching in MAGIC+PPLM, as in our method. Other hyper-parameters are the default values in the official code repositories.

## Appendix B Additional Experimental Results

### B.1 Number of Objects in $\boldsymbol{\textbf{C}_{T}}$ for Captioning

We display the full results of varying the number of objects in $\textbf{C}_{T}$ (the number $N$) for the captioning task in Table 10. For images with fewer than $N$ extracted objects, we simply use all of their existing objects as the textual control $\textbf{C}_{T}$. We can observe that, for both datasets, more textual guidance can bring performance gains. Nevertheless, adding more objects does not necessarily yield better metrics. For instance, on Flickr30k, the model with $N=2$ performs best among all $N$ settings. This may be because too much textual guidance causes confusion for ZeroGen on relatively easy tasks (i.e., shorter captions, smaller vocabulary size).

### B.2 $\boldsymbol{p(\textbf{C}_{T})}$ Selection Method in Captioning

We also present the full results of the $p(\textbf{C}_{T})$ selection methods in Table 11. On both datasets, the results are consistent and indicate that WM is the best way of calculating $p_{t}(\textbf{C}_{T})$ at the $t$-th step.

### B.3 Image Captioning in Cross Domain

For the cross-domain captioning evaluation, we follow the setting in Su et al. (2022b) and fine-tune the base LM on one dataset (e.g., MS-COCO), while evaluating the model’s captioning capacity on another dataset (e.g., $\texttt{MS-COCO}\implies$ Flickr30k). The results in Table 12 corroborate the findings from the in-domain evaluations in Sec. 5.1.

Figure 5: Good case of news generated by our models compared with baselines.

Figure 6: Good case of news generated by our models compared with baselines.

## Appendix C More Cases of ZeroGen

In the presented cases, we highlight Positive words and Negative words respectively.

#### Good Cases.

As shown in Figures 5 and 6, the proposed method is capable of generating image-related and sentiment-controllable texts even with very few prompt words (Figure 5). Our ZeroGen generally produces more diverse sentiment-specific words such as “beautiful”, “unique”, “great-looking” and “good” for positive sentiment, and “terrible”, “damaged”, “dirty” and “evil” for negative sentiment. The compared baselines are unable to generate controllable news. For instance, MAGIC+PPLM generates very short texts given negative sentiment for the Hair today image in Figure 5, and negative content given positive sentiment in Figure 6.

#### Bad Cases.

We present bad cases of ZeroGen in Figures 7 and 8.
For both cases, we can observe that there may exist generation biases caused by spurious correlations in the dataset Tu et al. (2020); Chai et al. (2022). In Figure 7, the image depicts a smiling woman with a flowered sweater, so the visual control may be confounded with the textual control when $\textbf{C}_{T}$ includes negative words (in the case where no task-oriented training is conducted). Our method struggles to generate negative-only content given the image and negative keywords. In Figure 8, the title “Morrisons faces gloomy week” gives away its sentiment preference (i.e., negative) for the image. Similarly, both ZeroGen and MAGIC+PPLM fail to generate news with only positive sentiment given positive textual control. We plan to explore causality-related solutions such as self-training and causal intervention Pearl (2009); Chai et al. (2022) to address this issue in the future.

Figure 7: Bad case of news generated by our models compared with baselines.

Figure 8: Bad case of news generated by our models compared with baselines.

## Appendix D Human Evaluation

As annotators, we hire three graduate students from America or China with fluent English reading skills. Each annotator is assigned $100$ (instances)$\times 3$ (models)$\times 3$ (aspects) $=900$ rating tasks, resulting in $900$ (tasks)$\times 3$ (annotators) $=2,700$ human ratings in total. We use a three-scale scheme (i.e., scores 1 and 2 are merged into the poor class, 3 into the moderate class, and 4 and 5 into the good class) for Fluency and Relevance to compute Fleiss’ kappa Fleiss and Cohen (1973). The annotators have acknowledged the use of the annotated data sets and are paid an average annotation salary. All annotators were aware of the potential risks and ethical concerns of machine-generated texts.

#### Annotation Instruction

Here we present the human evaluation standard:

Fluency (Flue.): Whether the generated news is fluent and easy to understand.

1. The system’s result does not make sense and it is unreadable.
2. Choose this score when you are hesitant between score 1 and score 3.
3. The system’s result contains minor errors, but they do not affect your understanding.
4. Choose this score when you are hesitant between score 3 and score 5.
5. The system’s result is human-like, grammatically correct, and very easy to understand.

Relevance (Relv.): Whether the generated news is related to the given image and the corresponding title.

1. The system’s result is completely irrelevant to the given image.
2. Choose this score when you are hesitant between score 1 and score 3.
3. The system’s result is partially related to the image and some of its content can be found in the image.
4. Choose this score when you are hesitant between score 3 and score 5.
5. The system’s result is very related to the given image and contains a diverse set of concepts in the image.

Sentiment (Sent.): Does the generated news have a positive or negative sentiment?

* Positive: The system’s result has a positive sentiment.
* Negative: The system’s result has a negative sentiment.
* Can’t Tell: The system’s result is neither negative nor positive.
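Below is a minimal sketch of how the reported agreement statistic could be computed with the Fleiss’ kappa routine from statsmodels; the ratings array is purely illustrative and is not our annotation data.

```python
import numpy as np
from statsmodels.stats import inter_rater as irr

# One row per rated output, one column per annotator; entries are the
# three-scale classes described above (0 = poor, 1 = moderate, 2 = good).
ratings = np.array([
    [2, 2, 1],
    [0, 1, 0],
    [2, 2, 2],
])

table, _ = irr.aggregate_raters(ratings, n_cat=3)   # items x categories count table
print("Fleiss' kappa:", irr.fleiss_kappa(table, method="fleiss"))
```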
[Theory of computation]: Formal languages and automata theory — Automata over infinite objects — Quantitative automata Primary: 68Q45, 68Q70; Secondary: 68Q42 # Weighted $\omega$-Restricted One-Counter Automata Manfred Droste Universität Leipzig, Institut für Informatik <EMAIL_ADDRESS>and Werner Kuich Technische Universität Wien, Institut für Diskrete Mathematik und Geometrie<EMAIL_ADDRESS> ###### Abstract. Let $S$ be a complete star-omega semiring and $\Sigma$ be an alphabet. For a weighted $\omega$-restricted one-counter automaton $\mathcal{C}$ with set of states $\\{1,\dots,n\\}$, $n\geq 1$, we show that there exists a mixed algebraic system over a complete semiring-semimodule pair ${((S\ll\Sigma^{*}\gg)^{n\times n},(S\ll\Sigma^{\omega}\gg)^{n})}$ such that the behavior $\|\mathcal{C}\|$ of $\mathcal{C}$ is a component of a solution of this system. In case the basic semiring is $\mathbb{B}$ or $\mathbb{N}^{\infty}$ we show that there exists a mixed context-free grammar that generates $\|\mathcal{C}\|$. The construction of the mixed context-free grammar from $\mathcal{C}$ is a generalization of the well-known triple construction in case of restricted one-counter automata and is called now triple-pair construction for $\omega$-restricted one-counter automata. ###### Key words and phrases: weighted pushdown automata, algebraic series, weighted contextfree grammar, formal power series, complete semiring This work was partially supported by DFG Graduiertenkolleg 1763 (QuantLA) The second author was partially supported by Austrian Science Fund (FWF): grant no. I1661 – N25. ## 1\. Introduction Restricted one-counter pushdown automata and languages were introduced by Greibach [13] and considered in Berstel [1], Chapter vii@ 4. These restricted one-counter pushdown automata are pushdown automata having just one pushdown symbol accepting by empty tape, and the family of restricted one-counter languages is the family of languages accepted by them. Let L be the Lukasiewicz language, i.e., the formal language over the alphabet $\Sigma=\\{a,b\\}$ generated by the context-free grammar with productions $S\to aSS,S\to b$. Then the family of restricted one-counter languages is the principal cone generated by L, while the family of one-counter languages is the full AFL generated by L. All these results can be transferred to formal power series and restricted one-counter automata over them (see Kuich, Salomaa [16], Example 11.5). Restricted one-counter automata can also be used to accept infinite words and it is this aspect we generalize in our paper. We consider weighted $\omega$-restricted one-counter automata and their relation to algebraic systems over the complete semiring-semimodule pair $(S^{n\times n},V^{n})$, where $S$ is a complete star-omega semiring. It turns out that the well-known triple construction for pushdown automata in case of unweighted restricted one-counter automata can be generalized to a triple-pair construction for weighted $\omega$-restricted one-counter automata. In the classical theory, the triple construction yields for a given pushdown automaton an equivalent context-free grammar. (See Harrison [14], Theorem 5.4.3; Bucher, Maurer [3], Sätze 2.3.10, 2.3.30; Kuich, Salomaa [16], pages 178, 306; Kuich [15], page 642; Ésik, Kuich [11], pages 77, 78.) The paper consists of this and three more sections. In Section 2, we review the necessary preliminaries. In Section 3, restricted one-counter matrices are introduced and their properties are studied. 
The main result is that, for such a matrix $M$, the $p$-block of the infinite column vector $M^{\omega,k}$ is a solution of the linear equation $z=(M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p}+M_{p,p^{2}})z$. In Section 4, weighted $\omega$-restricted one-counter automata are introduced as a special case of weighted $\omega$-pushdown automata. We show that for a weighted $\omega$-restricted one-counter automaton $\mathcal{C}$ there exists a mixed algebraic system such that the behavior $\|\mathcal{C}\|$ of $\mathcal{C}$ is a component of a solution of this system. In Section 5 we consider the case that the complete star-omega semiring $S$ is equal to $\mathbb{B}$ or $\mathbb{N}^{\infty}$. Then for a given weighted $\omega$-restricted one- counter automaton $\mathcal{C}$ a mixed context-free grammar is constructed that generates $\|\mathcal{C}\|$. This construction is a generalization of the well-known triple construction in case of restricted one-counter automata and is called _triple-pair construction_ for $\omega$-restricted one-counter automata. ## 2\. Preliminaries For the convenience of the reader, we quote definitions and results of Ésik, Kuich [7, 8, 10] from Ésik, Kuich [11]. The reader should be familiar with Sections 5.1-5.6 of Ésik, Kuich [11]. A semiring $S$ is called _complete_ if it is possible to define sums for all families $(a_{i}\mid i\in I)$ of elements of $S$, where $I$ is an arbitrary index set, such that the following conditions are satisfied (see Conway [4], Eilenberg [6], Kuich [15]): (i) $\displaystyle\sum\limits_{i\in\emptyset}a_{i}=0,\qquad\sum\limits_{i\in\\{j\\}}a_{i}=a_{j},\qquad\sum\limits_{i\in\\{j,k\\}}a_{i}=a_{j}+a_{k}\text{ for }j\neq k\,,$ (ii) $\displaystyle\sum\limits_{j\in J}\big{(}\sum_{i\in I_{j}}a_{i}\big{)}=\sum_{i\in I}a_{i}\,,\text{ if }\ \bigcup_{j\in J}\\!I_{j}=I\ \text{ and }\ I_{j}\cap I_{j^{\prime}}=\emptyset\ \text{ for }\ j\neq j^{\prime}\,,$ (iii) $\displaystyle\sum_{i\in I}(c\cdot a_{i})=c\cdot\big{(}\sum_{i\in I}a_{i}\big{)},\qquad\sum_{i\in I}(a_{i}\cdot c)=\big{(}\sum_{i\in I}a_{i}\big{)}\cdot c\,.$ This means that a semiring $S$ is complete if it is possible to define “infinite sums” (i) that are an extension of the finite sums, (ii) that are associative and commutative and (iii) that satisfy the distribution laws. If $S$ is a monoid and conditions (i) and (ii) are satisfied then $S$ is called a _complete monoid_. A semiring S equipped with an additional unary star operation ${}^{*}:S\to S$ is called a starsemiring. In complete semirings for each element $a$, the _star_ $a^{*}$ of $a$ is defined by $a^{*}=\sum_{j\geq 0}a^{j}\,.$ Hence, each complete semiring is a starsemiring, called a _complete starsemiring_. Suppose that $S$ is a semiring and $V$ is a commutative monoid written additively. We call $V$ a (left) $S$-semimodule if $V$ is equipped with a (left) action $\displaystyle S\times V$ $\displaystyle\ \to\ V$ $\displaystyle(s,v)$ $\displaystyle\ \mapsto\ sv$ subject to the following rules: $\displaystyle s(s^{\prime}v)$ $\displaystyle=(ss^{\prime})v$ $\displaystyle 1v$ $\displaystyle=v$ $\displaystyle(s+s^{\prime})v$ $\displaystyle=sv+s^{\prime}v$ $\displaystyle\hskip 56.9055pt0v$ $\displaystyle=0$ $\displaystyle s(v+v^{\prime})$ $\displaystyle=sv+sv^{\prime}$ $\displaystyle s0$ $\displaystyle=0,$ for all $s,s^{\prime}\in S$ and $v,v^{\prime}\in V$. When V is an $S$-semimodule, we call $(S,V)$ a _semiring-semimodule pair_. 
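For a concrete illustration (a standard example; it is the instance that reappears in Section 5 via the isomorphism between $\mathbb{B}\ll\Sigma^{*}\gg\times\mathbb{B}\ll\Sigma^{\omega}\gg$ and $2^{\Sigma^{*}}\times 2^{\Sigma^{\omega}}$): let $\Sigma$ be an alphabet, let $S=2^{\Sigma^{*}}$ be the semiring of languages of finite words with union as addition and concatenation as multiplication, and let $V=2^{\Sigma^{\omega}}$ be the commutative monoid of languages of infinite words under union. With the action $L\cdot W=\{uw\mid u\in L,\,w\in W\}$ for $L\subseteq\Sigma^{*}$ and $W\subseteq\Sigma^{\omega}$, all six laws above hold, so $(2^{\Sigma^{*}},2^{\Sigma^{\omega}})$ is a semiring-semimodule pair.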
Suppose that $(S,V)$ is a semiring-semimodule pair such that $S$ is a starsemiring and $S$ and $V$ are equipped with an omega operation ${}^{\omega}:S\to V$. Then we call $(S,V)$ a _starsemiring-omegasemimodule pair_. Ésik, Kuich [9] define a _complete semiring-semimodule pair_ to be a semiring- semimodule pair $(S,V)$ such that $S$ is a complete semiring and V is a complete monoid with $\displaystyle s\big{(}\sum_{i\in I}v_{i}\big{)}$ $\displaystyle=\sum_{i\in I}sv_{i}$ $\displaystyle\big{(}\sum_{i\in I}s_{i}\big{)}v$ $\displaystyle=\sum_{i\in I}s_{i}v\,,$ for all $s\in S$, $v\in V$, and for all families $(s_{i})_{i\in I}$ over $S$ and $(v_{i})_{i\in I}$ over $V$; moreover, it is required that an _infinite product operation_ $(s_{1},s_{2},\ldots)\ \mapsto\ \prod_{j\geq 1}s_{j}$ is given mapping infinite sequences over $S$ to $V$ subject to the following three conditions: $\displaystyle\prod_{i\geq 1}s_{i}$ $\displaystyle\ =\ \prod_{i\geq 1}(s_{n_{i-1}+1}\cdot\dots\cdot s_{n_{i}})$ $\displaystyle s_{1}\cdot\prod_{i\geq 1}s_{i+1}$ $\displaystyle\ =\ \prod_{i\geq 1}s_{i}$ $\displaystyle\prod_{j\geq 1}\sum_{i_{j}\in I_{j}}s_{i_{j}}$ $\displaystyle\ =\ \sum_{(i_{1},i_{2},\dots)\in I_{1}\times I_{2}\times\dots}\prod_{j\geq 1}s_{i_{j}}\,,$ where in the first equation $0=n_{0}\leq n_{1}\leq n_{2}\leq\dots$ and $I_{1},I_{2},\dots$ are arbitrary index sets. Suppose that $(S,V)$ is complete. Then we define $\displaystyle s^{*}$ $\displaystyle\ =\ \sum_{i\geq 0}s^{i}$ $\displaystyle s^{\omega}$ $\displaystyle\ =\ \prod_{i\geq 1}s\,,$ for all $s\in S$. This turns $(S,V)$ into a starsemiring-omegasemimodule pair. Observe that, if $(S,V)$ is a complete semiring-semimodule pair, then $0^{\omega}=0$. For a starsemiring $S$, we denote by $S^{n\times n}$ the semiring of $n\times n$-matrices over $S$. If $(S,V)$ is a complete semiring-semimodule pair then, by Ésik, Kuich [12], $(S^{n\times n},V^{n})$ is again a complete semiring- semimodule pair. A _star-omega semiring_ is a semiring $S$ equipped with unary operations ∗ and ${}^{\omega}:S\to S$. A star-omega semiring $S$ is called _complete_ if $(S,S)$ is a complete semiring semimodule pair, i.e., if $S$ is complete and is equipped with an infinite product operation that satisfies the three conditions stated above. For the theory of infinite words and finite automata accepting infinite words by the Büchi condition consult Perrin, Pin [17]. ## 3\. Restricted one-counter matrices In this section we introduce restricted one-counter (roc) matrices. Restricted one-counter matrices are a special case of pushdown matrices introduced by Kuich, Salomaa [16]. A matrix $M\in(S^{n\times n})^{\Gamma^{*}\times\Gamma^{*}}$ is termed a _pushdown transition matrix_ (with _pushdown alphabet_ $\Gamma$ and _set of states_ $\\{1,\dots,n\\}$) if 1. (i) for each $p\in\Gamma$ there exist only finitely many blocks $M_{p,\pi}$, $\pi\in\Gamma^{*}$, that are non-zero; 2. (ii) for all $\pi_{1},\pi_{2}\in\Gamma^{*}$, $M_{\pi_{1},\pi_{2}}=\left\\{\begin{array}[]{ll}M_{p,\pi}&\hskip 5.69046pt\text{if there exist }p\in\Gamma,\pi,\pi^{\prime}\in\Gamma^{*}\text{ with }\pi_{1}=p\pi^{\prime}\text{ and }\pi_{2}=\pi\pi^{\prime},\\\ 0&\hskip 5.69046pt\text{otherwise.}\end{array}\right.$ Theorem 10.5 of Kuich, Salomaa [16] states that for pushdown matrices over power series semirings with particular properties, $(M^{*})_{\pi_{1}\pi_{2},\varepsilon}=(M^{*})_{\pi_{1},\varepsilon}(M^{*})_{\pi_{2},\varepsilon}$ holds for all $\pi_{1},\pi_{2}\in\Gamma^{*}$. 
This result is generalized in the case of roc-matrices to arbitrary roc-matrices over complete starsemirings in Corollary 2. Then we prove some important equalities for roc-matrices. In Theorem 1 and Corollary 2, $S$ denotes a complete starsemiring; afterwards in this section, $(S,V)$ denotes a complete semiring-semimodule pair. A _restricted one-counter_ (abbreviated _roc_) _matrix (with counter symbol $p$)_ is a matrix $M$ in $(S^{n\times n})^{p^{*}\times p^{*}}$, for some $n\geq 1$, subject to the following condition: There exist matrices $A,B,C\in S^{n\times n}$ such that, for all $k\geq 1$, $M_{p^{k},p^{k+1}}=A\,,\quad M_{p^{k},p^{k}}=C\,\quad M_{p^{k},p^{k-1}}=B\,,$ and these blocks of $M$ are the only ones which may be non-zero. (Here, $p^{*}=\\{p^{n}\mid n\geq 0\\}$. A block of $M$ is an element of the matrix $M$ which is itself a matrix in $S^{n\times n}$.) Observe that, for $k\geq 1$, $\displaystyle M_{p^{k},p^{k+1}}$ $\displaystyle\ =\ M_{p,p^{2}}$ $\displaystyle\ =\ A\,,$ $\displaystyle M_{p^{k},p^{k}}$ $\displaystyle\ =\ M_{p,p}$ $\displaystyle\ =\ C\,,$ $\displaystyle M_{p^{k},p^{k-1}}$ $\displaystyle\ =\ M_{p,\varepsilon}$ $\displaystyle\ =\ B\,,$ $\displaystyle M_{\varepsilon,p^{k}}$ $\displaystyle\ =\ M_{\varepsilon,\varepsilon}$ $\displaystyle\ =\ 0\,.$ Also note that the matrix $A$ (resp $B,C$) in $S^{n\times n}$ describes the weight of transitions when pushing (resp., popping, not changing) an additional symbol $p$ to (resp., from) the pushdown counter. ###### Theorem 1. Let S be a complete starsemiring and $M$ be a roc-matrix. Then, for all $i\geq 0$, $(M^{*})_{p^{i+1},\varepsilon}\ =\ (M^{*})_{p,\varepsilon}(M^{*})_{p^{i},\varepsilon}\,.$ ###### Proof 3.1. First observe that $(M^{*})_{p^{i+1}\\!,\varepsilon}=\\!\\!\sum_{m\geq 0}(M^{m+1})_{p^{i+1}\\!,\varepsilon}=\\!\\!\sum_{m\geq 0}\sum_{\hskip 8.5359pti_{1},\dots,i_{m}\geq 1}\\!\\!\\!\\!\\!M_{p^{i\text{+}1}\\!,p^{i_{1}}}M_{p^{i_{1}}\\!,p^{i_{2}}}\dots M_{p^{i_{m\text{-}1}}\\!,p^{i_{m}}}M_{p^{i_{m}}\\!,\varepsilon}\,,$ where, for $m=0$, the product equals $M_{p^{i+1},\varepsilon}$. Now we obtain $\displaystyle(M^{*})_{p^{i+1},\varepsilon}=\ $ $\displaystyle\sum_{m\geq 0}\sum_{i_{1},\dots,i_{m}\geq 1}M_{p^{i+1},p^{i_{1}}}\dots M_{p^{i_{m-1}},p^{i_{m}}}M_{p^{i_{m}},\varepsilon}$ $\displaystyle=\ $ $\displaystyle\sum_{m_{1}\geq 0}\big{(}\sum_{j_{1},\dots,j_{m_{1}}\geq 1}M_{p^{i+1},p^{i+j_{1}}}\dots M_{p^{i+j_{m_{1}-1}},p^{i+j_{m_{1}}}}M_{p^{i+j_{m_{1}}},p^{i}}\big{)}\cdot$ $\displaystyle\sum_{m_{2}\geq 0}\big{(}\sum_{i_{1},\dots,i_{m_{2}}\geq 1}M_{p^{i},p^{i_{1}}}\dots M_{p^{i_{m_{2}-1}},p^{i_{m_{2}}}}M_{p^{i_{m_{2}}},\varepsilon}\big{)}$ $\displaystyle=\ $ $\displaystyle\sum_{m_{1}\geq 0}\big{(}\sum_{j_{1},\dots,j_{m_{1}}\geq 1}M_{p,p^{j_{1}}}\dots M_{p^{j_{m_{1}-1}},p^{j_{m_{1}}}}M_{p^{j_{m_{1}}},\varepsilon}\big{)}(M^{*})_{p^{i},\varepsilon}$ $\displaystyle=\ $ $\displaystyle(M^{*})_{p,\varepsilon}(M^{*})_{p^{i},\varepsilon}\,.$ Clearly, in each sequence leading from $p^{i+1}$ to $\varepsilon$, there is a first time at which the top $p$ is reduced to $\varepsilon$ and at which $p^{i}$ is seen. This moment is reached at the end of the second line. Hence, in the second line the pushdown contents $p^{i+j_{1}},\dots,p^{i+j_{m_{1}}}$, $m_{1}\geq 0$ are always nonempty. ###### Corollary 2. For all $i\geq 0$, $\ (M^{*})_{p^{i},\varepsilon}\ =\ ((M^{*})_{p,\varepsilon})^{i}$. ###### Lemma 3. Let $(S,V)$ be a complete semiring-semimodule pair. Let $M\in(S^{n\times n})^{p^{*}\times p^{*}}$ be a roc-matrix. 
Then $(M^{\omega})_{p^{2}}\ =\ (M^{\omega})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega})_{p}\,.$ ###### Proof 3.2. Subsequently in the first equation we split the summation so that in the first summand there is no factor $M_{p^{2},p}$, while in the second summand there is at least one factor $M_{p^{2},p}$; since $k_{1},\dots,k_{m}\geq 2$, $M_{p^{k_{m}},p}$ is the first such factor. In the second equality we use the property of $M$ being a roc-matrix: $M_{p^{i},p^{j}}=M_{p^{i-1},p^{j-1}}$ for $i\geq 2$, $j\geq 1$. We compute: $\displaystyle(M^{\omega})_{p^{2}}=\ $ $\displaystyle\sum_{i_{1},i_{2},\dots\geq 2}M_{p^{2},p^{i_{1}}}M_{p^{i_{1}},p^{i_{2}}}\dots+$ $\displaystyle\sum_{m\geq 0}\sum_{k_{1},k_{2},\dots,k_{m}\geq 2}\\!\\!\\!M_{p^{2},p^{k_{1}}}M_{p^{k_{1}},p^{k_{2}}}\dots M_{p^{k_{m}},p}\sum_{j_{1},j_{2},\dots\geq 1}\\!\\!M_{p,p^{j_{1}}}M_{p^{j_{1}},p^{j_{2}}}\dots$ $\displaystyle=\ $ $\displaystyle\sum_{i_{1},i_{2},\dots\geq 2}M_{p,p^{i_{1}-1}}M_{p^{i_{1}-1},p^{i_{2}-1}}\dots+$ $\displaystyle\sum_{m\geq 0}\sum_{k_{1},k_{2},\dots,k_{m}\geq 2}\\!\\!\\!M_{p,p^{k_{1}-1}}M_{p^{k_{1}-1},p^{k_{2}-1}}\dots M_{p^{k_{m}-1},\varepsilon}(M^{\omega})_{p}$ $\displaystyle\ =(M^{\omega})_{p}+\sum_{m\geq 0}(M^{m+1})_{p,\varepsilon}(M^{\omega})_{p}$ $\displaystyle\ =(M^{\omega})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega})_{p}\,.$ Intuitively, our next theorem states that infinite computations starting with $p$ on the pushdown tape yield the same matrix $(M^{\omega})_{p}$ as the sum of the following three matrix products: * • $M_{p,p^{2}}(M^{\omega})_{p}$ (i.e., changing the contents of the pushdown tape from $p$ to $pp$ and starting the infinite computations with the leftmost $p$; the second $p$ is never read), * • $M_{p,p^{2}}(M^{*})_{p,\varepsilon}(M^{\omega})_{p}$ (i.e., changing the contents of the pushdown tape from $p$ to $pp$, emptying the leftmost $p$ by finite computations and starting the infinite computations with the rightmost $p$), * • $M_{p,p}(M^{\omega})_{p}$ (i.e., changing the contents of the pushdown tape from $p$ to $p$ and starting the infinite computations with this $p$). The forthcoming Theorem 7 has an analogous intuitive interpretation. ###### Theorem 4. Let $(S,V)$ be a complete semiring-semimodule pair and let $M\in(S^{n\times n})^{p^{*}\times p^{*}}$ be a roc-matrix. Then $(M^{\omega})_{p}\ =\ (M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})(M^{\omega})_{p}\,.$ ###### Proof 3.3. We obtain, by Lemma 3 $\displaystyle(M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})(M^{\omega})_{p}$ $\displaystyle=\quad$ $\displaystyle M_{p,p^{2}}((M^{\omega})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega})_{p})+M_{p,p}(M^{\omega})_{p}$ $\displaystyle=\quad$ $\displaystyle M_{p,p^{2}}(M^{\omega})_{p^{2}}+M_{p,p}(M^{\omega})_{p}\ =\ (MM^{\omega})_{p}\ =\ (M^{\omega})_{p}\,.$ ###### Corollary 5. Let $M\in(S^{n\times n})^{p^{*}\times p^{*}}$ be a roc-matrix. Then $(M^{\omega})_{p}$ is a solution of $z=(M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})z\,.$ When we say “$G$ is the graph with matrix $M\in(S^{n\times n})^{p^{*}\times p^{*}}$” then it means that $G$ is the graph with adjacency matrix $M^{\prime}\in S^{(p^{*}\times n)\times(p^{*}\times n)}$, where $M^{\prime}$ corresponds to $M$ with respect to the canonical isomorphism between $(S^{n\times n})^{p^{*}\times p^{*}}$ and $S^{(p^{*}\times n)\times(p^{*}\times n)}$. Let now $M$ be a roc-matrix and $0\leq k\leq n$. 
Then $M^{\omega,k}$ is the column vector in $(V^{n})^{p^{*}}$ defined as follows: For $i\geq 1$ and $1\leq j\leq n$, let $((M^{\omega,k})_{p^{i}})_{j}$ be the sum of all weights of paths in the graph with matrix $M$ that have initial vertex $(p^{i},j)$ and visit vertices $(p^{i^{\prime}},j^{\prime})$, $i^{\prime}\in\mathbb{N}$, $j^{\prime}\in\\{1,\ldots,k\\}$, infinitely often. Observe that $M^{\omega,0}=0$ and $M^{\omega,n}=M^{\omega}$. Later on it will be seen that this formalizes the Büchi acceptance condition with repeated states $\\{1,\ldots,k\\}$. Let $P_{k}=\\{(j_{1},j_{2},\dots)\in\\{1,\dots,n\\}^{\omega}\mid j_{t}\leq k\text{ for infinitely many }t\geq 1\\}$. Then for $1\leq j\leq n$, we obtain $((M^{\omega,k})_{p})_{j}=\sum_{i_{1},i_{2},\dots\geq 1}\sum_{(j_{1},j_{2},\dots)\in P_{k}}(M_{p,p^{i_{1}}})_{j,j_{1}}(M_{p^{i_{1}},p^{i_{2}}})_{j_{1},j_{2}}(M_{p^{i_{2}},p^{i_{3}}})_{j_{2},j_{3}}\dots\,.$ By Theorem 5.4.1 of Ésik, Kuich [11], we obtain for a finite matrix $A\in S^{n\times n}$ and for $0\leq k\leq n$, $1\leq j\leq n$, $(A^{\omega,k})_{j}=\sum_{(j_{1},j_{2},\dots)\in P_{k}}A_{j,j_{1}}A_{j_{1},j_{2}}A_{j_{2},j_{3}}\dots\,.$ Observe that again $A^{\omega,0}=0$ and $A^{\omega,n}=A^{\omega}$. In the next lemma, we use the following summation identity: Assume that $A_{1},A_{2},\dots$ are matrices in $S^{n\times n}$. Then for $0\leq k\leq n$, $1\leq j\leq n$, and $m\geq 1$, $\displaystyle\sum_{(j_{1},j_{2},\dots)\in P_{k}}(A_{1})_{j,j_{1}}(A_{2})_{j_{1},j_{2}\dots}=$ $\displaystyle\sum_{1\leq j_{1},\dots,j_{m}\leq n}(A_{1})_{j,j_{1}}\dots(A_{m})_{j_{m-1},j_{m}}\sum_{(j_{m+1},j_{m+2},\dots)\in P_{k}}(A_{m+1})_{j_{m},j_{m+1}}\dots\,.$ ###### Lemma 6. Let $(S,V)$ be a complete semiring-semimodule pair. Let $M\in(S^{n\times n})^{\Gamma^{*}\times\Gamma^{*}}$ be a roc-matrix and $0\leq k\leq n$. Then $(M^{\omega,k})_{p^{2}}\ =\ (M^{\omega,k})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega,k})_{p}\,.$ ###### Proof 3.4. We use the proof of Lemma 3, i.e., the proof for the case $M^{\omega,n}=M^{\omega}$. For $1\leq j\leq n$, we obtain $((M^{\omega,k})_{p^{2}})_{j}=$ $\displaystyle\sum_{i_{1},i_{2},\dots\geq 2}\sum_{(j_{1},j_{2},\dots)\in P_{k}}(M_{p,p^{i_{1}-1}})_{j,j_{1}}(M_{p^{i_{1}-1},p^{i_{2}-1}})_{j_{1},j_{2}}\dots+$ $\displaystyle\Big{(}\sum_{1\leq j^{\prime}\leq n}\sum_{m\geq 0}\sum_{k_{1},k_{2},\dots,k_{m}\geq 2}\sum_{1\leq j_{1},\dots,j_{m}\leq n}\\!\\!(M_{p,p^{k_{1}-1}})_{j,j_{1}}\dots(M_{p^{k_{m}-1},\varepsilon})_{j_{m},j^{\prime}}\Big{)}\ \cdot$ $\displaystyle\Big{(}\sum_{k_{m+2},k_{m+3},\dots\geq 1}\sum_{(j_{m+2},j_{m+3},\dots)\in P_{k}}\\!\\!(M_{p,p^{k_{m+2}}})_{j^{\prime},j_{m+2}}(M_{p^{k_{m+2}},p^{k_{m+3}}})_{j_{m+2},j_{m+3}}\dots\Big{)}=$ $\displaystyle((M^{\omega,k})_{p})_{j}+\sum_{1\leq j^{\prime}\leq n}((M^{*})_{p,\varepsilon})_{j,j^{\prime}}((M^{\omega,k})_{p})_{j^{\prime}}=$ $\displaystyle((M^{\omega,k})_{p})_{j}+((M^{*})_{p,\varepsilon}(M^{\omega,k})_{p})_{j}=$ $\displaystyle((M^{\omega,k})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega,k})_{p})_{j}\,.$ ###### Theorem 7. Let $(S,V)$ be a complete semiring-semimodule pair and let $M\in(S^{n\times n})^{p^{*}\times p^{*}}$ be a roc-matrix. Then $(M^{\omega,k})_{p}\ =\ (M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})(M^{\omega,k})_{p}\,,$ for all $0\leq k\leq n$. ###### Proof 3.5. 
We obtain, by Lemma 6, for all $0\leq k\leq n$, $\displaystyle(M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})(M^{\omega,k})_{p}$ $\displaystyle=\quad$ $\displaystyle M_{p,p^{2}}((M^{\omega,k})_{p}+(M^{*})_{p,\varepsilon}(M^{\omega,k})_{p})+M_{p,p}(M^{\omega,k})_{p}$ $\displaystyle=\quad$ $\displaystyle M_{p,p^{2}}(M^{\omega,k})_{p^{2}}+M_{p,p}(M^{\omega,k})_{p}\ =\ (MM^{\omega,k})_{p}\ =\ (M^{\omega,k})_{p}\,.$ ###### Corollary 8. Let $M\in(S^{n\times n})^{p^{*}\times p^{*}}$ be a roc-matrix. Then, for all $0\leq k\leq n$, $(M^{\omega,k})_{p}$ is a solution of $z=(M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})z\,.$ ## 4\. $\omega$-restricted one-counter automata In this section, we define $\omega$-roc automata as a special case of $\omega$-pushdown automata. We show that for an $\omega$-roc automaton $\mathcal{C}$ there exists an algebraic system over a complete semiring- semimodule pair such that the behavior $\|\mathcal{C}\|$ of $\mathcal{C}$ is a component of a solution of this system. In the sequel, $(S,V)$ is a complete semiring-semimodule pair and $S^{\prime}$ is a subset of $S$ containing $0$ an $1$. An _$S^{\prime}$ -$\omega$-pushdown automaton_ $\mathcal{P}=(n,\Gamma,I,M,P,p_{0},k)$ is given by 1. (i) a finite set of _states_ $\\{1,\dots,n\\}$, $n\geq 1$, 2. (ii) an alphabet $\Gamma$ of _pushdown symbols_ , 3. (iii) a _pushdown transition matrix_ $M\in({S^{\prime}}^{n\times n})^{\Gamma^{*}\times\Gamma^{*}}$, 4. (iv) an _initial state vector_ $I\in{S^{\prime}}^{1\times n}$, 5. (v) a _final state vector_ $P\in{S^{\prime}}^{n\times 1}$, 6. (vi) an _initial pushdown symbol_ $p_{0}\in\Gamma$, 7. (vii) a set of _repeated states_ $\\{1,\dots,k\\}$, $0\leq k\leq n$. The definition of a pushdown transition matrix is given at the beginning of Section 3. (See also Kuich, Salomaa [16], Kuich [15] and Ésik, Kuich [11].) Clearly, any roc-matrix is a pushdown transition matrix. The _behavior of $\mathcal{P}$_ is an element of $S\times V$ and is defined by $\|\mathcal{P}\|=(I(M^{*})_{p_{0},\varepsilon}P,I(M^{\omega,k})_{p_{0}})\,.$ Here $I(M^{*})_{p_{0},\varepsilon}P$ is the behavior of the $S^{\prime}$-$\omega$-pushdown automaton $\mathcal{P}_{1}=(n,\Gamma,I,M,P,p_{0},0)$ and $I(M^{\omega,k})_{p_{0}}$ is the behavior of the $S^{\prime}$-$\omega$-pushdown automaton $\mathcal{P}_{2}=(n,\Gamma,I,M,0,p_{0},k)$. Observe that $\mathcal{P}_{2}$ is an automaton with the Büchi acceptance condition: if $G$ is the graph with adjacency matrix $M$, then only paths that visit the repeated states ${1,\dots,k}$ infinitely often contribute to $\|\mathcal{P}_{2}\|$. Furthermore, $\mathcal{P}_{1}$ contains no repeated states and behaves like an ordinary $S^{\prime}$-pushdown automaton. An $S^{\prime}$-$\omega$-roc automaton is an $S^{\prime}$-$\omega$-pushdown automaton with just one pushdown symbol such that its pushdown matrix is a roc-matrix. In the sequel, an $S^{\prime}$-$\omega$-roc automaton $\mathcal{P}=(n,\\{p\\},I,M,P,p,k)$ is denoted by $\mathcal{C}=(n,I,M,P,k)$ with behavior $\|\mathcal{C}\|=(I(M^{*})_{p,\varepsilon}P,I(M^{\omega,k})_{p})\,.$ ###### Remark 9. Consider an $S^{\prime}$-$\omega$-pushdown automaton $\mathcal{P}$ with just one pushdown symbol. By the construction in the proof of Theorem 13.28 of Kuich, Salomaa [16], an $S^{\prime}$-$\omega$-roc automaton $\mathcal{C}$ can be constructed such that $\|\mathcal{C}\|=\|\mathcal{P}\|$. 
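For illustration only (this sketch is not part of the formal development and only treats the Boolean case $S=\mathbb{B}$, i.e., pure reachability without weights or input words), the finite part $I(M^{*})_{p,\varepsilon}P$ of the behavior can be computed by iterating the map $x\mapsto M_{p,p^{2}}xx+M_{p,p}x+M_{p,\varepsilon}$ over Boolean $n\times n$ matrices, whose least fixed point is $(M^{*})_{p,\varepsilon}$ by the algebraic system (2) and Theorem 10 below; entry $(i,j)$ of the result is $1$ iff the counter can be emptied while moving from state $i$ to state $j$. The helper names below are ours.

```python
import numpy as np

def bmul(X, Y):
    """Boolean matrix product over the semiring B."""
    return (X.astype(int) @ Y.astype(int)) > 0

def roc_star_block(A, C, B):
    """Least solution of x = A x x + C x + B over Boolean n x n matrices,
    where A = M_{p,p^2}, C = M_{p,p}, B = M_{p,eps}."""
    A, C, B = (np.asarray(m, dtype=bool) for m in (A, C, B))
    x = np.zeros_like(B)
    while True:
        new = bmul(A, bmul(x, x)) | bmul(C, x) | B
        if np.array_equal(new, x):
            return x
        x = new

# A 2-state example: state 1 may push p while staying in state 1, and may pop p moving to state 2.
A = [[1, 0], [0, 0]]   # M_{p,p^2}
C = [[0, 0], [0, 0]]   # M_{p,p}
B = [[0, 1], [0, 0]]   # M_{p,eps}
X = roc_star_block(A, C, B)                 # (M^*)_{p,eps}
I, P = np.array([1, 0]), np.array([0, 1])
print(bool(I @ X.astype(int) @ P))          # I (M^*)_{p,eps} P over B: True
```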
The next definitions and results are taken from Ésik, Kuich [11, Section 5.6] For the definition of an $S^{\prime}$-algebraic system over a quemiring $S\times V$ we refer the reader to [11], page 136, and for the definition of quemirings to [11], page 110. Here we note that a quemiring $T$ is isomorphic to a quemiring $S\times V$ determined by the semiring-semimodule pair $(S,V)$, cf. [11], page 110. Observe that the forthcoming system (1) is a system over the quemiring $S^{n\times n}\times V^{n}$. Compare the forthcoming algebraic system (2) with the algebraic systems occurring in the proofs of Theorem 14.15 of Kuich, Salomaa [16] and Theorem 6.4 of Kuich [15], both in the case of a roc-matrix. Let $M$ be a roc-matrix. Consider the ${S^{\prime}}^{n\times n}$-algebraic system over the complete semiring-semimodule pair $(S^{n\times n},V^{n})$ $y\ =\ M_{p,p^{2}}yy+M_{p,p}y+M_{p,\varepsilon}\,.$ (1) Then by Theorem 5.6.1 of Ésik, Kuich [11] $(A,U)\in(S^{n\times n},V^{n})$ is a solution of (1) iff $A$ is a solution of the ${S^{\prime}}^{n\times n}$-algebraic system over $S^{n\times n}$ $x=M_{p,p^{2}}xx+M_{p,p}x+M_{p,\varepsilon}$ (2) and $U$ is a solution of the $S^{n\times n}$-linear system over $V^{n}$ $z=M_{p,p^{2}}z+M_{p,p^{2}}Az+M_{p,p}z\,.$ (3) ###### Theorem 10. Let $S$ be a complete starsemiring and $M$ be a roc-matrix. Then $(M^{*})_{p,\varepsilon}$ is a solution of the ${S^{\prime}}^{n\times n}$-algebraic system (2). If $S$ is a continuous starsemiring, then $(M^{*})_{p,\varepsilon}$ is the least solution of (2). ###### Proof 4.1. We obtain, by Theorem 1 $\displaystyle M_{p,p^{2}}(M^{*})_{p,\varepsilon}(M^{*})_{p,\varepsilon}+M_{p,p}(M^{*})_{p,\varepsilon}+M_{p,\varepsilon}$ $\displaystyle=\ $ $\displaystyle M_{p,p^{2}}(M^{*})_{p^{2},\varepsilon}+M_{p,p}(M^{*})_{p,\varepsilon}+M_{p,\varepsilon}$ $\displaystyle=\ $ $\displaystyle(MM^{*})_{p,\varepsilon}=(M^{+})_{p,\varepsilon}=(M^{*})_{p,\varepsilon}\,.$ This proves the first sentence of our theorem. The second sentence of Theorem 10 is proved by Theorem 6.4 of Kuich [15]. ###### Theorem 11. Let $(S,V)$ be a complete semiring-semimodule pair and $M$ be a roc-matrix. Then $((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p})\,,$ is a solution of the ${S^{\prime}}^{n\times n}$-algebraic system (1), for each $0\leq k\leq n$. ###### Proof 4.2. Let $0\leq k\leq n$, and consider the $S^{n\times n}$-linear system $z=(M_{p,p^{2}}+M_{p,p^{2}}(M^{*})_{p,\varepsilon}+M_{p,p})z\,.$ By Corollary 8, $(M^{\omega,k})_{p}$ is a solution of this system. Hence, by Theorem 5.6.1 of Ésik, Kuich [11] (see the remark above) and Theorem 10, $((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p})$ is a solution of the system (1). Observe that, if $S$ is a continuous semiring, then the $S^{n\times n}$-linear system in the proof of Theorem 11 is in fact an $\mathfrak{Alg}({S^{\prime}})^{n\times n}$-linear system (see Kuich [15, p. 623]). ###### Theorem 12. Let $(S,V)$ be a complete semiring-semimodule pair and let $\mathcal{C}=(n,I,M,P,k)$ be an $S^{\prime}$-$\omega$-roc-automaton. Then $(\|\mathcal{C}\|,((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p}))$ is a solution of the ${S^{\prime}}^{n\times n}$-algebraic system $y_{0}=IyP\ ,\quad y=M_{p,p^{2}}yy+M_{p,p}y+M_{p,\varepsilon}$ (4) over the complete semiring-semimodule pair $(S^{n\times n},V^{n})$. ###### Proof 4.3. By Theorem 11, $((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p})$ is a solution of the second equation. 
Since $I((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p})P=(I(M^{*})_{p,\varepsilon}P,I(M^{\omega,k})_{p})=\|\mathcal{C}\|\,,$ $(\|\mathcal{C}\|,((M^{*})_{p,\varepsilon},(M^{\omega,k})_{p}))$ is a solution of the given ${S^{\prime}}^{n\times n}$-algebraic system. Let now $S$ be a complete star-omega semiring and $\Sigma$ be an alphabet. Then by Theorem 5.5.5 of Ésik, Kuich [11], $(S\ll\\!\Sigma^{*}\\!\gg,S\ll\\!\Sigma^{\omega}\\!\gg)$ is a complete semiring-semimodule pair. Let $\mathcal{C}\\!=\\!(n,I,M,P,k)$ be an $S\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton. Consider the algebraic system (4) over the complete semiring-semimodule pair ${((S\ll\\!\Sigma^{*}\\!\gg)^{n\times n},}\allowbreak{(S\ll\\!\Sigma^{\omega}\\!\gg)^{n})}$ and the mixed algebraic system (5) over ${((S\ll\Sigma^{*}\gg)^{n\times n},}$ ${(S\ll\Sigma^{\omega}\gg)^{n})}$ induced by (4) $\displaystyle x_{0}$ $\displaystyle=IxP,\qquad$ $\displaystyle x$ $\displaystyle=M_{p,p^{2}}xx+M_{p,p}x+M_{p,\varepsilon}\,,$ (5) $\displaystyle z_{0}$ $\displaystyle=Iz,$ $\displaystyle z$ $\displaystyle=M_{p,p^{2}}z+M_{p,p^{2}}xz+M_{p,p}z\,.$ Then, by Theorem 12, $(I(M^{*})_{p,\varepsilon}P,(M^{*})_{p,\varepsilon},I(M^{\omega,k})_{p},(M^{\omega,k})_{p}),\ 0\leq k\leq n,$ is a solution of (5). It is called _solution of order $k$_. Hence, we have proved the next theorem. ###### Theorem 13. Let $S$ be a complete star-omega semiring and $\mathcal{C}=(n,I,M,P,k)$ be an $S\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton. Then $(I(M^{*})_{p,\varepsilon}P,(M^{*})_{p,\varepsilon},I(M^{\omega,k})_{p},(M^{\omega,k})_{p}),\ 0\leq k\leq n$, is a solution of the mixed algebraic system (5). ∎ Let now in (5) $x=([i,p,j])_{1\leq i,j\leq n}$ be an $n\times n$-matrix of variables and $z=([i,p])_{1\leq i\leq n}$ be an $n$-dimensional column vector of variables. If we write the mixed algebraic system (5) component-wise, we obtain a mixed algebraic system over $({(S\ll\Sigma^{*}\gg),}{(S\ll\Sigma^{\omega}\gg)})$ with variables $[i,p,j]$ over $S\ll\Sigma^{*}\gg$, where $1\leq i,j\leq n$, and variables $[i,p]$ over $S\ll\Sigma^{\omega}\gg$, where $1\leq i\leq n$. Observe that we do not really need $p$ in the notation of the variables. But we want to save the form of the triple construction in connection with pushdown automata. Let $M_{p,p^{2}}=(a_{ij})_{1\leq i,j,\leq n},M_{p,p}=(c_{ij})_{1\leq i,j,\leq n},M_{p,\varepsilon}=(b_{ij})_{1\leq i,j,\leq n}$ and write (5) with the matrices $x$ and $z$ of variables component-wise then we obtain: $\displaystyle x_{0}$ $\displaystyle=$ $\displaystyle\sum_{1\leq m_{1},m_{2}\leq n}I_{m_{1}}[m_{1},p,m_{2}]P_{m_{2}}$ (6) $\displaystyle[i,p,j]$ $\displaystyle=$ $\displaystyle\sum_{1\leq m_{1},m_{2}\leq n}a_{im_{1}}[m_{1},p,m_{2}][m_{2},p,j]+$ $\displaystyle\sum_{1\leq m\leq n}c_{im}[m,p,j]+b_{ij}$ $\displaystyle z_{0}$ $\displaystyle=$ $\displaystyle\sum_{1\leq m\leq n}I_{m}[m,p]$ $\displaystyle[i,p]$ $\displaystyle=$ $\displaystyle\sum_{1\leq m\leq n}a_{im}[m,p]+\sum_{1\leq m_{1},m_{2}\leq n}a_{im_{1}}[m_{1},p,m_{2}][m_{2},p]+$ $\displaystyle\sum_{1\leq m\leq n}c_{im}[m,p]$ for all $1\leq i,j\leq n$. ###### Theorem 14. Let $S$ be a complete star-omega semiring and $\mathcal{C}=(n,I,M,P,k)$ be an $S\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton. Then $(\sigma_{0},((M^{*})_{p,\varepsilon})_{ij},\tau_{0},(M^{\omega,k})_{p})$ is a solution of the system (6) with $\|\mathcal{C}\|=(\sigma_{0},\tau_{0})$ . ###### Proof 4.4. By Theorem 13. ## 5\. 
Mixed algebraic systems and mixed context-free grammars In this section we associate a mixed context-free grammar with finite and infinite derivations to the algebraic system (6). The language generated by this mixed context-free grammar is then the behavior $\|\mathcal{C}\|$ of the $\omega$-roc automaton $\mathcal{C}$. The construction of the mixed context- free grammar from the $\omega$-roc automaton $\mathcal{C}$ is a generalization of the well-known triple construction in case of roc automata and is called now _triple-pair construction for $\omega$-roc automata_. We will consider the commutative complete star-omega semirings $\mathbb{B}=(\\{0,1\\},\vee,\land,*,0,1)$ with $0^{*}=1^{*}=1$ and $\mathbb{N}^{\infty}=(\mathbb{N}\cup\\{\infty\\},+,\cdot,^{*},0,1)$ with $0^{*}=1$ and $a^{*}=\infty$ for $a\neq\infty$. If $S=\mathbb{B}$ or $S=\mathbb{N}^{\infty}$ and $1\leq k\leq n$, then we associate to the mixed algebraic system (6) over $((S\ll\Sigma^{*}\gg),(S\ll\Sigma^{\omega}\gg))$ the _mixed context-free grammar_ $G_{k}\ =\ (X,Z,\Sigma,P_{X},P_{Z},x_{0},z_{0},k)\,.$ (See also Ésik, Kuich [11, page 139].) Here 1. (i) $X=\\{x_{0}\\}\cup\\{[i,p,j]\mid 1\leq i,j\leq n\\}$ is a set of _variables for finite derivations_ ; 2. (ii) $Z=\\{z_{0}\\}\cup\\{[i,p]\mid 1\leq i\leq n\\}$ is a set of _variables for infinite derivations_ ; 3. (iii) $\Sigma$ is an alphabet of _terminal symbols_ ; 4. (iv) $P_{X}$ is a finite set of _productions for finite derivations_ given below; 5. (v) $P_{Z}$ is a finite set of _productions for infinite derivations_ given below; 6. (vi) $x_{0}$ is the _start variable for finite derivations_ ; 7. (vii) $z_{0}$ is the _start variable for infinite derivations_ ; 8. (viii) $\\{[i,p]\mid 1\leq i\leq k\\}$ is the set of _repeated variables for infinite derivations_. In the definition of $G_{k}$ the sets $P_{X}$ and $P_{Z}$ are as follows: $\displaystyle P_{X}=\ $ $\displaystyle\\{x_{0}\to a[m_{1},p,m_{2}]b\mid$ $\displaystyle\ \ (I_{m_{1}},a)\cdot(P_{m_{2}},b)\neq 0,a,b\in\Sigma\cup\\{\varepsilon\\},1\leq m_{1},m_{2}\leq n\\}\ \cup$ $\displaystyle\\{[i,p,j]\to a[m_{1},p,m_{2}][m_{2},p,j]\mid$ $\displaystyle\ \ (a_{im_{1}},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,j,m_{1},m_{2}\leq n\\}\ \cup$ $\displaystyle\\{[i,p,j]\to a[m,p,j]\mid(c_{im},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,j,m\leq n\\}\ \cup$ $\displaystyle\\{[i,p,j]\to a\mid(b_{ij},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,j\leq n\\}\,,$ $\displaystyle P_{Z}=\ $ $\displaystyle\\{z_{0}\to a[m,p]\mid(I_{m},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq m\leq n\\}\ \cup$ $\displaystyle\\{[i,p]\to a[m,p]\mid(a_{im},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,m\leq n\\}\ \cup$ $\displaystyle\\{[i,p]\to a[m_{1},p,m_{2}][m_{2},p]\mid$ $\displaystyle\ \ (a_{im_{1}},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,m_{1},m_{2}\leq n\\}\ \cup$ $\displaystyle\\{[i,p]\to a[m,p]\mid(c_{im},a)\neq 0,a\in\Sigma\cup\\{\varepsilon\\},1\leq i,m\leq n\\}\,.$ A _finite leftmost derivation_ $\alpha_{1}\Rightarrow_{\\!L}^{*}\alpha_{2}$, where $\alpha_{1},\alpha_{2}\in(X\cup\Sigma)^{*}$, by productions in $P_{X}$ is defined as usual. 
An _infinite (leftmost) derivation_ $\pi:z_{0}\Rightarrow_{\\!L}^{\omega}w$, for $z_{0}\in Z,w\in\Sigma^{\omega}$, is defined as follows: $\displaystyle\pi:\ $ $\displaystyle z_{0}\Rightarrow_{\\!L}\alpha_{0}[{i_{0}},p]\Rightarrow_{\\!L}^{*}w_{0}[i_{0},p]\Rightarrow_{\\!L}w_{0}\alpha_{1}[i_{1},p]\Rightarrow_{\\!L}^{*}w_{0}w_{1}[{i_{1}},p]\Rightarrow_{\\!L}\dots$ $\displaystyle\Rightarrow_{\\!L}^{*}w_{0}w_{1}\dots w_{m}[{i_{m}},p]\Rightarrow_{\\!L}w_{0}w_{1}\dots w_{m}\alpha_{m+1}[{i_{m+1}},p]\Rightarrow_{\\!L}^{*}\dots\,,$ where $z_{0}\to\alpha_{0}[{i_{0}},p],[{i_{0}},p]\to\alpha_{1}[{i_{1}},p],\dots,[{i_{m}},p]\to\alpha_{m+1}[{i_{m+1}},p],\dots$ are productions in $P_{Z}$ and $w=w_{0}w_{1}\dots w_{m}\dots$. We now define an infinite derivation $\pi_{k}:z_{0}\Rightarrow_{\\!L}^{\omega,k}w$ for $0\leq k\leq n$, $z_{0}\in Z$, $w\in\Sigma^{\omega}$: We take the above definition $\pi:z_{0}\Rightarrow_{\\!L}^{\omega}w$ and consider the sequence of the first elements of the variables of $X$ that are rewritten in the finite leftmost derivation $\alpha_{m}\Rightarrow_{L}^{*}w_{m}$, $m\geq 0$. Assume this sequence is $i_{m}^{1},i_{m}^{2},\dots,i_{m}^{t_{m}}$ for some $t_{m}$, $m\geq 1$. Then, to obtain $\pi_{k}$ from $\pi$, the condition $i_{0},i_{1}^{1},\dots,i_{1}^{t_{1}},i_{1},i_{2}^{1},\dots,i_{2}^{t_{2}},i_{2},\dots,i_{m},i_{m+1}^{1},\dots,i_{m+1}^{t_{m+1}},i_{m+1},\dots\in P_{k}$ has to be satisfied. Then $L(G_{k})=\\{w\in\Sigma^{*}\mid x_{0}\Rightarrow_{\\!L}^{*}w\\}\ \cup\ \\{w\in\Sigma^{\omega}\mid\pi:z_{0}\Rightarrow_{\\!L}^{\omega,k}w\\}\,.$ Observe that the construction of $G_{k}$ from $\mathcal{C}$ is nothing else than a generalization of the triple construction in the case of a roc automaton, if $\mathcal{C}$ is viewed as a pushdown automaton, since the construction of the context-free grammar $G=(X,\Sigma,P_{X},x_{0})$ is the triple construction. (See Harrison [14], Theorem 5.4.3; Bucher, Maurer [3], Sätze 2.3.10, 2.3.30; Kuich, Salomaa [16], pages 178, 306; Kuich [15], page 642; Ésik, Kuich [11], pages 77, 78.) We call the construction of the mixed context-free grammar $G_{k}$, for $0\leq k\leq n$, from $\mathcal{C}$ the _triple-pair construction for $\omega$-roc automata_. This is justified by the definition of the sets of variables $\\{[i,p,j]\mid 1\leq i,j\leq n\\}$ and $\\{[i,p]\mid 1\leq i\leq n\\}$ of $G_{k}$ and by the forthcoming Corollary 16. In the next theorem we use the isomorphism between ${\mathbb{B}\ll\Sigma^{*}\gg}\times{\mathbb{B}\ll\Sigma^{\omega}\gg}$ and $2^{\Sigma^{*}}\times 2^{\Sigma^{\omega}}$. ###### Theorem 15. Assume that $(\sigma,\tau)$ is the solution of order $k$ of the mixed algebraic system (6) over $(\mathbb{B}\ll\Sigma^{*}\gg,\mathbb{B}\ll\Sigma^{\omega}\gg)$ for $k\in\\{0,\dots,n\\}$. Then $L(G_{k})\ =\ \sigma_{x_{0}}\cup\tau_{z_{0}}\,.$ ###### Proof 5.1. By the corresponding theorem of Salomaa, Soittola [18] and by Theorem 14, we obtain $\sigma_{x_{0}}=\\{w\in\Sigma^{*}\mid x_{0}\Rightarrow_{\\!L}^{*}w\\}$. We now show that $\tau_{z_{0}}$ is generated by the infinite derivations $\Rightarrow_{\\!L}^{\omega,k}$ from $z_{0}$. First observe that the rewriting by the typical $[i,p,j]$\- and $[i,p]$\- production corresponds to the situation that in the graph of the $\omega$-restricted one counter automaton $\mathcal{C}$ the edge from $(p\rho,i)$ to $(pp\rho,j),(p\rho,j)$ or $(\rho,j)$, $\rho=p^{t}$ for some $t\geq 0$ is passed after the state $i$ is visited.
The first step of the infinite derivation $\pi_{k}$ is given by $z_{0}\Rightarrow_{\\!L}\alpha_{0}[i_{0},p]$ and indicates that the path in the graph of $\mathcal{C}$ corresponding to $\pi_{k}$ starts in state $i_{0}$. Furthermore, the sequence of the first elements of variables that are rewritten in $\pi_{k}$, i.e., $i_{0},i_{1}^{1},\dots,i_{1}^{t_{1}},i_{1},i_{2}^{1},\dots,i_{2}^{t_{2}},i_{2},\dots,i_{m},i_{m+1}^{1},\dots,i_{m+1}^{t_{m+1}},i_{m+1},\dots$ indicates that the path in the graph of $\mathcal{C}$ corresponding to $\pi_{k}$ visits these states. Since this sequence is in $P_{k}$, the corresponding path contributes to $\|\mathcal{C}\|$. Hence, by Theorem 14 we obtain $\tau_{z_{0}}=\\{w\in\Sigma^{\omega}\mid\pi:z_{0}\Rightarrow_{\\!L}^{\omega,k}w\\}\,.$ ###### Corollary 16. Assume that, for some $k\in\\{0,\dots,n\\}$, the mixed context-free grammar $G_{k}$ associated to the mixed algebraic system (6) is constructed from the $\mathbb{B}\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton $\mathcal{C}$. Then $L(G_{k})=\|\mathcal{C}\|\,.$ ###### Proof 5.2. By Theorems 14 and 15. For the remainder of this section our basic semiring is $\mathbb{N}^{\infty}$, which allows us to draw some stronger conclusions. ###### Theorem 17. Assume that $(\sigma,\tau)$ is the solution of order $k$ of the mixed algebraic system (6) over $(\mathbb{N}^{\infty}\ll\Sigma^{*}\gg,\mathbb{N}^{\infty}\ll\Sigma^{\omega}\gg)$, $k\in\\{0,\dots,n\\}$, where $I_{m_{1}},P_{m_{1}},a_{m_{1}m_{2}},b_{m_{1}m_{2}},c_{m_{1}m_{2}}$, $1\leq m_{1},m_{2}\leq n$ are in $\\{0,1\\}\langle\Sigma\cup\\{\varepsilon\\}\rangle$. Denote by $d(w)$, for $w\in\Sigma^{*}$, the number (possibly $\infty$) of distinct finite leftmost derivations of $w$ from $x_{0}$ with respect to $G_{k}$; and by $c(w)$, for $w\in\Sigma^{\omega}$, the number (possibly $\infty$) of distinct infinite leftmost derivations $\pi$ of $w$ from $z_{0}$ with respect to $G_{k}$. Then $\sigma_{x_{0}}=\sum_{w\in\Sigma^{*}}d(w)w\qquad\text{\ and \ }\qquad\tau_{z_{0}}=\sum_{w\in\Sigma^{\omega}}c(w)w\,.$ ###### Proof 5.3. By the corresponding theorem of Salomaa, Soittola [18], Theorems 5.5.9 and 5.6.3 of Ésik, Kuich [11], and Theorem 14. In the forthcoming Corollary 18 we consider, for a given $\\{0,1\\}\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton $\mathcal{C}=(n,I,M,P,k)$, the number of distinct computations from an initial instantaneous description $(i,w,p)$ for $w\in\Sigma^{*}$, $I_{i}\neq 0$, to an accepting instantaneous description $(j,\varepsilon,\varepsilon)$, with $P_{j}\neq 0$, $i,j\in\\{1,\dots,n\\}$. Here $(i,w,p)$ means that $\mathcal{C}$ starts in the initial state $i$ with $w$ on its input tape and $p$ on its pushdown tape; and $(j,\varepsilon,\varepsilon)$ means that $\mathcal{C}$ has entered the final state $j$ with empty input tape and empty pushdown tape. Furthermore, we consider the number of distinct infinite computations starting in an initial instantaneous description $(i,w,p)$ for $w\in\Sigma^{\infty}$, $I_{i}\neq 0$. ###### Corollary 18. Assume that, for some $k\in\\{0,\dots,n\\}$, the mixed context-free grammar $G_{k}$ associated to the mixed algebraic system (6) is constructed from the $\\{0,1\\}\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton $\mathcal{C}$.
Then the number (possibly $\infty$) of distinct finite leftmost derivations of $w\in\Sigma^{*}$ from $x_{0}$ equals the number of distinct finite computations from an initial instantaneous description for $w$ to an accepting instantaneous description; moreover, the number (possibly $\infty$) of distinct infinite (leftmost) derivations of $w\in\Sigma^{\omega}$ from $z_{0}$ equals the number of distinct infinite computations starting in an initial instantaneous description for $w$. ###### Proof 5.4. By Corollary 3.4.12 of Ésik, Kuich [11, Theorem 4.3] and the definition of infinite derivations with respect to $G_{k}$. The context-free grammar $G_{k}$ associated to (6) is called _unambiguous_ if each $w\in L(G)$, $w\in\Sigma^{*}$ has a unique finite leftmost derivation and each $w\in L(G)$, $w\in\Sigma^{\omega}$, has a unique infinite (leftmost) derivation. An $\mathbb{N}^{\infty}\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton $\mathcal{C}$ is called _unambiguous_ if $(\|\mathcal{C}\|,w)\in\\{0,1\\}$ for each $w\in\Sigma^{*}\cup\Sigma^{\omega}$. ###### Corollary 19. Assume that, for some $k\in\\{0,\dots,n\\}$, the mixed context-free grammar $G_{k}$ associated to the mixed algebraic system (6) is constructed from the $\\{0,1\\}\langle\Sigma\cup\\{\varepsilon\\}\rangle$-$\omega$-roc automaton $\mathcal{C}$. Then $G_{k}$ is unambiguous iff $\|\mathcal{C}\|$ is unambiguous. In the forthcoming paper Droste, Ésik, Kuich [5] we extend the results of this paper to weighted $\omega$-pushdown automata and obtain the triple-pair construction for them. In the classical theory this triple-pair constructions extends the well-known triple construction that, given an $\omega$-pushdown automaton, yields an equivalent context-free grammar. ## Acknowledgment The ideas of and personal discussions with Zoltán Ésik were of great influence in preparing this paper. Thanks are due to two unknown referees for their helpful remarks. ## References * [1] Berstel, J.: Transductions and Context-Free Languages. Teubner, 1979. * [2] Bloom, S. L., Ésik, Z.: Iteration Theories. EATCS Monographs on Theoretical Computer Science. Springer, 1993. * [3] Bucher, W., Maurer, H.: Theoretische Grundlagen der Programmiersprachen. B. I. Wissenschaftsverlag, 1984. * [4] Conway, J. H.: Regular Algebra and Finite Machines. Chapman & Hall, 1971\. * [5] Droste, M., Ésik, Z., Kuich, W.: The triple-pair construction for weighted $\omega$-pushdown automata. In: Automata and Formal Languages (AFL 2017), EPTCS (2017) 101-113. * [6] Eilenberg, S.: Automata, Languages and Machines. Vol. A. Academic Press, 1974. * [7] Ésik, Z., Kuich, W.: A semiring-semimodule generalization of $\omega$-context-free languages. In: Theory is Forever (Eds.: J. Karhumäki, H. Maurer, G. Paun, G. Rozenberg), LNCS 3113, Springer, 2004, 68–80. * [8] Ésik, Z., Kuich, W.: A semiring-semimodule generalization of $\omega$-regular languages II. Journal of Automata, Languages and Combinatorics 10 (2005) 243–264. * [9] Ésik, Z., Kuich, W.: On iteration semiring-semimodule pairs. Semigroup Forum 75 (2007), 129–159. * [10] Ésik, Z., Kuich, W.: A semiring-semimodule generalization of transducers and abstract $\omega$-families of power series. Journal of Automata, Languages and Combinatorics, 12 (2007), 435–454. * [11] Ésik, Z., Kuich, W.: Modern Automata Theory. http://www.dmg.tuwien.ac.at/kuich * [12] Ésik, Z., Kuich, W.: Continuous semiring-semimodule pairs and mixed algebraic systems. Acta Cybernetica 252 (2017) 43-59. * [13] Greibach S. 
A.: An infinite hierarchy of context-free languages. Journal of the ACM 16 (1969) 91–106. * [14] Harrison, M. A.: Introduction to Formal Language Theory. Addison-Wesley, 1978. * [15] Kuich, W.: Semirings and formal power series: Their relevance to formal languages and automata theory. In: Handbook of Formal Languages (Eds.: G. Rozenberg and A. Salomaa), Springer, 1997, Vol. 1, Chapter 9, 609–677. * [16] Kuich, W., Salomaa, A.: Semirings, Automata, Languages. EATCS Monographs on Theoretical Computer Science, Vol. 5. Springer, 1986. * [17] Perrin, D., Pin, J. - E.: Infinite Words – Automata, Semigroups, Logic and Games, Elsevier, 2004. * [18] Salomaa, A., Soittola, M.: Automata - Theoretic Aspects of Formal Power Series, Springer, 1978.
# Universal cutoff for Dyson Ornstein Uhlenbeck process Jeanne Boursier (JB) Université Paris-Dauphine, PSL University, UMR 7534, CNRS, CEREMADE, 75016 Paris, France<EMAIL_ADDRESS>, Djalil Chafaï (DC) École Normale Supérieure, UMR 8553, CNRS, DMA, 75005 Paris, France & Université Paris-Dauphine, PSL University, UMR 7534, CNRS, CEREMADE, 75016 Paris, France<EMAIL_ADDRESS>https://djalil.chafai.net/ and Cyril Labbé (CL) Université de Paris, Laboratoire de Probabilités, Statistiques et Modélisation, UMR 8001, F-75205 Paris, France<EMAIL_ADDRESS> (Date: Summer 2021, revised Winter 2022, revised Summer 2022, compiled ) ###### Abstract. We study the Dyson–Ornstein–Uhlenbeck diffusion process, an evolving gas of interacting particles. Its invariant law is the beta Hermite ensemble of random matrix theory, a non-product log-concave distribution. We explore the convergence to equilibrium of this process for various distances or divergences, including total variation, relative entropy, and transportation cost. When the number of particles is sent to infinity, we show that a cutoff phenomenon occurs: the distance to equilibrium vanishes abruptly at a critical time. A remarkable feature is that this critical time is independent of the parameter beta that controls the strength of the interaction, in particular the result is identical in the non-interacting case, which is nothing but the Ornstein–Uhlenbeck process. We also provide a complete analysis of the non-interacting case that reveals some new phenomena. Our work relies among other ingredients on convexity and functional inequalities, exact solvability, exact Gaussian formulas, coupling arguments, stochastic calculus, variational formulas and contraction properties. This work leads, beyond the specific process that we study, to questions on the high-dimensional analysis of heat kernels of curved diffusions. ###### Key words and phrases: Dyson process; Ornstein–Uhlenbeck process; Coulomb gas; Random Matrix Theory; High dimensional phenomenon; Cutoff phenomenon; High-dimensional probability; Functional inequalities; Spectral analysis; Stochastic Calculus; Gaussian analysis; Markov process; Diffusion process; Interacting Particle System ###### 2000 Mathematics Subject Classification: 60J60 (Diffusion processes); 82C22 (Interacting particle systems) ###### Contents 1. 1 Introduction and main results 2. 2 Additional comments and open problems 3. 3 Cutoff phenomenon for the OU 4. 4 General exactly solvable aspects 5. 5 The random matrix cases 6. 6 Cutoff phenomenon for the DOU in TV and Hellinger 7. 7 Cutoff phenomenon for the DOU in Wasserstein 8. A Distances and divergences 9. B Convexity and its dynamical consequences ## 1\. Introduction and main results Let us consider a Markov process $X={(X_{t})}_{t\geq 0}$ with state space $S$ and invariant law $\mu$ for which $\lim_{t\to\infty}\mathrm{dist}(\mathrm{Law}(X_{t})\mid\mu)=0$ where $\mathrm{dist}(\cdot\mid\cdot)$ is a distance or divergence on the probability measures on $S$. Suppose now that $X=X^{n}$ depends on a dimension, size, or complexity parameter $n$, and let us set $S=S^{n}$, $\mu=\mu^{n}$, and $X_{0}=x^{n}_{0}\in S^{n}$. For example $X^{n}$ can be a random walk on the symmetric group of permutations of $\\{1,\ldots,n\\}$, Brownian motion on the group of $n\times n$ unitary matrices, Brownian motion on the $n$-dimensional sphere, etc.
In many of such examples, it has been proved that when $n$ is large enough, the supremum over some set of initial conditions $x^{n}_{0}$ of the quantity $\mathrm{dist}(\mathrm{Law}(X_{t}^{n})\mid\mu^{n})$ collapses abruptly to $0$ when $t$ passes a critical value $c=c_{n}$ which may depend on $n$. This is often referred to as a _cutoff phenomenon_. More precisely, if $\mathrm{dist}$ ranges from $0$ to $\max$, then, for some subset $S^{n}_{0}\subset S^{n}$ of initial conditions, some critical value $c=c_{n}$ and for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\sup_{x^{n}_{0}\in S^{n}_{0}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t_{n}})\mid\mu^{n})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases}.$ It is standard to introduce, for an arbitrary small threshold $\eta>0$, the quantity $\inf\\{t\geq 0:\sup_{x_{0}\in S_{0}^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t})\mid\mu^{n})\leq\eta\\}$ known as the _mixing time_ in the literature. Of course such a definition fully makes sense as soon as $t\mapsto{\sup_{x_{0}\in S_{0}^{n}}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t})\mid\mu^{n})$ is non- increasing. When $S^{n}$ is finite, it is customary to take $S^{n}_{0}=S^{n}$. When $S^{n}$ is infinite, it may happen that the supremum over the whole set $S^{n}$ of the distance to equilibrium remains equal to $\max$ at all times, in which case one has to consider strict subspaces of initial conditions. For some processes, it is possible to restrict $S^{n}_{0}$ to a single state in which case one obtains a very precise description of the convergence to equilibrium starting from this initial condition. Note that the constraint over the initial condition can be made compatible with a limiting dynamics, for instance a mean-field limit when the process describes an exchangeable interacting particle system. The _cutoff phenomenon_ was put forward by Aldous and Diaconis at the origin for random walks on finite sets, see for instance [1, 28, 26, 52] and references therein. The analysis of the cutoff phenomenon is the subject of an important activity, still seeking for a complete theory: let us mention that, for the total variation distance, Peres proposed the so-called product condition (the mixing time must be much larger than the inverse of the spectral gap) as a necessary and sufficient condition for a cutoff phenomenon to hold, but counter-examples were exhibited [52, Sec. 18.3] and the product condition is only necessary. The study of the cutoff phenomenon for Markov diffusion processes goes back at least to the works of Saloff-Coste [62, 64] in relation notably with Nash–Sobolev type functional inequalities, heat kernel analysis, and Diaconis–Wilson probabilistic techniques. We also refer to the more recent work [55] for the case of diffusion processes on compact groups and symmetric spaces, in relation with group invariance and representation theory, a point of view inspired by the early works of Diaconis on Markov chains and of Saloff-Coste on diffusion processes. Even if most of the available results in the literature on the cutoff phenomenon are related to compact state spaces, there are some notable works devoted to non-compact spaces such as [47, 19, 10, 8, 9, 11]. Our contribution is an exploration of the cutoff phenomenon for the Dyson–Ornstein–Uhlenbeck diffusion process, for which the state space is $\mathbb{R}^{n}$. This process is an interacting particle system. 
When the interaction is turned off, we recover the Ornstein–Uhlenbeck process, a special case that has been considered previously in the literature but for which we also provide new results. ### 1.1. Distances As for $\mathrm{dist}$ we use several standard distances or divergences between probability measures: Wasserstein, total variation (TV), Hellinger, Entropy, $\chi^{2}$ and Fisher, surveyed in Appendix A. We take the following convention for probability measures $\mu$ and $\nu$ on the same space: $\mathrm{dist}(\mu\mid\nu)=\begin{cases}\mathrm{Wasserstein}(\mu,\nu)&\text{when $\mathrm{dist}=\mathrm{Wasserstein}$}\\\ \left\|\mu-\nu\right\|_{\mathrm{TV}}&\text{when $\mathrm{dist}=\mathrm{TV}$}\\\ \mathrm{Hellinger}(\mu,\nu)&\text{when $\mathrm{dist}=\mathrm{Hellinger}$}\\\ \mathrm{Kullback}(\mu\mid\nu)&\text{when $\mathrm{dist}=\mathrm{Kullback}$}\\\ \chi^{2}(\mu\mid\nu)&\text{when $\mathrm{dist}=\chi^{2}$}\\\ \mathrm{Fisher}(\mu\mid\nu)&\text{when $\mathrm{dist}=\mathrm{Fisher}$}\end{cases},$ (1.1) see Appendix A for precise definitions. The maximal value $\max$ taken by $\mathrm{dist}$ is given by $\max=\begin{cases}1&\text{ if }\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger}\\},\\\ +\infty&\text{ if }\mathrm{dist}\in\\{\mathrm{Wasserstein},\mathrm{Kullback},\chi^{2},\mathrm{Fisher}\\}.\end{cases}$ (1.2) ### 1.2. The Dyson–Ornstein–Uhlenbeck (DOU) process and preview of main results The DOU process is the solution $X^{n}={(X^{n}_{t})}_{t\geq 0}$ on $\mathbb{R}^{n}$ of the stochastic differential equation $X^{n}_{0}=x^{n}_{0}\in\mathbb{R}^{n},\quad\mathrm{d}X_{t}^{n,i}=\sqrt{\frac{2}{n}}\mathrm{d}B^{i}_{t}-V^{\prime}(X_{t}^{n,i})\mathrm{d}t+\frac{\beta}{n}\sum_{j\neq i}\frac{\mathrm{d}t}{X_{t}^{n,i}-X_{t}^{n,j}},\quad 1\leq i\leq n,$ (1.3) where ${(B_{t})}_{t\geq 0}$ is a standard $n$-dimensional Brownian motion (BM), and where * • $V(x)=\frac{x^{2}}{2}$ is a “confinement potential” acting through the drift $-V^{\prime}(x)=-x$ * • $\beta\geq 0$ is a parameter tuning the interaction strength. The notation $X^{n,i}_{t}$ stands for the $i$-th coordinate of the vector $X^{n}_{t}$. The process $X^{n}$ can be thought of as an interacting particle system of $n$ one-dimensional Brownian particles $X^{n,1},\ldots,X^{n,n}$, subject to confinement and singular pairwise repulsion when $\beta>0$ (respectively first and second term in the drift). We take an inverse temperature of order $n$ in (1.3) in order to obtain a mean-field limit without time-changing the process, see Section 2.5. The spectral gap is $1$ for all $n\geq 1$, see Section 2.6. We refer to Section 2.9 for other parametrizations or choices of inverse temperature. In the special cases $\beta\in\\{0,1,2\\}$, the cutoff phenomenon for the DOU process can be established by using Gaussian analysis and stochastic calculus, see Sections 1.4 and 1.5. For $\beta=0$, the process reduces to the Ornstein–Uhlenbeck process (OU) and its behavior serves as a benchmark for the interaction case $\beta\neq 0$, while when $\beta\in\\{1,2\\}$, the approach involves a lift to unitary invariant ensembles of random matrix theory. For a general $\beta\geq 1$, our main results regarding the cutoff phenomenon for the DOU process are given in Sections 1.6 and 1.7. 
We are able, in particular, to prove the following: for all $\mathrm{dist}\in\\{\mathrm{Wasserstein},\mathrm{TV},\mathrm{Hellinger}\\}$, $a>0$, $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\sup_{x^{n}_{0}\in[-a,a]^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t_{n}})\mid P_{n}^{\beta})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases},$ where $P_{n}^{\beta}$ is the invariant law of the process, and where $c_{n}:=\begin{cases}\log(\sqrt{n}a)&\text{ if $\mathrm{dist}=\mathrm{Wasserstein}$}\\\ \log(na)&\text{ if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger}\\}$}\end{cases}.$ This result is stated in a slightly more general form in Corollary 1.7. Our proof relies crucially on an exceptional exact solvability of the dynamics, notably the fact that we know explicitly the optimal long time behavior in entropy and coupling distance, as well as the eigenfunction associated to the spectral gap which turns out to be linear and optimal. This comes from the special choice of $V$ as well as the special properties of the Coulomb interaction. We stress that such an exact solvability is no longer available for a general strongly convex $V$, even for instance in the simple example $V(x)=\frac{x^{2}}{2}+x^{4}$ or for general linear forces. Nevertheless, and as usual, two other special classical choices of $V$ could be explored, related to Laguerre and Jacobi weights, see Section 2.8. ### 1.3. Analysis of the Dyson–Ornstein–Uhlenbeck process The process $X^{n}$ was essentially discovered by Dyson in [33], in the case $\beta\in\\{1,2,4\\}$, because it describes the dynamics of the eigenvalues of $n\times n$ symmetric/Hermitian/symplectic random matrices with independent Ornstein–Uhlenbeck entries, see Lemma 5.1 and Lemma 5.2 below for the cases $\beta=1$ and $\beta=2$ respectively. * • Case $\beta=0$ (interaction turned off). The particles become $n$ independent one-dimensional Ornstein–Uhlenbeck processes, and the DOU process $X^{n}$ becomes exactly the $n$-dimensional Ornstein–Uhlenbeck process $Z^{n}$ solving (1.8). The process lives in $\mathbb{R}^{n}$. The particles collide but since they do not interact, this does not raise any issue. * • Case $0<\beta<1$. Then with positive probability the particles collide producing a blow up of the drift, see for instance [22, 25] for a discussion. Nevertheless, it is possible to define the process for all times, for instance by adding a local time term to the stochastic differential equation, see [25] and references therein. It is natural to expect that the cutoff universality works as for $\beta\not\in(0,1)$, but for simplicity we do not consider this case here. * • Case $\beta\geq 1$. If we order the coordinates by defining the convex domain $D_{n}=\\{x\in\mathbb{R}^{n}:x_{1}<\cdots<x_{n}\\},$ and if $x^{n}_{0}\in D_{n}$ then the equation (1.3) admits a unique strong solution that never exits $D_{n}$, in other words the particles never collide and the order of the initial particles is preserved at all times, see [60]. Moreover if $\overline{D}_{n}=\\{x\in\mathbb{R}^{n}:x_{1}\leq\cdots\leq x_{n}\\}$ then it is possible to start the process from the boundary $\overline{D}_{n}\setminus D_{n}$, in particular from $x^{n}_{0}$ such that $x^{n,1}_{0}=\cdots=x^{n,n}_{0}$, and despite the singularity of the drift, it can be shown that with probability one, $X^{n}_{t}\in D_{n}$ for all $t>0$. We refer to [2, Th. 4.3.2] for a proof in the Dyson Brownian Motion case that can be adapted _mutatis mutandis_. 
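As a purely illustrative aside (ours, not from the paper), the dynamics (1.3) can be explored numerically for $\beta\geq 1$ with a naive Euler–Maruyama scheme. The singular repulsion is handled heuristically by a small time step and a re-sorting of the coordinates, so this sketch only approximates the non-colliding dynamics and is meant for quick experiments rather than as a faithful discretization.

```python
import numpy as np

def dou_step(x, dt, beta, rng):
    """One Euler-Maruyama step for (1.3) with V(x) = x^2/2."""
    n = x.size
    diff = x[:, None] - x[None, :]          # matrix of x_i - x_j
    np.fill_diagonal(diff, np.inf)          # drop the i = j term
    drift = -x + (beta / n) * np.sum(1.0 / diff, axis=1)
    noise = np.sqrt(2.0 / n) * np.sqrt(dt) * rng.standard_normal(n)
    return np.sort(x + dt * drift + noise)  # re-sorting is a heuristic guard

def simulate_dou(x0, t_max, dt=1e-4, beta=2.0, seed=0):
    """Naive simulation of the DOU process started from the ordered vector x0."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(x0, dtype=float))
    for _ in range(int(t_max / dt)):
        x = dou_step(x, dt, beta, rng)
    return x

# Example: n = 50 particles started from an ordered, well separated configuration.
x_T = simulate_dou(np.linspace(-1.0, 1.0, 50), t_max=3.0)
```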
In the sequel, we will only consider the cases $\beta=0$ with $x_{0}^{n}\in\mathbb{R}^{n}$ and $\beta\geq 1$ with $x^{n}_{0}\in\overline{D}_{n}$. The drift in (1.3) is the gradient of a function, and (1.3) rewrites $X_{0}^{n}=x_{0}^{n}\in D_{n},\quad\mathrm{d}X_{t}^{n}=\sqrt{\frac{2}{n}}\mathrm{d}B_{t}-\frac{1}{n}\nabla E(X_{t}^{n})\mathrm{d}t,$ (1.4) where $E(x_{1},\ldots,x_{n})=n\sum_{i=1}^{n}V(x_{i})+{\beta}\sum_{i>j}\log\frac{1}{|x_{i}-x_{j}|}$ (1.5) can be interpreted as the energy of the configuration of particles $x_{1},\ldots,x_{n}$. * • If $\beta=0$, then the Markov process $X^{n}$ is an Ornstein–Uhlenbeck process, irreducible with unique invariant law $P_{n}^{0}=\mathcal{N}(0,\frac{1}{n}I_{n})$ which is reversible. * • If $\beta\geq 1$, then the Markov process $X^{n}$ is not irreducible, but $D_{n}$ is a recurrent class carrying a unique invariant law $P_{n}^{\beta}$, which is reversible and given by $P_{n}^{\beta}=\frac{\mathrm{e}^{-E(x_{1},\ldots,x_{n})}}{C_{n}^{\beta}}\mathbf{1}_{(x_{1},\ldots,x_{n})\in\overline{D}_{n}}\mathrm{d}x_{1}\cdots\mathrm{d}x_{n},$ (1.6) where $C_{n}^{\beta}$ is the normalizing factor given by $C_{n}^{\beta}=\int_{\overline{D}_{n}}\mathrm{e}^{-E(x_{1},\ldots,x_{n})}\mathrm{d}x_{1}\cdots\mathrm{d}x_{n}.$ (1.7) In terms of geometry, it is crucial to observe that since $-\log$ is convex on $(0,+\infty)$, the map $(x_{1},\ldots,x_{n})\in D_{n}\mapsto\mathrm{Interaction}(x_{1},\ldots,x_{n})={\beta}\sum_{i>j}\log\frac{1}{x_{i}-x_{j}},$ is convex. Thus, since $V$ is convex on $\mathbb{R}$, it follows that $E$ is convex on $D_{n}$. For all $\beta\geq 0$, the law $P_{n}^{\beta}$ is log- concave with respect to the Lebesgue measure as well as with respect to $\mathcal{N}(0,\frac{1}{n}I_{n})$. ### 1.4. Non-interacting case and Ornstein–Uhlenbeck benchmark When we turn off the interaction by taking $\beta=0$ in (1.3), the DOU process becomes an Ornstein–Uhlenbeck process (OU) $Z^{n}={(Z^{n}_{t})}_{t\geq 0}$ on $\mathbb{R}^{n}$ solving the stochastic differential equation $Z^{n}_{0}=z^{n}_{0}\in\mathbb{R}^{n},\quad\mathrm{d}Z^{n}_{t}=\sqrt{\frac{2}{n}}\mathrm{d}B^{n}_{t}-Z^{n}_{t}\mathrm{d}t,$ (1.8) where $B^{n}$ is a standard $n$-dimensional BM. The invariant law of $Z^{n}$ is the product Gaussian law $P_{n}^{0}=\mathcal{N}(0,\frac{1}{n}I_{n})=\mathcal{N}(0,\frac{1}{n})^{\otimes n}$. The explicit Gaussian nature of $Z_{t}^{n}\sim\mathcal{N}(z_{0}^{n}\mathrm{e}^{-t},\frac{1-\mathrm{e}^{-2t}}{n}I_{n})$, valid for all $t\geq 0$, allows for a fine analysis of convergence to equilibrium, as in the following theorem. ###### Theorem 1.1 (Cutoff for OU: mean-field regime). Let $Z^{n}={(Z^{n}_{t})}_{t\geq 0}$ be the OU process (1.8) and let $P_{n}^{0}$ be its invariant law. Suppose that $\varliminf_{n\to\infty}\frac{|z^{n}_{0}|^{2}}{n}>0\quad\text{and}\quad\varlimsup_{n\to\infty}\frac{|z^{n}_{0}|^{2}}{n}<\infty$ where $|z|=\sqrt{z_{1}^{2}+\cdots+z_{n}^{2}}$ is the Euclidean norm. Then for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(Z^{n}_{t_{n}})\mid P_{n}^{0})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases}$ where $c_{n}=\begin{cases}\frac{1}{2}\log(n)&\text{if $\mathrm{dist}=\mathrm{Wasserstein}$},\\\ \log(n)&\text{if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$},\\\ \frac{3}{2}\log(n)&\text{if $\mathrm{dist}=\mathrm{Fisher}$}.\end{cases}$ Theorem 1.1 is proved in Section 3. See Figure 1 and Figure 2 for a numerical experiment. 
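Since $Z_{t}^{n}\sim\mathcal{N}(z_{0}^{n}\mathrm{e}^{-t},\frac{1-\mathrm{e}^{-2t}}{n}I_{n})$, the divergences appearing in Theorem 1.1 admit closed forms. As a sketch of such a numerical experiment (ours, not the one reported in Figures 1 and 2), the snippet below evaluates the standard Gaussian formula for $\mathrm{Kullback}(\mathrm{Law}(Z_{t}^{n})\mid P_{n}^{0})$ and makes the abrupt transition around $c_{n}=\log(n)$ visible in the mean-field regime.

```python
import numpy as np

def kl_ou(t, norm_z0_sq, n):
    """Kullback(Law(Z_t^n) | P_n^0) for the OU process (1.8), from the Gaussian
    formula applied to N(z0 e^{-t}, (1-e^{-2t})/n I_n) and N(0, I_n/n);
    norm_z0_sq stands for |z0|^2."""
    e = np.exp(-2.0 * t)
    return 0.5 * n * (e * (norm_z0_sq - 1.0) - np.log1p(-e))

# Mean-field regime of Theorem 1.1: |z0|^2 of order n, e.g. z0 = (1, ..., 1).
for n in [10**2, 10**3, 10**4]:
    c_n = np.log(n)
    values = [kl_ou(f * c_n, norm_z0_sq=float(n), n=n) for f in (0.8, 1.0, 1.2)]
    print(n, values)
# As n grows, the divergence blows up before c_n = log(n) and vanishes after it.
```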
Theorem 1.1 constitutes a very natural benchmark for the cutoff phenomenon for the DOU process. Theorem 1.1 is not a surprise, and actually the TV and Hellinger cases are already considered in [47], see also [7]. Let us mention that in [9], a cutoff phenomenon for TV, entropy and Wasserstein is proven for the OU process of _fixed_ dimension $d$ and vanishing noise. This is to be compared with our setting where the dimension is sent to infinity: the results (and their proofs) are essentially the same in these two situations, however we will see below that if one considers more general initial conditions, there are some substantial differences according to whether the dimension is fixed or sent to infinity. The restriction over the initial condition in Theorem 1.1 is spelled out in terms of the second moment of the empirical distribution, a natural choice suggested by the mean-field limit discussed in Section 2.5. It yields a mixing time of order $\log(n)$, just like for Brownian motion on compact Lie groups, see [64, 55]. For the OU process and more generally for overdamped Langevin processes, the non-compactness of the space is replaced by the confinement or tightness due to the drift. Actually, Theorem 1.1 is a particular instance of the following, much more general result that reveals that, except for the Wasserstein distance, a cutoff phenomenon _always_ occurs. ###### Theorem 1.2 (General cutoff for OU). Let $Z^{n}={(Z^{n}_{t})}_{t\geq 0}$ be the OU process (1.8) and let $P_{n}^{0}$ be its invariant law. Let $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2},\mathrm{Fisher}\\}$. Then, for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(Z^{n}_{t_{n}})\mid P_{n}^{0})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$,}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases}$ where $c_{n}=\begin{cases}\log(\sqrt{n}|z_{0}^{n}|)\vee\frac{1}{4}\log(n)&\text{if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$},\\\ \log(n|z_{0}^{n}|)\vee\frac{1}{2}\log(n)&\text{if $\mathrm{dist}=\mathrm{Fisher}$}.\end{cases}$ Regarding the Wasserstein distance, the following dichotomy occurs: * • if $\lim_{n\to\infty}|z_{0}^{n}|=+\infty$, then for all $\varepsilon\in(0,1)$, with $c_{n}=\log|z_{0}^{n}|$, $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(Z_{t_{n}}),P_{n}^{0})=\begin{cases}+\infty&\text{if $t_{n}=(1-\varepsilon)c_{n}$,}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$},\end{cases}$ * • if $\lim_{n\to\infty}|z_{0}^{n}|=\alpha\in[0,\infty)$ then there is _no cutoff phenomenon_ namely for any $t>0$ $\lim_{n\to\infty}\mathrm{Wasserstein}^{2}(\mathrm{Law}(Z_{t}),P_{n}^{0})=\alpha^{2}\mathrm{e}^{-2t}+2\Bigr{(}1-\sqrt{1-\mathrm{e}^{-2t}}-\tfrac{1}{2}\mathrm{e}^{-2t}\Bigr{)}.$ Theorem 1.2 is proved in Section 3. The observation that for every distance or divergence, except for the Wasserstein distance, a cutoff phenomenon occurs _generically_ seems to be new. Let us make a few comments. First, in terms of convergence to equilibrium the relevant observable in Theorem 1.2 appears to be the Euclidean norm $|z_{0}^{n}|$ of the initial condition. This quantity differs from the eigenfunction associated to the spectral gap of the generator, which is given by $z_{1}+\cdots+z_{n}$ as we will recall later on. Note by the way that (3.4) and (2.3) are equal! 
Second, cutoff occurs at a time that is _independent_ of the initial condition provided that its Euclidean norm is small enough: this cutoff time appears as the time required to regularize the initial condition (a Dirac mass) into a sufficiently spread out absolutely continuous probability measure; in particular this cutoff phenomenon would not hold generically if we allowed for spread out (non-Dirac) initial conditions. Note that, for the OU process of _fixed_ dimension and vanishing noise, we would not observe a cutoff phenomenon when starting from initial conditions with small enough Euclidean norm: this is a high dimensional phenomenon. In this respect, the Wasserstein distance is peculiar since it is much less stringent on the local behavior of the measures at stake: for instance $\lim_{n\to\infty}\mathrm{Wasserstein}(\delta_{0},\delta_{1/n})=0$ while for all other distances or divergences considered here, the corresponding quantity would remain equal to $\max$. This explains the absence of _generic_ cutoff phenomenon for Wasserstein. Third, the explicit expressions provided in our proof allow to extract the cutoff profile in each case, but we prefer not to provide them in our statement and refer the interested reader to the end of Section 3. ### 1.5. Exactly solvable intermezzo When $\beta\neq 0$, the law of the DOU process is no longer Gaussian nor explicit. However several exactly solvable aspects are available. Let us recall that a Cox–Ingersoll–Ross process (CIR) of parameters $a,b,\sigma$ is the solution $R=(R_{t})_{t\geq 0}$ on $\mathbb{R}_{+}$ of $R_{0}=r_{0}\in\mathbb{R}_{+},\quad\mathrm{d}R_{t}=\sigma\sqrt{R_{t}}\mathrm{d}W_{t}+(a-bR_{t})\mathrm{d}t,$ (1.9) where $W$ is a standard BM. Its invariant law is $\mathrm{Gamma}(2a/\sigma^{2},2b/\sigma^{2})$ with density proportional to $r\geq 0\mapsto r^{2a/\sigma^{2}-1}\mathrm{e}^{-2br/\sigma^{2}}$, with mean $a/b$, and variance $a\sigma^{2}/(2b^{2})$. It was proved by William Feller in [38] that the density of $R_{t}$ at an arbitrary $t$ can be expressed in terms of special functions. If ${(Z_{t})}_{t\geq 0}$ is a $d$-dimensional OU process of parameters $\theta\geq 0$ and $\rho\in\mathbb{R}$, weak solution of $\mathrm{d}Z_{t}=\theta\mathrm{d}W_{t}-\rho Z_{t}\mathrm{d}t$ (1.10) where $W$ is a $d$-dimensional BM, then $R={(R_{t})}_{t\geq 0}$, $R_{t}:=|Z_{t}|^{2}$, is a CIR process with parameters $a=\theta^{2}d$, $b=2\rho$, $\sigma=2\theta$. When $\rho=0$ then $Z$ is a BM while $R=|Z|^{2}$ is a squared Bessel process. The following theorem gathers some exactly solvable aspects of the DOU process for general $\beta\geq 1$, which are largely already in the statistical physics folklore, see [58]. It is based on our knowledge of eigenfunctions associated to the first spectral values of the dynamics, see (2.6), and their remarkable properties. As in (2.6), we set $\pi(x):=x_{1}+\cdots+x_{n}$ when $x\in\mathbb{R}^{n}$. ###### Theorem 1.3 (From DOU to OU and CIR). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3), with $\beta=0$ or $\beta\geq 1$, and let $P_{n}^{\beta}$ be its invariant law. Then: * • ${(\pi(X^{n}_{t}))}_{t\geq 0}$ is a one-dimensional OU process weak solution of (1.8) with $\theta=\sqrt{2}$, $\rho=1$. Its invariant law is $\mathcal{N}(0,1)$. It does not depend on $\beta$, and $\pi(X^{n}_{t})\sim\mathcal{N}(\pi(x^{n}_{0})\mathrm{e}^{-t},1-\mathrm{e}^{-2t})$, $t\geq 0$. Furthermore $\pi(X^{n}_{t})^{2}$ is a CIR process of parameters $a=2$, $b=2$, $\sigma=2\sqrt{2}$. 
* • ${(|X^{n}_{t}|^{2})}_{t\geq 0}$ is a CIR process, weak solution of (1.9) with $a=2+\beta(n-1)$, $b=2$, $\sigma=\sqrt{8/n}$. Its invariant law is $\mathrm{Gamma}(\frac{1}{2}(n+\beta\frac{n(n-1)}{2}),\frac{n}{2})$ of mean $1+\frac{\beta}{2}(n-1)$ and variance $\beta+\frac{2-\beta}{n}$. Furthermore, if $d=n+\beta\frac{n(n-1)}{2}$ is a positive integer, then ${(|X^{n}_{t}|^{2})}_{t\geq 0}$ has the law of ${(|Z_{t}|^{2})}_{t\geq 0}$ where ${(Z_{t})}_{t\geq 0}$ is a $d$-dimensional OU process, weak solution of (1.8) with $\theta=\sqrt{2/n}$, $\rho=1$, and $Z_{0}=z^{n}_{0}$ for an arbitrary $z^{n}_{0}\in\mathbb{R}^{d}$ such that $|z^{n}_{0}|=|x^{n}_{0}|$. At this step it is worth noting that Theorem 1.3 gives in particular, denoting $\beta_{n}:=1+\frac{\beta}{2}(n-1)$, $\mathbb{E}[\pi(X^{n}_{t})]=\pi(x^{n}_{0})\mathrm{e}^{-t}\underset{t\to\infty}{\longrightarrow}0\quad\text{and}\quad\mathbb{E}[|X^{n}_{t}|^{2}]=\beta_{n}+(|x^{n}_{0}|^{2}-\beta_{n})\mathrm{e}^{-2t}\underset{t\to\infty}{\longrightarrow}\beta_{n}.$ (1.11) Following [25, Sec. 2.2], the limits can also be deduced from the Dumitriu–Edelman tridiagonal random matrix model [32] isospectral to $\beta$-Hermite. These formulas for the “transient” first two moments $\mathbb{E}[\pi(X_{t}^{n})]$ and $\mathbb{E}[|X_{t}^{n}|^{2}]$ reveal an abrupt convergence to their equilibrium values : * • If $\lim_{n\to\infty}\frac{\pi(x^{n}_{0})}{n}=\alpha\neq 0$ then for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}|\mathbb{E}[\pi(X^{n}_{t_{n}})]|=\begin{cases}+\infty&\text{if $t_{n}=(1-\varepsilon)\log(n)$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)\log(n)$}\end{cases}.$ (1.12) * • If $\lim_{n\to\infty}\frac{|x^{n}_{0}|^{2}}{n}=\alpha\neq\frac{\beta}{2}$ then for all $\varepsilon\in(0,1)$, denoting $\beta_{n}:=1+\frac{\beta}{2}(n-1)$, $\lim_{n\to\infty}\left|\mathbb{E}[|X^{n}_{t_{n}}|^{2}]-\beta_{n}\right|=\begin{cases}+\infty&\text{if $t_{n}=(1-\varepsilon)\frac{1}{2}\log(n)$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)\frac{1}{2}\log(n)$}\end{cases}.$ (1.13) These critical times are universal with respect to $\beta$. The first two transient moments are related to the eigenfunctions (2.6) associated to the first two non-zero eigenvalues of the dynamics. Higher order transient moments are related to eigenfunctions associated to higher order eigenvalues. Note that $\mathbb{E}[\pi(X^{n}_{t})]$ and $\mathbb{E}[|X^{n}_{t}|^{2}]$ are the first two moments of the non-normalized mean empirical measure $\mathbb{E}[\sum_{i=1}^{n}\delta_{X^{n,i}_{t}}]$, and this lack of normalization is responsible of the critical times of order $\log(n)$. In contrast, the first two moments of the normalized mean empirical measure $\mathbb{E}[\frac{1}{n}\sum_{i=1}^{n}\delta_{X^{n,i}_{t}}]$, given by $\frac{1}{n}\mathbb{E}[\pi(X^{n}_{t})]$ and $\frac{1}{n}\mathbb{E}[|X^{n}_{t}|^{2}]$ respectively, do not exhibit a critical phenomenon. This is related to the exponential decay of the first two moments in the mean-field limit (2.12), as well as the lack of cutoff for Wasserstein already revealed for OU by Theorem 1.2. This also reminds the high dimension behavior of norms in the field of the asymptotic geometric analysis of convex bodies. In another direction, this elementary observation on the moments also illustrates that the cutoff phenomenon for a given quantity is not stable under rather simple transformations of this quantity. 
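The transient moment formulas (1.11) and the resulting critical times (1.12)-(1.13) are easy to inspect numerically. The following short sketch (ours) evaluates them in closed form for the initial condition $x_{0}^{n}=(2,\ldots,2)$, for which $\pi(x_{0}^{n})/n=2\neq 0$ and $|x_{0}^{n}|^{2}/n=4\neq\beta/2$ when $\beta=2$.

```python
import numpy as np

def transient_moments(t, x0, beta):
    """Closed-form first two transient moments (1.11) of the DOU process."""
    n = x0.size
    beta_n = 1.0 + 0.5 * beta * (n - 1)
    m1 = x0.sum() * np.exp(-t)                                   # E[pi(X_t)]
    m2 = beta_n + (np.dot(x0, x0) - beta_n) * np.exp(-2.0 * t)   # E[|X_t|^2]
    return m1, m2, beta_n

# x0 = (2,...,2): pi(x0)/n = 2 != 0 and |x0|^2/n = 4 != beta/2 for beta = 2.
beta = 2.0
for n in [10**2, 10**4, 10**6]:
    x0 = 2.0 * np.ones(n)
    for f in (0.8, 1.2):
        m1, _, _ = transient_moments(f * np.log(n), x0, beta)             # critical time log(n)
        _, m2, beta_n = transient_moments(f * 0.5 * np.log(n), x0, beta)  # critical time log(n)/2
        print(n, f, abs(m1), abs(m2 - beta_n))
# Both quantities blow up for f < 1 and vanish for f > 1, as in (1.12) and (1.13).
```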
From the first part of Theorem 1.3 and contraction properties available for _some_ distances or divergences, see Lemma A.2, we obtain the following lower bound on the mixing time for the DOU, which is independent of $\beta$: ###### Corollary 1.4 (Lower bound on the mixing time). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3) with $\beta=0$ or $\beta\geq 1$, and invariant law $P_{n}^{\beta}$. Let $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2},\mathrm{Wasserstein}\\}$. Set $c_{n}:=\begin{cases}\log(|\pi(x_{0}^{n})|)&\text{if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$}\\\ \log\Big{(}\frac{|\pi(x_{0}^{n})|}{\sqrt{n}}\Big{)}&\text{if $\mathrm{dist}=\mathrm{Wasserstein}$}\end{cases},$ and assume that $\lim_{n\to\infty}c_{n}=\infty$. Then, for all $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}})\mid P_{n}^{\beta})=\max.$ Theorem 1.3 and Corollary 1.4 are proved in Section 4. The derivation of an upper bound on the mixing time is much more delicate: once again recall that the case $\beta=0$ covered by Theorem 1.2 is specific as it relies on exact Gaussian computations which are no longer available for $\beta\geq 1$. In the next subsection, we will obtain results for general values of $\beta\geq 1$ via more elaborate arguments. In the specific cases $\beta\in\\{1,2\\}$, there are some exactly solvable aspects that one can exploit to derive, in particular, precise upper bounds on the mixing times. Indeed, for these values of $\beta$, the DOU process is the process of eigenvalues of the matrix-valued OU process: $M_{0}=m_{0},\quad\mathrm{d}M_{t}=\sqrt{\frac{2}{n}}\mathrm{d}B_{t}-M_{t}\mathrm{d}t,$ where $B$ is a BM on the symmetric $n\times n$ matrices if $\beta=1$ and on Hermitian $n\times n$ matrices if $\beta=2$, see (5.4) and (5.16) for more details. Based on this observation, we can deduce an upper bound on the mixing times by contraction (for _most_ distances or divergences). ###### Theorem 1.5 (Upper bound on mixing time in matrix case). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3) with $\beta\in\\{0,1,2\\}$, and invariant law $P_{n}^{\beta}$, and $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2},\mathrm{Wasserstein}\\}$. Set $c_{n}:=\begin{cases}\log(\sqrt{n}|x_{0}^{n}|)\vee\log(\sqrt{n})&\text{if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$}\\\ \log(|x_{0}^{n}|)&\text{if $\mathrm{dist}=\mathrm{Wasserstein}$}\end{cases},$ and assume that $\lim_{n\to\infty}c_{n}=\infty$ if $\mathrm{dist}=\mathrm{Wasserstein}$. Then, for all $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}})\mid P_{n}^{\beta})=0.$ Combining this upper bound with the lower bound already obtained above, we derive a cutoff phenomenon in this particular matrix case. ###### Corollary 1.6 (Cutoff for DOU in the matrix case). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3), with $\beta\in\\{0,1,2\\}$, and invariant law $P_{n}^{\beta}$. Let $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2},\mathrm{Wasserstein}\\}$. Let ${(a_{n})}_{n}$ be a real sequence satisfying $\inf_{n}\sqrt{n}a_{n}>0$, and assume further that $\lim_{n\to\infty}\sqrt{n}a_{n}=\infty$ if $\mathrm{dist}=\mathrm{Wasserstein}$. 
Then, for all $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\sup_{x^{n}_{0}\in[-a_{n},a_{n}]^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t_{n}})\mid P_{n}^{\beta})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases}$ where $c_{n}:=\begin{cases}\log(na_{n})&\text{ if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$}\\\ \log(\sqrt{n}a_{n})&\text{ if $\mathrm{dist}=\mathrm{Wasserstein}$}\end{cases}.$ Theorem 1.5 and Corollary 1.6 are proved in Section 5. It is worth noting that $d=n+\beta\frac{n(n-1)}{2}$ in Theorem 1.3 is indeed an integer in the “random matrix” cases $\beta\in\\{1,2\\}$, and corresponds then exactly to the degree of freedom of the Gaussian random matrix models GOE and GUE respectively. More precisely, if we let $X^{n}_{\infty}\sim P_{n}^{\beta}$ then: * • If $\beta=1$ then $P_{n}^{\beta}$ is the law of the eigenvalues of $S\sim\mathrm{GOE}_{n}$, and $|X^{n}_{\infty}|^{2}=\sum_{j,k=1}^{n}S_{jk}^{2}$ which is the sum of $n$ squared Gaussians of variance $v=1/n$ (diagonal) plus twice the sum of $\frac{n^{2}-n}{2}$ squared Gaussians of variance $\frac{v}{2}$ (off-diagonal) all being independent. The duplication has the effect of renormalizing the variance from $\frac{v}{2}$ to $v$. All in all we have the sum of $d=\frac{n^{2}+n}{2}$ independent squared Gaussians of same variance $v$. See Section 5. * • If $\beta=2$ then $P_{n}^{\beta}$ is the law of the eigenvalues of $H\sim\mathrm{GUE}_{n}$, and $|X^{n}_{\infty}|^{2}=\sum_{j,k=1}^{n}|H_{jk}|^{2}$ is the sum of $n$ squared Gaussians of variance $v=1/n$ (diagonal) plus twice the sum of $n^{2}-n$ squared Gaussians of variance $\frac{v}{2}$ (off-diagonal) all being independent. All in all we have the sum of $d=n^{2}$ independent squared Gaussians of same variance $v$. See Section 5. Another manifestation of exact solvability lies at the level of functional inequalities. Indeed, and following [25], the optimal Poincaré constant of $P_{n}^{\beta}$ is given by $1/n$ and does not depend on $\beta$, and the extremal functions are tranlations/dilations of $x\mapsto\pi(x)=x_{1}+\cdots+x_{n}$. This corresponds to a spectral gap of the dynamics equal to $1$ and its associated eigenfunction. Moreover, the optimal logarithmic Sobolev inequality of $P_{n}^{\beta}$ (Lemma B.1) is given by $2/n$ and does not depend on $\beta$, and the extremal functions are of the form $x\mapsto\mathrm{e}^{c(x_{1}+\cdots+x_{n})}$, $c\in\mathbb{R}$. This knowledge of the optimal constants and extremal functions and their independence with respect to $\beta$ is truly remarkable. It plays a crucial role in the results presented in this article. More precisely, the optimal Poincaré inequality is used for the lower bound via the first eigenfunctions while the optimal logarithmic Sobolev inequality is used for the upper bound via exponential decay of the entropy. ### 1.6. Cutoff in the general interacting case Our main contribution consists in deriving an upper bound on the mixing times in the general case $\beta\geq 1$: the proof relies on the logarithmic Sobolev inequality, some coupling arguments and a regularization procedure. ###### Theorem 1.7 (Upper bound on the mixing time: the general case). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3), with $\beta=0$ or $\beta\geq 1$ and invariant law $P_{n}^{\beta}$. Take $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Wasserstein}\\}$. 
Set $c_{n}:=\begin{cases}\log(\sqrt{n}|x_{0}^{n}|)\vee\log({n})&\text{if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger}\\}$}\\\ \log(|x_{0}^{n}|)\vee\log(\sqrt{n})&\text{if $\mathrm{dist}=\mathrm{Wasserstein}$}\end{cases}.$ Then, for all $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}})\mid P_{n}^{\beta})=0.$ Combining this upper bound with the general lower bound that we obtained in Corollary 1.4, we deduce the following cutoff phenomenon. Observe that it holds both for $\beta=0$ and $\beta\geq 1$, and that the expression of the mixing time does not depend on $\beta$. ###### Corollary 1.8 (Cutoff for DOU in the general case). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3) with $\beta=0$ or $\beta\geq 1$ and invariant law $P_{n}^{\beta}$. Take $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Wasserstein}\\}$. Let ${(a_{n})}_{n}$ be a real sequence satisfying $\inf_{n}a_{n}>0$. Then, for all $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\sup_{x^{n}_{0}\in[-a_{n},a_{n}]^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{t_{n}})\mid P_{n}^{\beta})=\begin{cases}\max&\text{if $t_{n}=(1-\varepsilon)c_{n}$}\\\ 0&\text{if $t_{n}=(1+\varepsilon)c_{n}$}\end{cases}$ where $c_{n}:=\begin{cases}\log(na_{n})&\text{ if $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger}\\}$}\\\ \log(\sqrt{n}a_{n})&\text{ if $\mathrm{dist}=\mathrm{Wasserstein}$}\end{cases}.$ The proofs of Theorem 1.7 and Corollary 1.8 for the TV and Hellinger distances are presented in Section 6. The Wasserstein distance is treated in Section 7. Let us make a comment on the assumptions made on $a_{n}$ in Corollaries 1.6 and 1.8. They are dictated by the upper bounds established in Theorems 1.5 and 1.7, which take the form of maxima of two terms: one that depends on the initial condition, and another one which is a power of a logarithm of $n$. The logarithmic term is an upper bound on the time required to regularize a pointwise initial condition, its precise expression varies according to the method of proof we rely on: in the matrix case, it is the time required to regularize a larger object, the matrix-valued OU process; in the general case, it is related to the time it takes to make the entropy of a pointwise initial condition small. These bounds are not optimal for $\beta=0$ (compare with Theorem 1.2), and probably neither for $\beta\geq 1$. A natural, but probably quite difficult, goal would be to establish a cutoff phenomenon in the situation where the set of initial conditions is reduced to _any_ given singleton, as in Theorem 1.2 for the case $\beta=0$. Recall that in that case, the asymptotic of the mixing time is dictated by the Euclidean norm of the initial condition. In the case $\beta\geq 1$, this _cannot_ be the right observable since the Euclidean norm does not measure the distance to equilibrium. Instead one should probably consider the Euclidean norm $|x_{0}^{n}-\rho_{n}|$, where $\rho_{n}$ is the vector of the quantiles of order $1/n$ of the semi-circle law that arises in the mean-field limit equilibrium (see Subsection 2.5). More precisely $\rho_{n,i}=\inf\left\\{t\in\mathbb{R}:\int_{-\infty}^{t}\frac{\sqrt{2\beta-x^{2}}}{\beta\pi}\mathbf{1}_{x\in[-\sqrt{2\beta},\sqrt{2\beta}]}\mathrm{d}x\geq\frac{i}{n}\right\\},\quad i\in\\{1,\ldots,n\\}.$ (1.14) Note that $\rho_{n}=0$ when $\beta=0$. A first step in this direction is given by the following result: ###### Theorem 1.9 (DOU in the general case and pointwise initial condition). 
Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3) with $\beta=0$ or $\beta\geq 1$, and invariant law $P_{n}^{\beta}$. There hold * • If $\lim_{n\to\infty}|x^{n}_{0}-\rho_{n}|=+\infty$, then, denoting $t_{n}=\log(|x_{0}^{n}-\rho_{n}|)$, for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(X_{(1+\varepsilon)t_{n}}),P_{n}^{\beta})=0.$ * • If $\lim_{n\to\infty}|x^{n}_{0}-\rho_{n}|=\alpha\in[0,\infty)$, then, for all $t>0$, $\varlimsup_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(X_{t}),P_{n}^{\beta})^{2}\leq\alpha^{2}\mathrm{e}^{-2t}.$ Theorem 1.9 is proved in Section 7. ### 1.7. Non-pointwise initial conditions It is natural to ask about the cutoff phenomenon when the initial conditions $X^{n}_{0}$ is not pointwise. Even if we turn off the interaction by taking $\beta=0$, the law of the process at time $t$ is then no longer Gaussian in general, which breaks the method of proof used for Theorem 1.1 and Theorem 1.2. Nevertheless, Theorem 1.10 below provides a universal answer, that is both for $\beta=0$ and $\beta\geq 1$, at the price however of introducing several objects and notations. More precisely, for any probability measure $\mu$ on $\mathbb{R}^{n}$, we introduce $S(\mu)=\begin{cases}\displaystyle\int\frac{\mathrm{d}\mu}{\mathrm{d}x}\log\frac{\mathrm{d}\mu}{\mathrm{d}x}\mathrm{d}x=\text{``}\mathrm{Kullback}(\mu\mid\mathrm{d}x)\text{''}&\text{if $\displaystyle\frac{\mathrm{d}\mu}{\mathrm{d}x}\log\frac{\mathrm{d}\mu}{\mathrm{d}x}\in L^{1}(\mathrm{d}x)$}\\\ +\infty&\text{otherwise}\end{cases}.$ (1.15) Note that $S$ takes its values in the whole $(-\infty,+\infty]$, and when $S(\mu)<+\infty$ then $-S(\mu)$ is the _Boltzmann–Shannon entropy_ of the law $\mu$. For all $x\in\mathbb{R}^{n}$ with $x_{i}\neq x_{j}$ for all $i\neq j$, we have $E(x_{1},\ldots,x_{n})=n^{2}\iint\Phi(x,y)\mathbf{1}_{\\{x\neq y\\}}L_{n}(\mathrm{d}x)L_{n}(\mathrm{d}y)$ (1.16) where $\displaystyle L_{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}$ and where $\displaystyle\Phi(x,y):=\frac{n}{n-1}\frac{V(x)+V(y)}{2}+\frac{\beta}{2}\log\frac{1}{|x-y|}$. Let us define the map $\Psi:\mathbb{R}^{n}\mapsto\overline{D}_{n}$ by $\Psi(x_{1},\ldots,x_{n}):=(x_{\sigma(1)},\ldots,x_{\sigma(n)}).$ (1.17) where $\sigma$ is any permutation of $\\{1,\ldots,n\\}$ that reorders the particles non-decreasingly. ###### Theorem 1.10 (Cutoff for DOU with product smooth initial conditions). Let ${(X^{n}_{t})}_{t\geq 0}$ be the DOU process (1.3) with $\beta=0$ or $\beta\geq 1$, and invariant law $P_{n}^{\beta}$. Let $S$, $\Phi$, and $\Psi$ be as in (1.15), (1.16), and (1.17). Let us assume that $\mathrm{Law}(X_{0}^{n})$ is the image law or push forward of a product law $\mu_{1}\otimes\cdots\otimes\mu_{n}$ by $\Psi$ where $\mu_{1},\ldots,\mu_{n}$ are laws on $\mathbb{R}$. Then: 1. (1) If $\displaystyle\varliminf_{n\to\infty}\Bigr{|}\frac{1}{n}\sum_{i=1}^{n}\int x\mu_{i}(\mathrm{d}x)\Bigr{|}\neq 0$ then, for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{Kullback}(\mathrm{Law}(X_{(1-\varepsilon)\log(n)})\mid P_{n}^{\beta})=+\infty.$ 2. (2) If $\displaystyle\varlimsup_{n\to\infty}\frac{1}{n^{2}}\sum_{i=1}^{n}S(\mu_{i})<\infty$ and $\displaystyle\varlimsup_{n\to\infty}\frac{1}{n^{2}}\sum_{i\neq j}\iint\Phi\,\mathrm{d}\mu_{i}\otimes\mathrm{d}\mu_{j}<\infty$, then, for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{Kullback}(\mathrm{Law}(X_{(1+\varepsilon)\log(n)})\mid P_{n}^{\beta})=0.$ Theorem 1.10 is proved in Section 6.3. 
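In connection with Theorem 1.9, the quantile vector $\rho_{n}$ of (1.14) can be computed numerically by inverting the distribution function of the semicircle law, for instance by bisection as in the following sketch (ours); the antiderivative used for the distribution function is the standard one for $\sqrt{R^{2}-x^{2}}$.

```python
import numpy as np

def semicircle_cdf(t, beta):
    """Distribution function of the semicircle law in (1.14), supported on [-R, R]."""
    R = np.sqrt(2.0 * beta)
    t = np.clip(t, -R, R)
    # antiderivative of sqrt(R^2 - x^2): (x sqrt(R^2 - x^2) + R^2 arcsin(x/R)) / 2
    prim = lambda x: 0.5 * (x * np.sqrt(np.maximum(R**2 - x**2, 0.0)) + R**2 * np.arcsin(x / R))
    return (prim(t) - prim(-R)) / (beta * np.pi)

def rho_n(n, beta, iters=60):
    """Quantile vector (1.14): rho_{n,i} = inf{t : F(t) >= i/n}, computed by bisection."""
    R = np.sqrt(2.0 * beta)
    lo, hi = -R * np.ones(n), R * np.ones(n)
    targets = np.arange(1, n + 1) / n
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        too_small = semicircle_cdf(mid, beta) < targets
        lo = np.where(too_small, mid, lo)
        hi = np.where(too_small, hi, mid)
    return hi

# Observable of Theorem 1.9 for the initial condition x0 = 0: |x0 - rho_n| is of order sqrt(n).
beta, n = 2.0, 1000
print(np.linalg.norm(np.zeros(n) - rho_n(n, beta)))
```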
It is likely that Theorem 1.10 can be extended to the case $\mathrm{dist}\in\\{\mathrm{Wasserstein},\mathrm{Hellinger},\mathrm{Fisher}\\}$. ### 1.8. Structure of the paper * • Section 2 provides additional comments and open problems. * • Section 3 focuses on the OU process ($\beta=0$) and gives the proofs of Theorems 1.1 and 1.2. * • Section 4 concerns the exact solvability of the DOU process for all $\beta$, and provides the proofs of Theorem 1.3 and Corollary 1.4. * • Section 5 is about random matrices and gives the proofs of Theorem 1.5 and Corollary 1.6. * • Section 6 deals with the DOU process for all $\beta$ with the TV and Hellinger distances, and provides the proofs of Theorem 1.7 and Corollary 1.8. * • Section 7 gives the Wasserstein counterpart of Section 6 and the proof of Theorem 1.9. * • Appendix A provides a survey on distances and divergences, with new results. * • Appendix B gathers useful dynamical consequences of convexity. ## 2\. Additional comments and open problems ### 2.1. About the results and proofs The proofs of our results rely among other ingredients on convexity and optimal functional inequalities, exact solvability, exact Gaussian formulas, coupling arguments, stochastic calculus, variational formulas, contraction properties and regularization. The proofs of Theorems 1.1 and 1.2 are based on the explicit Gaussian nature of the OU process, which allows to use Gaussian formulas for all the distances and divergences that we consider (the Gaussian formula for $\mathrm{Fisher}$ seems to be new). Our analysis of the convergence to equilibrium of the OU process seems to go beyond what is already known, see for instance [47] and [10, 8, 9, 11]. Theorem 1.3 is a one-dimensional analogue of [15, Th. 1.2]. The proof exploits the explicit knowledge of eigenfunctions of the dynamics (2.6), associated with the first two non-zero spectral values, and their remarkable properties. The first one is associated to the spectral gap and the optimal Poincaré inequality. It implies Corollary 1.4, which is the provider of all our lower bounds on the mixing time for the cutoff. The proof of Theorem 1.5 is based on a contraction property and the upper bound for matrix OU processes. It is not available beyond the matrix cases. All the other upper bounds that we establish are related to an optimal exponential decay which comes from convexity and involves sometimes coupling, the simplest instance being Theorem 1.7 about the Wasserstein distance. The usage of the Wasserstein metrics for Dyson dynamics is quite natural, see for instance [13]. The proof of Theorem 1.7 for the $\mathrm{TV}$ and $\mathrm{Hellinger}$ relies on the knowledge of the optimal exponential decay of the entropy (with respect to equilibrium) related to the optimal logarithmic Sobolev inequality. Since pointwise initial conditions have infinite entropy, the proof proceeds in three steps: first we regularize the initial condition to make its entropy finite, second we use the optimal exponential decay of the entropy of the process starting from this regularized initial condition, third we control the distance between the processes starting from the initial condition and its regularized version. This last part is inspired by a work of Lacoin [48] for the simple exclusion process on the segment, subsequently adapted to continuous state-spaces [18, 19], where one controls an _area_ between two versions of the process. 
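The coupling ingredient mentioned above can be visualized: running two copies of (1.3) driven by the same Brownian motion (synchronous coupling), the convexity of $E$ on $D_{n}$ yields the contraction $|X_{t}^{n}-Y_{t}^{n}|\leq\mathrm{e}^{-t}|X_{0}^{n}-Y_{0}^{n}|$, which is the mechanism behind the Wasserstein part of Theorem 1.7. The sketch below (ours) checks this inequality with a naive Euler scheme; the discretization and the re-sorting are only heuristic.

```python
import numpy as np

def drift(x, beta):
    """Drift of (1.3): -x_i + (beta/n) sum_{j != i} 1/(x_i - x_j)."""
    n = x.size
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, np.inf)
    return -x + (beta / n) * np.sum(1.0 / d, axis=1)

def coupled_distance(x0, y0, t_max, beta=2.0, dt=1e-4, seed=1):
    """Synchronous coupling: both copies are driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    n = x0.size
    x, y = np.sort(np.asarray(x0, dtype=float)), np.sort(np.asarray(y0, dtype=float))
    for _ in range(int(t_max / dt)):
        dB = np.sqrt(dt) * rng.standard_normal(n)
        x = np.sort(x + drift(x, beta) * dt + np.sqrt(2.0 / n) * dB)
        y = np.sort(y + drift(y, beta) * dt + np.sqrt(2.0 / n) * dB)
    return np.linalg.norm(x - y)

n = 30
x0, y0 = np.linspace(-2.0, 2.0, n), np.linspace(-1.0, 3.0, n)
for t in (0.5, 1.0, 2.0):
    print(t, coupled_distance(x0, y0, t), np.exp(-t) * np.linalg.norm(x0 - y0))
# The coupled distance stays (up to discretization error) below e^{-t} |x0 - y0|.
```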
The (optimal) exponential decay of the entropy (Lemma B.2) is equivalent to the (optimal) logarithmic Sobolev inequality (Lemma B.1). For the DOU process, the optimal logarithmic Sobolev inequality provided by Lemma B.1 achieves also the universal bound with respect to the spectral gap, just like for Gaussians. This sharpness between the best logarithmic Sobolev constant and the spectral gap also holds for instance for the random walk on the hypercube, a discrete process for which a cutoff phenomenon can be established with the optimal logarithmic Sobolev inequality, and which can be related to the OU process, see for instance [30, 29] and references therein. If we generalize the DOU process by adding an arbitrary convex function to $V$, then we will still have a logarithmic Sobolev inequality – see [25] for several proofs including the proof via the Bakry–Émery criterion – however the optimal logarithmic Sobolev constant will no longer be explicit nor sharp with respect to the spectral gap, and the spectral gap will no longer be explicit. The proof of Theorem 1.10 relies crucially on the tensorization property of $\mathrm{Kullback}$ and on the asymptotics on the normalizing constant $C_{n}^{\beta}$ at equilibrium. ### 2.2. Analysis and geometry of the equilibrium The full space $\mathbb{R}^{n}$ is, up to a bunch of hyperplanes, covered with $n!$ disjoint isometric copies of the convex domain $D_{n}$ obtained by permuting the coordinates (simplices or Weyl chambers). Following [25], for all $\beta\geq 0$ let us define the law $P_{*n}^{\beta}$ on $\mathbb{R}^{n}$ with density proportional to $\mathrm{e}^{-E}$, just like for $P_{n}^{\beta}$ in (1.6) but without the $\mathbf{1}_{(x_{1},\ldots,x_{n})\in\overline{D}_{n}}$. If $\beta=0$ then $P_{*n}^{0}=P_{n}^{0}=\mathcal{N}(0,\frac{1}{n}I_{n})$ according to our definition of $P_{n}^{0}$. If $\beta>0$ then $P_{*n}^{\beta}$ has density $(C_{*n}^{\beta})^{-1}\mathrm{e}^{-E}$ with $C_{*n}^{\beta}=n!C_{n}^{\beta}$ where $C_{n}^{\beta}$ is the normalization of $P_{n}^{\beta}$. Moreover $P_{*n}^{\beta}$ is a mixture of $n!$ isometric copies of $P_{n}^{\beta}$, while $P_{n}^{\beta}$ is the image law or push forward of $P_{*n}^{\beta}$ by the map $\Psi_{n}:\mathbb{R}^{n}\to\overline{D}_{n}$ defined in (1.17). Furthermore for all bounded measurable $f:\mathbb{R}^{n}\to\mathbb{R}$, denoting $\Sigma_{n}$ the symmetric group of permutations of $\\{1,\ldots,n\\}$, $\int f\mathrm{d}P_{*n}^{\beta}=\int f_{\mathrm{sym}}\mathrm{d}P_{n}^{\beta}\quad\text{with}\quad f_{\mathrm{sym}}(x_{1},\ldots,x_{n}):=\frac{1}{n!}\sum_{\sigma\in\Sigma_{n}}f(x_{\sigma(1)},\ldots,x_{\sigma(n)}).$ Regarding log-concavity, it is important to realize that if $\beta=0$ then $E$ is convex on $\mathbb{R}^{n}$, while if $\beta>0$ then $E$ is convex on $D_{n}$ but is not convex on $\mathbb{R}^{n}$ and has $n!$ isometric local minima. * • The law $P_{*n}^{\beta}$ is centered but is not log-concave when $\beta>0$ since $E$ is not convex on $\mathbb{R}^{n}$. As $\beta\to 0^{+}$ the law $P_{*n}^{\beta}$ tends to $P_{*n}^{0}=P_{n}^{0}=\mathcal{N}(0,\frac{1}{n}I_{n})$ which is log-concave. * • The law $P_{n}^{\beta}$ is not centered but is log-concave for all $\beta\geq 0$. Its density vanishes at the boundary of $D_{n}$ if $\beta>0$. As $\beta\to 0^{+}$ the law $P_{n}^{\beta}$ tends to the law of the order statistics of $n$ i.i.d. $\mathcal{N}(0,\frac{1}{n})$. ### 2.3. 
Spectral analysis of the generator: the non-interacting case This subsection and the next deal with analytical aspects of our dynamics. We start with the OU process ($\beta=0$) for which everything is explicit; the next subsection deals with the DOU process ($\beta\geq 1$). The infinitesimal generator of the OU process is given by $\operatorname{G}f=\frac{1}{n}\Bigr{(}\Delta-\nabla E\cdot\nabla\Bigr{)}=\frac{1}{n}\sum_{i=1}^{n}\partial_{i}^{2}-\sum_{i=1}^{n}V^{\prime}(x_{i})\partial_{i}.$ (2.1) It is a self-adjoint operator on $L^{2}(\mathbb{R}^{n},P_{n}^{0})$ that leaves globally invariant the set of polynomials. Its spectrum is the set of all non- positive integers, that is, $\lambda_{0}=0>\lambda_{1}=-1>\lambda_{2}=-2>\cdots$. The corresponding eigenspaces $F_{0},F_{1},F_{2},\cdots$ are finite dimensional: $F_{m}$ is spanned by the multivariate Hermite polynomials of degree $m$, in other words tensor products of univariate Hermite polynomials. In particular, $F_{0}$ is the vector space of constant functions while $F_{1}$ is the $n$-dimensional vector space of all linear functions. Let us point out that $\operatorname{G}$ can be restricted to the set of $P_{n}^{0}$ square integrable _symmetric_ functions: it leaves globally invariant the set of _symmetric_ polynomials, its spectrum is unchanged but the associated eigenspaces $E_{m}$ are the restrictions of the vector spaces $F_{m}$ to the set of symmetric functions, in other words, $E_{m}$ is spanned by the multivariate _symmetrized_ Hermite polynomials of degree $m$. Note that $E_{1}$ is the one-dimensional space generated by $\pi(x)=x_{1}+\cdots+x_{n}$. The Markov semigroup ${(\mathrm{e}^{t\operatorname{G}})}_{t\geq 0}$ generated by $\operatorname{G}$ admits $P_{n}^{0}$ as a reversible invariant law since $\operatorname{G}$ is self-adjoint in $L^{2}(P_{n}^{0})$. Following [62], let us introduce the _heat kernel_ $p_{t}(x,y)$ which is the density of $\mathrm{Law}(X^{n}_{t}\mid X^{n}_{0}=x)$ with respect to the invariant law $P_{n}^{0}$. The long-time behavior reads $\lim_{t\to\infty}p_{t}(x,\cdot)=1$ for all $x\in\mathbb{R}^{n}$. Let $\left\|\cdot\right\|_{p}$ be the norm of $L^{p}=L^{p}(P_{n}^{0})$. For all $1\leq p\leq q$, $t\geq 0$, $x\in\mathbb{R}^{n}$, we have $2\|\mathrm{Law}(X^{n}_{t}\mid X^{n}_{0}=x)-P_{n}^{0}\|_{\mathrm{TV}}=\|p_{t}(x,\cdot)-1\|_{1}\leq\|p_{t}(x,\cdot)-1\|_{p}\leq\|p_{t}(x,\cdot)-1\|_{q}.$ (2.2) In the particular case $p=2$ we can write $\|p_{t}(x,\cdot)-1\|_{2}^{2}=\sum_{m=1}^{\infty}\mathrm{e}^{-2mt}\sum_{\psi\in B_{m}}|\psi(x)|^{2}.$ (2.3) where $B_{m}$ is an orthonormal basis of $F_{m}\subset L^{2}(P_{n}^{0})$, hence $\|p_{t}(x,\cdot)-1\|_{2}^{2}\geq\mathrm{e}^{-2t}\sum_{\psi\in B_{1}}|\psi(x)|^{2},$ (2.4) which leads to a lower bound on the $\chi^{2}$ (in other words $L^{2}$) cutoff, provided one can estimate $\sum_{\psi\in B_{1}}|\psi(x)|^{2}$ which is the square of the norm of the projection of $\delta_{x}$ on $B_{1}$. Following [62, Th. 6.2], an upper bound would follow from a Bakry–Émery curvature–dimension criterion $\mathrm{CD}(\rho,d)$ with a finite dimension $d$, in relation with Nash–Sobolev inequalities and dimensional pointwise estimates on the heat kernel $p_{t}(x,\cdot)$ or ultracontractivity of the Markov semigroup, see for instance [63, Sec. 4.1]. The OU process satisfies to $\mathrm{CD}(\rho,\infty)$ but never to $\mathrm{CD}(\rho,d)$ with $d$ finite and is not ultracontractive. Actually the OU process is a critical case, see [3, Ex. 2.7.3]. ### 2.4. 
Spectral analysis of the generator: the interacting case We now assume that $\beta\geq 1$. The infinitesimal generator of the DOU process is the operator $\operatorname{G}f=\frac{1}{n}\Bigr{(}\Delta-\nabla E\cdot\nabla\Bigr{)}=\frac{1}{n}\sum_{i=1}^{n}\partial_{i}^{2}-\sum_{i=1}^{n}V^{\prime}(x_{i})\partial_{i}+\frac{\beta}{2n}\sum_{j\neq i}\frac{\partial_{i}-\partial_{j}}{x_{i}-x_{j}}.$ (2.5) Despite the interaction term, the operator leaves globally invariant the set of _symmetric_ polynomials. Following Lassalle in [49, 4], see also [25], the operator $\operatorname{G}$ is a self-adjoint operator on the space of $P_{*n}^{\beta}$ square integrable _symmetric_ functions of $n$ variables; its spectrum does not depend on $\beta$ and matches the spectrum of the OU process case $\beta=0$. In particular the spectral gap is $1$. The eigenspaces $E_{m}$ are spanned by the generalized symmetrized Hermite polynomials of degree $m$. For instance, $E_{1}$ is the one-dimensional space generated by $\pi(x)=x_{1}+\cdots+x_{n}$ while $E_{2}$ is the two-dimensional space spanned by $(x_{1}+\cdots+x_{n})^{2}-1\quad\text{and}\quad x_{1}^{2}+\cdots+x_{n}^{2}-1-\frac{\beta}{2}(n-1).$ (2.6) From the isometry between $L^{2}(\overline{D}_{n},P_{n}^{\beta})$ and $L^{2}_{\mathrm{sym}}(\mathbb{R}^{n},P_{*n}^{\beta})$, the above explicit spectral decomposition applies to the semigroup of the DOU on $\overline{D}_{n}$. Formally, the discussion presented at the end of the previous subsection still applies. However, in the present interacting case the integrability properties of the heat kernel are not known: in particular, we do not know whether $p_{t}(x,\cdot)$ lies in $L^{p}(P_{n}^{\beta})$ for $t>0$, $x\in\overline{D}_{n}$ and $p>1$. This leads to the question, of independent interest, of pointwise upper and lower Gaussian bounds for heat kernels similar to the OU process, with explicit dependence of the constants on the dimension. We refer for example to [65, 36, 41] for some results in this direction. ### 2.5. Mean-field limit The measure $P_{n}^{\beta}$ is log-concave since $E$ is convex, and its density reads $x\in\mathbb{R}^{n}\mapsto\frac{\mathrm{e}^{-\frac{n}{2}|x|^{2}}}{C_{n}^{\beta}}\prod_{i>j}(x_{i}-x_{j})^{\beta}\mathbf{1}_{x_{1}\leq\cdots\leq x_{n}}.$ (2.7) See [25, Sec. 2.2] for a high-dimensional analysis. The Boltzmann–Gibbs measure $P_{n}^{\beta}$ is known as the $\beta$-Hermite ensemble or H$\beta$E. When $\beta=2$, it is better known as the Gaussian Unitary Ensemble (GUE). If $X^{n}\sim P_{n}^{\beta}$ then the Wigner theorem states that the empirical measure with atoms distributed according to $P_{n}^{\beta}$ converges in distribution to a semi-circle law, namely $\frac{1}{n}\sum_{i=1}^{n}\delta_{X^{n,i}}\underset{n\to\infty}{\overset{\text{weak}}{\longrightarrow}}\frac{\sqrt{2\beta-x^{2}}}{\beta\pi}\mathbf{1}_{x\in[-\sqrt{2\beta},\sqrt{2\beta}]}\mathrm{d}x,$ (2.8) and this can be deduced in this Coulomb gas context from a large deviation principle as in [12]. Let ${(X^{n}_{t})}_{t\geq 0}$ be the process solving (1.3) with $\beta=0$ or $\beta\geq 1$, and let $\mu^{n}_{t}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X^{n,i}_{t}}$ (2.9) be the empirical measure of the particles at time $t$.
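As a purely illustrative numerical check of (2.8) — not used anywhere in the arguments — the following Python sketch (assuming only NumPy) samples the exactly solvable case $\beta=2$, for which $P_{n}^{\beta}$ is the law of the ordered eigenvalues of a $\mathrm{GUE}_{n}$ matrix normalized as in Section 5, and compares the empirical spectral moments with those of the semi-circle law on $[-2,2]$; the matrix construction below is one standard sampling recipe, chosen here only for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Sample H ~ GUE_n with density proportional to exp(-(n/2) Tr(h^2)):
# diagonal entries N(0, 1/n), real/imaginary parts of off-diagonal entries N(0, 1/(2n)).
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / (2 * np.sqrt(n))

# The ordered eigenvalues of H are a sample of P_n^beta with beta = 2.
eigs = np.linalg.eigvalsh(H)

# Compare with the semi-circle law on [-2, 2] (beta = 2 in (2.8)):
# second moment beta/2 = 1 and fourth moment 2.
print("support    :", eigs.min(), eigs.max())   # close to [-2, 2]
print("2nd moment :", np.mean(eigs**2))         # close to 1
print("4th moment :", np.mean(eigs**4))         # close to 2
```

For $\beta=2$ the limit in (2.8) is the semi-circle law of variance $1$, so the printed second and fourth moments should indeed be close to $1$ and $2$ respectively.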
Following notably [60, 14, 21, 20, 53, 31], if the sequence of initial conditions ${(\mu_{0}^{n})}_{n\geq 1}$ converges weakly as $n\to\infty$ to a probability measure $\mu_{0}$, then the sequence of measure valued processes ${({(\mu_{t}^{n})}_{t\geq 0})}_{n\geq 1}$ converges weakly to the unique probability measure valued deterministic process ${(\mu_{t})}_{t\geq 0}$ satisfying the evolution equation $\langle\mu_{t},f\rangle=\langle\mu_{0},f\rangle-\int_{0}^{t}\int V^{\prime}(x)f^{\prime}(x)\mu_{s}(\mathrm{d}x)\mathrm{d}s+\frac{\beta}{2}\int_{0}^{t}\int_{\mathbb{R}^{2}}\frac{f^{\prime}(x)-f^{\prime}(y)}{x-y}\mu_{s}(\mathrm{d}x)\mu_{s}(\mathrm{d}y)\mathrm{d}s\quad$ (2.10) for all $t\geq 0$ and $f\in\mathcal{C}^{3}_{b}(\mathbb{R},\mathbb{R})$. The equation (2.10) is a weak formulation of a McKean–Vlasov equation or free Fokker–Planck equation associated to a free OU process. Moreover, if $\mu_{0}$ has all its moments finite, then for all $t\geq 0$, we have the free Mehler formula $\mu_{t}=\mathrm{dil}_{\mathrm{e}^{-t}}\mu_{0}\boxplus\mathrm{dil}_{\sqrt{1-\mathrm{e}^{-2t}}}\mu_{\infty},$ (2.11) where $\mathrm{dil}_{\sigma}\mu$ is the law of $\sigma X$ when $X\sim\mu$, where “$\boxplus$” stands for the free convolution of probability measures of Voiculescu free probability theory, and where $\mu_{\infty}$ is the semi-circle law of variance $\frac{\beta}{2}$. In particular, if $\mu_{0}$ is a semi-circle law then $\mu_{t}$ is a semi-circle law for all $t\geq 0$. Let us introduce the $k$-th moment $m_{k}(t):=\displaystyle\int x^{k}\mu_{t}(\mathrm{d}x)$ of $\mu_{t}$. The first and second moments satisfy the differential equations $m_{1}^{\prime}=-m_{1}$ and $m_{2}^{\prime}=-2m_{2}+\beta$ respectively, which give $m_{1}(t)=\mathrm{e}^{-t}m_{1}(0)\underset{t\to\infty}{\longrightarrow}0\quad\text{and}\quad m_{2}(t)=m_{2}(0)\mathrm{e}^{-2t}+\frac{\beta}{2}(1-\mathrm{e}^{-2t})\underset{t\to\infty}{\longrightarrow}\frac{\beta}{2}.$ (2.12) More generally, beyond the first two moments, the Cauchy–Stieltjes transform $z\in\mathbb{C}_{+}=\\{z\in\mathbb{C}:\Im z>0\\}\mapsto s_{t}(z)=\int_{\mathbb{R}}\frac{\mu_{t}(\mathrm{d}x)}{x-z}$ (2.13) of $\mu_{t}$ is the solution of the following complex Burgers equation $\partial_{t}s_{t}(z)=s_{t}(z)+z\partial_{z}s_{t}(z)+\beta s_{t}(z)\partial_{z}s_{t}(z),\quad t\geq 0,z\in\mathbb{C}_{+}.$ (2.14) The semi-circle law on $[-c,c]$ has density $\frac{2\sqrt{c^{2}-x^{2}}}{\pi c^{2}}\mathbf{1}_{x\in[-c,c]}$, mean $0$, second moment or variance $\frac{c^{2}}{4}$, and Cauchy–Stieltjes transform $s(z)=\frac{\sqrt{4z^{2}-4c^{2}}-2z}{c^{2}}$, $z\in\mathbb{C}_{+}$. The cutoff phenomenon is in a sense a diagonal $(t,n)$ estimate, combining long-time behavior and high dimension. When $|z_{0}^{n}|$ is of order $n$, cutoff occurs at a time of order $\approx\log(n)$: this informally corresponds to taking $t\to\infty$ in $(\mu_{t})_{t\geq 0}$. When $\mu_{0}$ is centered with the same second moment $\frac{\beta}{2}$ as $\mu_{\infty}$, then there is a Boltzmann H-theorem interpretation of the limiting dynamics as $n\to\infty$: the steady-state is the Wigner semi-circle law $\mu_{\infty}$, the second moment is conserved by the dynamics, and the Voiculescu entropy is monotonic along the dynamics, converges exponentially fast, and is maximized by the steady-state. ### 2.6. $L^{p}$ cutoff Following [26], we can deduce an $L^{p}$ cutoff started from $x$ from an $L^{1}$ cutoff by showing that the heat kernel $p_{t}(x,\cdot)$ is in $L^{p}(P_{n}^{\beta})$ for some $t>0$.
Thanks to the Mehler formula, it can be checked that this holds for the OU case, despite the lack of ultracontractivity. The heat kernel of the DOU process is less accessible. In another exactly solvable direction, the $L^{p}$ cutoff phenomenon has been studied for instance in [62, 64] for Brownian motion on compact simple Lie groups, and in [64, 55] for Brownian motion on symmetric spaces, in relation with representation theory, an idea which goes back at the origin to the works of Diaconis on random walks on groups. ### 2.7. Cutoff window and profile Once a cutoff phenomenon is established, one can ask for a finer description of the pattern of convergence to equilibrium. The _cutoff window_ is the order of magnitude of the transition time from the value $\max$ to the value $0$: more precisely, if cutoff occurs at time $c_{n}$ then we say that the cutoff window is $w_{n}$ if $\displaystyle\varlimsup_{b\to+\infty}\varlimsup_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X_{c_{n}+bw_{n}})\mid P_{n}^{\beta})$ $\displaystyle=0,$ $\displaystyle\varliminf_{b\to-\infty}\varliminf_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X_{c_{n}+bw_{n}})\mid P_{n}^{\beta})$ $\displaystyle=\max,$ and for any $b\in\mathbb{R}$ $0<\varliminf_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X_{c_{n}+bw_{n}})\mid P_{n}^{\beta})\leq\varlimsup_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X_{c_{n}+bw_{n}})\mid P_{n}^{\beta})<\max.$ Note that necessarily $w_{n}=o(c_{n})$ by definition of the cutoff phenomenon. Note also that $w_{n}$ is unique in the following sense: $w^{\prime}_{n}$ is a cutoff window if and only if $w_{n}/w^{\prime}_{n}$ remains bounded from above and below as $n\to\infty$. We say that the _cutoff profile_ is given by $\varphi:\mathbb{R}\to[0,1]$ if $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X_{c_{n}+bw_{n}})\mid P_{n}^{\beta})=\varphi(b).$ The analysis of the OU process carried out in Theorems 1.1 and 1.2 can be pushed further to establish the so-called cutoff profiles, we refer to the end of Section 3 for details. Regarding the DOU process, such a detailed description of the convergence to equilibrium does not seem easily accessible. However it is straightforward to deduce from our proofs that the cutoff window is of order $1$, in other words the inverse of the spectral gap, in the setting of Corollary 1.6. This is also the case in the setting of Corollary 1.8 for the Wasserstein distance. We believe that this remains true in the setting of Corollary 1.8 for the TV and Hellinger distances: actually, a lower bound of the required order can be derived from the calculations in the proof of Corollary 1.4; on the other hand, our proof of the upper bound on the mixing time does not allow to give a precise enough upper bound on the window. ### 2.8. Other potentials It is natural to ask about the cutoff phenomenon for the process solving (1.3) when $V$ is a more general $\mathcal{C}^{2}$ function. The invariant law $P_{n}^{\beta}$ of this Markov diffusion writes $\frac{\mathrm{e}^{-n\sum_{i=1}^{n}V(x_{i})}}{C_{n}^{\beta}}\prod_{i>j}(x_{i}-x_{j})^{\beta}\mathbf{1}_{(x_{1},\ldots,x_{n})\in\overline{D}_{n}}\mathrm{d}x_{1}\cdots\mathrm{d}x_{n}.$ (2.15) The case where $V-\frac{\rho}{2}\left|\cdot\right|^{2}$ is convex for some constant $\rho\geq 0$ generalizes the DOU case and has exponential convergence to equilibrium, see [25]. 
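To make the dynamics with a general confinement concrete, here is a rough Euler–Maruyama sketch of the particle system (1.3) with a user-supplied $V^{\prime}$, written in Python with NumPy; it is only an illustration, and the gap floor `eps`, the default parameters and the per-step sorting are numerical conveniences of the sketch rather than features of the continuous process.

```python
import numpy as np

def particles_em(n=32, beta=2.0, Vprime=lambda x: x, T=5.0, dt=5e-4, eps=1e-3, seed=1):
    """Crude Euler-Maruyama sketch of the particle system (1.3) with confinement V
    (its derivative V' is passed in).  The floor eps on the gaps is a numerical
    regularization of the singular repulsion, absent from the true dynamics."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-2.0, 2.0, n)           # well-separated ordered initial condition
    for _ in range(int(T / dt)):
        diff = x[:, None] - x[None, :]      # x_i - x_j
        np.fill_diagonal(diff, np.inf)      # the i = j terms contribute 0 below
        diff = np.sign(diff) * np.maximum(np.abs(diff), eps)
        drift = -Vprime(x) + (beta / n) * np.sum(1.0 / diff, axis=1)
        x = x + drift * dt + np.sqrt(2.0 * dt / n) * rng.normal(size=n)
        x.sort()                            # keep the ordered (Weyl chamber) labelling
    return x

x = particles_em()
print("empirical second moment:", np.mean(x**2))   # close to beta/2 = 1 for V(x) = x^2/2
```

With the default quadratic confinement $V(x)=\frac{x^{2}}{2}$ and $\beta=2$, the printed empirical second moment should be close to $\frac{\beta}{2}=1$, in line with (2.12).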
Three exactly solvable cases are known: * • $\mathrm{e}^{-V(x)}=\mathrm{e}^{-\frac{x^{2}}{2}}$: the DOU process associated to the Gaussian law weight and the $\beta$-Hermite ensemble including HOE/HUE/HSE when $\beta\in\\{1,2,4\\}$, * • $\mathrm{e}^{-V(x)}=x^{a-1}\mathrm{e}^{-x}\mathbf{1}_{x\in[0,\infty)}$: the Dyson–Laguerre process associated to the Gamma law weight and the $\beta$-Laguerre ensembles including LOE/LUE/LSE when $\beta\in\\{1,2,4\\}$, * • $\mathrm{e}^{-V(x)}=x^{a-1}(1-x)^{b-1}\mathbf{1}_{x\in[0,1]}$: the Dyson–Jacobi process associated to the Beta law weight and the $\beta$-Jacobi ensembles including JOE/JUE/JSE when $\beta\in\\{1,2,4\\}$, up to a scaling. Following Lassalle [49, 51, 50, 4] and Bakry [5], in these three cases, the multivariate orthogonal polynomials of the invariant law $P_{n}^{\beta}$ are the eigenfunctions of the dynamics of the process. We refer to [35, 32, 54] for more information on (H/L/J)$\beta$E random matrix models. The contraction property or spectral projection used to pass from a matrix process to the Dyson process can be used to pass from BM on the unitary group to the Dyson circular process for which the invariant law is the Circular Unitary Ensemble (CUE). This provides an upper bound for the cutoff phenomenon. The cutoff for BM on the unitary group is known and holds at a critical time of order $\log(n)$, see for instance [64, 62, 55]. More generally, we could ask about the cutoff phenomenon for a McKean–Vlasov type interacting particle system ${(X^{n}_{t})}_{t\geq 0}$ in $(\mathbb{R}^{d})^{n}$ solution of the stochastic differential equation of the form $\mathrm{d}X^{n,i}_{t}=\sigma_{n,t}(X^{n})\mathrm{d}B^{n}_{t}-\nabla V_{n,t}(X^{n,i}_{t})\mathrm{d}t-\sum_{j\neq i}\nabla W_{n,t}(X^{n,i}_{t}-X^{n,j}_{t})\mathrm{d}t,\quad 1\leq i\leq n,$ (2.16) for various types of confinement $V$ and interaction $W$ (convex, repulsive, attractive, repulsive-attractive, etc.), and discuss the relation with the propagation of chaos. The case where $V$ and $W$ are both convex and constant in time is already very well studied from the point of view of long-time behavior and mean-field limit in relation with convexity, see for instance [21, 20, 53]. Regarding universality, it is worth noting that if $V=\left|\cdot\right|^{2}$ and if $W$ is convex then the proof by factorization of the optimal Poincaré and logarithmic Sobolev inequalities and their extremal functions given in [25] remains valid, paving the way to the generalization of many of our results in this spirit. On the other hand, the convexity of the limiting energy functional in the mean-field limit is of Bochner type and suggests taking for $W$ a power, in other words a Riesz type interaction.
The invariant law of ${(Y^{n}_{t})}_{t\geq 0}$ is $\frac{\mathrm{e}^{-\frac{n}{2\sigma^{2}}|y|^{2}}}{C_{n}^{\beta}}\prod_{i>j}(y_{i}-y_{j})^{\beta}\mathbf{1}_{(y_{1},\ldots,y_{n})\in\overline{D}_{n}}\mathrm{d}y_{1}\cdots\mathrm{d}y_{n}$ (2.18) where $C_{n}^{\beta}$ is the normalizing constant. This law and its normalization $C_{n}^{\beta}$ depend on the “shape parameter” $\beta$ and the “scale parameter” $\sigma$, but do not depend on the “speed parameter” $\alpha$. When $\beta>0$, taking $\sigma^{2}=\beta^{-1}$, the stochastic differential equation (2.17) boils down to $Y^{n}_{0}=\frac{x^{n}_{0}}{\sqrt{\beta}},\quad\mathrm{d}Y_{t}^{n,i}=\sqrt{\frac{2\alpha}{n\beta}}\mathrm{d}B^{i}_{t}-\alpha Y_{t}^{n,i}\mathrm{d}t+\frac{\alpha}{n}\sum_{j\neq i}\frac{\mathrm{d}t}{Y_{t}^{n,i}-Y_{t}^{n,j}},\quad 1\leq i\leq n$ (2.19) while the invariant law becomes $\frac{\mathrm{e}^{-\frac{n\beta}{2}|y|^{2}}}{C_{n}^{\beta}}\prod_{i>j}(y_{i}-y_{j})^{\beta}\mathbf{1}_{(y_{1},\cdots,y_{n})\in\overline{D}_{n}}\mathrm{d}y_{1}\cdots\mathrm{d}y_{n}.$ (2.20) The equation (2.19) is the one considered in [37, Eq. (12.4)] and in [46, Eq. (1.1)]. The advantage of (2.19) is that $\beta$ can now be truly interpreted as an inverse temperature and the right-hand side in the analogue of (2.8) does not depend on $\beta$, while the drawback is that we cannot turn off the interaction by setting $\beta=0$ and recover the OU process as in (1.3). It is worthwhile mentioning that, for instance, Theorem 1.7 remains the same for the process solving (2.19); in particular, the cutoff threshold is at the critical time $\frac{c_{n}}{\alpha}$ and does not depend on $\beta$. ### 2.10. Discrete models There are several discrete space Markov processes admitting the OU process as a scaling limit, such as for instance the random walk on the discrete hypercube, related to the Ehrenfest model, for which the cutoff has been studied in [30, 29], and the M/M/$\infty$ queuing process, for which a discrete Mehler formula is available [24]. Certain discrete space Markov processes incorporate a singular repulsion mechanism, such as for instance the exclusion process on the segment, for which the study of the cutoff in [48] shares similarities with our proof of Theorem 1.7. It is worthwhile noting that there are discrete Coulomb gases, related to orthogonal polynomials for discrete measures, suggesting the study of discrete Dyson processes. More generally, it could be natural to study the cutoff phenomenon for Markov processes on infinite discrete state spaces, under a curvature condition, even if the subject is notoriously disappointing in terms of high-dimensional analysis. We refer to the recent work [61] for the finite state space case. ## 3\. Cutoff phenomenon for the OU In this section, we prove Theorems 1.1 and 1.2: actually we only prove the latter since it implies the former. We start by recalling a well-known fact. ###### Lemma 3.1 (Mehler formula).
If ${(Y_{t})}_{t\geq 0}$ is an OU process in $\mathbb{R}^{d}$ solution of the stochastic differential equation $Y_{0}=y_{0}\in\mathbb{R}^{d}$ and $\mathrm{d}Y_{t}=\sigma\mathrm{d}B_{t}-\mu Y_{t}\mathrm{d}t$ for parameters $\sigma>0$ and $\mu>0$ where $B$ is a standard $d$-dimensional Brownian motion then ${(Y_{t})}_{t\geq 0}={\Bigr{(}y_{0}\mathrm{e}^{-\mu t}+\sigma\int_{0}^{t}\mathrm{e}^{\mu(s-t)}\mathrm{d}B_{s}\Bigr{)}}_{t\geq 0}\text{ hence }Y_{t}\sim\mathcal{N}\Bigr{(}y_{0}\mathrm{e}^{-\mu t},\frac{\sigma^{2}}{2}\frac{1-\mathrm{e}^{-2\mu t}}{\mu}\mathrm{I}_{d}\Bigr{)}\text{ for all $t\geq 0$.}$ Moreover its coordinates are independent one-dimensional OU processes with initial condition $y_{0}^{i}$ and invariant law $\mathcal{N}(0,\frac{\sigma^{2}}{2\mu})$, $1\leq i\leq d$. ###### Proof of Theorem 1.1 and Theorem 1.2. By using Lemma 3.1, for all $n\geq 1$ and $t\geq 0$, $Z^{n}_{t}\sim\mathcal{N}\Bigr{(}z^{n}_{0}\mathrm{e}^{-t},\frac{1-\mathrm{e}^{-2t}}{n}I_{n}\Bigr{)}=\otimes_{i=1}^{n}\mathcal{N}\Bigr{(}z^{n,i}_{0}\mathrm{e}^{-t},\frac{1-\mathrm{e}^{-2t}}{n}\Bigr{)},\ P_{n}^{0}=\mathcal{N}\Bigr{(}0,\frac{I_{n}}{n}\Bigr{)}=\mathcal{N}\Bigr{(}0,\frac{1}{n}\Bigr{)}^{\otimes n}.$ (3.1) _Hellinger, Kullback, $\chi^{2}$, Fisher, and Wasserstein cutoffs._ A direct computation from (3.1) or Lemma A.5 either from multivariate Gaussian formulas or univariate via tensorization gives $\displaystyle\mathrm{Hellinger}^{2}(\mathrm{Law}(Z^{n}_{t}),P_{n}^{0})$ $\displaystyle=1-\exp\Bigr{(}-\frac{n}{4}\frac{|z^{n}_{0}|^{2}\mathrm{e}^{-2t}}{2-\mathrm{e}^{-2t}}+\frac{n}{4}\log\Bigr{(}4\frac{1-\mathrm{e}^{-2t}}{(2-\mathrm{e}^{-2t})^{2}}\Bigr{)}\Bigr{)},$ (3.2) $\displaystyle 2\mathrm{Kullback}(\mathrm{Law}(Z^{n}_{t})\mid P_{n}^{0})$ $\displaystyle=n|z^{n}_{0}|^{2}\mathrm{e}^{-2t}-n\mathrm{e}^{-2t}-n\log(1-\mathrm{e}^{-2t}),$ (3.3) $\displaystyle\chi^{2}(\mathrm{Law}(Z^{n}_{t})\mid P_{n}^{0})$ $\displaystyle=-1+\frac{1}{(1-\mathrm{e}^{-4t})^{n/2}}\exp\Bigr{(}n|z_{0}^{n}|^{2}\frac{\mathrm{e}^{-2t}}{1+\mathrm{e}^{-2t}}\Bigr{)},$ (3.4) $\displaystyle\mathrm{Fisher}(\mathrm{Law}(Z^{n}_{t})\mid P_{n}^{0})$ $\displaystyle=n^{2}|z^{n}_{0}|^{2}\mathrm{e}^{-2t}+n^{2}\frac{\mathrm{e}^{-4t}}{1-\mathrm{e}^{-2t}},$ (3.5) $\displaystyle\mathrm{Wasserstein}^{2}(\mathrm{Law}(Z^{n}_{t}),P_{n}^{0})$ $\displaystyle=|z^{n}_{0}|^{2}\mathrm{e}^{-2t}+2(1-\sqrt{1-\mathrm{e}^{-2t}}-\frac{1}{2}\mathrm{e}^{-2t}),$ (3.6) which gives the desired lower and upper bounds as before by using the hypothesis on $z^{n}_{0}$. _Total variation cutoff._ By using the comparison between total variation and Hellinger distances (Lemma A.1) we deduce from (3.2) the cutoff in total variation distance at the same critical time. The upper bound for the total variation distance can alternatively be obtained by using the $\mathrm{Kullback}$ estimate (3.3) and the Pinsker–Csiszár–Kullback inequality (Lemma A.1). Since both distributions are tensor products, we could use alternatively the tensorization property of the total variation distance (Lemma A.4) together with the one-dimensional version of the Gaussian formula for $\mathrm{Kullback}$ (Lemma A.1) to obtain the result for the total variation. ∎ ###### Remark 3.2 (Competition between bias and variance mixing). 
From the computations of the proof of Theorem 1.2, we can show that for $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\chi^{2}\\}$ $A_{t}:=\mathrm{dist}(\mathrm{Law}(Z^{n}_{t})\mid\mathrm{Law}(Z^{n}_{t}-z^{n}_{0}\mathrm{e}^{-t}))$ has a cutoff at time $c_{n}^{A}=\log(\sqrt{n}|z^{n}_{0}|)$, while $B_{t}:=\mathrm{dist}(\mathrm{Law}(Z^{n}_{t}-z^{n}_{0}\mathrm{e}^{-t})\mid P_{n}^{0})$ admits a cutoff at time $c_{n}^{B}=\frac{1}{4}\log(n)$. The triangle inequality for $\mathrm{dist}$ yields $|A_{t}-B_{t}|\leq\mathrm{dist}(\mathrm{Law}(Z^{n}_{t})\mid P_{n}^{0})\leq A_{t}+B_{t}.$ Therefore the critical time of Theorem 1.2 is dictated by either $A_{t}$ or $B_{t}$, according to whether $c_{n}^{A}\gg c_{n}^{B}$ or $c_{n}^{A}\ll c_{n}^{B}$. This can be seen as a competition between bias and variance mixing. ###### Remark 3.3 (Total variation discriminating event for small initial conditions). Let us introduce the random variable $Z^{n}_{\infty}\sim P_{n}^{0}=\mathcal{N}(0,\frac{1}{n}I_{n})=\mathcal{N}(0,\frac{1}{n})^{\otimes n}$, in accordance with (3.1). There holds $S_{t}^{n}:=\sum_{i=1}^{n}(Z_{t}^{n,i}-z_{0}^{n,i}\mathrm{e}^{-t})^{2}\sim\mathrm{Gamma}\Bigr{(}\frac{n}{2},\frac{n}{2(1-\mathrm{e}^{-2t})}\Bigr{)}\quad\text{and}\quad|Z^{n}_{\infty}|^{2}\sim\mathrm{Gamma}\Bigr{(}\frac{n}{2},\frac{n}{2}\Bigr{)}.$ We can check, using an explicit computation of Hellinger and Kullback between Gamma distributions and the comparison between total variation and Hellinger distances (Lemma A.1), that $C_{t}:=\mathrm{dist}(\mathrm{Law}(S^{n}_{t})\mid\mathrm{Law}(|Z^{n}_{\infty}|^{2}))$ admits a cutoff at time $c_{n}^{C}=c_{n}^{B}=\frac{1}{4}\log(n)$. Moreover, one can exhibit a discriminating event for the TV distance. Namely, we can observe that $\Bigr{\|}\mathrm{Gamma}\Bigr{(}\frac{n}{2},\frac{n}{2(1-\mathrm{e}^{-2t})}\Bigr{)}-\mathrm{Gamma}\Bigr{(}\frac{n}{2},\frac{n}{2}\Bigr{)}\Bigr{\|}_{\mathrm{TV}}=\mathbb{P}(|Z^{n}_{\infty}|^{2}\geq\alpha_{t})-\mathbb{P}(S_{t}^{n}\geq\alpha_{t})$ with $\alpha_{t}$ the unique point where the two densities meet, which happens to be $\alpha_{t}=-\mathrm{e}^{2t}\log(1-\mathrm{e}^{-2t})(1-\mathrm{e}^{-2t}).$ From the explicit expressions (3.2), (3.3), (3.4), (3.5), (3.6), it is straightforward to extract the cutoff profile associated to the convergence of $\mathrm{Law}(Z_{t}^{n})$ to $P_{n}^{0}$ in Hellinger, Kullback, $\chi^{2}$, Fisher and Wasserstein. For Wasserstein we already know by Theorem 1.2 that a cutoff occurs if and only if $|z_{0}^{n}|\underset{n\to\infty}{\to}\infty$. In this case, regarding the profile, we have $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(Z_{t_{n,b}}^{n}),P_{n}^{0})=\phi(b),$ (3.7) where for all $b\in\mathbb{R}$, $t_{n,b}=\log|z_{0}^{n}|+b\quad\text{and}\quad\phi(b)=\mathrm{e}^{-b}.$ (3.8) For the other distances and divergences, let us assume that the following limit exists: $a:=\lim_{n\to\infty}\sqrt{n}|z_{0}^{n}|^{2}\in[0,+\infty].$ (3.9) This quantity can be related to $c_{n}^{A}:=\log(|z_{0}^{n}|\sqrt{n})\quad\text{and}\quad c_{n}^{B}:=\frac{\log n}{4}$ (3.10) which were already introduced in Remark 3.2. Indeed $a=0\Longleftrightarrow c_{n}^{A}\ll c_{n}^{B},\quad a=+\infty\Longleftrightarrow c_{n}^{A}\gg c_{n}^{B},$ while $a\in(0,\infty)$ is equivalent to $c_{n}^{A}\asymp c_{n}^{B}$.
Then, for $\mathrm{dist}\in\\{\mathrm{Hellinger},\mathrm{Kullback},\chi^{2},\mathrm{Fisher}\\}$, we have, for all $b\in\mathbb{R}$, $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(Z^{n}_{t_{n,b}})\mid P_{n}^{0})=\phi(b),$ (3.11) where $t_{n,b}$ and $\phi(b)$ are as in Table 1. The cutoff window is always of size $1$.

|  | $a=+\infty$ | $a=0$ | $a\in(0,+\infty)$ |
|---|---|---|---|
| $t_{n,b}$ for Hellinger | $\log(\lvert z_{0}^{n}\rvert\sqrt{n})+b$ | $\frac{\log n}{4}+b$ | $\frac{\log n}{4}+b$ |
| $t_{n,b}$ for Kullback | $\log(\lvert z_{0}^{n}\rvert\sqrt{n})+b$ | $\frac{\log n}{4}+b$ | $\frac{\log n}{4}+b$ |
| $t_{n,b}$ for $\chi^{2}$ | $\log(\lvert z_{0}^{n}\rvert\sqrt{n})+b$ | $\frac{\log n}{4}+b$ | $\frac{\log n}{4}+b$ |
| $t_{n,b}$ for Fisher | $\log(\lvert z_{0}^{n}\rvert n)+b$ | $\frac{\log n}{2}+b$ | $\frac{\log n}{2}+b$ |
| $\phi(b)$ for Hellinger | $\sqrt{1-\mathrm{e}^{-\frac{1}{8}\mathrm{e}^{-2b}}}$ | $\sqrt{1-\mathrm{e}^{-\frac{1}{16}\mathrm{e}^{-4b}}}$ | $\sqrt{1-\mathrm{e}^{-\frac{1}{8}a\mathrm{e}^{-2b}-\frac{1}{16}\mathrm{e}^{-4b}}}$ |
| $\phi(b)$ for Kullback | $\frac{1}{2}\mathrm{e}^{-2b}$ | $\frac{1}{4}\mathrm{e}^{-4b}$ | $\frac{1}{2}a\mathrm{e}^{-2b}+\frac{1}{4}\mathrm{e}^{-4b}$ |
| $\phi(b)$ for $\chi^{2}$ | $\mathrm{e}^{\mathrm{e}^{-2b}}-1$ | $\mathrm{e}^{\frac{1}{2}\mathrm{e}^{-4b}}-1$ | $\mathrm{e}^{a\mathrm{e}^{-2b}+\frac{1}{2}\mathrm{e}^{-4b}}-1$ |
| $\phi(b)$ for Fisher | $\mathrm{e}^{-2b}$ | $\mathrm{e}^{-4b}$ | $a\mathrm{e}^{-2b}+\mathrm{e}^{-4b}$ |

Table 1. Values of $t_{n,b}$ and $\phi(b)$ for the cutoff profile of the OU process in (3.11).

Since the total variation distance is not expressed in a simple explicit manner, further computations are needed to extract the precise cutoff profile, which is given in the following lemma: ###### Lemma 3.4 (Cutoff profile in $\mathrm{TV}$ for OU). Let $Z^{n}=(Z_{t}^{n})_{t\geq 0}$ be the OU process (1.8), started from $z_{0}^{n}\in\mathbb{R}^{n}$, and let $P_{n}^{0}$ be its invariant law. Assume as in (3.9) that $a:=\lim_{n\to\infty}|z_{0}^{n}|^{2}\sqrt{n}\in[0,+\infty]$, and let $t_{n,b}$ be as in Table 1 for Hellinger. Then, for all $b\in\mathbb{R}$, we have $\lim_{n\to\infty}\|\mathrm{Law}({Z}_{t_{n,b}}^{n})-P_{n}^{0}\|_{\mathrm{TV}}=\phi(b),$ where $\phi(b):=\begin{cases}\displaystyle\mathrm{erf}\Bigr{(}\frac{\mathrm{e}^{-b}}{2\sqrt{2}}\Bigr{)}&\text{if $a=+\infty$}\\\\[10.00002pt] \displaystyle\mathrm{erf}\Bigr{(}\frac{\mathrm{e}^{-2b}}{4}\Bigr{)}&\text{if $a=0$}\\\\[10.00002pt] \displaystyle\mathrm{erf}\Bigr{(}\frac{\sqrt{2a\mathrm{e}^{-2b}+\mathrm{e}^{-4b}}}{4}\Bigr{)}&\text{if $a\in(0,+\infty)$}\end{cases},$ where $\displaystyle\mathrm{erf}(u):=\frac{1}{\sqrt{\pi}}\int_{|t|\leq u}\mathrm{e}^{-t^{2}}\mathrm{d}t=\mathbb{P}(|X|\leq\sqrt{2}u)$ with $X\sim\mathcal{N}(0,1)$ is the _error function_. ###### Proof of Lemma 3.4. The idea is to exploit the fact that we consider Gaussian product measures (the covariance matrices are multiples of the identity), which allows a finer analysis than for instance in [27, Le. 3.1]. We begin with a rather general step. Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}^{n}$ with densities $f$ and $g$ with respect to the Lebesgue measure $\mathrm{d}x$.
We have then $\|\mu-\nu\|_{\mathrm{TV}}=\frac{1}{2}\int|f-g|\mathrm{d}x=\frac{1}{2}\Bigr{(}\int(f-g)\mathbf{1}_{g\leq f}\mathrm{d}x-\int(f-g)\mathbf{1}_{f\leq g}\mathrm{d}x\Bigr{)},$ (3.12) and since $-\int(f-g)\mathbf{1}_{f\leq g}\mathrm{d}x=-\int(f-g)(1-\mathbf{1}_{g<f})\mathrm{d}x=\int(f-g)\mathbf{1}_{g<f}\mathrm{d}x=\int(f-g)\mathbf{1}_{g\leq f}\mathrm{d}x,$ we obtain $\|\mu-\nu\|_{\mathrm{TV}}=\int(f-g)\mathbf{1}_{g\leq f}\mathrm{d}x=\mu(g\leq f)-\nu(g\leq f).$ (3.13) In particular, when $\mu=\mathcal{N}(m_{1},\sigma_{1}^{2}I_{n})$ and $\nu=\mathcal{N}(m_{2},\sigma_{2}^{2}I_{n})$ then $g(x)\leq f(x)$ is equivalent to $\psi(x):=\frac{|x-m_{1}|^{2}}{\sigma_{1}^{2}}-\frac{|x-m_{2}|^{2}}{\sigma_{2}^{2}}\leq n\log\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}},$ (3.14) for all $x\in\mathbb{R}^{n}$, and therefore, with $Z_{1}\sim\mu$ and $Z_{2}\sim\nu$, we get $\|\mu-\nu\|_{\mathrm{TV}}=\mathbb{P}\Bigr{(}\psi(Z_{1})\leq n\log\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}}\Bigr{)}-\mathbb{P}\Bigr{(}\psi(Z_{2})\leq n\log\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}}\Bigr{)}.$ (3.15) Let us assume from now on that $m_{2}=0$ and $\sigma_{1}\neq\sigma_{2}$. We can then gather the quadratic terms as $\psi(x)=\Bigr{(}1-\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\Bigr{)}\frac{|x-\tilde{m}_{1}|^{2}}{\sigma_{1}^{2}}+\Bigr{(}\frac{1}{\sigma_{1}^{2}}-\frac{1}{(1-\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}})\sigma_{1}^{2}}\Bigr{)}|m_{1}|^{2}\quad\text{where}\quad\tilde{m}_{1}:=\frac{1}{1-\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}}m_{1}.$ (3.16) We observe at this step that the random variable $\frac{|Z_{1}-\tilde{m}_{1}|^{2}}{\sigma_{1}^{2}}$ follows a noncentral chi- squared distribution, which depends only on $n$ and on the noncentrality parameter $\lambda_{1}:=\frac{|m_{1}-\tilde{m}_{1}|^{2}}{\sigma_{1}^{2}}=\frac{\sigma_{1}^{2}}{(\sigma_{2}^{2}-\sigma_{1}^{2})^{2}}|m_{1}|^{2}.$ (3.17) Similarly, the random variable $\frac{|Z_{2}-\tilde{m}_{1}|^{2}}{\sigma_{2}^{2}}$ follows a noncentral chi- squared distribution, which depends only on $n$ and on the noncentrality parameter $\lambda_{2}:=\frac{|\tilde{m}_{1}|^{2}}{\sigma_{1}^{2}}=\frac{\sigma_{2}^{4}}{\sigma_{1}^{2}(\sigma_{2}^{2}-\sigma_{1}^{2})^{2}}|m_{1}|^{2}.$ (3.18) It follows that the law of $\psi(Z_{1})$ and the law of $\psi(Z_{2})$ depend over $m_{1}$ only via $|m_{1}|$. Hence $\psi(Z_{1})\overset{\mathrm{d}}{=}X_{1}+\cdots+X_{n}\quad\text{and}\quad\psi(Z_{2})\overset{\mathrm{d}}{=}Y_{1}+\cdots+Y_{n}$ (3.19) where $X_{1},\ldots,X_{n}$ and $Y_{1},\ldots,Y_{n}$ are two sequences of i.i.d. random variables for which the mean and variance depends only (and explicitly) on $|m_{1}|$, $\sigma_{1}$, $\sigma_{2}$. Note in particular that these means and variances are given by $\frac{1}{n}$ the ones of $\psi(Z_{1})$ and $\psi(Z_{2})$. 
Now we specialize to the case where $\mu=\mathrm{Law}(Z^{n}_{t})=\mathcal{N}(z_{0}^{n}\mathrm{e}^{-t},\frac{1-\mathrm{e}^{-2t}}{n}I_{n})$ and $\nu=\mathrm{Law}(Z^{n}_{\infty})=\mathcal{N}(0,\frac{1}{n}I_{n})=P_{n}^{0}$, and we find $\mathbb{E}[\psi(Z_{1})]=n\Bigr{(}1-\frac{\sigma_{t}^{2}}{\sigma_{\infty}^{2}}\Bigr{)}-\frac{|z_{0}^{n}|^{2}\mathrm{e}^{-2t}}{\sigma_{\infty}^{2}},\quad\mathbb{E}[\psi(Z_{2})]=n\Bigr{(}\frac{\sigma_{\infty}^{2}}{\sigma_{t}^{2}}-1\Bigr{)}+\frac{|z_{0}^{n}|^{2}\mathrm{e}^{-2t}}{\sigma_{t}^{2}}$ while $\mathrm{Var}[\psi(Z_{1})]=2n\Bigr{(}\frac{1}{\sigma_{t}^{2}}-\frac{1}{\sigma_{\infty}^{2}}\Bigr{)}^{2}\sigma_{t}^{4}+4\frac{\sigma_{t}^{2}}{\sigma_{\infty}^{4}}|z_{0}^{n}|^{2}\mathrm{e}^{-2t},\quad\mathrm{Var}[\psi(Z_{2})]=2n\Bigr{(}\frac{1}{\sigma_{t}^{2}}-\frac{1}{\sigma_{\infty}^{2}}\Bigr{)}^{2}\sigma_{\infty}^{4}+4\frac{\sigma_{\infty}^{2}}{\sigma_{t}^{4}}|z_{0}^{n}|^{2}\mathrm{e}^{-2t}.$ Let $t=t_{n,b}$ be as in Table 1 for Hellinger. Using (3.15) and the central limit theorem for the i.i.d. random variables $X_{1},\ldots,X_{n}$ and $Y_{1},\ldots,Y_{n}$, we get, with $Z\sim\mathcal{N}(0,1)$, $\bigr{\|}\mathrm{Law}({Z}_{t}^{n})-P_{n}^{0}\bigr{\|}_{\mathrm{TV}}=\mathbb{P}(Z\leq\gamma_{n,t})-\mathbb{P}(Z\leq\tilde{\gamma}_{n,t})+o_{n}(1),$ where $\gamma_{n,t}:=\frac{-n\log(1-\mathrm{e}^{-2t})-\mathbb{E}[\psi(Z^{n}_{t})]}{\sqrt{\mathrm{Var}[\psi(Z^{n}_{t})]}},\quad\tilde{\gamma}_{n,t}:=\frac{-n\log(1-\mathrm{e}^{-2t})-\mathbb{E}[\psi(Z^{n}_{\infty})]}{\sqrt{\mathrm{Var}[\psi(Z^{n}_{\infty})]}}.$ Expanding $\gamma_{n,t_{n,b}}$ gives the cutoff profile. Let us detail the computations in the most involved case $\lim_{n\to\infty}|z_{0}^{n}|^{2}\sqrt{n}=a\in(0,+\infty)$. For all $b\in\mathbb{R}$, recall $t_{n,b}=\frac{\log n}{4}+b$. One may check that $-n\log(1-\mathrm{e}^{-2t_{n,b}})-\mathbb{E}[\psi(Z^{n}_{t_{n,b}})]=\frac{1}{2}\mathrm{e}^{-4b}+a\mathrm{e}^{-2b}+o_{n}(1),$ $-n\log(1-\mathrm{e}^{-2t_{n,b}})-\mathbb{E}[\psi(Z^{n}_{\infty})]=-\frac{1}{2}\mathrm{e}^{-4b}-a\mathrm{e}^{-2b}+o_{n}(1),$ $\mathrm{Var}[\psi(Z^{n}_{t_{n,b}})]=2\mathrm{e}^{-4b}+4a\mathrm{e}^{-2b}+o_{n}(1),\quad\mathrm{Var}[\psi(Z^{n}_{\infty})]=2\mathrm{e}^{-4b}+4a\mathrm{e}^{-2b}+o_{n}(1).$ It follows that $\lim_{n\to\infty}\bigr{\|}\mathrm{Law}({Z}_{t_{n,b}}^{n})-P_{n}^{0}\bigr{\|}_{\mathrm{TV}}=\mathbb{P}\Bigr{(}|Z|\leq\frac{1}{2\sqrt{2}}\sqrt{\mathrm{e}^{-4b}+2a\mathrm{e}^{-2b}}\Bigr{)}=\mathrm{erf}\Bigr{(}\frac{1}{4}\sqrt{\mathrm{e}^{-4b}+2a\mathrm{e}^{-2b}}\Bigr{)}.$ The other cases are similar. ∎ ## 4\. General exactly solvable aspects In this section, we prove Theorem 1.3 and Corollary 1.4. The proof of Theorem 1.3 is based on the fact that the polynomial functions $\pi(x)=x_{1}+\cdots+x_{n}$ and $|x|^{2}=x_{1}^{2}+\cdots+x_{n}^{2}$ are, up to an additive constant for the second, eigenfunctions of the dynamics associated to the spectral values $-1$ and $-2$ respectively, and that their “carré du champ” is affine. In the matrix cases $\beta\in\\{1,2\\}$, these functions correspond to the dynamics of the trace, the dynamics of the squared Hilbert–Schmidt trace norm, and the dynamics of the squared trace. It is remarkable that this phenomenon survives beyond these matrix cases, yet another manifestation of the Gaussian “ghosts” concept due to Edelman, see for instance [34]. ###### Proof of Theorem 1.3. 
The process $Y_{t}:=\pi(X_{t}^{n})$ solves $\mathrm{d}Y_{t}=\sum_{i=1}^{n}\mathrm{d}X_{t}^{n,i}=\sqrt{\frac{2}{n}}\sum_{i=1}^{n}\mathrm{d}B^{i}_{t}-\sum_{i=1}^{n}X_{t}^{n,i}\mathrm{d}t+\frac{\beta}{n}\sum_{j\neq i}\frac{\mathrm{d}t}{X_{t}^{n,i}-X_{t}^{n,j}}.$ By symmetry, the double sum vanishes. Note that the process $W_{t}:=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}B^{i}_{t}$ is a standard one dimensional BM, so that $\mathrm{d}Y_{t}=\sqrt{2}\mathrm{d}W_{t}-Y_{t}\mathrm{d}t.$ This proves the first part of the statement. We turn to the second part. Recall that $X_{t}\in D_{n}$ for all $t>0$. By Itô’s formula $\mathrm{d}(X_{t}^{n,i})^{2}=\sqrt{\frac{8}{n}}X_{t}^{n,i}\mathrm{d}B^{i}_{t}-2(X_{t}^{n,i})^{2}\mathrm{d}t+2\frac{\beta}{n}X_{t}^{n,i}\sum_{j:j\neq i}\frac{\mathrm{d}t}{X_{t}^{n,i}-X_{t}^{n,j}}+\frac{2}{n}\mathrm{d}t.$ Set $W_{t}:=\sum_{i=1}^{n}\int_{0}^{t}\frac{X_{s}^{n,i}}{|X^{n}_{s}|}\mathrm{d}B^{i}_{s}$. The process ${(W_{t})}_{t\geq 0}$ is a BM by the Lévy characterization since $\langle W\rangle_{t}=\int_{0}^{t}\frac{\sum_{i=1}^{n}(X^{n,i}_{s})^{2}}{|X^{n}_{s}|^{2}}\mathrm{d}s=t.$ Furthermore, a simple computation shows that $\sum_{i=1}^{n}X_{t}^{n,i}\sum_{j:j\neq i}\frac{1}{X_{t}^{n,i}-X_{t}^{n,j}}=\frac{n(n-1)}{2}.$ Consequently the process $R_{t}:=|X_{t}^{n}|^{2}$ solves $\mathrm{d}R_{t}=\sqrt{\frac{8}{n}R_{t}}\mathrm{d}W_{t}+\Big{(}2+\beta(n-1)-2R_{t}\Big{)}\mathrm{d}t,$ and is therefore a CIR process of parameters $a=2+\beta(n-1)$, $b=2$, and $\sigma=\sqrt{8/n}$. When $d=\frac{\beta}{2}n^{2}+(1-\frac{\beta}{2})n$ is a positive integer, the last property of the statement follows from the connection between OU and CIR recalled right before the statement of the theorem. ∎ The last proof actually relies on the following general observation. Let $X$ be an $n$-dimensional continuous semi-martingale solution of $\mathrm{d}X_{t}=\sigma(X_{t})\mathrm{d}B_{t}+b(X_{t})\mathrm{d}t$ where $B$ is a $n$-dimensional standard BM, and where $x\in\mathbb{R}^{n}\mapsto\sigma(x)\in\mathcal{M}_{n,n}(\mathbb{R})\quad\text{and}\quad x\in\mathbb{R}^{n}\mapsto b(x)\in\mathbb{R}^{n}$ are Lipschitz. The infinitesimal generator of the Markov semigroup is given by $\operatorname{G}(f)(x)=\frac{1}{2}\sum_{i,j=1}^{n}a_{i,j}(x)\partial_{i,j}f(x)+\sum_{i=1}^{n}b_{i}(x)\partial_{i}f(x),\quad\text{where}\quad a(x)=\sigma(x)(\sigma(x))^{\top},$ for all $f\in\mathcal{C}^{2}(\mathbb{R}^{n},\mathbb{R})$ and $x\in\mathbb{R}^{n}$. Then, by Itô’s formula, the process $M^{f}={(M^{f}_{t})}_{t\geq 0}$ given by $M^{f}_{t}=f(X_{t})-f(X_{0})-\int_{0}^{t}(\operatorname{G}f)(X_{s})\mathrm{d}s=\sum_{i,k=1}^{n}\int_{0}^{t}\partial_{i}f(X_{s})\sigma_{i,k}(X_{s})\mathrm{d}B_{s}^{k}$ is a local martingale, and moreover, for all $t\geq 0$, $\langle M^{f}\rangle_{t}=\int_{0}^{t}\Gamma(f)(X_{s})\mathrm{d}s\quad\text{where}\quad\Gamma(f)(x)=|\sigma(x)^{\top}\nabla f(x)|^{2}=a(x)\nabla f\cdot\nabla f.$ The functional quadratic form $\Gamma$ is known as the “carré du champ” operator. 
If $f$ is an eigenfunction of $\operatorname{G}$ associated to the spectral value $\lambda$ in the sense that $\operatorname{G}f=\lambda f$ (note by the way that $\lambda\leq 0$ since $\operatorname{G}$ generates a Markov process), then we get $f(X_{t})=f(X_{0})+\lambda\int_{0}^{t}f(X_{s})\mathrm{d}s+M_{t}^{f},\quad\text{in other words}\quad\mathrm{d}f(X_{t})=\mathrm{d}M^{f}_{t}+\lambda f(X_{t})\mathrm{d}t.$ Now if $\Gamma(f)=c$ (as in the first part of the theorem), then by the Lévy characterization of Brownian motion, the continuous local martingale $W:=\frac{1}{\sqrt{c}}M^{f}$ starting from the origin is a standard BM and we recover the result of the first part of the theorem. On the other hand, if $\Gamma(f)=cf$ (as in the second part of the theorem), then by the Lévy characterization of BM the local martingale $W:=\int_{0}^{t}\frac{1}{\sqrt{cf(X_{s})}}\mathrm{d}M_{s}^{f}$ is a standard BM and we recover the result of the second part. At this point, we observe that the infinitesimal generator of the CIR process $R$ is the Laguerre partial differential operator $L(f)(x)=\frac{4}{n}xf^{\prime\prime}(x)+(2+\beta(n-1)-2x)f^{\prime}(x).$ (4.1) This operator leaves invariant the set of polynomials of degree less than or equal to $k$, for all integer $k\geq 0$, a property inherited from (2.5). We will use this property in the following proof. ### 4.1. Proof of Corollary 1.4 By Theorem 1.3, $Z=\pi(X^{n})$ is an OU process in $\mathbb{R}$ solution of the stochastic differential equation $Z_{0}=\pi(X_{0}^{n}),\quad\mathrm{d}Z_{t}=\sqrt{2}\mathrm{d}B_{t}-Z_{t}\mathrm{d}t,$ where $B$ is a standard one-dimensional BM. By Lemma 3.1, $Z_{t}\sim\mathcal{N}(Z_{0}\mathrm{e}^{-t},1-\mathrm{e}^{-2t})$ for all $t\geq 0$ and the equilibrium distribution is $P_{n}^{\beta}\circ\pi^{-1}=\mathcal{N}(0,1)$. Using the contraction property stated in Lemma A.2, the comparison between Hellinger and TV of Lemma A.1 and the explicit expressions for Gaussian distributions of Lemma A.5, we find $\displaystyle\|\mathrm{Law}(X^{n}_{t})-P_{n}^{\beta}\|_{\mathrm{TV}}$ $\displaystyle\geq\|\mathrm{Law}(Z_{t})-P_{n}^{\beta}\circ\pi^{-1}\|_{\mathrm{TV}}$ $\displaystyle\geq\mathrm{Hellinger}^{2}(\mathrm{Law}(Z_{t}),P_{n}^{\beta}\circ\pi^{-1})$ $\displaystyle=1-\frac{(1-\mathrm{e}^{-2t})^{1/4}}{(1-\frac{1}{2}\mathrm{e}^{-2t})^{1/2}}\exp\Bigr{(}-\frac{\pi(X^{n}_{0})^{2}\mathrm{e}^{-2t}}{4(2-\mathrm{e}^{-2t})}\Bigr{)}.$ Setting $c_{n}:=\log(|\pi(X^{n}_{0})|)$ and assuming that $\lim_{n\to\infty}c_{n}=\infty$, we deduce that for all $\varepsilon\in(0,1)$ $\lim_{n\to\infty}\|\mathrm{Law}(X^{n}_{c_{n}(1-\varepsilon)})-P_{n}^{\beta}\|_{\mathrm{TV}}=1.$ The comparison between $\mathrm{Hellinger}$ and $\mathrm{TV}$ of Lemma A.1 allows to deduce that this remains true for the Hellinger distance. We turn to Kullback. 
The contraction property stated in Lemma A.2 and the explicit expressions for Gaussian distributions of Lemma A.5 yield $\displaystyle 2\mathrm{Kullback}(\mathrm{Law}(X^{n}_{t})\mid P_{n}^{\beta})$ $\displaystyle\geq 2\mathrm{Kullback}(\mathrm{Law}(Z_{t})\mid P_{n}^{\beta}\circ\pi^{-1})$ $\displaystyle=\pi(X^{n}_{0})^{2}\mathrm{e}^{-2t}-\mathrm{e}^{-2t}-\log(1-\mathrm{e}^{-2t}).$ This is enough to deduce that $\lim_{n\to\infty}\mathrm{Kullback}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}})\mid P_{n}^{\beta})=+\infty.$ The situation is similar for $\chi^{2}$: the contraction property stated in Lemma A.2 and the explicit expressions for Gaussian distributions of Lemma A.5 yield $\displaystyle\chi^{2}(\mathrm{Law}(X^{n}_{t})\mid P_{n}^{\beta})$ $\displaystyle\geq\chi^{2}(\mathrm{Law}(Z_{t})\mid P_{n}^{\beta}\circ\pi^{-1})$ $\displaystyle=-1+\frac{1}{\sqrt{1-\mathrm{e}^{-4t}}}\exp\left(\frac{\pi(X^{n}_{0})^{2}\mathrm{e}^{-2t}}{1+\mathrm{e}^{-2t}}\right),$ so that $\lim_{n\to\infty}\chi^{2}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}})\mid P_{n}^{\beta})=+\infty.$ Regarding the Wasserstein distance, we have $\left\|\pi\right\|_{\mathrm{Lip}}:=\sup_{x\neq y}\frac{|\pi(x)-\pi(y)|}{|x-y|}\leq\sqrt{n}$ from the Cauchy–Schwarz inequality, and by Lemma A.2, for all probability measures $\mu$ and $\nu$ on $\mathbb{R}^{n}$, $\mathrm{Wasserstein}(\mu\circ\pi^{-1},\nu\circ\pi^{-1})\leq\sqrt{n}\mathrm{Wasserstein}(\mu,\nu).$ (4.2) Using the explicit expressions for Gaussian distributions of Lemma A.5, we thus find $\displaystyle\mathrm{Wasserstein}^{2}(\mathrm{Law}(X_{t}^{n}),P_{n}^{\beta})$ $\displaystyle\geq\frac{1}{n}\mathrm{Wasserstein}^{2}(\mathrm{Law}(Z_{t}),P_{n}^{\beta}\circ\pi^{-1})$ $\displaystyle=\frac{1}{n}\Bigr{(}\pi(X^{n}_{0})^{2}\mathrm{e}^{-2t}+2-\mathrm{e}^{-2t}-2\sqrt{1-\mathrm{e}^{-2t}}\Bigr{)}.$ Setting $c_{n}:=\log\Big{(}\frac{|\pi(x_{0}^{n})|}{\sqrt{n}}\Big{)}$ and assuming $c_{n}\to\infty$ as $n\to\infty$, we thus deduce that for all $\varepsilon\in(0,1)$ $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}}),P_{n}^{\beta})=+\infty.$ ## 5\. The random matrix cases In this section, we prove Theorem 1.5 and Corollary 1.6 that cover the matrix cases $\beta\in\\{1,2\\}$. For these values of $\beta$, the DOU process is the image by the spectral map of a matrix OU process, connected to the random matrix models $\mathrm{GOE}$ and $\mathrm{GUE}$. We could also consider the case $\beta=4$, related to $\mathrm{GSE}$. Beyond these three algebraic cases, it could be possible for an arbitrary $\beta\geq 1$ to use random tridiagonal matrix dynamics associated to $\beta$ Dyson processes, see for instance [44]. The next two subsections are devoted to the proof of Theorem 1.5 in the $\beta=2$ and $\beta=1$ cases respectively. The third subsection provides the proof of Corollary 1.6. ### 5.1. Hermitian case ($\beta=2$) Let $\mathrm{Herm}_{n}$ be the set of $n\times n$ complex Hermitian matrices, namely the set of $h\in\mathcal{M}_{n,n}(\mathbb{C})$ with $h_{i,j}=\overline{h_{j,i}}$ for all $1\leq i,j\leq n$. An element $h\in\mathrm{Herm}_{n}$ is parametrized by the $n^{2}$ real variables $(h_{i,i})_{1\leq i\leq n}$, $(\Re h_{i,j})_{1\leq i<j\leq n}$, $(\Im h_{i,j})_{1\leq i<j\leq n}$.
We define, for $h\in\mathrm{Herm}_{n}$ and $1\leq i,j\leq n$, $\pi_{i,j}(h)=\begin{cases}h_{i,i}&\text{if $i=j$}\\\ \sqrt{2}\,\Re h_{i,j}&\text{if $i<j$}\\\ \sqrt{2}\,\Im h_{j,i}&\text{if $i>j$}\end{cases}.$ (5.1) Note that $\mathrm{Tr}(h^{2})=\sum_{i,j=1}^{n}|h_{i,j}|^{2}=\sum_{i=1}^{n}h_{i,i}^{2}+2\sum_{i<j}(\Re h_{i,j})^{2}+2\sum_{i<j}(\Im h_{i,j})^{2}=\sum_{i,j}\pi_{i,j}(h)^{2}.$ We thus identify $\mathrm{Herm}_{n}$ with $\mathbb{R}^{n}\times\mathbb{R}^{2\frac{n^{2}-n}{2}}=\mathbb{R}^{n^{2}}$, this identification is isometrical provided $\mathrm{Herm}_{n}$ is endowed with the norm $\sqrt{\mathrm{Tr}(h^{2})}$ and $\mathbb{R}^{n^{2}}$ with the Euclidean norm. The Gaussian Unitary Ensemble $\mathrm{GUE}_{n}$ is the Gaussian law on $\mathrm{Herm}_{n}$ with density $h\in\mathrm{Herm}_{n}\mapsto\frac{\mathrm{e}^{-\frac{n}{2}\mathrm{Tr}(h^{2})}}{C_{n}}\quad\text{where}\quad C_{n}:=\int_{\mathbb{R}^{n^{2}}}\mathrm{e}^{-\frac{n}{2}\mathrm{Tr}(h^{2})}\prod_{i=1}^{n}\mathrm{d}h_{i,i}\prod_{i<j}\mathrm{d}\Re h_{i,j}\prod_{i<j}\mathrm{d}\Im h_{i,j}.$ (5.2) If $H$ is a random $n\times n$ Hermitian matrix then $H\sim\mathrm{GUE}_{n}$ if and only if the $n^{2}$ real random variables $\pi_{i,j}(H)$, $1\leq i,j\leq n$, are independent Gaussian random variables with $\pi_{i,j}(H)\sim\mathcal{N}\Bigr{(}0,\frac{1}{n}\Bigr{)},\quad 1\leq i,j\leq n.$ (5.3) The law $\mathrm{GUE}_{n}$ is the unique invariant law of the Hermitian matrix OU process ${(H_{t})}_{t\geq 0}$ on $\mathrm{Herm}_{n}$ solution of the stochastic differential equation $H_{0}=h_{0}\in\mathrm{Herm}_{n},\quad\mathrm{d}H_{t}=\sqrt{\frac{2}{n}}\mathrm{d}B_{t}-H_{t}\mathrm{d}t,$ (5.4) where $B={(B_{t})}_{t\geq 0}$ is a Brownian motion on $\mathrm{Herm}_{n}$, in the sense that the stochastic processes $(\pi_{i,j}(B_{t}))_{t\geq 0}$, $1\leq i\neq j\leq n$, are independent standard one-dimensional BM. The coordinates stochastic processes ${(\pi_{i,j}(H_{t}))}_{t\geq 0}$, $1\leq i,j\leq n$, are independent real OU processes. For any $h$ in $\mathrm{Herm}_{n}$, we denote by $\Lambda(h)$ the vector of the eigenvalues of $h$ ordered in non-decreasing order. Lemma 5.1 below is an observation which dates back to the seminal work of Dyson [33], hence the name DOU for $X^{n}$. We refer to [37, Ch. 12] and [2, Sec. 4.3] for a mathematical approach using modern stochastic calculus. ###### Lemma 5.1 (From matrix OU to DOU). The image of $\mathrm{GUE}_{n}$ by the map $\Lambda$ is the Coulomb gas $P_{n}^{\beta}$ given by (1.6) with $\beta=2$. Moreover the stochastic process $X^{n}={(X^{n}_{t})}_{t\geq 0}={(\Lambda(H_{t}))}_{t\geq 0}$ is well-defined and solves the stochastic differential equation (1.3) with $\beta=2$ and $x_{0}^{n}=\Lambda(h_{0})$. Let $\beta=2$. Let us assume from now on that the initial value $h_{0}\in\mathrm{Herm}_{n}$ of ${(H_{t})}_{t\geq 0}$ has eigenvalues $x_{0}^{n}$ where $x^{n}_{0}$ is as in Theorem 1.5. We start by proving the upper bound on the $\chi^{2}$ distance stated in Theorem 1.5: it will be an adaptation of the proof of the upper bound of Theorem 1.1 applied to the Hermitian matrix OU process ${(H_{t})}_{t\geq 0}$ combined with the contraction property of the $\chi^{2}$ distance. Indeed, by Lemma 5.1 and the contraction property of Lemma A.2 $\chi^{2}(\mathrm{Law}(X_{t}^{n})\mid P_{n}^{\beta})\leq\chi^{2}(\mathrm{Law}(H_{t})\mid\mathrm{GUE}_{n}).$ (5.5) We claim now that the right-hand side tends to $0$ as $n\to\infty$ when $t=t_{n}$ is well chosen. 
Indeed, using the identification between $\mathrm{Herm}_{n}$ and $\mathbb{R}^{n^{2}}$ mentioned earlier, we have $\mathrm{GUE}_{n}=\mathcal{N}(m_{2},\Sigma_{2})$ where $m_{2}=0$ and where $\Sigma_{2}$ is an $n^{2}\times n^{2}$ diagonal matrix with $(\Sigma_{2})_{(i,j),(i,j)}=\frac{1}{n}.$ (5.6) On the other hand, the Mehler formula (Lemma 3.1) gives $\mathrm{Law}(H_{t})=\mathcal{N}(m_{1},\Sigma_{1})$ where $m_{1}=\mathrm{e}^{-t}h_{0}$ and where $\Sigma_{1}$ is an $n^{2}\times n^{2}$ diagonal matrix with $(\Sigma_{1})_{(i,j),(i,j)}=\frac{1-\mathrm{e}^{-2t}}{n}.$ (5.7) Therefore, using Lemma A.5, the analogue of (3.4) reads $\chi^{2}(\mathrm{Law}(H_{t})\mid\mathrm{GUE}_{n})=-1+\frac{1}{(1-\mathrm{e}^{-4t})^{n^{2}/2}}\exp\left(n|h_{0}|^{2}\frac{\mathrm{e}^{-2t}}{1+\mathrm{e}^{-2t}}\right),$ (5.8) where $|h_{0}|^{2}=\sum_{1\leq i,j\leq n}\pi_{i,j}(h_{0})^{2}=\sum_{1\leq i,j\leq n}|(h_{0})_{i,j}|^{2}=\mathrm{Tr}(h_{0}^{2})=|x_{0}^{n}|^{2}.$ (5.9) Taking now $c_{n}:=\log(\sqrt{n}|x_{0}^{n}|)\vee\log(\sqrt{n})$, for any $\varepsilon\in(0,1)$, we get $\chi^{2}(\mathrm{Law}(X_{(1+\varepsilon)c_{n}}^{n})\mid P_{n}^{\beta})\leq\chi^{2}(\mathrm{Law}(H_{(1+\varepsilon)c_{n}})\mid\mathrm{GUE}_{n})\underset{n\to\infty}{\longrightarrow}0.$ (5.10) In the right-hand side of (5.8), the factor $n^{2}$ is the dimension of the $\mathbb{R}^{n^{2}}$ to which $\mathrm{Herm}_{n}$ is identified, while the factor $n$ in the exponential is due to the $1/n$ scaling in the stochastic differential equation of the process. This explains the difference with the analogue (3.4) in dimension $n$. From the comparison between TV, Hellinger, Kullback and $\chi^{2}$ stated in Lemma A.1, we easily deduce that the previous convergence remains true upon replacing $\chi^{2}$ by $\mathrm{TV}$, $\mathrm{Hellinger}$ or $\mathrm{Kullback}$. It remains to cover the upper bound for the Wasserstein distance. This distance is more sensitive to contraction arguments: according to Lemma A.2, one needs to control the Lipschitz norm of the "contraction map" at stake. It happens that the spectral map, restricted to the set $\mathrm{Herm}_{n}$ of $n\times n$ Hermitian matrices, is $1$-Lipschitz: more precisely, the Hoffman–Wielandt inequality, see [43] and [45, Th. 6.3.5], asserts that for any two such matrices $A$ and $B$, denoting $\Lambda(A)=(\lambda_{i}(A))_{1\leq i\leq n}$ and $\Lambda(B)=(\lambda_{i}(B))_{1\leq i\leq n}$ the ordered sequences of their eigenvalues, we have $\sum_{i=1}^{n}|\lambda_{i}(A)-\lambda_{i}(B)|^{2}\leq\sum_{i,j}|A_{i,j}-B_{i,j}|^{2}.$ Applying Lemma A.2, we thus deduce that $\mathrm{Wasserstein}(\mathrm{Law}(X_{t}^{n}),P_{n}^{\beta})\leq\mathrm{Wasserstein}(\mathrm{Law}(H_{t}),\mathrm{GUE}_{n}).$ (5.11) Following the Gaussian computations in the proof of Theorem 1.2, we obtain $\mathrm{Wasserstein}^{2}(\mathrm{Law}(H_{t}),\mathrm{GUE}_{n})=|x_{0}^{n}|^{2}\mathrm{e}^{-2t}+2-\mathrm{e}^{-2t}-2\sqrt{1-\mathrm{e}^{-2t}}.$ (5.12) Set $c_{n}:=\log(|x_{0}^{n}|)$. If $c_{n}\to\infty$ as $n\to\infty$ then for all $\varepsilon\in(0,1)$ we find $\mathrm{Wasserstein}(\mathrm{Law}(X_{(1+\varepsilon)c_{n}}^{n}),P_{n}^{\beta})\underset{n\to\infty}{\longrightarrow}0.$ This completes the proof of Theorem 1.5. ### 5.2. Symmetric case ($\beta=1$) The method is similar to the case $\beta=2$. Let us focus only on the differences. Let $\mathrm{Sym}_{n}$ be the set of $n\times n$ real symmetric matrices, namely the set of $s\in\mathcal{M}_{n,n}(\mathbb{R})$ with $s_{i,j}=s_{j,i}$ for all $1\leq i,j\leq n$.
An element $s\in\mathrm{Sym}_{n}$ is parametrized by the $n+\frac{n^{2}-n}{2}=\frac{n(n+1)}{2}$ real variables $(s_{i,j})_{1\leq i\leq j\leq n}$. We define, for $s\in\mathrm{Sym}_{n}$ and $1\leq i\leq j\leq n$, $\pi_{i,j}(s)=\begin{cases}s_{i,i}&\text{if $i=j$}\\\ \sqrt{2}\,s_{i,j}&\text{if $i<j$}\end{cases}.$ (5.13) Note that $\mathrm{Tr}(s^{2})=\sum_{i,j=1}^{n}s_{i,j}^{2}=\sum_{i=1}^{n}s_{i,i}^{2}+2\sum_{i<j}s_{i,j}^{2}=\sum_{1\leq i\leq j\leq n}\pi_{i,j}(s)^{2}.$ We thus identify isometrically $\mathrm{Sym}_{n}$, endowed with the norm $\sqrt{\mathrm{Tr}(s^{2})}$, with $\mathbb{R}^{n}\times\mathbb{R}^{\frac{n^{2}-n}{2}}=\mathbb{R}^{\frac{n(n+1)}{2}}$ endowed with the Euclidean norm. The Gaussian Orthogonal Ensemble $\mathrm{GOE}_{n}$ is the Gaussian law on $\mathrm{Sym}_{n}$ with density $s\in\mathrm{Sym}_{n}\mapsto\frac{\mathrm{e}^{-\frac{n}{2}\mathrm{Tr}(s^{2})}}{C_{n}}\quad\text{where}\quad C_{n}:=\int_{\mathbb{R}^{\frac{n(n+1)}{2}}}\mathrm{e}^{-\frac{n}{2}\mathrm{Tr}(s^{2})}\prod_{1\leq i\leq j\leq n}\mathrm{d}s_{i,j}.$ (5.14) If $S$ is a random $n\times n$ real symmetric matrix then $S\sim\mathrm{GOE}_{n}$ if and only if the $\frac{n(n+1)}{2}$ real random variables $\pi_{i,j}(S)$, $1\leq i\leq j\leq n$, are independent Gaussian random variables with $\pi_{i,j}(S)\sim\mathcal{N}\Bigr{(}0,\frac{1}{n}\Bigr{)},\quad 1\leq i\leq j\leq n.$ (5.15) The law $\mathrm{GOE}_{n}$ is the unique invariant law of the real symmetric matrix OU process ${(S_{t})}_{t\geq 0}$ on $\mathrm{Sym}_{n}$ solution of the stochastic differential equation $S_{0}=s_{0}\in\mathrm{Sym}_{n},\quad\mathrm{d}S_{t}=\sqrt{\frac{2}{n}}\mathrm{d}B_{t}-S_{t}\mathrm{d}t$ (5.16) where $B={(B_{t})}_{t\geq 0}$ is a Brownian motion on $\mathrm{Sym}_{n}$, in the sense that the stochastic processes $(\pi_{i,j}(B_{t}))_{t\geq 0}$, $1\leq i\leq j\leq n$, are independent standard one-dimensional BM. The coordinate stochastic processes ${(\pi_{i,j}(S_{t}))}_{t\geq 0}$, $1\leq i\leq j\leq n$, are independent real OU processes. For any $s$ in $\mathrm{Sym}_{n}$, we denote by $\Lambda(s)$ the vector of the eigenvalues of $s$ ordered in non-decreasing order. Lemma 5.2 below is the real symmetric analogue of Lemma 5.1. ###### Lemma 5.2 (From matrix OU to DOU). The image of $\mathrm{GOE}_{n}$ by the map $\Lambda$ is the Coulomb gas $P_{n}^{\beta}$ given by (1.6) with $\beta=1$. Moreover the stochastic process $X^{n}={(X^{n}_{t})}_{t\geq 0}={(\Lambda(S_{t}))}_{t\geq 0}$ is well-defined and solves the stochastic differential equation (1.3) with $\beta=1$ and $x_{0}^{n}=\Lambda(s_{0})$. As for the case $\beta=2$, the idea now is that the DOU process is sandwiched between a real OU process and a matrix OU process. By similar computations to the case $\beta=2$, the analogue of (5.8) becomes $\chi^{2}(\mathrm{Law}(S_{t})\mid\mathrm{GOE}_{n})=-1+\frac{1}{(1-\mathrm{e}^{-4t})^{\frac{n(n+1)}{4}}}\exp\left(n|s_{0}|^{2}\frac{\mathrm{e}^{-2t}}{1+\mathrm{e}^{-2t}}\right),$ (5.17) where $|s_{0}|^{2}=\mathrm{Tr}(s_{0}^{2})=|x_{0}^{n}|^{2}$. This allows us to deduce the upper bound for TV, Hellinger, Kullback and $\chi^{2}$. Regarding the Wasserstein distance, the analogue of (5.12) reads $\mathrm{Wasserstein}^{2}(\mathrm{Law}(S_{t}),\mathrm{GOE}_{n})=|x_{0}^{n}|^{2}\mathrm{e}^{-2t}+2-\mathrm{e}^{-2t}-2\sqrt{1-\mathrm{e}^{-2t}}.$ (5.18) If $\lim_{n\to\infty}\log(|x_{0}^{n}|)=\infty$ then we deduce the asserted result, concluding the proof of Theorem 1.5. ### 5.3. Proof of Corollary 1.6 Let $\beta\in\\{1,2\\}$. Recall the definitions of $a_{n}$ and $c_{n}$ from the statement.
Take $x_{0}^{n,i}=a_{n}$ for all $i$, and note that $\pi(x_{0}^{n})=na_{n}$. Given our assumptions on $a_{n}$, Corollary 1.4 yields for this particular choice of initial condition and for any $\varepsilon\in(0,1)$ $\lim_{n\to\infty}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}})\mid P_{n}^{\beta})=\max.$ On the other hand, in the proof of Theorem 1.5 we saw that $\chi^{2}(\mathrm{Law}(X_{t}^{n})\mid P_{n}^{\beta})\leq-1+\frac{1}{(1-\mathrm{e}^{-4t})^{b_{n}/2}}\exp\left(n|x_{0}^{n}|^{2}\frac{\mathrm{e}^{-2t}}{1+\mathrm{e}^{-2t}}\right),$ where $b_{n}=n^{2}$ for $\beta=2$ and $b_{n}=n(n+1)/2$ for $\beta=1$. Since $|x_{0}^{n}|\leq\sqrt{n}a_{n}$ for all $x_{0}^{n}\in[-a_{n},a_{n}]^{n}$, and given the comparison between TV, Hellinger, Kullback and $\chi^{2}$ stated in Lemma A.1, we obtain for $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger},\mathrm{Kullback},\chi^{2}\\}$ and for all $\varepsilon\in(0,1)$ $\lim_{n\to\infty}\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}})\mid P_{n}^{\beta})=0,$ thus concluding the proof of Corollary 1.6 regarding these distances. Concerning Wasserstein, the proof of Theorem 1.5 shows that for any $x_{0}^{n}\in[-a_{n},a_{n}]^{n}$ we have $\displaystyle\mathrm{Wasserstein}^{2}(\mathrm{Law}(X_{t}^{n}),P_{n}^{\beta})$ $\displaystyle\leq|x_{0}^{n}|^{2}\mathrm{e}^{-2t}+2-\mathrm{e}^{-2t}-2\sqrt{1-\mathrm{e}^{-2t}}$ $\displaystyle\leq na_{n}^{2}\mathrm{e}^{-2t}+2-\mathrm{e}^{-2t}-2\sqrt{1-\mathrm{e}^{-2t}}.$ If $\sqrt{n}a_{n}\to\infty$, then for $c_{n}=\log(\sqrt{n}a_{n})$ we deduce that for all $\varepsilon\in(0,1)$ $\lim_{n\to\infty}\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\mathrm{Wasserstein}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}}),P_{n}^{\beta})=0.$ ## 6\. Cutoff phenomenon for the DOU in TV and Hellinger In this section, we prove Theorem 1.7 and Corollary 1.8 for the TV and Hellinger distances. We only consider the case $\beta\geq 1$, although the arguments could be adapted _mutatis mutandis_ to cover the case $\beta=0$: note that the result of Theorem 1.7 and Corollary 1.8 for $\beta=0$ can be deduced from Theorem 1.2. At the end of this section, we also provide the proof of Theorem 1.10. ### 6.1. Proof of Theorem 1.7 in TV and Hellinger By the comparison between TV and Hellinger stated in Lemma A.1, it suffices to prove the result for the TV distance, so we concentrate on this distance until the end of this section. Our proof is based on the exponential decay of the relative entropy at an explicit rate given by the optimal logarithmic Sobolev constant. However, this requires the relative entropy of the initial condition to be _finite_. Consequently, we proceed in three steps. First, given an arbitrary initial condition $x_{0}^{n}\in\overline{D}_{n}$, we build an absolutely continuous probability measure $\mu_{x_{0}^{n}}$ on $D_{n}$ that approximates $\delta_{x_{0}^{n}}$ and whose relative entropy is not too large. Second, we derive a decay estimate starting from this regularized initial condition. Third, we control the total variation distance between the two processes starting respectively from $\delta_{x_{0}^{n}}$ and $\mu_{x_{0}^{n}}$. #### 6.1.1. Regularization In order to have a finite relative entropy at time $0$, we first regularize the initial condition by smearing out each particle in a ball of radius bounded below by $n^{-(\kappa+1)}$, for some $\kappa>0$.
Let us first introduce the regularization at scale $\eta$ of a Dirac distribution $\delta_{z}$, $z\in\mathbb{R}$ by $\delta_{z}^{(\eta)}(\mathrm{d}u)=\mathrm{Uniform}([z,z+\eta])(\mathrm{d}u)=\eta^{-1}\mathbf{1}_{[z,z+\eta]}\mathrm{d}u.$ Given $x\in\overline{D}_{n}$ and $\kappa>0$, we define a regularized version of $\delta_{x}$ at scale $n^{-\kappa}$, that we denote $\mu_{x}$, by setting $\mu_{x}=\otimes_{i=1}^{n}\delta_{x_{i}+3i\eta}^{(\eta)},$ (6.1) where $\eta:=n^{-(\kappa+1)}$. The parameters have been tuned in such a way that, independently of the choice of $x\in\overline{D}_{n}$, the following properties hold. The supports of the Dirac masses $\delta_{x_{i}+3i\eta}^{(\eta)}$, $i\in\\{1,\ldots,n\\}$, lie at distance at least $\eta$ from each other. The volume of the support of $\mu_{x}$ is equal to $\eta^{n}$, and therefore the relative entropy of $\mu_{x}$ with respect to the Lebesgue measure is not too large. Finally, provided $X_{0}^{n}=x$ and $Y_{0}^{n}$ is distributed according to $\mu_{x}$, almost surely $|X_{0}^{n}-Y_{0}^{n}|_{\infty}\leq(3n+1)\eta$. #### 6.1.2. Convergence of the regularized process to equilibrium ###### Lemma 6.1 (Convergence of regularized process). Let $(Y_{t}^{n})_{t\geq 0}$ be a DOU process solution of (1.3), $\beta\geq 1$, and let $P_{n}^{\beta}$ be its invariant law. Assume that $\mathrm{Law}(Y^{n}_{0})$ is the regularized measure $\mu_{x_{0}^{n}}$ in (6.1) associated to some initial condition $x_{0}^{n}\in\overline{D}_{n}$. Then there exists a constant $C>0$, only depending on $\kappa$, such that for all $t\geq 0$, all $n\geq 2$ and all $x_{0}^{n}\in\overline{D}_{n}$ $\mathrm{Kullback}(\mathrm{Law}(Y_{t}^{n})\mid P_{n}^{\beta})\leq C(n|x_{0}^{n}|^{2}+n^{2}\log(n))\mathrm{e}^{-2t}.$ ###### Proof of Lemma 6.1. By Lemma B.2 and since $\mathrm{Law}(Y^{n}_{0})=\mu_{x_{0}^{n}}$, for all $t\geq 0$, there holds $\mathrm{Kullback}(\mathrm{Law}(Y_{t}^{n})\mid P_{n}^{\beta})\leq\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid P_{n}^{\beta})\mathrm{e}^{-2t}.$ (6.2) Now we have $\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid P_{n}^{\beta})=\mathbb{E}_{\mu_{x_{0}^{n}}}\left[\log\frac{\mathrm{d}\mu_{x_{0}^{n}}}{\mathrm{d}P_{n}^{\beta}}\right].$ Recall the definition of $S$ in (1.15). As $P_{n}^{\beta}$ has density $\frac{\mathrm{e}^{-E}}{C_{n}^{\beta}}$, we may re-write this as $\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid P_{n}^{\beta})=S(\mu_{x_{0}^{n}})+\mathbb{E}_{\mu_{x_{0}^{n}}}[E]+\log C_{n}^{\beta}.$ (6.3) Recall the partition function $C_{*n}^{\beta}=n!C_{n}^{\beta}$ from Subsection 2.2. It is proved in [12], using explicit expressions involving Gamma functions via a Selberg integral, that for some constant $C>0$ $\log C_{n}^{\beta}\leq\log C_{*n}^{\beta}\leq Cn^{2}.$ (6.4) Next, we claim that $S(\mu_{x_{0}^{n}})\leq n\log(n^{1+\kappa})$. 
Indeed since $\mu_{x_{0}^{n}}$ is a product measure, the tensorization property of entropy recalled in Lemma A.4 gives $\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid\mathrm{d}x)=\sum_{i=1}^{n}\mathrm{Kullback}(\delta_{0}^{(\eta)}\mid\mathrm{d}x).$ Moreover an immediate computation yields $\mathrm{Kullback}(\delta_{0}^{(\eta)}\mid\mathrm{d}x)=\log(\eta^{-1})$ so that, given the definition of $\eta$, we get $\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid\mathrm{d}x)=n\log(n^{\kappa+1}).$ (6.5) We turn to the estimation of the term $\mathbb{E}_{\mu_{x_{0}^{n}}}[E].$ The confinement term can be easily bounded: $\mathbb{E}_{\mu_{x_{0}^{n}}}\left[\frac{n}{2}\sum_{i=1}^{n}{x_{i}^{2}}\right]\leq(n|x_{0}^{n}|^{2}+n^{2}\eta^{2}).$ Let us now estimate the logarithmic energy of $\mu_{x_{0}^{n}}$. Using the fact that the logarithmic function is increasing, together with the fact the supports of $\delta_{x_{i}+3i\eta}^{(\eta)}$ lie at distance at least $\eta$ from each other, we notice that for any $i>j$ there holds $\displaystyle\mathbb{E}_{\mu_{x_{0}^{n}}}\left[\log|x_{i}-x_{j}|\right]$ $\displaystyle=\iint\log|x-y|\delta_{x_{i}+3i\eta}^{(\eta)}(\mathrm{d}x)\delta_{x_{j}+3j\eta}^{(\eta)}(\mathrm{d}y)$ $\displaystyle\geq\iint\log|x-y|\delta_{3\eta}^{(\eta)}(\mathrm{d}x)\delta_{0}^{(\eta)}(\mathrm{d}y)$ $\displaystyle\geq\log\eta.$ It follows that the initial logarithmic energy cannot be much larger than $n^{2}\log n$: $\mathbb{E}_{\mu_{x_{0}^{n}}}\left[\sum_{i>j}\log\frac{1}{|x_{i}-x_{j}|}\right]\leq\frac{n(n-1)}{2}\log n^{\kappa+1}.$ This implies that there exists a constant $C>0$, only depending on $\kappa$, such that for all $n\geq 2$ $\mathbb{E}_{\mu_{x_{0}^{n}}}[E]=\mathbb{E}_{\mu_{x_{0}^{n}}}\left[\frac{n}{2}\sum_{i=1}^{n}|x_{i}|^{2}+{\beta}\sum_{i>j}\log\frac{1}{|x_{i}-x_{j}|}\right]\leq C\big{(}n|x_{0}^{n}|^{2}+n^{2}\log n\big{)}.$ (6.6) Inserting (6.4), (6.5) and (6.6) into (6.3) we obtain (for a different constant $C>0$) $\mathrm{Kullback}(\mu_{x_{0}^{n}}\mid P_{n}^{\beta})\leq C\big{(}n|x_{0}^{n}|^{2}+n^{2}\log n\big{)}.$ This bound, combined with (6.2), concludes the proof of Lemma 6.1. ∎ #### 6.1.3. Convergence to the regularized process in total variation distance Let $(X_{t}^{n})_{t\geq 0}$ and $(Y_{t}^{n})_{t\geq 0}$ be two DOU processes with $X_{0}^{n}=x_{0}^{n}$ and $\mathrm{Law}(Y_{0}^{n})=\mu_{x_{0}^{n}}$, where the measure $\mu_{x_{0}^{n}}$ is defined in (6.1). Below we prove that, as soon as the parameter $\kappa$ is large enough, the total variation distance between $\mathrm{Law}(X_{t}^{n})$ and $\mathrm{Law}(Y_{t}^{n})$ tends to $0$, for any fixed $t>0$. Note that at time $0$, almost surely, there holds $X_{0}^{n,i}\leq Y_{0}^{n,i}$, for every $i\in\\{1,\ldots,n\\}$. We now introduce a coupling of the processes $(X_{t}^{n})_{t\geq 0}$ and $(Y_{t}^{n})_{t\geq 0}$ that preserves this ordering at all times. Consider two independent standard BM $B^{n}$ and $W^{n}$ in $\mathbb{R}^{n}$. Let $X^{n}$ be the solution of (1.3) driven by $B^{n}$, and let $Y^{n}$ be the solution of $\mathrm{d}Y_{t}^{n,i}=\sqrt{\frac{2}{n}}\Big{(}\mathbf{1}_{\\{Y_{t}^{n,i}\neq X_{t}^{n,i}\\}}\mathrm{d}W^{i}_{t}+\mathbf{1}_{\\{Y_{t}^{n,i}=X_{t}^{n,i}\\}}\mathrm{d}B^{i}_{t}\Big{)}-Y_{t}^{n,i}\mathrm{d}t+\frac{\beta}{n}\sum_{j\neq i}\frac{\mathrm{d}t}{Y_{t}^{n,i}-Y_{t}^{n,j}},\quad 1\leq i\leq n.$ We denote by $\mathbb{P}$ the probability measure under which these two processes are coupled. Let us comment on the driving noise in the equation satisfied by $Y^{n}$. 
When the $i$-th coordinates of $X^{n}$ and $Y^{n}$ are equal, we take the same driving Brownian motion, and the difference $Y^{n,i}-X^{n,i}$ remains non-negative due to the convexity of $-\log$ defined in (1.5), see the monotonicity result stated in Lemma B.4. On the other hand, when these two coordinates differ, we take independent driving Brownian motions in order for their difference to have non-zero quadratic variation (this allows us to increase their merging probability). Under this coupling, the ordering of $X^{n}$ and $Y^{n}$ is thus preserved at all times, and if $X_{s}^{n}=Y_{s}^{n}$ for some $s\geq 0$, then it remains true at all times $t\geq s$. Note however that if $X_{s}^{n,i}=Y_{s}^{n,i}$, then this equality does not remain true at all times except if all the coordinates match. As in (A.7), the total variation distance between the laws of $X_{t}^{n}$ and $Y_{t}^{n}$ may be bounded by $\|\mathrm{Law}(Y_{t}^{n})-\mathrm{Law}(X_{t}^{n})\|_{\mathrm{TV}}\leq\mathbb{P}(X_{t}^{n}\neq Y_{t}^{n}),$ for all $t\geq 0$. We wish to establish that for any given $t>0$, $\lim_{n\to\infty}\mathbb{P}(X_{t}^{n}\neq Y_{t}^{n})=0.$ To do so, we work with the area between the two processes $X^{n}$ and $Y^{n}$, defined by $A^{n}_{t}:=\sum_{i=1}^{n}\big{(}Y^{n,i}_{t}-X^{n,i}_{t}\big{)}=\pi(Y^{n}_{t})-\pi(X^{n}_{t}),\quad t\geq 0.$ As the two processes are ordered at any time, this is nothing but the geometric area between the two discrete interfaces $i\mapsto X_{t}^{n,i}$ and $i\mapsto Y_{t}^{n,i}$ associated to the configurations $X_{t}^{n}$ and $Y_{t}^{n}$. We deduce that the merging time of the two processes coincides with the hitting time of $0$ by this area, which we denote by $\tau=\inf\\{t\geq 0:A_{t}^{n}=0\\}$. The process $A^{n}$ has a very simple structure: it is a semimartingale that behaves like an OU process with a randomly varying quadratic variation. Let $N_{t}$ be the number of coordinates that do not coincide at time $t$, that is $N_{t}:=\\#\big{\\{}i\in\\{1,\ldots,n\\}:X^{n,i}_{t}\neq Y^{n,i}_{t}\big{\\}}.$ Then $A^{n}$ satisfies $\mathrm{d}A^{n}_{t}=-A^{n}_{t}\mathrm{d}t+\mathrm{d}M_{t},$ where $M$ is a centered martingale with quadratic variation $\mathrm{d}\langle M\rangle_{t}=\frac{2}{n}N_{t}\mathrm{d}t.$ (6.7) Note that whenever $t<\tau$ we have $\mathrm{d}\langle M\rangle_{t}\geq\frac{2}{n}\mathrm{d}t.$ This _a priori_ lower bound on the quadratic variation of $M$, combined with the Dubins–Schwarz theorem, allows us to check that $\tau<\infty$ almost surely. Note that in view of the coupling between $X_{t}^{n}$ and $Y_{t}^{n}$, we have $X_{t}^{n}=Y_{t}^{n}$ for all $t\geq\tau$. Recall the following informal fact: with large probability, a Brownian motion starting from $a$ hits $b$ by a time of order $(a-b)^{2}$. For a continuous martingale, this becomes: with large probability, a continuous martingale starting from $a$ accumulates a quadratic variation of order $(a-b)^{2}$ up to its first hitting time of $b$. Our next lemma states such a bound on the supermartingale $A^{n}$. ###### Lemma 6.2. Let $a>b\geq 0$ and let $\tau_{b}=\inf\\{t>0:A_{t}=b\\}$, which is finite almost surely. Then, for all $u\geq 1$, $\mathbb{P}(\langle A\rangle_{\tau_{b}}\geq(a-b)^{2}u\mid A_{0}=a)\leq 4u^{-1/2}.$ ###### Proof. Without loss of generality one can assume that $A_{0}=a$ almost surely. By Itô’s formula, for all $\lambda\geq 0$, the process $S_{t}=\exp\Bigl{(}-\lambda A_{t}-\frac{\lambda^{2}}{2}\langle A\rangle_{t}\Bigr{)}$ defines a submartingale (taking its values in $[0,1]$).
Doob’s stopping theorem yields $\mathbb{E}[\mathrm{e}^{-\frac{\lambda^{2}}{2}\langle A\rangle_{\tau_{b}}}]=\mathrm{e}^{\lambda b}\mathbb{E}[S_{\tau_{b}}]\geq\mathrm{e}^{\lambda b}\mathbb{E}[S_{0}]=\mathrm{e}^{-\lambda(a-b)}.$ On the other hand, for $\lambda=2(a-b)^{-1}u^{-1/2}$, there holds $\displaystyle\mathbb{E}[\mathrm{e}^{-\frac{\lambda^{2}}{2}\langle A\rangle_{\tau_{b}}}]$ $\displaystyle\leq\mathbb{P}\big{(}\langle A\rangle_{\tau_{b}}<(a-b)^{2}u\big{)}+\mathrm{e}^{-\frac{\lambda^{2}}{2}(a-b)^{2}u}\,\mathbb{P}\big{(}\langle A\rangle_{\tau_{b}}\geq(a-b)^{2}u\big{)}$ $\displaystyle\leq 1-(1-\mathrm{e}^{-\frac{\lambda^{2}}{2}(a-b)^{2}u})\mathbb{P}\big{(}\langle A\rangle_{\tau_{b}}\geq(a-b)^{2}u\big{)}$ $\displaystyle\leq 1-\frac{1}{2}\mathbb{P}\big{(}\langle A\rangle_{\tau_{b}}\geq(a-b)^{2}u\big{)}.$ Consequently one deduces that $\mathbb{P}(\langle A\rangle_{\tau_{b}}\geq(a-b)^{2}u)\leq 2(1-\mathrm{e}^{-\lambda(a-b)})\leq 4u^{-1/2}.$ ∎ We are now ready to prove the following lemma: ###### Lemma 6.3. If $\kappa>\frac{3}{2}$, then for every sequence of times ${(t_{n})}_{n}$ with $\varliminf_{n\to\infty}t_{n}>0$, we have $\lim_{n\to\infty}\sup_{x_{0}^{n}\in\overline{D}_{n}}\|\mathrm{Law}(Y_{t_{n}}^{n})-\mathrm{Law}(X_{t_{n}}^{n})\|_{\mathrm{TV}}=0.$ ###### Proof of Lemma 6.3. Let ${(t_{n})}_{n}$ be a sequence of times such that $\varliminf_{n\to\infty}t_{n}>0$. In view of the definition of $\mu_{{x_{0}^{n}}}$ and $\eta$, the initial area satisfies almost surely $A_{0}^{n}\leq 4n^{1-\kappa}.$ According to Lemma 6.2, with a probability that goes to $1$, one has $\langle A^{n}\rangle_{\tau}-\langle A^{n}\rangle_{0}<16n^{2-2\kappa}\log n.$ On the other hand, by (6.7), we have the following control on the quadratic variation: $\langle A\rangle_{\tau}-\langle A\rangle_{0}\geq\frac{2}{n}\tau.$ One deduces that, with a probability that goes to $1$, $\tau\leq 8n^{3-2\kappa}\log n,$ and this quantity goes to $0$ as $n\to\infty$ whenever $\kappa>\frac{3}{2}$. Therefore, for $\kappa>\frac{3}{2}$, there holds $\lim_{n\to\infty}\sup_{x_{0}^{n}\in\overline{D}_{n}}\mathbb{P}(X_{t_{n}}^{n}\neq Y_{t_{n}}^{n})=0,$ thus concluding the proof of Lemma 6.3. ∎ ###### Proof of Theorem 1.7 in TV and Hellinger. Let $\kappa>\frac{3}{2}$ and fix some initial condition $x_{0}^{n}\in\overline{D}_{n}$. By the triangle inequality for $\mathrm{TV}$, there holds $\|\mathrm{Law}(X_{t}^{n})-P_{n}^{\beta}\|_{\mathrm{TV}}\leq\|\mathrm{Law}(Y_{t}^{n})-P_{n}^{\beta}\|_{\mathrm{TV}}+\|\mathrm{Law}(X_{t}^{n})-\mathrm{Law}(Y_{t}^{n})\|_{\mathrm{TV}}.$ (6.8) Taking $t=t_{n}(1+\varepsilon)$ with $t_{n}=\log(\sqrt{n}|x_{0}^{n}|)\vee\log(n)$, one deduces from Lemma 6.1 and the Pinsker inequality stated in Lemma A.1 that the first term on the right-hand side of (6.8) vanishes as $n$ tends to infinity. Meanwhile, Lemma 6.3 guarantees that the second term tends to $0$ as $n$ tends to infinity. We also conclude, using the comparison between TV and Hellinger (see Lemma A.1), that $\lim_{n\to\infty}\mathrm{Hellinger}(\mathrm{Law}(X_{t_{n}}^{n}),P_{n}^{\beta})=0.$ ∎ ### 6.2. Proof of Corollary 1.8 in TV and Hellinger ###### Proof of Corollary 1.8 in TV and Hellinger.
By Lemma A.1 and the triangle inequality for $\mathrm{TV}$, we have $\displaystyle\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\|\mathrm{Law}(X_{t}^{n})-P_{n}^{\beta}\|_{\mathrm{TV}}$ $\displaystyle\leq\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\|\mathrm{Law}(Y_{t}^{n})-\mathrm{Law}(X_{t}^{n})\|_{\mathrm{TV}}$ $\displaystyle\quad+\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\sqrt{2\,\mathrm{Kullback}(\mathrm{Law}(Y_{t}^{n})\mid P_{n}^{\beta})}.$ Take $t=(1+\varepsilon)c_{n}$ with $c_{n}=\log(na_{n})$. Lemmas 6.1 and 6.3, combined with the assumption made on $(a_{n})$, show that the two terms on the right-hand side vanish as $n\to\infty$. Using Lemma A.1, the same result holds for $\mathrm{Hellinger}$. On the other hand, take $x_{0}^{n,i}=a_{n}$ for all $i$ and note that $\pi(x_{0}^{n})=na_{n}$ goes to $+\infty$ as $n\to\infty$. By Corollary 1.4 we find $\lim_{n\to\infty}\sup_{x^{n}_{0}\in[-a_{n},a_{n}]^{n}}\mathrm{dist}(\mathrm{Law}(X^{n}_{(1-\varepsilon)c_{n}})\mid P_{n}^{\beta})=1\;$ whenever $\mathrm{dist}\in\\{\mathrm{TV},\mathrm{Hellinger}\\}$. ∎ ### 6.3. Proof of Theorem 1.10 ###### Proof of Theorem 1.10. _Lower bound._ The contraction property provided by Lemma A.2 gives $\mathrm{Kullback}(\mathrm{Law}(X_{t}^{n})\mid P_{n}^{\beta})\geq\mathrm{Kullback}(\mathrm{Law}(\pi(X_{t}^{n}))\mid P_{n}^{\beta}\circ\pi^{-1}).$ By Theorem 1.3, $P_{n}^{\beta}\circ\pi^{-1}=\mathcal{N}(0,1)$ and $Y=\pi(X^{n})$ is an OU process, weak solution of $Y_{0}=\pi(X^{n}_{0})$ and $\mathrm{d}Y_{t}=\sqrt{2}\mathrm{d}B_{t}-Y_{t}\mathrm{d}t$. In particular for all $t\geq 0$, $\mathrm{Law}(Y_{t})$ is a mixture of Gaussian laws in the sense that for any measurable test function $g$ with polynomial growth, $\mathbb{E}_{\mathrm{Law}(Y_{t})}[g]=\mathbb{E}[g(Y_{t})]=\mathbb{E}[G_{t}(Y_{0})]\quad\text{where}\quad G_{t}(y)=\mathbb{E}_{\mathcal{N}(y\mathrm{e}^{-t},1-\mathrm{e}^{-2t})}[g].$ Now we use (again) the variational formula used in the proof of Lemma A.2 to get $\mathrm{Kullback}(\mathrm{Law}(\pi(X_{t}^{n}))\mid P_{n}^{\beta}\circ\pi^{-1})=\sup_{g}\\{\mathbb{E}_{\mathrm{Law}(\pi(X_{t}^{n}))}[g]-\log\mathbb{E}_{\mathcal{N}(0,1)}[\mathrm{e}^{g}]\\},$ and taking for $g$ the linear function defined by $g(x)=\lambda x$ for all $x\in\mathbb{R}$ and for some $\lambda\neq 0$ yields $\mathrm{Kullback}(\mathrm{Law}(\pi(X_{t}^{n}))\mid P_{n}^{\beta}\circ\pi^{-1})\geq\lambda\mathrm{e}^{-t}\sum_{i=1}^{n}\int x\mu_{i}(\mathrm{d}x)-\frac{\lambda^{2}}{2}.$ Finally, by using the assumption on the first moments and taking $\lambda$ small enough we get, for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{Kullback}(\mathrm{Law}(\pi(X_{(1-\varepsilon)\log(n)}^{n}))\mid P_{n}^{\beta}\circ\pi^{-1})=+\infty.$ _Upper bound._ From Lemma B.2 we have, for all $t\geq 0$, $\mathrm{Kullback}(\mathrm{Law}(X_{t}^{n})\mid P_{n}^{\beta})\leq\mathrm{Kullback}(\mathrm{Law}(X_{0}^{n})\mid P_{n}^{\beta})\mathrm{e}^{-2t}.$ Arguing as in the proof of Lemma 6.1 and using the contraction property of $\mathrm{Kullback}$ provided by Lemma A.2 for the map $\Psi$ defined in (1.17), we can write the following decomposition $\displaystyle\mathrm{Kullback}(\mathrm{Law}(X_{0}^{n})\mid P_{n}^{\beta})$ $\displaystyle\leq\mathrm{Kullback}(\otimes_{i=1}^{n}\mu_{i}\mid P_{*n}^{\beta})$ $\displaystyle=S(\otimes_{i=1}^{n}\mu_{i})+\mathbb{E}_{\otimes_{i=1}^{n}\mu_{i}}[E]+\log C_{*n}^{\beta}$ $\displaystyle\leq\sum_{i=1}^{n}S(\mu_{i})+\sum_{i\neq j}\iint\Phi\mathrm{d}\mu_{i}\otimes\mathrm{d}\mu_{j}+Cn^{2}.$ Combining (6.4) with the assumptions on the $\mu_{i}$’s yields for some constant $C>0$
$\mathrm{Kullback}(\mathrm{Law}(X_{0}^{n})\mid P_{n}^{\beta})\leq Cn^{2}$ and it follows finally that for all $\varepsilon\in(0,1)$, $\lim_{n\to\infty}\mathrm{Kullback}(\mathrm{Law}(X_{(1+\varepsilon)\log(n)})\mid P_{n}^{\beta})=0.$ ∎ ## 7\. Cutoff phenomenon for the DOU in Wasserstein ### 7.1. Proofs of Theorem 1.7 and Corollary 1.8 in Wasserstein Let ${(X_{t})}_{t\geq 0}$ be the DOU process. By Lemma B.2, for all $t\geq 0$ and all initial conditions $X_{0}\in\overline{D}_{n}$, $\mathrm{Wasserstein}^{2}(\mathrm{Law}(X_{t}),P_{n}^{\beta})\leq\mathrm{e}^{-2t}\mathrm{Wasserstein}^{2}(\mathrm{Law}(X_{0}),P_{n}^{\beta}).$ Suppose now that $\mathrm{Law}(X^{n}_{0})=\delta_{x^{n}_{0}}$. Then the triangle inequality for the Wasserstein distance gives $\mathrm{Wasserstein}^{2}(\delta_{x^{n}_{0}},P_{n}^{\beta})=\int\left|x^{n}_{0}-x\right|^{2}P_{n}^{\beta}(\mathrm{d}x)\leq 2|x^{n}_{0}|^{2}+2\int\left|x\right|^{2}P_{n}^{\beta}(\mathrm{d}x).$ By Theorem 1.3, the mean at equilibrium of $|X_{t}^{n}|^{2}$ equals $1+\frac{\beta}{2}(n-1)$ and therefore $\int\left|x\right|^{2}P_{n}^{\beta}(\mathrm{d}x)=1+\frac{\beta}{2}(n-1).$ We thus get $\mathrm{Wasserstein}^{2}(\mathrm{Law}(X^{n}_{t}),P_{n}^{\beta})\leq 2(|x^{n}_{0}|^{2}+1+\frac{\beta}{2}(n-1))\mathrm{e}^{-2t}.$ Set $c_{n}:=\log(|x_{0}^{n}|)\vee\log(\sqrt{n})$. For any $\varepsilon\in(0,1)$, we have $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}}),P_{n}^{\beta})=0$ and this concludes the proof of Theorem 1.7 in the Wasserstein distance. Regarding the proof of Corollary 1.8, if $x_{0}^{n}\in[-a_{n},a_{n}]^{n}$ then $|x_{0}^{n}|\leq\sqrt{n}a_{n}$. Therefore if $\inf_{n}a_{n}>0$, setting $c_{n}=\log(\sqrt{n}a_{n})$ we find, as required, $\lim_{n\to\infty}\sup_{x_{0}^{n}\in[-a_{n},a_{n}]^{n}}\mathrm{Wasserstein}(\mathrm{Law}(X^{n}_{(1+\varepsilon)c_{n}}),P_{n}^{\beta})=0.$ ### 7.2. Proof of Theorem 1.9 This is an adaptation of the previous proof. We compute $\displaystyle\mathrm{Wasserstein}^{2}(\delta_{x_{0}^{n}},P_{n}^{\beta})$ $\displaystyle=\int\left|x^{n}_{0}-x\right|^{2}P_{n}^{\beta}(\mathrm{d}x)$ $\displaystyle\leq 2\left|x^{n}_{0}-\rho_{n}\right|^{2}+2\int\left|\rho_{n}-x\right|^{2}P_{n}^{\beta}(\mathrm{d}x),$ where $\rho_{n}\in D_{n}$ is the vector of the quantiles of order $1/n$ of the semi-circle law as in (1.14). The rigidity estimates established in [17, Th. 2.4] justify that $\lim_{n\to\infty}\int\left|\rho_{n}-x\right|^{2}P_{n}^{\beta}(\mathrm{d}x)=0.$ If $|x^{n}_{0}-\rho_{n}|$ diverges with $n$, we deduce that for all $\varepsilon\in(0,1)$, with $t_{n}=\log(|x^{n}_{0}-\rho_{n}|)$, $\lim_{n\to\infty}\mathrm{Wasserstein}(\mathrm{Law}(X^{n}_{(1+\varepsilon)t_{n}}),P_{n}^{\beta})=0.$ On the other hand, if $|x^{n}_{0}-\rho_{n}|$ converges to some limit $\alpha$ then we easily get, for any $t\geq 0$, $\varlimsup_{n\to\infty}\mathrm{Wasserstein}^{2}(\mathrm{Law}(X^{n}_{t}),P_{n}^{\beta})\leq\alpha^{2}\mathrm{e}^{-2t}.$ ###### Remark 7.1 (High-dimensional phenomena). With $X_{n}\sim P_{n}^{\beta}$, in the bias-variance decomposition $\int\left|\rho_{n}-x\right|^{2}P_{n}^{\beta}(\mathrm{d}x)=|\mathbb{E}X_{n}-\rho_{n}|^{2}+\mathbb{E}(|X_{n}-\mathbb{E}X_{n}|^{2}),$ the second term of the right hand side is a variance term that measures the concentration of the log-concave random vector $X_{n}$ around its mean $\mathbb{E}X_{n}$, while the first term in the right hand side is a bias term that measures the distance of the mean $\mathbb{E}X_{n}$ to the mean-field limit $\rho_{n}$. 
Note also that $\mathbb{E}(|X_{n}-\mathbb{E}X_{n}|^{2})=\mathbb{E}(|X_{n}|^{2})-|\mathbb{E}X_{n}|^{2}=1+\frac{\beta}{2}(n-1)-|\mathbb{E}X_{n}|^{2}$,
L>0$, and thus $N^{(k_{\ell})}_{1}(u)>N^{(k_{\ell})}_{2}(u)>0$ for all $u\in[t,t+\epsilon]$, for all $\ell$ large enough. It follows that $X(j)=0$ for all $j=\lfloor k_{\ell}t\rfloor+1,\dots,\lfloor k_{\ell}(t+\epsilon)\rfloor$, and that the minima in equations (27) and (28) are never attained in the third case (otherwise we would have $N^{(k_{\ell})}_{2}(u)=0$ for some $u\in[t,t+\epsilon]$). Therefore, we have $\displaystyle M_{1}(j)$ $\displaystyle=\min\Big{\\{}Q_{2}(j)-D_{2}(j)+S_{2}(j),\,\,Q_{1}(j)-D_{1}(j)+S_{1}(j)\Big{\\}},$ $\displaystyle M_{2}(j)$ $\displaystyle=\min\Big{\\{}Q_{3}(j)-D_{3}(j)+S_{3}(j)-M_{1}(j),\,\,Q_{2}(j)-D_{2}(j)+S_{2}(j)\Big{\\}},$ where $Q(\cdot)$ is defined recursively as $\displaystyle Q_{1}(u+1)$ $\displaystyle=Q_{1}(u)-D_{1}(u)+S_{1}(u)-M_{1}(u)$ $\displaystyle Q_{2}(u+1)$ $\displaystyle=Q_{2}(u)-D_{2}(u)+S_{2}(u)-M_{1}(u)-M_{2}(u)$ $\displaystyle Q_{3}(u+1)$ $\displaystyle=Q_{3}(u)-D_{3}(u)+S_{3}(u)-M_{2}(u).$ Note that this corresponds to Case 1 in Lemma 8. Therefore, since $\omega\in\mathcal{C}$, Lemma 8 implies $\displaystyle r_{1}(t+\epsilon)-r_{1}(t)$ $\displaystyle=\lim\limits_{\ell\to\infty}R^{(k_{\ell})}_{1}(t+\epsilon)-R^{(k_{\ell})}_{1}(t)$ $\displaystyle=\epsilon.\,\overline{C}_{1},$ and $\displaystyle r_{2}(t+\epsilon)-r_{2}(t)$ $\displaystyle=\lim\limits_{\ell\to\infty}R^{(k_{\ell})}_{2}(t+\epsilon)-R^{(k_{\ell})}_{2}(t)$ $\displaystyle=\epsilon.\,\underline{C}_{2}.$ Dividing by $\epsilon$ and taking the limit as $\epsilon$ goes to zero we conclude that, when $n_{1}(t)>n_{2}(t)>0$, we have $\displaystyle\frac{dn_{1}(t)}{dt}$ $\displaystyle=\lambda_{1}-\overline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(t)}{dt}=\lambda_{2}-\underline{C}_{2}.$ Case (b): $n_{1}(t)=n_{2}(t)>0$ First note that, if $n_{1}(t)>0$ and $n_{2}(t)>0$, the same argument as in the previous case gives us $r_{i}(t+\epsilon)-r_{i}(t)\in\big{[}\epsilon\underline{C}_{i},\,\epsilon\overline{C}_{i}\big{]},$ (30) and $r_{1}(t+\epsilon)+r_{2}(t+\epsilon)-r_{1}(t)-r_{2}(t)=\epsilon C_{1,2},$ (31) for all sufficiently small $\epsilon$. Suppose that $\lambda_{1}-\overline{C}_{1}>\lambda_{2}-\underline{C}_{2}.$ Combining this with Equation (30), it follows that $n_{1}(u)>n_{2}(u)$ for all $u\in(t,t+\epsilon]$. Therefore $\displaystyle\frac{dn_{1}(u)}{du}$ $\displaystyle=\lambda_{1}-\overline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(u)}{du}=\lambda_{2}-\underline{C}_{2},$ for all $u\in(t,t+\epsilon]$. Since $t$ is a regular time, we also have $\displaystyle\frac{dn_{1}(t)}{dt}$ $\displaystyle=\lambda_{1}-\overline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(t)}{dt}=\lambda_{2}-\underline{C}_{2}.$ Analogously, if $\lambda_{2}-\overline{C}_{2}>\lambda_{1}-\underline{C}_{1},$ we have $\displaystyle\frac{dn_{1}(t)}{dt}$ $\displaystyle=\lambda_{1}-\underline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(t)}{dt}=\lambda_{2}-\overline{C}_{2}.$ Finally, suppose that $\frac{\lambda_{1}+\lambda_{2}-C_{1,2}}{2}\geq\max\left\\{\lambda_{1}-\overline{C}_{1},\,\,\lambda_{2}-\overline{C}_{2}\right\\}.$ In particular, this means that $\lambda_{1}-\overline{C}_{1}\leq\lambda_{2}-\underline{C}_{2}\qquad\text{and}\qquad\lambda_{2}-\overline{C}_{2}\leq\lambda_{1}-\underline{C}_{1}.$ Therefore, we have $\frac{dn_{1}(u)}{du}-\frac{dn_{2}(u)}{du}\leq 0$ when $n_{1}(u)>n_{2}(u)$, and $\frac{dn_{1}(u)}{du}-\frac{dn_{2}(u)}{du}\geq 0$ when $n_{1}(u)<n_{2}(u)$. ###### Lemma 10. 
We have $\frac{dn_{1}(t)}{dt}-\frac{dn_{2}(t)}{dt}=0,$ and $\frac{dn_{1}(t)}{dt}=\frac{dn_{2}(t)}{dt}=\frac{\lambda_{1}+\lambda_{2}-C_{1,2}}{2}.$ ###### Proof. We prove this by contradiction. Suppose that $\frac{dn_{1}(t)}{dt}-\frac{dn_{2}(t)}{dt}>0.$ Since $n_{1}(t)=n_{2}(t)$ and $t$ is a regular point, we must have $n_{1}(u)>n_{2}(u)$ for all sufficiently small $u>t$. However, we have $\frac{dn_{1}(u)}{du}-\frac{dn_{2}(u)}{du}\leq 0$ for all sufficiently small $u>t$, which contradicts the regularity of $t$. Assuming $\frac{dn_{1}(t)}{dt}-\frac{dn_{2}(t)}{dt}<0$ yields the same contradiction. Therefore, we must have $\frac{dn_{1}(t)}{dt}-\frac{dn_{2}(t)}{dt}=0.$ Combining this with Equation (31), we obtain $\frac{dn_{1}(t)}{dt}=\frac{dn_{2}(t)}{dt}=\frac{\lambda_{1}+\lambda_{2}-C_{1,2}}{2}.$ ∎ Case (c): $n_{1}(t)>n_{2}(t)=0$ Suppose that $\lambda_{2}<\underline{C}_{2}$. Then, using the same argument as in Case (a) but now with only $N_{1}(\cdot)$ being infinitely backlogged and $N_{2}(\cdot)$ being stable (Lemma 3), we conclude that $\displaystyle\frac{dn_{1}(t)}{dt}$ $\displaystyle=\lambda_{1}-C_{1}(\lambda_{2})\qquad\text{and}\qquad\frac{dn_{2}(t)}{dt}=0.$ Suppose that $\lambda_{2}>\underline{C}_{2}$. Since $n_{2}(u)<n_{1}(u)$ for all sufficiently small $u>t$, we have $\displaystyle\frac{dr_{2}(t)}{dt}$ $\displaystyle\leq\underline{C}_{2}.$ (32) Therefore, we have $n_{2}(u)>0$ for all sufficiently small $u>t$. By Case (a), we have $\displaystyle\frac{dn_{1}(u)}{du}$ $\displaystyle=\lambda_{1}-\overline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(u)}{du}=\lambda_{2}-\underline{C}_{2}$ for all sufficiently small $u>t$. Using once more that $t$ is a regular time, it follows that $\displaystyle\frac{dn_{1}(t)}{dt}$ $\displaystyle=\lambda_{1}-\overline{C}_{1}\qquad\text{and}\qquad\frac{dn_{2}(t)}{dt}=\lambda_{2}-\underline{C}_{2}.$ Finally, suppose that $\lambda_{2}=\underline{C}_{2}$. Using the same argument as in Lemma 10, it can be checked that $\displaystyle\frac{dn_{2}(t)}{dt}$ $\displaystyle=0.$ Case (d): $n_{1}(t)=n_{2}(t)=0$ Since $t$ is a regular point, $n_{i}(0)>0$, and cases (a), (b), and (c) imply that $n_{i}(\cdot)$ can only become $0$ at a non-regular point, we must have $n_{1}(u)=n_{2}(u)=0$ for all sufficiently large $u<t$. It follows that $\frac{dn_{1}(u)}{du}=\frac{dn_{2}(u)}{du}=0$ for all sufficiently large $u<t$. Using once again that $t$ is a regular point yields $\frac{dn_{1}(t)}{dt}=\frac{dn_{2}(t)}{dt}=0.$ ∎ ### F.3 Completing the proof of Theorem 3 For every sample path in $\mathcal{C}$, we have established the following. Proposition 2 implies the existence of limit points of the sequence of processes $\left\\{N^{(k)}(\cdot)\right\\}_{k=1}^{\infty}$. Furthermore, according to Proposition 3, these limit points satisfy the differential equations of the fluid model. Combining this with the fact that the trajectories are Lipschitz continuous, and thus differentiable almost everywhere, we conclude that the limit points are fluid solutions. In particular, this means that fluid solutions exist. Finally, since the trajectories are piecewise linear and any component that hits zero stays at zero forever, the uniqueness of solutions follows from the uniqueness of the pieces that comprise them. In particular, this implies that all limit points are the same, and therefore the limit converges.
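To make the preceding case analysis concrete, here is a minimal Euler-scheme sketch of the fluid dynamics of cases (a)–(d); it is an illustration only, not part of the proof. The constants $\lambda_{i}$, $\overline{C}_{i}$, $\underline{C}_{i}$, $C_{1,2}$ and $C_{i}(\lambda_{j})$ are treated as given inputs with arbitrary illustrative values, the region $n_{1}<n_{2}$ is handled by swapping the roles of the two classes, the boundary $n_{1}=n_{2}$ is detected with a tolerance larger than the Euler step so that the sliding dynamics of case (b) are emulated, and the boundary sub-case $\lambda_{2}=\underline{C}_{2}$ of case (c) reuses the $\lambda_{2}<\underline{C}_{2}$ drift — all of these are assumptions of the sketch.

```python
import numpy as np

def fluid_drift(n1, n2, lam1, lam2, C1_hi, C1_lo, C2_hi, C2_lo, C12,
                C1_of_lam2, C2_of_lam1, eps=1e-2):
    """Drift (dn1/dt, dn2/dt) of the fluid model, following cases (a)-(d) for n1 >= n2.
    The region n1 < n2 is handled by exchanging the roles of the two classes (assumption)."""
    if n2 > n1 + eps:   # symmetric region (assumption: same rules with indices swapped)
        d2, d1 = fluid_drift(n2, n1, lam2, lam1, C2_hi, C2_lo, C1_hi, C1_lo, C12,
                             C2_of_lam1, C1_of_lam2, eps)
        return d1, d2
    if n1 > n2 + eps and n2 > eps:          # case (a): n1 > n2 > 0
        return lam1 - C1_hi, lam2 - C2_lo
    if abs(n1 - n2) <= eps and n1 > eps:    # case (b): n1 = n2 > 0
        if lam1 - C1_hi > lam2 - C2_lo:
            return lam1 - C1_hi, lam2 - C2_lo
        if lam2 - C2_hi > lam1 - C1_lo:
            return lam1 - C1_lo, lam2 - C2_hi
        half = (lam1 + lam2 - C12) / 2.0    # Lemma 10: equal drifts
        return half, half
    if n1 > eps and n2 <= eps:              # case (c): n1 > n2 = 0
        if lam2 > C2_lo:
            return lam1 - C1_hi, lam2 - C2_lo
        return lam1 - C1_of_lam2, 0.0       # lam2 <= C2_lo (boundary sub-case: assumption)
    return 0.0, 0.0                         # case (d): n1 = n2 = 0

# Euler integration with illustrative (made-up) constants satisfying the stability conditions.
lam1, lam2 = 0.4, 0.3
C1_hi, C1_lo, C2_hi, C2_lo, C12 = 0.6, 0.35, 0.55, 0.32, 0.8
C1_of_lam2, C2_of_lam1 = 0.5, 0.45
n1, n2, dt = 5.0, 2.0, 5e-3
for _ in range(20_000):
    d1, d2 = fluid_drift(n1, n2, lam1, lam2, C1_hi, C1_lo, C2_hi, C2_lo, C12,
                         C1_of_lam2, C2_of_lam1)
    n1, n2 = max(n1 + dt * d1, 0.0), max(n2 + dt * d2, 0.0)
print(round(n1, 3), round(n2, 3))  # close to (0, 0) up to the tolerance: the fluid drains
```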
## Appendix G Proof of Theorem 2 * (i) When $\lambda_{1}+\lambda_{2}<C_{1,2}$, $\lambda_{1}<C_{1}(\lambda_{2})$, and $\lambda_{2}<C_{2}(\lambda_{1})$, theorems 3 and 4 imply that there exists $\delta>0$ such that, for any $n^{0}\in\mathbb{R}_{+}^{2}$ with $\|n^{0}\|_{1}>0$, if $N(0)=kn^{0}$, we have $\lim\limits_{k\to\infty}\frac{N_{i}(kt)}{k}=0,\qquad a.s.,$ for all $t\geq\delta\|n^{0}\|_{1}$. Moreover, Equation (13) implies that, for any $q^{0}\in\mathbb{R}_{+}^{3}$ with $\|q^{0}\|_{1}>0$, if $Q(0)=kq^{0}$, we have $\lim\limits_{k\to\infty}\frac{Q_{j}(kt)}{k}=0,\qquad a.s.,$ for all $t>0$. Therefore, the positive recurrence of $\big{(}{\bf N}(\cdot),{\bf Q}(\cdot)\big{)}$ when $\lambda_{1}+\lambda_{2}<C_{1,2}$, $\lambda_{1}<C_{1}(\lambda_{2})$, and $\lambda_{2}<C_{2}(\lambda_{1})$ follows from these finite-time convergences to $0$, and [11, Theorem 6.2]. * (ii) We first consider a coupled process $\left(\underline{N}(\cdot),\underline{\bf Q}(\cdot)\right)$, where the coupling with the original processes is done in a way so that $\underline{N}(0)={\bf N}_{1}(0)+{\bf N}_{2}(0)$, $\underline{\bf Q}_{1}(0)={\bf Q}_{1}(0)+{\bf Q}_{3}(0)$, and $\underline{\bf Q}_{2}(0)={\bf Q}_{2}(0)$ and so that they have the same arrival processes $A_{1}(\cdot)$, $A_{2}(\cdot)$, $S_{1}(\cdot)$, $S_{2}(\cdot)$, and $S_{3}(\cdot)$, and the same abandonment primitives $Z_{\ell}^{(i)}(\cdot)$. These new processes are defined recursively as $\displaystyle\underline{N}(t+1)$ $\displaystyle=\underline{N}(t)+A_{1}(t)+A_{2}(t)-\sum\limits_{\ell=1}^{\overline{\overline{M}}(t)}Y_{\ell}(t)$ $\displaystyle\underline{Q}_{1}(t+1)$ $\displaystyle=\underline{Q}_{1}(t)-\underline{D}_{1}(t)+S_{1}(t)+S_{3}(t)-\overline{\overline{M}}(t)$ $\displaystyle\underline{Q}_{2}(t+1)$ $\displaystyle=\underline{Q}_{2}(t)-\underline{D}_{2}(t)+S_{2}(t)-\overline{\overline{M}}(t),$ where $\overline{\overline{M}}(t)=\min\left\\{\underline{Q}_{1}(t)-\underline{D}_{1}(t)+S_{1}(t)+S_{3}(t),\,\,\underline{Q}_{2}(t)-\underline{D}_{2}(t)+S_{2}(t)\right\\}.$ This corresponds to a system where three-way matchings are always attempted if there are enough qubits, regardless of the requests available. Then, $\underline{N}(\cdot)$ keeps track of the difference between the total number of requests that arrived to the system, and the total number of successful three-way matchings. This can be negative, and it can be checked that $\underline{N}(t)\leq{\bf N}_{1}(t)+{\bf N}_{2}(t)$, for all $t\geq 0$, for any non-idling policy. Moreover, since $\underline{Q}_{1}(\cdot)$ and $\underline{Q}_{2}(\cdot)$ behave as two-sided queues with abandonments, we have $\lim\limits_{t\to\infty}\frac{1}{t}\sum\limits_{s=0}^{t}\sum\limits_{\ell=1}^{\overline{\overline{M}}(s)}Y_{\ell}(s)=C_{1,2}.$ Therefore, since $\lambda_{1}+\lambda_{2}>C_{1,2}$, we have that $\underline{N}(t)$ diverges to $+\infty$ almost surely and, since $\underline{N}(t)\leq{\bf N}_{1}(t)+{\bf N}_{2}(t)$ for all $t\geq 0$, the process $\big{(}{\bf N}(\cdot),{\bf Q}(\cdot)\big{)}$ is transient. On the other hand, suppose that $\lambda_{1}+\lambda_{2}\leq C_{1,2}$, and that $\lambda_{j}<\underline{C}_{j}$ and $\lambda_{i}>C_{i}(\lambda_{j})$. Analogously to the previous case, we can construct a coupled process $\underline{N}_{i}(\cdot)$ such that $N_{i}(t)\geq\underline{N}_{i}(t)$ for all $t\geq 0$, and such that its throughput is $C_{i}(\lambda_{j})$. 
Then, since $\lambda_{i}>C_{i}(\lambda_{j})$, we have that $\underline{N}_{i}(\cdot)$ diverges to $+\infty$ almost surely and, since $N_{i}(t)\geq\underline{N}_{i}(t)$ for all $t\geq 0$, the process $\big{(}{\bf N}(\cdot),{\bf Q}(\cdot)\big{)}$ is transient.
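The coupled lower-bound process $(\underline{N},\underline{Q}_{1},\underline{Q}_{2})$ used in part (ii) can also be simulated directly. The sketch below is a simplified illustration only: it assumes Bernoulli arrivals and Bernoulli qubit-generation successes, sets the abandonment terms $\underline{D}_{j}$ to zero, and models each attempted three-way matching as succeeding independently with a fixed probability; none of these distributional choices come from the paper. With $\lambda_{1}+\lambda_{2}$ larger than the resulting matching rate, the sample average of $\underline{N}(t)/t$ stays positive, illustrating the transience argument.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative rates (assumptions for this sketch, not taken from the paper).
lam1, lam2 = 0.45, 0.40          # request arrival probabilities per time slot
p1, p2, p3 = 0.5, 0.5, 0.5       # per-slot success probabilities of S_1, S_2, S_3
q = 0.9                          # success probability of each attempted three-way matching

T = 200_000
N_low, Q1, Q2 = 0, 0, 0          # underline{N}, underline{Q}_1, underline{Q}_2 (no abandonments)
for _ in range(T):
    A1, A2 = rng.binomial(1, lam1), rng.binomial(1, lam2)
    S1, S2, S3 = rng.binomial(1, p1), rng.binomial(1, p2), rng.binomial(1, p3)
    M = min(Q1 + S1 + S3, Q2 + S2)   # always attempt as many matchings as the qubits allow
    Y = rng.binomial(M, q)           # number of successful matchings among the M attempts
    N_low += A1 + A2 - Y             # requests in minus successful three-way matchings
    Q1 = Q1 + S1 + S3 - M
    Q2 = Q2 + S2 - M
print(N_low / T)   # positive long-run slope: lam1 + lam2 exceeds the achievable matching rate
```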
# Artificial-Noise-Aided Secure MIMO Wireless Communications via Intelligent Reflecting Surface Sheng Hong, Cunhua Pan, Hong Ren, Kezhi Wang, and Arumugam Nallanathan This work was supported by the National Natural Science Foundation of China (61661032), the Young Natural Science Foundation of Jiangxi Province (20181BAB202002), the China Postdoctoral Science Foundation (2017M622102), the Foundation from China Scholarship Council (201906825071). S. Hong is with the Information Engineering School of Nanchang University, Nanchang 330031, China. (email: [email protected]). C. Pan, H. Ren, and A. Nallanathan are with the School of Electronic Engineering and Computer Science at Queen Mary University of London, London E1 4NS, U.K. (e-mail: {c.pan, h.ren, a.nallanathan}@qmul.ac.uk). K. Wang is with the Department of Computer and Information Sciences, Northumbria University, UK. (email: [email protected]). ###### Abstract This paper considers a MIMO secure wireless communication system aided by the physical layer security technique of sending artificial noise (AN). To further enhance the system security performance, the advanced intelligent reflecting surface (IRS) is invoked in the AN-aided communication system, where the base station (BS), legitimate information receiver (IR) and eavesdropper (Eve) are equipped with multiple antennas. With the aim of maximizing the secrecy rate (SR), the transmit precoding (TPC) matrix at the BS, the covariance matrix of the AN and the phase shifts at the IRS are jointly optimized subject to the constraints of the transmit power limit and the unit modulus of the IRS phase shifts. Then, the secrecy rate maximization (SRM) problem is formulated, which is a non-convex problem with multiple coupled variables. To tackle it, we propose to utilize the block coordinate descent (BCD) algorithm to alternately update the TPC matrix, AN covariance matrix, and phase shifts while keeping the SR non-decreasing. Specifically, the optimal TPC matrix and AN covariance matrix are derived by the Lagrangian multiplier method, and the optimal phase shifts are obtained by the Majorization-Minimization (MM) algorithm. Since all variables can be calculated in closed form, the proposed algorithm is very efficient. We also extend the SRM problem to the more general multiple-IRs scenario and propose a BCD algorithm to solve it. Finally, simulation results validate the effectiveness of system security enhancement via an IRS. ###### Index Terms: Intelligent Reflecting Surface (IRS), Reconfigurable Intelligent Surfaces, Secure Communication, Physical Layer Security, Artificial Noise (AN), MIMO. ## I Introduction The next-generation (i.e., 6G) communication system is expected to be sustainable, green, cost-effective and secure [1]. In particular, secure communication is crucially important in 6G communication networks since the communication environment becomes increasingly complicated and the security of private information is imperative [2]. Information security using cryptographic encryption (in the network layer) is a conventional secure communication technique, which suffers from vulnerabilities such as secret key distribution, protection and management [3]. Unlike this network layer security approach, physical layer security can guarantee good security performance while bypassing the manipulation of secret keys, and is thus more attractive to academia and industry [4]. There are various physical-layer secrecy scenarios.
The first one is the classical physical-layer secrecy setting where there is one legitimate information receiver (IR) and one eavesdropper (Eve) operating over a single-input-single-output (SISO) channel (i.e., the so-called three-terminal SISO Gaussian wiretap channel) [5, 6]. The second one considers physical-layer secrecy with an IR and an Eve operating over a multiple-input-single-output (MISO) channel, which is called the three-terminal MISO Gaussian wiretap channel. The third one is a renewed and timely scenario with one IR and one Eve operating over a multiple-input-multiple-output (MIMO) channel, which is named the three-terminal MIMO Gaussian wiretap channel [7, 8] and is the focus of this paper. For MIMO systems, a novel idea in physical-layer security is to transmit artificial noise (AN) from the base station (BS) to contaminate the Eve’s received signal [9, 10, 11]. For these AN-aided methods, a portion of the transmit power is assigned to the artificially generated noise to interfere with the Eve, which should be carefully designed. For AN-aided secrecy systems, while most of the existing AN-aided design papers focused on the MISO wiretap channel and null-space AN [7, 12], designing the transmit precoding (TPC) matrix together with the AN covariance matrix for the MIMO wiretap channel is more challenging [13]. In general, the secrecy rate (SR) achieved by the mutual information difference between the legitimate IR and the Eve is limited by the channel difference between the BS-IR link and the BS-Eve link. The AN-aided method can further improve the SR, but it consumes the transmit power destined for the legitimate IR. When the transmit power is limited, a performance bottleneck always exists for AN-aided secure communication. To overcome this dilemma, the recently proposed intelligent reflecting surface (IRS) technique can be exploited. Since a higher SR can be achieved by enhancing the channel quality of the BS-IR link and degrading the channel condition of the BS-Eve link, the IRS can serve as a powerful complement to AN-aided secure communication due to its capability of reconfiguring the wireless propagation environment. The IRS technique has been regarded as a revolutionary technique to control and reconfigure the wireless environment [14, 15],[16]. An IRS comprises an array of reflecting elements, which can reflect the incident electromagnetic (EM) wave passively, and the complex reflection coefficient contains the phase shift and amplitude. In practical applications, the phase shifts of the reflection coefficients are discrete due to the manufacturing cost [17]. However, many works on IRS-aided wireless communications are based on the assumption of continuous phase shifts [18],[19]. To investigate the potential effect of the IRS on secure communication, we also assume continuous phase shifts to simplify the problem. We evaluate its impact on the system performance in the simulation section. Theoretically, the reflection amplitude of each IRS element can be adjusted for different purposes [20]. However, considering the hardware cost, the reflection amplitude is usually assumed to be 1 for simplicity. Hence, by smartly tuning the phase shifts with a preprogrammed controller, the direct signals from the BS and the reflected signals from the IRS can be combined constructively or destructively according to different requirements.
In comparison to the existing related techniques which the IRS resembles, such as active intelligent surfaces [21], traditional reflecting surfaces [22], backscatter communication [23] and amplify-and-forward (AF) relays [24], the IRS has the advantages of flexible real-time reconfiguration of the phase shifts, minor additional power consumption, easy installation with many reflecting elements, etc. Furthermore, due to its light weight and compact size, the IRS can be integrated into traditional communication systems with minor modifications [25]. Because of these appealing virtues, the IRS has been introduced into various wireless communication systems, including the single-user case [26, 27], the downlink multiuser case [28, 18, 29, 30, 31], mobile edge computing [32], wireless information and power transfer design [33], and physical layer security design [34, 35, 36, 37]. The IRS is promising for strengthening the system security of wireless communication. In [34, 36, 38], the authors investigated the problem of maximizing the achievable SR in a secure MISO communication system aided by an IRS, where both the legitimate user and the eavesdropper are equipped with a single antenna. The TPC matrix at the BS and the phase shifts at the IRS were optimized by an alternating optimization (AO) strategy. To handle the nonconvex unit modulus constraint, the semidefinite relaxation (SDR) [39], majorization-minimization (MM) [18, 40], and complex circle manifold (CCM) [41] techniques were proposed to optimize the phase shifts. An IRS-assisted MISO secure communication with a single IR and a single Eve was also considered in [35], but it was limited to a special scenario, where the Eve has a stronger channel than the IR, and the two channels from the BS to the Eve and the IR are highly correlated. Under this assumption, the transmit beamforming and the IRS reflection beamforming are jointly optimized to improve the SR. Similarly, a secure IRS-assisted downlink MISO broadcast system was considered in [37], which assumes that multiple legitimate IRs and multiple Eves lie in the same directions from the BS, implying that the IR channels are highly correlated with the Eve channels. [42] considered the transmission design for an IRS-aided secure MISO communication with a single IR and a single Eve, in which the system energy consumption is minimized under the two assumptions that the channels of the access point (AP)-IRS links are rank-one and full-rank, respectively. An IRS-assisted MISO network with cooperative jamming was investigated in [2]. The physical layer security in a simultaneous wireless information and power transfer (SWIPT) system was considered with the aid of an IRS [43]. However, there is a paucity of papers considering IRS-assisted secure communication with AN. A secure MISO communication system aided by transmit jamming and AN was considered in [44], where a large number of Eves exist, and the AN beamforming vector and jamming vector were optimized to reap the additional degrees of freedom (DoF) brought by the IRS. [45] investigated the resource allocation problem in an IRS-assisted MISO communication system by jointly optimizing the beamforming vectors, the phase shifts of the IRS, and the AN covariance matrix for secrecy rate maximization (SRM), but the direct BS-IR links and direct BS-Eve links are assumed to be blocked. Although a few papers have studied security enhancement for an AN-aided system through the IRS, the existing papers related to this topic either only studied the MISO scenario or assumed special channel settings.
The investigation of the MIMO scenario with general channel settings is absent from the existing literature. Hence, we investigate this problem in this paper by employing an IRS in an AN-aided MIMO communication system for physical layer security enhancement. Specifically, by carefully designing the phase shifts of the IRS, the reflected signals are combined with the direct signals constructively for enhancing the data rate at the IR and destructively for decreasing the rate at the Eve. As a result, the TPC matrix and AN covariance matrix at the BS can be designed flexibly with a higher DoF than in the case without an IRS. In this work, the TPC matrix, AN covariance matrix and the phase shift matrix are jointly optimized. Since these optimization variables are highly coupled, an efficient algorithm based on the block coordinate descent (BCD) and MM techniques for solving the problem is proposed. We summarize our main contributions as follows: 1. This is the first research on exploiting an IRS to enhance security in AN-aided MIMO communication systems. Specifically, an SRM problem is formulated by jointly optimizing the TPC matrix and AN covariance matrix at the BS, together with the phase shifts of the IRS, subject to the maximum transmit power limit and the unit modulus constraint of the phase shifters. The objective function (OF) of this problem is the difference of two Shannon capacity expressions, and is thus not jointly concave over the three highly-coupled variables. To handle it, the popular minimum mean-square error (MMSE) algorithm is used to reformulate the SRM problem. 2. The BCD algorithm is exploited to optimize the variables alternately. Firstly, given the phase shifts of the IRS, the optimal TPC matrix and AN covariance matrix are obtained in closed form by utilizing the Lagrangian multiplier method. Then, given the TPC matrix and AN covariance matrix, the optimization problem for the IRS phase shifts is transformed by sophisticated matrix manipulations into a quadratically constrained quadratic program (QCQP) problem subject to unit modulus constraints. To solve it, the MM algorithm is utilized, where the phase shifts are derived in closed form iteratively. Based on the BCD-MM algorithm, the originally formulated SRM problem can be solved efficiently. 3. The SRM problem is also extended to the more general scenario of multiple legitimate IRs. A new BCD algorithm is proposed to solve it, where the optimal TPC matrix and AN covariance matrix are obtained by solving a QCQP problem, and the unit modulus constraint is handled by the penalty convex-concave procedure (CCP) method. 4. The simulation results confirm that, on the one hand, the IRS can greatly enhance the security of an AN-aided MIMO communication system; on the other hand, the phase shifts of the IRS should be properly optimized. Simulation results also show that a larger number of IRS elements and more transmit power are beneficial to the security. Moreover, a properly selected IRS location and good channel states of the IRS-related links are important to realize the full potential of the IRS. This paper is organized as follows. Section II provides the signal model of an AN-aided MIMO communication system assisted by an IRS, and the SRM problem formulation. The SRM problem is reformulated in Section III, where the BCD-MM algorithm is proposed to optimize the TPC matrix, AN covariance matrix and phase shifts of the IRS. Section IV extends the SRM problem to the more general scenario of multiple IRs.
In Section V, numerical simulations are given to validate the algorithm efficiency and security enhancement. Section VI concludes this paper. _Notations_: Throughout this paper, boldface lower case, boldface upper case and regular letters are used to denote vectors, matrices, and scalars, respectively. ${\bf{X}}\odot{\bf{Y}}$ is the Hadamard product of $\bf X$ and $\bf Y$. ${\rm{Tr}}\left({\bf{X}}\right)$ and $\left|{\bf{X}}\right|$ denote the trace and determinant of ${\bf{X}}$, respectively. ${{\mathbb{C}}^{M\times N}}$ denotes the space of $M\times N$ complex matrices. ${\rm{Re}}\\{\cdot\\}$ and $\arg\\{\cdot\\}$ denote the real part of a complex value and the extraction of phase information, respectively. ${\rm{diag}}\\{\cdot\\}$ is the operator for diagonalization. ${\cal C}{\cal N}({\bm{\mu}},{\bf{Z}})$ represents a circularly symmetric complex Gaussian (CSCG) random vector with mean ${\bm{\mu}}$ and covariance matrix ${\bf{Z}}$. ${\left(\cdot\right)^{\rm{T}}}$, ${\left(\cdot\right)^{\rm{H}}}$ and ${\left(\cdot\right)^{\rm{\ast}}}$ denote the transpose, Hermitian and conjugate operators, respectively. $(\cdot)^{\star}$ stands for the optimal value, and $(\cdot)^{{\dagger}}$ means the pseudo-inverse. $[\cdot]^{+}$ is the projection onto the non-negative numbers, i.e., if $y=[x]^{+}$, then $y=\max\\{0,x\\}$. ## II Signal Model and Problem Formulation ### II-A Signal Model We consider an IRS-aided communication network shown in Fig. 1 that consists of a BS, a legitimate IR and an Eve, all of which are equipped with multiple antennas. The number of transmit antennas at the BS is ${{N}_{T}}\geq 1$, and the numbers of receive antennas at the legitimate IR and the Eve are ${{N}_{I}}\geq 1$ and ${{N}_{E}}\geq 1$, respectively. To ensure secure transmission from the BS to the IR, AN is sent from the BS to interfere with the eavesdropper and achieve strong secrecy. Figure 1: An AN-aided MIMO secure communication system with IRS. With the above assumptions, the BS employs the TPC matrix to transmit data streams with AN. The transmitted signal can be modeled as $\displaystyle{\bf{x}}={\bf{Vs}}+{\bf{n}},$ (1) where ${\bf{V}}\in{{\mathbb{C}}^{{{N}_{T}}\times d}}$ is the TPC matrix; the number of data streams is $d\leq\min({{N}_{T}},{{N}_{I}})$; the transmitted data towards the IR is $\mathbf{s}\sim\mathcal{C}\mathcal{N}(0,{\mathbf{I}_{d}})$; and $\mathbf{n}\sim{\cal C}{\cal N}({\bm{0}},{\bf{Z}})$ represents the AN random vector with zero mean and covariance matrix $\mathbf{Z}$. Assuming that the wireless signals propagate in a non-dispersive and narrow-band manner, we model the equivalent channels of the BS-IRS link, the BS-IR link, the BS-Eve link, the IRS-IR link, and the IRS-Eve link by the matrices $\mathbf{G}\in{{\mathbb{C}}^{M\times{{N}_{T}}}}$, ${{\bf{H}}_{b,I}}\in{{\mathbb{C}}^{{{N}_{I}}\times{{N}_{T}}}}$, ${{\bf{H}}_{b,E}}\in{{\mathbb{C}}^{{{N}_{E}}\times{{N}_{T}}}}$, ${{\bf{H}}_{R,I}}\in{{\mathbb{C}}^{{{N}_{I}}\times M}}$, ${{\bf{H}}_{R,E}}\in{{\mathbb{C}}^{{{N}_{E}}\times M}}$, respectively. The phase shift coefficients of the IRS are collected in a diagonal matrix defined by ${\bf{\Phi}}=\mathrm{diag}\\{{{\phi}_{1}},\cdots,{{\phi}_{m}},\cdots,{{\phi}_{M}}\\}$ and ${{\phi}_{m}}={{e}^{j{{\theta}_{m}}}}$, where ${{\theta}_{m}}\in[0,2\pi]$ denotes the phase shift of the $m$-th reflection element.
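The signal model above can be prototyped in a few lines. The following Python sketch (an illustration only) draws i.i.d. Rayleigh-fading placeholders for the channels $\mathbf{G}$, ${\bf{H}}_{b,I}$, ${\bf{H}}_{b,E}$, ${\bf{H}}_{R,I}$, ${\bf{H}}_{R,E}$, builds a unit-modulus reflection matrix ${\bf{\Phi}}$, and forms the transmitted signal ${\bf{x}}={\bf{Vs}}+{\bf{n}}$ of (1) with an un-optimized precoder and AN covariance; the channel statistics and the 80/20 power split are assumptions of the sketch, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Ni, Ne, M, d = 4, 2, 2, 16, 2       # N_T, N_I, N_E, IRS elements, data streams

def cn(shape):
    """i.i.d. CN(0,1) entries (placeholder Rayleigh-fading model, not the paper's channels)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

G    = cn((M, Nt))     # BS -> IRS
H_bI = cn((Ni, Nt))    # BS -> IR
H_bE = cn((Ne, Nt))    # BS -> Eve
H_RI = cn((Ni, M))     # IRS -> IR
H_RE = cn((Ne, M))     # IRS -> Eve

theta = rng.uniform(0.0, 2.0 * np.pi, M)
Phi = np.diag(np.exp(1j * theta))       # unit-modulus reflection coefficients phi_m = e^{j theta_m}

# Transmitted signal x = V s + n of (1), with an un-optimized precoder V and AN factor V_E
# (Z = V_E V_E^H), scaled to an arbitrary 80/20 split of the power budget P_T (assumption).
P_T = 10.0
V = cn((Nt, d))
V *= np.sqrt(0.8 * P_T / np.trace(V @ V.conj().T).real)
V_E = cn((Nt, Nt))
V_E *= np.sqrt(0.2 * P_T / np.trace(V_E @ V_E.conj().T).real)
s = cn((d, 1))
x = V @ s + V_E @ cn((Nt, 1))           # AN sample n = V_E w with w ~ CN(0, I)

# Effective channels seen through the IRS.
H_I_hat = H_bI + H_RI @ Phi @ G
H_E_hat = H_bE + H_RE @ Phi @ G
print(H_I_hat.shape, H_E_hat.shape, np.trace(V @ V.conj().T + V_E @ V_E.conj().T).real)  # ~ P_T
```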
The multi-path signals that have been reflected multiple times are assumed to be absorbed and diffracted. Then, the signal received at the legitimate IR is given by ${\bf{y}}_{I}=({\bf{H}}_{b,I}+{\bf{H}}_{R,I}{\bf{\Phi}}{\bf{G}}){\bf{x}}+{\bf{n}}_{I},$ (2) where ${{\bf{n}}_{I}}$ is the random noise vector at the IR obeying the distribution ${{\bf{n}}_{I}}\sim\mathcal{C}\mathcal{N}({\bf{0}},\sigma_{I}^{2}{\bf{I}}_{{{N}_{I}}})$. The signal received at the Eve is $\displaystyle{\bf{y}}_{E}=({\bf{H}}_{b,E}+{\bf{H}}_{R,E}{\bf{\Phi}}{\bf{G}}){\bf{x}}+{{\bf{n}}_{E}},$ (3) where ${\bf{n}}_{E}$ is the Eve’s noise vector following the distribution ${\bf{n}}_{E}\sim\mathcal{C}\mathcal{N}({\bf{0}},\sigma_{E}^{2}{\bf{I}}_{{{N}_{E}}})$. Assume that the BS has acquired all the channel state information (CSI). Then the BS is responsible for optimizing the IRS phase shifts and sending them back to the IRS controller through a separate low-rate communication link such as wireless links [20], [17] or wired lines [46]. The assumption of perfect CSI knowledge is idealistic, since CSI estimation for IRS networks is challenging. However, the algorithms developed here allow us to derive the relevant performance upper bounds for realistic scenarios in the presence of CSI errors. Recently, we have investigated the design of robust and secure transmission in IRS-aided MISO wireless communication systems in [47] by considering the statistical CSI error model associated with the cascaded channels for the eavesdropper. Its extension to the MIMO scenario will be studied in our future work. Upon substituting $\bf{x}$ into (2), ${\bf{y}}_{I}$ can be rewritten as $\displaystyle{\bf{y}}_{I}={\hat{\bf{H}}_{I}}({\bf{V}}{\bf{s}}+{\bf{n}})+{{\bf{n}}_{I}}{\rm{=}}{\hat{\bf{H}}_{I}}{\bf{V}}{\bf{s}}+{\hat{\bf{H}}_{I}}{\bf{n}}+{{\bf{n}}_{I}},$ (4) where ${{\hat{\bf{H}}}_{I}}\overset{\triangle}{=}{{\bf{H}}_{b,I}}+{{\bf{H}}_{R,I}}{\bf{\Phi}}{\bf{G}}$ is defined as the equivalent channel spanning from the BS to the legitimate IR. Then, the data rate (bit/s/Hz) achieved by the legitimate IR is given by $\displaystyle{R_{I}}({\bf{V}},{\bf{\Phi}},{\bf{Z}})={\rm{log}}\left|{{\bf{I}}+{{\hat{\bf{H}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{I}^{H}{\bf{J}}_{I}^{-1}}\right|,$ (5) where ${{\bf{J}}_{I}}$ is the interference-plus-noise covariance matrix given by ${{\bf{J}}_{I}}\overset{\triangle}{=}{{\hat{\bf{H}}}_{I}}{\bf{Z}}{{\hat{\bf{H}}}_{I}}^{H}+\sigma_{I}^{2}{{\bf{I}}_{{{N}_{I}}}}$.
The achievable secrecy rate is given by $\displaystyle{{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{\bf{\Phi}},{\bf{Z}})$ $\displaystyle{\rm{=}}[{R_{I}}({\bf{V}},{\bf{\Phi}},{\bf{Z}})-{R_{E}}({\bf{V}},{\bf{\Phi}},{\bf{Z}}){]^{+}}$ $\displaystyle={\rm{log}}\left|{{\bf{I}}+{{\hat{\bf{H}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{I}^{H}{\bf{J}}_{I}^{-1}}\right|-{\rm{log}}\left|{{\bf{I}}+{{\hat{\bf{H}}}_{E}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{E}^{H}{\bf{J}}_{E}^{-1}}\right|$ $\displaystyle={\rm{log}}\left|{{\bf{I}}+{{\hat{\bf{H}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{I}^{H}{{({{\hat{\bf{H}}}_{I}}{\bf{Z}}{{\hat{\bf{H}}}_{I}}^{H}+\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}})}^{-1}}}\right|$ $\displaystyle\quad-{\rm{log}}\left|{{\bf{I}}+{{\hat{\bf{H}}}_{E}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{E}^{H}{{({{\hat{\bf{H}}}_{E}}{\bf{Z}}{{\hat{\bf{H}}}_{E}}^{H}+\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}})}^{-1}}}\right|.$ (8) ### II-B Problem Formulation With the aim of maximizing the SR, the TPC matrix ${\bf{V}}$ at the BS, the AN covariance matrix ${\bf{Z}}$ at the BS, and the phase shift matrix ${\bf{\Phi}}$ at the IRS should be optimized jointly subject to the constraints of the maximum transmit power and unit modulus of phase shifts. Hence, we formulate the SRM problem as $\displaystyle\ \underset{{\bf{V}},{\bf{\Phi}},{\bf{Z}}}{\max}\ \ {{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{\bf{\Phi}},{\bf{Z}}{\rm{)}}$ (9a) $\displaystyle\ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{\bf{Z}}{\rm{)}}\leq{P_{T}},$ (9b) $\displaystyle\quad\quad\quad{\bf{Z}}\succeq 0,$ (9c) $\displaystyle\quad\quad\quad\\!\left|{{{{\phi}}_{m}}}\right|=1,m=1,\cdots,M,$ (9d) where ${P_{T}}$ is the maximum transmit power limit. The optimal value of the SR in (9) is always non-negative, which can be proved by contradiction: if the optimal value of the SR were negative, we could simply set the TPC matrix ${\bf{V}}$ to the zero matrix, and the resulting SR would be equal to zero, which is larger than a negative SR. By the variable substitution ${\bf{Z}}={{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}$, where ${{\bf{V}}_{E}}\in{{\mathbb{C}}^{{{N}_{T}}\times{{N}_{T}}}}$, Problem (9) is equivalent to $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}}{\max}\ \ {{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ (10a) $\displaystyle\ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}}\leq{P_{T}},$ (10b) $\displaystyle\quad\quad\quad\\!\left|{{{{\phi}}_{m}}}\right|=1,m=1,\cdots,M,$ (10c) where the OF in (10a) is obtained by substituting ${\bf{Z}}={{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}$ into (II-A). In (10a), the expression of the OF is difficult to tackle, and the variables ${\bf{V}}$, ${\bf{V}}_{E}$ and ${\bf{\Phi}}$ are coupled with each other, which makes Problem (10) difficult to solve. In addition, the unit modulus constraint imposed on the phase shifts in (10c) aggravates the difficulty. In the following, we provide a low-complexity algorithm to solve this problem. ## III A Low-Complexity Algorithm of BCD-MM Firstly, the OF of Problem (10) is equivalently reformulated into a more tractable expression. Then, the BCD-MM method is proposed for optimizing the TPC matrix ${\bf{V}}$, ${\bf{V}}_{E}$, and the phase shift matrix ${\bf{\Phi}}$ alternately.
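Before turning to the reformulation, note that the objective (8) is straightforward to evaluate for any candidate $({\bf{V}},{\bf{\Phi}},{\bf{Z}})$, which is all a BCD iteration needs for monitoring the SR. The following self-contained Python sketch (an illustration only, with random placeholder channels and an un-optimized precoder and AN covariance) computes ${{\rm{C}}_{AN}}$ for the effective channels $\hat{\bf{H}}_{I}$ and $\hat{\bf{H}}_{E}$, with rates reported in bits per channel use.

```python
import numpy as np

def logdet(A):
    """log2|A| for a (numerically) positive definite matrix."""
    return np.linalg.slogdet(A)[1] / np.log(2)

def secrecy_rate(H_I, H_E, V, Z, sigma_I2=1.0, sigma_E2=1.0):
    """C_AN(V, Phi, Z) of (8), where H_I and H_E are the effective (IRS-included) channels."""
    Ni, Ne = H_I.shape[0], H_E.shape[0]
    J_I = H_I @ Z @ H_I.conj().T + sigma_I2 * np.eye(Ni)      # interference-plus-noise at IR
    J_E = H_E @ Z @ H_E.conj().T + sigma_E2 * np.eye(Ne)      # interference-plus-noise at Eve
    R_I = logdet(np.eye(Ni) + H_I @ V @ V.conj().T @ H_I.conj().T @ np.linalg.inv(J_I))
    R_E = logdet(np.eye(Ne) + H_E @ V @ V.conj().T @ H_E.conj().T @ np.linalg.inv(J_E))
    return max(R_I - R_E, 0.0)                                # the [.]^+ operation in (8)

# Random placeholder instance (channels and precoders are NOT optimized here).
rng = np.random.default_rng(0)
cn = lambda *sh: (rng.standard_normal(sh) + 1j * rng.standard_normal(sh)) / np.sqrt(2)
Nt, Ni, Ne, d, P_T = 4, 2, 2, 2, 10.0
H_I, H_E = cn(Ni, Nt), cn(Ne, Nt)
V = cn(Nt, d)
V *= np.sqrt(0.8 * P_T / np.trace(V @ V.conj().T).real)
V_E = cn(Nt, Nt)
V_E *= np.sqrt(0.2 * P_T / np.trace(V_E @ V_E.conj().T).real)
print(secrecy_rate(H_I, H_E, V, V_E @ V_E.conj().T))          # SR in bits per channel use
```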
### III-A Reformulation of the Original Problem Firstly, the achievable SR ${{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ in (II-A) can be further simplified as $\displaystyle{{\rm{C}}_{AN}}\rm{(}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}\rm{)}$ $\displaystyle{\rm{=log}}\left|{{\bf{I}}_{{N_{I}}}+{{\hat{\bf{H}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{I}^{H}{{({{\hat{\bf{H}}}_{I}}{\bf{Z}}{{\hat{\bf{H}}}_{I}}^{H}+\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}})}^{-1}}}\right|{\rm{+log}}\left|{{{\hat{\bf{H}}}_{E}}{\bf{Z}}{{\hat{\bf{H}}}_{E}}^{H}+\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}}\right|$ $\displaystyle\quad-{\rm{log}}\left|{{{\hat{\bf{H}}}_{E}}{\bf{Z}}{{\hat{\bf{H}}}_{E}}^{H}+\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}+{{\hat{\bf{H}}}_{E}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{E}^{H}}\right|$ $\displaystyle=\underbrace{{\rm{log}}\left|{{\bf{I}}_{{N_{I}}}+{{\hat{\bf{H}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}\hat{\bf{H}}_{I}^{H}{{({{\hat{\bf{H}}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{{\hat{\bf{H}}}_{I}}^{H}+\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}})}^{-1}}}\right|}_{{f_{1}}}$ $\displaystyle\quad{\rm{+}}\underbrace{{\rm{log}}\left|{{{\bf{I}}_{{N_{E}}}}+{{\hat{\bf{H}}}_{E}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{{\hat{\bf{H}}}_{E}}^{H}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}})^{-1}}\right|}_{{f_{2}}}$ $\displaystyle\quad\underbrace{-{\rm{log}}\left|{{{\bf{I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{\bf{H}}}_{E}}({\bf{V}}{{\bf{V}}^{H}}+{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}})\hat{\bf{H}}_{E}^{H}}\right|}_{{f_{3}}}.$ (11) The expression in $f_{1}$ represents the data rate of the legitimate IR, which can be reformulated by exploiting the relationship between the data rate and the mean-square error (MSE) for the optimal decoding matrix. Specifically, the linear decoding matrix ${{\bf{U}}_{I}}\in{{\mathbb{C}}^{{{N}_{T}}\times{d}}}$ is applied to estimate the signal vector $\hat{{\bf{s}}}$ for the legitimate IR, and the MSE matrix of the legitimate IR is given by $\displaystyle{{\bf{E}}_{I}}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}})$ $\displaystyle\buildrel\Delta\over{=}{{\mathbb{E}}_{{\bf{s}},{\bf{n}},{{\bf{n}}_{I}}}}\left[{(\hat{\bf{s}}-{\bf{s}}){{(\hat{\bf{s}}-{\bf{s}})}^{H}}}\right]$ $\displaystyle{\rm{=}}({{\bf{U}}_{I}}^{H}{{\hat{\bf{H}}}_{I}}{\bf{V}}-{\bf{I}}_{d}){({{\bf{U}}_{I}}^{H}{{\hat{\bf{H}}}_{I}}{\bf{V}}-{\bf{I}}_{d})^{H}}+{{\bf{U}}_{I}}^{H}({{\hat{\bf{H}}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}{{\hat{\bf{H}}}_{I}}^{H}{\rm{+}}\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}}){{\bf{U}}_{I}}.$ (12) By introducing an auxiliary matrix ${{\bf{W}}_{I}}\succeq 0$, ${{\bf{W}}_{I}}\in{{\mathbb{C}}^{{d}\times{d}}}$ and exploiting the fact 3) of Lemma 4.1 in [48], we have $\displaystyle f_{1}$ $\displaystyle{\rm{=}}\mathop{\text{missing}}{max}\limits_{{{\bf{U}}_{I}},{{\bf{W}}_{I}}\succeq 0}h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}})$ $\displaystyle\buildrel\Delta\over{=}\mathop{\text{missing}}{max}\limits_{{{\bf{U}}_{I}},{{\bf{W}}_{I}}\succeq 0}\log\left|{{{\bf{W}}_{I}}}\right|-{\rm{Tr}}({{\bf{W}}_{I}}{{\bf{E}}_{I}}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}}))+d.$ (13) $h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}})$ is concave with respect to (w.r.t.) each matrix of the matrices ${{\bf{U}}_{I}}$,${\bf{V}}$,${{\bf{V}}_{E}}$,${{\bf{W}}_{I}}$ by fixing the other three matrices. 
According to the facts 1) and 2) of Lemma 4.1 in [48], the optimal ${{\bf{U}}^{\star}_{I}}$, ${{\bf{W}}^{\star}_{I}}$ to achieve the maximum value of $h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}})$ are given by $\displaystyle{{\bf{U}}^{\star}_{I}}{\rm{=}}\text{arg}\mathop{\max}\limits_{{{\bf{U}}_{I}}}h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}}){\rm{=}}({\hat{\bf{H}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\hat{\bf{H}}_{I}}^{H}{\rm{+}}\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}}{\rm{+}}{\hat{\bf{H}}_{I}}{\bf{V}}{{\bf{V}}^{H}}{\hat{\bf{H}}_{I}}^{H})^{-1}{\hat{\bf{H}}_{I}}{\bf{V}},$ (14) $\displaystyle{{\bf{W}}^{\star}_{I}}{\rm{=}}\text{arg}\mathop{\max}\limits_{{{\bf{W}}_{I}}\succeq 0}h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}}){\rm{=[}}{{\bf{E}}^{\star}_{I}}({{\bf{U}}^{\star}_{I}},{\bf{V}},{{\bf{V}}_{E}}){]^{-1}},$ (15) where ${{\bf{E}}^{\star}_{I}}$ is obtained by plugging the expression of ${{\bf{U}}^{\star}_{I}}$ into ${{\bf{E}}_{I}}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}})$ as $\displaystyle{{\bf{E}}^{\star}_{I}}({{\bf{U}}^{\star}_{I}},{\bf{V}},{{\bf{V}}_{E}}){\rm{=}}({{\bf{U}}^{{\star}H}_{I}}{\hat{\bf{H}}_{I}}{\bf{V}}-{\bf{I}}_{d}){({{\bf{U}}^{{\star}H}_{I}}{\hat{\bf{H}}_{I}}{\bf{V}}-{\bf{I}}_{d})^{H}}+{{\bf{U}}^{{\star}H}_{I}}({\hat{\bf{H}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\hat{\bf{H}}_{I}}^{H}{\rm{+}}\sigma_{I}^{2}{{\bf{I}}_{{N_{I}}}}){{\bf{U}}^{{\star}}_{I}}{{\rm{}}}.$ (16) Similarly, by introducing the auxiliary variables ${{\bf{W}}_{E}}\succeq 0$, ${{\bf{W}}_{E}}\in{{\mathbb{C}}^{{{N}_{T}}\times{{N}_{T}}}}$, ${{\bf{U}}_{E}}\in{{\mathbb{C}}^{{{N}_{E}}\times{{N}_{T}}}}$, and exploiting fact 3) of Lemma 4.1 in [48], we have $\displaystyle f_{2}$ $\displaystyle{\rm{=}}\mathop{\max}\limits_{{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0}h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})$ $\displaystyle\buildrel\Delta\over{=}\mathop{\max}\limits_{{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0}\log\left|{{{\bf{W}}_{E}}}\right|-{\rm{Tr}}({{\bf{W}}_{E}}{{\bf{E}}_{E}}({{\bf{U}}_{E}},{{\bf{V}}_{E}}))+{N}_{T},$ (17) $h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})$ is concave w.r.t. each of the matrices ${{\bf{U}}_{E}}$, ${{\bf{V}}_{E}}$, ${{\bf{W}}_{E}}$ when the other two matrices are given. 
According to the facts 1) and 2) of Lemma 4.1 in [48], the optimal ${{\bf{U}}^{\star}_{E}}$, ${{\bf{W}}^{\star}_{E}}$ to achieve the maximum value of $h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})$ are given by $\displaystyle{{\bf{U}}^{\star}_{E}}{\rm{=}}\text{arg}\mathop{\max}\limits_{{{\bf{U}}_{E}}}h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}}){\rm{=}}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}{\rm{+}}{\hat{\bf{H}}_{E}}{{\bf{V}}_{E}}{{{\bf{V}}^{H}_{E}}}{\hat{\bf{H}}_{E}}^{H})^{-1}{\hat{\bf{H}}_{E}}{{\bf{V}}_{E}},$ (18) $\displaystyle{{\bf{W}}^{\star}_{E}}{\rm{=}}\text{arg}\mathop{\max}\limits_{{{\bf{W}}_{E}}\succeq 0}h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}}){\rm{=[}}{{\bf{E}}^{\star}_{E}}({{\bf{U}}^{\star}_{E}},{{\bf{V}}_{E}}){]^{-1}},$ (19) where ${{\bf{E}}^{\star}_{E}}$ is obtained by plugging the expression of ${{\bf{U}}^{\star}_{E}}$ into ${{\bf{E}}_{E}}({{\bf{U}}_{E}},{{\bf{V}}_{E}})$ as $\displaystyle\begin{array}[]{l}{{\bf{E}}^{\star}_{E}}({{\bf{U}}^{\star}_{E}},{{\bf{V}}_{E}})=({{\bf{U}}_{E}}^{{\star}H}{{\hat{\bf{H}}}_{E}}{\bf{V}}_{E}-{\bf{I}}_{N_{T}}){({{\bf{U}}^{{\star}H}_{E}}{{\hat{\bf{H}}}_{E}}{\bf{V}}_{E}-{\bf{I}}_{N_{T}})^{H}}+{{\bf{U}}^{{\star}H}_{E}}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}){{\bf{U}}^{\star}_{E}}.\end{array}$ (21) By using Lemma 1 in [13], we have $\displaystyle f_{3}$ $\displaystyle{\rm{=}}\mathop{\max}\limits_{{{\bf{W}}_{X}}\succeq 0}h_{3}({{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}})$ $\displaystyle{\rm{=}}\mathop{\max}\limits_{{{\bf{W}}_{X}}\succeq 0}\log\left|{{{\bf{W}}_{X}}}\right|-{\rm{Tr}}({{\bf{W}}_{X}}{{\bf{E}}_{X}}({{\bf{V}}},{{\bf{V}}_{E}}))+{N}_{E},$ (22) where ${{\bf{W}}_{X}}\succeq 0$, ${{\bf{W}}_{X}}\in{{\mathbb{C}}^{{{N}_{E}}\times{{N}_{E}}}}$ is the introduced auxiliary variable, and $\displaystyle\begin{array}[]{l}{{\bf{E}}_{X}}({{\bf{V}}},{{\bf{V}}_{E}})\buildrel\Delta\over{=}{{{\bf{I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{\bf{H}}}_{E}}({\bf{V}}{{\bf{V}}^{H}}+{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}})\hat{\bf{H}}_{E}^{H}}.\end{array}$ (24) $h_{3}({{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}})$ is concave w.r.t. each of the matrices ${{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}}$ when the other two matrices are given. 
The optimal ${{\bf{W}}^{\star}_{X}}$ to achieve the maximum value of $h_{3}({{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}})$ is $\displaystyle{{\bf{W}}^{\star}_{X}}{\rm{=}}\text{arg}\mathop{\max}\limits_{{{\bf{W}}_{X}}\succeq 0}h_{3}({{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}}){\rm{=[}}{{\bf{E}}_{X}}({{\bf{V}}},{{\bf{V}}_{E}}){]^{-1}}.$ (25) By substituting (13), (17), and (22) into (11), we have $\displaystyle{{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}}{\rm{)}}$ $\displaystyle=\mathop{\max}\limits_{{{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}}}{{\rm{C}}^{l}_{AN}}({{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}},{\bf{V}},{{\bf{V}}_{E}}),$ (26) where $\displaystyle{{\rm{C}}^{l}_{AN}}({{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}},{\bf{V}},{{\bf{V}}_{E}})\buildrel\Delta\over{=}$ $\displaystyle h_{1}({{\bf{U}}_{I}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I}})+h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})$ $\displaystyle+h_{3}({{\bf{V}}},{{\bf{V}}_{E}},{{\bf{W}}_{X}}).$ (27) Obviously, ${{\rm{C}}^{l}_{AN}}({{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}},{\bf{V}},{{\bf{V}}_{E}})$ is a concave function for each of the matrices ${{\bf{U}}_{I}}$,${{\bf{W}}_{I}}$,${{\bf{U}}_{E}}$,${{\bf{W}}_{E}}$,${{\bf{W}}_{X}}$,${\bf{V}}$,${{\bf{V}}_{E}}$ when the other six matrices are given. By substituting (26) into Problem (10), we have the following equivalent problem: $\displaystyle\ \underset{{{\bf{U}}_{I}},{{\bf{W}}_{I}}\succeq 0,{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0,{{\bf{W}}_{X}}\succeq 0,{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}}{\mathop{\max}}\ \ {{\rm{C}}^{l}_{AN}}({{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}},{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}})$ (28a) $\displaystyle\quad\quad\quad\quad\quad\quad\text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}{\rm{)}}\leq{P_{T}},$ (28b) $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\ \ \left|{{{{\phi}}_{m}}}\right|=1,m=1,\cdots,M.$ (28c) To solve Problem (28), we apply the BCD method, each iteration of which consists of the following two sub-iterations. Firstly, with given ${\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}$, update ${{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}}$ by using (14), (15), (18), (19), and (25), respectively. 
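For concreteness, this first sub-iteration admits a direct implementation; the following is a minimal NumPy sketch of the closed-form updates (14), (15), (18), (19) and (25), where the variable names and shapes are illustrative.

```python
import numpy as np

def update_auxiliary(HI, HE, V, VE, sig2_I, sig2_E):
    """First BCD sub-iteration: closed-form updates of (U_I, W_I, U_E, W_E, W_X)
    from (14), (15), (18), (19) and (25) for fixed (V, V_E, Phi).
    HI, HE are the effective channels H_I_hat (N_I x N_T) and H_E_hat (N_E x N_T)."""
    NI, NE = HI.shape[0], HE.shape[0]
    d, NT = V.shape[1], VE.shape[0]
    Qs = HI @ V @ V.conj().T @ HI.conj().T          # signal part seen by the IR
    Qa = HI @ VE @ VE.conj().T @ HI.conj().T        # AN part seen by the IR
    U_I = np.linalg.solve(Qa + sig2_I * np.eye(NI) + Qs, HI @ V)                  # (14)
    T_I = U_I.conj().T @ HI @ V - np.eye(d)
    E_I = T_I @ T_I.conj().T + U_I.conj().T @ (Qa + sig2_I * np.eye(NI)) @ U_I    # (16)
    W_I = np.linalg.inv(E_I)                                                      # (15)
    Qe = HE @ VE @ VE.conj().T @ HE.conj().T
    U_E = np.linalg.solve(sig2_E * np.eye(NE) + Qe, HE @ VE)                      # (18)
    T_E = U_E.conj().T @ HE @ VE - np.eye(NT)
    E_E = T_E @ T_E.conj().T + sig2_E * U_E.conj().T @ U_E                        # (21)
    W_E = np.linalg.inv(E_E)                                                      # (19)
    W_X = np.linalg.inv(np.eye(NE)
                        + HE @ (V @ V.conj().T + VE @ VE.conj().T) @ HE.conj().T / sig2_E)  # (25)
    return U_I, W_I, U_E, W_E, W_X
```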
Secondly, with given ${{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}}$, update ${\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}$ by solving the following subproblem: $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}}{\mathop{\min}}\ \ -\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V})$ $\displaystyle\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}^{H}_{E}}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}})$ (29a) $\displaystyle\ \ \text{s.t.}\quad{\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}}\leq P_{T},$ (29b) $\displaystyle\quad\quad\quad\\!\left|{{{{\phi}}_{m}}}\right|=1,m=1,\cdots,M,$ (29c) where $\displaystyle{{\mathbf{H}}_{V}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{\mathbf{\hat{H}}}_{I}}+\sigma_{E}^{-2}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},$ (30) $\displaystyle{{\mathbf{H}}_{VE}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{\mathbf{\hat{H}}}_{I}}+{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{\mathbf{\hat{H}}}_{E}}+\sigma_{E}^{-2}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}}.$ (31) Problem (29) is obtained from Problem (28) by treating ${{\bf{U}}_{I}},{{\bf{W}}_{I}},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}}$ as constants, and the specific derivations are given in Appendix A. It is obvious that Problem (29) is much easier to tackle than Problem (10) due to the convex quadratic OF in (29a). We now focus on solving Problem (29) instead of Problem (10), where the matrices ${\bf{V}}$, ${\bf{V}}_{E}$, and the phase shift matrix ${\bf{\Phi}}$ will be optimized. ### III-B Optimizing the Matrices ${\bf{V}}$ and ${\bf{V}}_{E}$ In this subsection, the TPC matrix ${\bf{V}}$ and the matrix ${\bf{V}}_{E}$ are optimized by fixing $\mathbf{\Phi}$. Specifically, the unit-modulus constraint on the phase shifts $\mathbf{\Phi}$ is removed, and the optimization problem reduced from Problem (29) is given by $\displaystyle\ \ \underset{{\bf{V}},{{\bf{V}}_{E}}}{\mathop{\min}}\ \ -\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V})$ $\displaystyle\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}^{H}_{E}}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}})$ (32a) $\displaystyle\ \ \text{s.t.}\quad{\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}}\leq P_{T}.$ (32b) The above problem is a convex QCQP problem, and standard optimization packages such as CVX [49] can be exploited to solve it. However, the computational burden is heavy. 
To reduce the complexity, near-optimal closed-form expressions of the TPC matrix and the AN covariance matrix are derived by applying the Lagrangian multiplier method. Since Problem (32) is a convex problem, Slater's condition is satisfied, and hence the duality gap between Problem (32) and its dual problem is zero. Thus, Problem (32) can be solved by addressing its dual problem if the dual problem is easier to handle. For this purpose, by introducing the Lagrange multiplier $\lambda$ to combine the constraint and the OF of Problem (32), the Lagrangian function of Problem (32) is obtained as $\displaystyle\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)$ $\displaystyle\\!\buildrel\Delta\over{=}\\!-{\rm{Tr}}\left({{\bf{W}}_{I}}{{\bf{V}}^{H}}{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}\right)\\!-\\!{\rm{Tr}}\left({{{\bf{W}}_{I}}{\bf{U}}_{I}^{H}{{{\bf{\hat{H}}}}_{I}}{\bf{V}}}\right)\\!+\\!{\rm{Tr}}\left({{{\bf{V}}^{H}}{{\bf{H}}_{V}}{\bf{V}}}\right)\\!-\\!{\rm{Tr}}\left({{{\bf{W}}_{E}}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}}\right)$ $\displaystyle\quad-{\rm{Tr}}\left({{{\bf{W}}_{E}}{\bf{U}}_{E}^{H}{{{\bf{\hat{H}}}}_{E}}{{\bf{V}}_{E}}}\right)+{\rm{Tr}}\left({{\bf{V}}_{E}^{H}{{\bf{H}}_{VE}}{{\bf{V}}_{E}}}\right)+\lambda[{{\rm{Tr}}\left({{\bf{V}}{{\bf{V}}^{H}}+{{\bf{V}}_{E}}{\bf{V}}_{E}^{H}}\right)}-P_{T}]$ $\displaystyle=-{\rm{Tr}}\left({{{\bf{W}}_{I}}{{\bf{V}}^{H}}{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}}\right)-{\rm{Tr}}\left({{{\bf{W}}_{I}}{\bf{U}}_{I}^{H}{{{\bf{\hat{H}}}}_{I}}{\bf{V}}}\right)+{\rm{Tr}}\left[{{{\bf{V}}^{H}}\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right){\bf{V}}}\right]$ $\displaystyle\quad-{\rm{Tr}}\left({{{\bf{W}}_{E}}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}}\right)\\!-\\!{\rm{Tr}}\left({{{\bf{W}}_{E}}{\bf{U}}_{E}^{H}{{{\bf{\hat{H}}}}_{E}}{{\bf{V}}_{E}}}\right)\\!+\\!{\rm{Tr}}\left[{{\bf{V}}_{E}^{H}\left({{{\bf{H}}_{VE}}\\!+\\!\lambda{\bf{I}}}\right){{\bf{V}}_{E}}}\right]\\!-\\!\lambda{P_{T}}.$ (33) Then the dual problem of Problem (32) is $\displaystyle\mathop{\max}\limits_{\lambda}\quad\quad{\rm{}}h\left(\lambda\right)$ (34a) $\displaystyle\text{s.t.}\quad\quad{\rm{}}\lambda\geq 0,$ (34b) where $h\left(\lambda\right)$ is the dual function given by $\displaystyle h\left(\lambda\right)\buildrel\Delta\over{=}\mathop{\min}\limits_{{\bf{V}},{{\bf{V}}_{E}}}{\rm{}}\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right).$ (35) Note that Problem (35) is an unconstrained convex quadratic optimization problem, which can be solved in closed form. The optimal solution ${\bf{V}^{\star}},{{\bf{V}^{{\star}}}_{E}}$ for Problem (35) is $\displaystyle[{\bf{V}^{\star}},{{\bf{V}^{{\star}}}_{E}}]=\text{arg}\mathop{\min}\limits_{{\bf{V}},{{\bf{V}}_{E}}}{\rm{}}\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right).$ (36) By setting the first-order derivative of $\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)$ w.r.t. 
${{{\bf{V}}}}$ to zero matrix, we can obtain the optimal solution of ${\bf{V}}$ as follows: $\displaystyle\frac{\partial{\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)}}{{\partial{\bf{V}}}}=\bf{0},$ (37a) $\displaystyle\frac{\partial{\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)}}{{\partial{{\bf{V}}_{E}}}}=\bf{0}.$ (37b) The left hand side of Equation (37a) can be expanded as $\displaystyle\frac{\partial{\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)}}{{\partial{\bf{V}}}}$ $\displaystyle=\frac{{\partial{\rm{Tr}}\left[{{{\bf{V}}^{H}}\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right){\bf{V}}}\right]}}{{\partial{\bf{V}}}}-\left({{{\bf{W}}_{I}}{\bf{U}}_{I}^{H}{{{\bf{\hat{H}}}}_{I}}}\right)^{H}-\left({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}}\right)$ $\displaystyle=2\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right){\bf{V}}-2\left({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}}\right).$ (38) The equation (37a) becomes $\displaystyle\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right){\bf{V}}=\left({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}}\right).$ (39) Then the optimal solution ${\bf{V}^{\star}}$ for Problem (36) is $\displaystyle{{\bf{V}}^{\star}}$ $\displaystyle=\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right)^{{\dagger}}\left({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}}\right)$ $\displaystyle\buildrel\Delta\over{=}{{\bf{\Theta}}_{V}}\left(\lambda\right)\left({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}}\right).$ (40) Similarly, we solve Problem (36) by setting the first-order derivative of $\mathcal{L}\left({{\bf{V}},{{\bf{V}}_{E}},\lambda}\right)$ w.r.t. ${{{\bf{V}}_{E}}}$ to zero matrix, which becomes $\displaystyle 2\left({{{\bf{H}}_{VE}}+\lambda{\bf{I}}}\right){{\bf{V}}_{E}}-2{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}=\bf{0}.$ (41) Then the optimal solution ${\bf{V}}_{E}^{\star}$ for Problem (36) is $\displaystyle{\bf{V}}_{E}^{\star}$ $\displaystyle=\left({{{\bf{H}}_{VE}}+\lambda{\bf{I}}}\right)^{{\dagger}}{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}$ $\displaystyle\buildrel\Delta\over{=}{{\bf{\Theta}}_{VE}}\left(\lambda\right){\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}.$ (42) Once the optimal solution ${\lambda}^{\star}$ for Problem (34) is found, the final optimal ${\bf{V}^{\star}},{\bf{V}}_{E}^{\star}$ can be obtained. 
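In other words, once the multiplier $\lambda$ is fixed, (40) and (42) give ${\bf{V}}$ and ${\bf{V}}_{E}$ in closed form, and $\lambda^{\star}$ can then be located by a one-dimensional search on the transmit power, as detailed next. The sketch below is a minimal NumPy implementation combining the two steps; it uses a plain bisection together with the trace-based upper bound derived later in (54), instead of the more efficient SVD-based evaluation of $P(\lambda)$, and all function and argument names are illustrative assumptions.

```python
import numpy as np

def precoders_for_lambda(lam, H_V, H_VE, A_I, A_E):
    """Closed forms (40) and (42), with A_I = H_I_hat^H U_I W_I and A_E = H_E_hat^H U_E W_E^H."""
    V  = np.linalg.pinv(H_V  + lam * np.eye(H_V.shape[0]))  @ A_I   # (40)
    VE = np.linalg.pinv(H_VE + lam * np.eye(H_VE.shape[0])) @ A_E   # (42)
    return V, VE

def transmit_power(lam, H_V, H_VE, A_I, A_E):
    V, VE = precoders_for_lambda(lam, H_V, H_VE, A_I, A_E)
    return np.real(np.trace(V @ V.conj().T) + np.trace(VE @ VE.conj().T))

def optimal_precoders(H_V, H_VE, A_I, A_E, P_T, tol=1e-8):
    """If the power constraint is inactive at lambda = 0, take lambda* = 0; otherwise
    bisect on the monotonically decreasing P(lambda) until the complementary-slackness
    condition P(lambda*) = P_T is met."""
    if transmit_power(0.0, H_V, H_VE, A_I, A_E) <= P_T:
        lam = 0.0
    else:
        lo = 0.0
        hi = np.sqrt((np.trace(A_I @ A_I.conj().T)
                      + np.trace(A_E @ A_E.conj().T)).real / P_T)   # upper bound, cf. (54)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if transmit_power(mid, H_V, H_VE, A_I, A_E) > P_T:
                lo = mid
            else:
                hi = mid
        lam = 0.5 * (lo + hi)
    return precoders_for_lambda(lam, H_V, H_VE, A_I, A_E)
```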
The value of ${\lambda}^{\star}$ should be chosen in order to guarantee the complementary slackness condition as $\displaystyle\lambda[{\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{+}}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}}-P_{T}]=0.$ (43) We define $\displaystyle P(\lambda)$ $\displaystyle\buildrel\Delta\over{=}{\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{+}}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}}={\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{)}}+{\rm{Tr(}}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}},$ (44) where $\displaystyle{\rm{Tr}}\left({{\bf{V}}^{\star}{{\bf{V}}^{{\star}H}}}\right)$ $\displaystyle={\rm{Tr}}\left({{{\bf{\Theta}}_{V}}\left(\lambda\right)({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})}({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})^{H}{{\bf{\Theta}}^{H}_{V}}\left(\lambda\right)\right)$ $\displaystyle={\rm{Tr}}\left({{{\bf{\Theta}}^{H}_{V}}\left(\lambda\right){{\bf{\Theta}}_{V}}\left(\lambda\right)({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})}({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})^{H}\right),$ (45) $\displaystyle{\rm{Tr}}\left({{{\bf{V}}_{E}^{{\star}H}}{\bf{V}}_{E}^{\star}}\right)$ $\displaystyle={\rm{Tr}}\left({{{\bf{\Theta}}_{VE}}\left(\lambda\right)({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})}({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})^{H}{{\bf{\Theta}}^{H}_{VE}}\left(\lambda\right)\right)$ $\displaystyle={\rm{Tr}}\left({{{\bf{\Theta}}^{H}_{VE}}\left(\lambda\right){{\bf{\Theta}}_{VE}}\left(\lambda\right)({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})}({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})^{H}\right).$ (46) Then $P(\lambda)$ becomes $\displaystyle P(\lambda)={\rm{Tr}}\left({{{\bf{\Theta}}^{n}_{V}}({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})^{H}}\right)+{\rm{Tr}}\left({{{\bf{\Theta}}^{n}_{VE}}({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})^{H}}\right),$ (47) where $\displaystyle{{\bf{\Theta}}^{n}_{V}}$ $\displaystyle={{\bf{\Theta}}^{H}_{V}}\left(\lambda\right){{\bf{\Theta}}_{V}}\left(\lambda\right)=\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right)^{{\dagger}H}\left({{{\bf{H}}_{V}}+\lambda{\bf{I}}}\right)^{{\dagger}},$ (48) $\displaystyle{{\bf{\Theta}}^{n}_{VE}}$ $\displaystyle={{\bf{\Theta}}^{H}_{VE}}\left(\lambda\right){{\bf{\Theta}}_{VE}}\left(\lambda\right)=\left({{{\bf{H}}_{VE}}+\lambda{\bf{I}}}\right)^{{\dagger}H}\left({{{\bf{H}}_{VE}}+\lambda{\bf{I}}}\right)^{{\dagger}}.$ (49) To find the optimal ${\lambda^{\star}}\geq 0$, we first check whether $\lambda=0$ is the optimal solution or not. If $\displaystyle P(0)={\rm{Tr}}\left({{{\bf{V}}^{{\star}H}}(0){\bf{V}}^{\star}(0)}\right)+{\rm{Tr}}\left({{\bf{V}}_{E}^{{\star}H}(0){{\bf{V}}_{E}}^{\star}(0)}\right)\leq{P_{T}},$ (50) then the optimal solutions are given by ${{\bf{V}}^{\star}}={{\bf{V}}}(0)$ and ${{\bf{V}}_{E}^{\star}}={{\bf{V}}_{E}}(0)$. Otherwise, the optimal $\lambda^{\star}>0$ is the solution of the equation $P(\lambda)=P_{T}$. It is readily verified that ${{\bf{H}}_{V}}$ and ${{\bf{H}}_{VE}}$ are positive semidefinite matrices. Let us define the ranks of ${{\bf{H}}_{V}}$ and ${{\bf{H}}_{VE}}$ as $r_{V}={\rm{rank}}({\bf{H}}_{V})\leq N_{T}$ and $r_{VE}={\rm{rank}}({\bf{H}}_{VE})\leq N_{T}$, respectively. 
By decomposing ${{\bf{H}}_{V}}$ and ${{\bf{H}}_{VE}}$ by using the singular value decomposition (SVD), we have ${{\bf{H}}_{V}}=\left[{{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}}\right]{{\bf{\Sigma}}_{V}}{\left[{{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}}\right]^{\rm{H}}},{{\bf{H}}_{VE}}=\left[{{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}}\right]{{\bf{\Sigma}}_{VE}}{\left[{{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}}\right]^{\rm{H}}},$ (51) where ${\bf{P}}_{V,1}$ comprises the first $r_{V}$ singular vectors associated with the $r_{V}$ positive eigenvalues of ${{\bf{H}}_{V}}$, and ${\bf{P}}_{V,2}$ includes the last $N_{T}-r_{V}$ singular vectors associated with the $N_{T}-r_{V}$ zero-valued eigenvalues of ${{\bf{H}}_{V}}$, ${{\bm{\Sigma}}_{V}}={\rm{diag}}\left\\{{{{\bm{\Sigma}}_{V,1}},{{\bf{0}}_{\left({{N_{T}}-{r_{V}}}\right)\times\left({{N_{T}}-{r_{V}}}\right)}}}\right\\}$ with ${\bm{\Sigma}}_{V,1}$ representing the diagonal submatrix collecting the first $r_{V}$ positive eigenvalues. Similarly, the first $r_{VE}$ singular vectors corresponding to the $r_{VE}$ positive eigenvalues of ${{\bf{H}}_{VE}}$ are contained in ${\bf{P}}_{VE,1}$, while the last $N_{T}-r_{VE}$ singular vectors corresponding to the $N_{T}-r_{VE}$ zero- valued eigenvalues of ${{\bf{H}}_{VE}}$ are held in ${\bf{P}}_{VE,2}$. ${{\bm{\Sigma}}_{VE}}={\rm{diag}}\left\\{{{{\bm{\Sigma}}_{{VE},1}},{{\bf{0}}_{\left({{N_{T}}-{r_{VE}}}\right)\times\left({{N_{T}}-{r_{VE}}}\right)}}}\right\\}$ is a diagonal matrix with ${\bm{\Sigma}}_{{VE},1}$ representing the diagonal submatrix gathering the first $r_{VE}$ positive eigenvalues. By defining ${{\bf{P}}_{V}}\buildrel\Delta\over{=}\left[{{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}}\right]$ and ${{\bf{P}}_{VE}}\buildrel\Delta\over{=}\left[{{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}}\right]$, and substituting (51) into (48) and (49), $P(\lambda)$ becomes $\displaystyle P(\lambda)={\rm{Tr}}\left({[{\left({{{\bf{P}}_{V}}{{\bf{\Sigma}}_{V}}{\bf{P}}_{V}^{H}+\lambda{{\bf{P}}_{V}}{\bf{P}}_{V}^{H}}\right)^{-1}}{\left({{{\bf{P}}_{V}}{{\bf{\Sigma}}_{V}}{\bf{P}}_{V}^{H}+\lambda{{\bf{P}}_{V}}{\bf{P}}_{V}^{H}}\right)^{-1}}]({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})^{H}}\right)$ $\displaystyle\\!\\!+\\!\\!{\rm{Tr}}\\!\left(\\!{[\\!{\left(\\!{{{\bf{P}}_{VE}}{{\bf{\Sigma}}_{VE}}{\bf{P}}_{VE}^{H}\\!+\\!\lambda{{\bf{P}}_{VE}}{\bf{P}}_{VE}^{H}}\\!\right)^{-1}}\\!\\!{\left({{{\bf{P}}_{VE}}{{\bf{\Sigma}}_{VE}}{\bf{P}}_{VE}^{H}\\!+\\!\lambda{{\bf{P}}_{VE}}{\bf{P}}_{VE}^{H}}\right)^{-1}}\\!]\\!(\\!{{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}}\\!)\\!({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})^{H}}\\!\right)$ $\displaystyle={\rm{Tr}}\left({[{\left({{{\bf{\Sigma}}_{V}}+\lambda{\bf{I}}}\right)^{-2}}]{\bf{Z}}_{V}}\right)+{\rm{Tr}}\left({[{\left({{{\bf{\Sigma}}_{VE}}+\lambda{\bf{I}}}\right)^{-2}}]{\bf{Z}}_{VE}}\right)$ $\displaystyle{=}\sum\limits_{i=1}^{r_{V}}\left[{\frac{{{\left[{{\bf{Z}}_{V}}\right]}_{i,i}}}{{{\left({{{\left[{{\Sigma}_{V}}\right]}_{i,i}}\\!+\\!\lambda}\right)}^{2}}}}\right]+\sum\limits_{i=1}^{r_{VE}}\left[{\frac{{{\left[{{\bf{Z}}_{VE}}\right]}_{i,i}}}{{{\left({{{\left[{{\Sigma}_{VE}}\right]}_{i,i}}\\!+\\!\lambda}\right)}^{2}}}}\right]+\sum\limits_{i={r_{V}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{V}}\right]}_{i,i}}}{{{\left({\lambda}\right)}^{2}}}}\right]}\\!+\\!\sum\limits_{i={r_{VE}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{VE}}\right]}_{i,i}}}{{{\left({\lambda}\right)}^{2}}}}\right]},$ (52) where 
${{\bf{Z}}_{V}}={\bf{P}}_{V}^{H}({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})({{\bf{\hat{H}}}_{I}^{H}{{\bf{U}}_{I}}{\bf{W}}_{I}^{H}})^{H}{{\bf{P}}_{V}}$ and ${{\bf{Z}}_{VE}}={\bf{P}}_{VE}^{H}({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})({{\bf{\hat{H}}}_{E}^{H}{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}})^{H}{{\bf{P}}_{VE}}$. ${\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}$, ${\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}$, ${\left[{{{\Sigma}}_{V}}\right]}_{i,i}$, and ${\left[{{{\Sigma}}_{VE}}\right]}_{i,i}$ represent the $i$th diagonal elements of the matrices ${{{\bf{Z}}_{V}}}$, ${{\bf{Z}}_{VE}}$, ${{{\Sigma}}_{V}}$, and ${{{\Sigma}}_{VE}}$, respectively. The first line of (52) is obtained by substituting (51) into the expression of $P({\lambda})$ in (47). It can be verified from the last line of (52) that $P({\lambda})$ is a monotonically decreasing function. Then, the optimal $\lambda^{\star}$ can be obtained by solving the following equation, $\displaystyle\sum\limits_{i=1}^{r_{V}}\\!\left[{\frac{{{\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}}}{{{{\left({{{\left[{{{\Sigma}}_{V}}\right]}_{i,i}}+\lambda}\right)}^{2}}}}}\right]\\!+\\!\\!\sum\limits_{i=1}^{r_{VE}}\\!\left[{\frac{{{\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}}}{{{{\left({{{\left[{{{\Sigma}}_{VE}}\right]}_{i,i}}+\lambda}\right)}^{2}}}}}\right]\\!+\\!\\!\sum\limits_{i={r_{V}}+1}^{{N_{T}}}\\!{\left[{\frac{{{\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}}}{{{{\left({\lambda}\right)}^{2}}}}}\right]}\\!+\\!\\!\sum\limits_{i={r_{VE}}+1}^{{N_{T}}}\\!{\left[{\frac{{{\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}}}{{{{\left({\lambda}\right)}^{2}}}}}\right]}=P_{T}.$ (53) To solve it, the bisection search method is utilized. Since $P(\infty)=0$, the solution to Equation (53) must exist. The lower bound of $\lambda^{\star}$ is a positive value approaching zero, while the upper bound of $\lambda^{\star}$ is given by ${\lambda^{\star}}<\sqrt{\frac{{\sum\limits_{i=1}^{{N_{T}}}{{{\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}}}}+{\sum\limits_{i=1}^{{N_{T}}}{{{\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}}}}}{{{P_{T}}}}}\buildrel\Delta\over{=}\lambda^{{\rm{ub}}},$ (54) which can be proved as $\displaystyle{P}(\lambda^{{\rm{ub}}})$ $\displaystyle=\sum\limits_{i=1}^{r_{V}}{\frac{{{{\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}}}}{{{{\left({{{\left[{{{\Sigma}}_{V}}\right]}_{i,i}}+{\lambda^{{\rm{ub}}}}}\right)}^{2}}}}}+\sum\limits_{i=1}^{r_{VE}}{\frac{{{{\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}}}}{{{{\left({{{\left[{{{\Sigma}}_{VE}}\right]}_{i,i}}+{\lambda^{{\rm{ub}}}}}\right)}^{2}}}}}+\sum\limits_{i={r_{V}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{V}}\right]}_{i,i}}}{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}\right]}\\!+\\!\sum\limits_{i={r_{VE}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{VE}}\right]}_{i,i}}}{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}\right]}$ $\displaystyle<\sum\limits_{i=1}^{{N_{T}}}{\frac{{{{\left[{{{\bf{Z}}_{V}}}\right]}_{i,i}}}}{{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}}+\sum\limits_{i=1}^{{N_{T}}}{\frac{{{{\left[{{{\bf{Z}}_{VE}}}\right]}_{i,i}}}}{{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}}={P_{T}}.$ (55) When the optimal $\lambda^{{\star}}$ is found, the optimal matrices ${{\bf{V}}^{{\star}}}$ and ${{\bf{V}}_{E}^{{\star}}}$ can be obtained by substituting $\lambda^{\star}$ into (40) and (42). ### III-C Optimizing the Phase Shifts $\mathbf{\Phi}$ In this subsection, the phase shift matrix $\mathbf{\Phi}$ is optimized by fixing ${{\bf{V}}}$ and ${{\bf{V}}_{E}}$. 
The transmit power constraint in Problem (29) is only related to ${{\bf{V}}}$ and ${{\bf{V}}_{E}}$, and is thus removed. Then, the optimization problem for $\mathbf{\Phi}$ reduced from Problem (29) is formulated as $\displaystyle\ \ \underset{{\bf{\Phi}}}{\mathop{\min}}\ \ {g_{0}}(\mathbf{\Phi})\buildrel\Delta\over{=}-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V})$ $\displaystyle\quad\quad\quad\quad\quad\quad\ -\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}_{E}}^{H}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}})$ (56a) $\displaystyle\ \ \text{s.t.}\quad\left|{{{{\phi}}_{m}}}\right|=1,m=1,\cdots,M.$ (56b) With the aid of complex mathematical manipulations, which are given in detail in Appendix B, Problem (56) can be transformed into a form that facilitates the MM algorithm. Based on the derivations in Appendix B, the OF ${g_{0}}(\mathbf{\Phi})$ can be equivalently transformed into $\displaystyle{g_{0}}(\mathbf{\Phi})={\rm{Tr}}\left({{{\bf{\Phi}}^{H}{\bf{D}}^{H}}}\right)+{\rm{Tr}}\left({{\bf{\Phi D}}}\right)+{\rm{Tr}}\left[{{{\bf{\Phi}}^{H}}{{\bf{B}}_{VE}}{\bf{\Phi}}{{\bf{C}}_{VE}}}\right]+{\rm{Tr}}\left({{{\bf{\Phi}}^{H}}{{\bf{B}}_{V}}{\bf{\Phi}}{{\bf{C}}_{V}}}\right)+C_{t},$ (57) where $C_{t}$, ${\bf{D}}$, ${{\bf{C}}_{VE}}$, ${{\bf{C}}_{V}}$, ${{\bf{B}}_{VE}}$ and ${{\bf{B}}_{V}}$ are constants for ${\bf{\Phi}}$, and are given in Appendix B. By exploiting the matrix properties in [50, Eq. (1.10.6)], the trace operators can be removed, and the third and fourth terms in (57) become $\displaystyle{\rm{Tr}}\left({{{\bm{\Phi}}^{\rm{H}}}{\bf{B}}_{VE}{\bm{\Phi}}{\bf{C}}_{VE}}\right)={{\bm{\phi}}^{\rm{H}}}\left({{\bf{B}}_{VE}\odot{{\bf{C}}_{VE}^{\rm{T}}}}\right){\bm{\phi}},$ (58a) $\displaystyle{\rm{Tr}}\left({{{\bm{\Phi}}^{\rm{H}}}{\bf{B}}_{V}{\bm{\Phi}}{\bf{C}}_{V}}\right)={{\bm{\phi}}^{\rm{H}}}\left({{\bf{B}}_{V}\odot{{\bf{C}}_{V}^{\rm{T}}}}\right){\bm{\phi}},$ (58b) where ${\bm{\phi}}\buildrel\Delta\over{=}{\left[{{e^{j{\theta_{1}}}},\cdots,{e^{j{\theta_{m}}}},\cdots,{e^{j{\theta_{M}}}}}\right]^{\rm{T}}}$ is a vector holding the diagonal elements of ${\bm{\Phi}}$. Similarly, the trace operators can be removed for the first and second terms in (57) as ${\rm{Tr}}\left({{{\bm{\Phi}}^{\rm{H}}}{{\bf{D}}^{\rm{H}}}}\right)={{\bf{d}}^{\rm{H}}}({{\bm{\phi}}}^{*}),{\rm{Tr}}\left({{\bm{\Phi}}{\bf{D}}}\right)={\bm{\phi}}^{\rm{T}}{\bf{d}},$ (59) where ${\bf{d}}={\left[{{{\left[{\bf{D}}\right]}_{1,1}},\cdots,{{\left[{\bf{D}}\right]}_{M,M}}}\right]^{\rm{T}}}$ is a vector gathering the diagonal elements of the matrix ${\bf{D}}$. Hence, Problem (56) can be rewritten as $\displaystyle{\mathop{\min}\limits_{{\bm{\phi}}}\quad{{\bm{\phi}}^{\rm{H}}}{\bm{\Xi}}{\bm{\phi}}+{\bm{\phi}}^{\rm{T}}{\bf{d}}+{{\bf{d}}^{\rm{H}}}({{\bm{\phi}}}^{*})}$ (60a) $\displaystyle\textrm{s.t.}\quad\left|{{\phi_{m}}}\right|=1,m=1,\cdots,M,$ (60b) where $\bm{\Xi}={\bf{B}}_{VE}\odot{{\bf{C}}_{VE}^{\rm{T}}}+{\bf{B}}_{V}\odot{{\bf{C}}_{V}^{\rm{T}}}$. $\bm{\Xi}$ is a positive semidefinite matrix, because it is the sum of two positive semidefinite matrices, each of which is the Hadamard product of two positive semidefinite matrices. 
It is observed that ${\bf{B}}_{VE}$, ${{\bf{C}}_{VE}^{\rm{T}}}$, ${\bf{B}}_{V}$ and ${{\bf{C}}_{V}^{\rm{T}}}$ are positive semidefinite matrices. Then, the Hadamard products ${{\bf{B}}_{VE}\odot{{\bf{C}}_{VE}^{\rm{T}}}}$ and ${{\bf{B}}_{V}\odot{{\bf{C}}_{V}^{\rm{T}}}}$ are positive semidefinite according to Property (9) on Page 104 of [50]. Problem (60) can be further simplified as $\displaystyle{\mathop{\min}\limits_{\bm{\phi}}\quad f({\bm{\phi}})\buildrel\Delta\over{=}{{\bm{\phi}}^{\rm{H}}}{\bm{\Xi}}{\bm{\phi}}+2{\rm{Re}}\left\\{{{{\bm{\phi}}^{\rm{H}}}({{\bf{d}}}^{*})}\right\\}}$ (61a) $\displaystyle\textrm{s.t.}\quad\left|{{\phi_{m}}}\right|=1,m=1,\cdots,M.$ (61b) Problem (61) can be solved by the SDR technique [28] by transforming the unit-modulus constraints into a rank-one constraint; however, a rank-one solution cannot always be obtained, and the computational complexity of the SDR method is heavy. Thus, we propose to solve Problem (61) efficiently by the MM algorithm as in [25], where a closed-form solution can be obtained in each iteration. Details are omitted for brevity. ### III-D Overall Algorithm to Solve Problem (10) To sum up, the detailed execution of the overall BCD-MM algorithm proposed for solving Problem (10) is provided in Algorithm 1. The MM algorithm is exploited for solving the optimal phase shifts ${\bf{\Phi}}^{(n+1)}$ of Problem (61) in Step 5. The iteration process in the MM algorithm ensures that the OF value of Problem (61) decreases monotonically. Moreover, the BCD algorithm also guarantees that the OF value of Problem (29) monotonically decreases in each step and each iteration of Algorithm 1. Since the OF value in (29a) is lower bounded under the power limit, the convergence of Algorithm 1 is guaranteed. Algorithm 1 BCD-MM Algorithm 1: Parameter Setting. Set the maximum number of iterations $n_{\rm{max}}$ and the initial iteration number $n=1$; set the error tolerance $\varepsilon$. 2: Variables Initialization. Initialize the variables ${\bf{V}}^{(1)}$, ${\bf{V}}_{E}^{(1)}$ and ${\bf{\Phi}}^{(1)}$ in the feasible region; Compute the OF value of Problem (10) as ${\rm{OF(}}{{\bf{V}}^{(1)}},{{\bf{V}}^{(1)}_{E}},{\bf{\Phi}}^{(1)}{\rm{)}}$; 3: Auxiliary Variables Calculation. Given ${\bf{V}}^{(n)},{{\bf{V}}^{(n)}_{E}}$, ${\bf{\Phi}}^{(n)}$, compute the optimal matrices ${{\bf{U}}^{(n)}_{I}},{{\bf{W}}^{(n)}_{I}},{{\bf{U}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{X}}$ according to (14), (15), (18), (19), (25) respectively; 4: Matrices Optimization. Given ${{\bf{U}}^{(n)}_{I}},{{\bf{W}}^{(n)}_{I}},{{\bf{U}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{X}}$, solve the optimal TPC matrix ${\bf{V}}^{(n+1)}$ and equivalent AN covariance matrix ${{\bf{V}}^{(n+1)}_{E}}$ of Problem (36) with the Lagrangian multiplier method; 5: Phase Shifts Optimization. Given ${{\bf{U}}^{(n)}_{I}},{{\bf{W}}^{(n)}_{I}},{{\bf{U}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{E}},{{\bf{W}}^{(n)}_{X}}$ and ${\bf{V}}^{(n+1)},{{\bf{V}}^{(n+1)}_{E}}$, solve the optimal phase shifts ${\bf{\Phi}}^{(n+1)}$ of Problem (61) with the MM algorithm; 6: Termination Check. If ${\left|{{\rm{OF}}({\bf{V}}^{(n+1)},{{\bf{V}}^{(n+1)}_{E}},{\bf{\Phi}}^{(n+1)})-{\rm{OF}}({\bf{V}}^{(n)},{{\bf{V}}^{(n)}_{E}},{\bf{\Phi}}^{(n)})}\right|}/{{\rm{OF}}({\bf{V}}^{(n+1)},{{\bf{V}}^{(n+1)}_{E}},{\bf{\Phi}}^{(n+1)})}<\varepsilon$ or $n\geq n_{\rm{max}}$, terminate. Otherwise, update $n\leftarrow n+1$ and jump to Step 3. 
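Although the per-iteration details of the MM update in Step 5 are omitted above, the closed form can be sketched as follows. Assuming the standard majorization in which the quadratic term of (61a) is upper bounded using the largest eigenvalue of $\bm{\Xi}$ (the same construction that yields (81) in the multicast case below), each MM iteration reduces to an element-wise phase projection. The NumPy sketch below is illustrative rather than a definitive reproduction of the algorithm in [25].

```python
import numpy as np

def mm_phase_shifts(Xi, d_vec, phi0, n_iter=200, tol=1e-8):
    """MM iterations for Problem (61):
        minimize  phi^H Xi phi + 2 Re{phi^H conj(d)}   s.t.  |phi_m| = 1.
    The quadratic term is majorized via lambda_max(Xi) * phi^H phi, which equals
    M * lambda_max on the unit-modulus set, so each surrogate is minimized in
    closed form by projecting onto the unit circle."""
    M = len(phi0)
    lam_max = float(np.max(np.linalg.eigvalsh(Xi)))
    phi, prev = phi0.copy(), np.inf
    for _ in range(n_iter):
        q = (lam_max * np.eye(M) - Xi) @ phi - np.conj(d_vec)   # cf. q^{e,t} in (81c)
        phi = np.exp(1j * np.angle(q))                          # unit-modulus minimizer of the surrogate
        obj = (phi.conj() @ Xi @ phi).real + 2 * (phi.conj() @ np.conj(d_vec)).real
        if abs(prev - obj) < tol:
            break
        prev = obj
    return phi
```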
Based on the algorithm description, the complexity analysis of the proposed BCD-MM algorithm is performed. In Step 3, computing the decoding matrices ${{\bf{U}}^{(n)}_{I}}$ and ${{\bf{U}}^{(n)}_{E}}$ costs a complexity of ${\cal O}(N_{I}^{3})+{\cal O}(N_{E}^{3})$, while calculating the auxiliary matrices ${{\bf{W}}^{(n)}_{I}}$, ${{\bf{W}}^{(n)}_{E}}$, and ${{\bf{W}}^{(n)}_{X}}$ consumes a complexity of ${\cal O}(d^{3})+{\cal O}(N_{T}^{3})+{\cal O}(N_{E}^{3})$. The complexity of calculating the TPC matrix ${\bf{V}}^{(n+1)}$ and the AN covariance matrix ${{\bf{V}}^{(n+1)}_{E}}$ in Step 4 can be analyzed according to the specific steps of the Lagrangian multiplier method, based on the fact that the complexity of computing the product ${\bf{XY}}$ of complex matrices ${\bf{X}}\in{{\mathbb{C}}^{m\times n}}$ and ${\bf{Y}}\in{{\mathbb{C}}^{n\times p}}$ is ${\cal O}\left({mnp}\right)$. By assuming that $N_{T}>N_{I}({\rm{or\ }}N_{E})>d$, the complexity of computing the matrices $\\{{{\mathbf{H}}_{V}},{{\mathbf{H}}_{VE}}\\}$ in (30) and (31) is ${\cal O}(N_{T}^{3})+{\cal O}(2N_{T}^{2}d)+{\cal O}(2N_{T}^{2}N_{E})$, while the complexity of calculating ${\bf{V}}^{*}$, ${\bf{V}}_{E}^{*}$ in (40) and (42) is ${\cal O}(2N_{T}^{3})$. The SVD of $\\{{{\mathbf{H}}_{V}},{{\mathbf{H}}_{VE}}\\}$ requires a computational complexity of ${\cal O}(2N_{T}^{3})$, while calculating $\\{{\bf{Z}}_{V}\\}$ and $\\{{\bf{Z}}_{VE}\\}$ requires a complexity of ${\cal O}(N_{T}^{2}N_{I})+{\cal O}(2N_{T}^{3})$. The complexity of finding the Lagrangian multipliers $\\{\lambda\\}$ is negligible. Thus, the overall complexity for ${\bf{V}}^{(n+1)}$, ${\bf{V}}_{E}^{(n+1)}$ is about ${\cal O}({\rm{max}}\\{2N_{T}^{3},2N_{T}^{2}N_{E}\\})$. In Step 5, obtaining the optimal ${\bf{\Phi}}^{(n+1)}$ by the MM algorithm needs a complexity of $C_{MM}={\cal O}(M^{3}+T_{MM}M^{2})$, where $T_{MM}$ is the number of iterations required for convergence. Based on the complexities required in Steps 3, 4 and 5, the overall complexity $C_{\rm{BCD-MM}}$ of the BCD-MM algorithm can be evaluated by $C_{\rm{BCD-MM}}={\cal O}({\rm{max}}\\{2N_{T}^{3},2N_{T}^{2}N_{E},C_{MM}\\}).$ (62) ## IV Extension to the Multiple-IRs Scenario ### IV-A Problem Formulation Consider a multicast extension where there are $L\geq 2$ legitimate IRs, all of which intend to receive the same message. The signal model for the MIMO multi-IR wiretap channel scenario is $\displaystyle{\bf{y}}_{I,l}={\hat{{\bf{H}}}_{I,l}}({\bf{V}}{\bf{s}}+{\bf{n}})+{{\bf{n}}_{I,l}},l=1,\cdots,L,$ (63) where ${{\hat{\bf{H}}}_{I,l}}\overset{\triangle}{=}{{\bf{H}}_{b,I,l}}+{{\bf{H}}_{R,I,l}}{\bf{\Phi}}{\bf{G}}$. The subscript $l$ indicates the $l$th IR, and the other notations are the same as in (4) and (6). Under these settings, the achievable SR is given by [51] $\displaystyle R_{s}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ $\displaystyle=\underset{l=1,\cdots,L}{\min}\\{{R_{I,l}}({\bf{V}},{\bf{\Phi}},{\bf{Z}})-{R_{E}}({\bf{V}},{\bf{\Phi}},{\bf{Z}})\\},$ (64) where ${R_{I,l}}({\bf{V}},{\bf{\Phi}},{\bf{Z}})={\rm{log}}\left|{{\bf{I}}+{{\hat{{\bf{H}}}}_{I,l}}{\bf{V}}{{\bf{V}}^{H}}\hat{{\bf{H}}}_{I,l}^{H}{\bf{J}}_{I,l}^{-1}}\right|$ and ${{\bf{J}}_{I,l}}\overset{\triangle}{=}{{\hat{{\bf{H}}}}_{I,l}}{\bf{Z}}{{\hat{{\bf{H}}}}_{I,l}}^{H}+\sigma_{I,l}^{2}{{\bf{I}}_{{{N}_{I}}}}$. 
Then the multicast counterpart of the AN-aided SRM problem (10) is formulated as $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}}{\mathop{\text{max}}}\ \ {R_{s}}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ (65a) $\displaystyle\ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{\rm{)}}\leq{P_{T}},$ (65b) $\displaystyle\quad\quad\quad\\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.$ (65c) The objective function of Problem (65) can be rewritten as $\displaystyle R_{s}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ $\displaystyle=\underset{l=1,\cdots,L}{\min}\\{\underbrace{{\rm{log}}\left|{{\bf{I}}_{{N_{I}}}+{{\hat{{\bf{H}}}}_{I,l}}{\bf{V}}{{\bf{V}}^{H}}\hat{{\bf{H}}}_{I,l}^{H}{{({{\hat{{\bf{H}}}}_{I,l}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{{\hat{{\bf{H}}}}_{I,l}}^{H}+\sigma_{I,l}^{2}{{\bf{I}}_{{N_{I}}}})}^{-1}}}\right|}_{{f_{1,l}}}\\}$ $\displaystyle\quad{\rm{+}}\underbrace{{\rm{log}}\left|{{{\bf{I}}_{{N_{E}}}}+{{\hat{{\bf{H}}}}_{E}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{{\hat{{\bf{H}}}}_{E}}^{H}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}})^{-1}}\right|}_{{f_{2}}}$ $\displaystyle\quad\underbrace{-{\rm{log}}\left|{{{\bf{I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{{\bf{H}}}}_{E}}({\bf{V}}{{\bf{V}}^{H}}+{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}})\hat{{\bf{H}}}_{E}^{H}}\right|}_{{f_{3}}},$ (66a) $\displaystyle=\underset{l=1,\cdots,L}{\min}\\{\mathop{\text{max}}\limits_{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0}h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I,l}})\\}+\mathop{\text{max}}\limits_{{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0}h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})$ $\displaystyle\quad+\mathop{\text{max}}\limits_{{{\bf{W}}_{X}}\succeq 0}h_{3}({\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{X}}).$ (66b) The lower bound to the first term of (66b) can be found as $\displaystyle\underset{l=1,\cdots,L}{\min}\\{\mathop{\text{max}}\limits_{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0}h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I,l}})\\}$ (67a) $\displaystyle\geq\mathop{\text{max}}\limits_{\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0\\}_{l=1}^{L}}\\{\underset{l=1,\cdots,L}{\min}h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I,l}})\\},$ (67b) where (67b) holds due to the fact that $\underset{x}{\min}\ \underset{y}{\max}f(x,y)\geq\underset{y}{\max}\ \underset{x}{\min}f(x,y)$ for any function $f(x,y)$. 
Here by exchanging the positions of $\mathop{\text{max}}\limits_{\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0\\}_{l=1}^{L}}$ and $\underset{l=1,\cdots,L}{\min}$ in (67a), we can find a lower bound to $R_{s}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}{\rm{)}}$ as $\displaystyle f_{ms}({\bf{V}},{{\bf{V}}_{E}},\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\\}_{l=1}^{L},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}})$ $\displaystyle\triangleq\mathop{\text{max}}\limits_{{\bf{V}},{{\bf{V}}_{E}},\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0\\}_{l=1}^{L},{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0,{{\bf{W}}_{X}}\succeq 0}\\{\underset{l=1,\cdots,L}{\min}h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I,l}})$ $\displaystyle\quad+h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})+h_{3}({\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{X}})\\}.$ (68) We simplify Problem (65) by maximizing a lower bound to its original objective as follows, $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}},\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\succeq 0\\}_{l=1}^{L},{{\bf{U}}_{E}},{{\bf{W}}_{E}}\succeq 0,{{\bf{W}}_{X}}\succeq 0}{\mathop{\text{max}}}f_{ms}({\bf{V}},{{\bf{V}}_{E}},\\{{{\bf{U}}_{I,l}},{{\bf{W}}_{I,l}}\\}_{l=1}^{L},{{\bf{U}}_{E}},{{\bf{W}}_{E}},{{\bf{W}}_{X}})$ (69a) $\displaystyle\ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{\rm{)}}\leq{P_{T}},$ (69b) $\displaystyle\quad\quad\quad\\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.$ (69c) To solve the multicast AN-aided SRM problem in (69), a BCD-QCQP-CCP algorithm is proposed. ### IV-B BCD Iterations for Problem (69) The equivalent SRM problem in (69) provides a desirable formulation for BCD algorithm. In particular, one can show that problem (69) is convex w.r.t. either ${\bf{V}},{{\bf{V}}_{E}}$ or ${\bf{\Phi}}$ or $\\{{{\bf{U}}_{I,l}}$, ${{\bf{W}}_{I,l}}\\}_{l=1}^{L}$, ${{\bf{U}}_{E}}$, ${{\bf{W}}_{E}}$, ${{\bf{W}}_{X}}$. By fixing ${\bf{\Phi}}$, the iteration process for problem (69) is as follows. Let ${\bf{V}}^{n}$, ${{\bf{V}}_{E}^{n}}$, $\\{{{\bf{U}}_{I,l}^{n}},{{\bf{W}}_{I,l}^{n}}\\}_{l=1}^{L},{{\bf{U}}_{E}^{n}},{{\bf{W}}_{E}^{n}},{{\bf{W}}_{X}^{n}}$ denote the BCD iterate at the $n$th iteration. 
The BCD iterates are generated via $\displaystyle{{\bf{U}}_{I,l}^{n}}=\text{arg}\mathop{\text{max}}\limits_{{{\bf{U}}_{I,l}}}h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}}^{n},{{\bf{V}}_{E}^{n}},{{\bf{W}}_{I,l}^{n}})$ $\displaystyle\qquad\>\>{\rm{=}}({\hat{{\bf{H}}}_{I,l}}{{\bf{V}}_{E}^{n}}{{\bf{V}}_{E}^{nH}}{\hat{{\bf{H}}}_{I,l}}^{H}{\rm{+}}\sigma_{I,l}^{2}{{\bf{I}}_{{N_{I}}}}{\rm{+}}{\hat{{\bf{H}}}_{I,l}}{\bf{V}}^{n}{{\bf{V}}^{nH}}{\hat{{\bf{H}}}_{I,l}}^{H})^{-1}{\hat{{\bf{H}}}_{I,l}}{\bf{V}}^{n},$ (70a) $\displaystyle{{\bf{W}}_{I,l}^{n}}=\text{arg}\mathop{\text{max}}\limits_{{{\bf{W}}_{I,l}}\succeq 0}h_{1,l}({{\bf{U}}_{I,l}^{n}},{\bf{V}}^{n},{{\bf{V}}_{E}^{n}},{{\bf{W}}_{I,l}}){\rm{=[}}{{\bf{E}}_{I,l}}({{\bf{U}}_{I,l}^{n}},{\bf{V}}^{n},{{\bf{V}}_{E}^{n}}){]^{-1}}$ $\displaystyle{\rm{=}}\text{[}({{\bf{U}}_{I,l}^{{n}H}}{\hat{{\bf{H}}}_{I,l}}{\bf{V}}^{n}-{\bf{I}}_{d}){({{\bf{U}}_{I,l}^{{n}H}}{\hat{{\bf{H}}}_{I,l}}{\bf{V}}^{n}-{\bf{I}}_{d})^{H}}+{{\bf{U}}_{I,l}^{{n}H}}({\hat{{\bf{H}}}_{I,l}}{{\bf{V}}_{E}^{n}}{{\bf{V}}_{E}^{nH}}{\hat{{\bf{H}}}_{I,l}}^{H}{\rm{+}}\sigma_{I,l}^{2}{{\bf{I}}_{{N_{I}}}}){{\bf{U}}_{I,l}^{{n}}}]^{-1},$ (70b) $\displaystyle{{\bf{U}}_{E}^{n}}{\rm{=}}\text{arg}\mathop{\text{max}}\limits_{{{\bf{U}}_{E}}}h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}^{n}},{{\bf{W}}_{E}^{n}})$ $\displaystyle\qquad\>\>{\rm{=}}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}{\rm{+}}{\hat{{\bf{H}}}_{E}}{{\bf{V}}_{E}^{n}}{{\bf{V}}_{E}^{nH}}{\hat{{\bf{H}}}_{E}}^{H})^{-1}{\hat{{\bf{H}}}_{E}}{{\bf{V}}_{E}^{n}},$ (70c) $\displaystyle{{\bf{W}}_{E}^{n}}{=}\text{arg}\mathop{\text{max}}\limits_{{{\bf{W}}_{E}}\succeq 0}h_{2}({{\bf{U}}_{E}^{n}},{{\bf{V}}_{E}^{n}},{{\bf{W}}_{E}}){\rm{=[}}{{\bf{E}}_{E}}({{\bf{U}}_{E}^{n}},{{\bf{V}}_{E}^{n}}){]^{-1}}$ $\displaystyle\qquad\>\>{\rm{=}}[({{\bf{U}}_{E}}^{{n}H}{{\hat{{\bf{H}}}}_{E}}{\bf{V}}_{E}^{n}-{\bf{I}}_{N_{T}}){({{\bf{U}}_{E}^{{n}H}}{{\hat{{\bf{H}}}}_{E}}{\bf{V}}_{E}^{n}-{\bf{I}}_{N_{T}})^{H}}+{{\bf{U}}_{E}^{{n}H}}(\sigma_{E}^{2}{{\bf{I}}_{{N_{E}}}}){{\bf{U}}_{E}^{n}}]^{-1},$ (70d) $\displaystyle{{\bf{W}}_{X}^{n}}{\rm{=}}\text{arg}\mathop{\text{max}}\limits_{{{\bf{W}}_{X}}\succeq 0}h_{3}({\bf{V}}^{n},{{\bf{V}}_{E}^{n}},{{\bf{W}}_{X}}){\rm{=[}}{{\bf{E}}_{X}}({\bf{V}}^{n},{{\bf{V}}_{E}^{n}}){]^{-1}}$ $\displaystyle\qquad\>\>{\rm{=}}[{{{\bf{I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{{\bf{H}}}}_{E}}({\bf{V}}^{n}{{\bf{V}}^{nH}}+{{\bf{V}}_{E}^{n}}{{\bf{V}}_{E}^{nH}})\hat{{\bf{H}}}_{E}^{H}}]^{-1}.$ (70e) The parameters ${\bf{V}}^{n}$, ${{\bf{V}}_{E}^{n}}$, ${\bf{\Phi}}^{n}$ are obtained by solving the following problem $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}}}{\mathop{\text{max}}}\ \ \underset{l=1,\cdots,L}{\min}\\{h_{1,l}({{\bf{U}}_{I,l}},{\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{I,l}})\\}$ $\displaystyle\quad\quad\quad\quad\quad\quad+h_{2}({{\bf{U}}_{E}},{{\bf{V}}_{E}},{{\bf{W}}_{E}})+h_{3}({\bf{V}},{{\bf{V}}_{E}},{{\bf{W}}_{X}})$ (71a) $\displaystyle\ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{\rm{)}}\leq{P_{T}},$ (71b) $\displaystyle\quad\quad\quad\\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.$ (71c) ### IV-C Optimizing the Matrices ${\bf{V}}$ and ${{\bf{V}}_{E}}$ By fixing ${\bf{\Phi}}$, Problem (71) can be written more compactly as $\displaystyle\ \underset{{\bf{V}},{{\bf{V}}_{E}}}{\mathop{\text{min}}}\ \ \underset{l=1,\cdots,L}{\max}\\{-\text{Tr}({{\bf{W}}_{I,l}}{{\bf{V}}^{H}}{{\hat{{\bf{H}}}}_{I,l}}^{H}{{\bf{U}}_{I,l}})-\text{Tr}({{\bf{W}}_{I,l}}{{\bf{U}}_{I,l}}^{H}{{\hat{{\bf{H}}}}_{I,l}}{\bf{V}})$ 
$\displaystyle\quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V,l}^{i}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE,l}^{i}}{{\mathbf{V}}_{E}})-{C}_{l}\\}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}{{\mathbf{V}}_{E}})$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}^{e}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE}^{e}}{{\mathbf{V}}_{E}})$ (72a) $\displaystyle\ \ \ \ \ \text{s.t.}\quad\ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^{H}}{\rm{+}}{{\bf{V}}_{E}}{{\bf{V}}_{E}^{H}}{\rm{)}}\leq{P_{T}},$ (72b) where $\displaystyle{{\mathbf{H}}_{V}^{e}}\text{(}\mathbf{\Phi}\text{)}=\sigma_{E}^{-2}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},$ (73a) $\displaystyle{{\mathbf{H}}_{VE}^{e}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}+\sigma_{E}^{-2}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},$ (73b) $\displaystyle{{\mathbf{H}}_{V,l}^{i}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{I,l}}^{H}{{\mathbf{U}}_{I,l}}{{\mathbf{W}}_{I,l}}{{\mathbf{U}}_{I,l}^{H}}{{\mathbf{\hat{H}}}_{I,l}},$ (73c) $\displaystyle{{\mathbf{H}}_{VE,l}^{i}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{I,l}}^{H}{{\mathbf{U}}_{I,l}}{{\mathbf{W}}_{I,l}}{{\mathbf{U}}_{I,l}^{H}}{{\mathbf{\hat{H}}}_{I,l}},$ (73d) $\displaystyle{C}_{l}=\log\left|{{\bf{W}}_{I,l}}\right|+d-\text{Tr}({{\bf{W}}_{I,l}}+\sigma_{I,l}^{2}{{\bf{W}}_{I,l}}{{\bf{U}}_{I,l}}^{H}{{\bf{U}}_{I,l}}).$ (73e) The Problem (72) is a convex QCQP problem, we can obtain its optimal solution using a general-purpose convex optimization solver. 
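Since Problem (72) is a convex QCQP, it can indeed be handed to a general-purpose solver. The sketch below uses CVXPY with complex matrix variables and an epigraph variable $t$ for the max over the $L$ legitimate IRs, writing each quadratic form ${\rm Tr}({\bf{X}}^{H}{\bf{H}}{\bf{X}})$ as a squared Frobenius norm of ${\bf{H}}^{1/2}{\bf{X}}$ so that the formulation is DCP-compliant. The function signature, the argument names, and the assumption that CVXPY's complex-variable support and a default conic solver suffice here are ours, not the paper's.

```python
import cvxpy as cp
from scipy.linalg import sqrtm

def solve_multicast_qcqp(HI_list, UI_list, WI_list, C_list,
                         HE, UE, WE, HVl_list, HVEl_list, HVe, HVEe,
                         NT, d, P_T):
    """Convex QCQP (72): minimize the epigraph variable t plus the Eve-related
    terms, subject to one constraint per legitimate IR and the power budget."""
    V  = cp.Variable((NT, d),  complex=True)
    VE = cp.Variable((NT, NT), complex=True)
    t  = cp.Variable()

    def lin(W, U, H, X):
        # -Tr(W X^H H^H U) - Tr(W U^H H X) = -2 Re Tr(W U^H H X), since W is Hermitian
        return -2 * cp.real(cp.trace(W @ U.conj().T @ H @ X))

    def quad(H, X):
        # Tr(X^H H X) = || H^{1/2} X ||_F^2 for positive semidefinite H
        return cp.sum_squares(sqrtm(H) @ X)

    cons = [cp.sum_squares(V) + cp.sum_squares(VE) <= P_T]           # (72b)
    for HIl, UIl, WIl, Cl, HVl, HVEl in zip(HI_list, UI_list, WI_list,
                                            C_list, HVl_list, HVEl_list):
        cons.append(lin(WIl, UIl, HIl, V) + quad(HVl, V) + quad(HVEl, VE) - Cl <= t)

    obj = t + lin(WE, UE, HE, VE) + quad(HVe, V) + quad(HVEe, VE)
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return V.value, VE.value
```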
### IV-D Optimizing the Phase Shifts $\mathbf{\Phi}$ By fixing ${\bf{V}},{{\bf{V}}_{E}}$, the optimization problem for the phase shift matrix $\mathbf{\Phi}$ reduced from Problem (72) is formulated as $\displaystyle\ \underset{\mathbf{\Phi}}{\mathop{\text{min}}}\ \ {g_{0}}(\mathbf{\Phi})\triangleq\underset{l=1,\cdots,L}{\max}\\{-\text{Tr}({{\bf{W}}_{I,l}}{{\bf{V}}^{H}}{{\hat{{\bf{H}}}}_{I,l}}^{H}{{\bf{U}}_{I,l}})-\text{Tr}({{\bf{W}}_{I,l}}{{\bf{U}}_{I,l}}^{H}{{\hat{{\bf{H}}}}_{I,l}}{\bf{V}})$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V,l}^{i}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE,l}^{i}}{{\mathbf{V}}_{E}})-{C}_{l}\\}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}{{\mathbf{V}}_{E}})$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}^{e}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE}^{e}}{{\mathbf{V}}_{E}})$ (74a) $\displaystyle\ \ \ \ \ \text{s.t.}\quad\ \\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.$ (74b) By complex mathematical manipulations, the OF ${g_{0}}(\mathbf{\Phi})$ can be equivalently transformed into $\displaystyle{g_{0}}(\mathbf{\Phi})$ $\displaystyle\triangleq\underset{l=1,\cdots,L}{\max}\\{{g_{0,l}^{i}}(\mathbf{\Phi})\\}+{g_{0}^{e}}(\mathbf{\Phi}),$ (75) where $\displaystyle{g_{0,l}^{i}}(\mathbf{\Phi})$ $\displaystyle={\rm{Tr}}\left({{\bf{\Phi}}^{H}{\bf{D}}_{l}^{iH}}\right)+{\rm{Tr}}\left({\bf{\Phi}}{\bf{D}}_{l}^{i}\right)+{\rm{Tr}}\left[{{\bf{\Phi}}{{\bf{C}}_{VE}}{{\bf{\Phi}}^{H}}{{\bf{B}}_{VE,l}^{i}}}\right]+{\rm{Tr}}\left({{\bf{\Phi}}{{\bf{C}}_{V}}{{\bf{\Phi}}^{H}}{{\bf{B}}_{V,l}^{i}}}\right)+C_{l}^{i},$ (76a) $\displaystyle{g_{0}^{e}}(\mathbf{\Phi})$ $\displaystyle={\rm{Tr}}\left({{\bf{\Phi}}^{H}{\bf{D}}^{eH}}\right)+{\rm{Tr}}\left({\bf{\Phi}}{\bf{D}}^{e}\right)+{\rm{Tr}}\left[{{\bf{\Phi}}{{\bf{C}}_{VE}}{{\bf{\Phi}}^{H}}{{\bf{B}}_{VE}^{e}}}\right]+{\rm{Tr}}\left({{\bf{\Phi}}{{\bf{C}}_{V}}{{\bf{\Phi}}^{H}}{{\bf{B}}_{V}^{e}}}\right)+C^{e},$ (76b) and $\displaystyle{\bf{D}}_{l}^{i}$ $\displaystyle={\bf{G}}{{\bf{V}}_{X}}{\bf{H}}_{b,I,l}^{H}{{\bf{M}}_{I,l}}{{\bf{H}}_{R,I,l}}-{\bf{GV}}{{\bf{W}}_{I,l}}{\bf{U}}_{I,l}^{H}{{\bf{H}}_{R,I,l}},$ (77a) $\displaystyle{{\bf{C}}_{VE}}$ $\displaystyle={\bf{G}}{{\bf{V}}_{E}}{\bf{V}}_{E}^{H}{{\bf{G}}^{H}},$ (77b) $\displaystyle{{\bf{C}}_{V}}$ $\displaystyle={\bf{G}}{\bf{V}}{\bf{V}}^{H}{{\bf{G}}^{H}},$ (77c) $\displaystyle{{\bf{B}}_{VE,l}^{i}}$ $\displaystyle=\left({{\bf{H}}_{R,I,l}^{H}{{\bf{U}}_{I,l}}{{\bf{W}}_{I,l}}{\bf{U}}_{I,l}^{H}{{\bf{H}}_{R,I,l}}}\right),$ (77d) $\displaystyle{{\bf{B}}_{V,l}^{i}}$ $\displaystyle=\left({{\bf{H}}_{R,I,l}^{H}{{\bf{U}}_{I,l}}{{\bf{W}}_{I,l}}{\bf{U}}_{I,l}^{H}{{\bf{H}}_{R,I,l}}}\right),$ (77e) $\displaystyle C_{l}^{i}$ $\displaystyle={\rm{Tr}}\left[{{\bf{H}}_{b,I,l}}{{\bf{V}}_{X}}{\bf{H}}_{b,I,l}^{H}{{\bf{M}}_{I,l}}\right]\\!+\\!{\rm{Tr}}\left[{{{\bf{U}}_{I,l}}{\bf{W}}_{I,l}^{H}{{\bf{V}}^{H}}{\bf{H}}_{b,I,l}^{H}}\right]\\!+\\!{\rm{Tr}}\left[{{{\bf{H}}_{b,I,l}}{\bf{V}}{{\bf{W}}_{I,l}}{\bf{U}}_{I,l}^{H}}\right]-{C}_{l},$ (77f) $\displaystyle{\bf{M}}_{I,l}$ $\displaystyle={{\bf{U}}_{I,l}}{{\bf{W}}_{I,l}}{\bf{U}}_{I,l}^{H},$ (77g) $\displaystyle{\bf{D}}^{e}$ 
$\displaystyle=\sigma_{E}^{-2}{\bf{G}}{{\bf{V}}_{X}}{\bf{H}}_{b,E}^{H}{{\bf{W}}_{X}}{{\bf{H}}_{R,E}}+{\bf{G}}{{\bf{V}}_{E}}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{{\bf{M}}_{E}}{{\bf{H}}_{R,E}}-{\bf{G}}{{\bf{V}}_{E}}{{\bf{W}}_{E}}{\bf{U}}_{E}^{H}{{\bf{H}}_{R,E}},$ (77h) $\displaystyle{{\bf{B}}_{VE}^{e}}$ $\displaystyle=\left({\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{{\bf{W}}_{X}}{{\bf{H}}_{R,E}}+{\bf{H}}_{R,E}^{H}{{\bf{U}}_{E}}{{\bf{W}}_{E}}{\bf{U}}_{E}^{H}{{\bf{H}}_{R,E}}}\right),$ (77i) $\displaystyle{{\bf{B}}_{V}^{e}}$ $\displaystyle=\left({\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{{\bf{W}}_{X}}{{\bf{H}}_{R,E}}}\right),$ (77j) $\displaystyle C^{e}$ $\displaystyle=\sigma_{E}^{-2}{\rm{Tr}}\left[{{\bf{H}}_{b,E}}{{\bf{V}}_{X}}{\bf{H}}_{b,E}^{H}{{\bf{W}}_{X}}\right]+{\rm{Tr}}\left[{{\bf{H}}_{b,E}}{{\bf{V}}_{E}}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{{\bf{M}}_{E}}\right]$ $\displaystyle+{\rm{Tr}}\left[{{{\bf{U}}_{E}}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}}\right]+{\rm{Tr}}\left[{{{\bf{H}}_{b,E}}{{\bf{V}}_{E}}{{\bf{W}}_{E}}{\bf{U}}_{E}^{H}}\right].$ (77k) Similarly, Problem (74) can be further simplified as $\displaystyle\underset{{\bm{\phi}}}{\mathop{\text{min}}}$ $\displaystyle\underset{l=1,\cdots,L}{\max}\\{{{\bm{\phi}}^{{\rm{H}}}}{\bm{\Xi}_{l}^{i}}{\bm{\phi}}+2\textrm{Re}[{\bm{\phi}}^{{\rm{H}}}{\bf{d}}_{l}^{i*}]+C_{l}^{i}\\}+{{\bm{\phi}}^{{\rm{H}}}}{\bm{\Xi}^{e}}{\bm{\phi}}+2\textrm{Re}[{\bm{\phi}}^{{\rm{H}}}{\bf{d}}^{e*}]+C^{e}$ (78a) s.t. $\displaystyle\quad\quad\ \\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,$ (78b) where $\displaystyle{\bm{\Xi}_{l}^{i}}$ $\displaystyle={\bf{B}}_{VE,l}^{i}\odot{{\bf{C}}_{VE}^{{\rm{T}}}}+{\bf{B}}_{V,l}^{i}\odot{{\bf{C}}_{V}^{{\rm{T}}}},$ (79a) $\displaystyle{\bm{\Xi}^{e}}$ $\displaystyle={\bf{B}}_{VE}^{e}\odot{{\bf{C}}_{VE}^{{\rm{T}}}}+{\bf{B}}_{V}^{e}\odot{{\bf{C}}_{V}^{{\rm{T}}}},$ (79b) $\displaystyle{\bf{d}}_{l}^{i}$ $\displaystyle={\left[{{{\left[{\bf{D}}_{l}^{i}\right]}_{1,1}},\cdots,{{\left[{\bf{D}}_{l}^{i}\right]}_{M,M}}}\right]^{{\rm{T}}}},$ (79c) $\displaystyle{\bf{d}}^{e}$ $\displaystyle={\left[{{{\left[{\bf{D}}^{e}\right]}_{1,1}},\cdots,{{\left[{\bf{D}}^{e}\right]}_{M,M}}}\right]^{{\rm{T}}}}.$ (79d) By using the Lemma 1 in [25], Problem (78) will be recast as $\displaystyle\underset{{\bm{\phi}}}{\mathop{\max}}$ $\displaystyle\underset{l=1,\cdots,L}{\min}\\{2\textrm{Re}[{{\bm{\phi}}^{{\rm{H}}}}{\bf{q}}_{l}^{i,t}]-C_{q,l}^{i,t}\\}+2\textrm{Re}[{{\bm{\phi}}^{{\rm{H}}}}{\bf{q}}^{e,t}]-C_{q}^{e,t}$ (80a) s.t. $\displaystyle\quad\quad\ \\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,$ (80b) where $\displaystyle{\bf{q}}_{l}^{i,t}$ $\displaystyle=\left({{\lambda_{l,{\rm{\max}}}^{i}}{{\bf{I}}_{M}}-{\bm{\Xi}_{l}^{i}}}\right){{\bm{\phi}}^{t}}-{\bf{d}}_{l}^{i*},$ (81a) $\displaystyle C_{q,l}^{i,t}$ $\displaystyle=2M{\lambda_{l,{\rm{\max}}}^{i}}-{\left({{\bm{\phi}}^{t}}\right)^{{\rm{H}}}}\left({\bm{\Xi}_{l}^{i}}\right){{\bm{\phi}}^{t}}+C_{l}^{i},$ (81b) $\displaystyle{\bf{q}}^{e,t}$ $\displaystyle=\left({{\lambda_{{\rm{\max}}}^{e}}{{\bf{I}}_{M}}-{\bm{\Xi}^{e}}}\right){{\bm{\phi}}^{t}}-{\bf{d}}^{e*},$ (81c) $\displaystyle C_{q}^{e,t}$ $\displaystyle=2M{\lambda_{{\rm{\max}}}^{e}}-{\left({{\bm{\phi}}^{t}}\right)^{{\rm{H}}}}\left({\bm{\Xi}^{e}}\right){{\bm{\phi}}^{t}}+C^{e},$ (81d) and ${\lambda_{l,{\rm{\max}}}^{i}}$ is the the maximum eigenvalue of ${\bm{\Xi}_{l}^{i}}$, and ${\lambda_{{\rm{\max}}}^{e}}$ is the maximum eigenvalue of ${\bm{\Xi}^{e}}$. 
By defining ${\bf{q}}_{l}^{ie,t}\triangleq{\bf{q}}_{l}^{i,t}+{\bf{q}}^{e,t}$ and $C_{q,l}^{ie,t}\triangleq C_{q,l}^{i,t}+C_{q}^{e,t}$, Problem (80) can be rewritten as $\displaystyle\underset{{\bm{\phi}}}{\mathop{\max}}$ $\displaystyle\underset{l=1,\cdots,L}{\min}\\{2\textrm{Re}[{{\bm{\phi}}^{{\rm{H}}}}{\bf{q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\\}$ (82a) s.t. $\displaystyle\quad\quad\ \\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,$ (82b) which is equivalent to the following problem $\displaystyle\underset{{\bm{\phi}},z}{\mathop{\max}}$ $\displaystyle\quad z$ (83a) s.t. $\displaystyle 2\textrm{Re}[{{\bm{\phi}}^{{\rm{H}}}}{\bf{q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\geq z,l=1,\cdots,L,$ (83b) $\displaystyle\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.$ (83c) We note that the above problem is still non-convex due to the unit-modulus constraints. To deal with the non-convex constraints, the penalty CCP method is applied. Following the penalty CCP framework, the constraints in (83c) are first equivalently rewritten as $-1\leq\left|{{\phi}_{m}}\right|^{2}\leq 1,m=1,\cdots,M$. The non-convex parts of the resulting constraints are then linearized around the current iterate ${\bm{\phi}}^{(n)}$ as $\left|{{\phi}_{m}^{(n)}}\right|^{2}-2\textrm{Re}({{\phi}_{m}^{*}}{{\phi}_{m}^{(n)}})\leq-1,m=1,\cdots,M$. We finally have the following convex subproblem of ${\bm{\phi}}$ as $\displaystyle\underset{{\bm{\phi}},z}{\mathop{\max}}$ $\displaystyle\quad z-\lambda^{(t)}\left\|\mathbf{b}\right\|_{1}$ (84a) s.t. $\displaystyle\ 2\textrm{Re}[{{\bm{\phi}}^{{\rm{H}}}}{\bf{q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\geq z,$ (84b) $\displaystyle\left|{{\phi}_{m}^{(n)}}\right|^{2}-2\textrm{Re}({{\phi}_{m}^{*}}{{\phi}_{m}^{(n)}})\leq b_{m}-1,m=1,\cdots,M,$ (84c) $\displaystyle\left|{{\phi}_{m}}\right|^{2}\leq 1+b_{M+m},m=1,\cdots,M.$ (84d) where $\mathbf{b}=[b_{1},\cdots,b_{2M}]^{T}$ collects the slack variables imposed on the equivalent linear constraints of the unit-modulus constraints, and $\left\|\mathbf{b}\right\|_{1}$ is the penalty term in the OF. $\left\|\mathbf{b}\right\|_{1}$ is scaled by the regularization factor $\lambda^{(t)}$ to control the feasibility of the constraints. The specific steps of the penalty CCP method can be found in [52]. ## V Simulation Results In this section, numerical simulations are carried out to evaluate the benefits brought by the IRS to the AN-aided MIMO secure communication system. We focus on the scenario of the standard three-terminal MIMO Gaussian wiretap channel shown in Fig. 2, where there are one BS, one legitimate IR and one Eve, all with multiple antennas. The distance from the BS to the IRS is $d_{BR}=50$ m. We assume that the line connecting the IR and Eve is parallel to the line connecting the BS and the IRS, and that the vertical distance between them is $d_{v}=2$ m. Figure 2: The three-terminal MIMO communication scenario in simulation. The large-scale path loss is modeled as ${\rm{PL=}}{{\rm{P}}{{\rm{L}}_{0}}-10\alpha{{\log}_{10}}\left({\frac{d}{{{d_{0}}}}}\right)}$, where ${\rm{P}}{{\rm{L}}_{0}}$ is the path loss at the reference distance $d_{0}=1$ m, $\alpha$ is the path loss exponent, and $d$ is the link distance. In our simulations, we set ${\rm{P}}{{\rm{L}}_{0}}=-30$ dB. The path loss exponents of the links from BS to Eve, from BS to IR, from IRS to Eve and from IRS to IR are ${\alpha_{{\rm{BE}}}}=3.5$, ${\alpha_{{\rm{BI}}}}=3.5$, ${\alpha_{{\rm{RE}}}}=2.5$ and ${\alpha_{{\rm{RI}}}}=2.5$, respectively. 
The path-loss exponent of the link from the BS to the IRS is set to ${\alpha_{{\rm{BR}}}}=2.2$, which means that the IRS is well-located and the path loss in this link is negligible. For the direct channels from the BS to the Eve and the IR, the small-scale fading is assumed to be Rayleigh fading due to extensive scatterers. However, for the IRS-related channels, the small-scale fading is assumed to be Rician fading. Specifically, the small-scale channel can be modeled as $\displaystyle\tilde{\mathbf{H}}$ $\displaystyle=\left(\sqrt{\frac{\beta}{1+\beta}}\tilde{\mathbf{H}}^{LOS}+\sqrt{\frac{1}{1+\beta}}\tilde{\mathbf{H}}^{NLOS}\right),$ (85) where $\beta$ is the Rician factor, $\tilde{\mathbf{H}}^{LOS}$ denotes the deterministic line-of-sight (LoS) component of the IRS-related channel, and $\tilde{\mathbf{H}}^{NLOS}$ denotes the non-LoS (NLoS) component of the IRS-related channel, which is modeled as Rayleigh fading. By assuming that the antennas at the BS, IRS, Eve and IR are arranged in uniform linear arrays (ULAs), $\tilde{\mathbf{H}}^{LOS}$ can be modeled as $\tilde{\mathbf{H}}^{LOS}=\mathbf{a}_{r}\mathbf{a}_{t}^{H}$, where $\mathbf{a}_{t}$ and $\mathbf{a}_{r}$ are the steering vectors of the transmitting and receiving arrays, respectively. The steering vectors $\mathbf{a}_{t}$ and $\mathbf{a}_{r}$ are defined as $\displaystyle\mathbf{a}_{t}$ $\displaystyle=\left[\begin{array}[]{cccc}1,&\exp(j2\pi\frac{d_{t}}{\lambda}\sin\varphi_{t}),&\cdots,&\exp(j2\pi\frac{d_{t}}{\lambda}(N_{t}-1)\sin\varphi_{t})\end{array}\right]^{T},$ (86b) $\displaystyle\mathbf{a}_{r}$ $\displaystyle=\left[\begin{array}[]{cccc}1,&\exp(j2\pi\frac{d_{r}}{\lambda}\sin\varphi_{r}),&\cdots,&\exp(j2\pi\frac{d_{r}}{\lambda}(N_{r}-1)\sin\varphi_{r})\end{array}\right]^{T}.$ (86d) In (86), $\lambda$ is the wavelength; $d_{t}$ and $d_{r}$ are the element spacings of the transmitting and receiving arrays; $\varphi_{t}$ and $\varphi_{r}$ are the angle of departure and the angle of arrival; $N_{t}$ and $N_{r}$ are the numbers of antennas/elements at the transmitter and receiver, respectively. We set $\frac{d_{t}}{\lambda}=\frac{d_{r}}{\lambda}=0.5$, and $\varphi_{t}=\tan^{-1}(\frac{y_{r}-y_{t}}{x_{r}-x_{t}})$, $\varphi_{r}=\pi-\varphi_{t}$, where $(x_{t},y_{t})$ is the location of the transmitter, and $(x_{r},y_{r})$ is the location of the receiver. Unless otherwise specified, the simulation parameters are set as follows. The IR's noise power and the Eve's noise power are $\sigma_{I}^{2}=-75$ dBm and $\sigma_{E}^{2}=-75$ dBm. The numbers of BS antennas, IR antennas, and Eve antennas are $N_{T}=4$, $N_{I}=2$, and $N_{E}=2$, respectively. There are $d=2$ data streams and $M=50$ IRS reflection elements. The transmit power limit is $P_{T}=15$ dBm, and the error tolerance is $\varepsilon=10^{-6}$. The horizontal distance between the BS and the Eve is $d_{BE}=44$ m. The horizontal distance between the BS and the IR is selected within the range $d_{BI}=[10\ \text{m},70\ \text{m}]$. The simulation results are averaged over 200 independent channel realizations. ### V-A Convergence Analysis The convergence performance of the proposed BCD-MM algorithm is investigated. The iterations of the BCD algorithm are termed outer-layer iterations, while the iterations of the MM algorithm are termed inner-layer iterations. Fig. 3 shows three examples of the convergence behaviour of the outer-layer iterations for $M=$10, 20 and 40 IRS phase shifts. In Fig. 3, the SR increases with the iteration number, and finally reaches a stable value. 
### V-A Convergence Analysis

The convergence performance of the proposed BCD-MM algorithm is investigated first. The iterations of the BCD algorithm are termed outer-layer iterations, while the iterations of the MM algorithm are termed inner-layer iterations. Fig. 3 shows three examples of the convergence behaviour for $M=10$, 20 and 40 IRS phase shifts. In Fig. 3, the SR increases with the iteration number and finally reaches a stable value. The algorithm converges quickly, within about 20 iterations, which demonstrates its efficiency. Moreover, a larger converged SR value is reached with a larger $M$, which means that better security can be obtained by using more IRS elements. However, more IRS elements also bring a heavier computational burden, which manifests in Fig. 3 as a slower convergence speed.

Figure 3: Convergence behaviour of the BCD algorithm. Figure 4: Convergence behaviour of the MM algorithm.

We also evaluate the convergence performance of the MM algorithm used for optimizing the IRS phase shifts. The inner-layer iterative process of the MM algorithm in the first iteration of the BCD algorithm is shown in Fig. 4. The SR increases as the iteration number increases and finally converges to a stable value. In accordance with the convergence behaviour of the outer-layer iteration, a similar conclusion can be drawn for the inner-layer iteration: a higher converged SR value can be obtained with more phase shifts, but at the cost of a lower convergence speed. The reason is that a larger $M$ introduces more optimization variables, which incur a higher computational complexity.

### V-B Performance Evaluation

In this subsection, our proposed algorithm is evaluated by comparing it with the following three benchmark schemes:

1. RandPhase: The phase shifts of the IRS are randomly selected from $[0,2\pi]$. In this scheme, the MM algorithm is skipped, and only the TPC matrix and the AN covariance matrix are optimized.
2. No-IRS: Without the IRS, the channel matrices of the IRS-related links become zero matrices, i.e., ${\bf{H}}_{R,I}={\bf{0}}$, ${\bf{H}}_{R,E}={\bf{0}}$ and ${\bf{G}}={\bf{0}}$. This scheme reduces to a conventional AN-aided communication system, in which only the TPC matrix and the AN covariance matrix need to be optimized.
3. BCD-QCQP-SDR: The BCD algorithm is utilized. However, the TPC matrix and the AN covariance matrix are optimized by tackling Problem (32) as a QCQP problem, which is solved by general CVX solvers, e.g., SeDuMi or MOSEK. The phase shifts of the IRS are optimized by solving Problem (61) with the SDR technique.

#### V-B1 Impact of Transmit Power

Figure 5: Achievable SR versus the transmit power limit. Figure 6: Achievable SR versus the number of phase shifts $M$.

To evaluate the impact of the transmit power limit $P_{T}$, the average SRs versus the transmit power limit for the various schemes are given in Fig. 5, which demonstrates that the achieved SRs of all the schemes increase as the power limit $P_{T}$ increases. It is observed that the BCD-MM algorithm significantly outperforms the other three benchmark schemes over the entire range of transmit power limits. By comparing the RandPhase scheme with the No-IRS scheme, we find that the RandPhase scheme achieves a higher SR than the No-IRS scheme, and that the SR gap increases with the power limit $P_{T}$. The reason is that, for the RandPhase scheme, the IR is closer to the IRS than the Eve is, so more signal power from the IRS can be acquired by the IR than by the Eve, while for the No-IRS scheme, the IR is farther from the BS than the Eve is, so less signal power from the BS can be acquired by the IR than by the Eve. This comparison signifies that even when the phase shifts of the IRS are random, the IRS can enhance the system security.
In comparison to the No-IRS scheme, the SR gain achieved by the proposed algorithm is substantial and increases greatly with the power limit $P_{T}$, which confirms the effectiveness and benefits of employing the IRS. By comparing the proposed scheme with the RandPhase scheme, we find that the security gain obtained by the proposed scheme is much greater than that of the RandPhase scheme. This is because the phase shifts of the IRS are properly designed so that the signal received at the IR is combined more constructively while the signal received at the Eve is weakened more destructively. This comparison emphasizes that optimizing the phase shifts of the IRS is important and necessary. By comparing the proposed BCD-MM algorithm with the BCD-QCQP-SDR algorithm, we observe that, in terms of the SR performance, the proposed BCD-MM algorithm is better than the BCD-QCQP-SDR algorithm, and the performance gain increases with $P_{T}$. Moreover, the proposed BCD-MM algorithm is much more efficient than the BCD-QCQP-SDR algorithm. The superiority of the proposed algorithm is thus further validated.

#### V-B2 Impact of the Number of Phase Shifts

The average SR performance of the four schemes with various numbers of phase shifts $M$ is shown in Fig. 6, which demonstrates that the proposed BCD-MM algorithm is significantly superior to the other three schemes. We observe that the SR achieved by the BCD-MM scheme obviously increases with $M$, while the RandPhase scheme only shows a slight improvement as $M$ increases, and the No-IRS scheme has very low SRs irrespective of $M$. The larger the number $M$ of IRS elements is, the more significant the performance gain obtained by the proposed algorithm is. For example, when $M$ is as small as $M=10$, the SR gain of BCD-MM relative to No-IRS is only 1.3 bit/s/Hz, while this gain grows to 9.5 bit/s/Hz when $M$ increases to $M=100$. The performance gain of the proposed algorithm originates from two aspects. On the one hand, a higher array gain can be obtained by increasing $M$, since more signal power can be received at the IRS with a larger $M$. On the other hand, a higher reflecting beamforming gain can be obtained by increasing $M$, which means that, by appropriately designing the phase shifts, the coherent sum of the signals reflected by the IRS elements grows with $M$. However, only the array gain can be exploited by the RandPhase scheme; thus its SRs increase very slowly and remain at much lower values than those of the proposed algorithm. These results further confirm that greater security improvements can be achieved by using a large IRS with more reflecting elements and by optimizing the phase shifts properly, although this comes at the cost of a higher computational complexity. In comparison to the BCD-QCQP-SDR algorithm, the proposed BCD-MM algorithm achieves a higher SR, and the SR performance gap increases with $M$.

#### V-B3 Impact of the Relative Location of the IRS

Figure 7: Achievable SR versus the location of the IR $d_{\rm{BI}}$. Figure 8: Achievable SR versus the path loss exponent of IRS-related links.

Fig. 7 illustrates the achieved SRs of the four schemes with various BS-IR horizontal distances $d_{BI}$, where the BS-Eve distance is fixed at $d_{BE}=44$ m. It is observed that the proposed BCD-MM algorithm achieves the highest SR among the four schemes.
When the IR moves away from the BS, the SRs of the four schemes decrease; however, the SRs achieved by the RandPhase scheme, the proposed BCD-MM algorithm and the BCD-QCQP-SDR algorithm increase greatly when the IR approaches the IRS. The SRs achieved by the RandPhase scheme and the No-IRS scheme at the different BS-IR distances are almost the same, except for $d_{BI}\in[40\ \text{m},50\ \text{m}]$, in which case the IRS brings a prominent security enhancement when the IR is close to it, even with random IRS phase shifts. Similarly, the proposed BCD-MM algorithm and the BCD-QCQP-SDR algorithm achieve almost the same SRs, except for $d_{BI}\in[40\ \text{m},50\ \text{m}]$, in which case the IR is close to the IRS and the proposed BCD-MM algorithm is superior to the BCD-QCQP-SDR algorithm. For the other BS-IR distances, where the IR is far from the IRS, the SRs of the RandPhase scheme are similar to those of the No-IRS scheme because the potential of the IRS is not fully exploited. By optimizing the phase shifts of the IRS, the SRs are enhanced at the different BS-IR distances, and the SR gain of the proposed BCD-MM algorithm over the RandPhase scheme increases when the IR moves close to the IRS ($d_{BI}\in[40\ \text{m},50\ \text{m}]$). This signifies that, as long as the IRS is deployed close to the IR, significant security enhancement can be achieved by the IRS in an AN-aided MIMO communication system. Moreover, the IRS phase shifts should be optimized to prevent the system security from degrading to the level of the No-IRS scheme.

#### V-B4 Impact of the Path Loss Exponent of IRS-related Links

In the above simulations, the path loss exponents of the IRS-related links (including the BS-IRS link, the IRS-IR link and the IRS-Eve link) are set to low values by assuming that the IRS is properly located so that these links experience little blockage. In practice, such settings may not always hold in real propagation environments. Thus, it is necessary to investigate the security gain brought by the IRS and our proposed algorithm for higher values of the IRS-related path loss exponents. For the sake of analysis, we assume that the path-loss exponents of the links from the BS to the IRS, from the IRS to the IR and from the IRS to the Eve are identical, i.e., ${\alpha_{{\rm{BR}}}}={\alpha_{{\rm{RI}}}}={\alpha_{{\rm{RE}}}}\buildrel\Delta\over{=}{\alpha_{{\rm{IRS}}}}$. The achieved SRs versus the path-loss exponent ${\alpha_{{\rm{IRS}}}}$ of the IRS-related links are shown in Fig. 8, which demonstrates that the SR obtained by the BCD-MM algorithm decreases as ${\alpha_{{\rm{IRS}}}}$ increases, and finally drops to the SR value achieved by the RandPhase, BCD-QCQP-SDR and No-IRS schemes. The reason is that a larger ${\alpha_{{\rm{IRS}}}}$ means more severe signal attenuation in the IRS-related links and a weaker signal received and reflected at the IRS. In comparison to the BCD-QCQP-SDR algorithm, the proposed BCD-MM algorithm achieves a higher SR when the conditions of the IRS-related channels are good, i.e., when ${\alpha_{{\rm{IRS}}}}$ is low, and achieves almost the same SR when ${\alpha_{{\rm{IRS}}}}$ is large. Similarly, the performance gains brought by our proposed algorithm over the RandPhase and No-IRS schemes are significant for a small ${\alpha_{{\rm{IRS}}}}$. Specifically, for ${\alpha_{{\rm{IRS}}}}=2$ (almost ideal channels), the security gain is up to 9.6 bit/s/Hz over the No-IRS scheme, and 6.8 bit/s/Hz over the RandPhase scheme.
Therefore, the security gain of IRS-assisted systems depends on the channel conditions of the IRS-related links. This suggests that the IRS should preferably be deployed where there are few obstacles, in which case the performance gain brought by the IRS can be fully exploited. Fig. 8 also shows that when ${\alpha_{{\rm{IRS}}}}$ is small, the RandPhase scheme can obtain a security gain over the No-IRS scheme, but this gain decreases to zero when ${\alpha_{{\rm{IRS}}}}$ becomes large. However, the SR gain of the RandPhase scheme over the No-IRS scheme is almost negligible in comparison to the SR gain of the proposed scheme over the No-IRS scheme, which demonstrates the necessity of jointly optimizing the TPC matrix, the AN covariance matrix and the phase shifts at the IRS.

#### V-B5 Impact of the Number of Data Streams

Compared with the MISO scenario, a significant advantage of the MIMO scenario is that multiple data streams can be transmitted to the users. To evaluate the impact of the number of data streams on the SR, the average SRs versus the transmit power limit for various numbers of data streams are given in Fig. 9.

Figure 9: Achievable SR versus the transmit power limit for various numbers of data streams. Figure 10: Achievable SR versus the reflection amplitude $\eta$.

The number of transmit antennas is $N_{T}=4$, and Rician fading channels are utilized. The path loss exponents are ${\alpha_{{\rm{BR}}}}=2.2$, ${\alpha_{{\rm{BE}}}}=3.5$, ${\alpha_{{\rm{BI}}}}=2.5$, ${\alpha_{{\rm{RE}}}}=3.5$ and ${\alpha_{{\rm{RI}}}}=2.5$, respectively. The Rician factor is $\beta=3$, and the number of phase shifts is $M=50$. As shown in Fig. 9, the SR increases with the transmit power limit, and a larger number of data streams results in a higher SR. When the transmit power limit is low, only marginal performance gains are achieved by increasing the number $d$ of data streams; when the transmit power limit is high, significant performance gains can be achieved by increasing $d$. This means that a greater number of data streams ensures a higher SR, and the performance gain expands with the transmit power limit. For the case of $d=1$, the SR performance of $N_{I}=N_{E}=4$ and $N_{I}=N_{E}=1$ is further compared. It is revealed that the SR obtained with four receiving antennas is higher than that obtained with a single receiving antenna when the transmit power limit is relatively low. As the transmit power limit increases, the SR gain brought by multiple receiving antennas decreases. When the transmit power limit is high enough, the SR saturates, and the SR performance with multiple receiving antennas and with a single receiving antenna becomes the same.

#### V-B6 Impact of the Reflection Amplitude

Due to manufacturing and hardware limitations, the signals reflected by the IRS may be attenuated. Hence, in Fig. 10, we study the impact of the reflection amplitude on the security performance. The transmit power limit is 10 dBm. We assume that the reflection amplitudes of all the IRS elements are equal to $\eta$, and that the phase shift matrix of the IRS is rewritten as ${\bf{\Phi}}=\eta\,{\rm{diag}}\{{\phi}_{1},\cdots,{\phi}_{m},\cdots,{\phi}_{M}\}$.
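As a small illustration (an assumption-laden sketch rather than the code used for the simulations), the reflection matrix with a common amplitude $\eta$, and optionally with $b$-bit discrete phases as studied in the next subsection, can be formed as follows.

```python
import numpy as np

def reflection_matrix(phases, eta=1.0, bits=None):
    """Phi = eta * diag(e^{j*theta_1}, ..., e^{j*theta_M}); if bits is given,
    each phase is rounded to the nearest of 2**bits uniform levels in [0, 2*pi)."""
    phases = np.mod(np.asarray(phases, dtype=float), 2 * np.pi)
    if bits is not None:
        step = 2 * np.pi / (2 ** bits)
        phases = np.round(phases / step) * step
    return eta * np.diag(np.exp(1j * phases))
```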
As expected, the SR achieved by the IRS-aided scheme increases with $\eta$ due to the smaller power loss. As $\eta$ increases, the superiority of the proposed BCD-MM algorithm over the other algorithms becomes more obvious. The reflection amplitude thus has a great impact on the security performance. Specifically, when $\eta$ increases from 0.2 to 1, the SR increases by over 3.6 bit/s/Hz for the proposed BCD-MM algorithm.

#### V-B7 Impact of the Discrete Phase Shifts

In practice, it is difficult to realize continuous phase shifts at the reflecting elements of the IRS due to the high manufacturing cost. It is more cost-effective to implement only discrete phase shifts with a small number of control bits for each element, e.g., 1 bit for two-level (0 or $\pi$) phase shifts. Thus, the impact of the number of controlling bits $b$ of the discrete phase shifts on the security performance is investigated in Fig. 11. The transmit power limit is 10 dBm. It is shown that the SR with continuous phase shifts at the IRS is higher than that with discrete phase shifts; the limited resolution of the discrete phase shifts inevitably causes SR performance degradation. The SR of the IRS with discrete phase shifts increases with the number of controlling bits $b$ and becomes saturated when $b\geq 4$, which means that some SR loss remains even when the number of controlling bits $b$ is high. For the proposed BCD-MM algorithm, the maximum SR gap between the continuous phase shifts and the discrete phase shifts is 1.4 bit/s/Hz.

Figure 11: Achievable SR versus the discrete phase bits $b$. Figure 12: Achievable SR versus the transmit power limit for multiple IRs.

#### V-B8 Multiple IRs Scenario

Finally, we consider the multiple-IRs scenario to investigate the security enhancement brought by the IRS to AN-aided MIMO communication systems. The horizontal distances between the BS and the two IRs are selected as $d_{BI,1}=47$ m and $d_{BI,2}=49$ m. To limit the heavy computational load, the number of IRS elements is set to $M=20$. The proposed BCD-QCQP-CCP algorithm is utilized to perform the joint optimization of the TPC matrix, the AN covariance matrix and the phase shifts of the IRS. The achieved SRs of the proposed algorithm, the RandPhase scheme and the No-IRS scheme are shown in Fig. 12. Compared with the RandPhase scheme and the No-IRS scheme, the proposed BCD-QCQP-CCP algorithm optimizes the phase shifts of the IRS and thus achieves a higher SR. The SR gain increases with the power limit $P_{T}$. However, the performance gain is not as large as that in Fig. 6. On the one hand, the number of IRS elements is set to a smaller value than that in Fig. 6 due to the heavy computational load. On the other hand, it is more difficult for the IRS to adjust its phase shifts to guarantee higher SRs for more legitimate IRs.

## VI Conclusions

In this paper, we proposed to enhance the security of AN-aided MIMO secure communication systems by exploiting an IRS. To exploit the IRS efficiently, we formulated an SRM problem by jointly optimizing the TPC matrix at the BS, the covariance matrix of the AN and the phase shifts at the IRS, subject to the transmit power limit and the unit-modulus constraints on the phase shifts. To solve this non-convex problem, we proposed to use the BCD algorithm to decouple the optimization variables and optimize them iteratively. The optimal TPC matrix and AN covariance matrix were obtained in semi-closed form by the Lagrange multiplier method, and the phase shifts at the IRS were obtained in closed form by an efficient MM algorithm. Various simulations validated that significant security gains can be achieved by the proposed algorithm with the aid of the IRS. Furthermore, useful suggestions for choosing and deploying the IRS were provided.
## Appendix A Derivation of Problem (29)

By substituting $h_{1}({\bf{U}}_{I},{\bf{V}},{\bf{V}}_{E},{\bf{W}}_{I})$ of (III-A), $h_{2}({\bf{U}}_{E},{\bf{V}}_{E},{\bf{W}}_{E})$ of (III-A) and $h_{3}({\bf{V}},{\bf{V}}_{E},{\bf{W}}_{X})$ of (III-A) into (III-A), we have $\displaystyle{\rm{C}}^{l}_{AN}({\bf{U}}_{I},{\bf{W}}_{I},{\bf{U}}_{E},{\bf{W}}_{E},{\bf{W}}_{X},{\bf{V}},{\bf{V}}_{E},{\bf{\Phi}})=\log\left|{\bf{W}}_{I}\right|-{\rm{Tr}}({\bf{W}}_{I}{\bf{E}}_{I}({\bf{U}}_{I},{\bf{V}},{\bf{V}}_{E}))+\log\left|{\bf{W}}_{E}\right|$ $\displaystyle\qquad\qquad\qquad-{\rm{Tr}}({\bf{W}}_{E}{\bf{E}}_{E}({\bf{U}}_{E},{\bf{V}}_{E}))+\log\left|{\bf{W}}_{X}\right|-{\rm{Tr}}({\bf{W}}_{X}{\bf{E}}_{X}({\bf{V}},{\bf{V}}_{E}))+d+N_{t}+N_{E}$ $\displaystyle\qquad=C_{g_{0}}-\underbrace{{\rm{Tr}}({\bf{W}}_{I}{\bf{E}}_{I}({\bf{U}}_{I},{\bf{V}},{\bf{V}}_{E}))}_{g_{1}}-\underbrace{{\rm{Tr}}({\bf{W}}_{E}{\bf{E}}_{E}({\bf{U}}_{E},{\bf{V}}_{E}))}_{g_{2}}-\underbrace{{\rm{Tr}}({\bf{W}}_{X}{\bf{E}}_{X}({\bf{V}},{\bf{V}}_{E}))}_{g_{3}},$ (87) where $C_{g_{0}}\buildrel\Delta\over{=}\log\left|{\bf{W}}_{I}\right|+\log\left|{\bf{W}}_{E}\right|+\log\left|{\bf{W}}_{X}\right|+d+N_{t}+N_{E}$. $C_{g_{0}}$ collects the constant terms that are irrelevant to ${\bf{V}}$, ${\bf{V}}_{E}$ and ${\bf{\Phi}}$. By substituting the matrix functions ${\bf{E}}_{I}$, ${\bf{E}}_{E}$ and ${\bf{E}}_{X}$ into (87), we expand ${g}_{1}$, ${g}_{2}$ and ${g}_{3}$ as follows.

(1) ${{g}_{1}}$ can be reformulated as $\displaystyle{{g}_{1}}=\text{Tr}({\bf{W}}_{I}[({\bf{I}}-{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}})({\bf{I}}-{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}})^{H}+{\bf{U}}_{I}^{H}({\hat{\bf{H}}}_{I}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\hat{\bf{H}}}_{I}^{H}+\sigma_{I}^{2}{\bf{I}}_{N_{I}}){\bf{U}}_{I}])$ $\displaystyle=\text{Tr}({\bf{W}}_{I}[({\bf{I}}-{\bf{V}}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I}-{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}}+{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}}{\bf{V}}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I})+({\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I}+{\bf{U}}_{I}^{H}\sigma_{I}^{2}{\bf{I}}_{N_{I}}{\bf{U}}_{I})]).$ (88) By gathering the constant terms that depend only on ${\bf{W}}_{I}$ and ${\bf{U}}_{I}$ into ${{C}}_{g_{1}}$, ${{g}_{1}}$ can be simplified as $\displaystyle{{g}_{1}}=-\text{Tr}({\bf{W}}_{I}{\bf{V}}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I})-\text{Tr}({\bf{W}}_{I}{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}})+\text{Tr}({\bf{V}}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}})+\text{Tr}({\bf{V}}_{E}^{H}{\hat{\bf{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\hat{\bf{H}}}_{I}{\bf{V}}_{E})+{{C}}_{g_{1}},$ (89) where ${{C}}_{g_{1}}\buildrel\Delta\over{=}\text{Tr}({\bf{W}}_{I}+\sigma_{I}^{2}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{U}}_{I})$.
(2) ${{g}_{2}}$ can be reformulated as $\displaystyle{{g}_{2}}=\text{Tr}({\mathbf{W}}_{E}[(\mathbf{I}-{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})(\mathbf{I}-{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})^{H}+\sigma_{E}^{2}{\mathbf{U}}_{E}^{H}{\mathbf{U}}_{E}])$ $\displaystyle=\text{Tr}({\mathbf{W}}_{E}[\mathbf{I}-{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E}-{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E}+{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E}{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E}+\sigma_{E}^{2}{\mathbf{U}}_{E}^{H}{\mathbf{U}}_{E}]).$ (90) By gathering the constant terms that depend only on ${\mathbf{W}}_{E}$ and ${\mathbf{U}}_{E}$ into ${{C}}_{g_{2}}$, ${{g}_{2}}$ can be simplified as $\displaystyle{{g}_{2}}=-\text{Tr}({\mathbf{W}}_{E}{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E})-\text{Tr}({\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})+\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E}{\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})+{{C}}_{g_{2}},$ (91) where ${{C}}_{g_{2}}\buildrel\Delta\over{=}\text{Tr}({\bf{W}}_{E}+\sigma_{E}^{2}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{U}}_{E})$.

(3) ${{g}_{3}}$ can be reformulated as $\displaystyle{{g}_{3}}=\text{Tr}({\mathbf{W}}_{X}({\bf{I}}_{N_{E}}+\sigma_{E}^{-2}{\hat{\bf{H}}}_{E}({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H})\hat{\bf{H}}_{E}^{H})).$ (92) By gathering the constant terms that depend only on ${\mathbf{W}}_{X}$ into ${{C}}_{g_{3}}$, ${{g}_{3}}$ can be simplified as $\displaystyle{{g}_{3}}=\sigma_{E}^{-2}\text{Tr}({\mathbf{V}}^{H}\mathbf{\hat{H}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E}\mathbf{V})+\sigma_{E}^{-2}\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})+{{C}}_{g_{3}},$ (93) where ${{C}}_{g_{3}}\buildrel\Delta\over{=}\text{Tr}({\bf{W}}_{X})$.
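A quick numerical sanity check of the rearrangement in item (1) (items (2) and (3) follow the same pattern) can be carried out with random matrices; the dimensions and the seed below are assumptions made for the check only.

```python
import numpy as np

rng = np.random.default_rng(1)
crand = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)

d, n_t, n_i, sig2 = 2, 4, 2, 0.3                 # illustrative dimensions and noise power
U, H, V, VE = crand(n_i, d), crand(n_i, n_t), crand(n_t, d), crand(n_t, n_t)
W = crand(d, d); W = W @ W.conj().T              # Hermitian W_I, as in the algorithm

# g1 as in (88)
A = np.eye(d) - U.conj().T @ H @ V
E_I = A @ A.conj().T + U.conj().T @ (H @ VE @ VE.conj().T @ H.conj().T + sig2 * np.eye(n_i)) @ U
g1_direct = np.trace(W @ E_I)

# g1 as in (89), with the constants gathered into C_g1
C_g1 = np.trace(W + sig2 * W @ U.conj().T @ U)
g1_simplified = (-np.trace(W @ V.conj().T @ H.conj().T @ U) - np.trace(W @ U.conj().T @ H @ V)
                 + np.trace(V.conj().T @ H.conj().T @ U @ W @ U.conj().T @ H @ V)
                 + np.trace(VE.conj().T @ H.conj().T @ U @ W @ U.conj().T @ H @ VE) + C_g1)
assert np.isclose(g1_direct, g1_simplified)      # (88) and (89) agree
```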
By substituting (89), (91) and (93) into (87), we have $\displaystyle{\rm{C}}^{l}_{AN}({\bf{U}}_{I},{\bf{W}}_{I},{\bf{U}}_{E},{\bf{W}}_{E},{\bf{W}}_{X},{\bf{V}},{\bf{V}}_{E},{\bf{\Phi}})=\text{Tr}({\mathbf{W}}_{I}{\mathbf{V}}^{H}{\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I})+\text{Tr}({\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}\mathbf{V})$ $\displaystyle-\text{Tr}({\mathbf{V}}^{H}{\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I}{\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}\mathbf{V})-\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I}{\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}{\mathbf{V}}_{E})+\text{Tr}({\mathbf{W}}_{E}{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E})$ $\displaystyle+\text{Tr}({\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})-\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E}{\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})-\sigma_{E}^{-2}\text{Tr}({\mathbf{V}}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E}\mathbf{V})$ $\displaystyle-\sigma_{E}^{-2}\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})+C_{g},$ (94) where $C_{g}\buildrel\Delta\over{=}C_{g_{0}}-C_{g_{1}}-C_{g_{2}}-C_{g_{3}}$. Equation (94) can be rewritten more compactly as $\displaystyle{\rm{C}}^{l}_{AN}({\bf{U}}_{I},{\bf{W}}_{I},{\bf{U}}_{E},{\bf{W}}_{E},{\bf{W}}_{X},{\bf{V}},{\bf{V}}_{E},{\bf{\Phi}})=C_{g}+\text{Tr}({\mathbf{W}}_{I}{\mathbf{V}}^{H}{\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I})+\text{Tr}({\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}\mathbf{V})$ $\displaystyle\quad-\text{Tr}({\mathbf{V}}^{H}{\mathbf{H}}_{V}\mathbf{V})+\text{Tr}({\mathbf{W}}_{E}{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E})+\text{Tr}({\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})-\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{H}}_{VE}{\mathbf{V}}_{E}),$ (95) where $\displaystyle{\mathbf{H}}_{V}={\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I}{\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}+\sigma_{E}^{-2}\mathbf{\hat{H}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E},$ (96) $\displaystyle{\mathbf{H}}_{VE}={\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I}{\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}+{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E}{\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}+\sigma_{E}^{-2}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{W}}_{X}{\mathbf{\hat{H}}}_{E}.$ (97) By substituting (95) into Problem (28) and removing the constant term $C_{g}$, we arrive at Problem (29).
## Appendix B Derivation of the New OF Form in (57)

The objective function of Problem (56) is $\displaystyle{g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})=-\text{Tr}({\mathbf{W}}_{I}{\mathbf{V}}^{H}{\mathbf{\hat{H}}}_{I}^{H}{\mathbf{U}}_{I})-\text{Tr}({\mathbf{W}}_{I}{\mathbf{U}}_{I}^{H}{\mathbf{\hat{H}}}_{I}\mathbf{V})+\text{Tr}({\mathbf{V}}^{H}{\mathbf{H}}_{V}\mathbf{V})-\text{Tr}({\mathbf{W}}_{E}{\mathbf{V}}_{E}^{H}{\mathbf{\hat{H}}}_{E}^{H}{\mathbf{U}}_{E})-\text{Tr}({\mathbf{W}}_{E}{\mathbf{U}}_{E}^{H}{\mathbf{\hat{H}}}_{E}{\mathbf{V}}_{E})+\text{Tr}({\mathbf{V}}_{E}^{H}{\mathbf{H}}_{VE}{\mathbf{V}}_{E}).$ (98) The third term of (98) is $\displaystyle{\rm{Tr}}\left({\bf{V}}^{H}{\bf{H}}_{V}{\bf{V}}\right)={\rm{Tr}}\left[{\bf{V}}^{H}\left({\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{\hat{H}}}_{I}+\sigma_{E}^{-2}{\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}{\bf{\hat{H}}}_{E}\right){\bf{V}}\right]={\rm{Tr}}\left[{\bf{\hat{H}}}_{I}{\bf{V}}{\bf{V}}^{H}{\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]+\sigma_{E}^{-2}{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}{\bf{V}}{\bf{V}}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}\right].$ (99) The sixth term of (98) is $\displaystyle{\rm{Tr}}\left({\bf{V}}_{E}^{H}{\bf{H}}_{VE}{\bf{V}}_{E}\right)={\rm{Tr}}\left[{\bf{V}}_{E}^{H}\left({\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{\hat{H}}}_{I}+{\bf{\hat{H}}}_{E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{\hat{H}}}_{E}+\sigma_{E}^{-2}{\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}{\bf{\hat{H}}}_{E}\right){\bf{V}}_{E}\right]={\rm{Tr}}\left[{\bf{\hat{H}}}_{I}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]+{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}\right]+\sigma_{E}^{-2}{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}\right].$ (100) The summation of (99) and (100) is $\displaystyle{\rm{Tr}}\left({\bf{V}}^{H}{\bf{H}}_{V}{\bf{V}}\right)+{\rm{Tr}}\left({\bf{V}}_{E}^{H}{\bf{H}}_{VE}{\bf{V}}_{E}\right)={\rm{Tr}}\left[{\bf{\hat{H}}}_{I}\left({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H}\right){\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]+\sigma_{E}^{-2}{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}\left({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H}\right){\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}\right]+{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}\right].$ (101) By defining ${\bf{V}}_{X}=\left({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H}\right)$ and ${\bf{M}}_{I}={\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}$, the first part of (101) can be derived as $\displaystyle{\rm{Tr}}\left[{\bf{\hat{H}}}_{I}\left({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H}\right){\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]={\rm{Tr}}\left[{\bf{\hat{H}}}_{I}{\bf{V}}_{X}{\bf{\hat{H}}}_{I}^{H}{\bf{M}}_{I}\right]$
$\displaystyle={\rm{Tr}}\left[\left({\bf{H}}_{b,I}+{\bf{H}}_{R,I}{\bf{\Phi G}}\right){\bf{V}}_{X}\left({\bf{H}}_{b,I}^{H}+{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,I}^{H}\right){\bf{M}}_{I}\right]$ $\displaystyle={\rm{Tr}}\left[\left({\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}+{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,I}^{H}+{\bf{H}}_{R,I}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}+{\bf{H}}_{R,I}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,I}^{H}\right){\bf{M}}_{I}\right]$ $\displaystyle={\rm{Tr}}[{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}+{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,I}^{H}{\bf{M}}_{I}+{\bf{H}}_{R,I}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}+{\bf{H}}_{R,I}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,I}^{H}{\bf{M}}_{I}].$ (102) The derivation in (102) can be used for the second and third parts of (101). Based on the derivation in (102), the second part of (101) can be derived as $\displaystyle\sigma_{E}^{-2}{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}\left({\bf{V}}{\bf{V}}^{H}+{\bf{V}}_{E}{\bf{V}}_{E}^{H}\right){\bf{\hat{H}}}_{E}^{H}{\bf{W}}_{X}\right]=\sigma_{E}^{-2}{\rm{Tr}}[{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}+{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}+{\bf{H}}_{R,E}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}+{\bf{H}}_{R,E}{\bf{\Phi G}}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}].$ (103) Similarly, by defining ${\bf{M}}_{E}={\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}$, the third part of (101) can be derived as $\displaystyle{\rm{Tr}}\left[{\bf{\hat{H}}}_{E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}\right]={\rm{Tr}}\left[{\bf{\hat{H}}}_{E}\left({\bf{V}}_{E}{\bf{V}}_{E}^{H}\right){\bf{\hat{H}}}_{E}^{H}{\bf{M}}_{E}\right]={\rm{Tr}}[{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}+{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}+{\bf{H}}_{R,E}{\bf{\Phi G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}+{\bf{H}}_{R,E}{\bf{\Phi G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}].$ (104) By adding (102), (103) and (104), and gathering the constant terms irrelevant to ${\bf{\Phi}}$, Equation (101) becomes $\displaystyle{\rm{Tr}}\left({\bf{V}}^{H}{\bf{H}}_{V}{\bf{V}}\right)+{\rm{Tr}}\left({\bf{V}}_{E}^{H}{\bf{H}}_{VE}{\bf{V}}_{E}\right)={\rm{Tr}}\left[{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{G}}^{H}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{G}}^{H}+{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}\right)\right]$ $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi}}\left({\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}\right)\right]$
$\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi G}}{\bf{V}}_{X}{\bf{G}}^{H}{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}\right)\right]$ $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}\right]+C_{{t}_{1}},$ (105) where $\displaystyle C_{{t}_{1}}={\rm{Tr}}\left[{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}\right]+\sigma_{E}^{-2}{\rm{Tr}}\left[{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}\right]+{\rm{Tr}}\left[{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}\right].$ (106) The first term of ${g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})$ is derived as $\displaystyle{\rm{Tr}}\left({\bf{W}}_{I}{\bf{V}}^{H}{\bf{\hat{H}}}_{I}^{H}{\bf{U}}_{I}\right)={\rm{Tr}}\left({\bf{U}}_{I}{\bf{W}}_{I}^{H}{\bf{V}}^{H}{\bf{\hat{H}}}_{I}^{H}\right)=\underbrace{{\rm{Tr}}\left[{\bf{U}}_{I}{\bf{W}}_{I}^{H}{\bf{V}}^{H}{\bf{H}}_{b,I}^{H}\right]}_{C_{{t}_{2}}(\text{constant for }\mathbf{\Phi})}+{\rm{Tr}}\left[{\bf{H}}_{R,I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}^{H}{\bf{V}}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\right].$ (107) The second term of ${g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})$ is derived as $\displaystyle{\rm{Tr}}\left({\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{\hat{H}}}_{I}{\bf{V}}\right)={\rm{Tr}}\left({\bf{\hat{H}}}_{I}{\bf{V}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right)={\rm{Tr}}\left[\left({\bf{H}}_{b,I}+{\bf{H}}_{R,I}{\bf{\Phi G}}\right){\bf{V}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]=\underbrace{{\rm{Tr}}\left[{\bf{H}}_{b,I}{\bf{V}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}\right]}_{C_{{t}_{3}}(\text{constant for }\mathbf{\Phi})}+{\rm{Tr}}\left[{\bf{\Phi GV}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}\right].$ (108) The fourth term of ${g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})$ is derived as $\displaystyle{\rm{Tr}}\left({\bf{W}}_{E}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}{\bf{U}}_{E}\right)={\rm{Tr}}\left({\bf{U}}_{E}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{\hat{H}}}_{E}^{H}\right)=\underbrace{{\rm{Tr}}\left[{\bf{U}}_{E}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}\right]}_{C_{{t}_{4}}(\text{constant for }\mathbf{\Phi})}+{\rm{Tr}}\left[{\bf{H}}_{R,E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\right].$ (109) The fifth term of ${g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})$ is derived as $\displaystyle{\rm{Tr}}\left({\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{\hat{H}}}_{E}{\bf{V}}_{E}\right)={\rm{Tr}}\left({\bf{\hat{H}}}_{E}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}\right)=\underbrace{{\rm{Tr}}\left[{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}\right]}_{C_{{t}_{5}}(\text{constant for }\mathbf{\Phi})}+{\rm{Tr}}\left[{\bf{\Phi G}}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{H}}_{R,E}\right].$ (110) By substituting the first term using (107), the second term using (108), the fourth term using (109), the fifth term using (110), and the sum of the third and sixth terms using (105) into ${g_{0}}(\mathbf{V},{\mathbf{V}}_{E},\mathbf{\Phi})$, and gathering the constant terms irrelevant to ${\bf{\Phi}}$, we have $\displaystyle{g_{0}}(\mathbf{\Phi})=-\text{Equation}\ (107)-\text{Equation}\ (108)-\text{Equation}\ (109)-\text{Equation}\ (110)+\text{Equation}\ (105)$
$\displaystyle={\rm{Tr}}\left[{\bf{\Phi}}^{H}\left(\begin{array}{l}{\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{G}}^{H}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{G}}^{H}+{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}\\ -{\bf{H}}_{R,I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}^{H}{\bf{V}}^{H}{\bf{G}}^{H}-{\bf{H}}_{R,E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{G}}^{H}\end{array}\right)\right]$ (112) $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi}}\left(\begin{array}{l}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}\\ -{\bf{GV}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}-{\bf{G}}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{H}}_{R,E}\end{array}\right)\right]$ (114) $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}\right)\right]$ $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi GV}}{\bf{V}}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}\right)\right]+C_{t}$ $\displaystyle={\rm{Tr}}\left[{\bf{\Phi}}^{H}\left(\begin{array}{l}{\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{b,I}{\bf{V}}_{X}{\bf{G}}^{H}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{b,E}{\bf{V}}_{X}{\bf{G}}^{H}+{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{b,E}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}\\ -{\bf{H}}_{R,I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}^{H}{\bf{V}}^{H}{\bf{G}}^{H}-{\bf{H}}_{R,E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}^{H}{\bf{V}}_{E}^{H}{\bf{G}}^{H}\end{array}\right)\right]$ (116) $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi}}\left(\begin{array}{l}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}\\ -{\bf{GV}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}-{\bf{G}}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{H}}_{R,E}\end{array}\right)\right]$ (118) $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{H}}_{R,E}^{H}{\bf{U}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{H}}_{R,E}\right)\right]$ $\displaystyle\quad+{\rm{Tr}}\left[{\bf{\Phi GV}}{\bf{V}}^{H}{\bf{G}}^{H}{\bf{\Phi}}^{H}\left({\bf{H}}_{R,I}^{H}{\bf{U}}_{I}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}\right)\right]+C_{t},$ (119) where $\displaystyle C_{t}=C_{{t}_{1}}+C_{{t}_{2}}+C_{{t}_{3}}+C_{{t}_{4}}+C_{{t}_{5}}.$ (120) Then ${g_{0}}(\mathbf{\Phi})$ becomes
$\displaystyle{g_{0}}(\mathbf{\Phi})={\rm{Tr}}\left({\bf{\Phi}}^{H}{\bf{D}}^{H}\right)+{\rm{Tr}}\left({\bf{\Phi}}{\bf{D}}\right)+{\rm{Tr}}\left[{\bf{\Phi}}{\bf{C}}_{VE}{\bf{\Phi}}^{H}{\bf{B}}_{VE}\right]+{\rm{Tr}}\left({\bf{\Phi}}{\bf{C}}_{V}{\bf{\Phi}}^{H}{\bf{B}}_{V}\right)+C_{t}$ $\displaystyle={\rm{Tr}}\left({\bf{\Phi}}^{H}{\bf{D}}^{H}\right)+{\rm{Tr}}\left({\bf{\Phi}}{\bf{D}}\right)+{\rm{Tr}}\left[{\bf{\Phi}}^{H}{\bf{B}}_{VE}{\bf{\Phi}}{\bf{C}}_{VE}\right]+{\rm{Tr}}\left({\bf{\Phi}}^{H}{\bf{B}}_{V}{\bf{\Phi}}{\bf{C}}_{V}\right)+C_{t},$ (121) where, by matching (121) with (119), ${\bf{D}}={\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{G}}{\bf{V}}_{X}{\bf{H}}_{b,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{H}}_{b,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}-{\bf{G}}{\bf{V}}{\bf{W}}_{I}{\bf{U}}_{I}^{H}{\bf{H}}_{R,I}-{\bf{G}}{\bf{V}}_{E}{\bf{W}}_{E}{\bf{U}}_{E}^{H}{\bf{H}}_{R,E}$, ${\bf{C}}_{VE}={\bf{G}}{\bf{V}}_{E}{\bf{V}}_{E}^{H}{\bf{G}}^{H}$, ${\bf{B}}_{VE}={\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}+{\bf{H}}_{R,E}^{H}{\bf{M}}_{E}{\bf{H}}_{R,E}$, ${\bf{C}}_{V}={\bf{G}}{\bf{V}}{\bf{V}}^{H}{\bf{G}}^{H}$, and ${\bf{B}}_{V}={\bf{H}}_{R,I}^{H}{\bf{M}}_{I}{\bf{H}}_{R,I}+\sigma_{E}^{-2}{\bf{H}}_{R,E}^{H}{\bf{W}}_{X}{\bf{H}}_{R,E}$.
E., Cukanovaite E., et al. (2021): A catalogue of white dwarfs in Gaia EDR3. MNRAS 508(3):3877–3896. 10.1093/mnras/stab2672, arXiv:2106.07669 [astro-ph.SR] * Giammichele et al. [2012] Giammichele N., Bergeron P., Dufour P. (2012): Know Your Neighborhood: A Detailed Model Atmosphere Analysis of Nearby White Dwarfs. ApJS 199(2):29. 10.1088/0067-0049/199/2/29, arXiv:1202.5581 [astro-ph.SR] * Gianninas et al. [2010] Gianninas A., Bergeron P., Dupuis J., et al. (2010): Spectroscopic Analysis of Hot, Hydrogen-rich White Dwarfs: The Presence of Metals and the Balmer-line Problem. ApJ 720(1):581–602. 10.1088/0004-637X/720/1/581 * Gianninas et al. [2011] Gianninas A., Bergeron P., Ruiz M. T. (2011): A Spectroscopic Survey and Analysis of Bright, Hydrogen-rich White Dwarfs. ApJ 743(2):138. 10.1088/0004-637X/743/2/138, arXiv:1109.3171 [astro-ph.SR] * Good et al. [2004] Good S. A., Barstow M. A., Holberg J. B., et al. (2004): Comparison of the effective temperatures, gravities and helium abundances of DAO white dwarfs from Balmer and Lyman line studies. MNRAS 355(3):1031–1040. 10.1111/j.1365-2966.2004.08406.x * Good et al. [2005] Good S. A., Barstow M. A., Burleigh M. R., et al. (2005): Heavy element abundances in DAO white dwarfs measured from FUSE data. MNRAS 363(1):183–196. 10.1111/j.1365-2966.2005.09428.x, arXiv:astro-ph/0507341 [astro-ph] * Greenstein [1986] Greenstein J. L. (1986): The Frequency of Hydrogen White Dwarfs as Observed at High Signal to Noise. ApJ 304:334. 10.1086/164168 * Hansen et al. [2007] Hansen B. M. S., Anderson J., Brewer J., et al. (2007): The White Dwarf Cooling Sequence of NGC 6397. ApJ 671(1):380–401. 10.1086/522567, arXiv:astro-ph/0701738 [astro-ph] * Harrison et al. [2018] Harrison J. H. D., Bonsor A., Madhusudhan N. (2018): Polluted white dwarfs: constraints on the origin and geology of exoplanetary material. MNRAS 479(3):3814–3841. 10.1093/mnras/sty1700, arXiv:1806.09917 [astro-ph.EP] * Harrison et al. [2021] Harrison J. H. D., Bonsor A., Kama M., et al. (2021): Bayesian constraints on the origin and geology of exoplanetary material using a population of externally polluted white dwarfs. MNRAS 504(2):2853–2867. 10.1093/mnras/stab736, arXiv:2103.05713 [astro-ph.EP] * Heber et al. [1997] Heber U., Napiwotzki R., Lemke M., et al. (1997): Helium line profile variations in the DAB white dwarf HS 0209+0832. A&A 324:L53–L56 * Heinonen et al. [2020] Heinonen R. A., Saumon D., Daligault J., et al. (2020): Diffusion Coefficients in the Envelopes of White Dwarfs. ApJ 896(1):2. 10.3847/1538-4357/ab91ad, arXiv:2005.05891 [astro-ph.SR] * Heintz et al. [2022] Heintz T. M., Hermes J. J., El-Badry K., et al. (2022): Testing White Dwarf Age Estimates Using Wide Double White Dwarf Binaries from Gaia EDR3. ApJ 934(2):148. 10.3847/1538-4357/ac78d9, arXiv:2206.00025 [astro-ph.SR] * Herwig et al. [1999] Herwig F., Blöcker T., Langer N., et al. (1999): On the formation of hydrogen-deficient post-AGB stars. A&A 349:L5–L8. arXiv:astro-ph/9908108 [astro-ph] * Holberg et al. [1993] Holberg J. B., Barstow M. A., Buckley D. A. H., et al. (1993): Two New Extremely Iron-rich Hot DA White Dwarfs and the Nature of the EUV Opacity. ApJ 416:806. 10.1086/173278 * Hollands et al. [2017] Hollands M. A., Koester D., Alekseev V., et al. (2017): Cool DZ white dwarfs - I. Identification and spectral analysis. MNRAS 467(4):4970–5000. 10.1093/mnras/stx250, arXiv:1701.07827 [astro-ph.SR] * Hollands et al. [2018a] Hollands M. A., Gänsicke B. T., Koester D. 
(2018a): Cool DZ white dwarfs II: compositions and evolution of old remnant planetary systems. MNRAS 477(1):93–111. 10.1093/mnras/sty592, arXiv:1801.07714 [astro-ph.SR] * Hollands et al. [2018b] Hollands M. A., Tremblay P. E., Gänsicke B. T., et al. (2018b): The Gaia 20 pc white dwarf sample. MNRAS 480(3):3942–3961. 10.1093/mnras/sty2057, arXiv:1805.12590 [astro-ph.SR] * Hollands et al. [2020] Hollands M. A., Tremblay P. E., Gänsicke B. T., et al. (2020): An ultra-massive white dwarf with a mixed hydrogen-carbon atmosphere as a likely merger remnant. Nature Astronomy 4:663–669. 10.1038/s41550-020-1028-0, arXiv:2003.00028 [astro-ph.SR] * Hollands et al. [2021] Hollands M. A., Tremblay P.-E., Gänsicke B. T., et al. (2021): Alkali metals in white dwarf atmospheres as tracers of ancient planetary crusts. Nature Astronomy 5:451–459. 10.1038/s41550-020-01296-7, arXiv:2101.01225 [astro-ph.EP] * Hollands et al. [2022] Hollands M. A., Tremblay P. E., Gänsicke B. T., et al. (2022): Spectral analysis of cool white dwarfs accreting from planetary systems: from the ultraviolet to the optical. MNRAS 511(1):71–82. 10.1093/mnras/stab3696, arXiv:2112.08887 [astro-ph.SR] * Hoskin et al. [2020] Hoskin M. J., Toloza O., Gänsicke B. T., et al. (2020): White dwarf pollution by hydrated planetary remnants: hydrogen and metals in WD J204713.76-125908.9. MNRAS 499(1):171–182. 10.1093/mnras/staa2717, arXiv:2009.05053 [astro-ph.EP] * Hoyer et al. [2017] Hoyer D., Rauch T., Werner K., et al. (2017): Complete spectral energy distribution of the hot, helium-rich white dwarf RX J0503.9-2854. A&A 598:A135. 10.1051/0004-6361/201629869, arXiv:1610.09177 [astro-ph.SR] * Hoyer et al. [2018] Hoyer D., Rauch T., Werner K., et al. (2018): Search for trans-iron elements in hot, helium-rich white dwarfs with the HST Cosmic Origins Spectrograph. A&A 612:A62. 10.1051/0004-6361/201732401, arXiv:1801.02414 [astro-ph.SR] * Hügelmeyer et al. [2005] Hügelmeyer S. D., Dreizler S., Werner K., et al. (2005): Spectral analyses of DO white dwarfs and PG 1159 stars from the Sloan Digital Sky Survey. A&A 442(1):309–314. 10.1051/0004-6361:20053280, arXiv:astro-ph/0508101 [astro-ph] * Hügelmeyer et al. [2006] Hügelmeyer S. D., Dreizler S., Homeier D., et al. (2006): Spectral analyses of eighteen hot H-deficient (pre-) white dwarfs from the Sloan Digital Sky Survey Data Release 4. A&A 454(2):617–624. 10.1051/0004-6361:20064869, arXiv:astro-ph/0605551 [astro-ph] * Iben & Tutukov [1984] Iben J.I., Tutukov A. V. (1984): Cooling of low-mass carbon-oxygen dwarfs from the planetary nucleus stage through the crystallization stage. ApJ 282:615–630. 10.1086/162241 * Iben et al. [1983] Iben J.I., Kaler J. B., Truran J. W., et al. (1983): On the evolution of those nuclei of planetary nebulae that experiencea final helium shell flash. ApJ 264:605–612. 10.1086/160631 * Isern [2019] Isern J. (2019): The Star Formation History in the Solar Neighborhood as Told by Massive White Dwarfs. ApJ 878(1):L11. 10.3847/2041-8213/ab238e, arXiv:1905.10779 [astro-ph.GA] * Izquierdo et al. [2021] Izquierdo P., Toloza O., Gänsicke B. T., et al. (2021): GD 424 - a helium-atmosphere white dwarf with a large amount of trace hydrogen in the process of digesting a rocky planetesimal. MNRAS 501(3):4276–4288. 10.1093/mnras/staa3987, arXiv:2012.12957 [astro-ph.EP] * Jiménez-Esteban et al. [2018] Jiménez-Esteban F. M., Torres S., Rebassa-Mansergas A., et al. (2018): A white dwarf catalogue from Gaia-DR2 and the Virtual Observatory. MNRAS 480(4):4505–4518. 
10.1093/mnras/sty2120, arXiv:1807.02559 [astro-ph.SR] * Jiménez-Esteban et al. [2023] Jiménez-Esteban F. M., Torres S., Rebassa-Mansergas A., et al. (2023): Spectral classification of the 100 pc white dwarf population from Gaia-DR3 and the virtual observatory. MNRAS 518(4):5106–5122. 10.1093/mnras/stac3382, arXiv:2211.08852 [astro-ph.SR] * Johnson et al. [2022] Johnson T. M., Klein B. L., Koester D., et al. (2022): Unusual Abundances from Planetary System Material Polluting the White Dwarf G238-44. ApJ 941(2):113. 10.3847/1538-4357/aca089, arXiv:2211.02673 [astro-ph.EP] * Jordan & Koester [1986] Jordan S., Koester D. (1986): Model atmospheres and synthetic spectra for white dwarfs with chemically stratified atmospheres. A&AS 65:367–377 * Jura & Xu [2010] Jura M., Xu S. (2010): The Survival of Water Within Extrasolar Minor Planets. AJ 140(5):1129–1136. 10.1088/0004-6256/140/5/1129, arXiv:1001.2595 [astro-ph.EP] * Jura & Young [2014] Jura M., Young E. D. (2014): Extrasolar Cosmochemistry. Ann. Rev. Earth Planet. Sci. 42(1):45–67. 10.1146/annurev-earth-060313-054740 * Jura et al. [2012] Jura M., Xu S., Klein B., et al. (2012): Two Extrasolar Asteroids with Low Volatile-element Mass Fractions. ApJ 750(1):69. 10.1088/0004-637X/750/1/69, arXiv:1203.2885 [astro-ph.EP] * Kaiser et al. [2021] Kaiser B. C., Clemens J. C., Blouin S., et al. (2021): Lithium pollution of a white dwarf records the accretion of an extrasolar planetesimal. Science 371(6525):168–172. 10.1126/science.abd1714, arXiv:2012.12900 [astro-ph.EP] * Kalirai [2012] Kalirai J. S. (2012): The age of the Milky Way inner halo. Nature 486(7401):90–92. 10.1038/nature11062, arXiv:1205.6802 [astro-ph.GA] * Kawka et al. [2023] Kawka A., Ferrario L., Vennes S. (2023): The non-explosive stellar merging origin of the ultra-massive carbon-rich white dwarfs. MNRAS 520(4):6299–6311. 10.1093/mnras/stad553, arXiv:2302.11118 [astro-ph.SR] * Kepler et al. [2019] Kepler S. O., Pelisoli I., Koester D., et al. (2019): White dwarf and subdwarf stars in the Sloan Digital Sky Survey Data Release 14. MNRAS 486(2):2169–2183. 10.1093/mnras/stz960, arXiv:1904.01626 [astro-ph.SR] * Kepler et al. [2021] Kepler S. O., Koester D., Pelisoli I., et al. (2021): White dwarf and subdwarf stars in the Sloan Digital Sky Survey Data Release 16. MNRAS 507(3):4646–4660. 10.1093/mnras/stab2411, arXiv:2108.10915 [astro-ph.SR] * Kilic et al. [2017] Kilic M., Munn J. A., Harris H. C., et al. (2017): The Ages of the Thin Disk, Thick Disk, and the Halo from Nearby White Dwarfs. ApJ 837(2):162. 10.3847/1538-4357/aa62a5, arXiv:1702.06984 [astro-ph.SR] * Kilic et al. [2018] Kilic M., Hambly N. C., Bergeron P., et al. (2018): Gaia reveals evidence for merged white dwarfs. MNRAS 479(1):L113–L117. 10.1093/mnrasl/sly110, arXiv:1805.01227 [astro-ph.SR] * Kilic et al. [2020] Kilic M., Bergeron P., Kosakowski A., et al. (2020): The 100 pc White Dwarf Sample in the SDSS Footprint. ApJ 898(1):84. 10.3847/1538-4357/ab9b8d, arXiv:2006.00323 [astro-ph.SR] * Kilic et al. [2024] Kilic M., Bergeron P., Blouin S., et al. (2024): White Dwarf Merger Remnants: The DAQ Subclass. arXiv e-prints arXiv:2403.08878. 10.48550/arXiv.2403.08878, arXiv:2403.08878 [astro-ph.SR] * Klein et al. [2010] Klein B., Jura M., Koester D., et al. (2010): Chemical Abundances in the Externally Polluted White Dwarf GD 40: Evidence of a Rocky Extrasolar Minor Planet. ApJ 709(2):950–962. 10.1088/0004-637X/709/2/950, arXiv:0912.1422 [astro-ph.EP] * Klein et al. [2011] Klein B., Jura M., Koester D., et al. 
(2011): Rocky Extrasolar Planetary Compositions Derived from Externally Polluted White Dwarfs. ApJ 741(1):64. 10.1088/0004-637X/741/1/64, arXiv:1108.1565 [astro-ph.EP] * Klein et al. [2021] Klein B. L., Doyle A. E., Zuckerman B., et al. (2021): Discovery of Beryllium in White Dwarfs Polluted by Planetesimal Accretion. ApJ 914(1):61. 10.3847/1538-4357/abe40b, arXiv:2102.01834 [astro-ph.SR] * Koester [1976] Koester D. (1976): Convective Mixing and Accretion in White Dwarfs. A&A 52:415 * Koester [2009] Koester D. (2009): Accretion and diffusion in white dwarfs. New diffusion timescales and applications to GD 362 and G 29-38. A&A 498(2):517–525. 10.1051/0004-6361/200811468, arXiv:0903.1499 [astro-ph.SR] * Koester [2015] Koester D. (2015): On Thermohaline Mixing in Accreting White Dwarfs. In: Dufour P., Bergeron P., Fontaine G. (eds.) ASP Conf. Ser. 493: 19th European Workshop on White Dwarfs. San Francisco: Astronomical Society of the Pacific, p. 129, 10.48550/arXiv.1408.6934, 1408.6934 * Koester & Kepler [2015] Koester D., Kepler S. O. (2015): DB white dwarfs in the Sloan Digital Sky Survey data release 10 and 12. A&A 583:A86. 10.1051/0004-6361/201527169, arXiv:1509.08244 [astro-ph.SR] * Koester & Kepler [2019] Koester D., Kepler S. O. (2019): Carbon-rich (DQ) white dwarfs in the Sloan Digital Sky Survey. A&A 628:A102. 10.1051/0004-6361/201935946, arXiv:1905.11174 [astro-ph.SR] * Koester & Knist [2006] Koester D., Knist S. (2006): New DQ white dwarfs in the Sloan Digital Sky Survey DR4: confirmation of two sequences. A&A 454(3):951–956. 10.1051/0004-6361:20065287, arXiv:astro-ph/0603734 [astro-ph] * Koester & Wilken [2006] Koester D., Wilken D. (2006): The accretion-diffusion scenario for metals in cool white dwarfs. A&A 453(3):1051–1057. 10.1051/0004-6361:20064843, arXiv:astro-ph/0603185 [astro-ph] * Koester et al. [1982] Koester D., Weidemann V., Zeidler E. M. (1982): Atmospheric parameters and carbon abundance of white dwarfs of spectral types C2 and DC. A&A 116:147–157 * Koester et al. [1994] Koester D., Liebert J., Saffer R. A. (1994): GD 323: New Observations and Analysis of the Prototype DAB White Dwarf. ApJ 422:783. 10.1086/173770 * Koester et al. [2005] Koester D., Rollenhagen K., Napiwotzki R., et al. (2005): Metal traces in white dwarfs of the SPY (ESO Supernova Ia Progenitor Survey) sample. A&A 432(3):1025–1032. 10.1051/0004-6361:20041927 * Koester et al. [2011] Koester D., Girven J., Gänsicke B. T., et al. (2011): Cool DZ white dwarfs in the SDSS. A&A 530:A114. 10.1051/0004-6361/201116816, arXiv:1105.0268 [astro-ph.SR] * Koester et al. [2014a] Koester D., Gänsicke B. T., Farihi J. (2014a): The frequency of planetary debris around young white dwarfs. A&A 566:A34. 10.1051/0004-6361/201423691, arXiv:1404.2617 [astro-ph.SR] * Koester et al. [2014b] Koester D., Provencal J., Gänsicke B. T. (2014b): Atmospheric parameters and carbon abundance for hot DB white dwarfs. A&A 568:A118. 10.1051/0004-6361/201424231, arXiv:1407.6157 [astro-ph.SR] * Koester et al. [2020] Koester D., Kepler S. O., Irwin A. W. (2020): New white dwarf envelope models and diffusion. Application to DQ white dwarfs. A&A 635:A103. 10.1051/0004-6361/202037530, arXiv:2002.10170 [astro-ph.SR] * Kollmeier et al. [2017] Kollmeier J. A., Zasowski G., Rix H.-W., et al. (2017): SDSS-V: Pioneering Panoptic Spectroscopy. arXiv e-prints arXiv:1711.03234. 10.48550/arXiv.1711.03234, arXiv:1711.03234 [astro-ph.GA] * Krzesinski et al. [2009] Krzesinski J., Kleinman S. J., Nitta A., et al. 
(2009): A hot white dwarf luminosity function from the Sloan Digital Sky Survey. A&A 508(1):339–344. 10.1051/0004-6361/200912094 * Kudritzki & Puls [2000] Kudritzki R.-P., Puls J. (2000): Winds from Hot Stars. ARA&A 38:613–666. 10.1146/annurev.astro.38.1.613 * Kupka et al. [2018] Kupka F., Zaussinger F., Montgomery M. H. (2018): Mixing and overshooting in surface convection zones of DA white dwarfs: first results from ANTARES. MNRAS 474(4):4660–4671. 10.1093/mnras/stx3119, arXiv:1712.00641 [astro-ph.SR] * Lawlor & MacDonald [2006] Lawlor T. M., MacDonald J. (2006): The mass of helium in white dwarf stars and the formation and evolution of hydrogen-deficient post-AGB stars. MNRAS 371(1):263–282. 10.1111/j.1365-2966.2006.10641.x, arXiv:astro-ph/0605747 [astro-ph] * Leggett et al. [1998] Leggett S. K., Ruiz M. T., Bergeron P. (1998): The Cool White Dwarf Luminosity Function and the Age of the Galactic Disk. ApJ 497(1):294–302. 10.1086/305463 * Liebert [1983] Liebert J. (1983): G35-26: carbon in a peculiar DA white dwarf. PASP 95:878–882. 10.1086/131264 * Liebert et al. [2005] Liebert J., Bergeron P., Holberg J. B. (2005): The Formation Rate and Mass and Luminosity Functions of DA White Dwarfs from the Palomar Green Survey. ApJS 156(1):47–68. 10.1086/425738, arXiv:astro-ph/0406657 [astro-ph] * Limoges & Bergeron [2010] Limoges M. M., Bergeron P. (2010): A Spectroscopic Analysis of White Dwarfs in the Kiso Survey. ApJ 714(2):1037–1051. 10.1088/0004-637X/714/2/1037, arXiv:1003.4313 [astro-ph.SR] * Limoges et al. [2009] Limoges M. M., Bergeron P., Dufour P. (2009): Spectroscopic Analysis of the White Dwarf KUV 02196+2816: A New Unresolved DA+DB Degenerate Binary. ApJ 696(2):1461–1465. 10.1088/0004-637X/696/2/1461, arXiv:0902.3640 [astro-ph.SR] * Limoges et al. [2015] Limoges M. M., Bergeron P., Lépine S. (2015): Physical Properties of the Current Census of Northern White Dwarfs within 40 pc of the Sun. ApJS 219(2):19. 10.1088/0067-0049/219/2/19, arXiv:1505.02297 [astro-ph.SR] * Löbling et al. [2020] Löbling L., Maney M. A., Rauch T., et al. (2020): First discovery of trans-iron elements in a DAO-type white dwarf (BD-22∘3467). MNRAS 492(1):528–548. 10.1093/mnras/stz3247, arXiv:1911.09573 [astro-ph.SR] * López-Sanjuan et al. [2022] López-Sanjuan C., Tremblay P. E., Ederoclite A., et al. (2022): J-PLUS: Spectral evolution of white dwarfs by PDF analysis. A&A 658:A79. 10.1051/0004-6361/202141746, arXiv:2110.14421 [astro-ph.SR] * MacDonald & Vennes [1991] MacDonald J., Vennes S. (1991): How Much Hydrogen Is There in a White Dwarf? ApJ 371:719. 10.1086/169937 * MacDonald et al. [1998] MacDonald J., Hernanz M., Jose J. (1998): Evolutionary calculations of carbon dredge-up in helium envelope white dwarfs. MNRAS 296(3):523–530. 10.1046/j.1365-8711.1998.01392.x, arXiv:astro-ph/9803121 [astro-ph] * Macfarlane et al. [2017] Macfarlane S. A., Woudt P. A., Dufour P., et al. (2017): The OmegaWhite Survey for short-period variable stars - IV. Discovery of the warm DQ white dwarf OW J175358.85-310728.9. MNRAS 470(1):732–741. 10.1093/mnras/stx741, arXiv:1703.08122 [astro-ph.SR] * Manseau et al. [2016] Manseau P. M., Bergeron P., Green E. M. (2016): A Spectroscopic Search for Chemically Stratified White Dwarfs in the Sloan Digital Sky Survey. ApJ 833(2):127. 10.3847/1538-4357/833/2/127 * Manser et al. [2024] Manser C. J., Gänsicke B. T., Izquierdo P., et al. (2024): The frequency of metal-enrichment of cool helium-atmosphere white dwarfs using the DESI Early Data Release. 
MNRAS 10.1093/mnrasl/slae026, arXiv:2402.18644 [astro-ph.EP] * Martin et al. [2005] Martin D. C., Fanson J., Schiminovich D., et al. (2005): The Galaxy Evolution Explorer: A Space Ultraviolet Survey Mission. ApJ 619(1):L1–L6. 10.1086/426387, arXiv:astro-ph/0411302 [astro-ph] * McCleery et al. [2020] McCleery J., Tremblay P.-E., Gentile Fusillo N. P., et al. (2020): Gaia white dwarfs within 40 pc II: the volume-limited Northern hemisphere sample. MNRAS 499(2):1890–1908. 10.1093/mnras/staa2030, arXiv:2006.00874 [astro-ph.SR] * Melis et al. [2011] Melis C., Farihi J., Dufour P., et al. (2011): Accretion of a Terrestrial-like Minor Planet by a White Dwarf. ApJ 732(2):90. 10.1088/0004-637X/732/2/90, arXiv:1102.0311 [astro-ph.SR] * Michaud et al. [2015] Michaud G., Alecian G., Richer J. (2015): Atomic Diffusion in Stars. Cham: Springer International * Miller Bertolami & Althaus [2006] Miller Bertolami M. M., Althaus L. G. (2006): Full evolutionary models for PG 1159 stars. Implications for the helium-rich O(He) stars. A&A 454(3):845–854. 10.1051/0004-6361:20054723, arXiv:astro-ph/0603846 [astro-ph] * Miller Bertolami et al. [2006] Miller Bertolami M. M., Althaus L. G., Serenelli A. M., et al. (2006): New evolutionary calculations for the born again scenario. A&A 449(1):313–326. 10.1051/0004-6361:20053804, arXiv:astro-ph/0511406 [astro-ph] * Miller Bertolami et al. [2017] Miller Bertolami M. M., Althaus L. G., Córsico A. H. (2017): On the Formation of DA White Dwarfs with low Hydrogen Contents: Preliminary Results. In: Tremblay P. E., Gänsicke B., Marsh T. (eds.) ASP Conf. Ser. 509: 20th European Workshop on White Dwarfs. San Francisco: Astronomical Society of the Pacific, p. 435, 1609.08683 * Moss et al. [2024] Moss A., Bergeron P., Kilic M., et al. (2024): Discovery of a magnetic double-faced DBA white dwarf. MNRAS 527(4):10111–10122. 10.1093/mnras/stad3825 * Napiwotzki [1992] Napiwotzki R. (1992): Analysis of central stars of old planetary nebulae: Problems with the Balmer lines. In: Heber U., Jeffery C. S. (eds.) The Atmospheres of Early-Type Stars, vol. 401. Berlin: Springer, p. 310, 10.1007/3-540-55256-1_328 * Napiwotzki [1999] Napiwotzki R. (1999): Spectroscopic investigation of old planetaries. IV. Model atmosphere analysis. A&A 350:101–119. arXiv:astro-ph/9908181 [astro-ph] * Napiwotzki & Rauch [1994] Napiwotzki R., Rauch T. (1994): The Balmer line problem of hot stars and the impact of ion-dynamical effects on the Stark broadening of HI and HeII lines. A&A 285:603–608 * O’Brien et al. [2023] O’Brien M. W., Tremblay P. E., Gentile Fusillo N. P., et al. (2023): Gaia white dwarfs within 40 pc - III. Spectroscopic observations of new candidates in the Southern hemisphere. MNRAS 518(2):3055–3073. 10.1093/mnras/stac3303, arXiv:2210.01608 [astro-ph.SR] * O’Brien et al. [2024] O’Brien M. W., Tremblay P. E., Klein B. L., et al. (2024): The 40 pc sample of white dwarfs from Gaia. MNRAS 527(3):8687–8705. 10.1093/mnras/stad3773, arXiv:2312.02735 [astro-ph.SR] * Oswalt et al. [1996] Oswalt T. D., Smith J. A., Wood M. A., et al. (1996): A lower limit of 9.5 Gyr on the age of the Galactic disk from the oldest white dwarf stars. Nature 382(6593):692–694. 10.1038/382692a0 * Ourique et al. [2019] Ourique G., Romero A. D., Kepler S. O., et al. (2019): A study of cool white dwarfs in the Sloan Digital Sky Survey Data Release 12. MNRAS 482(1):649–657. 10.1093/mnras/sty2751, arXiv:1810.03554 [astro-ph.SR] * Ourique et al. [2020] Ourique G., Kepler S. O., Romero A. D., et al. 
(2020): Evidence of spectral evolution on the white dwarf sample from the Gaia mission. MNRAS 492(4):5003–5010. 10.1093/mnras/staa120, arXiv:2001.04378 [astro-ph.SR] * Paquette et al. [1986] Paquette C., Pelletier C., Fontaine G., et al. (1986): Diffusion in White Dwarfs: New Results and Comparative Study. ApJS 61:197. 10.1086/191112 * Paxton et al. [2011] Paxton B., Bildsten L., Dotter A., et al. (2011): Modules for Experiments in Stellar Astrophysics (MESA). ApJS 192(1):3. 10.1088/0067-0049/192/1/3, arXiv:1009.1622 [astro-ph.SR] * Pelletier et al. [1986] Pelletier C., Fontaine G., Wesemael F., et al. (1986): Carbon Pollution in Helium-rich White Dwarf Atmospheres: Time-dependent Calculations of the Dredge-up Process. ApJ 307:242. 10.1086/164410 * Pereira et al. [2005] Pereira C., Bergeron P., Wesemael F. (2005): Discovery of Spectroscopic Variations in the DAB White Dwarf GD 323. ApJ 623(2):1076–1082. 10.1086/429219, arXiv:astro-ph/0501620 [astro-ph] * Petitclerc et al. [2005] Petitclerc N., Wesemael F., Kruk J. W., et al. (2005): FUSE Observations of DB White Dwarfs. ApJ 624(1):317–330. 10.1086/428750 * Preval et al. [2013] Preval S. P., Barstow M. A., Holberg J. B., et al. (2013): A comprehensive near- and far-ultraviolet spectroscopic study of the hot DA white dwarf G191-B2B. MNRAS 436(1):659–674. 10.1093/mnras/stt1604, arXiv:1308.4825 [astro-ph.SR] * Preval et al. [2019] Preval S. P., Barstow M. A., Bainbridge M., et al. (2019): A far-UV survey of three hot, metal-polluted white dwarf stars: WD0455-282, WD0621-376, and WD2211-495. MNRAS 487(3):3470–3487. 10.1093/mnras/stz1506, arXiv:1905.12350 [astro-ph.SR] * Provencal et al. [2000] Provencal J. L., Shipman H. L., Thejll P., et al. (2000): Carbon and Hydrogen in Hot DB White Dwarfs. ApJ 542(2):1041–1056. 10.1086/317030 * Quirion et al. [2012] Quirion P. O., Fontaine G., Brassard P. (2012): Wind Competing Against Settling: A Coherent Model of the GW Virginis Instability Domain. ApJ 755(2):128. 10.1088/0004-637X/755/2/128 * Raddi et al. [2015] Raddi R., Gänsicke B. T., Koester D., et al. (2015): Likely detection of water-rich asteroid debris in a metal-polluted white dwarf. MNRAS 450(2):2083–2093. 10.1093/mnras/stv701, arXiv:1503.07864 [astro-ph.SR] * Rauch et al. [1998] Rauch T., Dreizler S., Wolff B. (1998): Spectral analysis of O(He)-type post-AGB stars. A&A 338:651–660 * Rauch et al. [2013] Rauch T., Werner K., Bohlin R., et al. (2013): The virtual observatory service TheoSSA: Establishing a database of synthetic stellar flux standards. I. NLTE spectral analysis of the DA-type white dwarf G191-B2B. A&A 560:A106. 10.1051/0004-6361/201322336, arXiv:1308.6450 [astro-ph.SR] * Rauch et al. [2014a] Rauch T., Werner K., Quinet P., et al. (2014a): Stellar laboratories. II. New Zn iv and Zn v oscillator strengths and their validation in the hot white dwarfs G191-B2B and RE 0503-289. A&A 564:A41. 10.1051/0004-6361/201423491, arXiv:1403.2183 [astro-ph.SR] * Rauch et al. [2014b] Rauch T., Werner K., Quinet P., et al. (2014b): Stellar laboratories. III. New Ba v, Ba vi, and Ba vii oscillator strengths and the barium abundance in the hot white dwarfs G191-B2B and RE 0503-289. A&A 566:A10. 10.1051/0004-6361/201423878, arXiv:1404.6094 [astro-ph.SR] * Rauch et al. [2015a] Rauch T., Hoyer D., Quinet P., et al. (2015a): Stellar laboratories. V. The Xe vi ultraviolet spectrum and the xenon abundance in the hot DO-type white dwarf RE 0503-289. A&A 577:A88. 10.1051/0004-6361/201526078, arXiv:1504.01991 [astro-ph.SR] * Rauch et al. 
[2015b] Rauch T., Werner K., Quinet P., et al. (2015b): Stellar laboratories. IV. New Ga iv, Ga v, and Ga vi oscillator strengths and the gallium abundance in the hot white dwarfs G191-B2B and RE 0503-289. A&A 577:A6. 10.1051/0004-6361/201425326, arXiv:1501.07751 [astro-ph.SR] * Rauch et al. [2016a] Rauch T., Quinet P., Hoyer D., et al. (2016a): Stellar laboratories. VI. New Mo iv-vii oscillator strengths and the molybdenum abundance in the hot white dwarfs G191-B2B and RE 0503-289. A&A 587:A39. 10.1051/0004-6361/201527324, arXiv:1512.07525 [astro-ph.SR] * Rauch et al. [2016b] Rauch T., Quinet P., Hoyer D., et al. (2016b): Stellar laboratories. VII. New Kr iv - vii oscillator strengths and an improved spectral analysis of the hot, hydrogen-deficient DO-type white dwarf RE 0503-289. A&A 590:A128. 10.1051/0004-6361/201628131, arXiv:1603.00701 [astro-ph.SR] * Rauch et al. [2017a] Rauch T., Gamrath S., Quinet P., et al. (2017a): Stellar laboratories . VIII. New Zr iv-vii, Xe iv-v, and Xe vii oscillator strengths and the Al, Zr, and Xe abundances in the hot white dwarfs G191-B2B and RE 0503-289. A&A 599:A142. 10.1051/0004-6361/201629794, arXiv:1611.07364 [physics.atom-ph] * Rauch et al. [2017b] Rauch T., Quinet P., Knörzer M., et al. (2017b): Stellar laboratories . IX. New Se v, Sr iv-vii, Te vi, and I vi oscillator strengths and the Se, Sr, Te, and I abundances in the hot white dwarfs G191-B2B and RE 0503-289. A&A 606:A105. 10.1051/0004-6361/201730383, arXiv:1706.09215 [astro-ph.SR] * Rauch et al. [2020] Rauch T., Gamrath S., Quinet P., et al. (2020): Stellar laboratories. X. New Cu IV-VII oscillator strengths and the first detection of copper and indium in hot white dwarfs. A&A 637:A4. 10.1051/0004-6361/201936620, arXiv:2004.01633 [astro-ph.SR] * Reindl et al. [2014a] Reindl N., Rauch T., Werner K., et al. (2014a): Analysis of cool DO-type white dwarfs from the Sloan Digital Sky Survey data release 10. A&A 572:A117. 10.1051/0004-6361/201424861, arXiv:1410.7666 [astro-ph.SR] * Reindl et al. [2014b] Reindl N., Rauch T., Werner K., et al. (2014b): On helium-dominated stellar evolution: the mysterious role of the O(He)-type stars. A&A 566:A116. 10.1051/0004-6361/201423498, arXiv:1405.1589 [astro-ph.SR] * Reindl et al. [2023] Reindl N., Islami R., Werner K., et al. (2023): The bright blue side of the night sky: Spectroscopic survey of bright and hot (pre-) white dwarfs. A&A 677:A29. 10.1051/0004-6361/202346865, arXiv:2307.03721 [astro-ph.SR] * Renedo et al. [2010] Renedo I., Althaus L. G., Miller Bertolami M. M., et al. (2010): New Cooling Sequences for Old White Dwarfs. ApJ 717(1):183–195. 10.1088/0004-637X/717/1/183, arXiv:1005.2170 [astro-ph.SR] * Rogers et al. [2024] Rogers L. K., Bonsor A., Xu S., et al. (2024): Seven white dwarfs with circumstellar gas discs I: white dwarf parameters and accreted planetary abundances. MNRAS 527(3):6038–6054. 10.1093/mnras/stad3557, arXiv:2311.14048 [astro-ph.EP] * Rolland et al. [2018] Rolland B., Bergeron P., Fontaine G. (2018): On the Spectral Evolution of Helium-atmosphere White Dwarfs Showing Traces of Hydrogen. ApJ 857(1):56. 10.3847/1538-4357/aab713, arXiv:1803.05965 [astro-ph.SR] * Rolland et al. [2020] Rolland B., Bergeron P., Fontaine G. (2020): A Convective Dredge-up Model as the Origin of Hydrogen in DBA White Dwarfs. ApJ 889(2):87. 10.3847/1538-4357/ab6602, arXiv:2001.01085 [astro-ph.SR] * Romero et al. [2012] Romero A. D., Córsico A. H., Althaus L. G., et al. 
(2012): Toward ensemble asteroseismology of ZZ Ceti stars with fully evolutionary models. MNRAS 420(2):1462–1480. 10.1111/j.1365-2966.2011.20134.x, arXiv:1109.6682 [astro-ph.SR] * Salaris & Cassisi [2017] Salaris M., Cassisi S. (2017): Chemical element transport in stellar evolution models. RSOS 4(8):170192. 10.1098/rsos.170192, arXiv:1707.07454 [astro-ph.SR] * Salaris et al. [2022] Salaris M., Cassisi S., Pietrinferni A., et al. (2022): The updated BASTI stellar evolution models and isochrones - III. White dwarfs. MNRAS 509(4):5197–5208. 10.1093/mnras/stab3359, arXiv:2111.09285 [astro-ph.SR] * Saumon et al. [2022] Saumon D., Blouin S., Tremblay P.-E. (2022): Current challenges in the physics of white dwarf stars. Phys. Rep. 988:1–63. 10.1016/j.physrep.2022.09.001, arXiv:2209.02846 [astro-ph.SR] * Schreiber et al. [2019] Schreiber M. R., Gänsicke B. T., Toloza O., et al. (2019): Cold Giant Planets Evaporated by Hot White Dwarfs. ApJ 887(1):L4. 10.3847/2041-8213/ab42e2, arXiv:1912.02345 [astro-ph.SR] * Schuh et al. [2002] Schuh S. L., Dreizler S., Wolff B. (2002): Equilibrium abundances in hot DA white dwarfs as derived from self-consistent diffusion models. I. Analysis of spectroscopic EUVE data. A&A 382:164–173. 10.1051/0004-6361:20011588, arXiv:astro-ph/0111245 [astro-ph] * Scóccola et al. [2006] Scóccola C. G., Althaus L. G., Serenelli A. M., et al. (2006): DQ white-dwarf stars with low C abundance: possible progenitors. A&A 451(1):147–155. 10.1051/0004-6361:20053769, arXiv:astro-ph/0602196 [astro-ph] * Sion [1984] Sion E. M. (1984): Implications of the absolute magnitude distribution functions of DA and non-DA white dwarfs. ApJ 282:612–614. 10.1086/162240 * Sion [1999] Sion E. M. (1999): White Dwarfs in Cataclysmic Variables. PASP 111(759):532–555. 10.1086/316361 * Sion et al. [1983] Sion E. M., Greenstein J. L., Landstreet J. D., et al. (1983): A proposed new white dwarf spectral classification system. ApJ 269:253–257. 10.1086/161036 * Subasavage et al. [2017] Subasavage J. P., Jao W.-C., Henry T. J., et al. (2017): The Solar Neighborhood. XXXIX. Parallax Results from the CTIOPI and NOFS Programs: 50 New Members of the 25 parsec White Dwarf Sample. AJ 154(1):32. 10.3847/1538-3881/aa76e0, arXiv:1706.00709 [astro-ph.SR] * Swan et al. [2019] Swan A., Farihi J., Koester D., et al. (2019): Interpretation and diversity of exoplanetary material orbiting white dwarfs. MNRAS 490(1):202–218. 10.1093/mnras/stz2337, arXiv:1908.08047 [astro-ph.EP] * Swan et al. [2023] Swan A., Farihi J., Melis C., et al. (2023): Planetesimals at DZ stars - I. Chondritic compositions and a massive accretion event. MNRAS 526(3):3815–3831. 10.1093/mnras/stad2867, arXiv:2309.06467 [astro-ph.EP] * Tassoul et al. [1990] Tassoul M., Fontaine G., Winget D. E. (1990): Evolutionary Models for Pulsation Studies of White Dwarfs. ApJS 72:335. 10.1086/191420 * Torres et al. [2023] Torres S., Cruz P., Murillo-Ojeda R., et al. (2023): White dwarf spectral type-temperature distribution from Gaia DR3 and the Virtual Observatory. A&A 677:A159. 10.1051/0004-6361/202346977, arXiv:2307.13629 [astro-ph.SR] * Tremblay & Bergeron [2008] Tremblay P. E., Bergeron P. (2008): The Ratio of Helium- to Hydrogen-Atmosphere White Dwarfs: Direct Evidence for Convective Mixing. ApJ 672(2):1144–1152. 10.1086/524134, arXiv:0710.1073 [astro-ph] * Tremblay et al. [2011] Tremblay P. E., Bergeron P., Gianninas A. (2011): An Improved Spectroscopic Analysis of DA White Dwarfs from the Sloan Digital Sky Survey Data Release 4. ApJ 730(2):128. 
10.1088/0004-637X/730/2/128, arXiv:1102.0056 [astro-ph.SR] * Tremblay et al. [2013] Tremblay P. E., Ludwig H. G., Steffen M., et al. (2013): Pure-hydrogen 3D model atmospheres of cool white dwarfs. A&A 552:A13. 10.1051/0004-6361/201220813, arXiv:1302.2013 [astro-ph.SR] * Tremblay et al. [2014] Tremblay P. E., Kalirai J. S., Soderblom D. R., et al. (2014): White Dwarf Cosmochronology in the Solar Neighborhood. ApJ 791(2):92. 10.1088/0004-637X/791/2/92, arXiv:1406.5173 [astro-ph.SR] * Tremblay et al. [2015] Tremblay P. E., Ludwig H. G., Freytag B., et al. (2015): Calibration of the Mixing-length Theory for Convective White Dwarf Envelopes. ApJ 799(2):142. 10.1088/0004-637X/799/2/142, arXiv:1412.1789 [astro-ph.SR] * Tremblay et al. [2019] Tremblay P. E., Cukanovaite E., Gentile Fusillo N. P., et al. (2019): Fundamental parameter accuracy of DA and DB white dwarfs in Gaia Data Release 2. MNRAS 482(4):5222–5232. 10.1093/mnras/sty3067, arXiv:1811.03084 [astro-ph.SR] * Unglaub [2008] Unglaub K. (2008): Mass-loss and diffusion in subdwarf B stars and hot white dwarfs: do weak winds exist? A&A 486(3):923–940. 10.1051/0004-6361:20078019, arXiv:0808.1072 [astro-ph] * Unglaub & Bues [1998] Unglaub K., Bues I. (1998): The effect of diffusion and mass loss on the helium abundance in hot white dwarfs and subdwarfs. A&A 338:75–84 * Unglaub & Bues [2000] Unglaub K., Bues I. (2000): The chemical evolution of hot white dwarfs in the presence of diffusion and mass loss. A&A 359:1042–1058 * Vennes & Fontaine [1992] Vennes S., Fontaine G. (1992): An Interpretation of the Spectral Properties of Hot Hydrogen-rich White Dwarfs with Stratified H/He Model Atmospheres. ApJ 401:288. 10.1086/172060 * Vennes et al. [1988] Vennes S., Pelletier C., Fontaine G., et al. (1988): The Presence of Helium in Hot DA White Dwarfs: The Role of Radiative Levitation and the Case for Stratified Atmospheres. ApJ 331:876. 10.1086/166606 * Vennes et al. [2005] Vennes S., Chayer P., Dupuis J. (2005): Discovery of Photospheric Germanium in Hot DA White Dwarfs. ApJ 622(2):L121–L124. 10.1086/429667 * Vennes et al. [2006] Vennes S., Chayer P., Dupuis J., et al. (2006): Iron in Hot DA White Dwarfs. ApJ 652(2):1554–1562. 10.1086/508509, arXiv:astro-ph/0608416 [astro-ph] * Vennes et al. [2024] Vennes S., Kawka A., Klein B. L., et al. (2024): A cool, magnetic white dwarf accreting planetary debris. MNRAS 527(2):3122–3138. 10.1093/mnras/stad3370, arXiv:2311.07937 [astro-ph.SR] * Veras [2021] Veras D. (2021): Planetary Systems Around White Dwarfs. In: Oxford Research Encyclopedia of Planetary Science. Oxford: Oxford University Press, p. 1, 10.1093/acrefore/9780190647926.013.238 * Veras et al. [2014] Veras D., Shannon A., Gänsicke B. T. (2014): Hydrogen delivery onto white dwarfs from remnant exo-Oort cloud comets. MNRAS 445(4):4175–4185. 10.1093/mnras/stu2026, arXiv:1409.7691 [astro-ph.SR] * Vincent et al. [2024] Vincent O., Barstow M. A., Jordan S., et al. (2024): Classification and parameterization of a large Gaia sample of white dwarfs using XP spectra. A&A 682:A5. 10.1051/0004-6361/202347694, arXiv:2308.05572 [astro-ph.SR] * Voss et al. [2007] Voss B., Koester D., Napiwotzki R., et al. (2007): High-resolution UVES/VLT spectra of white dwarfs observed for the ESO SN Ia progenitor survey. II. DB and DBA stars. A&A 470(3):1079–1088. 10.1051/0004-6361:20077285 * Wachlin et al. [2017] Wachlin F. C., Vauclair G., Vauclair S., et al. 
(2017): Importance of fingering convection for accreting white dwarfs in the framework of full evolutionary calculations: the case of the hydrogen-rich white dwarfs GD 133 and G 29-38. A&A 601:A13. 10.1051/0004-6361/201630094, arXiv:1612.09320 [astro-ph.SR] * Wachlin et al. [2022] Wachlin F. C., Vauclair G., Vauclair S., et al. (2022): New simulations of accreting DA white dwarfs: Inferring accretion rates from the surface contamination. A&A 660:A30. 10.1051/0004-6361/202142289, arXiv:2109.11370 [astro-ph.SR] * Weidemann & Koester [1995] Weidemann V., Koester D. (1995): Surface carbon abundances and compositional stratification of cool helium-rich white dwarfs. A&A 297:216–222 * Werner [1996a] Werner K. (1996a): On the Balmer Line Problem. ApJ 457:L39. 10.1086/309889 * Werner [1996b] Werner K. (1996b): Search for trace amounts of hydrogen in hot DO white dwarfs. A&A 309:861–866 * Werner & Dreizler [1994] Werner K., Dreizler S. (1994): On the nickel abundance in hot hydrogen-rich white dwarfs. A&A 286:L31–L34 * Werner & Herwig [2006] Werner K., Herwig F. (2006): The Elemental Abundances in Bare Planetary Nebula Central Stars and the Shell Burning in AGB Stars. PASP 118(840):183–204. 10.1086/500443, arXiv:astro-ph/0512320 [astro-ph] * Werner & Rauch [2014] Werner K., Rauch T. (2014): Weak metal lines in optical high-resolution Very Large Telescope and Keck spectra of “cool” PG 1159 stars. A&A 569:A99. 10.1051/0004-6361/201424051 * Werner et al. [1991] Werner K., Heber U., Hunger K. (1991): Non-LTE analysis of four PG1159 stars. A&A 244:437 * Werner et al. [2007] Werner K., Rauch T., Kruk J. W. (2007): Discovery of photospheric argon in very hot central stars of planetary nebulae and white dwarfs. A&A 466(1):317–322. 10.1051/0004-6361:20077101, arXiv:astro-ph/0702387 [astro-ph] * Werner et al. [2012] Werner K., Rauch T., Ringat E., et al. (2012): First Detection of Krypton and Xenon in a White Dwarf. ApJ 753(1):L7. 10.1088/2041-8205/753/1/L7 * Werner et al. [2014] Werner K., Rauch T., Kepler S. O. (2014): New hydrogen-deficient (pre-) white dwarfs in the Sloan Digital Sky Survey Data Release 10. A&A 564:A53. 10.1051/0004-6361/201423441 * Werner et al. [2017] Werner K., Rauch T., Kruk J. W. (2017): Far-UV spectroscopy of two extremely hot, helium-rich white dwarfs. A&A 601:A8. 10.1051/0004-6361/201630266 * Werner et al. [2018a] Werner K., Rauch T., Knörzer M., et al. (2018a): First detection of bromine and antimony in hot stars. A&A 614:A96. 10.1051/0004-6361/201832723, arXiv:1803.04809 [astro-ph.SR] * Werner et al. [2018b] Werner K., Rauch T., Kruk J. W. (2018b): Metal abundances in hot white dwarfs with signatures of a superionized wind. A&A 609:A107. 10.1051/0004-6361/201731740, arXiv:1711.04138 [astro-ph.SR] * Werner et al. [2019] Werner K., Rauch T., Reindl N. (2019): Spectral analysis of the extremely hot DA white dwarf PG 0948+534. MNRAS 483(4):5291–5300. 10.1093/mnras/sty3408, arXiv:1812.07486 [astro-ph.SR] * Wesemael et al. [1993] Wesemael F., Greenstein J. L., Liebert J., et al. (1993): An Atlas of Optical Spectra of White-Dwarf Stars. PASP 105:761. 10.1086/133228 * Wesemael et al. [1994] Wesemael F., Bergeron P., Lamontagne R. L., et al. (1994): Hot Degenerates in the Montreal-Cambridge-Tololo Survey. II. Two New Hybrid White Dwarfs, MCT 0128-3846 and MCT 0453-2933, and the Nature of the DAB Stars. ApJ 429:369. 10.1086/174326 * Williams et al. [2013] Williams K. A., Winget D. E., Montgomery M. H., et al. 
(2013): Photometric Variability in a Warm, Strongly Magnetic DQ White Dwarf, SDSS J103655.39+652252.2. ApJ 769(2):123. 10.1088/0004-637X/769/2/123, arXiv:1304.3165 [astro-ph.SR] * Williams et al. [2016] Williams K. A., Montgomery M. H., Winget D. E., et al. (2016): Variability in Hot Carbon-dominated Atmosphere (Hot DQ) White Dwarfs: Rapid Rotation? ApJ 817(1):27. 10.3847/0004-637X/817/1/27, arXiv:1511.08834 [astro-ph.SR] * Wilson et al. [2015] Wilson D. J., Gänsicke B. T., Koester D., et al. (2015): The composition of a disrupted extrasolar planetesimal at SDSS J0845+2257 (Ton 345). MNRAS 451(3):3237–3248. 10.1093/mnras/stv1201, arXiv:1505.07466 [astro-ph.EP] * Winget et al. [1987] Winget D. E., Hansen C. J., Liebert J., et al. (1987): An Independent Method for Determining the Age of the Universe. ApJ 315:L77. 10.1086/184864 * Wolff et al. [2000] Wolff B., Jordan S., Koester D., et al. (2000): The nature of the DAB white dwarf HS 0209+0832. A&A 361:629–640 * Xu et al. [2013] Xu S., Jura M., Klein B., et al. (2013): Two Beyond-primitive Extrasolar Planetesimals. ApJ 766(2):132. 10.1088/0004-637X/766/2/132, arXiv:1302.4799 [astro-ph.EP] * Xu et al. [2014] Xu S., Jura M., Koester D., et al. (2014): Elemental Compositions of Two Extrasolar Rocky Planetesimals. ApJ 783(2):79. 10.1088/0004-637X/783/2/79, arXiv:1401.4252 [astro-ph.EP] * Xu et al. [2017] Xu S., Zuckerman B., Dufour P., et al. (2017): The Chemical Composition of an Extrasolar Kuiper-Belt-Object. ApJ 836(1):L7. 10.3847/2041-8213/836/1/L7, arXiv:1702.02868 [astro-ph.EP] * Xu et al. [2019] Xu S., Dufour P., Klein B., et al. (2019): Compositions of Planetary Debris around Dusty White Dwarfs. AJ 158(6):242. 10.3847/1538-3881/ab4cee, arXiv:1910.07197 [astro-ph.SR] * York et al. [2000] York D. G., Adelman J., Anderson J.John E., et al. (2000): The Sloan Digital Sky Survey: Technical Summary. AJ 120(3):1579–1587. 10.1086/301513, arXiv:astro-ph/0006396 [astro-ph] * Zuckerman & Reid [1998] Zuckerman B., Reid I. N. (1998): Metals in Cool DA White Dwarfs. ApJ 505(2):L143–L146. 10.1086/311608 * Zuckerman et al. [2003] Zuckerman B., Koester D., Reid I. N., et al. (2003): Metal Lines in DA White Dwarfs. ApJ 596(1):477–495. 10.1086/377492 * Zuckerman et al. [2007] Zuckerman B., Koester D., Melis C., et al. (2007): The Chemical Composition of an Extrasolar Minor Planet. ApJ 671(1):872–877. 10.1086/522223, arXiv:0708.0198 [astro-ph] * Zuckerman et al. [2010] Zuckerman B., Melis C., Klein B., et al. (2010): Ancient Planetary Systems are Orbiting a Large Fraction of White Dwarf Stars. ApJ 722(1):725–736. 10.1088/0004-637X/722/1/725, arXiv:1007.2252 [astro-ph.SR] * Zuckerman et al. [2011] Zuckerman B., Koester D., Dufour P., et al. (2011): An Aluminum/Calcium-rich, Iron-poor, White Dwarf Star: Evidence for an Extrasolar Planetary Lithosphere? ApJ 739(2):101. 10.1088/0004-637X/739/2/101, arXiv:1107.2167 [astro-ph.SR]
# Think about it! Improving defeasible reasoning by first modeling the question scenario

Aman Madaan, Niket Tandon†, Dheeraj Rajagopal, Peter Clark†, Yiming Yang, Eduard Hovy

Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA
† Allen Institute for Artificial Intelligence, Seattle, WA, USA
<EMAIL_ADDRESS> {nikett<EMAIL_ADDRESS>

###### Abstract

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence. Existing cognitive science literature on defeasible reasoning suggests that a person forms a mental model of the problem scenario before answering questions. Our research goal asks whether neural models can similarly benefit from envisioning the question scenario before answering a defeasible query. Our approach is, given a question, to have a model first create a graph of relevant influences, and then leverage that graph as an additional input when answering the question. Our system, curious, achieves a new state-of-the-art on three different defeasible reasoning datasets. This result is significant as it illustrates that performance can be improved by guiding a system to “think about” a question and explicitly model the scenario, rather than answering reflexively.¹

¹Code and data located at https://github.com/madaan/thinkaboutit

## 1 Introduction

Defeasible inference is a mode of reasoning where additional information can modify conclusions Koons (2017). Here we consider the specific formulation and challenge in Rudinger et al. (2020): given that some premise $\mathbf{P}$ plausibly implies a hypothesis $\mathbf{H}$, does new information that the situation is $\mathbf{S}$ weaken or strengthen the conclusion $\mathbf{H}$? For example, consider the premise “The drinking glass fell” with a possible implication “The glass broke”. New information that “The glass fell on a pillow” here weakens the implication.

Figure 1: curious improves defeasible reasoning by modeling the question scenario with an inference graph $G$ adapted from cognitive science literature. The graph is encoded judiciously using our graph encoder $h(.)$, resulting in end task performance improvement.

We borrow ideas from the cognitive science literature, which supports defeasible reasoning for humans with an _inference graph_ (Pollock, 2009, 1987). The inference graph formulation of Madaan et al. (2021), which we use in this paper, draws connections between $\mathbf{P}$, $\mathbf{H}$, and $\mathbf{S}$ through mediating events. This can be seen as forming a _mental model_ of the question scenario before answering the question Johnson-Laird (1983). This paper asks the natural question: can modeling the question scenario with inference graphs help machines in defeasible reasoning?

Our approach is as follows. First, given a question, generate an inference graph describing important influences between question elements. Then, use that graph as an additional input when answering the defeasible reasoning query. Our proposed system, curious, comprises a graph generation module and a graph encoding module that uses the generated graph for the query (Figure 2). To generate inference graphs, we build upon past work that uses a sequence-to-sequence approach Madaan et al. (2021). However, our analysis revealed that the generated graphs can often be erroneous, so curious also includes an error correction module to produce higher-quality inference graphs. This was important because we found that better graphs are more helpful in the downstream QA task.
Figure 2: An overview of curious.

The generated inference graph is then used for the QA task on three existing defeasible inference datasets from diverse domains, viz., $\delta$-snli (natural language inference) Bowman et al. (2015), $\delta$-social (reasoning about social norms) Forbes et al. (2020), and $\delta$-atomic (commonsense reasoning) Sap et al. (2019). We show that the way the graph is encoded for input is important. If we simply augment the question with the generated graphs, there are some gains on all datasets. However, the accuracy improves substantially across all datasets with a more judicious encoding of the graph-augmented question that accounts for interactions between the graph nodes. To achieve this, we use the mixture-of-experts approach Jacobs et al. (1991) and include mixture-of-experts layers during encoding, enabling the model to selectively attend to specific nodes while capturing their interactions.

In summary, our contribution is in drawing on the idea of an inference graph from cognitive science to show benefits in a defeasible inference QA task. Using an error correction module in the graph generation process, and a judicious encoding of the graph-augmented question, curious achieves a new state-of-the-art over three defeasible datasets. This result is also significant because our work illustrates that guiding a system to “think about” a question before answering can improve performance.

Figure 3: An overview of our method to perform graph-augmented defeasible reasoning using a hierarchical mixture of experts. First, moe-v selectively pools the node representations to generate a representation $\mathbf{h}_{\mathbf{G}}$ of the inference graph. Then, moe-gx pools the query representation $\mathbf{h}_{\mathbf{x}}$ and the graph representation generated by moe-v to pass to the upstream classifier.

## 2 Task

We use the defeasible inference task and datasets defined in Rudinger et al. (2020): given an input $\mathbf{x}$ = ($\mathbf{P}$, $\mathbf{H}$, $\mathbf{S}$), predict the output $\mathbf{y}\in\\{strengthens,weakens\\}$, where $\mathbf{P}$, $\mathbf{H}$, and $\mathbf{S}$ are sentences describing a premise, hypothesis, and scenario respectively, and $\mathbf{y}$ denotes whether $\mathbf{S}$ strengthens or weakens the plausible conclusion that $\mathbf{H}$ follows from $\mathbf{P}$, as described in Section 1.

## 3 Approach

Inspired by past results Madaan et al. (2021) showing that humans found inference graphs useful for defeasible inference, we investigate whether neural models can benefit from envisioning the question scenario using an inference graph before answering a defeasible inference query.

##### Inference graphs

As inference graphs are central to our work, we give a brief description of their structure next. Inference graphs were introduced in philosophy by Pollock (2009) to aid defeasible reasoning for humans, and in NLP by Tandon et al. (2019) for a counterfactual reasoning task. We interpret the inference graphs as having four kinds of nodes Pollock (2009); Madaan et al. (2021):

* i. Contextualizers (C-, C+): these nodes set the context around a situation and connect to the premise $\mathbf{P}$.
* ii. Situations (S, S-): these nodes are new situations that emerge and might overturn an inference.
* iii. Hypothesis (H-, H+): these nodes describe the outcome/conclusion of the situation.
* iv. Mediators (M-, M+): these nodes bridge the knowledge gap between a situation node and a hypothesis node by explaining their connection explicitly.
These nodes can act either as a _weakener_ or a _strengthener_. Each node in an influence graph is labeled with an event (a sentence or a phrase), and the signs - and + capture the nature of the influence of the event node. Concrete examples are given in Figures 1 and 4, and in Appendix §D.

### 3.1 Overview of curious

Our system, curious, comprises three components: (i) a graph generator $\textsc{gen}_{\text{init}}$, (ii) a graph corrector $\textsc{gen}_{\text{corr}}$, and (iii) a graph encoder (Figure 1). $\textsc{gen}_{\text{init}}$ generates an inference graph from the input $\mathbf{x}$. We borrow the sequence-to-sequence approach of $\textsc{gen}_{\text{init}}$ from Madaan et al. (2021) without any architectural changes. However, we found that the resulting graphs can often be erroneous (which hurts task performance), so curious includes an error correction module $\textsc{gen}_{\text{corr}}$ to generate higher-quality inference graphs, which are then judiciously encoded using the graph encoder. This encoded representation is then passed through a classifier to generate an end task label. The overall architecture is shown in Figure 2.

### 3.2 Graph generator

Figure 4: The graphs generated by $\textsc{gen}_{\text{init}}$. The input graph has repetitions for nodes $\\{C{-},C{+}\\}$ and $\\{S,S{-}\\}$. The corrected graph generated by $\textsc{gen}_{\text{corr}}$ replaces the repetitions with meaningful labels.

As the initial graph generator, we use the method described in Madaan et al. (2021) ($\textsc{gen}_{\text{init}}$) to generate inference graphs for defeasible reasoning.² Their approach involves first training a graph-generation module, and then performing zero-shot inference on a defeasible query to obtain an inference graph. They obtain training data for the graph-generation module from the wiqa dataset Tandon et al. (2019). wiqa is a dataset of 2107 $(\mathbf{T}_{i},\mathbf{G}_{i})$ tuples, where $\mathbf{T}_{i}$ is the passage text that describes a process (e.g., waves hitting a beach), and $\mathbf{G}_{i}$ is the corresponding influence graph. The graph generator $\textsc{gen}_{\text{init}}$ is trained as a seq2seq model by setting $\texttt{input}=\text{[Premise] }\mathbf{T}_{i}\mid\text{[Situation] }\mathbf{S}_{i}\mid\text{[Hypothesis] }\mathbf{H}_{i}$, and $\texttt{output}=\mathbf{G}_{i}$. Note that $\mathbf{S}_{i}$ and $\mathbf{H}_{i}$ are nodes in the influence graph $\mathbf{G}_{i}$, allowing grounded generation. [Premise], [Situation], and [Hypothesis] are special tokens used to demarcate the input.

²We use their publicly available code and data.

### 3.3 Graph corrector

We found that 70% of a random sample of 100 graphs produced by $\textsc{gen}_{\text{init}}$ (undesirably) had repeated nodes (an example of repeated nodes is in Figure 4). Repeated nodes introduce noise because they violate the semantic structure of a graph; e.g., in Figure 4, nodes C+ and C- are repeated, although they are expected to have opposite semantics. Higher graph quality yields better end task performance when using inference graphs (as we will show in §4.3.1).

To repair such problems, we train a graph corrector, $\textsc{gen}_{\text{corr}}$, that takes a graph $\mathbf{G}^{\prime}$ as input and outputs a graph $\mathbf{G}^{*}$ with the repetitions fixed. To train the model, we require ($\mathbf{G}^{\prime}$, $\mathbf{G}^{*}$) examples, which we generate using a data augmentation technique described in Appendix §A.
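Appendix §A is not reproduced in this excerpt. The snippet below is only a hypothetical sketch of how such ($\mathbf{G}^{\prime}$, $\mathbf{G}^{*}$) pairs could be built, by injecting the node-repetition error described above into clean graphs; the dictionary-based graph format, the role pairs, and the `corrupt` function are illustrative assumptions, not the authors' procedure.

```python
# Hypothetical sketch of building (G_prime, G_star) training pairs for gen_corr
# by corrupting a clean influence graph with a repeated node label. This is NOT
# the procedure from Appendix A; the dict-based graph format is an assumption.
import copy
import random

# Node-role pairs whose labels should differ; Figure 4 shows repetitions for
# {C-, C+} and {S, S-} in generated graphs.
PAIRED_ROLES = [("C-", "C+"), ("S", "S-"), ("H-", "H+"), ("M-", "M+")]


def corrupt(gold_graph):
    """Copy one node's label onto its paired node, returning (corrupted, gold)."""
    corrupted = copy.deepcopy(gold_graph)
    src, dst = random.choice(PAIRED_ROLES)
    corrupted[dst] = corrupted[src]  # introduce the repetition error
    return corrupted, gold_graph


# Toy example, with node labels loosely following the drinking-glass scenario.
g_star = {
    "C-": "the floor is hard tile", "C+": "a pillow lies on the floor",
    "S": "the glass fell on a pillow", "S-": "the glass fell on bare tile",
    "M-": "the impact is absorbed", "M+": "the impact is sharp",
    "H-": "the glass does not break", "H+": "the glass breaks",
}
g_prime, target = corrupt(g_star)  # one training pair for the corrector
```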
Because the nodes in the graph are from an open vocabulary, we then train a T5 sequence-to-sequence model Raffel et al. (2020) with input = $\mathbf{G}^{\prime}$ and output = $\mathbf{G}^{*}$. In summary, given a defeasible query $\mathbf{PHS}$, we generate a potentially incorrect initial graph $\mathbf{G}^{\prime}$ using $\textsc{gen}_{\text{init}}$. We then feed $\mathbf{G}^{\prime}$ to $\textsc{gen}_{\text{corr}}$ to obtain an improved graph $\mathbf{G}$. ### 3.4 Graph Encoder For each defeasible query ($\mathbf{P}$, $\mathbf{H}$, $\mathbf{S}$), we add the inference graph $\mathbf{G}$ from curious (the corrected graph from §3.3), to provide additional context for the query, as we now describe. We concatenate the components ($\mathbf{P}$, $\mathbf{H}$, $\mathbf{S}$) of the defeasible query into a single sequence of tokens $\mathbf{x}=(\mathbf{P}\|\mathbf{H}\|\mathbf{S})$, where $\|$ denotes concatenation. Thus, each sample of our graph-augmented binary-classification task takes the form $((\mathbf{x},\mathbf{G}),\mathbf{y})$, where $\mathbf{y}\in\\{\text{{strengthener}, {weakener}}\\}$. Following Madaan et al. (2021), we do not use edge labels and treat all the graphs as undirected graphs. ##### Overview: We first use a language model $\mathcal{L}$ to obtain a dense representation $\mathbf{h}_{\mathbf{x}}$ for the defeasible query $\mathbf{x}$, and a dense representation $\mathbf{h}_{\mathbf{v}}$ for each node $\mathbf{v}\in\mathbf{G}$. The node representations $\mathbf{h}_{\mathbf{v}}$ are then pooled using a hierarchical mixture of experts (MoE) to obtain a graph representation $\mathbf{h}_{\mathbf{G}}$. The query representation $\mathbf{h}_{\mathbf{x}}$ and the graph representation $\mathbf{h}_{\mathbf{G}}$ are combined to solve the defeasible task. We now provide details on obtaining $\mathbf{h}_{\mathbf{x}}$, $\mathbf{h}_{\mathbf{v}}$, $\mathbf{h}_{\mathbf{G}}$. #### 3.4.1 Encoding the query and nodes Let $\mathcal{L}$ be a pre-trained language model (in our case RoBERTa Liu et al. (2019)). We use $\mathbf{h}_{\mathbf{S}}=\mathcal{L}(\mathbf{S})\in\mathbb{R}^{d}$ to denote the dense representation of sequence of tokens $\mathbf{S}$ returned by the language model $\mathcal{L}$. Specifically, we use the pooled representation of the beginning-of-sequence token <s> as the sequence representation. We encode the defeasible query $\mathbf{x}$ and the nodes of the graph using $\mathcal{L}$. Query representation is computed as $\mathbf{h}_{\mathbf{x}}=\mathcal{L}(\mathbf{x})$, and we similarly obtain a matrix of node representations $\mathbf{h}_{\mathbf{V}}$ by encoding each node $\mathbf{v}$ in $\mathbf{G}$ with $\mathcal{L}$ as follows: $\displaystyle\mathbf{h}_{\mathbf{V}}=[\mathbf{h}_{v_{1}};\mathbf{h}_{v_{2}};\ldots;\mathbf{h}_{|\mathbf{V}|}]$ (1) where $\mathbf{h}_{v_{i}}\in\mathbb{R}^{d}$ refers to the dense representation for the $i^{th}$ node of $\mathbf{G}$ derived from $\mathcal{L}$ (i.e., $\mathbf{h}_{v_{i}}=\mathcal{L}(v_{i})$), and $\mathbf{h}_{\mathbf{V}}\in\mathbb{R}^{|\mathbf{V}|\times d}$ to refer to the matrix of node representations. #### 3.4.2 Graph representations using MoE Recently, mixture-of-experts Jacobs et al. (1991); Shazeer et al. (2017); Fedus et al. (2021) has emerged as a promising method of combining multiple feature types. Mixture-of-experts (MoE) is especially useful when the input consists of multiple facets, where each facet has a specific semantic meaning. Previously, Gu et al. (2018); Chen et al. 
(2019) have used the ability of MoE to pool disparate features on low-resource and cross-lingual language tasks. Since each node in the inference graphs used by us plays a specific role in defeasible reasoning (contextualizer, situation node, or mediator), we take inspiration from these works to design a hierarchical MoE model Jordan and Xu (1995) to pool node representations $\mathbf{h}_{\mathbf{V}}$ into a graph representation $\mathbf{h}_{\mathbf{G}}$. An MoE consists of $n$ expert networks $\mathbf{E_{1}},\mathbf{E_{2}},\ldots,\mathbf{E_{n}}$ and a gating network $\mathbf{M}$. Given an input $\mathbf{x}\in\mathbb{R}^{d}$, each expert network $\mathbf{E_{i}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}$ learns a transform over the input. The gating network $\mathbf{M}:\mathbb{R}^{d}\rightarrow\Delta^{d}$ gives the weights $\mathbf{p}=\\{p_{1},p_{2},\ldots,p_{n}\\}$ to combine the expert outputs for input $\mathbf{x}$. Finally, the output $\mathbf{y}$ is returned as a convex combination of the expert outputs: $\displaystyle\mathbf{p}$ $\displaystyle=\mathbf{M}(\mathbf{x})$ $\displaystyle\mathbf{y}$ $\displaystyle=\sum_{i=1}^{n}p_{i}\mathbf{E_{i}(x)}$ (2) The output $\mathbf{y}$ can either be the logits for an end task Shazeer et al. (2017); Fedus et al. (2021) or pooled features that are passed to a downstream learner Chen et al. (2019); Gu et al. (2018). The gating network $\mathbf{M}$ and the expert networks $\mathbf{E_{1}},\mathbf{E_{2}},\ldots,\mathbf{E_{n}}$ are trained end-to-end. During learning, the gradients to $\mathbf{M}$ train it to generate a distribution over the experts that favors the best expert for a given input. Appendix §B presents a further discussion on our MoE formulation and an analysis of the gradients. ##### Hierarchical MoE for defeasible reasoning Different parts of the inference graphs might help answer a query to a different degree. Further, for certain queries, graphs might not be helpful (and could even be distracting), and the model could rely primarily on the input query alone. This motivates a two-level architecture that can: (i) select a subset of the nodes in the graph and ii) selectively reason across the query and the graph to varying degrees. Given these requirements, a hierarchical MoE Jordan and Jacobs (1994) model presents itself as a natural choice to model this task. The first MoE (moe-v) creates a graph representation by taking a convex combination of the node representations. The second MoE (moe-gx) then takes a convex-combination of the graph representation returned by moe-v and query representation and passes it to an MLP for the downstream task. 
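Before specializing it into moe-v and moe-gx (detailed in the bullets below), the generic MoE building block of Equation 2 can be sketched as follows. This PyTorch module is a minimal illustration with assumed class and variable names; it only follows what the text states, namely single-layer experts and a gating network whose softmax weights form a convex combination of the expert outputs.

```python
import torch
import torch.nn as nn

class MoE(nn.Module):
    """Mixture of experts as in Equation 2: y = sum_i p_i * E_i(x), p = M(x)."""
    def __init__(self, in_dim: int, out_dim: int, n_experts: int):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_experts)])
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = torch.softmax(self.gate(x), dim=-1)               # gating weights, shape (B, n)
        outs = torch.stack([E(x) for E in self.experts], 1)   # expert outputs, shape (B, n, out_dim)
        return (p.unsqueeze(-1) * outs).sum(dim=1)            # convex combination, shape (B, out_dim)

# e.g., a two-expert MoE over a 768-d representation (dimensions are illustrative)
layer = MoE(in_dim=768, out_dim=768, n_experts=2)
y = layer(torch.randn(4, 768))
```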
* $\bullet$ moe-v consists of five node-experts and a gating network to selectively pool the node representations $\mathbf{h}_{\mathbf{v}}$ into the graph representation $\mathbf{h}_{\mathbf{G}}$: $\displaystyle\mathbf{p}$ $\displaystyle=\mathbf{M}(\mathbf{h}_{\mathbf{V}})$ $\displaystyle\mathbf{h}_{\mathbf{G}}$ $\displaystyle=\sum_{v\in\mathbf{V}}p_{v}E_{v}(\mathbf{h}_{v})$ (3)
* $\bullet$ moe-gx contains two experts (graph expert $E_{\mathbf{G}}$ and question expert $E_{\mathbf{Q}}$) and a gating network to combine the graph representation $\mathbf{h}_{\mathbf{G}}$ returned by moe-v and the query representation $\mathbf{h}_{\mathbf{x}}$ (written $\mathbf{h}_{\mathbf{Q}}$ below): $\displaystyle\mathbf{p}$ $\displaystyle=\mathbf{M}([\mathbf{h}_{\mathbf{G}};\mathbf{h}_{\mathbf{Q}}])$ $\displaystyle\mathbf{h}_{\mathbf{y}}$ $\displaystyle=E_{\mathbf{G}}(\mathbf{h}_{\mathbf{G}})+E_{\mathbf{Q}}(\mathbf{h}_{\mathbf{Q}})$ (4)

$\mathbf{h}_{\mathbf{y}}$ is then passed to a 1-layer MLP to perform classification. The gates and the experts in our MoE model are single-layer MLPs, with equal input and output dimensions for the experts.

## 4 Experiments

In this section, we empirically investigate if curious can improve defeasible inference by first modeling the question scenario using inference graphs. We also study the reasons for any improvements.

### 4.1 Experimental setup

Dataset | Split | # Samples | Total
---|---|---|---
$\delta$-atomic | train | 35,001 | 42,977
 | test | 4137 |
 | dev | 3839 |
$\delta$-social | train | 88,675 | 92,295
 | test | 1836 |
 | dev | 1784 |
$\delta$-snli | train | 77,015 | 95,795
 | test | 9438 |
 | dev | 9342 |

Table 1: Number of samples in each dataset by split.

##### Datasets

Our end task performance is measured on the three existing datasets for defeasible inference provided by Rudinger et al. (2020):333 github.com/rudinger/defeasible-nli $\delta$-atomic, $\delta$-snli, $\delta$-social (Table 1). These datasets exhibit substantial diversity because of their domains: $\delta$-snli (natural language inference), $\delta$-social (reasoning about social norms), and $\delta$-atomic (commonsense reasoning). Thus, it would require a general model to perform well across these diverse datasets.

##### Baselines and setup

The previous state-of-the-art (SOTA) is the RoBERTa Liu et al. (2019) model presented in Rudinger et al. (2020), and we report the published numbers for this baseline. For an additional baseline, we directly use the initial inference graph $\mathbf{G}^{\prime}$ generated by $\textsc{gen}_{\text{init}}$, and provide it to the model simply as a string (i.e., a sequence of tokens; a simple, often-used approach). This baseline is called e2e-STR. We use the same hyperparameters as Rudinger et al. (2020), and add a detailed description of the hyperparameters in Appendix §C. For all the qa experiments, we report the accuracy on the test set using the checkpoint with the highest accuracy on the development set. We use McNemar’s test McNemar (1947); Dror et al. (2018) and use $p<0.05$ as a threshold for statistical significance. All the p-values are reported in Appendix §G.

### 4.2 Results

Table 2 compares QA accuracy on these datasets without and with modeling the question scenario. The results suggest that we get consistent gains across all datasets, with $\delta$-snli gaining about 4 points. curious achieves a new state-of-the-art across all three datasets, as well as now producing justifications for its answers with inference graphs.
| $\delta$-atomic | $\delta$-snli | $\delta$-social ---|---|---|--- Prev-SOTA | 78.3 | 81.6 | 86.2 e2e-STR | 78.8 | 82.2 | 86.7 curious | 80.2* | 85.6* | 88.6* Table 2: curious is better across all the datasets. This demonstrates that understanding the question scenario through generating an inference graph helps. * indicates statistical significance. ### 4.3 Understanding curious gains In this section, we study the contribution of the main components of the curious pipeline. #### 4.3.1 Impact of graph corrector We ablate the graph corrector module $\textsc{gen}_{\text{corr}}$ in curious by directly supplying the output from $\textsc{gen}_{\text{init}}$ to the graph encoder. Table 3 shows that this ablation consistently hurts across all the datasets. $\textsc{gen}_{\text{corr}}$ provides 2 points gain across datasets. This indicates that better graphs lead to better task performance, assuming that $\textsc{gen}_{\text{corr}}$ actually reduces the noise. Next, we investigate if $\textsc{gen}_{\text{corr}}$ can produce more informative graphs. | $\delta$-atomic | $\delta$-snli | $\delta$-social ---|---|---|--- $\mathbf{G}^{\prime}$ | 78.5 | 83.8 | 88.2 $G$ | 80.2* | 85.6* | 88.6 Table 3: Performance w.r.t. the graph used. $\mathbf{G}^{\prime}$ is the initial graph from $\textsc{gen}_{\text{init}}$, $G$ is the corrected graph from $\textsc{gen}_{\text{corr}}$. Better graphs lead to better task performance. * indicates statistical significance. ##### Do graphs corrected by $\textsc{gen}_{\text{corr}}$ show fewer repetitions? We evaluate the repetitions in the graphs produced by $\textsc{gen}_{\text{init}}$ and $\textsc{gen}_{\text{corr}}$ using two metrics: (i) repetitions per graph: average number of repeated nodes in a graph. (ii) % with repetitions: % of graphs with at least one repeated node. | Repetitions | $\textsc{gen}_{\text{init}}$ | $\textsc{gen}_{\text{corr}}$ ---|---|---|--- $\delta$-atomic | per graph | 2.05 | 1.26 | % graphs | 72 | 48 $\delta$-snli | per graph | 2.09 | 1.18 | % graphs | 73 | 46 $\delta$-social | per graph | 2.2 | 1.32 | % graphs | 75 | 49 overall | per graph | | $\Delta$ -40% | % graphs | | $\Delta$ -25.7% Table 4: $\textsc{gen}_{\text{corr}}$ reduces the inconsistencies in graphs. The number of repetitions per graph and percentage of graphs with some repetition, both improve. Table 4 shows $\textsc{gen}_{\text{corr}}$ does reduce repetitions by approximately 40% (2.11 to 1.25) per graph across all datasets, and also reduces the fraction of graphs with at least one repetition by 25.7% across. #### 4.3.2 Impact of graph encoder We experiment with two alternative approaches to graph encoding to compare our MoE approach by using the graphs generated by $\textsc{gen}_{\text{corr}}$: 1\. Graph convolutional networks: We follow the approach of Lv et al. (2020) who use gcn Kipf and Welling (2017) to learn rich node representations from graphs. Broadly, node representations are initialized by $\mathcal{L}$ and then refined using a gcn. Finally, multi-headed attention Vaswani et al. (2017) between question representation $\mathbf{h}_{\mathbf{x}}$ and the node representations is used to yield $\mathbf{h}_{\mathbf{G}}$. We add a detailed description of this method in Appendix §H. 2\. String based representation: Another popular approach Sakaguchi et al. 
(2021) is to concatenate the string representation of the nodes, and then using $\mathcal{L}$ to obtain the graph representation $\mathbf{h}_{\mathbf{G}}$ $=\mathcal{L}(v_{1}\|v_{2}\|..)$ where $\|$ denotes string concatenation. Table 5 shows that MoE graph encoder improves end task performance significantly compared to the baseline.444Appendix §E provides an analysis on time and memory requirements. In the following analysis, we study the reasons for these gains in-depth. We hypothesize that gcn is less resistant to noise than MoE in our setting, thus causing a lower performance. The graphs augmented with each query are not human-curated and are instead generated by a language model in a zero-shot inference setting. Thus, the gcn style message passing might amplify the noise in graph representations. On the other hand, moe-v first selects the most useful nodes to answer the query to form the graph representation $\mathbf{h}_{\mathbf{G}}$. Further, moe-gx can also decide to completely discard the graph representations, as it does in many cases where the true answer for the defeasible query is weakens. To further establish the possibility of message passing hampering the downstream task performance, we experiment with a gcn-MoE hybrid, wherein we first refine the node representations using a 2-layer gcn as used by Lv et al. (2020), and then pool the node representations using an MoE. We found the results to be about the same as ones we obtained with gcn (3rd-row Table 5), indicating that bad node representations are indeed the root cause for the bad performance of gcn. This is also supported by Shi et al. (2019) who found that noise propagation directly deteriorates network embedding and gcn is sensitive to noise. Interestingly, graphs help the end-task even when encoded using a relatively simple str based encoding scheme, further establishing their utility. | $\delta$-atomic | $\delta$-snli | $\delta$-social ---|---|---|--- str | 79.5 | 83.1 | 87.2 gcn | 78.9 | 82.4 | 88.1 gcn \+ MoE | 78.7 | 84.3 | 87.8 MoE | 80.2 | 85.6 | 88.6 Table 5: Contribution of MoE-based graph encoding compared with alternative graph encoding methods. The gains of MoE over gcn are statistically significant for all the datasets, and the gains over str are significant for $\delta$-snli and $\delta$-social. #### 4.3.3 Detailed MoE analysis We now analyze the two MoEs used in curious: (i) the MoE over the nodes (moe-v), and (ii) the MoE over $\mathbf{G}$ and input $x$ (moe-gx). Figure 5: moe-gx gate values for the classes strengthens and weakens, averaged over the three datasets. ##### moe-gx performs better for $y=\text{strengthens:}$ Figure 5 shows that the graph makes a stronger contribution than the input, when the label is strengthens. In instances where the label is weakens, the gate of moe-gx gives a higher weight to the question. This trend was present across all the datasets. We conjecture that this happens because language models are tuned to generate events that happen rather than events that do not. In the case of a weakener, the nodes must be of the type event1 leads to less of event2, whereas language models are naturally trained for event1 leads to event2. Understanding this in-depth requires further investigation in the future. ##### moe-v relies more on specific nodes: We study the distribution over the types of nodes and their contribution to moe-v. 
Recall from Figure 3 that C- and C+ nodes are contextualizers that provide more background context to the question, and the S- node is typically an inverse situation (i.e., inverse $\mathbf{S}$), while M- and M+ are the mediator nodes leading to the hypothesis. Figure 6 shows that the situation node S- was the most important, followed by the contextualizer and the mediator. Notably, our analysis shows that mediators are less important for machines than they were for humans in the experiments conducted by Madaan et al. (2021). This is probably because humans and machines use different pieces of information. As our error analysis shows in §5, the mediators can be redundant given the query $\mathbf{x}$. Humans might have used the redundancy to reinforce their beliefs, whereas machines leverage the unique signals present in S- and the contextualizers.

Figure 6: moe-v gate values for the three datasets.

##### moe-v, moe-gx have a peaky distribution:

A peaky distribution over the gate values implies that the network is judiciously selecting the right expert for a given input. We computed the average entropy of moe-v and moe-gx and found the entropy values to be 0.52 (max 1.61) for moe-v, and 0.08 (max 0.69) for moe-gx. The distribution of the gate values of moe-v is relatively flat, indicating that the specialization of the node experts might have some room for improvement (additional discussion in Appendix §B). Analogous to scene-graph-based explanations in visual QA Ghosh et al. (2019), peaky distributions over nodes can be considered as an explanation through supporting nodes.

##### moe-v learns the node semantics:

The network learned the semantic grouping of the nodes (contextualizers, situation, mediators), which became evident when plotting the correlation between the gate weights. As Figure 7 shows, there is a strong negative correlation between the situation nodes and the context nodes, indicating that only one of them is activated at a time.

Figure 7: Correlation between the probability assigned to each semantic type of node by moe-v.

## 5 Error analysis

 | now fail | now succ
---|---|---
prev. fail | $\delta$-atomic 615, $\delta$-snli 197, $\delta$-social 772 | $\delta$-atomic 294, $\delta$-snli 124, $\delta$-social 398
prev. succ | $\delta$-atomic 207, $\delta$-snli 68, $\delta$-social 302 | $\delta$-atomic 3022, $\delta$-snli 1448, $\delta$-social 7967

Table 6: Confusion matrix: can curious fix previously failing or successful examples?

Table 6 shows that curious is able to correct several previously wrong examples. When curious corrected previously failing cases, the moe-v relied more on mediators, as the average mediator probabilities go up from 0.09 to 0.13 averaged over the datasets. curious still fails, and more concerning are the cases when previously successful examples now fail. To study this, we annotate 50 random dev samples over the three datasets (26/24 examples for the weakens/strengthens label). For each sample, a human annotated whether the graph had errors. We observe the following error categories:555Concrete examples in Appendix §F.

* $\bullet$ All nodes off-topic (4%): The graph nodes were not on topic. This (rarely) happens when curious cannot distinguish the sense of a word in the input question. For instance, for $\mathbf{S}$ = there is a water fountain in the center, curious generated a graph based on the incorrect word sense of a natural water spring.
* $\bullet$ Repeated nodes (20%): These may be exact or near-exact matches. Node pairs with similar effects tend to be repeated in some samples.
E.g., the S- node is often repeated with contextualizer C- perhaps because these nodes indirectly affect graph nodes in a similar way. * $\bullet$ Mediators are uninformative (34%): The mediating nodes are not correct or informative. One source of these errors is when the $\mathbf{H}$ and $\mathbf{S}$ are nearly connected by a single hop, e.g., $\mathbf{H}$ = personX pees, and $\mathbf{S}$ = personX drank a lot of water previously. * $\bullet$ Good graphs are ineffective (42%): These graphs contained the information required to answer the question, but the gating MOE mostly ignored this graph. This could be attributed in part to the observation in the histogram in Figure 5, that samples with weakens label disproportionately ignore the graph. In accordance with the findings of Rudinger et al. (2020), the maximum percentage of errors was in $\delta$-atomic, in part due to low question quality. ## 6 Explainability In this section, we analyze the explainability of curious model. Jacovi and Goldberg (2020) note that an explanation should aim towards two complementary goals: i) Plausibility: provide an interpretation of system outputs that is convincing for humans, and ii) Faithfulness: capture the actual reasoning process of a model. We discuss how our approach takes a step towards addressing these goals. ##### Plausibility In a prior work, Madaan et al. (2021) show that human annotators selectively picked and chose parts of the graph that explained a model decision and enabled them in improving on the task of defeasible reasoning. We show in §4.3.3, the MoE gate values gives insights into the part of the graph (contextualizer, mediator, situation node) that the model leveraged to answer a query. Our model thus produces a reasoning chain that is similar to the explanation that humans understand, providing a step towards building inherently plausible models, while also achieving better performance. ##### Measuring faithfulness w.r.t. graphs Since faithfulness is a widely debated term, we restrict its definition to measure faithfulness w.r.t to the reasoning graph. This can be measured by the correlation between the model performance and graph correctness. A high correlation implies that the model uses both the graph and query to generate an answer and thus is faithful to the stated reasoning mechanism (i.e., graphs used to answer a question). Our analysis reveals this to be the case: in cases where the model answers incorrectly, 42% of the graphs were entirely correct (§5). In contrast, when the model answers correctly, 82% of the graphs are correct. In summary, we hope that curious serves as a step towards building reasoning models that are both plausible and faithful. ## 7 Related work ##### Mental Models Cognitive science has long promoted mental models - coherent, constructed representations of the world - as central to understanding, communication, and problem-solving Johnson-Laird (1983); Gentner and Stevens (1983); Hilton (1996). Our work draws on these ideas, using inference graphs to represent the machine’s “mental model” of the problem at hand. Building the inference graph can be viewed as first asking clarification questions about the context before answering. This is similar to self-talk Shwartz et al. (2020) but directed towards eliciting chains of influence. ##### Injecting Commonsense Knowledge Many prior systems use commonsense knowledge to aid question-answering, e.g., using sentences retrieved from a corpus Yang et al. (2019); Guu et al. 
(2020), or with knowledge generated from a separate source Shwartz et al. (2020); Bosselut et al. (2019); and injected either as extra sentences fed directly to the model Clark et al. (2020), via the loss function Tandon et al. (2018), or via attention Ma et al. (2019). Unlike prior work, we use conditional language generation techniques to create graphs that are relevant to answering a question. ##### Encoding Graph Representations Several existing methods use graphs as an additional input for commonsense reasoning Sun et al. (2018); Lin et al. (2019); Lv et al. (2020); Feng et al. (2020); Bosselut et al. (2021); Ma et al. (2021); Kapanipathi et al. (2020). These methods first retrieve a graph relevant to a question using information retrieval techniques and then encode the graph using graph representation techniques like gcn Kipf and Welling (2017) and graph attention Velickovic et al. (2018). Different from these works, we use a graph generated from the query for answering the commonsense question. The graphs consumed by these works contain entities grounded in knowledge graphs like ConceptNet Speer et al. (2017), whereas we perform reasoning over event inference graphs where each node describes an event. Our best model uses a mixture-of-experts (MoE) Jacobs et al. (1991) model to pool multi-faceted input. Prior work has shown the effectiveness of using MoE for graph classification Zhou and Luo (2019); Hu et al. (2021), cross-lingual language learning Chen et al. (2019); Gu et al. (2018), and model ensemble learning Fedus et al. (2021); Shazeer et al. (2017). To the best of our knowledge, we are the first to use MoE for learning and pooling graph representations for qa task. ## 8 Summary and Conclusion Cognitive science suggests that people form “mental models” of a situation to answer questions about it. Drawing on those ideas, we have presented a simple instantiation in which the situational model is an inference graph. Different from gcn-based models popular in graph learning, we use mixture-of-experts to pool graph representations. Our experiments show that MoE-based pooling can be a strong (both in terms of performance and explainability) alternative to gcn for graph-based learning for reasoning tasks. Our method establishes a new state-of-the-art on three defeasible reasoning datasets. Overall, our method shows that performance can be improved by guiding a system to “think about” a question and explicitly model the scenario, rather than answering reflexively. ## Acknowledgments We thank the anonymous reviewers for their feedback. Special thanks to reviewer 2 for their insightful comments on our MoE formulation. This material is partly based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. ## References * Bosselut et al. (2021) Antoine Bosselut, Ronan Le Bras, and Yejin Choi. 2021. Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering. In _Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI)_. * Bosselut et al. 
(2019) Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4762–4779, Florence, Italy. Association for Computational Linguistics. * Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015\. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. * Chen et al. (2019) Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019\. Multi-source cross-lingual model transfer: Learning what to share. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3098–3112, Florence, Italy. Association for Computational Linguistics. * Clark et al. (2020) P. Clark, Oren Etzioni, Daniel Khashabi, Tushar Khot, B. D. Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, Sumithra Bhakthavatsalam, Dirk Groeneveld, Michal Guerquin, and Michael Schmitz. 2020. From ’f’ to ’a’ on the n.y. regents science exams: An overview of the aristo project. _AI Mag._ , 41:39–53. * Dror et al. (2018) Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics. * Falcon (2019) et al. Falcon, WA. 2019. Pytorch lightning. _GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning_ , 3. * Fedus et al. (2021) William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. _arXiv preprint arXiv:2101.03961_. * Feng et al. (2020) Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1295–1309, Online. Association for Computational Linguistics. * Forbes et al. (2020) Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020\. Social chemistry 101: Learning to reason about social and moral norms. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 653–670, Online. Association for Computational Linguistics. * Gentner and Stevens (1983) Dedre Gentner and Albert L. Stevens. 1983. _Mental Models_. Lawrence Erlbaum Associates. * Ghosh et al. (2019) S. Ghosh, Giedrius Burachas, Arijit Ray, and Avi Ziskind. 2019. Generating natural language explanations for visual question answering using scene graphs and visual attention. _ArXiv_ , abs/1902.05715. * Gu et al. (2018) Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 344–354, New Orleans, Louisiana. 
Association for Computational Linguistics. * Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In _International Conference on Machine Learning_ , pages 3929–3938. PMLR. * Hilton (1996) D. Hilton. 1996. Mental models and causal explanation: Judgements of probable cause and explanatory relevance. _Thinking & Reasoning_, 2:273–308. * Hu et al. (2021) Fenyu Hu, Liping Wang, Shu Wu, Liang Wang, and Tieniu Tan. 2021. Graph classification by mixture of diverse experts. _arXiv preprint arXiv:2103.15622_. * Jacobs et al. (1991) Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991\. Adaptive mixtures of local experts. _Neural computation_ , 3(1):79–87. * Jacovi and Goldberg (2020) Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4198–4205, Online. Association for Computational Linguistics. * Johnson-Laird (1983) P. Johnson-Laird. 1983. _Mental Models : Towards a Cognitive Science of Language_. Harvard University Press. * Jordan and Jacobs (1994) Michael I Jordan and Robert A Jacobs. 1994. Hierarchical mixtures of experts and the em algorithm. _Neural computation_ , 6(2):181–214. * Jordan and Xu (1995) Michael I Jordan and Lei Xu. 1995. Convergence results for the em approach to mixtures of experts architectures. _Neural networks_ , 8(9):1409–1431. * Kapanipathi et al. (2020) Pavan Kapanipathi, Veronika Thost, Siva Sankalp Patel, Spencer Whitehead, Ibrahim Abdelaziz, Avinash Balakrishnan, Maria Chang, Kshitij P. Fadnis, R. Chulaka Gunasekara, Bassem Makni, Nicholas Mattei, Kartik Talamadupula, and Achille Fokoue. 2020. Infusing knowledge into the textual entailment task using graph convolutional networks. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 8074–8081. AAAI Press. * Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net. * Koons (2017) Robert Koons. 2017. Defeasible Reasoning. In Edward N. Zalta, editor, _The Stanford Encyclopedia of Philosophy_ , Winter 2017 edition. Metaphysics Research Lab, Stanford University. * Lin et al. (2019) Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2829–2839, Hong Kong, China. Association for Computational Linguistics. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_. * Lv et al. (2020) Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2020. 
Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 8449–8456. AAAI Press. * Ma et al. (2019) Kaixin Ma, Jonathan Francis, Quanyang Lu, Eric Nyberg, and Alessandro Oltramari. 2019. Towards generalizable neuro-symbolic systems for commonsense question answering. In _Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing_ , pages 22–32, Hong Kong, China. Association for Computational Linguistics. * Ma et al. (2021) Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari. 2021. Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pages 13507–13515. * Madaan et al. (2021) Aman Madaan, Dheeraj Rajagopal, Niket Tandon, Yiming Yang, and Eduard Hovy. 2021\. Could you give me a hint ? generating inference graphs for defeasible reasoning. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 5138–5147, Online. Association for Computational Linguistics. * McNemar (1947) Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. _Psychometrika_ , 12(2):153–157. * Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. _NIPS 2017 Workshop Autodiff Submission_. * Pollock (1987) J. Pollock. 1987. Defeasible reasoning. _Cogn. Sci._ , 11:481–518. * Pollock (2009) J. Pollock. 2009. A recursive semantics for defeasible reasoning. In _Argumentation in Artificial Intelligence_. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21:1–67. * Rudinger et al. (2020) Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking like a skeptic: Defeasible inference in natural language. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 4661–4675, Online. Association for Computational Linguistics. * Sakaguchi et al. (2021) Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, and Yejin Choi. 2021. proscript: Partially ordered scripts generation via pre-trained language models. _arxiv_. * Sap et al. (2019) Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: an atlas of machine commonsense for if-then reasoning. In _The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019_ , pages 3027–3035. AAAI Press. * Shazeer et al. 
(2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net. * Shi et al. (2019) M. Shi, Yufei Tang, Xingquan Zhu, and J. Liu. 2019. Feature-attention graph convolutional networks for noise resilient learning. _ArXiv_ , abs/1912.11755. * Shwartz et al. (2020) Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020\. Unsupervised commonsense question answering with self-talk. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4615–4629, Online. Association for Computational Linguistics. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA_ , pages 4444–4451. AAAI Press. * Sun et al. (2018) Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4231–4242, Brussels, Belgium. Association for Computational Linguistics. * Tandon et al. (2018) Niket Tandon, Bhavana Dalvi, Joel Grus, Wen-tau Yih, Antoine Bosselut, and Peter Clark. 2018. Reasoning about actions and state changes by injecting commonsense knowledge. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 57–66, Brussels, Belgium. Association for Computational Linguistics. * Tandon et al. (2019) Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for “what if…” reasoning over procedural text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6076–6085, Hong Kong, China. Association for Computational Linguistics. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , pages 5998–6008. * Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net. * Wolf et al. (2019) Thomas Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, et al. 2019. Huggingface’s transformers: State-of-the-art natural language processing. _ArXiv, abs/1910.03771_. * Yang et al. (2019) Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. 
In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)_ , pages 72–77, Minneapolis, Minnesota. Association for Computational Linguistics. * Yang and Liu (1999) Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In _Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval_ , pages 42–49. * Zhou and Luo (2019) Xuanyu Zhou and Yuanhang Luo. 2019. Explore mixture of experts in graph neural networks. _Stanford CS224W_. ## Appendix A Training graph corrector As mentioned in Section §3.2, the graph generator $\textsc{gen}_{\text{init}}$ is trained as a seq2seq model from wiqa with $\texttt{input}=\text{[Premise] }\mathbf{T}_{i}\mid\text{[Situation] }\mathbf{S}_{i}\mid\text{[Hypothesis] }\mathbf{H}_{i}$, and $\texttt{output}=\mathbf{G}_{i}$. Graphs in wiqa additionally capture the influence that the situation has on the hypothesis. Denoting this influence label by $y_{i}$ can be either helps or hurts From our experiments, we observe that appending $y_{i}$ to the training data (from $\texttt{input}=\text{[Premise] }\mathbf{T}_{i}\mid\text{[Situation] }\mathbf{S}_{i}\mid\text{[Hypothesis] }\mathbf{H}_{i}$ to $\texttt{input}=\text{[Premise] }\mathbf{T}_{i}\mid\text{[Situation] }\mathbf{S}_{i}\mid\text{[Hypothesis] }\mathbf{H}_{i}\mid y_{i}$) reduces repetitions by 13%. We refer to this data generator as $\textsc{gen}_{\text{init}}^{*}$, and the graphs produced by it as $\mathbf{G}^{*}$. However, we do not have access to $y$ during test time, and thus $\textsc{gen}_{\text{init}}^{*}$ cannot be used directly to produce $\mathbf{G}^{*}$ for defeasible queries. We circumvent this by using $\textsc{gen}_{\text{init}}^{*}$ to train a graph-to-graph generation model, that takes as input $\mathbf{G}^{\prime}$ and generates $\mathbf{G}^{*}$ as output ($\mathbf{G}^{\prime}\rightarrow\mathbf{G}^{*}$). We call this system $\textsc{gen}_{\text{corr}}$. We give an overview of the process in Figure 8. In Figure 9, we give examples of an intial graph produced by $\textsc{gen}_{\text{init}}$, the corresponding graph produced by $\textsc{gen}_{\text{init}}^{*}$, and the graph produced by $\textsc{gen}_{\text{corr}}$. Figure 8: Training data generation to train $\textsc{gen}_{\text{corr}}$. Figure 9: The graphs generated by $\textsc{gen}_{\text{init}}$ (left), $\textsc{gen}_{\text{init}}^{*}$ (middle), and $\textsc{gen}_{\text{corr}}$ (right).The input graph has repetitions for nodes $\\{C{-},S{-}\\}$, $\\{C{+},H{+}\\}$, and $\\{M{-},M{+}\\}$. The corrected graph replaces the repetitions with meaningful labels. ## Appendix B MoE gradient analysis Figure 10: MoE gradient analysis setup: we consider a simple setting where the weighted output of the experts (using the expert weights $p$) is directly fed to a softmax and is used for generating class probabilities $\hat{y}$. We restate Equation 2 for quick reference: $\displaystyle\mathbf{p}$ $\displaystyle=\mathbf{M}(\mathbf{x})$ $\displaystyle\mathbf{o}$ $\displaystyle=\sum_{i=1}^{n}p_{i}\mathbf{E_{i}(x)}$ where we have changed the notation slightly to use $\mathbf{o}$ as the MoE output instead of $\mathbf{y}$. We also refer to $\mathbf{E_{i}}(x)$ as $\mathbf{E_{i}}$. Further, $o_{j}=\sum_{i=1}^{n}p_{i}E_{ij}$. 
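To make the setting of Figure 10 concrete, the short script below numerically checks the gradient expression derived next (Equation 5) against autograd on a toy instance; the number of experts, classes, and the random values are purely illustrative.

```python
import torch

torch.manual_seed(0)
n, k, c = 3, 4, 2                      # experts, classes, index of the correct class
E = torch.randn(n, k)                  # E[i, j]: logit of expert i for class j
p = torch.softmax(torch.randn(n), 0).requires_grad_(True)  # gate weights, treated as free variables

o = (p.unsqueeze(1) * E).sum(0)        # o_j = sum_i p_i E_ij
loss = -torch.log_softmax(o, 0)[c]     # cross-entropy loss for the correct class c
loss.backward()

y_hat = torch.softmax(o, 0).detach()
# Closed form: dL/dp_m = -E_mc + sum_j y_j E_mj, equivalently Equation 5.
closed = -E[:, c] + (y_hat * E).sum(1)
print(torch.allclose(p.grad, closed, atol=1e-6))  # True: autograd matches the derivation
```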
We present the analysis for a generic multi-class classification setting with $k$ classes, with training done using a cross-entropy loss $\mathcal{L}$ (Figure 10) Let $\hat{y}_{c}$ be the normalized probability of the correct class $c$ calculated using softmax: $\displaystyle\hat{y}_{c}$ $\displaystyle=\frac{\exp(o_{c})}{\sum_{j=1}^{k}\exp(o_{j})}$ $\displaystyle=\frac{\exp(\sum_{i=1}^{n}p_{i}E_{ic})}{\sum_{j=1}^{k}\exp(\sum_{i=1}^{n}p_{i}E_{ij})}$ Let $\mathcal{L}$ be the cross-entropy loss: $\displaystyle\mathcal{L}$ $\displaystyle=-\log\hat{y}_{c}=-o_{c}+\log{\sum_{j=1}^{k}\exp(o_{j})}$ $\displaystyle=-\sum_{i=1}^{n}p_{i}E_{ic}+\log{\sum_{j=1}^{k}\exp(\sum_{i=1}^{n}p_{i}E_{ij})}$ ##### Evaluating $\frac{\partial\mathcal{L}}{\partial p_{m}}$ The derivatives w.r.t. the $m^{th}$ expert gate probability $p_{m}$ is given by: $\displaystyle\frac{\partial\mathcal{L}}{\partial p_{m}}$ $\displaystyle=-E_{mc}+\frac{\sum_{j=1}^{k}E_{mj}\exp(\sum_{i=1}^{n}p_{i}E_{ij})}{\sum_{j=1}^{k}\exp(\sum_{i=1}^{n}p_{i}E_{ij})}$ $\displaystyle=-E_{mc}+\sum_{j=1}^{k}\hat{y}_{j}E_{mj}$ $\displaystyle=-E_{mc}(1-\hat{y}_{c})+\sum_{j=1,j\not=c}^{k}\hat{y}_{j}E_{mj}$ (5) ##### Evaluating $\frac{\partial\mathcal{L}}{\partial E_{mc}}$ the derivatives w.r.t. the logits $E_{mc}$ (logit for the correct class by $m^{th}$ expert) is given by: $\displaystyle\frac{\partial\mathcal{L}}{\partial E_{mc}}$ $\displaystyle=-p_{m}+\frac{\exp(o_{c})p_{m}}{\sum_{j=1}^{k}\exp o_{j}}$ $\displaystyle=-p_{m}(1-\hat{y}_{c})$ (6) Equations 5 and 6 have natural interpretations: the gradient on both the mixture probability $p_{m}$ and the logits $E_{mc}$ will be 0 (note that for Equation 5, $\mathbf{y}^{c}=1\implies\mathbf{y}^{j}=0$ for $j\not=c$) when the network makes perfect predictions ($\hat{y}_{c}=1$). As noted by Jacobs et al. (1991) (Section 1), this might cause the network to specialize slower, as the gradient will be small for experts that are helping in making the correct prediction. They suggest a different loss function that promotes faster specialization by redefining the error function in terms of a mixture distribution, with the mixture weights supplied by the $p_{i}$ terms. Analyzing the effect of loss function for applications where the MoE is used to pool representations remains an interesting future work. ## Appendix C Hyperparameters ##### Training details All of our experiments were done on a single Nvidia GeForceRTX 2080 Ti. We base our implementation on PyTorch Paszke et al. (2017) and also use PyTorch Lightning Falcon (2019) and Huggingface Wolf et al. (2019). The gates and the experts in our MoE model were a single layer MLP. For the experts, we set the input size set to be the same as output size. Table 7 shows the parameters shared by all the methods, and 8 shows the hyperparameters applicable to gcn encoder. Hyperparameter | Value ---|--- Pre-trained model | RoBERTa-base Learning rate | 2e-5 Gradient accumulation batches | 2 Num epochs | 30 Optimizer | AdamW Dropout | 0.1 Learning rate scheduling | linear Warmup | 3 epochs Batch size | 16 Weight decay | 0.01 Gradient clipping | 1.0 Table 7: General hyperparameters used by all the models. Hyperparameter | Value ---|--- # Layers | 2 Layer dropout | 0.1 Number of attention heads | 1 Attention dimension | 256 Table 8: Hyperparameters specific to gcn. ## Appendix D Schema of an influence graph Figure 11 shows the skeleton of an influence graph. Figure 11: Schema of an inference graph. 
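As a compact, machine-readable reference for the schema in Figure 11, the sketch below lists the eight node types with the roles described in Section 3. The edge wiring is only a plausible reconstruction (context to situation, situation to mediator, mediator to hypothesis) and not a copy of the figure itself.

```python
from dataclasses import dataclass, field

@dataclass
class InfluenceGraphSchema:
    # Node roles as described in Section 3.
    roles: dict = field(default_factory=lambda: {
        "C-": "contextualizer (negative)", "C+": "contextualizer (positive)",
        "S":  "situation",                 "S-": "inverse situation",
        "M-": "mediator (negative)",       "M+": "mediator (positive)",
        "H-": "hypothesis (weakened)",     "H+": "hypothesis (strengthened)",
    })
    # Assumed wiring: context -> situation -> mediator -> hypothesis.
    edges: list = field(default_factory=lambda: [
        ("C-", "S"), ("C+", "S"), ("C-", "S-"), ("C+", "S-"),
        ("S", "M-"), ("S", "M+"), ("S-", "M-"), ("S-", "M+"),
        ("M-", "H-"), ("M+", "H+"),
    ])

schema = InfluenceGraphSchema()
print(len(schema.roles), "node types,", len(schema.edges), "edges")
```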
## Appendix E Runtime Analysis Finally, we discuss the cost-performance tradeoffs for various encoding mechanisms (Table 9). As Table 9 shows, both gcn and MoE take about 7% more number of parameters than the str encoding scheme and have about 2x the runtime. Further, as we use one expert per node, the number of parameters scales linearly with the number of nodes. While this is not prohibitive in our setting (each graph has a small number of nodes), our analysis shows that the behavior of the nodes that have similar semantics is correlated, indicating that the experts for those nodes can share parameters. Alternatively, MoE with more than two layers Jordan and Xu (1995) can also help in scaling the number of parameters only logarithmically with the number of nodes. Method | str | gcn | MoE ---|---|---|--- #Params | 124M | 131M | 133M Runtime | 0.17 | 0.47 | 0.40 Table 9: Number of parameters in the different encoding methods. Runtime reports the number of seconds to process one training example. ## Appendix F Error Analysis Examples We show three examples with different types of errors. These examples are taken from Dev set, and these are for the cases where curious introduced a wrong answer, while baseline answered this correctly without the graph. * $\bullet$ Figure 12 shows a failure case when a good graph is unused. Example from $\delta$-atomic dev set. * $\bullet$ Figure 13 shows a failure case when an off topic graph is produced due to confusion in the sense of water fountain. Example from $\delta$-snli dev set. * $\bullet$ Figure 14 shows a failure case when the mediator is wrong. Example from $\delta$-social dev set. Figure 12: Example of a failure case: A good graph is unused. Example from $\delta$-atomic dev set. Figure 13: Example of a failure case: The generated graph is off topic (wrong sense of water fountain is used). Example from $\delta$-snli dev set. Figure 14: Example of a failure case: The mediator nodes (second last level in the graph) are unhelpful. Example from $\delta$-social dev set. ## Appendix G Significance Tests We perform two statistical tests for verifying our results: i) The micro-sign test (s-test) Yang and Liu (1999), and ii) McNermar’s test Dror et al. (2018). Dataset | s-test | McNemar’s test ---|---|--- $\delta$-atomic | 5.07e-05 | 1.1e-04 $\delta$-snli | 2.65e-05 | 6.5e-05 $\delta$-social | 1.4e-04 | 3.2e-04 Table 10: p-values for the three datasets and two different statistical tests while comparing the results with and without graphs (Table 2). As the p-values show, the results in Table 2 are highly significant Dataset | s-test | McNemar’s test ---|---|--- $\delta$-atomic | 0.001 | 0.003676 $\delta$-snli | 0.01 | 0.026556 $\delta$-social | 0.06 | 0.146536 Table 11: p-values for the three datasets and two different statistical tests while comparing the results with noisy vs. cleaned graphs (Table 3). | $\delta$-atomic | $\delta$-snli | $\delta$-social ---|---|---|--- str | 0.13 | 1.8e-06 | 8.7e-06 gcn | 0.006 | 1.31e-05 | 0.03 Table 12: p-values for the s-test for Table 5. | $\delta$-atomic | $\delta$-snli | $\delta$-social ---|---|---|--- str | 0.28 | 4e-06 | 2e-05 gcn | 0.015127 | 3.2e-05 | 0.06 Table 13: p-values for the McNemar’s for Table 5. ## Appendix H Description of gcn encoder We now describe our adaptation of the method by Lv et al. (2020) to pool $\mathbf{h}_{\mathbf{V}}$ into $\mathbf{h}_{\mathbf{G}}$ using gcn. Figure 15 captures the overall design. 
##### Refining node representations The representation for each node $v\in\mathbf{V}$ is first initialized using: $\mathbf{h}_{\mathbf{v}}^{0}=\mathbf{W}^{0}\mathbf{h}_{\mathbf{v}}$ Where $\mathbf{h}_{\mathbf{v}}\in\mathbb{R}^{d}$ is the node representation returned by the $\mathcal{L}$, and $\mathbf{W}^{0}\in\mathbb{R}^{d\times k}$. This initial representation is then refined by running L-layers of a GCN Kipf and Welling (2017), where each layer $l+1$ is updated by using representations from the $l^{th}$ layer as follows: $\displaystyle\mathbf{h}_{v}^{(l+1)}$ $\displaystyle=\sigma\left(\frac{1}{|\mathbf{A}(v)|}{\sum_{w\in\mathbf{A}(v)}\mathbf{W}^{l}\mathbf{h}_{w}^{l}}+\mathbf{W}^{l}\mathbf{h}_{v}^{l}\right)$ $\displaystyle\mathbf{H}^{L}$ $\displaystyle=[\mathbf{h}_{0}^{L};\mathbf{h}_{1}^{L};\ldots;\mathbf{h}_{|\mathbf{V}|-1}^{L}]$ (7) Where $\sigma$ is a non-linear activation function, $\mathbf{W}^{l}\in\mathbb{R}^{k\times k}$ is the GCN weight matrix for the $l^{th}$ layer, $\mathbf{A}(v)$ is the list of neighbors of a vertex $v$, and $\mathbf{H}^{L}\in\mathbb{R}^{|\mathbf{V}|\times k}$ is a matrix of the $L^{th}$ layer representations the $|\mathbf{V}|$ nodes such that $\mathbf{H}^{L}_{i}=\mathbf{h}_{i}^{L}$. ##### Learning graph representation We use multi-headed attention Vaswani et al. (2017) to combine the query representation $\mathbf{h}_{\mathbf{Q}}$ and the nodes representations $\mathbf{H}^{L}$ to learn a graph representation $\mathbf{h}_{\mathbf{G}}$. The multiheaded attention operation is defined as follows: $\displaystyle\mathbf{a}_{i}$ $\displaystyle=\text{softmax}\left(\frac{(\mathbf{W^{q}_{i}}\mathbf{h}_{\mathbf{Q}})(\mathbf{W^{k}_{i}}\mathbf{H}^{L})^{T}}{\sqrt{d}}\right)$ $\displaystyle\text{head}_{i}$ $\displaystyle=\mathbf{a}_{i}(\mathbf{W^{v}_{i}}\mathbf{H}^{L})$ $\displaystyle\mathbf{h}_{\mathbf{G}}$ $\displaystyle=Concat(head_{1},\ldots,head_{h})\mathbf{W}^{O}$ $\displaystyle=\text{MultiHead}(\mathbf{h}_{Q},\mathbf{H}^{L})$ (9) Where $h$ is the number of attention heads, $\mathbf{W}^{q}_{i},\mathbf{W}^{k}_{i},\mathbf{W}^{v}_{i}\in\mathbb{R}^{k\times d}$ and $\mathbf{W}^{O}\in\mathbb{R}^{hd\times d}$. Finally, the graph representation generated by the the MultiHead attention $\mathbf{h}_{\mathbf{G}}\in\mathbb{R}^{n}$ is concatenated with with the question representation $\mathbf{h}_{\mathbf{Q}}$ to get the prediction: $\hat{y}=\text{softmax}([\mathbf{h}_{\mathbf{G}},\mathbf{h}_{\mathbf{Q}}]\mathbf{W}_{out})$ where $\mathbf{W}_{out}\in\mathbb{R}^{d\times 2}$ is a single linear layer MLP. ## Appendix I All results Our experiments span two types of graphs ($\mathbf{G}^{\prime}$, $G$), three datasets ($\delta$-snli, $\delta$-social, $\delta$-atomic), and three graph encoding schemes (str, gcn, MoE). 
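A minimal sketch of the node-refinement step in Equation 7 is given below. Only the two-layer mean-aggregation update follows the text; the adjacency construction, dimensions, and class names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One layer of Equation 7: h_v <- sigma(mean_{w in A(v)} W h_w + W h_v)."""
    def __init__(self, k: int):
        super().__init__()
        self.W = nn.Linear(k, k, bias=False)

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # H: (|V|, k) node representations; A: (|V|, |V|) 0/1 adjacency without self-loops
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)   # |A(v)|
        neigh = (A @ self.W(H)) / deg                   # mean over neighbors of W^l h_w
        return torch.relu(neigh + self.W(H))            # add W^l h_v, then the non-linearity

# Two stacked layers, as in Table 8 (# Layers = 2); graph and sizes are illustrative.
H = torch.randn(8, 256)                                 # 8 nodes, k = 256
R = torch.rand(8, 8)
A = ((R + R.t()) > 1.0).float().fill_diagonal_(0)       # random symmetric adjacency
for layer in [GCNLayer(256), GCNLayer(256)]:
    H = layer(H, A)
```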
Table 14 shows the results on all 18 combinations of $\\{\text{graph types}\\}\times\\{\text{datasets}\\}\times\\{\text{graph encoding schemes}\\}$.

Dataset | Encoder | Graph Type | Accuracy
---|---|---|---
$\delta$-atomic | | n/a |
 | str | $\mathbf{G}^{\prime}$ | 78.78
 | str | $G$ | 79.48
 | gcn | $\mathbf{G}^{\prime}$ | 78.25
 | gcn | $G$ | 78.85
 | MoE | $\mathbf{G}^{\prime}$ | 78.83
 | MoE | $G$ | 80.15
$\delta$-snli | | n/a |
 | str | $\mathbf{G}^{\prime}$ | 82.16
 | str | $G$ | 83.11
 | gcn | $\mathbf{G}^{\prime}$ | 82.63
 | gcn | $G$ | 83.09
 | MoE | $\mathbf{G}^{\prime}$ | 83.83
 | MoE | $G$ | 85.59
$\delta$-social | | n/a | 87.6
 | str | $\mathbf{G}^{\prime}$ | 86.75
 | str | $G$ | 87.24
 | gcn | $\mathbf{G}^{\prime}$ | 87.92
 | gcn | $G$ | 88.12
 | MoE | $\mathbf{G}^{\prime}$ | 88.45
 | MoE | $G$ | 88.62

Table 14: Results for different combinations of graph encoder and graph type.

## Appendix J Graph-augmented defeasible reasoning algorithm

In Algorithm 1, we outline our graph-augmented defeasible learning process.

Given: A language model $\mathcal{L}$, a defeasible query with graph $(\mathbf{x},\mathbf{G})$.
Result: Prediction for the query.
// Encode query
$\mathbf{h}_{\mathbf{Q}}\leftarrow\mathcal{L}(\mathbf{x})$;
// Encode nodes of $\mathbf{G}$
$\mathbf{h}_{\mathbf{V}}\leftarrow\mathcal{L}(\mathbf{v}\in\mathbf{G})$;
// MoE 1: Combine nodes
$\mathbf{h}_{\mathbf{G}}\leftarrow$ Equation 3;
// MoE 2: Combine $Q$, $G$
$\mathbf{h}_{\mathbf{y}}\leftarrow$ Equation 4;
return _softmax(MLP($\mathbf{h}_{\mathbf{y}}$))_

Algorithm 1: Graph-augmented defeasible reasoning using MoE.

Figure 15: Overview of the gcn encoder.
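Putting Algorithm 1 together in code, the sketch below mirrors the encode, pool, and classify flow with assumed class names and dimensions. The encoder is treated as a black-box callable standing in for RoBERTa, and the gate weights are applied to the expert outputs as in Equation 2; this is a sketch, not the released implementation.

```python
import torch
import torch.nn as nn

class GraphAugmentedClassifier(nn.Module):
    def __init__(self, encoder, d: int, n_nodes: int = 5, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                                    # L: text -> R^d (assumed callable)
        self.node_experts = nn.ModuleList([nn.Linear(d, d) for _ in range(n_nodes)])
        self.node_gate = nn.Linear(n_nodes * d, n_nodes)          # moe-v gate over stacked node reps
        self.expert_g = nn.Linear(d, d)                           # graph expert E_G
        self.expert_q = nn.Linear(d, d)                           # question expert E_Q
        self.gx_gate = nn.Linear(2 * d, 2)                        # moe-gx gate over [h_G; h_Q]
        self.head = nn.Linear(d, n_classes)                       # 1-layer MLP classifier

    def forward(self, query: str, node_texts: list) -> torch.Tensor:
        h_q = self.encoder(query)                                 # query representation
        h_v = torch.stack([self.encoder(t) for t in node_texts])  # (n_nodes, d)
        p_v = torch.softmax(self.node_gate(h_v.flatten()), -1)    # moe-v gate weights
        h_g = sum(p * E(h) for p, E, h in zip(p_v, self.node_experts, h_v))   # Eq. 3
        p_gx = torch.softmax(self.gx_gate(torch.cat([h_g, h_q])), -1)
        h_y = p_gx[0] * self.expert_g(h_g) + p_gx[1] * self.expert_q(h_q)     # Eq. 4 with gate weights
        return torch.softmax(self.head(h_y), -1)
```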
# Multi-clue Consistency Learning to Bridge Gaps Between General and Oriented Object in Semi-supervised Detection Chenxu Wang Nanjing University of Science and TechnologyNanjingChina <EMAIL_ADDRESS>, Chunyan Xu Nanjing University of Science and TechnologyNanjingChina , Ziqi Gu Nanjing University of Science and TechnologyNanjingChina and Zhen Cui Nanjing University of Science and TechnologyNanjingChina ###### Abstract. While existing semi-supervised object detection (SSOD) methods perform well in general scenes, they encounter challenges in handling oriented objects in aerial images. We experimentally find three gaps between general and oriented object detection in semi-supervised learning: 1) Sampling inconsistency: the common center sampling is not suitable for oriented objects with larger aspect ratios when selecting positive labels from labeled data. 2) Assignment inconsistency: balancing the precision and localization quality of oriented pseudo-boxes poses greater challenges which introduces more noise when selecting positive labels from unlabeled data. 3) Confidence inconsistency: there exists more mismatch between the predicted classification and localization qualities when considering oriented objects, affecting the selection of pseudo-labels. Therefore, we propose a Multi-clue Consistency Learning (MCL) framework to bridge gaps between general and oriented objects in semi-supervised detection. Specifically, considering various shapes of rotated objects, the Gaussian Center Assignment is specially designed to select the pixel-level positive labels from labeled data. We then introduce the Scale-aware Label Assignment to select pixel-level pseudo-labels instead of unreliable pseudo-boxes, which is a divide-and-rule strategy suited for objects with various scales. The Consistent Confidence Soft Label is adopted to further boost the detector by maintaining the alignment of the predicted results. Comprehensive experiments on DOTA-v1.5 and DOTA-v1.0 benchmarks demonstrate that our proposed MCL can achieve state-of-the-art performance in the semi-supervised oriented object detection task. The code will be available at https://github.com/facias914/sood-mcl ## 1\. Introduction The fully-supervised object detection in aerial images (Xia et al., 2018; Han et al., 2021; Xie et al., 2021; Li et al., 2023) has achieved remarkable success in the past few years. However, labeling a large amount of accurate annotations is labour-consuming, especially for oriented dense objects. To reduce the annotation burden, the semi-supervised object detection (SSOD) attempts to leverage both labeled data and unlabeled data for boosting the detector performance. Existing advanced SSOD methods, which mainly adopt the self-training framework (Liu et al., 2021; Li et al., 2022a; Liu et al., 2022; Wang et al., 2023) to employ these unlabeled data, have achieved significant performance in general scenes. However, they encounter various challenges in handling the oriented object detection task due to the tricky characteristics such as arbitrary rotation angles, large aspect ratio changes and dense arrangement. This prompts us to study the performance gaps between general and oriented object detection when performing semi-supervised learning. With comprehensive investigations in Section 3.2, we figure out that the performance of SOOD is significantly hindered by three gaps and inconsistency problems. 
Specifically, the sampling inconsistency means that the positive labels selected from labeled data are inconsistent with the supervision information required by oriented objects. It is caused by the sub-optimal center-based label assignment and the gap in aspect ratio distribution between general and oriented objects. That is to say, oriented objects usually have more extreme aspect ratio distributions than general ones, while the center-sampling strategy only focuses on the center region; as a result, the discriminative information of large-aspect-ratio objects, which is distributed at the edges, is assigned as background, affecting the optimization of the detector.

Figure 1. The pipeline of our MCL. To address the sampling inconsistency issue, the Gaussian Center Assignment is introduced to select more accurate pixel-level positive labels from labeled data. The Scale-aware Label Assignment is proposed to select pixel-level pseudo-labels for objects with various scales. The Consistent Confidence Soft Label is adopted to mitigate the mismatch problem between classification and localization qualities by maintaining the alignment of the predicted results.

In addition, the assignment inconsistency means that the predicted oriented pseudo-boxes from unlabeled data struggle to guide the semi-supervised learning process, resulting in noisy label assignment. In general, most advanced SSOD methods (Li et al., 2022a; Wang et al., 2023; Zhang et al., 2023) follow the pseudo-boxes paradigm (Sohn et al., 2020), in which the pseudo-boxes predicted by the teacher model act as supervision signals to optimize the student model. However, the introduction of the angle parameter in rotated pseudo-boxes makes their localization quality less controllable than that of horizontal ones. Under the guidance of low-quality rotated pseudo-boxes, noisy labels are inevitably introduced during the label assignment process. Moreover, even with the aid of NMS (Non-Maximum Suppression) and threshold filtering, it is impossible to guarantee that no redundant boxes remain, which brings additional noisy labels.

On the other hand, the confidence inconsistency indicates that there is a mismatch between the predicted classification and localization qualities, affecting the selection of pseudo-labels. Prior research (Sun et al., 2021; Xu et al., 2021; Li et al., 2022a) has validated that the classification score cannot measure the localization quality, and the lack of interaction between the two confidences causes an inconsistency in predictions, leading to sub-optimal pseudo-label selection results. Moreover, while numerous studies have examined proxy measures of localization quality for horizontal boxes, such research is lacking for oriented boxes. Our experiments show that there remains a gap in the representation of localization quality between horizontal and oriented boxes, and that a sub-optimal proxy localization quality also hurts the performance of oriented object detection.

Based on the above observations, we propose a Multi-clue Consistency Learning (MCL) framework to boost the performance of the SOOD task by bridging multiple gaps between general and oriented object detection in the semi-supervised learning process. Figure 1 illustrates the pipeline of MCL. Specifically, to address the sampling inconsistency problem, we introduce the Gaussian Center Assignment (GCA) strategy to select more accurate positive labels from the limited annotated data.
Here, each oriented object region can be represented as a $2D$ Gaussian distribution (Yang et al., 2022) derived from its corresponding ground-truth box, which is adopted to guide the pixel-level positive label selection. In addition, the normalized distance to the center of the object (termed centerness) can be formulated from the same $2D$ Gaussian distribution, which better measures the localization quality of each pixel-level positive label. Compared to other label assignment strategies (Ming et al., 2021; Hou et al., 2022; Xu et al., 2022, 2023) for fully-supervised detection, our designed GCA can not only extract more comprehensive target information from limited labeled data but also avoid the performance degradation caused by the inconsistent sampling supervision of large-aspect-ratio aerial objects.

To mine accurate potential information from a large amount of unlabeled data, we must address the assignment and confidence inconsistencies. For the assignment inconsistency, we propose the Scale-aware Label Assignment (SLA) method to select pixel-level pseudo-labels instead of unreliable pseudo-boxes; it adopts a divide-and-rule strategy that develops different pseudo-label selection rules for objects with different scales. For small objects, a coarse-to-fine pseudo-label selection is used to filter high-quality pseudo-labels from dense features, preserving only a few candidates with both high classification and localization qualities. For large-scale objects with sparse features, we simply use a score-threshold filtering mechanism to maintain scale balance. Different from the previous dense pseudo-label assignments (Zhou et al., 2022; Liu et al., 2023a) used in general scenes, our proposed SLA is more suitable for objects at various scales in oriented object detection and thus selects more reliable pixel-level pseudo-labels from the unlabeled data.

To address the confidence inconsistency problem, we employ a soft label (Li et al., 2020), in which the value at the ground-truth category index indicates the corresponding localization quality, replacing the one-hot label when supervising the classification branch. However, there is a gap in the proxy representations of localization quality between horizontal and oriented boxes. Through the experimental analysis in Section 3.2, we find that centerness (Tian et al., 2019) can effectively represent the localization quality of oriented boxes and outperforms the predicted IoU (Intersection over Union) used for horizontal boxes (Li et al., 2020; Liu et al., 2023a). We therefore propose the Consistent Confidence Soft Label (CCSL) based on centerness to further promote the selection of pseudo-labels by mitigating the inconsistency problem. To our knowledge, no prior work has studied the localization quality representation of oriented boxes.

In conclusion, our contributions are summarized as follows:

* • We look deeply into the gaps between general and oriented object detection in semi-supervised learning, and then propose a multi-clue consistency learning framework to improve the performance of the SOOD task.

* • To address the three inconsistency problems, we specially design the Gaussian Center Assignment, the Scale-aware Label Assignment, and the Consistent Confidence Soft Label methods to mine more instructive supervision signals from the annotated data and the unlabeled training data (a minimal code sketch of the Gaussian Center Assignment follows this list).

* • Comprehensive evaluations and comparisons demonstrate that our MCL achieves state-of-the-art performance on the DOTA-v1.5 and DOTA-v1.0 benchmarks (Xia et al., 2018).
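Before turning to related work, the following minimal NumPy sketch previews the core test behind the Gaussian Center Assignment outlined above; the exact formulation is given as Equations 1-3 in Section 3.3. This is an illustrative sketch based on our reading of the method rather than the authors' implementation, and clipping the centerness at zero is our own assumption.

```python
import numpy as np

def gca_positive_and_centerness(px, py, box):
    """Gaussian Center Assignment test for a single pixel.

    box = (xc, yc, w, h, theta): oriented ground-truth box.
    Returns (is_positive, centerness) for the pixel P = (px, py).
    """
    xc, yc, w, h, theta = box
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    scale = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    sigma = rot @ scale @ rot.T                 # covariance of the 2D Gaussian (Eq. 1)
    d = np.array([px - xc, py - yc])
    m = d @ np.linalg.inv(sigma) @ d            # (P - mu)^T Sigma^{-1} (P - mu)
    is_positive = bool(m <= 1.0)                # positive-label rule (Eq. 2)
    centerness = max(0.0, 1.0 - m)              # oriented centerness (Eq. 3); clipping at 0 is our assumption
    return is_positive, centerness
```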
## 2\. RELATED WORK

Semi-supervised Object Detection. Most previous methods (Liu et al., 2021; Yang et al., 2021; Li et al., 2022c, d; Liu et al., 2023b; Zhang et al., 2023; Nie et al., 2023) are inherited from the Mean Teacher paradigm (Tarvainen and Valpola, 2017), where the teacher model is updated via an Exponential Moving Average (EMA) of the student weights and produces pseudo-labels over unlabeled data as ground truth for training the student model. Under this scheme, the quality and precision of pseudo-labels play a substantial role. ISMT (Yang et al., 2021) maintains a memory bank to ensure the consistency of pseudo-labels across iteration stages. Unbiased Teacher (Liu et al., 2021) applies Focal Loss (Lin et al., 2017b) to tackle class-imbalanced pseudo-label issues. Soft Teacher (Xu et al., 2021) employs a box-jittering augmentation to select reliable pseudo-labels. For misleading instances, Unbiased Teacher v2 (Liu et al., 2022) selects pseudo-labels by utilizing uncertainty predictions. Consistent Teacher (Wang et al., 2023) dynamically adjusts the threshold for pseudo-label filtering. Mix Teacher (Liu et al., 2023b) enhances the quality of pseudo-labels through mixed-scale prediction. Different from these, some methods (Zhou et al., 2022; Li et al., 2022b; Liu et al., 2023a) directly use the dense predictions of the teacher model as pixel-level pseudo-labels for semi-supervised learning. However, since the usage scenarios of oriented object detection are more complex, there still exist performance gaps when applying these SSOD methods to semi-supervised oriented object detection.

Semi-supervised Oriented Object Detection. Oriented object detection requires predicting rotated bounding boxes by adding an angle parameter to the regression task, which poses significant challenges for controlling the quality and accuracy of pseudo-boxes. SOOD (Hua et al., 2023) uses the absolute difference between the angles predicted by the teacher and student models to weight the regression loss. Additionally, it introduces the optimal transport cost (Monge, 1781) to evaluate the global similarity of layouts between the predictions of the teacher and the student. However, SOOD fails to recognize the root issue hindering semi-supervised learning in oriented object detection, which lies in how to eliminate the inconsistency issues caused by the divergence between general and oriented object detection. In this work, we dive into the performance gaps between SSOD and SOOD methods, proposing a multi-clue consistency learning method for semi-supervised learning in oriented object detection.

Label Assignment in Oriented Object Detection. Previous works (Zhang et al., 2020; Ge et al., 2021; Kim and Lee, 2020) have revealed that label assignment plays a crucial role in object detection. For oriented object detection, DAL (Ming et al., 2021) introduces a matching degree to select anchors with high prior and regression qualities. SASM (Hou et al., 2022) sets dynamic IoU thresholds decided by object shapes to filter prior anchors. In order to achieve scale-balanced learning for tiny objects, RFLA (Xu et al., 2022) replaces the IoU-based or center-sampling assignment with a hierarchical label assignment built on the Gaussian receptive field, while DCFL (Xu et al., 2023) develops dynamic priors instead of static priors. Although these assignment methods are effective, they are limited to fully-supervised settings.
With such over-designed label assignment strategies, the model cannot learn adequate target information from sparse labeled data. In this work, we show that a simple change to the sampling prior significantly improves the performance of the SOOD task.

Representation of Localization Quality. The representation of localization quality has attracted increasing attention in the general object detection task. FCOS (Tian et al., 2019) utilizes a separate branch to perform localization quality estimation in the form of centerness. GFL (Li et al., 2020) later points out that, by its definition, centerness tends to take very small values at object edges, making it perform consistently worse than IoU. In the SSOD task, several studies (Liu et al., 2022; Li et al., 2022a; Liu et al., 2023a) also focus on accurately measuring the localization quality of pseudo-labels. For instance, PseCo (Li et al., 2022a) measures the localization quality of pseudo-labels by estimating the consistency of predicted proposals (Ren et al., 2015) for two-stage detectors. Unbiased Teacher v2 (Liu et al., 2022) reveals that centerness is not reliable for reflecting whether a prediction is a positive sample due to the lack of background supervision, and therefore uses the predicted localization uncertainty of the four box boundaries instead. ARSL (Liu et al., 2023a) also replaces the centerness branch with an IoU branch, following VFNet (Zhang et al., 2021) and GFL (Li et al., 2020). However, none of these works are designed for the rotated bounding boxes in oriented object detection. In this work, we find that centerness is still the best choice as the proxy localization quality, as verified in Section 3.2.

## 3\. METHODOLOGY

To study the gaps between general and oriented object detection in semi-supervised learning, we start from the classic SSOD framework (Tarvainen and Valpola, 2017), as introduced in Section 3.1. We then experimentally analyze the three inconsistency issues (Section 3.2). To mitigate the inconsistencies and bridge the gaps, we subsequently propose a Multi-clue Consistency Learning (MCL) framework for the SOOD task. It mainly comprises three modules: Gaussian Center Assignment (Section 3.3), Scale-aware Label Assignment (Section 3.4), and Consistent Confidence Soft Label (Section 3.5).

### 3.1. The Basic SSOD Framework

We introduce the classic pseudo-labeling SSOD framework inherited from Mean Teacher (Tarvainen and Valpola, 2017) as our baseline, which adopts a rotated version of FCOS (Tian et al., 2019) as the detector. In general, the SSOD framework comprises two branches: one dedicated to supervised learning and the other to unsupervised learning. In the burn-in stage, the student model undergoes pre-training with the limited annotated data, where the supervised loss $\mathcal{L}^{s}$, combining the classification, centerness, and regression losses, is minimized. Simultaneously, the learned weights of the student model are mirrored onto the teacher model. In the self-training stage, the unlabeled training data with weak data augmentation is fed into the teacher model to generate pseudo-labels. Subsequently, the student model is further optimized with these generated pseudo-labels (i.e., supervised by $\mathcal{L}^{u}$), while its input consists of unlabeled data augmented more aggressively. Meanwhile, the supervised learning with $\mathcal{L}^{s}$ is maintained on the student network.
The overall loss function is thus formulated as $\mathcal{L}=\mathcal{L}^{s}+\beta\mathcal{L}^{u}$, where $\beta$ is a hyper-parameter regulating the impact of unsupervised learning. The network parameters of the student model are continually updated onto the teacher network through an exponential moving average (EMA) mechanism at each iteration.

### 3.2. Inconsistency Analysis

Figure 2. Analysis of the sampling inconsistency on the general COCO and aerial DOTA-v1.5 datasets. (a) Statistics of the object aspect ratio distribution on both datasets. (b) The class activation mapping of an aerial object with a large aspect ratio. (c) The positive label selection for an aerial object by the common center sampling strategy, where the red points and blue points represent negatives and positives, respectively.

Since oriented objects in aerial images have some specific characteristics (e.g., the bird's-eye-view perspective, complex environments, and varying object scales), the standard SSOD framework encounters various challenging problems when directly applied to the SOOD task. We analyze the core components of the SSOD framework (including positive label assignment and pseudo-label selection) and identify several gaps/inconsistencies between general and oriented object detection in semi-supervised learning. Specifically, we conduct a series of experimental analyses on two datasets representing general scenes and aerial perspectives, namely COCO (Lin et al., 2014) and DOTA-v1.5 (Xia et al., 2018), respectively. To ensure a fair comparison, we use FCOS as the object detector on both datasets.

Sampling Inconsistency: When employing the limited annotated data, the common center sampling strategy is adopted to select positive labels for general objects, but it is not suitable for oriented objects with larger aspect ratios. For an intuitive understanding, we statistically analyze the aspect ratio distributions of targets on the COCO and DOTA-v1.5 datasets, as shown in Figure 2(a). More than 90% of objects on the COCO dataset have aspect ratios exceeding 0.5, so the center sampling strategy can select positive points of general objects and provide sufficient supervision signals. However, more than 70% of objects on the DOTA-v1.5 dataset exhibit aspect ratios less than 0.5, and some objects have extreme aspect ratios (see Figure 2(b)) in the bird's-eye-view perspective. As seen in Figure 2(c), only the blue points can be selected as pixel-level positive points with the common center sampling strategy, while the unselected red points are treated as negative background information, leading to incorrect guidance for model optimization. Therefore, the inconsistency between the shape-agnostic center sampling and objects with large aspect ratios causes performance degradation in the SOOD task.

Assignment Inconsistency: Compared with horizontal object detection in general images, the localization of oriented objects in aerial images is more challenging due to their arbitrary rotation angles. To elucidate this, we conduct evaluations with the FCOS detector on the DOTA-v1.5 and COCO datasets, and then utilize the Intersection over Union (IoU) metric between predicted boxes and the corresponding ground-truth boxes to assess the localization quality of the pseudo-boxes.
As shown in Figure 3(a), we use different IoU thresholds to obtain the pseudo-box precision under the same recall rate on the DOTA-v1.5 and COCO datasets, respectively. With a lower IoU threshold, we achieve a higher precision on the DOTA-v1.5 dataset than on the COCO dataset, but pseudo-boxes with poor localization quality harm the detection optimization. Under the guidance of low-quality rotated pseudo-boxes, noisy labels are inevitably introduced during the label assignment process. When raising the requirement on localization quality by increasing the IoU threshold (from 0.5 to 0.9) on both datasets, the detection precision drops severely from 84% to 17% on the DOTA-v1.5 dataset, while it is reduced from 76% to 54% on the COCO dataset. A large number of redundant noisy boxes caused by the reduced precision also introduce noisy labels. Hence, it is hard to achieve a trade-off between the precision and localization quality of rotated pseudo-boxes, and the assignment inconsistency problem must be addressed to guide the semi-supervised learning process.

Figure 3. Investigation of the assignment and confidence inconsistency problems. (a) The pseudo-box precision on the DOTA-v1.5 and COCO datasets under different IoU thresholds. (b) The top heat-map illustrates the consistency of centerness and IoU between ground-truth boxes and their corresponding true positive boxes on the DOTA-v1.5 dataset, while the bottom one is on the COCO dataset.

Confidence Inconsistency: Prior research (Li et al., 2020; Liu et al., 2023a) has validated that a confidence inconsistency exists between the predicted classification and localization qualities. To enhance the correlation between them, most previous horizontal object detectors (Li et al., 2020; Liu et al., 2023a) use an IoU-based soft label to supervise the classification branch. However, predicting a confident IoU value is difficult for oriented objects due to the increased uncertainty introduced by the rotation angle parameter. Meanwhile, we discover that centerness can well represent the localization quality of oriented bounding boxes. As shown in Figure 3(b), we statistically analyze the IoU and centerness between all ground-truth boxes and their corresponding true positive boxes on the DOTA-v1.5 dataset (the top subfigure) and the COCO dataset (the bottom subfigure). Compared to the COCO dataset, the correlation between the IoU and centerness values is significantly higher on the DOTA-v1.5 dataset. Therefore, to address the confidence inconsistency problem, we utilize the predicted centerness value to measure the localization quality of oriented objects.

### 3.3. Gaussian Center Assignment

To deal with the sampling inconsistency problem, we attempt to select more suitable positive labels from the following aspects: 1) making full use of the sparse annotated data; 2) taking the shape information of oriented objects into consideration. For the first condition, the optimal solution is to leverage the sparse annotations to assign more positive samples, hence some carefully designed label assignment strategies are excluded. Also considering the second condition, we propose a shape-aware method named Gaussian Center Assignment (GCA) to select pixel-level positive labels for oriented object detection in semi-supervised learning. Concretely, we model the object as a $2D$ Gaussian distribution that can well reflect the shape and direction of the aerial object.
We denote a rotated bounding box as $B=\left(x_{c},y_{c},w,h,\theta\right)$, where $\theta$ represents the angle parameter, and $(x_{c},y_{c})$, $w$, and $h$ represent the center coordinates, width, and height of the oriented box, respectively. The coordinate of the center point $(x_{c},y_{c})$ serves as the mean vector of the $2D$ Gaussian distribution, $\boldsymbol{\mu}_{g}=(x_{c},y_{c})^{\top}$, and the co-variance matrix $\boldsymbol{\Sigma}_{g}$ can be formulated as:

(1) $\boldsymbol{\Sigma}_{g}=\left[\begin{array}[]{cc}\cos\theta&-\sin\theta\\\ \sin\theta&\cos\theta\end{array}\right]\left[\begin{array}[]{cc}\frac{w^{2}}{4}&0\\\ 0&\frac{h^{2}}{4}\end{array}\right]\left[\begin{array}[]{cc}\cos\theta&\sin\theta\\\ -\sin\theta&\cos\theta\end{array}\right].$

We then define a point coordinate as $\boldsymbol{P}=(x,y)^{\top}$ and determine whether it is a positive label based on the following formula:

(2) $\text{ label }=\left\\{\begin{array}[]{ll}1,&\text{ if }(\boldsymbol{P}-\boldsymbol{\mu}_{g})^{\top}\boldsymbol{\Sigma}_{g}^{-1}(\boldsymbol{P}-\boldsymbol{\mu}_{g})\leq 1\\\ 0,&\text{ otherwise }.\end{array}\right.$

This label assignment ensures that positive labels cover not only the central region but also the edge region of the target, thereby avoiding the omission of discriminative features for aerial objects with large aspect ratios. Moreover, a large number of sub-optimal positive samples (pixels near the boundaries) also provide sufficient supervision information for the detector optimization. For the oriented object, the new expression of centerness is formulated as follows:

(3) $\text{ centerness }^{*}=1-(\boldsymbol{P}-\boldsymbol{\mu}_{g})^{\top}\boldsymbol{\Sigma}_{g}^{-1}(\boldsymbol{P}-\boldsymbol{\mu}_{g}).$

According to the prior that pixels located around the center of the object are more representative than those near the object boundaries, we establish a new normalized distinction in the object region to represent the pixel-wise localization quality. The Gaussian Center Assignment (GCA) can thus select comprehensive supervision signals from the sparse annotated data to guide the learning, and further improve the performance for objects with large aspect ratios.

### 3.4. Scale-aware Label Assignment

Figure 4. Analysis of different feature maps and the object centerness. (a) Score imbalance between feature maps at different levels. (b) For the centerness-based soft label, condition 1 introduces ambiguities while condition 2 is more suitable.

To tackle the assignment inconsistency problem, we conduct the pixel-level pseudo-label selection based on the feature-level predictions of the teacher model rather than the post-processed pseudo-boxes. With multi-level prediction via FPN (Lin et al., 2017a), objects of different sizes can be detected on different levels of feature maps (i.e., $P_{3},P_{4},P_{5},P_{6},P_{7}$). However, as shown in Figure 4(a), there is a score imbalance problem between feature maps at different levels, where the higher-level feature maps tend to predict lower scores. To balance the allocation of pseudo-labels for oriented targets with different scales, we propose the Scale-aware Label Assignment (SLA) to improve the quality of pseudo-labels by employing different sampling rules for different feature map levels. For the low-level feature maps (i.e., $P_{3},P_{4}$) with dense features, we design a coarse-to-fine rule to predict these small-scale objects with the following two stages.
In the coarse selection stage, the joint confidence (defined as the product of the classification score and the centerness) is used to rank all pixels, and the top-k pixels are selected as candidate pseudo-labels. The joint confidence is expected to select samples with high classification and localization qualities. However, it has been shown that the joint confidence is dominated by the centerness because its value is typically higher than the classification score (Liu et al., 2022), and a high centerness value is unreliable for determining whether a pixel is positive since the centerness branch lacks supervision from negative instances. Therefore, we further filter these candidates with a score threshold to ensure precision. For the high-level feature maps (i.e., $P_{5},P_{6},P_{7}$) with lower predicted scores, we only use the score threshold to filter high-confidence pseudo-labels, thereby avoiding an unbalanced pseudo-label allocation across scales. Here the unsupervised loss consists of three parts:

(4) $\mathcal{L}_{u}=\frac{1}{N_{all}}\sum_{i}^{N_{all}}\mathcal{L}_{i}^{cls}+\frac{\alpha}{N_{pos}}\sum_{j}^{N_{pos}}w_{j}(\mathcal{L}_{j}^{cen}+\mathcal{L}_{j}^{reg})$

where $\alpha$ is a weighting parameter, $N_{all}$ represents the number of pixels across the five feature map levels, and $N_{pos}$ denotes the number of selected pseudo-labels. We apply the quality focal loss (Li et al., 2020), the binary cross-entropy loss, and the smooth-L1 loss to guide the classification, centerness, and regression branches, respectively. Besides, the localization weights $w$ are determined by the following formula:

(5) $w_{j}=\left\\{\begin{array}[]{ll}s_{j}\times c_{j}&j\in[P_{3},P_{4}]\\\ s_{j}&j\in[P_{5},P_{6},P_{7}]\end{array}\right.$

where $s$ and $c$ denote the score and centerness, respectively. Since the pseudo-label selection is based on model predictions, false positive labels are inevitable, and the localization task is more sensitive to such noisy samples. By employing the weight $w$, we reduce the contribution of low-confidence samples to the overall loss.

### 3.5. Consistent Confidence Soft Label

In this part, we focus on alleviating the confidence inconsistency issue. Based on the analysis in Section 3.2, we design a centerness-based soft label to supervise the classification branch. The soft label has to satisfy two conditions: (1) it should be positively correlated with centerness; (2) the score variance among all points within the target should not be too large, especially for small targets. Therefore, we propose the Consistent Confidence Soft Label (CCSL) strategy to align the classification and localization confidences. The core idea of CCSL is to replace the value of the one-hot label at the corresponding category index with a float value $y\in[0,1]$ that satisfies the condition that the covariance calculated with centerness equals 1. The soft label $y\in[0,1]$ is given as follows:

(6) $\text{ y }=[1-(\boldsymbol{P}-\boldsymbol{\mu}_{g})^{\top}\boldsymbol{\Sigma}_{g}^{-1}(\boldsymbol{P}-\boldsymbol{\mu}_{g})]^{\gamma}$

where $\gamma$ is a scale factor designed for the second condition. As shown in Figure 4(b), all points belonging to a small object have a smaller stride, resulting in very similar point features. We want the score values of all points within the target to be very close. Therefore, we weight them with the scale factor $\gamma$, which is formulated as $\sqrt[\beta]{(h\times w)/(H\times W)}$.
$h$ and $w$ represent the height and width of the oriented box, $H$ and $W$ denote the height and width of the whole image, and $\beta$ controls the smoothing degree of $\gamma$. The factor $\gamma$ helps prevent confusion during model training caused by excessive differences in point scores within the target. The proposed CCSL can bridge the gap between horizontal and oriented boxes in localization quality and effectively mitigate the mismatch between classification and localization confidences, thereby bolstering the semi-supervised learning performance.

## 4\. Experiment

### 4.1. Experimental Setup

Dataset: The experiments are conducted on DOTA-v1.5 (Xia et al., 2018) and DOTA-v1.0 (Xia et al., 2018). DOTA-v1.5 comprises 16 categories and contains 400$k$ annotated oriented instances. DOTA-v1.5-train, DOTA-v1.5-val, and DOTA-v1.5-test contain 1411, 458, and 937 images, respectively. DOTA-v1.0 uses the same images as DOTA-v1.5, and its number of annotated instances is half that of DOTA-v1.5, with targets having a pixel area smaller than 10 being ignored. We include three evaluation protocols: 1) DOTA1.5-Partial. Following the SOOD method (Hua et al., 2023), we randomly sample 10%, 20%, and 30% of the images from DOTA-v1.5-train as labeled data and set the remaining images as unlabeled data. 2) DOTA1.5-Full. We set DOTA-v1.5-train as labeled data, DOTA-v1.5-test as unlabeled data, and perform the evaluation on DOTA-v1.5-val. 3) DOTA1.0-Full. DOTA-v1.0-train and DOTA-v1.0-test are employed as labeled data and unlabeled data respectively, while DOTA-v1.0-val is set as the evaluation dataset. For all evaluation protocols, mAP (Xia et al., 2018) is adopted as the evaluation metric.

Implementation Details: We adopt FCOS (Tian et al., 2019) as the detector and ResNet-50 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) as the backbone. Following previous works (Li et al., 2023; Xie et al., 2021; Han et al., 2021), we crop the original images into patches of size 1024 $\times$ 1024, with an overlap region of 200 pixels between adjacent patches. To ensure a fair comparison, the MCL is trained on 2 RTX4090 GPUs with 3 images per GPU (2 labeled images and 1 unlabeled image). The SGD optimizer is applied with an initial learning rate of 0.0025, a momentum of 0.9, and a weight decay of 0.0001. For the 30% partial and full setting experiments, the MCL is trained for 180$k$ iterations, and the learning rate is divided by 10 at 120$k$ and 160$k$. For the 10% partial and 20% partial setting experiments, we train the MCL for 120$k$ iterations, and the learning rate is divided by 10 at 80$k$ and 110$k$. We follow the same pipeline as (Hua et al., 2023; Zhou et al., 2022), employing a burn-in strategy to initialize the parameters of the teacher model and applying the EMA strategy to update its parameters. The weak data augmentation strategy for the teacher model and the strong data augmentation strategy for the student model are also consistent with those works (Hua et al., 2023; Zhou et al., 2022).

Table 1. Performance comparison with other state-of-the-art methods on the DOTA-v1.5-Partial dataset. ${\circ}$ and ${\dagger}$ denote that the base detector is Faster RCNN and FCOS, respectively.
Task | Method | 10% | 20% | 30%
---|---|---|---|---
OD | Faster RCNN (Girshick, 2015) | 43.43 | 51.32 | 53.14
 | FCOS (Tian et al., 2019) | 42.78 | 50.11 | 54.79
SSOD | Unbiased Teacher∘ (Liu et al., 2021) | 44.51 | 52.80 | 53.33
 | Soft Teacher∘ (Xu et al., 2021) | 48.46 | 54.89 | 57.83
 | Dense Teacher† (Zhou et al., 2022) | 46.90 | 53.93 | 57.86
 | PseCo∘ (Li et al., 2022a) | 48.04 | 55.28 | 58.03
 | DualPolish∘ (Zhang et al., 2023) | 49.02 | 55.17 | 58.44
 | ARSL† (Liu et al., 2023a) | 48.17 | 55.34 | 59.02
SOOD | SOOD*† (Hua et al., 2023) | 48.63 | 55.58 | 59.23
 | PST∘ (Wu et al., 2024) | 49.63 | 57.39 | 60.40
 | MCL† (Ours) | 52.98 | 59.63 | 62.63

(All values are mAP(%) $\uparrow$ under the 10%, 20%, and 30% labeled-data splits.)

Table 2. Performance comparison on DOTA-v1.5-Full and DOTA-v1.0-Full, where the number in front of the arrow indicates the result of the supervised baseline.

Method | DOTA-v1.5-Full | DOTA-v1.0-Full
---|---|---
Unbiased Teacher (Liu et al., 2021) | 66.12 $\xrightarrow{-1.27}$ 64.85 | -
Soft Teacher (Xu et al., 2021) | 66.12 $\xrightarrow{+0.28}$ 66.40 | -
Dense Teacher (Zhou et al., 2022) | 65.46 $\xrightarrow{+0.92}$ 66.38 | 66.72 $\xrightarrow{+3.58}$ 70.30
SOOD* (Hua et al., 2023) | 65.46 $\xrightarrow{+2.24}$ 67.70 | 66.72 $\xrightarrow{+4.09}$ 70.81
MCL (Ours) | $\textbf{65.46}\xrightarrow{\textbf{+3.62}}\textbf{69.08}$ | $\textbf{66.72}\xrightarrow{\textbf{+6.91}}\textbf{73.63}$

### 4.2. Comparison with State-of-the-Arts

To validate the effectiveness of MCL, we compare it with previous methods under the DOTA1.5-Partial, DOTA1.5-Full, and DOTA1.0-Full evaluation protocols. To distinguish it from the SOOD task, the SOOD (Hua et al., 2023) method is termed SOOD*. Moreover, the oriented versions of several SSOD methods also participate in the comparison.

DOTA-v1.5-Partial: Experimental results under the DOTA-v1.5-Partial protocol are presented in Table 1. In the SOOD task, SSOD methods typically underperform SOOD methods, which shows the necessity of bridging the gap between horizontal boxes and oriented boxes in semi-supervised detection. SOOD* (Hua et al., 2023) and PST (Wu et al., 2024) are specifically designed for the characteristics of oriented boxes and remote sensing images, yet their performance gains are marginal. By bridging the gaps and addressing the inconsistency concerns, our MCL achieves remarkable improvements over the supervised baseline FCOS and outperforms all SOOD methods under all proportions. It scores 52.98, 59.63, and 62.63 mAP on the 10%, 20%, and 30% experiments, respectively, largely surpassing the state-of-the-art method PST by 2 $\sim$ 3.5 mAP and validating the necessity of our analyses and methods.

DOTA-v1.5-Full and DOTA-v1.0-Full: Table 2 gives the comparison results in terms of mAP on the DOTA-v1.5-Full and DOTA-v1.0-Full configurations. Our MCL demonstrates remarkable advancements over existing methods on both settings. Compared to the state-of-the-art SOOD* using the same detector, MCL achieves notable improvements of 1.38% on the DOTA-v1.5-Full setting and 2.82% on the DOTA-v1.0-Full setting, demonstrating its effectiveness.

Figure 5. Some visualization results from the DOTA-v1.5 dataset. The first and last rows are the results of Dense Teacher and MCL, respectively. True Positive, False Negative, and False Positive predictions are marked in green, red, and blue, respectively.

Visualization Analysis: The visualization results under the 30% setting of DOTA-v1.5-Partial are shown in Figure 5.
Compared with Dense Teacher (Zhou et al., 2022), MCL shows an enhancement in the detection of targets with large aspect ratios, aided by the GCA module. Additionally, supported by the SLA module, MCL demonstrates an improved capability in handling targets of various sizes in remote sensing scenarios. Moreover, the CCSL module assists MCL in identifying high-quality pseudo-labels in densely populated target scenes, effectively reducing the incidence of False Negative predictions. This collective integration of GCA, SLA, and CCSL within MCL not only refines its detection precision but also establishes a robust approach for enhancing object detection in complex remote sensing images.

### 4.3. Ablation Studies

Subsequently, we delve into a detailed analysis of each module. All ablation experiments are performed on the 30% setting of DOTA-v1.5-Partial unless otherwise specified. We set Dense Teacher (Zhou et al., 2022) as the baseline. It also uses pixel-level pseudo-labels by sorting all pixels according to classification scores and selecting the top ratio(%) ones. The original parameter setting uses a ratio of 1%; we set the ratio to 3% by default to maximize its performance.

Table 3. Performance comparison with different modules (mAP(%) $\uparrow$).

 | GCA | SLA | CCSL | 10% | 20% | 30%
---|---|---|---|---|---|---
baseline | $\times$ | $\times$ | $\times$ | 49.71 | 55.64 | 59.65
MCL | $\checkmark$ | $\times$ | $\times$ | 51.12 | 56.74 | 60.62
 | $\times$ | $\checkmark$ | $\times$ | 51.84 | 57.16 | 61.32
 | $\checkmark$ | $\checkmark$ | $\times$ | 52.02 | 57.28 | 61.46
 | $\checkmark$ | $\checkmark$ | $\checkmark$ | 52.98 | 59.63 | 62.63

The effect of each module: We study the effectiveness of our three proposed modules (GCA, SLA, CCSL) under different experiment settings (i.e., 10%, 20%, 30%). The results are shown in Table 3. By comprehensively extracting supervision information from the annotated data, GCA brings a performance enhancement for MCL. Furthermore, SLA enhances the quality of the unsupervised information extracted from unlabeled data and also contributes to the performance improvement. CCSL further improves the precision of pseudo-labels by mitigating the inconsistency between classification and localization confidences; thus, incorporating CCSL leads to an additional performance improvement for MCL.

All Sampling vs GCA: To evaluate the effectiveness of GCA in the SOOD task, we compare GCA with the center sampling strategy used in FCOS (Tian et al., 2019) and the all-sampling strategy that labels all pixels inside ground-truth boxes as positives and the remaining pixels as negatives. The detailed comparison results are shown in Table 4. Although the aim is to fully utilize labeled data to extract more positive supervision information, the all-sampling strategy still shows performance degradation compared to center sampling. We speculate that this is because all-sampling introduces excessive background at the target's edges. In contrast, GCA can provide sufficient positive supervision information while accounting for the target's aspect ratio without introducing excessive noise, thereby improving the performance.

Table 4. Performance comparison between GCA and the All Sampling strategy.

Sampling strategy | mAP
---|---
Center Sampling | 59.65
All Sampling | 58.49
GCA | 60.62

Table 5. Performance comparison between Mean Teacher and Dense Teacher.

Method | mAP
---|---
FCOS (base) | 54.79
Mean Teacher | 26.02 (-28.77)
Dense Teacher | 59.65 (+4.86)
Table 6. Ablation studies on SLA.

 | ratio | thr | topk | Recall | Precision | mAP
---|---|---|---|---|---|---
score | 1% | - | - | 40.08 | 30.76 | 57.86
 | 3% | - | - | 69.44 | 17.76 | 59.65
score $\times$ center | 3% | - | - | 70.38 | 18.01 | 60.45
SLA | - | 0.01 | 1500 | 90.35 | 12.74 | 59.97
 | - | 0.02 | 1500 | 88.76 | 19.50 | 60.61
 | - | 0.03 | 1500 | 87.15 | 24.03 | 60.59
 | - | 0.01 | 2000 | 93.72 | 12.39 | 60.34
 | - | 0.02 | 2000 | 91.89 | 19.13 | 61.32
 | - | 0.03 | 2000 | 90.04 | 23.69 | 60.89

The analysis of SLA: To verify the impact of the SLA hyper-parameters and demonstrate the advantages of SLA over other selection strategies, we conduct experiments with pixel-level recall and precision as the metrics. In detail, we denote the points selected from the ground-truth boxes as ground-truth points. The selected pseudo-labels that coincide with ground-truth points are denoted as true positives, and the remaining ones are false positives. The computation of pixel-level recall and precision is consistent with previous work (Lin et al., 2014). The results are shown in Table 6. It can be observed that the performance of semi-supervised learning improves by 1.79% mAP when the recall increases from 40.08% to 69.44%, demonstrating that maintaining a high recall of pseudo-labels is crucial for ensuring the performance of the SOOD task. In addition, replacing the score selection used in Dense Teacher (Zhou et al., 2022) with joint confidence selection slightly enhances both recall and precision, thereby improving the performance from 59.65 mAP to 60.45 mAP. However, it is still not the optimal solution. By considering the imbalance between the scores of objects of different scales and the disparity between scores and centerness values, SLA significantly enhances recall while maintaining satisfactory precision, thereby markedly improving SOOD performance.

The analysis of CCSL: To validate our analysis in Section 3.2 that the predicted centerness is a more suitable proxy for the localization quality of oriented boxes than the predicted IoU, we perform a comparative experiment between them. As shown in Table 7, compared to predicting the uncontrollable IoU, predicting the centerness while considering the scale information improves the SOOD performance. In addition, the scale factor $\gamma$ is also a critical component. The central and edge features of extremely small objects are very similar, so significant differences in the predicted centerness values would confuse the training. By combining the scale factor $\gamma$, the variance of the centerness distribution inside the target can be dynamically adjusted according to the target scale. Therefore, compared to the vanilla centerness, the centerness with the scale factor better enhances the detection performance.

Table 7. Ablation studies on CCSL.

Localization Quality | $\beta$ | mAP
---|---|---
IoU | - | 60.98
Centerness | 0 | 60.87
 | 0.1 | 61.38
 | 0.2 | 62.63
 | 0.25 | 62.26

Table 8. The impact of GCA on objects with large aspect ratios.

 | HB | | BD | | GTF | | mAP
 | Recall | mAP | Recall | mAP | Recall | mAP |
---|---|---|---|---|---|---|---
FCOS | 81.7 | 71.8 | 60.8 | 40.1 | 72.6 | 55.2 | 65.5
+GCA | 84.6 | 74.1 | 66.5 | 45.7 | 83.0 | 62.5 | 67.2

### 4.4. Inconsistency Mitigation

Sampling Inconsistency: We conduct additional experiments to demonstrate the effectiveness of GCA in mitigating the sampling inconsistency. The metric is the detection performance for objects with large aspect ratios. The models are trained on DOTA-v1.5-train and evaluated on DOTA-v1.5-val.
We extract the three categories with the largest aspect ratios from the 16 categories in DOTA-v1.5, namely HB (harbor), BD (bridge), and GTF (ground-track-field). The experimental results are shown in Table 8. With the support of GCA, there is a notable enhancement in both recall and mAP for these objects with large aspect ratios, which verifies that GCA can effectively alleviate the sampling inconsistency and enhance the performance of large-aspect-ratio object detection.

Assignment Inconsistency: To verify the necessity of using pixel-level pseudo-labels to eliminate the assignment inconsistency, as discussed in Section 3.2, we compare the performance of the pseudo-boxes-based method Mean Teacher (following Consistent Teacher (Wang et al., 2023), we call the vanilla pseudo-boxes framework Mean Teacher) and the pixel-level pseudo-labels-based method Dense Teacher (Zhou et al., 2022), both built on FCOS (Tian et al., 2019). As shown in Table 5, Mean Teacher exhibits a significant performance drop because dense object detectors adopt a binary label assignment strategy, making them highly sensitive to the overall quality of the oriented pseudo-boxes, which is difficult to guarantee. In contrast, pixel-level pseudo-labels greatly alleviate the assignment inconsistency introduced during label assignment, thereby improving the performance.

Confidence Inconsistency: To assess the effectiveness of CCSL in eliminating the confidence inconsistency, we train two FCOS detectors on DOTA-v1.5-train, one supervised with the one-hot label as usual and the other with CCSL. We visualize the score-centerness heatmaps of all ground-truth points on the DOTA-v1.5-val set, as shown in Figure 6. As highlighted in the blue squares, with the incorporation of CCSL, there is an improvement in the positive correlation between the classification scores and the centerness that represents the localization quality, which demonstrates that CCSL is effective in eliminating the inconsistency between classification and localization qualities.

## 5\. Conclusion

In this work, we propose a Multi-clue Consistency Learning (MCL) framework to boost the performance of semi-supervised oriented object detection. We attempt to bridge the gaps between general and oriented object detection in semi-supervised learning by investigating three inconsistency issues. To mitigate the sampling inconsistency, we design the Gaussian Center Assignment to account for the various shapes of aerial objects using the limited annotated data. The proposed Scale-aware Label Assignment and Consistent Confidence Soft Label strategies mine more accurate supervised information from a large amount of unlabeled data, which further promotes the semi-supervised learning. Extensive experimental results on the public DOTA-v1.5 and DOTA-v1.0 datasets clearly demonstrate that our proposed MCL is effective in bridging the gaps between general and oriented object detection in semi-supervised learning.

Figure 6. Illustrated analysis of CCSL. (a) The heatmap of the scores and centerness of positive samples trained with CCSL. (b) The heatmap of the scores and centerness of positive samples trained with the one-hot label.

## References

* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 248–255.
* Ge et al.
(2021) Zheng Ge, Songtao Liu, Zeming Li, Osamu Yoshie, and Jian Sun. 2021. Ota: Optimal transport assignment for object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 303–312. * Girshick (2015) Ross Girshick. 2015. Fast r-cnn. In _Proceedings of the IEEE International Conference on Computer Vision_. 1440–1448. * Han et al. (2021) Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. 2021. Redet: A rotation-equivariant detector for aerial object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 2786–2795. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 770–778. * Hou et al. (2022) Liping Hou, Ke Lu, Jian Xue, and Yuqiu Li. 2022. Shape-adaptive selection and measurement for oriented object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 36. 923–932. * Hua et al. (2023) Wei Hua, Dingkang Liang, Jingyu Li, Xiaolong Liu, Zhikang Zou, Xiaoqing Ye, and Xiang Bai. 2023. SOOD: Towards semi-supervised oriented object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 15558–15567. * Kim and Lee (2020) Kang Kim and Hee Seok Lee. 2020. Probabilistic anchor assignment with iou prediction for object detection. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV 16_. Springer, 355–371. * Li et al. (2022d) Aoxue Li, Peng Yuan, and Zhenguo Li. 2022d. Semi-supervised object detection via multi-instance alignment with global class prototypes. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 9809–9818. * Li et al. (2022a) Gang Li, Xiang Li, Yujie Wang, Yichao Wu, Ding Liang, and Shanshan Zhang. 2022a. Pseco: Pseudo labeling and consistency training for semi-supervised object detection. In _European Conference on Computer Vision_. 457–472. * Li et al. (2022b) Gang Li, Xiang Li, Yujie Wang, Wu Yichao, Ding Liang, and Shanshan Zhang. 2022b. Dtg-ssod: Dense teacher guidance for semi-supervised object detection. _Advances in Neural Information Processing Systems_ 35 (2022), 8840–8852. * Li et al. (2022c) Hengduo Li, Zuxuan Wu, Abhinav Shrivastava, and Larry S Davis. 2022c. Rethinking pseudo labels for semi-supervised object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 36. 1314–1322. * Li et al. (2020) Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. 2020. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. _Advances in Neural Information Processing Systems_ 33 (2020), 21002–21012. * Li et al. (2023) Yuxuan Li, Qibin Hou, Zhaohui Zheng, Ming-Ming Cheng, Jian Yang, and Xiang Li. 2023. Large selective kernel network for remote sensing object detection. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 16794–16805. * Lin et al. (2017a) Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. 2017a. Feature pyramid networks for object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 2117–2125. * Lin et al. (2017b) Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017b. Focal loss for dense object detection. 
In _Proceedings of the IEEE International Conference on Computer Vision_. 2980–2988. * Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In _European Conference Computer Vision_. 740–755. * Liu et al. (2023a) Chang Liu, Weiming Zhang, Xiangru Lin, Wei Zhang, Xiao Tan, Junyu Han, Xiaomao Li, Errui Ding, and Jingdong Wang. 2023a. Ambiguity-resistant semi-supervised learning for dense object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 15579–15588. * Liu et al. (2023b) Liang Liu, Boshen Zhang, Jiangning Zhang, Wuhao Zhang, Zhenye Gan, Guanzhong Tian, Wenbing Zhu, Yabiao Wang, and Chengjie Wang. 2023b. Mixteacher: Mining promising labels with mixed scale teacher for semi-supervised object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 7370–7379. * Liu et al. (2021) Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, and Peter Vajda. 2021. Unbiased teacher for semi-supervised object detection. _arXiv preprint arXiv:2102.09480_ (2021). * Liu et al. (2022) Yen-Cheng Liu, Chih-Yao Ma, and Zsolt Kira. 2022. Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 9819–9828. * Ming et al. (2021) Qi Ming, Zhiqiang Zhou, Lingjuan Miao, Hongwei Zhang, and Linhao Li. 2021. Dynamic anchor learning for arbitrary-oriented object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 35. 2355–2363. * Monge (1781) Gaspard Monge. 1781. Mémoire sur la théorie des déblais et des remblais. _Mem. Math. Phys. Acad. Royale Sci._ (1781), 666–704. * Nie et al. (2023) Yuxiang Nie, Chaowei Fang, Lechao Cheng, Liang Lin, and Guanbin Li. 2023. Adapting object size variance and class imbalance for semi-supervised object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 37. 1966–1974. * Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. _Advances in Neural Information Processing Systems_ 28 (2015). * Sohn et al. (2020) Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, and Tomas Pfister. 2020. A simple semi-supervised learning framework for object detection. _arXiv preprint arXiv:2005.04757_ (2020). * Sun et al. (2021) Peize Sun, Yi Jiang, Enze Xie, Wenqi Shao, Zehuan Yuan, Changhu Wang, and Ping Luo. 2021. What makes for end-to-end object detection?. In _International Conference on Machine Learning_. PMLR, 9934–9944. * Tarvainen and Valpola (2017) Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. _Advances in Neural Information Processing Systems_ 30 (2017). * Tian et al. (2019) Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. 2019. Fcos: Fully convolutional one-stage object detection. In _Proceedings of the IEEE International Conference on Computer Vision_. 9627–9636. * Wang et al. (2023) Xinjiang Wang, Xingyi Yang, Shilong Zhang, Yijiang Li, Litong Feng, Shijie Fang, Chengqi Lyu, Kai Chen, and Wayne Zhang. 2023. Consistent-Teacher: Towards Reducing Inconsistent Pseudo-Targets in Semi-Supervised Object Detection. 
In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 3240–3249. * Wu et al. (2024) Wenhao Wu, Hau-San Wong, and Si Wu. 2024. Pseudo-Siamese Teacher for Semi-Supervised Oriented Object Detection. _IEEE Transactions on Geoscience and Remote Sensing_ (2024). * Xia et al. (2018) Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. 2018. DOTA: A large-scale dataset for object detection in aerial images. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 3974–3983. * Xie et al. (2021) Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. 2021. Oriented R-CNN for object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 3520–3529. * Xu et al. (2023) Chang Xu, Jian Ding, Jinwang Wang, Wen Yang, Huai Yu, Lei Yu, and Gui-Song Xia. 2023. Dynamic coarse-to-fine learning for oriented tiny object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 7318–7328. * Xu et al. (2022) Chang Xu, Jinwang Wang, Wen Yang, Huai Yu, Lei Yu, and Gui-Song Xia. 2022. RFLA: Gaussian receptive field based label assignment for tiny object detection. In _European Conference on Computer Vision_. Springer, 526–543. * Xu et al. (2021) Mengde Xu, Zheng Zhang, Han Hu, Jianfeng Wang, Lijuan Wang, Fangyun Wei, Xiang Bai, and Zicheng Liu. 2021. End-to-end semi-supervised object detection with soft teacher. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 3060–3069. * Yang et al. (2021) Qize Yang, Xihan Wei, Biao Wang, Xian-Sheng Hua, and Lei Zhang. 2021. Interactive self-training with mean teachers for semi-supervised object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 5941–5950. * Yang et al. (2022) Xue Yang, Gefan Zhang, Xiaojiang Yang, Yue Zhou, Wentao Wang, Jin Tang, Tao He, and Junchi Yan. 2022. Detecting rotated objects as gaussian distributions and its 3-d generalization. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 45, 4 (2022), 4335–4354. * Zhang et al. (2021) Haoyang Zhang, Ying Wang, Feras Dayoub, and Niko Sunderhauf. 2021. Varifocalnet: An iou-aware dense object detector. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 8514–8523. * Zhang et al. (2023) Lei Zhang, Yuxuan Sun, and Wei Wei. 2023. Mind the gap: Polishing pseudo labels for accurate semi-supervised object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 37. 3463–3471. * Zhang et al. (2020) Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z Li. 2020. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 9759–9768. * Zhou et al. (2022) Hongyu Zhou, Zheng Ge, Songtao Liu, Weixin Mao, Zeming Li, Haiyan Yu, and Jian Sun. 2022. Dense teacher: Dense pseudo-labels for semi-supervised object detection. In _European Conference on Computer Vision_. 35–50.
# Using Graph Algorithms to Pretrain Graph Completion Transformers

Jonathan Pilault1, Michael Galkin1, Bahare Fatemi2∗, Perouz Taslakian3∗, David Vasquez4, Christopher Pal1,4,5

1Polytechnique Montreal & Mila, 2Google Research, 3Samsung AI, 4ServiceNow Research, 5Canada CIFAR AI Chair

<EMAIL_ADDRESS>

∗ Work performed at ServiceNow Research.

###### Abstract

Recent work on Graph Neural Networks has demonstrated that self-supervised pretraining can further enhance performance on downstream graph, link, and node classification tasks. However, the efficacy of pretraining tasks has not been fully investigated for downstream large knowledge graph completion tasks. Using a contextualized knowledge graph embedding approach, we investigate five different pretraining signals, constructed using several graph algorithms and _no external data_, as well as their combination. We leverage the versatility of our Transformer-based model to explore _graph structure generation_ pretraining tasks, typically inapplicable to most graph embedding methods. We further propose a new path-finding algorithm guided by information gain and find that it is the best-performing pretraining task across three downstream knowledge graph completion datasets. In a multitask setting that combines all pretraining tasks, our method surpasses some of the latest and strongest-performing knowledge graph embedding methods on all metrics for fb15k-237, on Hit@1 for wn18rr, and on MRR and Hit@10 for jf17k (a knowledge hypergraph dataset).

## 1 Introduction

Transfer learning has recently emerged as a powerful technique in several domains Ramachandran et al. (2017); Pennington et al. (2014); Tomar et al. (2017); Subramanian et al. (2018); Oquab et al. (2014); Yosinski et al. (2014); Trinh et al. (2019), in which a model is first pretrained on relevant tasks before being fine-tuned on a downstream task. In most modern vision and NLP applications, such pretraining is often based on the versatile Transformer model Vaswani et al. (2017) using self-supervised learning on unlabeled data Devlin et al. (2019); Raffel et al. (2020); Bao et al. (2022). Transformer-based and self-supervised pretraining have also been applied in graph representation learning scenarios. These studies, however, focus mostly on non-relational graphs Thakoor et al. (2022); Fatemi et al. (2021) or small molecular graphs Ying et al. (2021), leaving pretraining approaches for Knowledge Graphs (KG) relatively unexplored.

Many Graph Neural Network (GNN) techniques use positional embeddings of the entities in the relation along with entity/relation embeddings to represent tuples You et al. (2019). Transformer Language Models Vaswani et al. (2017) use positional embeddings in a similar way. Transformers can perform _contextualized_ link prediction by masking out one of the entities in an input tuple. In an arity-2 graph, a KG can be viewed as a set of triplets $\mathcal{G}=\\{(h_{i},r_{i},t_{i})\\}$, $i\in\\{1,\dotsc,T\\}$, where $h_{i}$ is the head entity, $t_{i}$ is the tail entity, and $r_{i}$ is the relation between $h_{i}$ and $t_{i}$. We can feed the sequence $h_{i}$, $r_{i}$, [mask] or [mask], $r_{i}$, $t_{i}$, where [mask] represents the entity to predict. Since Transformers can process sequences of arbitrary length, we can go beyond triple-based KGs, and consider the more general case of _Knowledge Hypergraphs_ (KHGs, Fatemi et al. (2020)), where edges are composed of tuples with an arbitrary number of entities.
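As a concrete illustration of this input format, the snippet below sketches how a triple can be serialized into a masked token sequence for a sequence-based KG Transformer. The [mask] token follows the paper; the helper name and the toy entities and relation are hypothetical and only meant to show the shape of the data.

```python
def serialize_masked_triple(head, relation, tail, mask_tail=True):
    """Turn a KG triple into a masked input sequence and its prediction target.

    With mask_tail=True the model sees (h, r, [mask]) and must predict t;
    otherwise it sees ([mask], r, t) and must predict h.
    """
    if mask_tail:
        return [head, relation, "[mask]"], tail
    return ["[mask]", relation, tail], head


# Toy usage (entity and relation names are illustrative only):
tokens, target = serialize_masked_triple("Spain", "form_of_government", "parliamentary_monarchy")
# tokens == ["Spain", "form_of_government", "[mask]"], target == "parliamentary_monarchy"
```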
With the versatility of our base model, we can also perform other tasks, such as autoregressive path generation or query conditioned neighborhood prediction, that GNNs or KG embeddings cannot. In this paper, we study the effectiveness of various graph-based pretraining schemes that _do not use external data_ applied to a Transformer model, trained either separately or jointly, and evaluated on the downstream task of KG completion (link prediction). Not using external data is instrumental in situations where data is scarce, confidential, or classified, for example, medical records, crime networks, or terrorist activity. Our techniques can be applied to a generalization of KGs, i.e. KHGs. Our contributions are as follows: (i) formalizing and introducing five pretraining objectives suitable for sequence-based KG Transformers; (ii) introducing a new path-finding algorithm that is guided by information gain; (iii) evaluating the effect of pretraining strategies on triple-based KGs as well as on KHGs. ## 2 Related Works Our method is a _Contextualized KG Embedding_ method Wang et al. (2019) that uses _Self-Supervised Graph Representation_ pretraining. #### Knowledge Graph Embedding methods encode distributed representations of entities and relations using a continuous vector. The representation is learned via a scoring function that measures the plausibility that a given triple $\\{(h_{i},r_{i},t_{i})\\}$ exists. Such embeddings maximize the plausibility of observed triples, which may not be predictive enough for downstream tasks Wang et al. (2015); Wei et al. (2015). To make the embeddings more transferable, researchers have incorporated other types of information. For example, _external_ data sources can complement a KG embedding by also encoding entity types Guo et al. (2015); Xie et al. (2016b) or textual descriptions Xie et al. (2016a); Wang and Li (2016). As explored here, we can also enhance KG embedding by _only_ using structural features from the graph. For example, relational paths and multi- hop relationships between entities (see section A for more details), are a type of structural information that has proven useful Lin et al. (2015); Toutanova et al. (2016); Das et al. (2017) for KG completion. #### Self-Supervised Graph Representation learning techniques broadly fall into three categories and include methods that: (1) use random walk procedures to encode diverse neighborhoods Perozzi et al. (2014); Grover and Leskovec (2016); Hamilton et al. (2017a); (2) reconstruct a graph’s adjacency matrix Kipf and Welling (2016); Hasanzadeh et al. (2019); Fatemi et al. (2021); (3) maximize mutual information between local node representations and global graph representations Veličković et al. (2019); You et al. (2020); Liang et al. (2021). For applications in chemistry and biology, most pre-training techniques also use additional datasets and signals like MAE loss Hu et al. (2020a); Ying et al. (2021), however, our technique does not use _external_ data. With the exception of Deep Relational Graph Infomax (drgi) Liang et al. (2021), most self-supervised pretraining schemes based on graph structure have never been applied to solve large KG completion tasks. Other techniques such as kg-bert Yao et al. (2019) use a bert-based Devlin et al. (2019) pretrained language models with entity descriptions for KG link prediction. Again, such work relies on external data and additional information (descriptions) while our pretraining signals are self-contained. 
## 3 Methodology In this section, we provide an overview of the five different graph algorithms that we use to create pretraining tasks for our experiments: (1) Relational Shortest Path sequence generation (sp); (2) Information gain Path sequence generation (ip); (3) K-Hop Neighbor prediction (khn); (4) Invariant Adjacency matrix classification (iva); (5) Local Clustering Coefficient estimation (lcc). Please see Appendix A for more details on definitions and notation used in this section. When jointly trained (all), we use all pretraining tasks and prepend a task token to delineate each task. Note that for all, the total is the unweighted sum of each pretraining task’s loss. An overview of the task objectives is outlined in Table 1. Task | Input | Target | Objective | Type ---|---|---|---|--- Path-based sp | $\\{e_{i},r,e_{j}\\}$ | $\\{P_{rel}\\}_{r}$ | $\prod_{k}^{n}f(p_{k}|p_{<k},e_{i},r,e_{j})$ | SG ip Neighborhood Based khn | $\\{e_{i},E_{\mathcal{N}_{k-1}}\\}$ | $\mathbf{O}_{\mathcal{N}_{k}}$ | $\text{KL}(\mathbf{O}_{\mathcal{N}_{k}}||f(e_{i},E_{\mathcal{N}_{k-1}}))$ | MP iva | $\\{\mathbf{A},\widetilde{\mathbf{A}}|\mathbf{A}^{\prime}\\}$ | $\\{1|0\\}$ | $y_{i}log(f(\mathbf{A},\widetilde{\mathbf{A}}|\mathbf{A}^{\prime}))$ | BC lcc | $\\{e_{i},E_{\mathcal{N}_{k-1}}\\}$ | $c_{e_{i}}$ | $(f(e_{i})-c_{e_{i}})_{k}^{2}$ | R Table 1: Overview of Graph Algorithm Pretraining Tasks. Rel=Uses Relations; SG=Sequence Generation; BC=Binary Classification; MP=Multi-label Prediction; R=Regression. ### 3.1 Relational Shortest Paths (sp) The set of relational paths $\\{P_{rel}\\}_{r}$ as defined in Section A may yield an exponential number of choices. In practice, we would like to limit the number of such paths and hence, we need a way to select a subset of $\\{P_{rel}\\}_{r}$. There are a few possible heuristics for selecting a subset of such paths. For example, we can find shortest paths based on Dijkstra’s algorithm and then just keep the sequences of relations joining two entities $\\{e_{i},r,e_{j}\\}$. As specified in Table 1, we condition our path sequence generation with $\\{e_{i},r,e_{j}\\}$. The next token generated is the first relation on a sampled path $\\{P_{rel}\\}_{r}$. Our path-based pretraining algorithms allow the model to explore ${\mathcal{G}}$ beyond the tuples included in $\mathcal{G^{\prime}}$ since ${\mathcal{G}}$ includes relational paths between entities that may not appear in any of the train, validation, or test sets. A masked query “[Spain] [form of government] [mask]” can have many relational paths linking the [Spain] entity to a target entity that we want to predict. For example, a relational path may be “[nominated for], [film/country]” that will link [Spain] to the target entity to be predicted. When no path exist between $\\{e_{i},e_{j}\\}$, the model target is the [no_path] token. 
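As a concrete illustration of how the sp pretraining sequences can be extracted, the sketch below finds a shortest path between two entities and keeps only the relation labels, falling back to the [no_path] token when the entities are disconnected; the use of networkx and the "relation" edge attribute are assumptions, not the paper's released code.

```python
# Illustrative sketch of the sp pretraining target: a shortest path between
# two entities, keeping only the relation labels. The use of networkx and
# the "relation" edge attribute are assumptions.
import networkx as nx

def relational_shortest_path(G, e_i, e_j, no_path_token="[no_path]"):
    """Return the relation sequence along one shortest path e_i -> e_j,
    or the [no_path] token when the entities are not connected."""
    try:
        nodes = nx.shortest_path(G, e_i, e_j)
    except nx.NetworkXNoPath:
        return [no_path_token]
    return [G.edges[u, v]["relation"] for u, v in zip(nodes, nodes[1:])]

G = nx.Graph()
G.add_edge("Spain", "film_X", relation="nominated for")
G.add_edge("film_X", "target_entity", relation="film/country")
print(relational_shortest_path(G, "Spain", "target_entity"))
# ['nominated for', 'film/country']
```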
### 3.2 Information gain Paths (ip)

Algorithm 1: Top-$k$ Path Information Gain

Input: Training set $\mathcal{G^{\prime}}$; query relation $r$; number of top paths $k$; max relational path length (max hops) $l$
Output: at most $n=k^{l}$ relational paths $\{P_{rel}\}_{r}$

1. $R^{\prime}\leftarrow\text{FindIncidentRelations}(r,\emptyset)$ (Algorithm 2)
2. $R^{\prime}\leftarrow\text{TopEntropy}(k,R^{\prime})$ (Algorithm 3)
3. $\{P_{rel}\}_{r}\leftarrow R^{\prime}$
4. for $i\in\{1,\dotsc,l\}$ do
5.   for $r^{\prime}\in R^{\prime}$; $p\in\{P_{rel}\}_{r}$ do
6.     $R^{\prime\prime}\leftarrow\text{FindIncidentRelations}(r^{\prime})$ (Algorithm 2)
7.     $R^{\prime\prime}\leftarrow\text{BottomCondEntropy}(k,R^{\prime\prime},r^{\prime})$ (Algorithm 3)
8.     for $r^{\prime\prime}\in R^{\prime\prime}$ do
9.       if $r^{\prime}$ is the last element in $p$ then
10.        $\{P_{rel}\}_{r}\leftarrow\{p\}\cup\{r^{\prime\prime}\}$
11.      end if
12.    end for
13.  end for
14.  $R^{\prime}\leftarrow R^{\prime\prime}$
15. end for
Return: $\{P_{rel}\}_{r}$

While sp provides a measure of subgraph distance and connectivity, i.e. the number of elements between entities $\{e_{i},e_{j}\}$ in the graph, it may often yield reasoning chains having little semantic overlap with the original query. For example, a masked query “[Spain] [form of government] [mask]” produces many sp such as “[nominated for], [film/country]”, which is a sequence of relations unrelated to “[form of government]”. Further, sp algorithms select paths that minimize the distance (number of hops) between two entities. It is therefore formed from shortest paths of the same length, which may limit subgraph exploration. We propose _Information Gain Paths_ in Algorithm 1. ip has the same initial condition $\{e_{i},r,e_{j}\}$ to start generating the paths and has the same learning objective as sp. ip also helps us reduce the number of paths by selecting the ones that constitute a beneficial context for answering the query based on a measure of information gain. Given a query tuple $Q_{r}$ (having relation $r$), the algorithm progressively builds relational paths $\{P_{rel}\}_{r}$ starting from a relation $r$ such that at each step, it selects the (at most) $k$ _incident_ relations that would yield the $k$ highest information gain for the paths constructed so far. With infinite $k$, the algorithm is identical to the breadth-first search as no relations are ignored. Algorithm 1 bears some similarities with beam search Goldberg and Reddy (1977). There are however a few differences: (1) for $l$ hops and $k$ beam size, we obtain a maximum of $k^{l}$ paths; (2) a relation already in $P_{rel}$ cannot be used to form the next hop; (3) the paths are formed via a back-chaining process starting from the last relation on path $P_{rel}$ that connects $e_{i}$ and $e_{j}$; (4) an extra step is performed to select paths in $\{P_{rel}\}_{r}$ linking entities in $Q_{r}$. We let $\text{IG}({P_{rel}})$ denote the _Information Gain_ of relational path $P_{rel}:r_{1},\dotsc,r_{l}\in{\mathcal{G}}^{\prime}$ having length $l$ as $\text{IG}({P_{rel}})=H(r_{l})-\sum\limits_{i=1}^{l-1}H(r_{l-i}|r_{l-i+1})$.
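For readability, the following is a minimal sketch (not the released implementation) of the information-gain score that Algorithm 1 maximizes, $\text{IG}(P_{rel})=H(r_{l})-\sum_{i=1}^{l-1}H(r_{l-i}|r_{l-i+1})$, with $H$ and $H(\cdot|\cdot)$ taken from Eqs. (1)–(2) below; representing the graph as a mapping from each relation to its entity set $E(r)$ is an assumption about the data layout.

```python
# Illustrative sketch: information gain of a relational path, following
# Eqs. (1)-(2) and the IG definition above. E_of maps each relation to the
# set of entities E(r) that co-occur with it in G'; this layout is an
# assumption about the data representation.
import math

def H(r, E_of, n_entities):
    """Eq. (1): information entropy of relation r."""
    p = len(E_of[r]) / n_entities
    return -sum(x * math.log(x) for x in (p, 1.0 - p) if x > 0)

def H_cond(r_prev, r_next, E_of, n_entities):
    """Eq. (2): conditional information entropy H(r_prev | r_next)."""
    inter = E_of[r_prev] & E_of[r_next]
    sym_diff = (E_of[r_prev] | E_of[r_next]) - inter
    u = len(sym_diff) / len(E_of[r_prev])
    v = len(inter) / len(E_of[r_prev])
    return -(len(E_of[r_prev]) / n_entities) * sum(
        x * math.log(x) for x in (u, v) if x > 0)

def information_gain(path, E_of, n_entities):
    """path = [r_1, ..., r_l]; IG = H(r_l) - sum_i H(r_{l-i} | r_{l-i+1})."""
    ig = H(path[-1], E_of, n_entities)
    for i in range(1, len(path)):
        ig -= H_cond(path[-1 - i], path[-i], E_of, n_entities)
    return ig
```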
We define the _Information Entropy_ $H(r_{l})$ for a relation $r_{l}$, and entities $E(r_{l})$ that appear in a tuple in ${\mathcal{G}}^{\prime}$ having relation $r_{l}$ as: $\scriptstyle H(r_{l})=-\left(\frac{|E(r_{l})|}{|{\mathcal{E}}|}\log(\frac{|E(r_{l})|}{|{\mathcal{E}}|})+\frac{|{\mathcal{E}}\setminus E(r_{l})|}{|{\mathcal{E}}|}\log(\frac{|{\mathcal{E}}\setminus E(r_{l})|}{|{\mathcal{E}}|})\right),$ (1) where ${\mathcal{E}}$ is all entities in ${\mathcal{G}}^{\prime}$. The _Conditional Information Entropy_ $H(r_{i-1}|r_{i})$ of two consecutive relations is defined as: $\displaystyle H(r_{i-1}|r_{i})=$ $\displaystyle-\frac{|E(r_{i-1})|}{|{\mathcal{E}}|}\biggl{(}\mathrm{U}\log(\mathrm{U})+\mathrm{V}\log(\mathrm{V})\biggr{)}$ (2) where $\mathrm{U}=\frac{|E(r_{i-1})\cup E(r_{i})\setminus E(r_{i-1})\cap E(r_{i})|}{|E(r_{i-1})|}$ and $\mathrm{V}=\frac{|E(r_{i-1})\cap E(r_{i})|}{|E(r_{i-1})|}$. | Pre | fb15k-237 | wn18rr | jf17k ---|---|---|---|--- Trained | MRR | H@1 | H@3 | H@10 | MRR | H@1 | H@3 | H@10 | MRR | H@1 | H@3 | H@10 Contextualized KG Embedding Results kg-trsf 1 | $\times$ | 0.364 | 0.272 | 0.400 | 0.549 | 0.484 | 0.450 | 0.496 | 0.553 | 0.532 | 0.445 | 0.561 | 0.687 kg-trsf sp | ✓ | 0.363 | 0.272 | 0.403 | 0.559 | 0.473 | 0.438 | 0.488 | 0.549 | 0.537 | 0.450 | 0.569 | 0.703 kg-trsf ip | ✓ | 0.370 | 0.273 | 0.408 | 0.571 | 0.489 | 0.454 | 0.505 | 0.573 | 0.549 | 0.457 | 0.582 | 0.723 kg-trsf khn | ✓ | 0.361 | 0.266 | 0.398 | 0.558 | 0.479 | 0.444 | 0.491 | 0.556 | 0.536 | 0.450 | 0.565 | 0.701 kg-trsf lcc | ✓ | 0.350 | 0.258 | 0.386 | 0.535 | 0.477 | 0.442 | 0.488 | 0.544 | 0.532 | 0.445 | 0.561 | 0.679 kg-trsf iva | ✓ | 0.358 | 0.264 | 0.397 | 0.552 | 0.479 | 0.445 | 0.494 | 0.556 | 0.538 | 0.453 | 0.568 | 0.705 kg-trsf all | ✓ | 0.380 | 0.277 | 0.422 | 0.591 | 0.499 | 0.456 | 0.511 | 0.588 | 0.554 | 0.465 | 0.594 | 0.715 SOTA KG Embedding Results boxe | ✓ | 0.337 | 0.238 | 0.374 | 0.538 | 0.451 | 0.400 | 0.472 | 0.541 | 0.553 | 0.467 | 0.596 | 0.711 drgi | ✓ | 0.362 | 0.270 | 0.399 | 0.549 | 0.479 | 0.445 | 0.496 | 0.543 | — | — | — | — star* | ✓ | 0.358 | 0.205 | 0.322 | 0.482 | 0.401 | 0.243 | 0.491 | 0.675 | — | — | — | — lp-bert* | ✓ | 0.310 | 0.223 | 0.336 | 0.490 | 0.482 | 0.343 | 0.563 | 0.752 | — | — | — | — Table 2: Link prediction test results . H<EMAIL_ADDRESS>Results from: 1Wang et al. (2019). In the table, bold is best and underline is second best. _*Uses external data and is a contextualized KG embedding method_. ### 3.3 k-Hop Neighborhood prediction (khn) Our next graph algorithm to create our pretraining data is based on the observation that the representation of nodes in a neighborhood $\mathcal{N}_{k}(e_{i})$ encodes subgraph structure and provides a more powerful representation of $e_{i}$ Hamilton et al. (2017b). To coerce a Transformer model to understand local structure up to $k$ hops, we ask the model to generate $\mathbf{O}_{\mathcal{N}_{k}}$ of the next hop from Eq. 5, given $E_{\mathcal{N}_{k-1}}$ and $e_{i}$. We input an arbitrarily ordered set of entities $E_{\mathcal{N}_{k-1}}$ to condition our prediction. Note that typically entities are ordered according to their appearance in the original datasets. The loss to learn output entity occurrence probability of entities for hops up to $k=3$ is $L_{\textsl{{khn}}}=\sum^{k}_{i=1}\big{(}\text{KL}(\mathbf{O}_{\mathcal{N}_{k}}||f(e_{i},E_{\mathcal{N}_{k-1}}))\big{)}$, where $E_{\mathcal{N}_{0}}=\\{\\}$ and KL is the Kullback–Leibler divergence Kullback and Leibler (1951). 
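To illustrate how the khn objective can be assembled, the sketch below builds the target occurrence distribution $\mathbf{O}_{\mathcal{N}_{k}}$ from node degrees (Eq. (5) in Appendix A) and applies the KL term above; the use of PyTorch and the assumption that the model's logits are indexed by the same entity set as the target are ours, not the paper's.

```python
# Illustrative sketch of the khn objective: the target is the occurrence
# distribution O_{N_k} over the entities of the k-hop neighborhood (Eq. (5)
# in Appendix A); the loss is KL(O_{N_k} || model prediction). The use of
# PyTorch and the shared entity indexing between target and logits are
# assumptions.
import torch
import torch.nn.functional as F

def occurrence_target(degrees):
    """degrees: positive node degrees d_e of the entities in N_k(e_i)."""
    return degrees / degrees.sum()

def khn_loss(pred_logits, degrees):
    target = occurrence_target(degrees)                 # O_{N_k}
    log_pred = F.log_softmax(pred_logits, dim=-1)       # f(e_i, E_{N_{k-1}})
    return F.kl_div(log_pred, target, reduction="sum")  # KL(target || pred)

degrees = torch.tensor([3.0, 1.0, 2.0])
logits = torch.zeros(3)                                 # uniform prediction
print(khn_loss(logits, degrees))
```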
Please note that we directly chose $k=3$ since it typically performs better than $k=2$ in a variety of settings Nikolentzos et al. (2020). ### 3.4 Invariant Adjacency matrix (iva) One of the key issues with previous heuristics in Section 3.3 is that a certain order is assumed. However, graph representations are more powerful when they are order invariant Khasanova and Frossard (2017). To allow order invariance, we propose a binary classification task based on adjacency matrices $E_{\mathcal{N}_{k}}$ of the local neighborhood subgraph. The label is $1$ if the Transformer is given inputs $\\{\mathbf{A},\widetilde{\mathbf{A}}\\}$, where $\widetilde{\mathbf{A}}$ is the equivalent adjacency matrix to $\mathbf{A}$ where the entity order is randomly permuted. The label is $0$ if the Transformer is given inputs $\\{\mathbf{A},\mathbf{A}^{\prime}\\}$, where $\mathbf{A}^{\prime}$ is a corrupted adjacency matrix. We corrupt the adjacency matrix by randomly swapping the columns or by randomly assigning different adjacency matrix values. Since $\mathbf{A}$ is a symmetric matrix, the input to the Transformer is a flattened sequence of the upper triangular matrix that contains (1) column entities and (2) $\text{row}_{1}(\mathbf{A}),\dots,\text{row}_{|E_{\mathcal{N}_{k}}|}(\mathbf{A})$. For example, $\mathbf{A}=\begin{bmatrix}1&2&1\\\ 2&0&3\\\ 1&3&1\\\ \end{bmatrix}$ with columns $[e_{1},e_{2},e_{3}]$, will produce the flattened sequence $[e_{1},e_{2},e_{3},1,2,1,0,3,1]$. ### 3.5 Local Clustering Coefficient (lcc) For lcc, we use the same inputs as with khn in section 3.3. However, we are trying to estimate the local clustering coefficient of a subgraph $k$-hops away from $e_{i}$. The loss for $k=3$ is therefore $L_{\textsl{{lcc}}}=\sum_{i=1}^{k}(f(e_{i})-c_{e_{i}})_{k}^{2}$. ## 4 Experiments Experimental set-up and training details are left to Appendix C. We compare our results to several strong baselines described in Appendix D. Our results are presented in Table 2. Our baseline model, kg-trsf or “CoKe” Wang et al. (2019), is a Transformer model that is trained from scratch on link prediction. ### 4.1 Results and Discussion Our new algorithm ip provides benefit over kg-trsf across _all_ datasets. Further, we see that H@10 is typically better for all pretraining signals compared to kg-trsf. Moreover, all pretraining signals (except lcc) provide gains on the hypergraph jf17k compared to kg-trsf. We then see that using all pretraining tasks jointly provides $5$% relative MRR score increase over the unpretrained baseline, and often surpassing the competitive performances of state-of-the-art models. The all results are in-line with recent graph pretraining methods Hu et al. (2020a, b) that show that multitask pretraining performs better than any individual task. Interestingly, our path-based algorithm ip provides the largest single task increase in performance over the unpretrained baseline. Except for our wn18rr results, path-based pretraining surpasses neighborhood- based pretraining. Compared to sp, we hypothesize that ip provides a higher quality pretraining signal for two reasons: (1) paths are more diverse and (2) paths are semantically closer to a query $Q_{r}$. sp produces paths that have often redundant first 2-hop relations on a k-hop path $\\{P_{rel}\\}_{r}$. Further, all the paths have the same minimum length and often transit through high degree entities. 
As seen in Section 3.1, high degree entities do not necessarily provide the most meaningful reasoning chains; though the ip-based reasoning chains seem more semantically relevant. For example, for a masked query “Spain”, “form of government” and [mask]”, ip paths are [“military conflict/combatants”, “international organization/member states”], [“adjoining relationship/adjoins”, “continents/countries within”] or [“organization member/member of”, “international organization/member states”]. ### 4.2 Analysis and Ablation In this section, we discuss the choice and combination of tasks. An ablation study is presented in table 3 that allows us to compare various multitask combinations. We first notice that given sp, ip or sp \+ ip relational path- based pretraining schemes, adding any other pretraining signal (khn, lcc or iva) typically results in an improvement. khn provides the largest MRR increase when combined with relational path-based signals, with sp \+ ip \+ khn providing a 0.06 improvement in MRR. iva comes in as the second best signal to combine and lcc only improving MRR slightly. Pretraining Signal Combination (MRR) --- sp | 0.363 | ip | 0.370 | sp \+ ip | 0.372 \+ khn | 0.367 | \+ khn | 0.373 | \+ khn | 0.378 \+ lcc | 0.364 | \+ lcc | 0.370 | \+ lcc | 0.374 \+ iva | 0.366 | \+ iva | 0.373 | \+ iva | 0.375 Table 3: Task combination ablation study on fb15k-237 using MRR. Adding individual signals to sp, ip or sp \+ ip pretraining schemes. ## 5 Conclusion We have investigated the effectiveness of five graph algorithmic pretraining schemes that do not rely on external data sources. On three downstream KG completion tasks, we found that kg-trsf: (1) multitask pretraining results in performance increases, (2) generally, pretrained models exhibit better improvements in recall (H@10) rather than precision (H@1), and (3) the ip pretraining tasks work best. Our study has important implications since it shows that using various graph structural signals that do not rely on external data can outperform strong baselines pretrained with external data. ## Limitations. Our technique has a few limitations. First, the pretraining signals that we used in this paper require a highly adaptive model. Out of the five pretraining schemes, two include the task of relational path sequence generation (sp and ip) and another is a local clustering coefficient regression task (lcc). Such tasks typically cannot be performed by most Graph Neural Networks or Graph Embedding methods. However, the versatility of our Transformer-based technique also means that our model can be pretrained on multiple modalities Mustafa et al. (2022); Lee et al. (2022). For example, pretraining with text and KGs has already proven very powerful in language generation tasks Agarwal et al. (2021). The point of our study was to show that we can surpass other methods such as drgi and lp-bert (see Appendix D for an overview of baselines) that use pretrained bert models and entity descriptions. We suspect that enhancing our entity and relation representations with text will only make the model even stronger at KG completion. Finally, we notice that jointly training on all pretraining tasks yields the best results. However, since it is possible that negative task interference occurs (a negative side effect of multitask learning Zhang et al. (2020); Pilault et al. (2021), a more throughout study of task combinations can help unlock even better performances for each specific dataset. ## References * Abboud et al. 
(2020) Ralph Abboud, Ismail Ceylan, Thomas Lukasiewicz, and Tommaso Salvatori. 2020. Boxe: A box embedding model for knowledge base completion. In _Advances in Neural Information Processing Systems_ , volume 33, pages 9649–9661. Curran Associates, Inc. * Agarwal et al. (2021) Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 3554–3565, Online. Association for Computational Linguistics. * Bao et al. (2022) Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2022. BEit: BERT pre-training of image transformers. In _International Conference on Learning Representations_. * Bollacker et al. (2008) Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In _SIGMOD Conference_. * Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In _Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2_ , NIPS’13, page 2787–2795, Red Hook, NY, USA. Curran Associates Inc. * Das et al. (2017) Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 132–141, Valencia, Spain. Association for Computational Linguistics. * Dettmers et al. (2018) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 1811–1818. AAAI Press. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Dong et al. (2019) Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In _Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada_ , pages 13042–13054. * Fatemi et al. (2021) Bahare Fatemi, Layla El Asri, and Seyed Mehran Kazemi. 2021. Slaps: Self-supervision improves structure learning for graph neural networks. _Advances in Neural Information Processing Systems_ , 34:22667–22681. * Fatemi et al. (2020) Bahare Fatemi, Perouz Taslakian, David Vazquez, and David Poole. 2020. 
Knowledge hypergraphs: Prediction beyond binary relations. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_ , pages 2191–2197. International Joint Conferences on Artificial Intelligence Organization. Main track. * Goldberg and Reddy (1977) H. G. Goldberg and R. Reddy. 1977. Speech understanding systems. Summary of results of the five-year research effort at Carnegie-Mellon University. Interim Report Carnegie-Mellon Univ. * Grover and Leskovec (2016) Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In _Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining_ , pages 855–864. * Guo et al. (2015) Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 84–94, Beijing, China. Association for Computational Linguistics. * Hamilton et al. (2017a) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017a. Inductive representation learning on large graphs. In _Advances in Neural Information Processing Systems_ , volume 30. Curran Associates, Inc. * Hamilton et al. (2017b) William L. Hamilton, Rex Ying, and Jure Leskovec. 2017b. Representation learning on graphs: Methods and applications. _IEEE Data Eng. Bull._ , 40(3):52–74. * Hasanzadeh et al. (2019) Arman Hasanzadeh, Ehsan Hajiramezanali, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. 2019. Semi-implicit graph variational auto-encoders. In _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc. * Hu et al. (2020a) Weihua Hu, Bowen Liu*, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. 2020a. Strategies for pre-training graph neural networks. In _International Conference on Learning Representations_. * Hu et al. (2020b) Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. 2020b. GPT-GNN: generative pre-training of graph neural networks. In _KDD ’20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020_ , pages 1857–1867. ACM. * Khasanova and Frossard (2017) Renata Khasanova and Pascal Frossard. 2017. Graph-based isometry invariant representation learning. In _Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017_ , volume 70 of _Proceedings of Machine Learning Research_ , pages 1847–1856. PMLR. * Kipf and Welling (2016) Thomas N. Kipf and Max Welling. 2016. Variational graph auto-encoders. _CoRR_ , abs/1611.07308. * Kullback and Leibler (1951) S. Kullback and R. A. Leibler. 1951. On Information and Sufficiency. _The Annals of Mathematical Statistics_ , 22(1):79 – 86. * Lee et al. (2022) Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, and Igor Mordatch. 2022. Multi-game decision transformers. * Li et al. (2022) Da Li, Ming Yi, and Yukai He. 2022. LP-BERT: multi-task pre-training knowledge graph BERT for link prediction. _CoRR_ , abs/2201.04843. * Liang et al. (2021) Shuang Liang, Jie Shao, Dongyang Zhang, Jiasheng Zhang, and Bin Cui. 2021. Drgi: Deep relational graph infomax for knowledge graph completion. 
_IEEE Transactions on Knowledge and Data Engineering_ , pages 1–1. * Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015\. Modeling relation paths for representation learning of knowledge bases. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 705–714, Lisbon, Portugal. Association for Computational Linguistics. * Mohamed et al. (2019) Sameh K. Mohamed, Vít Novácek, Pierre-Yves Vandenbussche, and Emir Muñoz. 2019. Loss functions in knowledge graph embedding models. In _DL4KG@ESWC_. * Mustafa et al. (2022) Basil Mustafa, Carlos Riquelme, Joan Puigcerver, Rodolphe Jenatton, and Neil Houlsby. 2022. Multimodal contrastive learning with limoe: the language-image mixture of experts. * Nikolentzos et al. (2020) Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. 2020. k-hop graph neural networks. _Neural Networks_ , 130:195–205. * Oquab et al. (2014) Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. 2014. Learning and transferring mid-level image representations using convolutional neural networks. In _2014 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 1717–1724. * Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. * Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In _Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , KDD ’14, page 701–710, New York, NY, USA. Association for Computing Machinery. * Pilault et al. (2021) Jonathan Pilault, Amine El hattami, and Christopher Pal. 2021. Conditionally adaptive multi-task learning: Improving transfer learning in NLP using fewer parameters & less data. In _International Conference on Learning Representations_. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21(140):1–67. * Ramachandran et al. (2017) Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 383–391, Copenhagen, Denmark. Association for Computational Linguistics. * Subramanian et al. (2018) Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018\. Learning general purpose distributed sentence representations via large scale multi-task learning. In _International Conference on Learning Representations_. * Thakoor et al. (2022) Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. 2022. Large-scale representation learning on graphs via bootstrapping. In _International Conference on Learning Representations_. * Tomar et al. (2017) Gaurav Singh Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural paraphrase identification of questions with noisy pretraining. 
In _Proceedings of the First Workshop on Subword and Character Level Models in NLP_ , pages 142–147, Copenhagen, Denmark. Association for Computational Linguistics. * Toutanova et al. (2016) Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. 2016\. Compositional learning of embeddings for relation paths in knowledge base and text. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1434–1444, Berlin, Germany. Association for Computational Linguistics. * Trinh et al. (2019) Trieu H. Trinh, Minh-Thang Luong, and Quoc V. Le. 2019. Selfie: Self-supervised pretraining for image embedding. _CoRR_ , abs/1906.02940. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , pages 5998–6008. * Veličković et al. (2019) Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. 2019. Deep graph infomax. In _International Conference on Learning Representations_. * Wang et al. (2021) Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021. Structure-augmented text representation learning for efficient knowledge graph completion. In _Proceedings of the Web Conference 2021_ , WWW ’21, page 1737–1748, New York, NY, USA. Association for Computing Machinery. * Wang et al. (2019) Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, and Hua Wu. 2019. Coke: Contextualized knowledge graph embedding. _CoRR_ , abs/1911.02168. * Wang et al. (2015) Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge base completion using embeddings and rules. In _Proceedings of the 24th International Conference on Artificial Intelligence_ , IJCAI’15, page 1859–1865. AAAI Press. * Wang and Li (2016) Zhigang Wang and Juanzi Li. 2016. Text-enhanced representation learning for knowledge graph. In _Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence_ , IJCAI’16, page 1293–1299. AAAI Press. * Watts and Strogatz (1998) D.J. Watts and S.H. Strogatz. 1998. Collective dynamics of ’small-world’ networks. _Nature_ , (393):440–442. * Wei et al. (2015) Zhuoyu Wei, Jun Zhao, Kang Liu, Zhenyu Qi, Zhengya Sun, and Guanhua Tian. 2015. Large-scale knowledge base completion: Inferring via grounding network sampling over selected instances. In _Proceedings of the 24th ACM International on Conference on Information and Knowledge Management_ , CIKM ’15, page 1331–1340, New York, NY, USA. Association for Computing Machinery. * Wen et al. (2016) Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. 2016. On the representation and embedding of knowledge bases beyond binary relations. In _Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016_ , pages 1300–1307. IJCAI/AAAI Press. * Xie et al. (2016a) Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016a. Representation learning of knowledge graphs with entity descriptions. In _Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence_ , AAAI’16, page 2659–2665. AAAI Press. * Xie et al. (2016b) Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016b. 
Representation learning of knowledge graphs with hierarchical types. In _Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence_ , IJCAI’16, page 2965–2971. AAAI Press. * Yao et al. (2019) Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kg-bert: Bert for knowledge graph completion. _ArXiv_ , abs/1909.03193. * Ying et al. (2021) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. 2021. Do transformers really perform badly for graph representation? In _Advances in Neural Information Processing Systems_. * Yosinski et al. (2014) Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In _Advances in neural information processing systems_ , pages 3320–3328. * You et al. (2019) Jiaxuan You, Rex Ying, and Jure Leskovec. 2019. Position-aware graph neural networks. In _International conference on machine learning_ , pages 7134–7143. PMLR. * You et al. (2020) Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. In _Advances in Neural Information Processing Systems_ , volume 33, pages 5812–5823. Curran Associates, Inc. * Zhang et al. (2020) Wen Zhang, Lingfei Deng, and Dongrui Wu. 2020. Overcoming negative transfer: A survey. _CoRR_ , abs/2009.00909. ## Appendix A Definition and Notation Without loss of generality, in this section, we formulate our definitions and notation based on hypergraphs. Given a finite set of entities ${\mathcal{E}}$ and a finite set of relations ${\mathcal{R}}$, a _tuple_ is an ordered set of the form $r(e_{1},\dots,e_{n})$, where $r\in{\mathcal{R}}$, $e_{i}\in{\mathcal{E}}$ for all $i=1,\dots,n$ and $|r|=n$ is its _arity_. Let ${\mathcal{G}}$ be the set of ground truth tuples; that is, it specifies all of the tuples that are true so that if a tuple is not in $\mathcal{G}$, it is false. A _knowledge hypergraph_ (KHG) consists of a subset of the tuples $\mathcal{G^{\prime}}\subseteq{\mathcal{G}}$. A knowledge graph is a special case of a knowledge hypergraph where all relations have arity $2$. We let $E(r)\subseteq{\mathcal{E}}$ denote the set of all entities that appear in a tuple in ${\mathcal{G}}^{\prime}$ having relation $r$. That is, $E(r)=\\{e_{i}|r(\dots,e_{i},\dots)\in{\mathcal{G}}^{\prime}\\}$. Similarly, we let $R(e_{i})\subseteq{\mathcal{R}}$ denote the set of all relations that appear in a tuple in ${\mathcal{G}}^{\prime}$ linked to entity $e_{i}$. That is, $R(e_{i})=\\{r|r(\dots,e_{i},\dots)\in{\mathcal{G}}^{\prime}\\}$. We say that two tuples are _incident_ in ${\mathcal{G}}^{\prime}$ if they share at least one entity. Two entities are _connected_ in $\mathcal{G}$’ if they appear together in a tuple. For example, $r_{1}(e_{1},e_{2},e_{3})$ is incident to $r_{2}(e_{4},e_{3},e_{5})$ because they share $e_{3}$. Entities $e_{2}$ and $e_{3}$ are connected because they both appear in the first tuple. A query for the knowledge completion task is a tuple with one missing entity that needs to be predicted. We let $Q_{r}=[r,e_{1},\dots,\mbox{{[mask]}},\dots,e_{n}]_{q}$ denote a query tuple with relation $r$, where [mask] is the placeholder token (the masked-out entity) for the entity we want to predict. A _path_ $P$ in a KHG is a sequence of tuples where two consecutive tuples share at least one entity. We say that $P$ _connects_ entities $e_{i}$ and $e_{j}$ if the first tuple in $P$ contains $e_{i}$ and the last tuple contains $e_{j}$. 
A _relational path_ $P_{rel}$ between $e_{i}$ and $e_{j}$ is the sequence of relations along the edges of a path connecting $e_{i}$ and $e_{j}$ such that no relation is repeated along the path (without cycles). For example, $e_{1}$ and $e_{6}$ are connected through path $P:r_{1}(e_{1},e_{2},e_{3}),r_{2}(e_{3},e_{4},e_{5}),r_{3}(e_{4},e_{5},e_{6})$, and have a relational path $P_{rel}:r_{1},r_{2},r_{3}$. Given the set of entities in queries $\{Q_{r}\}_{r}$, we define the set of all possible paths between pairwise entities as $\{P_{rel}\}_{r}$. The _k-hop neighborhood_ of entity $e_{i}$, denoted $\mathcal{N}_{k}(e_{i})$, is the unordered set of entities $E_{\mathcal{N}_{k}}=\{e_{k},\dots,e_{j}\}\subseteq{\mathcal{E}}$ enclosed in a $k$-hop radius around $e_{i}$. We denote the relation-less adjacency matrix of entities in $\mathcal{N}_{k}(e_{i})$ as $\mathbf{A}_{E_{\mathcal{N}_{k}}}\in\mathbb{R}^{|E_{\mathcal{N}_{k}}|\times|E_{\mathcal{N}_{k}}|}$. For a relational graph $\mathcal{G^{\prime}}$, we define a relation-less adjacency matrix $\mathbf{A}_{E}$ as: $\displaystyle\mathbf{A}_{E}=\sum_{r\in{\mathcal{R}}}\mathbf{A}_{E(r)},\quad\mathbf{A}_{E}\in\mathbb{R}^{|E_{{\mathcal{E}}}|\times|E_{{\mathcal{E}}}|}.$ (3) The adjacency matrix $\mathbf{A}$ has a certain order of appearance of entities in columns and rows, which is arbitrarily set by the order in which dataset tuples are sampled. We let $\widetilde{\mathbf{A}}$ be the equivalent adjacency matrix with a permuted order of entities in columns and rows (e.g., $e_{1},e_{2},e_{3},e_{4}$ can become $e_{2},e_{3},e_{1},e_{4}$). The local clustering coefficient $c_{e_{i}}$ Watts and Strogatz (1998) of $\mathcal{N}_{k}(e_{i})$ measures the proportion of closed triangles in the local $k$-hop neighborhood of an entity such that: $\displaystyle c_{e_{i}}=\frac{|\{(e_{1},e_{2}):e_{1},e_{2}\in\mathcal{N}_{k}(e_{i}),\ e_{1}\text{ and }e_{2}\text{ connected}\}|}{\binom{d_{e_{i}}}{2}},$ (4) where $d_{e_{i}}=\sum_{e_{j}\in\mathcal{N}_{k}(e_{i})}\mathbf{A}[e_{i},e_{j}]$ is the node degree of $e_{i}$. For the pretraining task in section 3.3, we define the probability that an entity $e_{i}$ is found in a $k$-hop neighborhood (probability of occurrence) $O(e_{i})$ as: $\displaystyle O(e_{i})=\frac{d_{e_{i}}}{\sum_{e_{j}\in E_{\mathcal{N}_{k}}}d_{e_{j}}},\text{ and }\mathbf{O}_{\mathcal{N}_{k}}=\begin{bmatrix}O(e_{1})\\\ \vdots\\\ O(e_{|{\mathcal{E}}|})\end{bmatrix}$ (5)

## Appendix B More details on ip Algorithms

Algorithm 2: Finding all relations incident to $r$ in ${\mathcal{G}}^{\prime}$.

Function FindIncidentRelations($r$, $P_{rel}$):
1. $R^{\prime}\leftarrow\emptyset$
2. for $r^{\prime}\in{\mathcal{R}}\backslash\{r\}$ do
3.   if $E(r^{\prime})\cap E(r)\neq\emptyset$ and $r^{\prime}\notin P_{rel}$ then
4.     $R^{\prime}\leftarrow R^{\prime}\cup\{r^{\prime}\}$
5.   end if
6. end for
7. return $R^{\prime}$

Algorithm 3: Finding $r\in R$ with top-$k$ entropy $H(r)$ or with bottom-$k$ conditional entropy $H(r|r^{\prime})$.

Function TopEntropy($k$, $R$):
1. Compute $H(r)$ for all the relations $r\in R$ (Eq. (1))
2. return the set of $r\in R$ with the top-$k$ highest $H(r)$

Function BottomCondEntropy($k$, $R^{\prime}$, $r$):
1. Compute $H(r|r^{\prime})$ for all $r^{\prime}\in R^{\prime}$ (Eq. (2))
2. return the set of $r^{\prime}\in R^{\prime}$ with the $k$ lowest $H(r|r^{\prime})$

## Appendix C More details on Experimental Set-Up

We use fb15k-237 Bollacker et al. (2008) (extracted from Freebase) and wn18rr Dettmers et al. (2018). We also test our method on a hypergraph link prediction task based on the jf17k Wen et al.
(2016) dataset. Table 4 summarizes dataset properties. Dataset | $|\mathcal{E}|$ | $|\mathcal{R}|$ | #train | #valid | #test | density ---|---|---|---|---|---|--- wn18rr | 40,943 | 11 | 86,835 | 3,034 | 3,134 | 2.1 fb15k-237 | 14,541 | 237 | 272,115 | 17,535 | 20,466 | 18.7 jf17k | 29,177 | 327 | 61,911 | 15,822 | 24,915 | 35.9 Table 4: Dataset Statistics. The individual pretraining signals seem to lower precision (H@1) and increase recall (H@10) compared to kg-trsf from scratch. Further, individual pretraining signals have a larger positive effect as graph density increases. Individual pretraining signals typically show MRR improvements for jf17k, the densest of the KGs. #### Training and Evaluation MRR is the Mean Reciprocal Rank and H@(1,3,10) are HIT@ measures all commonly used in link prediction Mohamed et al. (2019). For all tasks, we use the same autoregressive Transformer model that applies a transformation $f$ on the input and that is optimized with different loss functions. Our Transformer model is a single monolithic architecture that uses a masking scheme similar to UniLM Dong et al. (2019), allowing the model to play a variety of roles (encoder-only, decoder-only, or encoder-decoder) and tackle a variety of objectives (classification, regression or sequence generation). In our experiments, we have used $L=12$ Transformer layers, with $D=256$ hidden dimension, $A=12$ self-attention heads, an intermediate layer of size $2D$. For path algorithms, we truncate our path sequences (max hops) to $l=4$, and, for neighborhood-based algorithms, we limit $k=3$. The pretraining data generated from each dataset is applied to separate model instances. For each dataset, we save the weights of the pretrained model that performs best on the evaluation set of the link prediction task. During link prediction finetuning, we use a dropout rate $\rho\in\\{0.2,0.3,0.5\\}$, a label smoothing rate $\zeta\in\\{0.6,0.7,0.8,0.9\\}$ and a learning rate of $\eta=5^{-4}$. In our multitask setting (all), each task is sampled from the uniform distribution $\frac{1}{|D_{t}|}$, for dataset $D_{t}$ of a pretraining task $t$. A batch may therefore contain multiple tasks. We weight the losses of tasks from $\tau=\\{\textsl{{sp}},\textsl{{ip}},\textsl{{khn}},\textsl{{lcc}},\textsl{{iva}}\\}$ according the available pretraining dataset size such that $L_{\textsl{{all}}}=\sum_{t\in\tau}\alpha_{t}L_{t}$, where the $\alpha_{t}=|D_{t}|/\sum_{t^{\prime}\in\tau}|D_{t}^{\prime}|$. For khn, note that if $|E_{\mathcal{N}_{k}-1}|>1024$, we clip the token sequence for a maximum length of 1024. At inference, each entity in $Q_{r}$ is masked and evaluated once. We use two evaluation metrics: HIT@$n$ and Mean Reciprocal Rank (MRR). ## Appendix D More details on Baselines In table 2, we compare our method against strong and recent KG Embedding methods. boxe Abboud et al. (2020) is a spatio-translational graph embedding model that uses logical rules (similar to paths) and that can support hierarchical inference patterns and higher-arity relations (knowledge hypergraphs or KHG). This is one of the rare methods that can be applied to both graphs and hypergraphs. boxe achieves state-of-the-art results on jf17k while remaining competitive on fb15k-237 and wn18rr. It is unclear however if boxe is pretrainable111In table 2, we wrote “N/A” since we are not certain if pretraining is applicable.. To our knowledge, drgi Liang et al. (2021) is the only other KG completion method that was pretrained using only signals from the graph structure. 
Similarly to Deep Graph Infomax Veličković et al. (2019), drgi is pretrained on artifacts of the graph structure by maximizing the mutual information between local and global graph representations. We were not able to ascertain at the moment if drgi is applicable to KHGs. Both lp-bert Li et al. (2022) and star Wang et al. (2021) are Transformer-based Contextualized KG Embedding methods. Both techniques are improvements over kg-bert Yao et al. (2019) that uses textual descriptions of entities and relations on top of a bert model, which is pretrained on a large text corpus222The text pretraining data is external data and not self-contained to the KG data. lp-bert also uses a multitask pretraining step by jointly training on three denoising tasks: random word masking in the descriptions, entity masking, and relation masking. star combines kg-bert and TransE Bordes et al. (2013). star contextualizes the translation function by embedding the head and relation descriptions with bert. The authors claim that the technique is structure-aware since translation-based graph embedding approaches conduct structure learning by measuring spatial distance. Note that KHG tuples in jf17k can have up to 6 entities. In most cases, the description of all entities in a tuple exceeds the standard bert maximum sequence length of 1024. For this reason, we were not able to apply lp-bert and star to jf17k.
# Binding energies and thermal width $\Gamma$ of the QGP at finite quark chemical potential Siddhartha Solanki, Manohar Lal, and Vineet Kumar Agotiya Department of Physics, Central University of Jharkhand, Ranchi, India, 835 222. Corresponding author id-<EMAIL_ADDRESS> ###### Abstract The present work investigates the properties of heavy quarkonia in the presence of a finite quark chemical potential for different numbers of flavors by using the quasi-particle approach. The effect of the finite quark chemical potential has been incorporated through the quasi-particle Debye mass to examine the binding energies of the quarkonium states. From the imaginary part of the potential we have calculated the thermal width of the ground state of the quarkonia and found that the thermal width increases with the finite quark chemical potential. The dissociation temperature $(T_{D})$ of the $J/\psi$ and $\Upsilon$ has been calculated in the presence of a finite quark chemical potential for different flavors. The effect of the finite quark chemical potential on the mass spectra of the quarkonium states has also been studied. quasi-particle Debye mass, Heavy quark complex potential, Number of flavors, quark chemical potential, thermal width, dissociation temperature. ## I Introduction The ongoing heavy-ion collision experiments at the Relativistic Heavy Ion Collider (RHIC) at BNL, USA, and the LHC at CERN, Switzerland, are of great importance for exploring the phase diagram of Quantum Chromodynamics (QCD) after the discovery of the Quark Gluon Plasma (QGP), the fourth state of matter. The study of the fundamental forces between quarks and gluons is essential for the understanding of QCD, and different phases are expected to appear in the T-$\mu$ plane as the temperature and the finite quark chemical potential are varied over low and high scales. For instance, at small or vanishing temperature quarks and gluons are confined by the strong force, while at high temperature asymptotic freedom suggests a rather different QCD medium consisting of weakly coupled deconfined quarks and gluons, the so-called QGP. The anomalous suppression of $J/\psi$ production in heavy-ion collisions, observed experimentally M.C.Abreu as a depletion of the dilepton multiplicity in the invariant-mass region corresponding to the $J/\psi$ meson, was proposed long ago as a possibly unmistakable sign of the onset of deconfinement. Matsui and Satz T.Matsui argued that charmonium states generated before the formation of a thermalized QGP would tend to melt on their way through the deconfined medium, because the binding Coulomb potential is screened by the large number of color charges, thereby producing an anomalous drop in the $J/\psi$ yields. The pair develops into the physical resonance during the formation time and passes through the plasma and the hadronic matter before it leaves the interacting system and decays into a dilepton to be detected. Figure 1: The variation of the potential with distance $r$ for the quasi-particle, leading-order and lattice-parameterized Debye masses at $N_{f}=0$, $2$, $3$. Figure 2: The variation of the quasi-particle Debye mass with the finite quark chemical potential at different values of temperature 2(a), and with temperature at different values of the finite quark chemical potential 2(b).
Figure 3: Shows the variation of binding energy of $J/\psi$ and $\Upsilon$ with the temperature in figures 3(a) and 3(b) for different flavors at $N_{f}$ = $2$ and $3$. This long ’trek’ within the interacting system is fairly dangerous for the $J/\psi$. Even before the resonance occurs, it may be absorbed by the nucleons streaming past it C.Gerschel . By the time the quarkonium resonance is formed, the screening of the color forces in the plasma may be sufficient to inhibit a binding of the charmonium T.Matsui or an energetic gluon X.M.Xu or a comoving hadron could dissociate the resonance. Quarkonia at finite temperature is an important tool for the studying QGP formation in heavy-ion collisions. Many effort have been devoted to determine the $T_{D}$ of quarkonium state in the deconfined medium, using either lattice calculations of $Q\bar{Q}$ spectral function or non-relativistic calculations based upon some effective screened potential. Lattice studies are directly based on QCD and these studies answer most of the questions that arises while studying the QCD phase diagram. However, in lattice studies the spectral function must be extracted using rather limited sets of data from the Euclidian (imaginary time) correlators, which are directly based on the lattice. This along with the intrinsic technical difficulties of lattice calculations, limit the reliability of the results obtained so far, and also their scope is essentially limited to the mass of the ground states in each quarkonium channel. Potential models, on the other hand, provide a simple and intuitive framework to study the properties of quarkonium at finite temperature, from which quantities can be calculated that are beyond the current possibilities for lattice studies. Umeda and Alberico T.Umeda ; W.M.Alberico(2008) have shown that the lattice computations of mesons correlators at finite temperature contain a constant contribution, due to the presence of zero modes in the spectral functions. The problem of dissociation of bound states in a hot QCD medium is of great importance in heavy-ion collisions as it provides evidence for the creation of the QGP Leitch . The physical understanding of the quarkonium dissociation within a deconfined medium has undergone several refinements in the last couple of years Laine:2008cf ; BKP:2000 ; BKP:2001 ; BKP:2002 ; BKP:2004 . As the heavy quark and anti-quark in a quarkonia state are bound together by almost static (off-shell) gluons, therefore, the issue of their dissociation boils down to how the gluon self-energy behaves at high temperature. It has been noticed that the gluon self-energy has both real and imaginary parts laine . Note that the real part lead to the Debye screening, while the imaginary part leads to Landau damping and give rise the thermal width to the quarkonia. It indeed provides a useful way to examine quarkonium binding energies, quarkonium wave functions, reaction rates, transition rates, and decay widths. It further allows the extrapolation to the region of high temperatures by expressing screening effects reflecting on the temperature dependence of the potential and finite quark chemical potential. The effects of dynamics of quarks on the stability of quarkonia can be studied by using finite quark chemical potential extracted from thermodynamic quantities that are computed in full QCD. 
At high temperatures, the deconfined phase of QCD exhibits screening of static color-electric fields E.V.Shuryak ; GPY ; it is, therefore, expected that the screening will lead to the dissociation of quarkonium states. After the success at zero temperature while predicting hadronic mass spectra, potential model descriptions have been also applied to understand quarkonium properties at finite temperature and finite quark chemical potential. It is well known that the production of $J/\psi$ and $\Upsilon$ mesons in hadronic reactions occur in part via the production of higher excited $c\bar{c}$ (or $b\bar{b}$) states and they decay into the respective ground state. Since the lifetime of different quarkonium state is much larger than the typical lifetime of the medium produced in nucleus- nucleus collisions; their decay occurs almost completely outside the produced medium lain ; he1 . Since the produced medium can be probed not only by the ground state quarkonium but also by different excited quarkonium states. So, the potential model in this context could be helpful in predicting for the binding energies of various quarkonia state by setting up and solving appropriate Schrodinger equation in the hot QCD medium. The first step towards this is to model an appropriate medium dependent interquark interaction potential at finite temperature and finite quark chemical potential. Thereafter the dissociation of excited states of quarkonia has been studied. The present work is different from the work U.Kakade in which the authors have studied the effect of baryonic chemical potential on the Quarkonium states using leading order Debye mass. In the current work we have started with the modified form of the potential V.Agotiya:2009 in which the effect of the finite quark chemical potential has been incorporated through the quasi- particle Debye mass. The energy eigen values of the ground states of the charmonium and bottomonium for different flavors has been obtained by solving the Schrodinger equation. Further we have studied the effect of the finite quark chemical potential on the thermal width and the dissociation temperature of the quarkonium states. The present manuscript is organized in the following manner: a brief description about the real and the imaginary part of the heavy quark potential is given in section-II. Whereas in section-III the quark finite quark chemical potential is introduced into the the quasi-particle Debye mass. In Section-IV, we examine the binding energy and $T_{D}$ of various quarkonium states at different values of flavor and the finite quark chemical potential. The effect of the finite quark chemical potential on the mass spectra of the quarkonium states is evaluated in the section-V. In section-VI we discuss the result of present work. Finally, in section-VII, we conclude with the future prospects of the present work. Figure 4: Shows the variation of binding energy of $J/\psi$ and$\Upsilon$ with the temperature in the figures 4(a) and 4(b) for different values of the finite quark chemical potential at $N_{f}=3$. Figure 5: Shows the variation of Binding energy of $\psi^{\prime}$ and $\Upsilon^{\prime}$ with the temperature in the figures 5(a) and 5(b) for different values of the finite quark chemical potential at $N_{f}$= $3$. Figure 6: Mass spectra of $J/\psi$ and $\Upsilon$ with the temperature in the figures 6(a) and 6(b) for different masses has been shown. 
## II Heavy quark complex potential

### II.1 Real Part of the potential

A quantitative understanding of the bound-state properties requires the exact potential at finite temperature, which should be derived directly from QCD, just as the Cornell potential at zero temperature has been derived in pNRQCD from the zeroth-order matching coefficient. Such derivations at finite temperature for a weakly-coupled plasma have recently appeared in the literature Brambilla05 ; Brambilla08 , but they are plagued by the existence of temperature-driven hard as well as soft scales, $T$, $gT$, $g^{2}T$, respectively. Due to these difficulties in the finite-temperature extension of effective field theories, lattice-based potentials have become popular. However, neither the free energy nor the internal energy can be directly used as the potential. Which screened potential should be used in the Schrödinger equation to describe the bound states at finite temperature well is still an open question. Potential-model-based phenomenology as well as lattice QCD approaches infer that the quark-antiquark interaction potential in the presence of a medium plays a crucial role in understanding the nature of the quark-antiquark bound state in the hot QCD/QGP medium. The potential employed is commonly a screened Coulomb (Yukawa) form Brambilla05 ; L.Kluberg . In the case of finite-temperature QCD we employ the ansatz that the medium modification enters through the Fourier transform of the heavy quark potential $V(k)$ as V.Agotiya:2009 $\tilde{V}(k)=\frac{V(k)}{\varepsilon(k)}$ (1) where $\varepsilon(k)$ is the dielectric permittivity, obtained from the static limit of the longitudinal part of the gluon self-energy R.A.Schneider ; H.A.Weldon , $\varepsilon(k)=\left(1+\frac{\pi_{L}(0,k,T)}{k^{2}}\right)\equiv\left(1+\frac{m^{2}_{D}(T,\mu)}{k^{2}}\right)$ (2) $V(k)$ is the Fourier transform of the Cornell potential, given as $\mbox{\boldmath$V$}(k)=-\sqrt{\frac{2}{\pi}}\frac{\alpha}{k^{2}}-\frac{4\sigma}{\sqrt{2\pi}k^{4}}$ (3) Substituting Eq.(2) and Eq.(3) into Eq.(1) and taking the inverse Fourier transform, we obtain the medium-modified potential as a function of $r$ V.Chandra:2007 ; R.A.Schneider ; A.Ranjan $\mbox{\boldmath$V$}(r,T,\mu)=\left(\frac{2\sigma}{m^{2}_{D}(T,\mu)}-\alpha\right)\frac{\exp(-m_{D}(T,\mu)r)}{r}-\frac{2\sigma}{m^{2}_{D}(T,\mu)r}+\frac{2\sigma}{m_{D}(T,\mu)}-\alpha m_{D}(T,\mu)$ (4) In addition to the standard Yukawa term, the potential $\mbox{\boldmath$V$}(r,T,\mu)$ given by Eq.(4) also has a long-range Coulomb tail. The constant terms could appear naturally while performing the basic computation of the real-time static potential in hot QCD; they are required to yield the correct limit of $\mbox{\boldmath$V$}(r,T,\mu)$ M.Laine as T$\longrightarrow$0 and can also be found from the real- and imaginary-time correlators in the thermal QCD medium A.Beraudo . In the limiting case $r\gg\frac{1}{m_{D}}$, the potential in Eq.(4) reduces to $V(r,T,\mu)\approx-\frac{2\sigma}{m^{2}_{D}(T,\mu)r}-\alpha m_{D}(T,\mu)$ (5) Figure 7: Variation of the $J/\psi$ and $\Upsilon$ thermal width $\Gamma$ with temperature in figures 7(a) and 7(b) for different finite quark chemical potentials. Figure 8: The real and imaginary binding energies of the $J/\psi$ and $\Upsilon$ for different finite quark chemical potentials at $N_{f}=0$ in figures 8(a) and 8(b).
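For numerical work with Eq. (4), the following is a minimal sketch (not the authors' code) of the real part of the medium-modified potential; the values of $\alpha$, $\sigma$ and the Debye mass used in the call are illustrative placeholders, and in the present setup $m_{D}$ would be taken from the quasi-particle expression of Section III.

```python
# Illustrative sketch: real part of the medium-modified potential, Eq. (4).
# alpha, sigma and m_D below are placeholder numbers, not fitted values.
import numpy as np

def V_real(r, m_D, alpha, sigma):
    """Eq. (4): screened Cornell potential (GeV), with r in GeV^-1."""
    return ((2.0 * sigma / m_D**2 - alpha) * np.exp(-m_D * r) / r
            - 2.0 * sigma / (m_D**2 * r)
            + 2.0 * sigma / m_D
            - alpha * m_D)

r = np.linspace(0.1, 5.0, 5)                 # GeV^-1
print(V_real(r, m_D=0.7, alpha=0.5, sigma=0.2))
```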
### II.2 Imaginary Part of the potential

The heavy-quark complex potential contains both a real and an imaginary part, obtained from the gluon self-energy, which is responsible for Debye screening and Landau damping, respectively. The Debye screening is obtained from the retarded and advanced self-energy propagators, whereas the static limit of the symmetric self-energy is used to calculate the imaginary part of the potential. The imaginary part of the potential is important for studying the threshold enhancement of the bound states and the thermal width of the charmonium and bottomonium resonances. This thermal width, entering the spectral function, is then used to determine the $T_{D}$ of the quarkonium resonances. From previous studies AgnesMocsy ; M.LaineJHEP2007 it is clear that the dissociation of a quarkonium state takes place when its thermal width becomes equal to twice its binding energy.

Figure 9: Real and imaginary binding energies of the $J/\psi$ and $\Upsilon$ for different values of the quark chemical potential at $N_{f}$= $2$ in figures 9(a) and 9(b).
Figure 10: Real and imaginary binding energies of the $J/\psi$ and $\Upsilon$ for different values of the quark chemical potential at $N_{f}$= $3$ in figures 10(a) and 10(b).

The symmetric propagator, which gives the imaginary part, is: $\mathrm{Im}\,D_{F}^{00}(0,p)=\frac{-2\pi{T}{m_{D}^{2}}}{p\left(p^{2}+m_{D}^{2}\right)^{2}}$ (6) Eq.(6) gives the imaginary part of the inverse dielectric function as $\mathrm{Im}\,\epsilon^{-1}(p)=-\pi{T}{m_{D}^{2}}\,\frac{p^{2}}{p\left(p^{2}+m_{D}^{2}\right)^{2}}$ (7) Thus, the imaginary part of the potential is obtained from $V(r,T,\mu)=\int\frac{d^{3}p}{\left(2\pi\right)^{\frac{3}{2}}}\left(e^{i\mathbf{p}\cdot\mathbf{r}}-1\right)\frac{V(p)}{\epsilon(p)}$ (8) This implies $\mathrm{Im}V\left(r,T,\mu\right)=\int\frac{d^{3}p}{\left(2\pi\right)^{\frac{3}{2}}}\left(e^{i\mathbf{p}\cdot\mathbf{r}}-1\right)\left(-\sqrt{\frac{2}{\pi}}\frac{\alpha}{p^{2}}-\frac{4\sigma}{\sqrt{\pi}{p^{4}}}\right)\times p^{2}\left(\frac{-\pi^{2}{T}{m_{D}^{2}}}{p\left(p^{2}+m_{D}^{2}\right)^{2}}\right)$ (9) Performing the integration in Eq.(9), the contributions of the Coulomb and string terms become $\mathrm{Im}V\left(r,T,\mu\right)=-2{\alpha}{T}\int_{0}^{\infty}\frac{dz}{{(z^{2}+1)}^{2}}\left(1-\frac{\sin(z\hat{r})}{z\hat{r}}\right)+\frac{4\sigma{T}}{m_{D}^{2}}\int_{0}^{\infty}\frac{dz}{{(z^{2}+1)}^{2}}\left(1-\frac{\sin(z\hat{r})}{z\hat{r}}\right)$ (10) where $z=\frac{p}{m_{D}}$ and $\hat{r}=m_{D}\,r$. This can be further simplified to $\mathrm{Im}V\left(r,T,\mu\right)=-{\alpha}{T}\phi_{0}\left(\hat{r}\right)+\frac{2\sigma{T}}{m_{D}^{2}}\psi_{0}\left(\hat{r}\right)$ (11) where $\phi_{0}\left(\hat{r}\right)=-{\alpha}{T}\left(\frac{{\hat{r}}^{2}}{9}\left(-4+3\gamma_{E}+3\log(\hat{r})\right)\right)$ and $\psi_{0}\left(\hat{r}\right)=-\frac{{\hat{r}}^{2}}{6}+\left(\frac{-107+60\gamma_{E}+60\log(\hat{r})}{3600}\right){\hat{r}}^{4}+O({\hat{r}}^{5})$. In the limit $\hat{r}\ll 1$ we have $\mathrm{Im}V\left(r,T,\mu\right)=-T\left(\frac{\alpha{\hat{r}}^{2}}{3}+\frac{\sigma{\hat{r}}^{4}}{30m_{D}^{2}}\right)\log\left(\frac{1}{\hat{r}}\right)$ (12) Using Eq.(12), we now calculate the thermal width $\Gamma$ of the 1S resonance state.
This is done by folding Im$V$ with the unperturbed 1S Coulomb wave function, which gives prd2016_vin , $\Gamma=\left(\frac{4T}{\alpha{m_{Q}^{2}}}+\frac{12\sigma{T}}{\alpha^{2}m_{Q}^{4}}\right)m_{D}^{2}\log\frac{\alpha{m_{Q}}}{2m_{D}}$ (13)

## III Quasi-particle Debye mass in the presence of finite quark chemical potential

The perturbative leading-order Debye mass in the QCD coupling at high temperature has been known for a long time A.Rebhan . A non-perturbative and gauge-independent definition of the Debye mass in QCD was given in E.Braaten . The Debye mass was also calculated from two Polyakov loops by Braaten and Nieto at high temperature Y.Burnier . The basic definition of the Debye mass is itself problematic because of the gauge-variant nature of the electric correlators K.Kajantie . To overcome this problem many approaches have been proposed K.Kajantie ; Anbazavov ; S.Nadkarni . To account for the interactions among the quasi-partons, a number of attempts have been made, such as the effective mass model V.Goloviznin ; A.Peshier , the effective mass model with Polyakov loop M.D.Elia , models based on the PNJL and NJL Lagrangians A.Dumitru , and the effective fugacity model V.Chandra:2007 ; V.Chandra:2009 . The quasi-particle model is important for describing the non-ideal behavior of the QGP: the effective masses arise from the surrounding matter in the medium around a parton, and the resulting quasi-parton carries the same quantum numbers as the real particles, i.e., quarks and gluons P.K.Srivastava . Here we use the quasi-particle EoS M.Cheng . The quasi-particle Debye mass ${m_{D}}$ in terms of the temperature and quark chemical potential for the full QCD case is given as,

Table 1: The dissociation temperature (in units of $T_{c}$) of the ${J/\psi}$ and ${\Upsilon}$ at $N_{f}=0$, calculated for different values of the quark chemical potential when the thermal width $\Gamma=2\,\mathrm{B.E.}$
$State$ | $\mu=300$ MeV | $\mu=325$ MeV | $\mu=350$ MeV
---|---|---|---
$J/\psi$ | 0.8628 | 0.8571 | 0.8234
$\Upsilon$ | 0.9307 | 0.9209 | 0.9124

Table 2: The dissociation temperature (in units of $T_{c}$) of the ${J/\psi}$ and ${\Upsilon}$ at $N_{f}=2$, calculated for different values of the quark chemical potential when the thermal width $\Gamma=2\,\mathrm{B.E.}$
$State$ | $\mu=300$ MeV | $\mu=325$ MeV | $\mu=350$ MeV
---|---|---|---
$J/\psi$ | 0.8652 | 0.8596 | 0.8546
$\Upsilon$ | 0.9354 | 0.9299 | 0.9248

$\displaystyle m^{2}_{D}\left(T,\mu\right)=g^{2}(T)T^{2}\left[\frac{N_{c}}{3}\,\frac{6\,\mathrm{PolyLog}[2,z_{g}]}{\pi^{2}}+\frac{\hat{N_{f}}}{6}\,\frac{-12\,\mathrm{PolyLog}[2,-z_{q}]}{\pi^{2}}\right]$ (14) where the value of $\hat{N_{f}}$ is $\displaystyle\hat{N_{f}}=N_{f}+\frac{3}{\pi^{2}}\left(\frac{\mu^{2}}{T^{2}}\right)$ (15) and $\displaystyle\mu=\frac{\mu_{b}}{3}$ (16) U.Kakade , where $\mu$ is the quark chemical potential and $\mu_{b}$ is the baryonic chemical potential.
After introducing the value of $\hat{N_{f}}$ into Eq.(14), we obtain the full expression for the quasi-particle Debye mass in terms of the temperature and quark chemical potential, $m^{2}_{D}\left(T,\mu\right)=T^{2}\left\{\frac{N_{c}}{3}Q^{2}_{g}+\left[\frac{N_{f}}{6}+\frac{1}{2\pi^{2}}\left(\frac{\mu^{2}}{T^{2}}\right)\right]Q^{2}_{q}\right\}$ (17) where $Q_{g}$ and $Q_{q}$ are the effective charges, and the effective fugacities are parametrized as $z_{g,q}=a_{g,q}\exp\left(-\frac{b_{g,q}}{x^{2}}-\frac{c_{g,q}}{x^{4}}-\frac{d_{g,q}}{x^{6}}\right)$ (18) Here $x=T/T_{c}$, and $a$, $b$, $c$ and $d$ are fitting parameters for the equation of state in the quasi-particle description V.Chandra:2007 ; V.Chandra:2009 . The effective charges are $\displaystyle Q^{2}_{g}=g^{2}(T)\frac{6\,\mathrm{PolyLog}[2,z_{g}]}{\pi^{2}},\qquad Q^{2}_{q}=g^{2}(T)\frac{-12\,\mathrm{PolyLog}[2,-z_{q}]}{\pi^{2}}$ (19) Here $g(T)$ is the QCD running coupling constant, $N_{c}=3$ for $SU(3)$, and $N_{f}$ is the number of flavors; the function $\mathrm{PolyLog}[2,z]$ has the form $\mathrm{PolyLog}[2,z]=\sum_{k=1}^{\infty}\frac{z^{k}}{k^{2}}$, and $z_{g}$ and $z_{q}$ are the quasi-gluon and quasi-quark effective fugacities, respectively. These distribution functions are isotropic in nature. Both $z_{g}$ and $z_{q}$ have a complicated temperature dependence and asymptotically approach the ideal value of unity. In the present analysis we use the quasi-particle Debye mass $m_{D}^{QP}$, which depends on the temperature and quark chemical potential, for different numbers of flavors.

## IV Binding energy and dissociation temperature

In this section we study the effect of the finite quark chemical potential on the binding energies and the dissociation temperatures of $c\bar{c}$ and $b\bar{b}$ resonances for different numbers of flavors. To this end we solve the Schrödinger equation for a complete understanding of the quarkonia in the hot QGP medium. At $T=0$ the binding energy of a charmonium or bottomonium state is defined as the energy difference between $m_{Q}$ (the quarkonium mass) and the open charm/bottom threshold, whereas at finite temperature the binding energy is defined as the distance between the continuum threshold and the peak position AgnesMocsy . Hence, solving the Schrödinger equation with the potential given by Eq.(5) yields the energy eigenvalues of the ground and excited states of charmonium and bottomonium, i.e., $J/\psi$, $\psi^{\prime}$, $\Upsilon$ and $\Upsilon^{\prime}$, as: $E_{n}=-\frac{1}{n^{2}}\frac{m_{Q}\sigma^{2}}{m^{4}_{D}}$ (20) The binding energy is observed to decrease with the temperature and also with the quark chemical potential, as shown in figures 3, 4, 5 and 6. When, at a particular temperature, the binding energy of a charmonium or bottomonium state becomes smaller than or equal to the mean thermal energy, the quarkonium state is said to be dissociated at that temperature. The dissociation temperatures of the charmonium and bottomonium states are also discussed in P.Sandin ; prd2016_vin ; prd2018_vin . The dissociation temperature of $J/\psi$ and $\Upsilon$ for different numbers of flavors and different values of the quark chemical potential is obtained from the condition $\Gamma=2\,\mathrm{B.E.}$; the intersection point of the thermal width with twice the binding energy of a quarkonium state ($J/\psi$ or $\Upsilon$) is taken as the dissociation point of that state.
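For concreteness, the following is a minimal Python sketch of Eqs.(17)-(20) together with the dissociation criterion $\Gamma=2\,\mathrm{B.E.}$ built from Eq.(13). It is illustrative only: the running coupling $g(T)$ and the effective fugacities $z_{g}$, $z_{q}$ are left as user-supplied inputs (their quasi-particle parametrizations from Eq.(18) are not reproduced here), the dilogarithm is evaluated by a truncated series, and the values of $\alpha$ and $\sigma$ are placeholders.

```python
import numpy as np

def polylog2(z, terms=200):
    """PolyLog[2, z] = sum_k z^k / k^2, truncated series (valid for |z| <= 1)."""
    k = np.arange(1, terms + 1)
    return np.sum(z**k / k**2)

def debye_mass_sq(T, mu, g, z_g, z_q, N_c=3, N_f=3):
    """Eqs.(17)-(19): quasi-particle Debye mass squared in GeV^2.
    g, z_g, z_q are the running coupling and effective fugacities at this T."""
    Qg2 = g**2 * 6.0 * polylog2(z_g) / np.pi**2
    Qq2 = g**2 * (-12.0) * polylog2(-z_q) / np.pi**2
    return T**2 * ((N_c / 3.0) * Qg2 + (N_f / 6.0 + mu**2 / (2.0 * np.pi**2 * T**2)) * Qq2)

def binding_energy(m_Q, sigma, m_D_sq, n=1):
    """Eq.(20): |E_n| = m_Q * sigma^2 / (n^2 * m_D^4)."""
    return m_Q * sigma**2 / (n**2 * m_D_sq**2)

def is_dissociated(T, mu, m_Q, g, z_g, z_q, alpha=0.471, sigma=0.192, N_f=3):
    """Dissociation criterion Gamma >= 2 * B.E., with Gamma from Eq.(13)."""
    mD2 = debye_mass_sq(T, mu, g, z_g, z_q, N_f=N_f)
    m_D = np.sqrt(mD2)
    gamma = (4.0 * T / (alpha * m_Q**2)
             + 12.0 * sigma * T / (alpha**2 * m_Q**4)) * mD2 * np.log(alpha * m_Q / (2.0 * m_D))
    return gamma >= 2.0 * binding_energy(m_Q, sigma, mD2)
```

Scanning `is_dissociated` over a grid of temperatures for fixed $\mu$ and $m_{Q}$ reproduces, in principle, the kind of dissociation points collected in Tables I-III, once the actual fugacity and coupling parametrizations are supplied.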
Table 3: The dissociation temperature (in units of $T_{c}$) of the ${J/\psi}$ and ${\Upsilon}$ at $N_{f}=3$, calculated for different values of the quark chemical potential when the thermal width $\Gamma=2\,\mathrm{B.E.}$
$State$ | $\mu=300$ MeV | $\mu=325$ MeV | $\mu=350$ MeV
---|---|---|---
$J/\psi$ | 0.8653 | 0.8597 | 0.8547
$\Upsilon$ | 0.9350 | 0.9247 | 0.9258

Table 4: Comparison of the mass spectra of $J/\psi$ and $\Upsilon$ obtained in the present work with the theoretical and experimental data.
$State$ | Present work [GeV] | Exp. mass Tanabashi [GeV] | Error [GeV]
---|---|---|---
$J/\psi$ | 3.1 | 3.096 | 0.00129
$\Upsilon$ | 9.5 | 9.460 | 0.00421

## V Mass Spectra of Quarkonium states

The mass spectra of heavy quarkonia are calculated from the relation $M=2m_{Q}+\mathrm{B.E.}$ (21) i.e., the mass is the sum of twice the quark mass and the binding energy. Substituting the binding energy $E_{n}$ from Eq.(20) into the above equation gives $M=2m_{Q}+\frac{1}{n^{2}}\frac{m_{Q}\sigma^{2}}{m^{4}_{D}}$ (22) where $m_{Q}$ is the heavy quark mass (charm or bottom) and $n$ labels the quarkonium state ($n=1$ for ground states and $n=2$ for excited states).

## VI Results and Discussion

In the present work we have investigated the properties of charmonium and bottomonium at finite temperature and quark chemical potential using the quasi-particle approach. Figure 1 shows the variation of the potential with distance $r$ for different forms of the Debye mass (quasi-particle, leading-order, and lattice-parametrized) at quark chemical potential $\mu$ = $300$ MeV and $T$ = $150$ MeV for different numbers of flavors. The potential is found to increase with the distance $r$. This potential is not similar to the lattice free energy of a heavy quark in the deconfined phase, which is a well-known Coulomb potential H.Satz ; rather, the Cornell potential, which can be treated by a one-dimensional Fourier transform in the hot QCD medium, has a form similar to that used for the study of quarkonium properties (where it is viewed as a color flux-tube structure). The variation of the quasi-particle Debye mass with the quark chemical potential at different temperatures ($150$, $160$ and $170$ MeV) is shown in figure 2(a), and its variation with temperature at different values of the quark chemical potential ($150$, $200$ and $250$ MeV) is shown in figure 2(b). The Debye screening mass at finite baryon density and temperature has been studied by the lattice Taylor expansion method M.Doring . Figure 2(a) shows that the Debye mass increases with the quark chemical potential at the different temperatures considered, while figure 2(b) shows that the Debye mass increases with temperature for the different values of the quark chemical potential. The variation of the binding energy of the $J/\psi$ and $\Upsilon$ with temperature for different numbers of flavors $N_{f}$ = $2$, $3$ is shown in figures 3(a) and 3(b), respectively. As the number of flavors is increased in figure 3, the binding energy decreases at $\mu$ = $100$ MeV. Figures 4 and 5 show the variation of the binding energy of $J/\psi$ and $\psi^{\prime}$ (figures 4(a) and 5(a)) and of $\Upsilon$ and $\Upsilon^{\prime}$ (figures 4(b) and 5(b)), respectively, with temperature at $N_{f}$ = $3$ for different values of the quark chemical potential ($\mu$ = $100$, $110$, $120$ MeV).
From these figures we notice that the binding energies of $J/\psi$, $\psi^{\prime}$, $\Upsilon$ and $\Upsilon^{\prime}$ decrease with temperature as the quark chemical potential is increased, which makes the rate of the exponential decay of the binding energy an interesting feature. This behavior can be understood from the strengthening of Debye screening with increasing quark chemical potential $\mu$, which weakens the inter-quark potential compared to the case of zero quark chemical potential, $\mu=0$. The binding energy at finite temperature and quark chemical potential provides information about the dissociation of the quarkonium (charmonium and bottomonium) states. Since the binding energy is directly proportional to the quark mass, moving towards higher masses, i.e., from the mass scale of the $J/\psi$ ($m_{Q}=1.5$ GeV) to that of the $\Upsilon$ ($m_{Q}=4.5$ GeV), the binding energy increases. The mass spectra of the quarkonium states in the presence of a finite quark chemical potential have also been studied, and their variation with temperature is shown in figure 6. The mass spectra of the $J/\psi$ (figure 6(a)) and $\Upsilon$ (figure 6(b)) are observed to increase as the quarkonium mass increases. We have also compared the masses of the $J/\psi$ and $\Upsilon$ with the experimental data Tanabashi ; this comparison is given in Table IV. The variation of the thermal width of the $J/\psi$ (figure 7(a)) and $\Upsilon$ (figure 7(b)) with temperature is shown for different values of the quark chemical potential ($200$, $250$ and $300$ MeV). The thermal width of the quarkonium states is seen to increase with the quark chemical potential. The real and imaginary parts of the binding energies of the $J/\psi$ (figures 8(a), 9(a) and 10(a)) and $\Upsilon$ (figures 8(b), 9(b) and 10(b)) are shown as functions of $T/T_{c}$ for $N_{f}$ = $0$, $2$, $3$ at $\mu$ = $300$, $325$, $350$ MeV, respectively. From these figures we have extracted the dissociation temperature (from the intersection point of the thermal width with twice the binding energy of the quarkonia) of the $J/\psi$ and $\Upsilon$ for the different flavors $N_{f}$ = $0$, $2$, $3$; the obtained values of the dissociation temperatures are given in Tables I, II and III, respectively. The dissociation temperatures of the $J/\psi$ and $\Upsilon$ at $N_{f}$ = $0$ for the quark chemical potentials $\mu$ = $300$, $325$, $350$ MeV are found to be $0.8628$, $0.8571$, $0.8234$ and $0.9307$, $0.9209$, $0.9124$, respectively (in units of $T_{c}$). Similarly, at $N_{f}$ = $2$ the dissociation temperatures of the $J/\psi$ and $\Upsilon$ for the quark chemical potentials $\mu$ = $300$, $325$, $350$ MeV are $0.8652$, $0.8596$, $0.8546$ and $0.9354$, $0.9299$, $0.9248$, respectively (in units of $T_{c}$). Finally, when the number of flavors is taken as $N_{f}$ = $3$, the dissociation points of the $J/\psi$ and $\Upsilon$ for $\mu$ = $300$, $325$, $350$ MeV were found to be $0.8653$, $0.8597$, $0.8547$ and $0.9350$, $0.9247$, $0.9258$, respectively (in units of $T_{c}$).
## VII Conclusion and future outcomes The current work have determined the quarkonium dissociation behavior in the hot and dense QGP medium, in the presence of the finite quark chemical potential. It has been observed that the Cornell potential with distance ’r’, increases for the different Debye screening. The behavior of binding energy with temperature has also seen by the earlier studies prd2016_vin and with anisotropic parameter at zero finite quark chemical potential in prd2018_vin . Here we consider the finite values of the quark chemical potential. It is noticed that the binding energy for the different quarkonium states decreases with the temperature as we increase the value of finite quark chemical potential. The dissociation temperature of the quarkonium states, on the other hand, increases with the increasing values of the finite quark chemical potential. This can be seen from the Table-I, II and III for number of flavor $N_{f}$= $0$, $2$, $3$ respectively. The maximum error in the mass spectra of the $J/\psi$ and $\Upsilon$ deduced by using Chi square function is $0.0019$ and $0.00421$ (in GeV) respectively. The present work might be helpful in exploring the studies of the compact objects like neutron stars. Since the Compressed Baryonic Matter (CBM) experiment at FAIR is exploring the quark gluon plasma at higher baryon densities, so such type of theoretical studies may contribute to the physics of compact bodies with high baryon densities. ## VIII Acknowledgments One of the author, V.K.Agotiya acknowledge the Science and Engineering Research Board (SERB) Project No. EEQ/2018/000181 New Delhi for the research support in basic sciences. ## IX Declarations ### IX.1 Funding No any other funding. ### IX.2 Conflict of interest The author declare no competing interests. ## References * (1) M. C. Abreu et al, Physics Letters B 477 no.1-3 pp.28-36 (2000). * (2) T. Matsui and H. Satz, Physics Letters B 178 no.4 pp.416-422 (1986). * (3) C. Gerschel and J. Hufner, Physics Letters B 207 pp.253 (1988). * (4) X. M. Xu, D. Kharzeev, H. Satz and X. N. Wang, Physical Review C 53 no.6 pp.3051 (1996). * (5) T. Umeda, Physical Review D 75 pp.094502 (2007). * (6) W. M. Alberico, A. Beraudo, A. De. Pace and A. Molinari, Physical Review D 77 pp.017502 (2008). * (7) M. J. Leitch [PHENIX Collaboration] arXiv: 0806.1244 [nucl-ex]. * (8) M. Laine, Nuclear Physics A 820 pp.25C (2009). * (9) D. Pal, B. K. Patra and D. K. Srivastava, European Physics Journal C 17, pp.179-186, (2000). * (10) Binoy Krishna Patra and D. K. Srivastava, Physics Letters B 505 no.1-4 pp.113-118 (2001). * (11) Binoy Krishna Patra, V. J. Menon, Nuclear Physics A 708 pp.353-364 (2002). * (12) Binoy Krishna Patra and V. J. Menon, The European Physical Journal C 37 pp.115-121 (2004). * (13) Y. Burnier, M. Laine, M. Vepsáláinen, Physics Letters B 678 no.1 pp.86-89 (2009). * (14) E. V. Shuryak, Physics Reports 61 no.2 pp.71-158 (1980). * (15) D. J. Gross, R. D. Pisarki and L. G. Yaffe, Reviews of Modern Physics 53 pp.43 (1981). * (16) M. Laine, Journal of High Energy Physics 04 pp.124 (2011). * (17) M. He, R. J. Fries and R. Rapp, Physics Letters B 701 pp.445 (2011). * (18) N. Brambilla, A. Pineda, J. Soto, and A. Vairo, Reviews of Modern Physics 77 no.4 pp.1423-1496 (2005). * (19) N. Brambilla, J. Ghiglieri, A. Vairo and P. Petreczky, Physical Review D 78 no.01 pp.4017 (2008). * (20) L. Kluberg and H. Satz, https://arXiv:hep-ph/0901.3831. * (21) V. Agotiya et al, Physical Review C 80 no.2 id-025210 (2009). * (22) R. A. 
Schneider, Physical Review D 66 no.3 id-036003 (2002). * (23) H. A. Weldon, Physical Review D 26 no.6 pp.1394-1407 (1982). * (24) V. Chandra, R. Kumar and V. Ravishankar, Physical Review C 76 no.6 id-054909 (2007). * (25) A. Ranjan and V. Ravishankar, https://arXiv:0707.3697. * (26) M. Laine, O. Philipsen, M. Tassler, and P. Romatschke, Journal of High Energy Physics 03 pp.054 (2007). * (27) A. Beraudo, J. P. Blaizot and C. Ratti, Nuclear Physics A 806 no.1-4 pp.312-338 (2008). * (28) M. Laine, O. Philipsen and M. Tassler, Journal of High Energy Physics 09,pp.066 (2007). * (29) A. Rebhan, Physical Review D 48 no.9 pp.R3967 (1993). * (30) E. Braaten and A. Nieto, Physical Review Letters 73 no.18 pp.2402 (1994). * (31) Y. Burnier and A. Rothkopf, Physics Letters B 753 pp.232-236 (2016). * (32) K. Kajantie, M. Laine, J. Peisa, A. Rajantie, K. Rummukainen and M. E. Shaposhnikov, Physical Review Letters 79 no.17 pp.3130 (1997). * (33) Anbazavov, F. Kirsch, P. Petrezky and S. Mukherjee, Physical Review D 91 no.5 id-054503 (2015). * (34) S. Nadkarni, Physical Review D 33 no.12 pp.3738 (1986). * (35) V. Goloviznin and H. Satz, Zeitschrift fur Physik C particlse and fields 57 pp.671-675 (1993). * (36) A. Peshier, B. Kampfer, O. P. Pavlenko, and G. Soff, Physical Review D 54 no.3 pp.2399-2402 (1996). * (37) M. D’Elia, A. Di. Giacomo, and E. Meggiolaro, Physical Review D 67 no.11 id-114504 (2003). * (38) A. Dumitru and R. D. Pisarski, Physics Letters B 525 no.1-2 pp.95-100 (2002). * (39) V. Chandra, V. K. Agotiya and B. K. Patra, https://arXiv:0901.2084v1 (2009). * (40) P. K. Srivastava, S. K. Tiwari and C. P. Singh, Physical Review D 82 no.01 id-014023 (2010). * (41) M. Cheng et al, Physical Review D 77 no.01 id-014511 (2008). * (42) U. Kakade and B. K. Patra, Physical Review C 92 no.2 id-024901 (2015). * (43) A. Mocsy and P. Petreczky, Physics Review Letters 99 no.21 id-211602 (2007). * (44) M. Margotta, K. McCarty, C. McGahan, M. Strick-land and D. Y. Elorriaga, Physical Review D 83 no.10 id-105019 (2011). * (45) L. Thakur, U. Kakade and B. K. Patra, Physical Review D 89 id-094020 (2014). * (46) P. Sandin, M. Ogren and M. Gulliksson, Physical Review E 93 no.3 id-033301 (2016). * (47) V. K. Agotiya, V. Chandra, M. Y. JamaL and I. Nilima, Physical Review D 94 no.9 id-094006 (2016). * (48) M Y Jamal, I Nilima, V. Chandra and V. K. Agotiya, https://arXiv:1805.04763 (2018). * (49) H. Satz, Reports on Progress in Physics 63 pp-1511 https://arXiv:hep-ph/0007069 (2000). * (50) M. Doring, S. Ejiri, O. Kaczmarek, F. Karsch and E. Laermann, Proceedings of Science (XXIIIrd international symposium on lattice field theory) LAT2005 pp.193 (2005). * (51) M. Tanabashi, C. Carone, T. G. Trippe and C. G. Wohl, Phys. Rev. D 98 546-548 (2018).
$\displaystyle+\ln\left(1+\exp\left(y\right)\right)-\ln\left(1+\exp\left(e\left(\boldsymbol{\theta}_{2}\right)\right)\right),$ and $h\left(\tau,\boldsymbol{\theta}\right):=\tau\left[FZ_{\tau}^{sp}\left(q\left(\boldsymbol{\theta}_{1}\right),e\left(\boldsymbol{\theta}_{2}\right),Y\right)-\ln\left(1+\exp\left(Y\right)\right)\right].$ Let $\mathcal{T}:=\left(0,1\right)$ and $\displaystyle B_{1}\left(V\right)$ $\displaystyle:=$ $\displaystyle\sup_{\boldsymbol{\theta}_{1}\in\boldsymbol{\Theta}_{1}}\left|q\left(\boldsymbol{\theta}_{1}\right)\right|,$ $\displaystyle B_{2}\left(V\right)$ $\displaystyle:=$ $\displaystyle\sup_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\left\|q\left(\boldsymbol{\theta}_{1}\right)\nabla_{\boldsymbol{\theta}_{2}}e\left(\boldsymbol{\theta}_{2}\right)\right\|,$ $\displaystyle B_{3}\left(V\right)$ $\displaystyle:=$ $\displaystyle\sup_{\boldsymbol{\theta}\in\boldsymbol{\Theta}_{1}}\left\|\nabla_{\boldsymbol{\theta}_{1}}q\left(\boldsymbol{\theta}_{1}\right)\right\|,$ $\displaystyle B_{4}\left(V\right)$ $\displaystyle:=$ $\displaystyle\sup_{\boldsymbol{\theta}_{2}\in\boldsymbol{\Theta}_{2}}\left\|Y\nabla_{\boldsymbol{\theta}_{2}}e\left(\boldsymbol{\theta}_{2}\right)\right\|,$ $\displaystyle B_{5}\left(V\right)$ $\displaystyle:=$ $\displaystyle\sup_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\left\|\left(e\left(\boldsymbol{\theta}_{2}\right)-q\left(\boldsymbol{\theta}_{1}\right)\right)\nabla_{\boldsymbol{\theta}_{2}}e\left(\boldsymbol{\theta}_{2}\right)\right\|.$ Let $\boldsymbol{\theta}^{1}:=\left(\boldsymbol{\theta}_{1}^{1},\boldsymbol{\theta}_{2}^{1}\right)$ and $\boldsymbol{\theta}^{2}:=\left(\boldsymbol{\theta}_{1}^{2},\boldsymbol{\theta}_{2}^{2}\right)$ be two vectors of parameter values. ###### Lemma 3 (Global Lipschitz continuity) If Assumption 2.4 holds, for all $\left(\tau_{1},\tau_{2}\right)\in\mathcal{T}$, $\left(\boldsymbol{\theta}_{1}^{1},\boldsymbol{\theta}_{1}^{2}\right)\in\boldsymbol{\Theta}_{1}$ and $\left(\boldsymbol{\theta}_{2}^{1},\boldsymbol{\theta}_{2}^{2}\right)\in\boldsymbol{\Theta}_{2}$, given $V=\left(Y,W\right)$, there exists a constant $B_{\tau}^{*}\left(V\right)$ such that $\left|h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)\right|\leq B_{\tau}^{*}\left(V\right)\left(\left\|\tau_{1}-\tau_{2}\right\|+\left\|\boldsymbol{\theta}^{1}-\boldsymbol{\theta}^{2}\right\|\right).$ Proof. We use similar notations and strategy as in proving Lemma 2. 
At first note that $h\left(\tau,\boldsymbol{\theta}\right)$ can be rewritten as $\displaystyle h\left(\tau,\boldsymbol{\theta}\right)$ $\displaystyle=$ $\displaystyle\frac{\exp\left(e\left(\boldsymbol{\theta}_{2}\right)\right)}{1+\exp\left(e\left(\boldsymbol{\theta}_{2}\right)\right)}\left\\{0.5\left[\left|q\left(\boldsymbol{\theta}_{1}\right)-Y\right|+q\left(\boldsymbol{\theta}_{1}\right)-Y\right]+\tau\left(e\left(\boldsymbol{\theta}_{2}\right)-q\left(\boldsymbol{\theta}_{1}\right)\right)\right\\}$ $\displaystyle-\tau\ln\left(1+\exp\left(e\left(\boldsymbol{\theta}_{2}\right)\right)\right).$ Then $\displaystyle h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)$ $\displaystyle=$ $\displaystyle\frac{0.5\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left[\left|q_{1}-Y\right|+q_{1}-Y\right]-\frac{0.5\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left[\left|q_{2}-Y\right|+q_{2}-Y\right]$ $\displaystyle+\left.\left[\frac{\tau_{1}\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left(e_{1}-q_{1}\right)-\frac{\tau_{2}\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left(e_{2}-q_{2}\right)\right.\right.$ $\displaystyle\left.-\left(\tau_{1}\ln\left(1+\exp\left(e_{1}\right)\right)-\tau_{2}\ln\left(1+\exp\left(e_{2}\right)\right)\right)\right]$ We assume $q_{1}\geq q_{2}$ and $\tau_{1}\geq\tau_{2}$. The results for $q_{1}<q_{2}$ and $\tau_{1}<\tau_{2}$ can be proved in a similar manner. We separate the proof into three cases. Cases 1. Suppose $q_{1}\geq q_{2}>Y$. It can be shown that $\displaystyle\left|h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left|\frac{\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}q_{1}-\frac{\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}q_{2}\right|$ $\displaystyle+\left|Y\frac{\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}-Y\frac{\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\right|$ $\displaystyle+\left|\frac{\tau_{1}\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left(e_{1}-q_{1}\right)-\frac{\tau_{2}\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left(e_{2}-q_{2}\right)\right.$ $\displaystyle\left.-\left(\tau_{1}\ln\left(1+\exp\left(e_{1}\right)\right)-\tau_{2}\ln\left(1+\exp\left(e_{2}\right)\right)\right)\right|$ Case 2. Suppose $q_{1}\geq Y\geq q_{2}$. It can be shown that $\displaystyle\left|h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left|\frac{\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}q_{1}-\frac{\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}q_{2}\right|$ $\displaystyle+\left|\frac{\tau_{1}\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left(e_{1}-q_{1}\right)-\frac{\tau_{2}\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left(e_{2}-q_{2}\right)\right.$ $\displaystyle\left.-\left(\tau_{1}\ln\left(1+\exp\left(e_{1}\right)\right)-\tau_{2}\ln\left(1+\exp\left(e_{2}\right)\right)\right)\right|$ Case 3. Suppose $Y>q_{1}\geq q_{2}$. 
It can be shown that $\displaystyle\left|h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle+\left|\frac{\tau_{1}\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left(e_{1}-q_{1}\right)-\frac{\tau_{2}\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left(e_{2}-q_{2}\right)\right.$ $\displaystyle\left.-\left(\tau_{1}\ln\left(1+\exp\left(e_{1}\right)\right)-\tau_{2}\ln\left(1+\exp\left(e_{2}\right)\right)\right)\right|$ Let $\left(\bar{\tau},\bar{q},\bar{e}\right)$ be some middle point between $\left(\tau_{1},q_{1},e_{1}\right)$ and $\left(\tau_{2},q_{2},e_{2}\right)$. Using the mean value theorem, it is straightforward to show that $\displaystyle\left|\frac{\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}q_{1}-\frac{\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}q_{2}\right|$ $\displaystyle\leq$ $\displaystyle\left|\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\right|\left|q_{1}-q_{2}\right|$ (50) $\displaystyle+\left|\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\left(1-\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\right)\right|\left|\bar{q}\left(e_{1}-e_{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left|q_{1}-q_{2}\right|+\left|\bar{q}\left(e_{1}-e_{2}\right)\right|,$ $\displaystyle\left|Y\frac{\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}-Y\frac{\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\right|$ $\displaystyle\leq$ $\displaystyle\left|\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\left(1-\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\right)\right|\left|Y\left(e_{1}-e_{2}\right)\right|$ (51) $\displaystyle\leq$ $\displaystyle\left|Y\left(e_{1}-e_{2}\right)\right|,$ and $\displaystyle\left|\frac{\tau_{1}\exp\left(e_{1}\right)}{1+\exp\left(e_{1}\right)}\left(e_{1}-q_{1}\right)-\frac{\tau_{2}\exp\left(e_{2}\right)}{1+\exp\left(e_{2}\right)}\left(e_{2}-q_{2}\right)\right.$ $\displaystyle\left.-\left(\tau_{1}\ln\left(1+\exp\left(e_{1}\right)\right)-\tau_{2}\ln\left(1+\exp\left(e_{2}\right)\right)\right)\right|$ $\displaystyle\leq$ $\displaystyle\left|\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\bar{e}-\ln\left(1+\exp\left(\bar{e}\right)\right)\right|\left|\tau_{1}-\tau_{2}\right|$ $\displaystyle+\left|\frac{\exp\left(\bar{e}\right)\bar{q}}{1+\exp\left(\bar{e}\right)}\right|\left|\tau_{1}-\tau_{2}\right|+\left|\frac{\bar{\tau}\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\right|\left|q_{1}-q_{2}\right|$ $\displaystyle+\left|\frac{\bar{\tau}\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\left(1-\frac{\exp\left(\bar{e}\right)}{1+\exp\left(\bar{e}\right)}\right)\right|$ $\displaystyle\times\left|\left(\bar{e}-\bar{q}\right)\left(e_{1}-e_{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left(\ln 2+\left|\bar{q}\right|\right)\left|\tau_{1}-\tau_{2}\right|+\left|q_{1}-q_{2}\right|$ $\displaystyle+\left|\left(\bar{e}-\bar{q}\right)\left(e_{1}-e_{2}\right)\right|$ To see why (A.4) holds, let $\varpi\left(x\right)=\exp\left(x\right)\left(1+\exp\left(x\right)\right)^{-1}x-\ln\left(1+\exp\left(x\right)\right)$. It can be shown that $\varpi\left(x\right)$ is monotonically decreasing for $x<0$ and monotonically increasing for $x\geq 0$ and $\varpi\left(0\right)=-\ln 2$, $\lim_{x\rightarrow\infty}\varpi\left(x\right)=0$, and $\lim_{x\rightarrow-\infty}\varpi\left(x\right)=0$. Therefore we can conclude that $\left|\varpi\left(x\right)\right|\leq\ln 2$ for all $x\in\mathbb{R}$. 
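The bound $\left|\varpi\left(x\right)\right|\leq\ln 2$ invoked above can also be verified numerically; the following short script (purely illustrative, not part of the proof) evaluates $\varpi$ on a grid and confirms that its maximum absolute value is attained at $x=0$ and equals $\ln 2$.

```python
import numpy as np

def varpi(x):
    """varpi(x) = x * exp(x) / (1 + exp(x)) - ln(1 + exp(x))."""
    s = np.exp(x) / (1.0 + np.exp(x))        # logistic function
    return x * s - np.log1p(np.exp(x))

x = np.linspace(-30.0, 30.0, 100001)
print(np.max(np.abs(varpi(x))))              # ~ 0.6931 = ln(2), attained at x = 0
print(abs(varpi(0.0)) - np.log(2.0))         # ~ 0, the maximum is at the origin
```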
If Assumption 2.4 holds, it can be shown for (50), $\displaystyle\left|q_{1}-q_{2}\right|+\left|\bar{q}\left(e_{1}-e_{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left|\nabla_{\boldsymbol{\theta}_{1}}q\left(\bar{\boldsymbol{\theta}}_{1}\right)\right|\left\|\boldsymbol{\theta}_{1}^{1}-\boldsymbol{\theta}_{1}^{2}\right\|+\left|q\left(\boldsymbol{\theta}_{1}\right)\nabla_{\boldsymbol{\theta}_{2}}e\left(\boldsymbol{\theta}_{2}\right)\right|\left\|\boldsymbol{\theta}_{2}^{1}-\boldsymbol{\theta}_{2}^{2}\right\|$ $\displaystyle\leq$ $\displaystyle B_{3}\left(V\right)\left\|\boldsymbol{\theta}_{1}^{1}-\boldsymbol{\theta}_{1}^{2}\right\|+B_{2}\left(V\right)\left\|\boldsymbol{\theta}_{2}^{1}-\boldsymbol{\theta}_{2}^{2}\right\|,$ for (51), $\displaystyle\left|Y\left(e_{1}-e_{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle B_{4}\left(V\right)\left\|\boldsymbol{\theta}_{2}^{1}-\boldsymbol{\theta}_{2}^{2}\right\|,$ and for (A.4), $\displaystyle\left(\ln 2+\left|\bar{q}\right|\right)\left|\tau_{1}-\tau_{2}\right|+\left|q_{1}-q_{2}\right|+\left|\left(\bar{e}-\bar{q}\right)\left(e_{1}-e_{2}\right)\right|$ $\displaystyle\leq$ $\displaystyle\left(\ln 2+B_{1}\left(V\right)\right)\left|\tau_{1}-\tau_{2}\right|+B_{3}\left(V\right)\left\|\boldsymbol{\theta}_{1}^{1}-\boldsymbol{\theta}_{1}^{2}\right\|$ $\displaystyle+B_{5}\left(V\right)\left\|\boldsymbol{\theta}_{2}^{1}-\boldsymbol{\theta}_{2}^{2}\right\|.$ Summing the above inequalities together, given $V=\left(Y,W\right)$, we can conclude that $\left|h\left(\tau_{1},\boldsymbol{\theta}^{1}\right)-h\left(\tau_{2},\boldsymbol{\theta}^{2}\right)\right|\leq B^{*}\left(V\right)\left(\left\|\tau_{1}-\tau_{2}\right\|+\left\|\boldsymbol{\theta}^{1}-\boldsymbol{\theta}^{2}\right\|\right),$ where $B^{*}\left(V\right)$ is a constant and is determined by a sum of $\ln 2$ and constants $B_{i}\left(V\right)$, $i=1,\ldots,5$. ### A.5 Constructing the simultaneous confidence bands (scb) with the bootstrap We use the models of linear in parameters of (6) and (7) as an example to illustrate the bootstrap procedures for constructing the simultaneous confidence bands. Let $\boldsymbol{\theta}_{\tau}=\left(\alpha_{1,\tau},\boldsymbol{\beta}_{1,\tau}^{\top},\alpha_{2,\tau},\boldsymbol{\beta}_{2,\tau}^{\top}\right)^{\top}$ be a vector for parameters at the $\tau$-quantile. The bootstrap procedures are summarized as follows. 1. 1. Draw a bootstrap sample of size $n$: $W_{1}^{*},\ldots,W_{n}^{*}$ with replacement, where $W_{i}^{*}=\left(Y_{i}^{*},D_{i}^{*},X_{i}^{*\top},Z_{i}^{*}\right)^{\top},$ $i=1,\ldots,n$, is a vector for the $i$th random draw sample. 2. 2. Re-estimate the weight $\bar{K}$ with the bootstrap sample. Let $\hat{\tilde{K}}_{i}^{*}$, $i=1,\ldots,n$ denote the bootstrap estimated truncated weight evaluated with $\left(Y_{i}^{*},D_{i}^{*},X_{i}^{*}\right)$. 3. 3. With the bootstrap estimated truncated weight $\hat{\bar{K}}_{i}^{*}$, obtain the bootstrap estimated parameters $\hat{\boldsymbol{\theta}}_{\tau,b}^{*}=\left(\hat{\alpha}_{1,\tau,b}^{*},\hat{\boldsymbol{\beta}}_{1,\tau,b}^{*\top},\hat{\alpha}_{2,\tau,b}^{*},\hat{\boldsymbol{\beta}}_{2,\tau,b}^{*\top}\right)^{\top}$ for $\tau\in\left(0,1\right)$. 4. 4. Repeat procedures 1 to 3 $B$ times to obtain $B$ bootstrap estimated parameters $\hat{\boldsymbol{\theta}}_{\tau,1}^{*},\ldots,\hat{\boldsymbol{\theta}}_{\tau,B}^{*}$. With the bootstrap estimated parameters, we use $\hat{\alpha}_{2,\tau}$ as an example to illustrate the procedures to construct the scb as follows. 1. 1. 
Calculate the bootstrap standard deviation of $\hat{\alpha}_{2,\tau}$, $\hat{\sigma}_{\hat{\alpha}_{2,\tau}}^{*}$ with the $B$ bootstrap estimates $\left(\hat{\alpha}_{2,\tau,1}^{*},\ldots,\hat{\alpha}_{2,\tau,B}^{*}\right)$. 2. 2. Calculate the bootstrap $t$ statistic of $\hat{\alpha}_{2,\tau}$: $\hat{t}_{\hat{\alpha}_{2,\tau},b}^{*}=\frac{\sqrt{n}\left(\hat{\alpha}_{2,\tau,b}^{*}-\hat{\alpha}_{2,\tau}\right)}{\hat{\sigma}^{*}_{\hat{\alpha}_{2,\tau}}},$ $b=1,\ldots,B$. 3. 3. For each $b=1,\ldots,B$, calculate the maximal absolute bootstrap $t$ statistic for $\tau\in\left(0,1\right)$: $\hat{t}_{\hat{\alpha}_{2},b}^{*}=\sup_{\tau\in\left(0,1\right)}\left|\hat{t}_{\hat{\alpha}_{2,\tau},b}\right|.$ 4. 4. Let $\hat{t}_{\hat{\alpha}_{2}}^{*(1-g)}$ be the $\left(1-g\right)$th sample quantile of $\hat{t}_{\hat{\alpha}_{2},1}^{*},\ldots,\hat{t}_{\hat{\alpha}_{2},B}^{*}$. The $1-g$ scb of $\hat{\alpha}_{2,\tau}$ is $\left[\hat{\alpha}_{2,\tau}-\hat{t}_{\hat{\alpha}_{2}}^{*(1-g)}\frac{\hat{\sigma}_{\hat{\alpha}_{2,\tau}}^{*}}{\sqrt{n}},\hat{\alpha}_{2,\tau}+\hat{t}_{\hat{\alpha}_{2}}^{*(1-g)}\frac{\hat{\sigma}_{\hat{\alpha}_{2,\tau}}^{*}}{\sqrt{n}}\right].$ (53) ## References * Abadie (2003) Abadie, A. (2003): “Semiparametric instrumental variable estimation of treatment response models,” _Journal of Econometrics_ , 113, 231–263. * Abadie et al. (2002) Abadie, A., J. Angrist, and G. Imbens (2002): “Instrumental Variables Estimates of the Effect of Subsidized Training on the Quantiles of Trainee Earnings,” _Econometrica_ , 70, 91–117. * Angrist et al. (2006) Angrist, J., V. Chernozhukov, and I. Fernández-Val (2006): “Quantile Regression under Misspecification, with an Application to the U.S. Wage Structure,” _Econometrica_ , 74, 539–563. * Angrist et al. (1996) Angrist, J. D., G. W. Imbens, and D. B. Rubin (1996): “Identification of Causal Effects Using Instrumental Variables,” _Journal of the American Statistical Association_ , 91, 444–455. * Belloni et al. (2017) Belloni, A., V. Chernozhukov, I. Fernández-Val, and C. Hansen (2017): “Program Evaluation and Causal Inference With High-Dimensional Data,” _Econometrica_ , 85, 233–298. * Chen et al. (2020) Chen, Y.-T., Y.-C. Hsu, and H.-J. Wang (2020): “A Stochastic Frontier Model with Endogenous Treatment Status and Mediator,” _Journal of Business & Economic Statistics_, 38, 243–256. * Chernozhukov et al. (2013) Chernozhukov, V., I. Fernández-Val, and B. Melly (2013): “Inference on Counterfactual Distributions,” _Econometrica_ , 81, 2205–2268. * Chernozhukov and Hansen (2005) Chernozhukov, V. and C. Hansen (2005): “An IV Model of Quantile Treatment Effects,” _Econometrica_ , 73, 245–261. * Chernozhukov and Hansen (2006) ——— (2006): “Instrumental quantile regression inference for structural and treatment effect models,” _Journal of Econometrics_ , 132, 491–525. * Chernozhukov and Hansen (2008) ——— (2008): “Instrumental variable quantile regression: A robust inference approach,” _Journal of Econometrics_ , 142, 379–398. * Chernozhukov et al. (2007) Chernozhukov, V., C. Hansen, and M. Jansson (2007): “Inference approaches for instrumental variable quantile regression,” _Economics Letters_ , 95, 272–277. * Chernozhukov et al. (2009) ——— (2009): “Finite sample inference for quantile regression models,” _Journal of Econometrics_ , 152, 93–103. * Chou et al. (2020) Chou, R. Y., T.-J. Yen, and Y.-M. 
Yen (2020): “Forecasting Expected Shortfall and Value-at-Risk with the FZ Loss and Realized Variance Measures,” Available at SSRN: https://ssrn.com/abstract=3448882. * Dimitriadis and Bayer (2019) Dimitriadis, T. and S. Bayer (2019): “A joint quantile and expected shortfall regression framework,” _Electron. J. Statist._ , 13, 1823–1871. * Engle and Manganelli (2004) Engle, R. F. and S. Manganelli (2004): “CAViaR: Conditional Autoregressive Value at Risk by Regression Quantiles,” _Journal of Business & Economic Statistics_, 22, 367–381. * Fissler and Ziegel (2016) Fissler, T. and J. F. Ziegel (2016): “Higher order elicitability and Osband’s principle,” _The Annals of Statistics_ , 44, 1680–1707. * Fricke et al. (2020) Fricke, H., M. Frölich, M. Huber, and M. Lechner (2020): “Endogeneity and non-response bias in treatment evaluation – nonparametric identification of causal effects by instruments,” _Journal of Applied Econometrics_ , 35, 481–504. * Frölich and Huber (2017) Frölich, M. and M. Huber (2017): “Direct and indirect treatment effects–causal chains and mediation analysis with instrumental variables,” _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 79, 1645–1666. * Frölich and Melly (2013) Frölich, M. and B. Melly (2013): “Unconditional Quantile Treatment Effects Under Endogeneity,” _Journal of Business and Economic Statistics_ , 31, 346–357. * Heckman et al. (1997) Heckman, J. J., J. Smith, and N. Clements (1997): “Making The Most Out Of Programme Evaluations and Social Experiments: Accounting For Heterogeneity in Programme Impacts,” _The Review of Economic Studies_ , 64, 487–535. * Hill (2015) Hill, J. B. (2015): “Expected Shortfall Estimation and Gaussian Inference for Infinite Variance Time Series,” _Journal of Financial Econometrics_ , 13, 1–44. * Imbens and Angrist (1994) Imbens, G. and J. Angrist (1994): “Identification and Estimation of Local Average Treatment Effects,” _Econometrica_ , 62, 467–75. * Imbens and Rubin (2015) Imbens, G. W. and D. B. Rubin (2015): _Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction_ , Cambridge University Press. * Kianian et al. (2021) Kianian, B., J. I. Kim, J. P. Fine, and L. Peng (2021): “Causal Proportional Hazards Estimation with a Binary Instrumental Variable,” _Statistica Sinica_ , 31, 673–699. * Koenker (2005) Koenker, R. (2005): _Quantile Regression_ , Econometric Society Monographs, Cambridge University Press. * Koenker and Bassett (1978) Koenker, R. and G. Bassett (1978): “Regression Quantiles,” _Econometrica_ , 46, 33–50. * Levy (2016) Levy, H. (2016): _Stochastic Dominance-Investment Decision Making under Uncertainty_ , Springer, 3 ed. * Linton and Xiao (2013) Linton, O. and Z. Xiao (2013): “Estimation of and Inference about the expected shortfall for time series with infinite variance,” _Econometric Theory_ , 29, 771–807. * Meng and Taylor (2020) Meng, X. and J. W. Taylor (2020): “Estimating Value-at-Risk and Expected Shortfall using the intraday low and range data,” _European Journal of Operational Research_ , 280, 191 – 202. * Newey (1991) Newey, W. K. (1991): “Uniform Convergence in Probability and Stochastic Equicontinuity,” _Econometrica_ , 59, 1161–1167. * Newey (1997) ——— (1997): “Convergence rates and asymptotic normality for series estimators,” _Journal of Econometrics_ , 79, 147 – 168. * Newey and McFadden (1994) Newey, W. K. and D. McFadden (1994): “Chapter 36: Large sample estimation and hypothesis testing,” in _Handbook of Econometrics_ , Elsevier, vol. 
4, 2111–2245. * Patton et al. (2019) Patton, A. J., J. F. Ziegel, and R. Chen (2019): “Dynamic semiparametric models for expected shortfall (and Value-at-Risk),” _Journal of Econometrics_ , 211, 388 – 413. * Powell (1984) Powell, J. L. (1984): “Least absolute deviations estimation for the censored regression model,” _Journal of Econometrics_ , 25, 303–325. * Taylor (2019) Taylor, J. W. (2019): “Forecasting Value at Risk and Expected Shortfall Using a Semiparametric Approach Based on the Asymmetric Laplace Distribution,” _Journal of Business & Economic Statistics_, 37, 121–133. * van der Vaart (1998) van der Vaart, A. W. (1998): _Asymptotic Statistics_ , Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press. * Wei et al. (2021) Wei, B., L. Peng, M.-J. Zhang, and J. P. Fine (2021): “Estimation of causal quantile effects with a binary instrumental variable and censored data,” _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , forthcoming. * Wüthrich (2020) Wüthrich, K. (2020): “A Comparison of Two Quantile Models With Endogeneity,” _Journal of Business & Economic Statistics_, 38, 443–456. Figure 1: Bias, variance and MSE of the estimated conditional CTATE for the compliers under different quantile levels when $\rho=0$. Figure 2: Bias, variance and MSE of the estimated conditional QTE for the compliers under different quantile levels when $\rho=0$. Figure 3: Bias, variance and MSE of the estimated conditional CTATE for the compliers under different quantile levels when $\rho=0.5$. Figure 4: Bias, variance and MSE of the estimated conditional QTE for the compliers under different quantile levels when $\rho=0.5$. Figure 5: CTATE, QTE and the corresponding 95% pointwise and simultaneous confidence bands: Adult men’s earnings. Figure 6: CTATE, QTE and the corresponding 95% pointwise and simultaneous confidence bands: Adult women’s earnings. Figure 7: IQATE and 95% pointwise confidence bands. Upper panel: Adult men’s earnings. Lower panel: Adult women’s earnings.
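As a practical complement to the bootstrap procedure of Appendix A.5, the sketch below outlines the simultaneous-confidence-band construction for a generic quantile-indexed estimate. It is an illustrative outline only: `estimate_model` is a placeholder for the model-specific re-estimation steps (re-estimating the truncated weight and the parameters on each bootstrap sample), which are not shown.

```python
import numpy as np

def bootstrap_scb(data, taus, estimate_model, n_boot=500, level=0.95, seed=None):
    """Simultaneous confidence band for a quantile-indexed scalar estimate.

    data           : array of shape (n, d), rows are observations W_i
    taus           : grid of quantile levels in (0, 1)
    estimate_model : callable(data, taus) -> array of estimates, one per tau
                     (placeholder for the weight and parameter re-estimation)
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    theta_hat = estimate_model(data, taus)                    # estimates on the original sample
    boot = np.empty((n_boot, len(taus)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                      # resample with replacement
        boot[b] = estimate_model(data[idx], taus)
    sigma = boot.std(axis=0, ddof=1)                          # bootstrap std per tau
    t_stats = np.sqrt(n) * np.abs(boot - theta_hat) / sigma   # bootstrap t statistics
    t_max = t_stats.max(axis=1)                               # sup over tau for each draw
    crit = np.quantile(t_max, level)                          # (1 - g) critical value
    half_width = crit * sigma / np.sqrt(n)
    return theta_hat - half_width, theta_hat + half_width     # lower and upper scb
```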
Universal Source Separation with Weakly Labelled Data

Universal source separation (USS) is a fundamental research task for computational auditory scene analysis (CASA), which aims to separate mono recordings into individual source tracks. From the perspective of system design, the audio source separation task poses two main challenges. First, previous audio source separation systems mainly focus on separating one or a limited number of specific sources, such as speech, singing, or musical instruments; there is a lack of research on building a unified system that can separate arbitrary sources via a single model. Second, most previous systems require strongly labelled data, i.e., clean source and mixture tracks, to train a separator. Strongly labelled data is scarce, whereas large-scale weakly labelled or unlabelled audio data has great potential for audio source separation. To address these problems, we propose a universal audio source separation framework containing: 1) an audio tagging model trained on weakly labelled data only, used as a query net for source separation; and 2) a conditional source separation model that can separate arbitrary sound sources according to the mixture and the query. We investigate various query nets, source separation models, and training strategies to verify how they affect the performance of the universal source separation framework. We propose a hierarchical USS strategy to automatically detect and separate sound classes from the AudioSet ontology. Our separation system exhibits strong generalization ability. By solely leveraging AudioSet, with 2 million weakly labelled audio event samples, for model training, our USS system achieves comparable performance on various source separation benchmarks, such as sound event separation, music source separation, and speech enhancement. Namely, it achieves an average signal-to-distortion ratio improvement (SDRi) of 5.57 dB over AudioSet's 527 sound classes; an SDRi of 10.57 dB on the DCASE 2018 Task 2 dataset; an SDRi of 8.12 dB on the MUSDB18 dataset; an SDRi of 7.28 dB on the Slakh2100 dataset; and an SSNR of 9.00 on the Voicebank-Demand dataset. We release the source code at <https://github.com/bytedance/audioset_source_separation>.

Universal source separation, weakly labelled data, AudioSet, computational auditory scene analysis.

§ INTRODUCTION

Source separation is the task of separating mono audio recordings into individual source tracks. An audio recording may consist of several sound events and acoustic scenes. Source separation has been researched for several years and has a wide range of applications, including speech enhancement <cit.>, music source separation <cit.>, and sound event separation <cit.>. Universal source separation (USS) is closely related to the well-known cocktail party problem <cit.>, where sounds from different sources in the world mix in the air before arriving at the ear, requiring the brain to estimate individual sources from the received mixture.
Humans can focus on a particular sound source and discriminate it from others, alternatively described as selective hearing. As a study of auditory scene analysis by computational means, computational auditory scene analysis (CASA) <cit.> systems are machine listening systems that aim to separate mixtures of sound sources in the same way that human listeners do. Different from specified source separation tasks such as speech enhancement or music source separation, a universal source separation system aims to separate the tracks of arbitrary sound sources from a mixture, regarded as a unified system to separate all different sounds. However, many previous works mainly focus on specific source separation that only separate one or a few sources, such as the speech enhancement <cit.> and music source separation <cit.> tasks. USS remains a challenging problem. One difficulty is that there are hundreds of different sounds in the world, and it is difficult to separate all sounds using a unified model. Recently, the USS problem has attracted several interests. A system <cit.> was proposed to separate arbitrary sounds by predicting the masks of sounds. Unsupervised USS systems <cit.> were proposed to separate sounds by mixture invariant training, where training examples are constructed by mixing together existing mixtures, and the model separates them into a variable number of latent sources. A Free Universal Sound Separation (FUSS) dataset and a time-domain convolutional network (TDCN++) were proposed to separate a mixture into up to 4 separate sources. Some other methods use sound embeddings as conditions to separate sources from a mixture <cit.>. A sound selector applies one-hot encodings of sound classes as conditions to separate corresponding sources <cit.>. Weak supervision methods were proposed in <cit.>. SuDoRM-RM systems <cit.> were proposed to separate sound signals. Language-based source separation was proposed in <cit.>. A class conditions system <cit.> was proposed for music source separation. Those systems are mainly trained on small datasets and do not scale to hundreds of sound classes. The standard architecture of deep-learning-based audio source separation model. Left: two types of time-domain separation model. Right: the general type of frequency-domain separation model. Another challenge of universal source separation is that there is a lack of clean source data for a wide range of sound events. For the source separation problem, we define strongly labelled data as audio segments that only contain target sources without other sources. We can mix those clean sources as audio mixtures and train the separators. However, the collection of clean data is time-consuming. For sound classes such as speech, clean data can be recorded in the laboratory. But for many other sounds, the collection of clean data is difficult. Recently, the concept of weakly labelled data <cit.> was used in audio signal processing. In contrast to strongly labelled data, weakly labelled data are only labelled with one or multiple tags, while the time information of tags is unknown. For example, a 10-second audio track is labelled as “thunder” and “rain”, but when these two events exactly appear within this 10-second is not provided. Sometimes, different audio events may interfere with each other. Weakly labelled data has been widely used in audio tagging <cit.> and sound event detection <cit.>. There is a strong potential usage of it in the audio source separation task. 
To address the challenge of lacking strongly labelled data for separating hundreds of sound classes, in this work we propose a USS framework that can be trained with weakly labelled data only. This work is an extension of our previously proposed source separation systems <cit.>. The whole system contains: 1) pretrained audio tagging models that provide both the anchor segments of events in weakly labelled data and the latent embeddings used as source separation conditions; 2) a query-based separation model for conditional audio source separation given source queries; and 3) a pipeline to train USS systems with weakly labelled data. The contributions of this work are as follows:

* We are the first to propose USS systems trained on weakly labelled data only, without using any strongly labelled data. We train the USS system on AudioSet <cit.>.
* We propose an anchor segment mining strategy to detect short segments within long weakly labelled audio recordings. We show that anchor segments are the core components used to constitute mixtures for building the USS system.
* We propose several latent conditioning strategies and show that a pretrained audio tagging system is essential for building USS systems.
* Our proposed USS systems are the first to separate 527 sound classes. We propose a hierarchical USS strategy to separate sources at different AudioSet ontology levels. Our proposed USS system is able to automatically detect and separate the active sound classes of an audio recording without the need to specify the sound classes to separate.
* We show that a single USS system achieves comparable performance on various source separation benchmarks, including sound event separation, music source separation, and speech enhancement tasks. We conduct comprehensive ablation studies to investigate how different factors in our system affect the separation performance.

This article is organized as follows. Section 2 introduces neural-network-based source separation systems. Section 3 introduces our proposed weakly labelled source separation framework. Section 4 presents experiments. Section 5 concludes this work.

§ SOURCE SEPARATION VIA NEURAL NETWORKS

Deep learning methods for audio source separation have already outperformed traditional methods such as non-negative matrix factorization <cit.>. Fig. <ref> shows source separation methods in the time domain (left) and in the frequency domain (right). Here, we introduce the basic methodology of these separation models.

§.§ Time-domain Separation Models

A time-domain separation model $f$ based on neural networks is typically constructed as an encoder-decoder architecture, as shown in the left of Fig. <ref>. Formally, given a mono audio clip $x \in \mathbb{R}^{L}$ and a separation target $s \in \mathbb{R}^{L}$, where $L$ is the number of samples, the separator $f$ follows one of two approaches: synthesis separation and mask separation. Synthesis separation systems such as Demucs V2 <cit.> and Wave-U-Net <cit.> directly estimate the samples of the separation target: $\hat{s} = f(x)$. The encoder is designed to extract the key features for the separated source, and the decoder is designed to reconstruct these features back into the separation samples together with the input features. Mask separation systems such as TasNet <cit.> and ConvTasNet <cit.> predict masks in a latent space; decoders are then designed to reconstruct the separated waveform from the masked latent feature.
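To make the mask-separation formulation above concrete, the following is a minimal PyTorch sketch of a latent-mask time-domain separator. It is a toy illustration only and is not the architecture of TasNet, ConvTasNet, or the separator used in this work; all layer sizes are placeholder choices.

```python
import torch
import torch.nn as nn

class TinyMaskSeparator(nn.Module):
    """Toy encoder-mask-decoder separator operating on raw waveforms."""

    def __init__(self, latent_channels=64, kernel_size=16, stride=8):
        super().__init__()
        # Encoder: 1-D convolution mapping waveform frames to a latent representation
        self.encoder = nn.Conv1d(1, latent_channels, kernel_size, stride=stride)
        # Mask estimator: predicts a (0, 1) mask over the latent representation
        self.mask_net = nn.Sequential(
            nn.Conv1d(latent_channels, latent_channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv1d(latent_channels, latent_channels, 3, padding=1),
            nn.Sigmoid(),
        )
        # Decoder: transposed convolution mapping masked latents back to a waveform
        self.decoder = nn.ConvTranspose1d(latent_channels, 1, kernel_size, stride=stride)

    def forward(self, x):
        # x: (batch, 1, samples)
        latent = self.encoder(x)
        masked = latent * self.mask_net(latent)   # mask separation in the latent space
        return self.decoder(masked)               # estimated source waveform

if __name__ == "__main__":
    mixture = torch.randn(2, 1, 32000)            # two 2-second mono clips at 16 kHz
    estimate = TinyMaskSeparator()(mixture)
    print(estimate.shape)                         # torch.Size([2, 1, 32000])
```

A synthesis-separation variant would simply drop the mask and let the decoder regress the target waveform directly from the encoded representation.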
§.§ Frequency-domain Separation Models

Different from time-domain models, frequency-domain models leverage a spectrogram representation such as the short-time Fourier transform (STFT) to facilitate the separation process. Harmonic features are more discriminative in the frequency domain than in the time domain, which can help improve the separation performance in specific source separation tasks, such as music source separation and environmental sound separation. Formally, given a mono audio clip $x$, we denote the STFT of $x$ as $X \in \mathbb{C}^{T \times F}$, where $T$ is the number of time frames and $F$ is the number of frequency bins. The STFT spectrogram $X$ is complex-valued; we denote the magnitude and the phase of $ X $ as $|X|$ and $\angle S_X$, respectively. The right part of Fig. <ref> shows that a frequency-domain separation system $f$ predicts a mask $ M $, where $ M $ can be an ideal ratio mask (IRM) $M \in \mathbb{R}^{T \times F}$ or a complex mask $M \in \mathbb{C}^{T \times F}$. When using the complex mask $ M $, the complex STFT of the separated source $ \hat{S} \in \mathbb{C}^{T \times F} $ can be calculated by:
\begin{align} \hat{S}=M \odot X \end{align}
Then, the separated source $ \hat{s} \in \mathbb{R}^{L} $ can be obtained by applying an inverse STFT to $ \hat{S} $. Frequency-domain models include fully connected neural networks <cit.>, recurrent neural networks (RNNs) <cit.>, and convolutional neural networks (CNNs) <cit.>. UNets <cit.> are variants of CNNs that contain encoder and decoder layers for source separation. Band-split RNN (BSRNN) <cit.> applies RNNs along both the time and frequency axes to capture time- and frequency-domain dependencies. There is also research, such as hybrid Demucs <cit.>, that combines time- and frequency-domain systems to build source separation systems.

§.§ Challenges of Source Separation Models

As mentioned above, many previous source separation systems are trained with strongly labelled data, which requires clean source tracks. However, the collection of strongly labelled data is difficult and time-consuming. Table <ref> summarizes the datasets that can be used for source separation. On one hand, previous strongly labelled datasets have durations of around tens of hours. On the other hand, weakly labelled datasets are usually larger than strongly labelled and clean datasets. AudioSet <cit.> is a representative weakly labelled dataset containing over 5,800 hours of 10-second audio clips and is larger, in both size and number of sound classes, than strongly labelled datasets. AudioSet has an ontology of 527 sound classes in its released version. The ontology of AudioSet has a tree structure, and each audio clip may contain multiple tags. In this work, we use the weakly labelled AudioSet dataset containing 5,800 hours of audio to train the USS system, which can separate hundreds of sound classes as described in the next section.

Table: Source separation datasets
Dataset | Dur. (h) | Classes | Labels
---|---|---|---
Voicebank-Demand | 18.9 | 1 | Strong
MUSDB18 <cit.> | 6.0 | 4 | Strong
UrbanSound8K | 9.7 | 10 | Strong
FSDKaggle 2018 | — | 41 | Strong
FUSS | — | 357 | Strong
AudioCaps | — | — | Language
AudioSet | 5800 | 527 | Weak

Figure: Left: strongly labelled data of the sound class "Flute". Right: weakly labelled data of the sound class "Air horn, truck horn", which only occurs between 2.5 s and 4.0 s.

Figure: The architecture of our proposed query-based audio source separation pipeline trained from weakly labelled data, including datasets, sampling strategies, audio tagging model, and conditional audio source separation models.
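Returning to the complex-mask formulation $\hat{S}=M \odot X$ above, the following is a minimal PyTorch sketch of applying a time-frequency mask in the STFT domain and reconstructing the waveform by inverse STFT. It is illustrative only: in a real system the mask would be predicted by a separation network rather than drawn at random, and the STFT parameters are placeholder choices.

```python
import torch

def separate_with_complex_mask(x, mask, n_fft=1024, hop=256):
    """Apply a complex T-F mask M to the STFT X of a mono waveform x and
    reconstruct the separated waveform by inverse STFT (S_hat = M * X)."""
    window = torch.hann_window(n_fft)
    X = torch.stft(x, n_fft, hop_length=hop, window=window, return_complex=True)
    S_hat = mask * X                                       # element-wise complex masking
    return torch.istft(S_hat, n_fft, hop_length=hop, window=window, length=x.shape[-1])

if __name__ == "__main__":
    x = torch.randn(1, 32000)                              # batch of mono waveforms
    n_freq, n_frames = 1024 // 2 + 1, 32000 // 256 + 1     # STFT grid (center=True)
    mask = torch.complex(torch.rand(1, n_freq, n_frames),  # placeholder complex mask
                         torch.rand(1, n_freq, n_frames))
    s_hat = separate_with_complex_mask(x, mask)
    print(s_hat.shape)                                     # torch.Size([1, 32000])
```

Using an ideal ratio mask instead simply amounts to passing a real-valued mask in $[0,1]$ of the same time-frequency shape.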
Two audio tagging models for audio classification, sound event detection, and latent feature production. Left: Pretrained Audio Neural Networks (PANN) in the CNN-14 architecture. Right: Hierarchical Token-Semantic Transformer (HTS-AT) in the 4-block architecture. § USS WITH WEAKLY LABELLED DATA §.§ Weakly Labelled Data Different from strongly labelled data containing clean sources of sound classes, weakly labelled data only contains labels of which sound classes are present in an audio recording. Weakly labelled data may contain interference sounds. There is no time stamp information for the sound classes, nor are there clean sources. We denote a weakly labelled audio clip as $ x $ and its tags as $ y \in \{0, 1\}^{K} $, where $ K $ is the number of sound classes. The value $ 1 $ indicates the presence of a sound class and the value $ 0 $ indicates its absence in the audio clip. We denote a weakly labelled dataset as $ D = \{x_{n}, y_{n}\}_{n=1}^{N} $, where $ N $ is the number of training samples in the dataset. The left part of Fig. <ref> shows a strongly labelled audio clip containing the clean waveform of “Flute”. The right part of Fig. <ref> shows a weakly labelled audio clip containing a target sound class “Air horn, truck horn” which only occurs between 2.5 s and 4.0 s. The weakly labelled audio recording also contains unknown interference sounds. The goal of a weakly labelled USS system is to separate arbitrary sounds via a unified model. Fig. <ref> depicts the architecture of our proposed system, containing four steps: * Apply a sampling strategy to sample audio clips from a weakly labelled audio dataset, such as AudioSet. * Apply an anchor segment mining algorithm to localize the occurrence of events/tags in the weakly labelled audio tracks. * Calculate the segment predictions or latent embeddings of anchor segments using pretrained audio tagging models. Mix anchor segments as input mixtures. * Train a query-based separation network to separate the mixture into target sources queried by the source embedding. §.§ Audio Clips Sampling Strategies For a large-scale unbalanced dataset, we apply two sampling strategies: 1) random sampling: randomly sample audio clips from the dataset to constitute a mini-batch; and 2) balanced sampling: sample audio clips from different sound classes to constitute a mini-batch, so that their mixtures contain as many independent sounds as possible. AudioSet is highly unbalanced. Sound classes such as “Speech” and “Music” have almost 1 million audio clips, while sound classes such as “tooth breath” have only tens of training samples. Without balanced sampling, “tooth breath” is much less likely to be selected for training. Following the training scheme of the audio classification systems <cit.>, we apply the balanced sampling strategy to retrieve audio data from AudioSet. We denote a mini-batch of sampled audio clips as $ \{ x_{1}, ..., x_{B} \} $, where $ B $ is the mini-batch size. The anchor segment mining algorithm relies on an audio tagging model to localize the sound events in a given audio track. Since the whole pipeline requires only weakly labelled data, it is preferable to leverage audio tagging models that are trained with weakly labelled data only but are still able to localize the occurrence (i.e., time stamps) of the sound classes. Recently, audio tagging systems trained with the weakly labelled AudioSet <cit.> have outperformed systems trained with strongly labelled data.
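Before turning to the tagging models used for anchor mining, the balanced sampling strategy described above can be sketched as follows; the class-to-clip index and the toy class names are illustrative assumptions rather than part of any released codebase:

```python
import random

def balanced_sample(class_to_clips, batch_size):
    """Balanced sampling: draw `batch_size` distinct sound classes, then one clip per class,
    so rare classes (e.g. "Tooth breath") are as likely to appear as "Speech" or "Music"."""
    classes = random.sample(list(class_to_clips.keys()), k=batch_size)
    return [random.choice(class_to_clips[c]) for c in classes]

# class_to_clips maps each sound class to the clips that carry its tag (built from AudioSet metadata).
class_to_clips = {
    "Speech": ["clip_000", "clip_001", "clip_002"],
    "Music": ["clip_003", "clip_004"],
    "Tooth breath": ["clip_005"],
}
mini_batch = balanced_sample(class_to_clips, batch_size=2)
```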
We apply Pretrained Audio Neural Networks (PANN) <cit.> and the Hierarchical Token-Semantic Audio Transformer (HTS-AT) <cit.> as our audio tagging models to perform the anchor segment mining procedure. Such models are able to extract audio clips with relatively clean sound sources from weakly labelled audio samples. In Section <ref>, we introduce the details of the audio tagging models and the mining algorithm. To extract an embedding from an audio clip, we consider different formulations. The most intuitive embedding is the one-hot class vector; the class probability vector or the latent embedding from audio tagging models are other potential choices. Together with the query-based source separation model, we demonstrate how they work in this separation pipeline in Sections <ref> and <ref>. §.§ Anchor Segments Mining Anchor segment mining is the core part of USS systems with weakly labelled data. Since a weakly labelled audio track does not always contain the labelled sound class throughout its timeline, we need to extract a short audio segment inside this track to create relatively cleaner source data for training the separation model. Formally, given an audio clip $x \in \mathbb{R}^L$, an anchor segment mining algorithm extracts an anchor segment $s \in \mathbb{R}^{L'}$ from $x$, where $L' < L$ is the number of samples of the anchor segment. The center of $s$ is the time stamp where the sound class label is most likely to occur. As shown in Fig. <ref> and illustrated in Algorithm <ref>, an anchor mining algorithm proceeds in three steps: * By the sampling strategy, we sample $n$ audio tracks $\{x_1, x_2, ..., x_n\}$ with the corresponding tags $\{e_1,e_2, ..., e_n\}$ as one input for our system. * Two anchor mining algorithms can be performed to extract the audio clip $s_{i}$ from $x_{i}$: i) random method: randomly select an audio clip from the audio track; or ii) sound event detection (SED) method: apply an audio tagging model to localize the labelled sound class at an anchor position $t$ in the audio track, then extract an audio clip whose center is the anchor position. * To further verify the localization, an optional step can be performed to filter out audio clips from duplicate sound classes, ensuring that the sources in a mixture do not come from the same sound classes. The details of each step are given in Algorithm <ref>. For the SED method of anchor mining, we leverage two audio tagging models, Pretrained Audio Neural Networks (PANN) <cit.> and the Hierarchical Token-Semantic Audio Transformer (HTS-AT) <cit.>, to perform sound event detection on the audio track. These models are also used to construct query embeddings during the separation process. Formally, given an audio track $x$ as input, the audio tagging model produces two main outputs: 1) an event classification vector $v_e \in \mathbb{R}^K$; and 2) a framewise event presence map $m_e \in \mathbb{R}^{T \times K}$, where $T$ is the number of time frames and $K$ is the number of sound classes. The framewise event presence map is usually averaged along the time axis in the last neural network layer to produce the event classification vector. As mentioned in <ref>, both PANN and HTS-AT are trained with the weakly labelled data by minimizing the binary cross entropy loss between the label vector $y \in \{0,1\}^K$ and $v_e$: \begin{equation} \label{eq:aggregate_max} l = - \sum_{k=1}^{K} \left[ y(k) \ln v_e(k) + (1 - y(k)) \ln (1 - v_e(k)) \right].
\end{equation} where $y(k)$ and $v_e(k)$ denote the $k$-th components of $y$ and $v_e$, respectively. After training the audio tagging model, we can not only use it to predict the events of a given audio track, but also weakly predict the events of each time frame via the framewise event presence map $m_e$ (i.e., sound event detection). We denote by $m_e(t,k)$ the probability of the $k$-th sound class at the $t$-th time frame. Then the anchor centre $t_{\text{anchor}}$ for the labelled sound class $k$ is obtained by:
\begin{align}
\label{eq:sed_area}
q_{k}(t) &= \sum_{t'=t - \tau / 2}^{t + \tau / 2} m_e(t', k), \\
t_{\text{anchor}} &= \underset{t}{\text{argmax}} \ q_{k}(t),
\end{align}
where $ \tau $ is the duration of anchor segments. We apply the sound event detection algorithm to the $n$-track audio input and get $n$ audio clips $\{s_{1}, ..., s_{n}\}$ as anchor segments (if Step 3 in Algorithm <ref> is applied, the number of anchor segments may be smaller than the number of input audio tracks). Finally, we mix these segments as the input mixture of the separation model. A short code sketch of this SED-based mining step is given later, after the separator architecture is introduced. Fig. <ref> shows the model architectures of PANN and HTS-AT, the two audio tagging models we employ. PANN contains VGG-like CNNs that convert an audio mel-spectrogram into a $(T, K)$ feature map. The model averages the feature map over the time axis to obtain a final event classification vector $(1, K)$ and computes the binary cross-entropy loss between it and the ground truth label. Since CNNs can capture the information in each time window, the feature map $(T, K)$ is empirically regarded as a framewise presence probability map of each sound event at each time frame. Additionally, the penultimate layer's output $(T, H)$ can be averaged into a $(1, H)$ vector and used as a latent source embedding for the query-based source separation in our system. HTS-AT is a hierarchical token-semantic transformer for audio classification. It applies the swin-transformer <cit.> to the audio classification task. On the right of Fig. <ref>, a mel-spectrogram is cut into patch tokens with a patch-embed CNN and fed into the transformer in order. Each patch has equal time and frequency lengths, giving patches of size $P \times P$. To better capture the relationship between frequency bins of the same time frame, HTS-AT first splits the mel-spectrogram into windows $w_1, w_2, ..., w_n$ and then splits the patches in each window. The order of tokens $Q$ follows time$\to$frequency$\to$window. The patch tokens pass through several network groups, each of which contains several transformer-encoder blocks. Between every two groups, a patch-merge layer is applied to reduce the number of tokens and construct a hierarchical representation. Each transformer-encoder block is a swin-transformer block with the shifted window attention module <cit.>, a modified self-attention mechanism that improves training efficiency. Then, HTS-AT applies a token-semantic 2D-CNN to further process the reshaped output $(\frac{T}{8P}, \frac{F}{8P}, H)$ into the framewise event presence map $(T,K)$, which is then averaged into the event classification vector $(1,K)$. The latent embedding is produced by averaging the reshaped output into an $H$-dimensional vector with an average-pooling layer. In the following sections, we will evaluate how these two models perform in our universal source separation system.
Algorithm: Anchor segment mining.
Input: dataset $ D $.
Step 1 (Balanced Sampling): Sample $ n $ sound classes from $ \{1, ..., K\} $ without replacement.
For each selected sound class, sample one audio track $x_{i} \in \mathbb{R}^L$ to constitute $ \{x_{1}, ..., x_{n}\} $.
Step 2 (Random Method): For each audio track $x_{i}$, randomly select an audio clip $s_{i} \in \mathbb{R}^{L'}$ ($L' < L$) from the audio track.
Step 2 (Sound Event Detection): Apply an audio tagging model to detect an anchor position $t_{i}$ in the audio track $x_{i}$, which is the most probable occurrence of the labelled sound class. Then extract the audio clip $s_{i} \in \mathbb{R}^{L'}$ ($L' < L$) whose center is $t_{i}$.
Step 3 (optional): Use the same audio tagging model to obtain the event presence probability of each extracted audio clip $\{s_1, s_2, ..., s_n\}$ and apply thresholds $ \theta \in [0, 1]^{K} $ to get binary results $ r_{i} \in \{0, 1\}^{K} $. Then, given the audio clips $S=\{s_1, s_2, ..., s_n\}$ and starting from $tr = \mathbf{0}$, for each $ r_{i} $: if $ r_{i} \cap tr = \emptyset $, set $tr = tr \lor r_{i}$; otherwise, remove $s_i$ from $S$.
Output: A set $S$ of anchor segments.
§.§ Query-based Source Separation A typical source separator is a single-input-single-output (SISO) model that deals with one specific source, such as vocals, drums, bass, and others. To enable the model to separate arbitrary sound sources, we apply a query-based source separator by adding conditional embeddings into the source separation backbone, ResUNet <cit.>. As mentioned in <ref>, the input to the ResUNet separator is a mixture audio segment. First, we apply the short-time Fourier transform (STFT) to the waveform to extract the complex spectrum $ X \in \mathbb{C}^{T \times F} $. Then, we follow the same setting as <cit.> to construct an encoder-decoder network that processes the magnitude spectrogram $ |X| $. The ResUNet encoder-decoder consists of 6 encoder blocks, 4 bottleneck blocks, and 6 decoder blocks. Each encoder block consists of 4 residual convolutional blocks to downsample the spectrogram into a bottleneck feature, and each decoder block consists of 4 residual deconvolutional blocks to upsample the feature back to separation components. A skip connection is applied from each encoder block to the corresponding decoder block of the same downsampling/upsampling rate. Each residual block contains 2 CNN/DCNN layers, 2 batch normalization <cit.> layers, and 2 Leaky-ReLU activation layers. An additional residual shortcut is added between the input and the output of each residual block. The details of the model architecture can be found in <cit.>. The ResUNet separator outputs the magnitude and the phase of the complex mask $ M \in \mathbb{C}^{T \times F} $. The separated complex spectrum can be obtained by:
\begin{equation}
\label{eq:mask_mul}
\begin{split}
\hat{S} &= M \odot X \\
& = |M| \odot |X|e^{j (\angle M + \angle X)},
\end{split}
\end{equation}
where both $|M|$ and $\angle M$ are outputs of the separator. The separated source is obtained by multiplying the STFT of the mixture with the complex ideal ratio mask (cIRM). The multiplication can also be decoupled into a magnitude part and a phase part: the magnitude $ |M| $ controls how much the magnitude $ |X| $ should be scaled, and the angle $ \angle M $ controls how much the angle of $ X $ should be rotated. Based on the ResUNet separator, we adopt a feature-wise linear modulation (FiLM) <cit.> method to construct convolutional blocks within the separator.
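Before detailing the FiLM conditioning, the SED-based anchor segment mining described above (the windowed score $q_k(t)$ and its argmax) can be illustrated with a minimal numpy sketch; the framewise presence map would come from a pretrained tagging model such as PANN or HTS-AT, and the helper names are ours:

```python
import numpy as np

def mine_anchor(framewise_map, class_idx, clip_duration=10.0, anchor_duration=2.0):
    """SED-based anchor mining: pick the anchor centre that maximises the summed framewise
    probability of the labelled class inside a window of length tau."""
    m_e = framewise_map[:, class_idx]                       # (T,) presence probability of class k
    T = len(m_e)
    half = int(T * (anchor_duration / clip_duration) / 2)   # tau/2 expressed in frames
    # q_k(t): summed presence probability in a tau-long window centred at frame t
    q = np.array([m_e[max(0, t - half): t + half].sum() for t in range(T)])
    t_anchor = int(q.argmax())
    return t_anchor, q

# framewise_map would be produced by a pretrained tagging model, shape (T, K);
# random numbers are used here purely for illustration.
fake_map = np.random.rand(1000, 527)                        # 10 s at 100 frames per second
t_anchor, _ = mine_anchor(fake_map, class_idx=0)
```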
Similar to the voice filtering work <cit.> and the multi-task source separation work <cit.>, for each encoder and decoder block, we incorporate the conditional embedding as:
\begin{equation}
\label{eq:FiLM}
h = W * (\sigma(\text{bn}(x) + Vc))
\end{equation}
where $x$ is the output feature of the encoder or decoder block and $ V $ is an embedding layer that maps the conditional vector $ c $ into an embedding space. This embedding is regarded as a bias added after batch normalization (BN) and before the linear convolutional operation. A ReLU activation layer is applied to increase the non-linearity of the neural network. The final output feature $h$ is sent to the next encoder or decoder block. Let $f(x,c)$ denote the query-based source separator; the complete algorithm of our weakly labelled universal source separation system is illustrated in Algorithm <ref>. §.§ Source Query Embeddings The query-based source separator extracts pre-defined representations <cit.> or learnable representations <cit.> used to control which sources to separate from a mixture. To determine the choice of the source embedding $c$ for query-based source separation in our system, we investigate three types as follows. §.§.§ Hard One-Hot Condition The first type is the one-hot vector of sound classes. Each anchor segment $ s $ has a corresponding tag $ y \in \{0,1\}^K$. Then, the source query embedding is $ c = y $ in both the training and inference stages. §.§.§ Soft Probability Condition The second type of source query embedding uses pretrained audio tagging models, such as PANN and HTS-AT, to obtain their final output, the event classification probability vector $v_e$, as the query embedding. Because the audio tagging model $ f_{\text{AT}} $ is trained on weakly labelled data with the binary cross entropy loss, each audio clip $s$ can belong to each sound class with some probability. Compared with the one-hot vector, the event classification probability vector $v_e$ is a richer description of the audio clip $ s $. §.§.§ Latent Embedding Condition As mentioned in <ref>, the latent embedding $(1,H)$ from pretrained audio tagging models can be applied as the source query embedding. The advantage of using the latent embedding is that the separation is not limited to the given $ K $ sound classes. In the experiment section, we investigate a variety of PANNs, including Cnn6, Cnn10, and Cnn14, and an HTS-AT model to extract the latent embeddings, and we evaluate their effect on the separation performance. Apart from AudioSet tagging embeddings, our conditional source separation system also supports using automatic speech recognition (ASR) embeddings (e.g., wav2vec <cit.>) and speaker embeddings <cit.>. We also compare them with our PANN and HTS-AT latent embeddings. Furthermore, the latent embedding can be made learnable during the training of our universal source separation system. One method is to train both the audio tagging model and the source separator from scratch. Another method is to use the pretrained audio tagging model but add a learnable shortcut to build a joint representation $c = f_{\text{AT}}(s) + f_{\text{sc}}(s)$, where $f_{\text{sc}}$ is a learnable shortcut network. Alternatively, the pretrained audio tagging model can be connected to a fully connected layer acting as an adaptor, $c=f_{\text{ada}}(f_{\text{AT}}(s))$. We will investigate how these variations of latent embeddings contribute to the source separation performance.
Algorithm: Source separation.
Input: dataset $D$ (AudioSet).
While the loss function $l$ does not converge:
Sample $n$ audio tracks $\{x_1,x_2,...,x_n\}$ using the balanced sampling strategy.
Obtain $n$ audio clips $\{s_1, s_2, ..., s_n\}$ using the anchor segment mining algorithm (Algorithm <ref>).
Create the mixture $x=\sum_i s_i$.
Obtain the source query embeddings $ \{c_1,c_2,...,c_n\} $.
For each $s_i$: obtain the separation $\hat{s}_{i}=f(x,c_i)$ and calculate the loss $ l(\hat{s}_{i}, s_{i}) $.
§.§ Data augmentation AudioSet is highly unbalanced. Some sound classes have only tens of training samples. Furthermore, when constituting the mixture with $ s_{1} $ and $ s_{2} $, the loudness of $ s_{1} $ and $ s_{2} $ can be different. To address those problems, we propose a loudness augmentation method. That is, we first calculate the energy of a signal $ s $ by $ E = || s ||_{2}^{2} $. We denote the energies of $ s_{1} $ and $ s_{2} $ as $ E_{1} $ and $ E_{2} $. We apply a scaling factor $ \alpha = \sqrt{E_{1} / E_{2}} $ to $ s_{2} $ so that both anchor segments have the same energy. That is:
\begin{equation}
\label{eq:scale_augmentation}
x = s_{1} + \alpha s_{2}.
\end{equation}
On one hand, we match the energy of the anchor segments to let the neural network learn to separate the sound classes. On the other hand, the loudness diversity of the sound classes is increased. We will show that this loudness augmentation is beneficial in our experiments. §.§ Loss functions We now describe the loss function used to train the end-to-end universal source separation system. Several loss functions for source separation have been discussed in <cit.>. The most commonly used loss function is the spectrogram-domain L2 loss. However, spectrogram-based loss functions do not consider the phase information of the separated source. In this work, we apply an L1 loss on the time-domain waveforms between the ground truth source and the separated source:
\begin{equation}
\label{eq:loss}
l = ||s - \hat{s}||_{1},
\end{equation}
where $ l $ is the loss function used to train the neural network. A lower loss in (<ref>) indicates that the separated signal $ \hat{s} $ is close to the ground truth signal $ s $. In training, the gradients of the parameters are calculated by $ \partial l / \partial \theta $, where $ \theta $ denotes the parameters of the neural network. §.§ Inference In inference, the oracle embedding is unavailable because the clean sources are unknown. Instead, we calculate $ c $ from the training dataset by:
\begin{equation}
\label{eq:avg_emb}
c = \frac{1}{N}\sum_{n=1}^{N}f_{\text{emb}}(s_{n}),
\end{equation}
where $ \{s_{n}\}_{n=1}^{N} $ are query samples of one sound class and $ N $ is the number of query samples. That is, we average the conditional embeddings of all query samples from the same sound class to constitute $ c $. Here $ f_{\text{emb}} $ can be either the segment prediction or the latent embedding.
Algorithm: Automatic Filtering and Separation.
Inputs: audio $ x $, source separation system $ f_{\text{ss}} $, audio tagging system $ f_{\text{at}} $.
Outputs: separated sources $ D = \{\hat{s}_{1}, ..., \hat{s}_{I}\} $.
Split $ x $ into 2-second segments. Apply $ f_{\text{at}} $ on all segments to obtain $ P(t, k) $.
# Calculate ontology predictions.
If hierarchy separation is used: $ Q(t, m) = \text{HierarchyOntologyGrouping}(P(t, k), l) $; otherwise: $ Q(t, m) = P(t, k) $.
# Detect sound event candidates.
$ C = \{\} $. For $m=1, ..., M$: if $ \text{max}_{t}Q(t, m) > \delta$, then $ C = C \cup \{m\} $.
# Do separation.
$ D = \{\} $. For each $ m \in C $ and $t=1, ..., T$: if $ Q(t, m) > \delta$, get the condition $ c $ by (<ref>)
and compute $ \hat{s}_{t} = f_{\text{ss}}(x, c) $; otherwise, set $ \hat{s}_{t} = \textbf{0} $.
Concatenate $\hat{s}_{m} = \{\hat{s}_{1}, ..., \hat{s}_{T}\}$ and set $ D = D \cup \{\hat{s}_{m}\} $.
Algorithm: Hierarchy Ontology Grouping.
Inputs: segment-wise prediction $ P(t, k) $, hierarchy level $ l $.
Outputs: $ Q(t, m) $.
Denote the sound classes of the AudioSet ontology at level $ l $ as a set $ C = \{ c_{1}, ..., c_{M} \} $.
For $ m = 1, ..., M $: $ Q(t, m) = \text{max}\{P(t, c')\}_{c' \in \text{child}(c_{m})} $. # $\text{child}(c_{m})$ is the set of all children sound classes of $ c_{m} $, including $ c_{m} $.
§.§ Inference with Hierarchy AudioSet Ontology For the CASA problem, it is usually unknown how many and which sound classes are present in an audio clip and should be separated. To address this problem, we propose a hierarchy sound class detection strategy to detect the present sound classes, and then separate them using the trained USS system. We first split long audio recordings into short segments. Algorithm <ref> shows the automatic sound class detection and separation steps. The input to the USS system is an audio clip $ x $. We first split the audio clip into 2-second segments and apply an audio tagging system $ f_{\text{at}} $ to calculate the segment-wise prediction $ P(t, k) $ with a shape of $ T \times K $, where $ T $ is the number of segments. The AudioSet ontology has a tree structure. The first level of the tree structure contains seven sound classes, including “Human sounds”, “Animal”, “Music”, “Source-ambiguous sounds”, “Sounds of things”, “Nature sounds”, and “Channel, environment and background”. Each root category contains several sub-level sound classes. The second level contains 41 sound classes. The third level contains 251 sound classes. The tree structure has a maximum depth of six levels. In inference, the USS system supports hierarchical source separation at different levels. To separate sources at level $ l $, we denote the sound classes of level $ l $ as $ C = \{c_{1}, ..., c_{M}\} $, where $ M $ is the number of sound classes at level $ l $. For a sound class $ m $, we denote the set of all its children sound classes, including itself, as $ \text{child}(m) $. For example, for the human sounds class 0, $ \text{child}(0) = \{0, 1, ..., 72\} $. For each sound class $ c_{m} $, we set its hierarchy score $ Q(t, m) $ to the maximum probability over all its children sound classes including itself: $\text{max}\{P(t, c')\}_{c' \in \text{child}(c_{m})}$. The hierarchy ontology grouping is described in Algorithm <ref>. When constructing the condition for a hierarchy-level sound class $c$ from the tagging output, only the components of its children classes are kept:
\begin{equation}
\label{eq:inference_hierarchy}
c_{j}=\begin{cases}
f_{\text{AT}}(x)_{j}, & j \in \text{child}(c),\\
0, & j \notin \text{child}(c).
\end{cases}
\end{equation}
Then, we detect the active sound classes $ C = \{m\} $ as those whose $ \text{max}_{t}Q(t, m) $ is larger than a threshold $\delta$. For each active sound class $m$, we mark segment $t$ as active if $ Q(t, m) $ is larger than $ \delta $, and we set separated segments to silence where $ Q(t, m) $ is smaller than $ \delta $. Finally, we apply the conditional source separation system using (<ref>) as the condition. § EXPERIMENTS In this section, we investigate our proposed universal source separation system on several tasks, including AudioSet separation <cit.>, sound event separation <cit.>, music source separation <cit.>, and speech enhancement <cit.>.
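Before reporting results, the core training-time operations described in the previous section (matching-energy mixing of two anchor segments and the L1 waveform loss) can be condensed into a short sketch; the helper names are ours, and a real training step would call the query-conditioned separator instead of the dummy estimate used here:

```python
import torch

def matching_energy_mix(s1, s2):
    """Scale s2 by alpha = sqrt(E1 / E2) so both anchor segments have equal energy, then mix."""
    e1, e2 = (s1 ** 2).sum(), (s2 ** 2).sum()
    alpha = torch.sqrt(e1 / (e2 + 1e-8))
    return s1 + alpha * s2, s1, alpha * s2        # mixture x, target 1, scaled target 2

def l1_waveform_loss(s, s_hat):
    """Time-domain L1 loss ||s - s_hat||_1 between target and separated waveforms."""
    return (s - s_hat).abs().sum()

s1 = torch.randn(64000)                           # 2-second anchor segment at 32 kHz
s2 = 0.1 * torch.randn(64000)                     # quieter anchor segment from another class
x, t1, t2 = matching_energy_mix(s1, s2)
s_hat = x                                         # stand-in for f(x, c); a separator would go here
loss = l1_waveform_loss(t1, s_hat)
```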
Our USS system is only trained on the large-scale weakly labelled AudioSet <cit.> without using any clean training data, which is a major difference from previous source separation systems that are trained on specific datasets with clean sources <cit.>. The trained USS system can address a wide range of source separation tasks without being finetuned. §.§ Training Dataset AudioSet is a large-scale weakly labelled audio dataset containing 2 million 10-second audio clips sourced from the YouTube website[<https://www.youtube.com/>]. Audio clips are only labelled with the presence or absence of sound classes, without knowing when the sound events occur. There are 527 sound classes in its released version, covering a wide range of sound classes in the world, such as “Human sounds”, “Animal”, etc. The training set consists of 2,063,839 audio clips, including a balanced subset of 22,160 audio clips. There are at least 50 audio clips for each sound class in the balanced training set. Because some audio links are no longer available, we successfully downloaded 1,934,187 (94%) audio clips from the full training set. All audio clips are padded with silence or truncated to 10 seconds. Because a large number of audio recordings from YouTube have sampling rates lower than 32 kHz, we resample all audio recordings to mono at 32 kHz, which retains the information of most audio clips. §.§ Training Details We select anchor segments as described in <ref> and mix two anchor segments to constitute a mixture $ x $. The duration of each anchor segment is 2 seconds. We apply matching-energy data augmentation to scale the two anchor segments to the same energy. We extract the short-time Fourier transform (STFT) feature $ X $ from $ x $ with a Hann window of size 1024 and a hop size of 320. This hop size leads to 100 frames per second, consistent with the audio tagging systems in PANNs <cit.> and HTS-AT <cit.>. The query net is a Cnn14 of PANNs or HTS-AT. The query net is pretrained on the AudioSet tagging task <cit.> and its parameters are frozen during the training of the USS system. The prediction and embedding layers of the query net have dimensions of 527 and 2048, respectively. Either the prediction or the embedding layer is connected to fully connected layers and fed to all layers of the source separation branch as FiLM conditions. We adopt ResUNet <cit.> as the source separation branch. The 30-layer ResUNet consists of 6 encoder and 6 decoder blocks. Each encoder block consists of two convolutional layers with kernel sizes of $ 3 \times 3 $. Following the pre-activation strategy <cit.>, we apply batch normalization <cit.> and leaky ReLU <cit.> before each convolutional layer. The FiLM is added to each convolutional layer as described in (<ref>). The numbers of output feature maps of the encoder blocks are 32, 64, 128, 256, 512, and 1024, respectively. The decoder blocks are symmetric to the encoder blocks. We apply an Adam optimizer <cit.> with a learning rate of $ 10^{-3} $ to train the system. A batch size of 16 is used to train the USS system. The total number of training steps is 600 k, which takes 3 days on a single Tesla V100 GPU card. §.§ Conditional Embedding Calculation For AudioSet source separation, the oracle embedding is calculated by:
\begin{equation}
\label{eq:ora_emb}
c = f_{\text{emb}}(s),
\end{equation}
where $ s $ is the clean source. Using the oracle embedding as the condition indicates the upper bound of the universal source separation system.
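A minimal sketch of the two conditioning modes, the oracle embedding of a clean source and the averaged embedding over query samples of one class, is given below; the random linear projection stands in for the frozen query net's embedding head and is purely illustrative:

```python
import torch

def oracle_embedding(f_emb, clean_source):
    """Oracle condition: embedding of the clean target source itself (upper-bound setting)."""
    return f_emb(clean_source)

def averaged_embedding(f_emb, query_samples):
    """Averaged condition: mean embedding over query samples of one sound class."""
    return torch.stack([f_emb(s) for s in query_samples]).mean(dim=0)

# f_emb stands in for the frozen query net's embedding (or segment prediction) head;
# a random projection is used here only to keep the sketch self-contained.
proj = torch.nn.Linear(64000, 2048)
f_emb = lambda s: proj(s)

c_oracle = oracle_embedding(f_emb, torch.randn(64000))                     # needs the clean source
c_avg = averaged_embedding(f_emb, [torch.randn(64000) for _ in range(8)])  # usable at inference
```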
We calculate the conditional embeddings by (<ref>) from the training set of the AudioSet, FSD50Kaggle2018, FSD50k, Musdb18, Slakkh2100, and VoicebankDemand datasets to evaluate on those datasets, respectively. Confusion matrix of event based accuracy in Task 1. Learning curve. USS results with different conditional embedding types 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR wav2vec (46c) 8.87 4.30 8.95 8.91 4.52 4.70 1.90 7.03 2.96 8.37 -1.08 6.66 2.11 6.02 speaker (46d) 8.87 2.82 6.69 6.65 3.00 3.03 1.52 6.85 2.48 7.94 0.18 7.92 2.13 4.72 Cnn6 (45a2) 8.68 5.30 10.36 10.31 5.25 5.50 3.05 8.43 3.94 9.42 -0.37 7.37 2.27 9.39 Cnn10 (45a3) 8.35 5.36 9.95 9.90 5.19 5.43 2.87 8.10 4.11 9.34 -0.27 7.47 2.27 8.68 +Cnn14 (44a) 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 HTSAT (45c) 9.38 3.78 7.95 7.91 3.38 3.51 2.83 8.48 3.77 9.36 0.81 8.55 2.23 8.78 Segment prediction condition and embedding condition 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR Segment prediction (dim=527) (46b) 7.80 6.42 11.22 11.18 6.60 6.92 2.48 7.26 3.58 8.80 -1.69 6.05 2.20 8.30 +Embedding (dim=2048) 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 §.§ Evaluation Datasets §.§.§ AudioSet The evaluation set of AudioSet <cit.> contains 20,317 audio clips with 527 sound classes. We successfully downloaded 18,887 audio clips from the evaluation set (93%) out of 20,317 audio clips. The AudioSet source separation is a challenging problem due to the USS need to separate 527 sound classes using a single model. We are the first to propose using AudioSet <cit.> to evaluate USS. To create evaluation data, similar to Section <ref>, we first apply a sound event detection system to each 10-second audio clip to detect anchor segments. Then, select two anchor segments from different sound classes and sum them as a mixture. We constitute 100 mixtures for each sound class, leading to 52,700 mixtures for all sound classes in total. §.§.§ FSDKaggle2018 The FSDKaggle2018 <cit.> is a general-purpose audio tagging dataset containing 41 sound classes ranging from musical instruments, human sounds, domestic sounds, and animals, etc. The duration of audio clips ranges from 300 ms to 30 s. Each audio clips contain a unique audio tag. The test set is composed of 1,600 audio clips with manually-verified annotations. We pad or truncate each audio clip into 2-second segments from the start due to sound events usually occur in the start of audio clips. We mix two segments from different sound classes to consist a pair. We constitute 100 mixtures for each sound class. This leads to in total 4,100 evaluation pairs. §.§.§ FSD50K dataset The Freesound Dataset 50k (FSD50K) dataset <cit.> contains 51,197 training clips distributed in 200 sound classes from the AudioSet ontology. Different from the FSDKaggle2018 dataset, each audio clip may contain multiple tags with a hierarchy architecture. There are in average 2.79 tags in each audio clip. All audio clips are sourced from Freesound[<https://freesound.org/>]. There are 10,231 audio clips distributed in 195 sound classes in the test set. Audio clips have variable durations between 0.3s to 30s. the average duration of audio clips are 7.1 seconds. 
We mix two segments from different sound classes to consist a pair. We constitute 100 mixtures for each sound class. This leads to in total 19,500 evaluation pairs. §.§.§ MUSDB18 The MUSDB18 dataset <cit.> is designed for the music source separation task. The test set of the MUSDB18 dataset contains 50 songs with four types of stems, including vocals, bass, drums, and other. We linearly sum all stems to constitute mixtures as input to the USS system. We use the museval toolkit[https://github.com/sigsep/sigsep-mus-eval] to evaluate the SDR metrics. §.§.§ Slakh2100 The Slakh2100 dataset <cit.> is a multiple-instrument dataset for music source separation and transcription. The test of the Slakh2100 dataset contains 225 songs. The sound of different instruments are rendered by 167 different types of plugins. We filtered 151 non-silent plugin types for evaluation. Different from the MUSDB18 dataset, there can be over 10 instruments in a song, leading to the Slakh2100 instrument separation a challenging problem. §.§.§ Voicebank-Demand The Voicebank-Demand <cit.> dataset is designed for the enhancement task. The Voicebank dataset <cit.> contains clean speech. The Demand dataset <cit.> contains multiple different background sounds that are used to create mixtures. The noisy utterances are created by mixing the VoiceBank dataset and the Demand dataset under under signal-to-noise ratios of 15, 10, 5, and 0 dB. The test set of the Voicebank-Demand dataset contains in total 824 utterances. §.§ Evaluation Metrics We use the signal-to-distortion ratio (SDR) <cit.> and SDR improvement (SDRi) <cit.> to evaluate the source separation performance. The SDR is defined as: \begin{equation} \label{eq:sdr} \text{SDR}(s, \hat{s}) = 10 \text{log}_{10} \left ( \frac{||s||^2}{||s - \hat{s}||^2} \right ), \end{equation} where $ s $ and $ \hat{s} $ are the target source and estimated source, respectively. Larger SDR indicates better separation performance. The SDRi is proposed to evaluate how much SDR a USS system improves compared to without separation: \begin{equation} \label{eq:sdr} \text{SDRi} = \text{SDR}(s, \hat{s}) - \text{SDR}(s, x), \end{equation} where $ x $ is the mixture signal. For the speech enhancement task, we apply the Perceptual evaluation of speech quality (PESQ) <cit.> and segmental signal-to-ratio noise (SSNR) <cit.> for evaluation. Freeze, finetune, and adapt conditional embeddings 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR Cnn14 (random weights) (45b) 8.51 2.82 5.96 5.91 2.82 2.82 -0.48 4.59 1.97 7.08 -1.34 6.40 2.28 6.96 Cnn14 (scratch) (47b) 2.38 2.38 2.46 2.41 2.22 2.15 0.71 5.78 1.16 6.30 -1.20 6.54 1.62 -0.28 Cnn14 (finetune) (47b2) 9.83 1.96 3.42 3.38 1.50 1.40 2.10 7.77 3.10 8.52 1.39 9.12 1.77 3.52 +Cnn14 (freeze) (44a) 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 Cnn14 + shortcut. (50b) 6.95 4.57 9.29 9.25 4.74 4.94 1.84 7.05 3.40 8.78 -1.44 6.30 2.06 8.91 Cnn14 + adaptor (49a2) 8.01 5.81 11.00 10.96 5.79 6.07 2.95 7.96 3.90 9.24 -0.87 6.87 2.30 9.60 Universal source separation models 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. 
emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR open-unmix (55c) 3.96 3.39 3.90 3.86 2.96 2.92 0.40 5.50 2.13 7.42 -1.09 6.65 2.40 2.26 ConvTasNet (56a) 6.96 5.00 9.49 9.45 5.31 5.54 0.61 5.61 2.64 8.10 -2.96 4.77 1.87 6.46 UNet (55a) 8.14 5.50 10.83 10.79 5.49 5.75 2.49 7.78 3.70 9.17 -0.45 7.29 2.12 8.47 ResUNet30 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 ResUNet60 (55b) 7.97 5.70 11.34 11.30 6.04 6.32 1.71 6.81 3.64 9.01 -2.77 4.97 2.40 9.35 §.§ Results Analysis §.§.§ Conditional Embedding Types The default configuration of our USS system is a 30-layer ResUNet30 trained on the balanced set of AudioSet. Table <ref> shows the USS system results trained with different conditional embedding types including wav2vec <cit.>, speaker embeddings[https://github.com/RF5/simple-speaker-embedding], Cnn6, Cnn10, Cnn14 from PANNs <cit.>, and HTS-AT <cit.>. The wav2vec embedding is trained using unsupervised contrastive learning on 960 hours of speech data. The wav2vec embeddings are averaged along the time axis to a single embedding with a dimension of 512. The speaker embedding is a gated recurrent unit (GRU) with three recurrent layers operates on log mel-spectrogram and has output has a shape of 256. The Cnn6 and the Cnn10 have dimensions of 512. The Cnn14 and the HTS-AT have dimensions of 2048. The oracle embedding (ora emb) shows the results using (<ref>) as condition. The average embedding (avg emb) shows the results using (<ref>) as condition. Table <ref> shows that the Cnn6, Cnn10, Cnn14 embeddings achieve AudioSet SDR between 5.30 dB and 5.57 dB using the average embedding, outperforming the wav2vec of 4.30 dB and the speaker embedding of 2.82 dB. One explanation is that both wav2vec and the speaker embeddings are trained on speech data only, so that they are not comparable to PANNs and HTS-AT trained for general audio tagging. The wav2vec embedding slightly outperforms the speaker embedding on FSDKaggle2018, FSD50k, and Musdb18 separation, indicating that the unsupervised learned ASR embeddings are more suitable for universal source separation. The HTS-AT achieves the highest oracle embedding SDR among all systems. All of Cnn6, Cnn10, Cnn14, and HTS-AT outperform the wav2vec embedding and the speaker embedding in AudioSet, FSDKaggle2018, FSD50k, Musdb18, Slakh2100, and Voicebank-Demand datasets by a large margin. The Cnn14 slightly outperforms Cnn6 and Cnn10. In the following experiments, we use Cnn14 as the default conditional embedding. Table <ref> shows the comparison between using the Cnn14 segment prediction with a dimension of 527 and the Cnn14 embedding condition with a dimension of 2048 to build the USS system. On one hand, the segment prediction embedding achieves an SDR of 7.80 dB on AudioSet, outperforming the embedding condition of 5.57 dB. The segment prediction also achieves higher SDRis than the embedding condition on the FSDKaggle2018, and the FSD50k dataset datasets. An explaination is that the sound classes of all of the AudioSet, FSDKaggle2018, and the FSD50k datasets are sub-classes of the AudioSet. The segment prediction performs better than embedding condition in in-vocabulary sound classes separation. On the other hand, the embedding condition achieves higher SDRs than the segment prediction on the Musdb18 and the Slakh2100 dataset. This result indicates that the embedding condition perform better than segment prediction in new-vocabulary sound classes separation. Fig. 
<ref> in the end of this paper shows the classwise SDRi results of AudioSet separation including 527 sound classes. The dashed lines show the SDRi with oracle segment prediction or embedding as conditions. The solid lines show the SDRi with averaged segment prediction or embedding calculated from the anchor segments mined from the balanced training subset. Fig. <ref> shows that sound classes such as busy signal, sine wave, bicycle bell achieve the highest SDRi over 15 dB. We discovered that clear defined sound classes such as instruments can achieve high SDRi scores. Most of sound classes achieve positive SDRi scores. The tendency of using segment prediction and embedding as conditions are the same, although the segment prediction outperform the embedding and vice versa in some sound classes. §.§.§ Freeze, Finetune, and Adapt Conditional Embeddings Table <ref> shows the comparison of using random, frozen, finetuned, and adapted conditional embeddings to build the USS system. All the variations of the conditional embeddings extractors are based on the Cnn14 architecture. Using random weights to extract conditional embeddings achieves an SDRi of 2.82 dB on AudioSet, compared to use pretrained Cnn14 to extract conditional embeddings achieves an SDR 5.57 dB. We show that using random weights to extract conditional embeddings work for all USS tasks, such as achieves an SDRi of 5.91 dB on the FSDKaggle2018 dataset compared to the pretrained Cnn14 embedding extractor of 10.57 dB. Next, we experiment with learning the parameters of the conditional embedding extractor from scratch or finetune the weights from pretrained models. Table <ref> shows that neither the learning from scratch nor the finetuning approach improves the USS system performance. The learning from scratch approach and the finetuning approaches achieve SDRi of 2.41 dB and 3.38 dB on the FSDKaggle2018 dataset, even underperform the random weights of 5.91 dB. One explanation is that the parameters of the conditional embedding branch and the source separation branch are difficult to be jointly optimized when both of them are deep. The training falls to a collapse mode. Using the pretrained frozen Cnn14 system as conditional embedding extractor significantly improves the SDRi to 10.57 dB on the FSDKaggle2018 dataset. Based on the pretrained frozen Cnn14 conditional embedding extractor, we propose to add a learnable shortcut or add an learnable adaptor on top of the Cnn14 system. The learnable short cut has a Cnn14 architecture with learnable parameters. Table <ref> shows that the learnable shortcut conditional embedding extractor achieves an SDR of 9.29 dB on FSDKaggle2018, less than using the pretrained frozen Cnn14 conditional embedding extractor of 10.57 dB. One explanation is that the learnable shortcut destorys the embedding information for source separation. The adaptor is a 2-layer fully connected neural network on top of the pretrained frozen Cnn14 conditional embedding extractor. With the adaptor, we achieve an SDR of 11.10 dB and outperforms the Cnn14 system. This result indicates that the adaptor is beneficial for the USS task. USS results trained with different anchor segment durations. 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. 
emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR 0.5 s (52a) 4.07 2.86 4.51 4.47 2.55 2.51 0.97 0.78 2.61 2.43 -0.79 6.95 1.57 5.96 1s (52a2) 7.50 4.99 9.45 9.41 4.81 5.00 0.18 -0.02 2.54 2.50 -1.66 6.08 2.17 8.55 +2s 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 4s (52a3) 7.39 5.21 10.22 10.17 5.38 5.60 1.83 6.79 3.38 8.68 -2.62 5.12 -2.62 5.12 6s (52a4) 6.39 4.68 9.20 9.16 5.05 5.24 0.00 4.98 2.70 7.97 -4.26 3.48 2.21 2.56 8s (52a5) 6.26 4.48 8.85 8.80 4.77 4.94 -3.67 -4.00 1.60 1.50 -5.68 2.06 2.24 2.35 10s (52a6) 6.29 4.47 9.11 9.07 4.80 4.98 -2.68 -2.79 1.56 1.53 -5.07 2.67 2.13 2.14 USS results with different anchor mining strategies 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR mining 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 in-clip random (53b) 4.89 3.94 5.53 5.49 3.63 3.66 1.10 6.05 2.36 7.79 -1.72 6.01 2.21 5.69 out-clip random (53a) 8.19 5.90 11.06 11.01 6.04 6.34 2.57 7.68 3.81 9.17 -1.08 6.66 2.39 9.48 §.§.§ Separation Architectures Table <ref> shows the results of building USS systems with different source separation backbones. The open-unmix system <cit.> is a 3-layer bidirectional long short term memory (BLSTM) system. The BLSTM is applied on the mixture spectrogram to output the estimated clean spectrogram. The open-unmix system achieves an SDR of 3.39 dB on AudioSet separation and achieves a PESQ of 2.40 on the Voicebank-Demand speech enhancement task, indicating that the BLSTM backbone performs well for speech enhancement. The open-unmix system underperforms other backbone source separation systems in FSDKaggle2018, FSD50k, Musdb18, and Slakh2100 separation, indicating that the capacity of the open-unmix system is not large enough to separate a wide range of sound classes. The ConvTasNet <cit.> is a time-domain source separation system consists of one-dimensional convolutional encoder and decoder layers. The ConvTasNet achieves an SDRi of 5.00 dB on AudioSet separation and outperforms the open-unmix system. Our proposed UNet30 <cit.> is an encoder-decoder convolutional architecture consists of 30 convolutional layers. The ResUNet30 <cit.> adds residual shortcuts in the encoder and decoder blocks in UNet30. The UNet30 and the ResUNet30 systesm achieve SDRis of 5.50 dB and 5.57 dB on AudioSet, outperforming the ConvTasNet by around 1 dB in all source separation tasks. We extend ResUNet30 to a deeper system ResUNet60 with 60 convolutiona layers. Table <ref> shows that ResUNet60 outperforms ResUNet30 by around 0.5 dB in all USS tasks. This result indicates that deeper architectures are beneficial for USS. §.§.§ Different Anchor Segment Durations Table <ref> shows the results of USS systems trained with different anchor segment durations ranging from 0.5 s to 10 s. The anchor segments are mined by a pretrained SED system as described in Section <ref>. On one hand, Table <ref> shows that the separation scores increase with anchor segment durations increase from 0.5 s to 2 s and achieves the best SDRi of 5.57 dB at anchor segments of 2 s on AudioSet separation. This result shows that the anchor segment should be long enough to contain sufficient context information to build the USS system. On the other hand, the separation scores decrease with anchor segment durations decrease from 2 s to 10 s on all tasks. 
One explanation is that long anchor segments contain undesired interfering sounds that will impair the training of the USS system. Therefore, we use 2-second anchor segments in all other experiments. §.§.§ Different Anchor Segment Mining Strategies Table <ref> shows the results of different anchor mining strategies. The in-clip random strategy randomly select two anchor segments from a same 10-second audio clip which significantly underperform the SED mining strategy in all of the source separation tasks. The out-clip random strategy randomly select two anchor segments from two different 10-second audio clips. The out-clip random strategy achieves an SDRi of 5.90 dB on AudioSet, outperforms the SED mining of 5.57 dB. On one hand, the out-clip random strategy also outperforms the SED mining strategy in FSDKaggle2018 and the FSD50k dataset. On the other hand, the SED mining strategy outperforms the out-clip random strategy in Musdb18 and Slakh2100 source separation. Both the out-clip and the SED mining strategies outperform the in-clip random strategy. USS results with different sources number 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR 2 srcs to 1 src 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 3 srcs to 1-2 srcs (54a) 7.37 4.71 8.30 8.26 4.36 4.52 2.43 8.08 3.56 8.69 -0.48 7.26 2.37 8.34 4 srcs to 1-3 srcs (54a2) 7.03 4.38 7.49 7.45 3.99 4.10 2.43 7.99 3.51 8.98 0.70 8.44 2.38 7.78 USS results with different data augmentation 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR no aug (51a) 7.11 3.81 7.19 7.14 3.27 3.35 1.78 7.22 3.09 8.74 0.69 8.43 2.39 6.36 +- 20 dB (51a2) 5.51 3.62 5.77 5.73 2.93 2.94 1.69 7.02 2.51 8.03 -0.34 7.40 2.22 5.34 +match energy 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 Training with balanced and full AudioSet 2cAudioSet (SDRi) 2cFSDK2018 2cFSD50k 4cMusdb18 2cSlakh2100 2cVoicebankDemand (lr)2-3 (lr)4-5 (lr)6-7(lr)8-11(lr)12-13(lr)14-15 ora. emb avg. emb SDR SDRi SDR SDRi SDR SDRi wSDR wSDRi SDR SDRi PESQ SSNR +Balanced set, 44a 8.26 5.57 10.61 10.57 5.54 5.79 3.08 8.12 4.02 9.22 -0.46 7.28 2.18 9.00 Full set, 44b 8.21 6.14 10.34 10.30 5.45 5.71 2.31 7.20 3.60 8.92 -1.29 6.45 2.40 9.45 Full set (long train), 44b2 9.60 6.76 13.07 13.03 6.52 6.85 1.77 7.00 3.51 9.00 -2.62 5.12 2.45 10.00 §.§.§ Sources number to mix during training Table <ref> shows the USS results trained with different number of sources $ J $ to constitute a mixture. Table <ref> shows that $ J=2 $ performs the best on AudioSet with an SDRi of 5.57 dB and also performs the best on the FSDKaggle2018, FSD50k, and on Musdb18 datasets. This result shows that mixing two sources is sufficient for those source separation tasks. By using $ J=4 $ the USS system perform the beston the Slakh2100 dataset. An explanation is that the Slakh2100 contains audio clips contain multiple instruments being played simultaneously. Using more sources to constitute a mixture perform better than using fewer sources. §.§.§ Data augmentation Table <ref> shows the USS results with different augmentation strategies applied to sources to create a mixture. First, we do not apply any data augmentation to create a mixture. 
Second, we randomly scale the volume of each source by $ \pm 20 $ dB. Third, we propose a matching-energy data augmentation that scales the volumes of the sources so that they have the same energy when creating a mixture. Table <ref> shows that the matching-energy data augmentation significantly outperforms the systems trained without data augmentation and with random volume scaling augmentation, with an SDRi of 5.57 dB compared to 3.81 dB and 3.63 dB on AudioSet separation. The matching-energy data augmentation also outperforms no data augmentation and random volume augmentation on all the other tasks. §.§.§ USS Results Trained with Balanced and Full AudioSet Table <ref> shows the results of training the USS systems with the balanced and the full AudioSet, respectively. The full training data is 100 times larger than the balanced data. We also experiment with training the USS system with 4 GPUs and a larger batch size of 64. Fig. <ref> shows the oracle embedding SDRi on the test set of AudioSet. The USS system trained on the full AudioSet outperforms the USS system trained on the balanced set after 1 million training steps. Fig. <ref> also shows that a large batch size of 64 is beneficial for training the USS system. Table <ref> shows that training on the full AudioSet with a batch size of 64 achieves an SDRi of 6.76 dB, outperforming the 5.57 dB obtained by training on the balanced set. §.§.§ Visualization of Hierarchy Separation One application of hierarchy separation is to separate arbitrary audio recordings into individual sources following the AudioSet ontology. For example, the USS system can separate the sound of a movie into different tracks. One challenge of hierarchy separation is that the number of present sources is unknown. We use the methods in Section <ref> to detect and separate the present sound events. Fig. <ref> shows the automatically detected and separated waveforms of a movie clip from Harry Potter and the Sorcerer's Stone at ontology levels 1 to 3, obtained with Algorithm <ref>. Level 1 indicates coarse sound classes and level 3 indicates fine sound classes. At level 1, the USS system successfully separates human sounds, music, and sounds of things. At level 2, the USS system further separates human group actions, vehicle, and animals. At level 3, the USS system separates fine-grained sound classes such as bell, bird, crowd, and scary music. § CONCLUSION In this paper, we propose universal source separation (USS) systems trained on the large-scale weakly labelled AudioSet. The USS systems can separate hundreds of sound classes using a single model. The separation system can achieve zero-shot separation by using the embedding calculated from query examples as the condition. In training, we first apply a sound event detection (SED) system to detect the anchor segments that are most likely to contain sound events. We constitute a mixture by mixing several anchor segments. Then, we use a pretrained audio tagging system to calculate the segment prediction probability or the embedding vector as the condition of the target anchor segment. The USS system takes the mixture and the condition as input and outputs the desired anchor segment waveform. In inference, we propose both zero-shot separation and hierarchy separation with the AudioSet ontology.
We evaluated our proposed USS systems on a wide range of separation tasks, including AudioSet separation, FSDKaggle2018 and FSD50k general sound separation, MUSDB18 and Slakh2100 music instrument separation, and Voicebank-Demand speech enhancement, without training on those datasets. We show that the USS system is an approach to addressing the computational auditory scene analysis (CASA) problem. In future work, we will improve the quality of the separated waveforms of the weakly labelled USS systems. AudioSet source separation result.
# Fully Connected Reconfigurable Intelligent Surface Aided Rate-Splitting Multiple Access for Multi-User Multi-Antenna Transmission Tianyu Fang∗†‡ , Yijie Mao∗, Shanpu Shen§, Zhencai Zhu‡, Bruno Clerckx¶ This work was sponsored by Shanghai Sailing Program under Grant 22YF1428400. ∗School of Information Science and Technology, ShanghaiTech University, Shanghai, China †University of Chinese Academy of Sciences, Beijing, China ‡Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai, China §Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong ¶Department of Electrical and Electronic Engineering, Imperial College London, United Kingdom Email: {fangty<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Rate-splitting multiple access (RSMA) has been recognized as a promising and powerful multiple access (MA) scheme, a non-orthogonal transmission framework, and an interference management strategy for 6G. Inspired by the appealing spectral efficiency gain achieved by RSMA over conventional MA schemes in multi-user multi-antenna transmission, in this paper we introduce RSMA to the reconfigurable intelligent surface (RIS)-aided multiple-input single-output (MISO) broadcast channel (BC). To further enhance the spectral efficiency, a more generalized RIS architecture called fully connected RIS is considered. By jointly optimizing the scattering matrix of the fully connected RIS and the transmit beamformers to maximize the sum-rate, we show that the proposed fully connected RIS aided RSMA transmission scheme significantly improves the spectral efficiency compared with the conventional single connected RIS schemes and the schemes without RIS. It acts as a new benchmark for linearly precoded multi-user multi-antenna networks. ###### Index Terms: fully connected, multi-user multi-antenna network, rate-splitting multiple access, reconfigurable intelligent surface, spectral efficiency. ## I Introduction The past few years have witnessed the development of rate-splitting multiple access (RSMA) for multi-antenna networks. It has been recognized as a promising physical-layer non-orthogonal transmission strategy, a powerful interference management approach, and a candidate multiple access technique for the sixth generation wireless network (6G) [1, 2, 3]. By splitting the user messages into common and private parts, encoding the common parts into the common streams to be decoded by multiple users, and encoding the private parts respectively into the private streams to be decoded by the corresponding users only, RSMA enables a flexible interference management capability of partially decoding the interference and partially treating the interference as noise [4, 5]. Existing works have shown that RSMA outperforms other multiple access techniques (including linearly precoded space division multiple access–SDMA, power-domain non-orthogonal multiple access–NOMA, orthogonal multiple access-OMA, and multicasting) in terms of spectral efficiency [4, 6, 5, 7, 8], energy efficiency [9, 7], max-min fairness [8, 10], and robustness towards inaccuracies of channel state information at the transmitter (CSIT) [6, 11]. Reconfigurable intelligent surfaces (RISs), consisting of a large number of reconfigurable scattering elements, are another promising technology for 6G [12]. RISs can smartly reconfigure the wireless propagation environment so as to effectively enhance the spectral and energy efficiency.
Most of existing RIS research relies on using a simple architecture referred to as single connected RIS, which is characterized by a diagonal matrix with constant modulus entries [13, 14, 15, 12]. To enhance the RIS performance, a more general architecture referred to as fully connected RIS has recently been proposed in [16]. The fully connected RIS is characterized by a complex symmetric unitary matrix, which achieves a better performance than the single connected RIS. The appealing performance benefits of RSMA and RIS have motivated the study on the integration of them [17, 18, 19, 20, 21]. The interplay of RSMA and RIS is first investigated in [17] for multi-user multi-antenna networks, where RIS aided RSMA transmission model achieves higher energy efficiency than NOMA and orthogonal frequency division multiple access (OFDMA). RIS aided RSMA transmission is further shown to enhance the fairness among users [18], reduce the transmit power [19], and achieve superior outage performance [20, 21] over conventional RIS-aided networks. However, all the above works only use the single connected RIS architecture. Therefore, we would like to further enhance the performance by leveraging the more advanced fully connected RIS architecture. To the best of our knowledge, there is no existing work that investigates the sum-rate achieved by the fully connected RIS aided RSMA transmission networks. In this work, we propose a fully connected RIS aided RSMA downlink multi- antenna multi-user transmission model. The scattering matrix of RIS and the transmit beamformers of RSMA are jointly optimized in order to maximize the sum-rate. To solve the problem, we propose an alternative optimization framework where the scattering matrix of RIS and the beamformers of RSMA are iteratively optimized. Numerical results show that by synergizing RSMA and fully connected RIS, the proposed scheme significantly improves the spectral efficiency in multi-user multiple-input single-output (MU-MISO) MU-MISO networks. The proposed scheme explores a larger achievable sum-rate than the conventional single connected RIS aided schemes and the schemes without RIS. It acts as a new benchmark for linearly precoded multi-user multi-antenna networks. Notations: Vectors and matrices are denoted by bold lower and upper case letters. $\mathbb{R}^{m\times n}$ and $\mathbb{C}^{m\times n}$ represent the real-valued and complex-valued spaces with dimension $m\times n$. $|x|$ indicates the magnitude of a complex number $x$ and $\Re(x)$ denotes its real part. For a vector $\mathbf{x}$, $\|\mathbf{x}\|$ denotes its Euclidean norm. $\mathbb{E}\\{\cdot\\}$ denotes the statistical expectation operator for a random variable. $(\cdot)^{H}$, $(\cdot)^{T}$ and $\mathrm{tr}(\cdot)$ respectively denote the conjugate transpose, transpose and trace operators. $\mathrm{diag}\left(x_{1},x_{2},\cdots,x_{n}\right)$ is a diagonal matrix with $(x_{1},x_{2},\cdots,x_{n})$ being its diagonal elements. $\mathbf{I}$ is the identity matrix. $\mathcal{CN}(\mu,\sigma^{2})$ denotes the circularly symmetric complex Gaussian (CSCG) distribution with mean $\mu$ and variance $\sigma^{2}$. ## II System Model and Problem Formulation We consider a MU-MISO communication network consisting of one base station (BS) equipped with $M$ antennas, one RIS with a set of $N$ passive reflecting elements indexed by $\mathcal{N}=\\{1,2,\ldots,N\\}$, and a set of $K$ single- antenna users indexed by $\mathcal{K}=\\{1,2,\ldots,K\\}$. As illustrated in Fig. 
1, the BS simultaneously serves the $K$ users with the assistance of a fully-connected RIS (as in Fig. 1 (a)). The reconfigurable impedance network of RIS is adjusted and determined by a smart controller attached to the RIS, which also acts as a gateway to exchange the information between the BS and the RIS. The channels from the BS to the users, from the RIS to the users, and from the BS to the RIS are denoted as $\mathbf{g}_{k}\in\mathbb{C}^{M\times 1}$, $\mathbf{h}_{k}\in\mathbb{C}^{N\times 1}$, $k\in\mathcal{K}$ and $\mathbf{G}\in\mathbb{C}^{N\times M}$, respectively. All channels are assumed to be invariant during one transmission block and perfect CSI is available at the BS. Although the assumption of perfect CSI is ideal, the proposed scheme explores a larger achievable sum-rate than the conventional schemes, which therefore acts as a new benchmark for multi-user multi-antenna networks as well as future study for the corresponding imperfect CSIT settings. At the BS, message $W_{k}$ intends to user $k$ is split into a common part $W_{c,k}$ and a private part $W_{p,k}$. The common parts of all users are combined and encoded into a common stream $s_{0}$ while the private parts are independently encoded into the private streams $s_{1},\cdots,s_{K}$. Denote $\mathbf{s}=[s_{0},s_{1},\cdots,s_{K}]^{T}$ and $\mathbf{W}=[\mathbf{w}_{0},\mathbf{w}_{1},\cdots,\mathbf{w}_{K}]\in\mathbb{C}^{M\times(K+1)}$ as the data stream vector and beamforming matrix for all streams, respectively. We assume that each stream $s_{k},$ $k\in\mathcal{K}\cup\\{0\\}$ has zero mean and unit variance, i.e., $\mathbb{E}\\{\mathbf{s}\mathbf{s}^{H}\\}=\mathbf{I}$. The transmitted signal at the BS is $\mathbf{x}=\sum\limits_{k=0}^{K}\mathbf{w}_{k}s_{k},$ (1) and the transmit power constraint is $\mathrm{tr}(\mathbf{WW}^{H})\leq P_{t},$ (2) where $P_{t}$ refers to the maximum transmit power of the BS. The signal is transmitted through the direct signal path from the BS to the users as well as the RIS-aided path. At user $k$, the total received signal is $y_{k}=(\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G})\sum\limits_{i=0}^{K}\mathbf{w}_{i}s_{i}+z_{k},$ (3) where $\mathbf{\Theta}\in\mathbb{C}^{N\times N}$ refers to the scattering matrix of the $N$-port reconfigurable impedance network in the $N$-element RIS, and $z_{k}\sim\mathcal{CN}(0,\sigma_{k}^{2})$ is the additive white Gaussian noise (AWGN). Figure 1: A multi-antenna multi-user transmission network with the assistance of a (a) fully connected RIS, (b) single connected RIS. ### II-A Fully Connected Reconfigurable Intelligent Surface In this work, we focus on using a fully connected RIS [16] to enhance the spectral efficiency. An example of a 4-element fully connected RIS is illustrated in Fig. 1 (a). In the reconfigurable impedance network of fully connected RIS, each port is connected with other ports through a reconfigurable reactance. 
Accordingly, the scattering matrix of a fully connected RIS $\mathbf{\Theta}$ satisfies the constraints $\displaystyle\mathbf{\Theta}^{H}\bm{\Theta}=\mathbf{I},$ (4a) $\displaystyle\mathbf{\Theta}=\mathbf{\Theta}^{T}.$ (4b) As per [16, 22], constraint (4) is equivalent to $\displaystyle\mathbf{\Theta}$ $\displaystyle=(j\mathbf{X}+Z_{0}\mathbf{I})^{-1}(j\mathbf{X}-Z_{0}\mathbf{I}),$ (5a) $\displaystyle\mathbf{X}$ $\displaystyle=\mathbf{X}^{T},$ (5b) where $Z_{0}$ refers to the reference impedance and $\mathbf{X}$ is a symmetric real matrix referring to the reactance matrix of the reconfigurable impedance network in RIS. With constraint (5), a closed-form expression for scattering matrix $\mathbf{\Theta}$ satisfying constraint (4) is obtained by introducing an unconstrained symmetrical real matrix $\mathbf{X}$. Remark 1. When each port is disconnected with other ports in the reconfigurable impedance network, the fully connected RIS reduces to the single connected RIS [23] as illustrated in Fig. 1 (b). The single connected RIS has been widely used in existing works [17, 18, 20, 19, 21], where the scattering matrix $\mathbf{\Theta}$ satisfies the constraint $\bm{\Theta}=\mathrm{diag}\left(e^{j\theta_{1}},e^{j\theta_{2}},\cdots,e^{j\theta_{N}}\right),$ (6) where $\theta_{n}\in[0,2\pi)$ denotes the phase of the scattering parameter of the $n$-th port in reconfigurable impedance network. Accordingly, it can be also equivalently transformed to $\displaystyle\bm{\Theta}$ $\displaystyle=(j\mathbf{X}+Z_{0}\mathbf{I})^{-1}(j\mathbf{X}-Z_{0}\mathbf{I}),$ (7a) $\displaystyle\mathbf{X}$ $\displaystyle=\mathrm{diag}\left(x_{1},x_{2},\cdots,x_{N}\right),$ (7b) where $x_{n}\in\mathbb{R}$ is the reconfigurable reactance component connected to the $n$-th port. It should be noted that a $N$-port fully connected RIS given in (5) requires to tune $N(N+1)/2$ scattering parameters while a $N$-port single connected RIS given in (7) only requires to tune $N$ scattering parameters. The fully connected RIS therefore brings a larger searching space for the optimal RIS design. ### II-B Problem Formulation At user sides, each user first decodes the common stream by treating all private streams as interference. Thus, the signal-to-interference-plus-noise ratio (SINR) of $s_{0}$ at user $k$ is $\gamma_{0,k}=\frac{|(\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G})\mathbf{w}_{0}|^{2}}{\sum\limits_{i=1}^{K}|(\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G})\mathbf{w}_{i}|^{2}+\sigma_{k}^{2}},$ (8) and the corresponding transmission rate is $r_{0,k}=\log_{2}\left(1+\gamma_{0,k}\right)$. To ensure common message $s_{0}$ is successfully decoded by all users, the achievable rate of $s_{0}$ should satisfy $r_{0}=\min_{k\in\mathcal{K}}r_{0,k}$. After decoding the common stream $s_{0}$, each user employs SIC to remove the common stream from the received signal, and then decodes the intended private stream with the SINR $\gamma_{k}=\frac{|(\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G})\mathbf{w}_{k}|^{2}}{\sum\limits_{i=1,i\neq k}^{K}|(\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G})\mathbf{w}_{i}|^{2}+\sigma_{k}^{2}}.$ (9) The rate of decoding private message is $r_{k}=\log_{2}\left(1+\gamma_{k}\right)$. User $k$ then reconstructs its message by combining the submessages $W_{c,k}$ and $W_{p,k}$ respectively decoded from the common and private streams [4]. 
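To make the above quantities concrete, the following sketch (written in Python with NumPy, which is an assumption since the original evaluation reportedly uses Matlab and CVX; all function names and the random test data are illustrative) constructs $\mathbf{\Theta}$ from an unconstrained symmetric reactance matrix $\mathbf{X}$ via (5), numerically verifies the constraints (4), and evaluates the SINRs (8)–(9) together with the resulting common and private rates for given channels and beamformers.

```python
import numpy as np

def scattering_matrix(X, Z0=50.0):
    """Map a real symmetric reactance matrix X to Theta via Eq. (5)."""
    N = X.shape[0]
    I = np.eye(N)
    return np.linalg.solve(1j * X + Z0 * I, 1j * X - Z0 * I)

def rsma_rates(W, Theta, g, h, G, sigma2=1.0):
    """Common rate r0 = min_k r_{0,k} and private rates r_k from Eqs. (8)-(9).

    W: M x (K+1) beamformers [w_0, w_1, ..., w_K];
    g: K x M direct channels, h: K x N RIS-user channels, G: N x M BS-RIS channel.
    """
    K = g.shape[0]
    g_eff = g.conj() + h.conj() @ Theta @ G        # effective channels g_tilde_k^H (Eq. 11)
    P = np.abs(g_eff @ W) ** 2                     # |g_tilde_k^H w_i|^2, shape K x (K+1)
    r0k = np.empty(K)
    rk = np.empty(K)
    for k in range(K):
        r0k[k] = np.log2(1.0 + P[k, 0] / (P[k, 1:].sum() + sigma2))
        interf = P[k, 1:].sum() - P[k, k + 1] + sigma2
        rk[k] = np.log2(1.0 + P[k, k + 1] / interf)
    return r0k.min(), rk

# toy check with random data (illustrative only)
rng = np.random.default_rng(0)
M, N, K, Pt = 4, 8, 4, 10.0
X = rng.standard_normal((N, N)); X = (X + X.T) / 2          # symmetric reactance matrix
Theta = scattering_matrix(X)
assert np.allclose(Theta.conj().T @ Theta, np.eye(N))        # Eq. (4a): unitary
assert np.allclose(Theta, Theta.T)                           # Eq. (4b): symmetric

crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
g, h, Gmat = crandn(K, M), crandn(K, N), crandn(N, M)
W = crandn(M, K + 1)
W *= np.sqrt(Pt / np.trace(W @ W.conj().T).real)             # enforce tr(W W^H) <= P_t (Eq. 2)
r0, rk = rsma_rates(W, Theta, g, h, Gmat)
print("sum-rate [bit/s/Hz]:", r0 + rk.sum())
```

In this parametrization the $N(N+1)/2$ upper-triangular entries of $\mathbf{X}$ are the only free variables, which is exactly the enlarged search space of the fully connected RIS exploited in Section III, while restricting $\mathbf{X}$ to be diagonal recovers the single connected case of (7).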
In this work, we aim at jointly optimizing the scattering matrix of RIS $\mathbf{\Theta}$ and the beamforming matrix $\mathbf{W}$ of RSMA to maximize the sum-rate of the system. The sum-rate problem for the downlink fully connected RIS aided RSMA network can be formulated as: $\displaystyle(\mathcal{P}_{1})\,\,\max\limits_{\bm{\Theta},\mathbf{W}}\,\,$ $\displaystyle\sum\limits_{k=0}^{K}r_{k}$ (10a) s.t. $\displaystyle\mathrm{tr}(\mathbf{WW}^{H})\leq P_{t},$ (10b) $\displaystyle\bm{\Theta}^{H}\bm{\Theta}=\bm{I},$ (10c) $\displaystyle\bm{\Theta}=\bm{\Theta}^{T}.$ (10d) Constraint (10b) is the transmit power constraint at the BS. (10c) and (10d) show that the reconfigurable impedance network in fully connected RIS is a lossless and reciprocal circuit network. When constraints (10c) and (10d) for scattering matrix $\mathbf{\Theta}$ are replaced by constraint (6), $\mathcal{P}_{1}$ reduces to the sum-rate problem of the single connected RIS aided RSMA [24]. When the power allocated to $\mathbf{w}_{0}$ is fixed to zero, the problem reduces to the sum-rate problem for the conventional single connected RIS aided SDMA [14]. ## III Alternative Optimization Framework Problem $\mathcal{P}_{1}$ is a joint beamforming matrix and RIS scattering matrix optimization problem. It is non-convex and the beamformers are coupled with the scattering matrix in multiple fractional SINR expressions. Following existing works [13, 14, 17, 18, 20, 19, 21], we propose an alternative optimization (AO) framework to solve $\mathcal{P}_{1}$. Specifically, the problem is first decomposed into the subproblems of beamforming design and scattering matrix design. The former is solved by a weighted minimum mean square error (WMMSE)-based approach while the latter is solved by the quasi- Newton algorithm. The two subproblems are solved iteratively until convergence. In the following subsections, the proposed optimization algorithm is delineated. ### III-A Beamforming Optimization With a given scattering matrix $\bm{\Theta}$, the channel responses from the RIS to the users are fixed. To ease notations, we denote the effective channel from the BS and the RIS to user $k$ as $\widetilde{\mathbf{g}}_{k}^{H}=\mathbf{g}_{k}^{H}+\mathbf{h}_{k}^{H}\mathbf{\Theta}\mathbf{G}.$ (11) And $\mathcal{P}_{1}$ reduces to $\displaystyle(\mathcal{P}_{2})\,\,\max\limits_{\mathbf{W}}\,\,$ $\displaystyle\sum\limits_{k=0}^{K}r_{k}$ (12a) s.t. $\displaystyle\mathrm{tr}(\mathbf{WW}^{H})\leq P_{t},$ (12b) which can be solved by the WMMSE algorithm [6] as briefly introduced below. Denote the equalizers to estimate $s_{0}$ and $s_{k}$ as $e_{0,k}$ and $e_{k}$, respectively. $\hat{s}_{0,k}=e_{0,k}y_{k}$ is the estimate of $s_{0}$, and $\hat{s}_{k}=e_{k}(y_{k}-\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{0}\hat{s}_{0,k})$ is the estimate of $s_{k}$ at user $k$. 
The mean square errors (MSEs) of decoding $s_{0}$ and $s_{k}$ are calculated as $\begin{split}\varepsilon_{0,k}&\triangleq\mathbb{E}\\{|\hat{s}_{0,k}-s_{0,k}|^{2}\\}=|e_{0,k}|^{2}T_{0,k}-2\Re\\{e_{0,k}\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{0}\\}+1,\\\ \varepsilon_{k}&\triangleq\mathbb{E}\\{|\hat{s}_{k}-s_{k}|^{2}\\}=|e_{k}|^{2}T_{k}-2\Re\\{e_{k}\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{k}\\}+1,\end{split}$ (13) where $T_{0,k}=\textstyle\sum_{i=0}^{K}|\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{i}|^{2}+\sigma_{k}^{2}$, and $T_{k}=\textstyle\sum_{i=1}^{K}|\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{i}|^{2}+\sigma_{k}^{2}$ are the average power of the received signal and the signal after removing the common stream, respectively. By setting $\partial\varepsilon_{0,k}/\partial e_{0,k}$ and $\partial\varepsilon_{k}/\partial e_{k}$ to zero respectively, the minimum MSE (MMSE) equalizers are given by $e_{0,k}^{\text{MMSE}}=\mathbf{w}_{0}^{H}\widetilde{\mathbf{g}}_{k}T_{0,k}^{-1},\,\,e_{k}^{\text{MMSE}}=\mathbf{w}_{k}^{H}\widetilde{\mathbf{g}}_{k}T_{k}^{-1}.$ (14) Substituting (14) into (13), the MMSEs are given by $\varepsilon_{0,k}^{\text{MMSE}}=(T_{0,k}-|\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{0}|^{2})T_{0,k}^{-1},\,\,\varepsilon_{k}^{\text{MMSE}}=(T_{k}-|\widetilde{\mathbf{g}}_{k}^{H}\mathbf{w}_{k}|^{2})T_{k}^{-1}.$ Then, the SINRs coresponding to $s_{0}$ and $s_{k}$ can be transformed to $\gamma_{0,k}=1/\varepsilon_{0,k}^{\text{MMSE}}-1,\,\gamma_{k}=1/\varepsilon_{k}^{\text{MMSE}}-1.$ The rates of common and private streams become $r_{0,k}=-\log_{2}(\varepsilon_{0,k}^{\text{MMSE}})$ and $\,\,r_{k}=-\log_{2}(\varepsilon_{k}^{\text{MMSE}}).$ However, the logarithmic rate-MMSE relationships above cannot be used directly for the sum-rate problem. To tackle the issue, the augmented MMSEs are introduced as follows $\xi_{0,k}\triangleq\lambda_{0,k}\varepsilon_{0,k}-\log_{2}(\lambda_{0,k}),\,\,\xi_{k}\triangleq\lambda_{k}\varepsilon_{k}-\log(\lambda_{k}),$ (15) where $\lambda_{0,k}$ and $\lambda_{k}$ are auxiliary variables (also known as weights) for the rate-WMMSE relationships of $r_{0,k}$ and $r_{k}$, respectively. By calculating$\frac{\partial\xi_{0,k}}{\partial\lambda_{0,k}}=0,\,\frac{\partial\xi_{k}}{\partial\lambda_{k}}=0$, we obtain the optimum weights given by $\lambda_{0,k}^{\text{MMSE}}=(\varepsilon_{0,k}^{\text{MMSE}})^{-1},\,\,\lambda_{k}^{\text{MMSE}}=(\varepsilon_{k}^{\text{MMSE}})^{-1}.$ (16) Substituting (13) and (16) into (15), the rate-WMMSE relationships are established as $\xi_{0,k}^{\text{MMSE}}=1-r_{0,k},\,\,\xi_{k}^{\text{MMSE}}=1-r_{k}.$ With the rate-WMMSE relationships above, $\mathcal{P}_{2}$ is equivalently transformed into the WMMSE problem $\displaystyle(\mathcal{P}_{3})\,\min\limits_{\mathbf{W},\bm{\lambda},\mathbf{e}}\,$ $\displaystyle\sum\limits_{k=0}^{K}\xi_{k}$ (17a) s.t. $\displaystyle\mathrm{tr}(\mathbf{WW}^{H})\leq P_{t},$ (17b) where $\bm{\lambda}=[\lambda_{0,1},\cdots,\lambda_{0,K},\lambda_{1},\cdots,\lambda_{K}]^{T}$ is the weight vector and $\mathbf{e}=[e_{0,1},\cdots,e_{0,K},e_{1},\cdots,e_{K}]^{T}$ is the equalizer vector. $\xi_{0}=\max_{k\in\mathcal{K}}\xi_{0,k}$. $\mathcal{P}_{3}$ is still non-convex, an AO framework is applied to decompose it into three convex subproblems. For each block, one of $\mathbf{W},\bm{\lambda},\bm{e}$ is optimized by fixing the other two blocks. Algorithm 1 specifies the procedure of the WMMSE method to optimize the beamforming vectors. Readers are referred to [25] for the details of the convergence proof for Algorithm 1. 
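The closed-form updates (14) and (16), which constitute step 4 of Algorithm 1 listed next, can be illustrated with a short sketch (Python/NumPy assumed; variable names are illustrative). It also checks the rate-WMMSE relationship $\xi^{\text{MMSE}}=1-r$ numerically.

```python
import numpy as np

def wmmse_updates(W, g_eff, sigma2=1.0):
    """Closed-form MMSE equalizers (14) and optimum weights (16) for a fixed W.

    W: M x (K+1) beamformers; g_eff: K x M effective channels g_tilde_k^H (Eq. 11).
    """
    K = g_eff.shape[0]
    S = g_eff @ W                                  # S[k, i] = g_tilde_k^H w_i
    P = np.abs(S) ** 2
    T0 = P.sum(axis=1) + sigma2                    # T_{0,k}: total received power
    T = P[:, 1:].sum(axis=1) + sigma2              # T_k: power after removing the common stream
    e0 = S[:, 0].conj() / T0                       # e_{0,k}^MMSE = w_0^H g_tilde_k / T_{0,k}
    e = np.array([S[k, k + 1].conj() / T[k] for k in range(K)])
    eps0 = (T0 - P[:, 0]) / T0                     # MMSEs epsilon_{0,k}^MMSE
    eps = np.array([(T[k] - P[k, k + 1]) / T[k] for k in range(K)])
    lam0, lam = 1.0 / eps0, 1.0 / eps              # optimum weights (16)
    r0k, rk = -np.log2(eps0), -np.log2(eps)        # rates via the rate-MMSE relation
    # rate-WMMSE relationship: xi^MMSE = lam * eps - log2(lam) = 1 - r
    assert np.allclose(lam0 * eps0 - np.log2(lam0), 1.0 - r0k)
    assert np.allclose(lam * eps - np.log2(lam), 1.0 - rk)
    return e0, e, lam0, lam
```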
1 Initialize: $t\leftarrow 0$, $\epsilon$, $\mathbf{W}^{[t]}$, $\mathrm{SR}^{[t]}$; 2 repeat 3 $t\leftarrow t+1$; 4 update $\mathbf{e}^{[t]},\mathbf{\lambda}^{[t]}$ by (14), (16); 5 update $\mathbf{W}^{[t]}$ by solving problem $\mathcal{P}_{3}$ using $\mathbf{e}^{[t]},\mathbf{\lambda}^{[t]}$; 6 until _$|\mathrm{SR}^{[t]}-\mathrm{SR}^{[t-1]}|<\epsilon$_; Algorithm 1 WMMSE algorithm for beamforming design
### III-B Scattering Matrix Optimization
Similarly, with a given beamforming design $\mathbf{W}$, problem $\mathcal{P}_{1}$ is simplified as $\displaystyle(\mathcal{P}_{4})\,\,\max\limits_{\bm{\Theta}}\,\,$ $\displaystyle\sum\limits_{k=0}^{K}r_{k}$ (18a) s.t. $\displaystyle\bm{\Theta}^{H}\bm{\Theta}=\bm{I},$ (18b) $\displaystyle\bm{\Theta}=\bm{\Theta}^{T}.$ (18c) However, it is challenging to transform problem $\mathcal{P}_{4}$ into a convex problem due to the non-convex matrix equality constraints. Hence, we apply equality (5) to equivalently reformulate $\mathcal{P}_{4}$ as $\displaystyle(\mathcal{P}_{5})\,\max\limits_{\bm{X}}\,$ $\displaystyle\sum\limits_{k=0}^{K}r_{k}$ (19a) s.t. $\displaystyle\mathbf{\Theta}=(j\mathbf{X}+Z_{0}\mathbf{I})^{-1}(j\mathbf{X}-Z_{0}\mathbf{I}),$ (19b) $\displaystyle\mathbf{X}=\mathbf{X}^{T}.$ (19c) Substituting (19b) into (19a), and removing (19c) (by defining a symmetric matrix variable), problem $\mathcal{P}_{5}$ becomes an unconstrained optimization problem. Moreover, the matrix variable $\mathbf{X}$ is a real symmetric matrix in which $N(N+1)/2$ variables are adjustable. Such an unconstrained optimization problem can be directly solved by the quasi-Newton method [16].
### III-C Alternative Optimization Algorithm
1 Initialize: $n\leftarrow 0$, $\epsilon$, $\mathbf{W}^{[n]}$, $\mathbf{X}^{[n]}$, $\mathrm{SR}^{[n]}$; 2 repeat 3 $n\leftarrow n+1$; 4 calculate $\mathbf{\Theta}^{[n-1]}$ by (19b); 5 Given $\mathbf{\Theta}^{[n-1]}$, update $\mathbf{W}^{[n]}$ by solving $\mathcal{P}_{3}$ with Algorithm 1; 6 Given $\mathbf{W}^{[n]}$, update $\mathbf{X}^{[n]}$ by solving $\mathcal{P}_{5}$ with the quasi-Newton method using $\mathbf{X}^{[n-1]}$ as the initial point; 7 until _$|\mathrm{SR}^{[n]}-\mathrm{SR}^{[n-1]}|<\epsilon$_; Algorithm 2 Alternative Optimization to solve ($\mathcal{P}_{1}$)
The proposed AO algorithm to jointly optimize the scattering matrix and the beamforming matrix is specified in Algorithm 2. Starting with a feasible beamforming matrix $\mathbf{W}^{[0]}$ and a symmetric reactance matrix $\mathbf{X}^{[0]}$, in the $n$-th iteration, we first calculate the scattering matrix $\mathbf{\Theta}^{[n-1]}$ with the reactance matrix $\mathbf{X}^{[n-1]}$ from the last iteration. Next, with a fixed scattering matrix $\mathbf{\Theta}^{[n-1]}$, the beamforming matrix $\mathbf{W}^{[n]}$ is updated by Algorithm 1. For a given $\mathbf{W}^{[n]}$, the reactance matrix $\mathbf{X}^{[n]}$ is then updated based on the quasi-Newton method using $\mathbf{X}^{[n-1]}$ as the initial point. The sum-rate $\mathrm{SR}^{[n]}$ is then calculated based on the updated $\mathbf{W}^{[n]}$ and $\mathbf{X}^{[n]}$. The process is repeated until convergence. Convergence Analysis: In each iteration $[n]$, the solution of $\mathcal{P}_{1}$ is also a feasible solution of $\mathcal{P}_{1}$ for the next iteration. Hence, the sum-rate $\mathrm{SR}^{[n+1]}$ is larger than or equal to $\mathrm{SR}^{[n]}$. Moreover, the non-decreasing sum-rate sequence generated by Algorithm 2 is bounded above owing to the transmit power constraint. Therefore, the proposed AO algorithm is guaranteed to converge within a given tolerance $\epsilon$.
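A minimal sketch of the alternating loop of Algorithm 2 is given below, assuming Python with SciPy (the paper itself reports using CVX and the Matlab optimization toolbox). The beamforming update is kept as a placeholder hook standing in for the WMMSE solution of $\mathcal{P}_{3}$, while the scattering-matrix step optimizes the $N(N+1)/2$ free entries of the symmetric reactance matrix $\mathbf{X}$ with a quasi-Newton (BFGS) method applied to the negative sum-rate, as in $\mathcal{P}_{5}$. The functions `scattering_matrix` and `rsma_rates` refer to the earlier sketch in Section II; all remaining names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def vec_to_sym(x, N):
    """Unpack the N(N+1)/2 free variables into a symmetric reactance matrix X."""
    X = np.zeros((N, N))
    iu = np.triu_indices(N)
    X[iu] = x
    return X + X.T - np.diag(np.diag(X))

def sum_rate(W, X, chans, sigma2=1.0):
    g, h, G = chans
    r0, rk = rsma_rates(W, scattering_matrix(X), g, h, G, sigma2)
    return r0 + rk.sum()

def alternating_optimization(W0, X0, chans, update_W, tol=1e-3, max_iter=100):
    """Skeleton of Algorithm 2: alternate the WMMSE-based W step and the quasi-Newton X step."""
    W, X, sr_prev = W0, X0, -np.inf
    N = X0.shape[0]
    iu = np.triu_indices(N)
    for _ in range(max_iter):
        Theta = scattering_matrix(X)
        W = update_W(W, Theta, chans)                         # P3 via WMMSE (placeholder hook)
        res = minimize(lambda x: -sum_rate(W, vec_to_sym(x, N), chans),
                       X[iu], method="BFGS")                  # P5 via quasi-Newton, warm-started at X^[n-1]
        X = vec_to_sym(res.x, N)
        sr = sum_rate(W, X, chans)
        if abs(sr - sr_prev) < tol:                           # |SR^[n] - SR^[n-1]| < eps
            break
        sr_prev = sr
    return W, X, sr
```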
## IV Numerical Results
In this section, we evaluate the performance of the proposed system model and the proposed algorithm. The following six schemes are compared:
* • Fully RIS RSMA: This is the scheme proposed in Section II.
* • Fully RIS SDMA: This is a special case of the proposed scheme when the power allocated to the common stream is fixed to zero.
* • Single RIS RSMA: This is the single connected RIS aided RSMA scheme, as studied in [24].
* • Single RIS SDMA: This is the conventional single connected RIS aided SDMA scheme, as studied in [14].
* • no RIS RSMA: This is the conventional RSMA scheme without using RIS, as studied in [4, 6, 5, 7, 8].
* • no RIS SDMA: This is the conventional multi-user linearly precoded SDMA scheme without using RIS, as studied in [4, 26].
The sum-rate maximization problems of fully/single RIS RSMA and fully/single RIS SDMA are solved by Algorithm 2, while the corresponding problems of no RIS RSMA and no RIS SDMA are solved by WMMSE directly. Problem $\mathcal{P}_{3}$ is solved by the CVX toolbox [27] with the interior-point method, and problem $\mathcal{P}_{5}$ is solved by the optimization toolbox in Matlab with the quasi-Newton method. The setting of the simulation follows [13], which is a planar RIS-aided network as shown in Fig. 2. The BS and RIS are located at $(0,0)$ and $(50,50)$, respectively. In addition, there are $K=4$ users randomly generated in a circle centered at $(150,0)$ meters with a diameter of 20 meters. The path loss of the channels is modeled as $P(d)=L_{0}d^{-\alpha}$, where $L_{0}=-30$ dB is the reference path loss at $d=1$ m, $d$ refers to the link distance, and $\alpha$ denotes the path loss exponent. Assuming that the location of the RIS is chosen carefully, we set the path loss exponents of the BS-user, BS-RIS, and RIS-user links to 3.5, 2, and 2.2, respectively [13]. For simplicity, the small-scale fading of all channels is modeled as Rayleigh fading. Hence, the channels are given as ${\mathbf{g}}_{k}\sim\mathcal{CN}(0,P(d_{k}^{g})\mathbf{I}),\mathbf{h}_{k}\sim\mathcal{CN}(0,P(d_{k}^{h})\mathbf{I}),$ and $\mathbf{G}\sim\mathcal{CN}(0,P(d^{G})\mathbf{I})$, where $d_{k}^{g},d_{k}^{h}$ and $d^{G}$ respectively denote the distance between the BS and user $k$, the distance between the RIS and user $k$, and the distance between the BS and the RIS. Besides, the reference impedance of the RIS is $Z_{0}=50$ $\Omega$, the convergence tolerance is $\epsilon=10^{-3}$, and the noise power at user $k$ is $\sigma_{k}^{2}=1$. The transmit signal-to-noise ratio (SNR), SNR$\triangleq P_{t}/\sigma_{k}^{2}$, is therefore numerically equal to the transmit power. All simulation results are averaged over 100 random channel realizations. Figure 2: The simulated RIS-aided $K$-user MISO transmission scenario. Figure 3: Sum rate versus the transmit power, when $M=4$, $K=4$, and $N=32$. Fig. 3 shows the sum-rate of different strategies versus the transmit power when $M=4$, $K=4$, and $N=32$. It shows that the proposed fully connected RIS aided RSMA scheme outperforms all other baseline schemes. The relative sum-rate gains of fully RIS RSMA over single RIS RSMA and no RIS RSMA are at least $4.6\%$ and $16.5\%$, respectively, when the SNR is $30$ dB. By using the fully connected RIS aided RSMA model, the sum-rate of the multi-user multi-antenna network increases significantly.
Moreover, the single connected RIS aided RSMA achieves approximately the same sum-rate as the fully connected RIS aided SDMA in the high SNR regime. Figure 4: Sum rate versus the number of RIS Elements, when $M=4$, $K=4$, and SNR is $30$ dB. Fig. 4 shows the impact of the number of passive reflecting elements at the RIS (i.e., $N$) to the sum-rate of different strategies when $M=4$, $K=4$, and SNR is $30$ dB. For both fully RIS RSMA and SDMA schemes, the sum-rate increases faster than the corresponding single RIS RSMA and SDMA as $N$ increases. Particularly, the single RIS RSMA achieves a higher sum-rate than the fully RIS SDMA when the number of elements is less than $16$. In this regime, the gain obtained by RSMA scheme is more significant than the gain obtained by fully connected RIS. Figure 5: Convergence of the algorithms in one channel realization. Fig. 5 illustrates the convergence of Algorithm 2 for the fully RIS RSMA scheme and other baseline schemes (no RIS schemes are not included since they only adopt Algorithm 1) when $M=4$, $K=4$, $N=32$ and SNR is $25$ dB. It can be observed that single connected RIS aided schemes converge faster than the fully connected RIS aided schemes due to the smaller number of variables in the RIS scattering matrix. In general, the algorithm can converge with 100 iterations. ## V Conclusion In this work, we propose a fully connected RIS aided RSMA downlink transmission network. The beamforming vectors at the BS and the scattering matrix of the fully connected RIS are jointly designed to maximize the sum- rate of the network. To solve this problem, we propose an effective algorithm that alternatively optimizes the beamforming and scattering matrices. Simulation results show the outstanding spectral efficiency of the proposed fully connected RIS aided RSMA scheme over the existing transmission schemes. It acts as a new benchmark for linearly precoded multi-user multi-antenna networks. Moreover, we show that the single connected RIS aided RSMA can achieve approximately the same spectral efficiency as the fully connected RIS aided SDMA in the high SNR regime. Therefore, we conclude that by marrying RSMA and RIS, the spectral efficiency can be enhanced significantly. ## References * [1] O. Dizdar, Y. Mao, Y. Xu, P. Zhu, and B. Clerckx, “Rate-splitting multiple access for enhanced URLLC and eMBB in 6G,” _Proc. Int. Symp. Wirel. Commun. Syst. (ISWCS)_ , 2021. * [2] H. Tataria, M. Shafi, A. F. Molisch, M. Dohler, H. Sjoland, and F. Tufvesson, “6G wireless systems: Vision, requirements, challenges, insights, and opportunities,” _Proc. IEEE_ , vol. 109, no. 7, pp. 1166–1199, 2021. * [3] Y. Mao, O. Dizdar, B. Clerckx, R. Schober, P. Popovski, and H. V. Poor, “Rate-splitting multiple access: Fundamentals, survey, and future research trends,” _arXiv:2201.03192_ , 2021. * [4] Y. Mao, B. Clerckx, and V. O. Li, “Rate-splitting multiple access for downlink communication systems: Bridging, generalizing, and outperforming SDMA and NOMA,” _Eurasip J. Wireless Commun. Networking_ , 2018. * [5] B. Clerckx, H. Joudeh, C. Hao, M. Dai, and B. Rassouli, “Rate splitting for MIMO wireless networks: A promising PHY-layer strategy for LTE evolution,” _IEEE Commun. Mag._ , vol. 54, no. 5, pp. 98–105, 2016. * [6] H. Joudeh and B. Clerckx, “Sum-rate maximization for linearly precoded downlink multiuser MISO systems with partial CSIT: A rate-splitting approach,” _IEEE Trans. Commun._ , vol. 64, no. 11, pp. 4847–4861, 2016\. * [7] Y. Mao, B. Clerckx, and V. O. K. 
Li, “Rate-splitting for multi-antenna non-orthogonal unicast and multicast transmission: Spectral and energy efficiency analysis,” _IEEE Trans. Commun._ , vol. 67, no. 12, pp. 8754–8770, 2019. * [8] B. Clerckx, Y. Mao, R. Schober, E. Jorswieck, D. J. Love, J. Yuan, L. Hanzo, G. Y. Li, E. G. Larsson, and G. Caire, “Is NOMA efficient in multi-antenna networks? A critical look at next generation multiple access techniques,” _IEEE open J. Commun. Soc._ , 2021. * [9] Y. Mao, B. Clerckx, and V. O. Li, “Energy efficiency of rate-splitting multiple access, and performance benefits over SDMA and NOMA,” in _Proc. Int. Symp. Wirel. Commun. Syst. (ISWCS)_ , 2018, pp. 1–5. * [10] Y. Mao, B. Clerckx, J. Zhang, V. O. Li, and M. A. Arafah, “Max-min fairness of k-user cooperative rate-splitting in MISO broadcast channel with user relaying,” _IEEE Trans. Wireless Commun._ , vol. 19, no. 10, pp. 6362–6376, 2020. * [11] Y. Mao and B. Clerckx, “Beyond dirty paper coding for multi-antenna broadcast channel with partial CSIT: A rate-splitting approach,” _IEEE Trans. Commun._ , vol. 68, no. 11, pp. 6775–6791, 2020. * [12] Q. Wu, S. Zhang, B. Zheng, C. You, and R. Zhang, “Intelligent reflecting surface aided wireless communications: A tutorial,” _IEEE Trans. Commun._ , 2021. * [13] Q. Wu and R. Zhang, “Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 11, pp. 5394–5409, 2019. * [14] H. Guo, Y.-C. Liang, J. Chen, and E. G. Larsson, “Weighted sum-rate maximization for reconfigurable intelligent surface aided wireless networks,” _IEEE Trans. Wireless Commun._ , vol. 19, no. 5, pp. 3064–3076, 2020. * [15] X. Yu, V. Jamali, D. Xu, D. W. K. Ng, and R. Schober, “Smart and reconfigurable wireless communications: From IRS modeling to algorithm design,” _arXiv preprint arXiv:2103.07046_ , 2021. * [16] S. Shen, B. Clerckx, and R. Murch, “Modeling and architecture design of reconfigurable intelligent surfaces using scattering parameter network analysis,” _IEEE Trans. Wireless Commun._ , pp. 1–1, 2021. * [17] Z. Yang, J. Shi, Z. Li, M. Chen, W. Xu, and M. Shikh-Bahaei, “Energy efficient rate splitting multiple access (RSMA) with reconfigurable intelligent surface,” in _IEEE Int. Conf. Commun. Workshops (ICC Workshops)_ , 2020, pp. 1–6. * [18] H. Fu, S. Feng, and D. W. Kwan Ng, “Resource allocation design for IRS-aided downlink MU-MISO RSMA systems,” in _IEEE Int. Conf. Commun. Workshops (ICC Workshops)_ , 2021, pp. 1–6. * [19] K. Weinberger, A. A. Ahmad, and A. Sezgin, “On dynergistic benefits of tate dplitting in IRS-assisted cloud radio access networks,” in _IEEE Int. Conf. Commun. (ICC)_ , 2021, pp. 1–6. * [20] A. Bansal, K. Singh, and C.-P. Li, “Analysis of hierarchical rate splitting for intelligent reflecting surfaces-aided downlink multiuser MISO communications,” _IEEE open J. Commun. Soc._ , vol. 2, pp. 785–798, 2021\. * [21] A. Bansal, K. Singh, B. Clerckx, C.-P. Li, and M.-S. Alouini, “Rate-splitting multiple access for intelligent reflecting surface aided multi-user communications,” _IEEE Trans. Veh. Technol_ , vol. 70, no. 9, pp. 9217–9229, 2021. * [22] D. M. Pozar, “Microwave engineering USA: John Wiley & Sons,” 2009. * [23] N. K. Kundu, Z. Li, J. Rao, S. Shen, M. R. McKay, and R. Murch, “Optimal grouping strategy for reconfigurable intelligent surface assisted wireless Ccommunications,” _arXiv preprint arXiv:2111.10550_ , 2021. * [24] A. Jolly, S. Biswas, and K. 
Singh, “An analysis on rate-splitting multiple access for IRS aided 6G communication,” _arXiv preprint arXiv:2106.04418_ , 2021. * [25] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” _IEEE Trans. Signal Processing_ , vol. 59, no. 9, pp. 4331–4340, 2011. * [26] S. S. Christensen, R. Agarwal, E. De Carvalho, and J. M. Cioffi, “Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design,” _IEEE Trans. Wireless Commun._ , vol. 7, no. 12, pp. 4792–4799, 2008. * [27] M. Grant, S. Boyd, and Y. Ye, “CVX: Matlab software for disciplined convex programming,” 2008.
# Non-minimal coupling inspires the Dirac cosmological model H<EMAIL_ADDRESS>H. <EMAIL_ADDRESS>A. H<EMAIL_ADDRESS>U. K<EMAIL_ADDRESS>1 Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), University of Maragheh, P.O. Box 55136-553, Maragheh, Iran 2 Physics Department, Faculty of Sciences, University of Sistan and Baluchestan, Zahedan, Iran 3 Department of Mathematics, Institute of Applied Sciences and Humanities, GLA University, Mathura-281406, Uttar Pradesh, India ###### Abstract In the framework of the generalized Rastall theory (GRT), we study the ability of a non-minimal coupling between geometry and matter fields in order to provide a setting which allows for a variable $G$ during the cosmic evolution. In this regard, the compatibility of this theory with Dirac hypothesis on the variations of $G$ is investigated, and additionally, the possibility of obtaining the current accelerated universe is also addressed. In summary, our study indicates that, in GRT, having in hand the $G$ profile, one may find the corresponding non-minimal coupling between the energy source and geometry and vise versa, in a compatible way with the current accelerated universe. ## I Introduction The idea that $G$ (the Newtonian gravitational coupling) has probably experienced diverse values during the cosmic evolution has many motivations. It began with Dirac’s proposal dir1 ; dir2 ; dir3 which states that, the ubiquitousness of certain large dimensionless numbers (LDN’s), arising in combinations of physical constants and cosmological quantities WZE was not a coincidence but an outcome of an underlying relationship between them BTB . In his proposal, Dirac pointed out that the electrical force between proton and electron within a hydrogen atom i.e., $F_{e}=e^{2}/4\pi\epsilon_{0}r^{2}$ is a large number being 40 orders of magnitude greater than their gravitational force $F_{G}=Gm_{p}m_{e}/r^{2}$, i.e., $\displaystyle{\rm LN}_{1}=\frac{F_{e}}{F_{G}}=\frac{e^{2}}{4\pi\epsilon_{0}Gm_{p}m_{e}}\approx 10^{40},$ (1) where $m_{e},e,m_{p},\epsilon_{0}$ and $G$ are the mass and charge of electron, the proton mass, the vacuum permittivity and gravitational constant, respectively. On the other side, the ratio of the age of the universe and the time for light to traverse an electron is also nearly of the same size, i.e., $\displaystyle{\rm LN}_{2}=\frac{t}{e^{2}/4\pi\epsilon_{0}m_{e}c^{3}}\approx 10^{40}.$ (2) Dirac then suggested that the above two quantities are equal. As a result of such a relationship, some of the fundamental constants cannot remain constant for ever since ${\rm LN}_{2}$ varies with the age of the universe. According to Dirac’s hypothesis, atomic parameters cannot change with time and thus $G$ should change inversely with time, i.e., $G\propto t^{-1}$ CHK , see also DIRACREV for recent reviews. Since the advent of this idea, it has led to interesting implications within theoretical physics, and has attracted a great deal of attention during the past decades ras2 ; vin ; sab ; bap ; bee ; wu ; deg ; bar1 ; bar2 ; bar3 ; mans ; gaz ; clif ; bro ; sol ; uza1 ; uza2 ; smo ; fri ; les . Moreover, it has even interesting power to justify baryogenesis les , the current and primary accelerated universes ell and can support the de Sitter spacetime uza1 ; uza2 . In Newtonian gravity one is allowed to write an explicit time variation of $G$ without the need of satisfying any further constraint. However, the situation is different in GR as there are further constraints to be satisfied. 
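Before turning to the relativistic constraints, it is worth noting that the two large numbers in (1) and (2) are easy to reproduce numerically. The short sketch below (Python assumed; approximate CODATA values of the constants and a present cosmic age of roughly $13.8$ Gyr are inserted by hand, so the outputs are only indicative) confirms that both ratios are of order $10^{40}$.

```python
import math

# approximate CODATA values (SI units)
e     = 1.602176634e-19     # elementary charge [C]
eps0  = 8.8541878128e-12    # vacuum permittivity [F/m]
G     = 6.67430e-11         # Newtonian gravitational constant [m^3 kg^-1 s^-2]
m_p   = 1.67262192369e-27   # proton mass [kg]
m_e   = 9.1093837015e-31    # electron mass [kg]
c     = 2.99792458e8        # speed of light [m/s]
t_now = 13.8e9 * 3.156e7    # assumed present age of the universe [s] (~13.8 Gyr)

coulomb = e**2 / (4.0 * math.pi * eps0)            # e^2 / (4 pi eps0)

LN1 = coulomb / (G * m_p * m_e)                    # Eq. (1): electric-to-gravitational force ratio
LN2 = t_now / (coulomb / (m_e * c**3))             # Eq. (2): cosmic age / electron light-crossing time

print(f"LN1 = {LN1:.2e}")                          # ~2 x 10^39
print(f"LN2 = {LN2:.2e}")                          # ~5 x 10^40
```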
Consider the Einstein field equation $G^{\mu}_{\,\nu}=8\pi GT^{\mu}_{\,\,\nu}$ with the assumption of $G=G(t)$ and $c\equiv 1$. If one takes the covariant divergence of this equation the left hand side vanishes as a result of Bianchi identity. Then, if the ordinary energy-momentum conservation law (OCL) is assumed to hold, i.e., $T^{\mu}_{\,\,\nu;\mu}=0$, one finds that $G$ must be a constant with respect to spacetime coordinates, i.e., $\partial G/\partial x^{\mu}=0$ always. In this respect, GR does not allow for any variation in the gravitational coupling $G$ owing to the fact that the Einstein tensor is divergence free and the divergence of energy-momentum tensor is also zero. Hence, in the light of Dirac’s proposal, some modifications of GR field equation are essential. This is because, if we simply let $G$ to be a variable then the OCL is violated CanutoAdams . In this respect, investigating the effects of a varying $G$ can be performed only through modified field equations along with modified conservation laws. From these arguments, one may intuitively imagine that a varying $G$ could contribute as a new degree of freedom within the OCL. As in GR, $G$ denotes mutual relation between geometry and matter fields, hence, variations of $G$ together with the violation of OCL may be considered as a signal for the idea that another relation between geometry and matter fields may exist that connects their changes to each other. However, there are modifications of GR with a varying $G$ that respect the OCL such as Brans-Dicke theory, in which, the dynamical scalar field $\phi$ can be considered as the origin of gravitational coupling and thus it varies as $G\propto\frac{1}{\phi}$ 11 ; 12 ; 13 ; 14 ; bar2 . OCL, as one of the cornerstones of GR pois , is not respected in all modified gravity theories, for example, it is broken in the non-minimal curvature matter coupling theories od1 ; all ; koi ; bert ; hark ; car ; boh ; ras1 ; mor1 ; mora . Rastall gravity is a pioneering theory in this area ras1 in accordance with various observations li ; raw1 ; raw2 ; raw3 ; maj ; arb ; rah1 ; rah2 ; rah3 ; mor2 ; man ; ortiz2020 ; shabooni2020 and its corresponding cosmology avoids the age and entropy problems arisen in the framework of the standard cosmology fab . In fact, this theory can even provide a better platform for describing the matter dominated era compared to the Einstein theory raw2 . A generalized form of this theory allows us to relate the current and primary accelerated universe to the ability of the spacetime to couple with the energy-momentum sources, filling the background, and in fact, introduces this coupling as a candidate for dark energy and inflaton field mor1 . In addition to inflationary models powered by employing varying $G$ theories ell , there are also other models to describe inflation without considering an inflaton field jos ; mor1 ; wat ; gam . In Ref. mor1 , it has been shown that while the existence of an interaction between the geometry and matter fields may model the primary and current inflationary eras, it does not necessarily lead to the break-down of OCL. In fact, if geometry has the ability of being non-minimally coupled with the matter fields, then this ability may support the primary inflationary era and the current accelerated phase mor1 . To obtain these results, authors focus on the special case of $T^{\mu\nu}_{\ \ ;\mu}=0$, and find out the form of non-minimal coupling in each cosmic era. 
The study of various non-minimal couplings can at least make us familiar with their consequences and properties which may finally lead to a better understanding of spacetime that helps us provide better predictions about its behavior and nature. In GRT, cosmological scenarios jos ; mor1 ; das ; lin imply the power of non-minimal coupling in $i$) providing both singular and non-singular universes, $ii$) describing the particle production process, $iii$) avoiding the coincidence problem, and $iv$) playing the role of dark energy (unlike the original Rastall theory mor1 ; batis ). In this regard, thermodynamically it has also been shown that the confusion in defining energy and some of its outcomes which may lead to the OCL generalization (or equivalently, the breakdown of OCL) could make the cosmos dark mor3 ; mor4 . Since in Rastall gravity, the gravitational coupling is a constant, but differs from those of GR and Newtonian gravity (NG) mor5 ; ras1 , Rastall theory (and indeed a mutual non-minimal coupling between the geometry and matter fields in the Rastall way) cannot provide a theoretical basis for the probable variations of $G$ during the cosmic evolution. These points will be reopened in more details in the next section. Motivated by the above arguments, it is reasonable to $i$) examine the ability of non-minimal coupling between geometry and matter fields in producing a non- constant $G$, and also $ii$) study the results of a non-constant $G$ in the framework of Rastall theory. The latter is tried to be answered by some authors in Ref. ref , by combining Rastall and Brans-Dicke theories with each other. In the present study, the changes in $G$ is not originated by the Rastall theory meaning that the first part is still unsolved and debateable. We therefore focus on GRT to describe the compatibility of a non-minimal coupling with Dirac’s idea on evolution of $G$. Indeed, we are eager to show that, at least phenomenologically, a non-minimal coupling may itself change $G$ and play the role of dark energy. The present work is then arranged as follows. In Sects. II and III, a brief review on the Rastall theory and its generalization mor1 has been provided, and some of their predictions about the variations of $G$ are addressed. Sect. IV includes our survey on the possibility of explaining a well-known Dirac cosmological model, previously introduced by other authors, within the framework of GRT. To show the ability of non-minimal coupling in satisfying Dirac hypothesis and describing the cosmic evolution, simultaneously, a new model is also introduced in Sect. V. Sect. VI is devoted to concluding remarks. Here, we use $c=\hbar=1$ units. ## II Rastall theory and a model for varying $G$ Originally, P. Rastall argued that the OCL may not be valid in a curved spacetime leading to ras1 $\displaystyle T^{\mu\nu}_{\ \ ;\mu}\neq 0,$ (3) in the non-flat spacetimes. From the mathematical point of view, $T^{\mu\nu}_{\ \ ;\mu}$ is a ranked one tensor field written as $T^{\mu\nu}_{\ \ ;\mu}=Q^{,\nu}$ where $Q$ is an unknown scalar function found out from other parts of physics, mathematics and observations ras1 . Since $Q$ is a scalar and Rastall hypothesis admits the violation of OCL in a curved spacetime (where Ricci scalar is not always zero), therefore Ricci scalar, R, can be considered as a suitable suggestion for $Q$, and thus ras1 $\displaystyle T^{\mu\nu}_{\ \ ;\mu}=\lambda^{\prime}R^{;\nu},$ (4) where $\lambda^{\prime}$ is called the Rastall constant parameter. 
Using the Bianchi identity, it is easy to get $\displaystyle G_{\mu\nu}+\kappa^{\prime}\lambda^{\prime}g_{\mu\nu}R=\kappa^{\prime}T_{\mu\nu},$ (5) which $\kappa^{\prime}$ is a constant ras1 called the Rastall gravitational coupling constant. Applying the Newtonian limit on this result, we obtain ras1 $\displaystyle\frac{\kappa^{\prime}}{4\kappa^{\prime}\lambda^{\prime}-1}\left(3\kappa^{\prime}\lambda^{\prime}-\frac{1}{2}\right)=\kappa_{G},$ (6) where $\kappa_{G}\equiv 4\pi G$. Hence, since $\kappa^{\prime}$ and $\lambda^{\prime}$ are constants, $G$ should also be a constant as well (the current value of $G$, namely $G_{0}$, is proper option leading to $\kappa_{G}\equiv\kappa_{G_{0}}=4\pi G_{0}$). We therefore conclude that, since the left hand side of (6) is a constant then a mutual non-minimal interaction between the geometry and matter fields within the framework of original version of Rastall gravity does not support the varying $G$ theories. Eq. (6) also reveals that the Rastall gravitational coupling constant ($\kappa^{\prime}$) differs from that of GR ($2\kappa_{G}=8\pi G$) and only if $\lambda^{\prime}=0$ then they will be equal. It is also useful to note that one may use Eq. (5) in order to introduce the generalized energy-momentum tensor $\Theta_{\mu\nu}=T_{\mu\nu}-(\kappa^{\prime}\lambda^{\prime})/(4\kappa^{\prime}\lambda^{\prime}-1)Tg_{\mu\nu}$ which finally leads to the GR counterpart form of the Rastall field equations, given as $G_{\mu\nu}=\kappa^{\prime}\Theta_{\mu\nu}$. In this manner, although the obtained field equations are similar to those of GR, their solutions for $T_{\mu\nu}$ differ in general from those of GR mor4 ; dar , a result confirmed by various observational data, see e.g., li ; mor2 ; dar and references therein). One can also generalize the Rastall theory by considering $\lambda^{\prime}\rightarrow\lambda$, where $\lambda$ is a varying parameter. Therefore Eq. (4) is extended as follows mor1 $\displaystyle T^{\mu\nu}_{\ \ \ ;\mu}=\left(\lambda R\right)^{;\nu},$ (7) which finally leads to $\displaystyle G_{\mu\nu}+\kappa\lambda g_{\mu\nu}R=\kappa T_{\mu\nu},$ (8) where $\kappa$ is again a constant but $\lambda$ can change over time. Using the trace of Eq. (8), one can also rewrite this equation as $G_{\mu\nu}+\tau Tg_{\mu\nu}=\kappa T_{\mu\nu},$ (9) in which $\tau=\frac{\kappa^{2}\lambda}{4\kappa\lambda-1}.$ (10) Now, since $\kappa$ is constant, the covariant derivative of Eq. (9) leads to $\tau^{,\nu}T+\tau T^{,\nu}=\kappa T^{\nu\,\,\,;\mu}_{\,\,\,\mu},$ (11) meaning that even if OCL is respected and until $\tau\neq constant$ (or equally, $\lambda\neq constant$), the non-minimal coupling affects the evolution of the energy-momentum source and vice versa mor1 . Therefore, unlike the Rastall theory, OCL can be met in this framework even in the presence of non-minimal coupling. In this regard, it is shown that, in the framework of Eq. (8), even if OCL is met, the accelerated universe can be explained under the shadow of $\lambda$ without resorting to a dark energy source mor1 . Now, considering the Newtonian limit (ignoring the pressure of $T_{\mu\nu}$ and utilizing relation $R_{00}=\nabla^{2}\phi$, in which $\phi$ denotes the Newtonian potential mor6 ), one can easily find $\displaystyle\frac{\kappa}{4\kappa\lambda-1}\left(3\kappa\lambda-\frac{1}{2}\right)=\kappa_{G}.$ (12) Due to the similarity of Eqs. 
(8) and (5), one could expect that the Newtonian limit of field equations (8) is obtainable by replacing $\kappa^{\prime}$ and $\lambda^{\prime}$ with $\kappa$ and $\lambda$, respectively, in Eq. (6). Eq. (12) also indicates that $G$ (or equally $\kappa_{G}$) does not necessarily remain constant in this theory. Therefore, this generalization of Rastall theory provides a basis for theories including a varying $G$ dir1 ; dir2 ; vin ; sab ; bap ; bee ; wu ; mans ; gaz ; bar1 ; bar2 ; bar3 ; clif ; uza1 ; uza2 ; smo ; fri ; les . In fact, this equation tells that a non-minimal coupling between the geometry and matter fields can make $G$ variable mot meaning that such coupling can be considered as a theoretical basis for varying $G$ theories. ## III Newtonian limit, a model for running $G$, and the value of $\kappa$ Now, using Eq. (12), and following Ref. mor1 , in which $\kappa\lambda\equiv\beta=[4+\theta(1+z)^{3}]^{-1}$, where $\theta$ is an unknown constant and $z$ denotes the redshift, one can obtain $\displaystyle\kappa_{G}=\frac{\kappa}{2}\left[1-\frac{2}{\theta(1+z)^{3}}\right],$ (13) finally leading to $\displaystyle\kappa_{G}=\frac{\kappa}{2}\left[1-\frac{2}{\theta}\right]\equiv\kappa_{G_{0}},$ (14) and $\displaystyle\kappa_{G}=\frac{\kappa}{2},$ (15) for $z\rightarrow 0$ and $z\rightarrow\infty$, respectively. Based on Ref. mor1 , whenever $0<\theta\leq 1/2$ (leading to $\beta>0$), the current accelerated universe is explainable in the presence of OCL, and without considering a dark energy-like source. Moreover, expression $\beta=[4+\theta(1+z)^{3}]^{-1}$ is present in both of the matter dominated era (MDE) and the current accelerated universe mor1 . Hence, Eq. (15) can be considered as the value of $G$ at the beginning of MDE whereas the value of $\kappa$ is obtainable by using Eq. (14) $\displaystyle\kappa=\frac{8\pi G_{0}}{1-\frac{2}{\theta}},$ (16) combined with Eq. (15) to see that $\kappa$, and thus $\kappa_{G}$ are negative at the beginning of MDE. Therefore, in the model proposed in Ref. mor1 which still respects OCL in the framework of (8), $G$ is not always positive during the cosmic evolution. Negative values of $\kappa$ provide a setting for baryonic matters to support traversable wormholes in the Rastall framework mor5 . Moreover, in the framework of GRT, it has been shown that negative values of $\kappa$ could have their own effects on matter perturbations and formation of structures in large scale universe AHH2020 . In this regard, overdense and underdense regions in the universe could form periodically so that both large scale structures and voids could form as the universe evolves from MDE to present time. Also, emergence of structures in a class of alternative theories of gravity has been reported in Lohiya1996 , where the authors considered a non-minimally coupled scalar field in addition to an induced negative gravitational constant and studied structure formation with repulsive gravitation on the large scale. In the framework of general scalar tensor-theories, a cosmological mechanism has been proposed in which it is possible for $G$ to change sign from a positive branch (attracting) to a negative branch (repulsive gravity) and vice versa Nunez2019 . It is also worth mentioning that negative values of $G$ have previously been reported in some other approaches studying the variations of $G$ bar2 ; uza1 ; uza2 . 
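For the profile $\kappa\lambda=[4+\theta(1+z)^{3}]^{-1}$ adopted above, Eqs. (13), (14) and (16) give an explicit redshift dependence of the gravitational coupling, $G(z)/G_{0}=\left[1-2/(\theta(1+z)^{3})\right]/\left(1-2/\theta\right)$. A small numerical sketch follows (Python assumed; the value $\theta=0.4$ is an arbitrary choice inside the admissible range $0<\theta\leq 1/2$):

```python
import numpy as np

def G_ratio(z, theta=0.4):
    """G(z)/G0 from Eqs. (13)-(14): kappa_G = (kappa/2)[1 - 2/(theta (1+z)^3)]."""
    return (1.0 - 2.0 / (theta * (1.0 + z) ** 3)) / (1.0 - 2.0 / theta)

z = np.array([0.0, 0.5, 1.0, 3.0, 10.0, 1e3])
print(np.round(G_ratio(z), 4))
# The ratio equals 1 at z = 0 and tends to 1/(1 - 2/theta) = -0.25 (for theta = 0.4)
# at high redshift, i.e. the effective coupling is negative at the beginning of the
# matter dominated era and grows with time towards its present value, in agreement
# with Eqs. (15)-(16) and the discussion above.
```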
Beside the effects of repulsive gravity (represented by a universal negative coupling) on the evolution of perturbations and formation of structures, the study of possible consequences of $\kappa<0$ on the stability of the model is of particular importance. In this regard, from the viewpoint of perturbative analysis, the existence of a repulsive gravity phase in the evolution of the universe could lead to growing models with respect to scalar perturbations producing then, large inhomogeneities. Hence a repulsive phase may destroy homogeneity and in this sense it may be unstable Batista2001 . In Star1981 , it has been discussed that a transition from positive gravitational coupling $G$ to negative one results in an instability, in such a way that, small deviations from isotropy and homogeneity within the gravitational field will grow unboundedly, leading to a true cosmological singularity at the boundary between gravity and anti gravity. Also, investigating classical stability of the model through dynamical system approach is of long-standing interest and significance. Work along this line has been carried out for a class of GRT models Lin2020 , where the authors have shown that the eventual fate of the universe ends in late time attractors which are classically stable. However, investigating these issues for the present model needs a deeper analysis with more scrutiny and future studies will be reported elsewhere. Finally, we note that, since $\dot{G}$ does not decrease with time for $0<\theta\leq 1/2$ ($\dot{G}>0$ in this manner), this model does not respect the Dirac’s hypothesis claiming that $G$ should decrease as a function of time dir1 ; dir2 ; vin ; bap . Hence, more comprehensive non-minimal couplings are needed to provide settings for Dirac hypothesis and also to model the cosmic evolution without considering a mysterious fluid (dark energy), simultaneously. ### III.1 Another possibility In Ref. das , choosing $\lambda=(1+d_{0}H)/[3\kappa(w+1)]$, in which $w\equiv p/\rho$ (where $p$ and $\rho$ denote the pressure and energy density of the cosmic fluid, respectively), it has been shown that non-singular cosmic evolution is obtainable in GRT. In this case $d_{0}$ is a free parameter, and some outcomes of this proposal in various cosmic eras have also been studied in Ref. das . Accepting this proposal along with considering the unit $\kappa=8\pi G_{0}$ and also assuming $G(H_{0})=G_{0}$ (which helps us in finding $d_{0}$), one easily reaches $\displaystyle G(H)=G_{0}\frac{3(1-w)H_{0}-6H}{(1-3w)H_{0}-4H},$ (17) where $H_{0}$ is the current value of $H$ and use has been made of Eq. (12). ## IV Dirac cosmological model As in the present model there is no evolution equation for the variation of $G$ which is promoted as a dynamical field, one then has to impose a suitable ansatz on the behavior of this parameter. Based on Dirac hypothesis, $G$ should decrease with time, i.e, $G\propto t^{-1}$ CHK . In general, one may consider $G=G_{0}f$, in which $f$ is a decreasing function of time dir1 ; dir2 ; vin ; bap ; clif ), in order to preserve Dirac hypothesis. Now, combining Eq. (12) with $\kappa=8\pi G_{0}\alpha$ raw2 , along with Eqs. 
(7) and (8) for a flat FLRW universe, one finds $\displaystyle\gamma\equiv\lambda\kappa=\frac{f-\alpha}{4f-6\alpha},$ (18) $\displaystyle 3\int(\rho+p)\frac{da}{a}=\frac{1}{2\alpha}\Big{[}(f-3\alpha)\rho-3(f-\alpha)p\Big{]},$ $\displaystyle H^{2}=\frac{1}{6}\Big{[}(3\alpha-f)\rho+3(f-\alpha)p\Big{]},$ $\displaystyle q=-1-\frac{\dot{H}}{H^{2}}=-1+\frac{3\alpha(\rho+p)}{\rho(3\alpha-f)+3(f-\alpha)p},$ whenever a fluid with energy density $\rho$ and pressure $p$ fills the background. We note that $\gamma$ is a varying parameter and, $q$ and $a$ denote deceleration parameter and scale factor, respectively, and we also have assumed $8\pi G_{0}=1$. Figure 1: The evolution of $q$ and state parameter $w$ versus $z$ for $H(z=0)=67$ dom . Upper panels are provided for the case (i) and the lower ones are depicted for case (ii) discussed in Sect. IV. The model parameters used to draw the curves of $w$ are the same as those of $q$ diagrams. The case with $f=a^{-n}$ leads to a decreasing function of time whenever $n>0$ gaz ; smo . In this manner, assuming $w\equiv p/\rho=0$, together with using Eqs. (18), one easily finds $q=(3\alpha-1)^{-1}$, and $\rho=\rho_{0}a^{n}(1-3\alpha a)^{-(n+2)/n}$, where $\rho_{0}$ is the integration constant. These results indicate that, at limit $a\rightarrow 1$, the obtained pressureless fluid can accelerate the universe expansion with $q\leq-1/2$ for $-1/3\leq\alpha<1/3$. Consequently, the non-minimal coupling $\gamma=[(1+z)^{n}-\alpha]/[4(1+z)^{n}-6\alpha]$ allows $G$ to vary as $G=G_{0}(1+z)^{n}$ gaz , where we used the $1+z=1/a$ relation. It is also easy to see that the universe described by this model has begun from a primary inflationary phase ($q=-1$) corresponding to the $a\rightarrow 0$ point. In fact, in this limit, we also have $\gamma=1/4$, a value that supports an inflationary phase for even an empty universe mor1 . Now, let us consider two more comprehensive cases i.e., $i$) $p=k\rho^{1+1/m}$, where $m$ and $k$ are unknown constants to be evaluated later, and $ii$) $p=\sigma\rho/(a-b\rho)-c\rho^{2}$ in which $\sigma$, $a$, $b$ and $c$ are unknown coefficients. In this manner, as it is obvious from Fig. 1, a proper behavior is obtainable for the cosmos. Here, $w\equiv p/\rho$ denotes the equation of state of cosmic fluids. Depending on the values of unknown parameters, the universe can also experience a transition at $z_{t}$ which can even take values smaller than $1$. Clearly, both fluids behave as dark energy sources, and the corresponding non-minimal coupling can not be considered as a dark energy source. ## V A new proposal for $\lambda$ parameter Now, let us consider a flat FRW universe filled by a pressureless fluid with energy density $\rho$ when $\lambda R=\zeta H^{n}$ in which $\zeta$ and $n$ are unknown constants. In this manner, the $\lambda$ parameter takes the form $\displaystyle\lambda=\zeta\frac{H^{n}}{R}=\frac{\zeta}{6}\frac{H^{n}}{\dot{H}+2H^{2}},$ (19) whence, the corresponding Friedmann equations read $\displaystyle H^{2}-\frac{\kappa\zeta}{3}H^{n}=\frac{\kappa}{3}\rho,$ $\displaystyle H^{2}+\frac{2}{3}\dot{H}-\frac{\kappa\zeta}{3}H^{n}=0.$ (20) Defining $\Omega=8\pi G\rho/3H^{2}$, while $\Omega_{0}$ denotes its current value macq , the evolution of $q$ and $G/G_{0}$ have been plotted in Fig. (2). For the employed parameters, transition redshift ($z_{t}$) lies within the range of $0.4\leq z_{t}\leq 0.88$. The sensitivity of diagrams to the values of $\Omega_{0}$ and $H_{0}$ is so weak compared with those of $\zeta$ and $n$ and $\kappa$. 
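The way the curves of Fig. (2) follow from (19) and (20) can be illustrated numerically. From the second equation in (20), $\dot{H}=(\kappa\zeta H^{n}-3H^{2})/2$, and since $dz/dt=-(1+z)H$, one has $dH/dz=(3H^{2}-\kappa\zeta H^{n})/[2(1+z)H]$; $q$ then follows from its definition and the effective coupling from Eq. (12) with $\lambda$ taken from (19). The sketch below (Python with SciPy assumed) integrates this relation for illustrative parameter values, which are not the ones used to produce Figs. 2 and 3, and normalizes $G$ by its $z=0$ value, so the printed numbers only indicate how the quantities are obtained.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters only (not the values behind Figs. 2-3)
kappa, zeta, n, H0 = 1.0, 120.0, 1.0, 67.66

def Hdot(H):
    # second equation in (20): H^2 + (2/3) Hdot - (kappa zeta / 3) H^n = 0
    return 0.5 * (kappa * zeta * H**n - 3.0 * H**2)

def dH_dz(z, y):
    H = y[0]
    return [-Hdot(H) / ((1.0 + z) * H)]            # uses dz/dt = -(1+z) H

def q_of(H):
    return -1.0 - Hdot(H) / H**2                   # deceleration parameter

def kappa_G(H):
    # Eq. (12) with kappa*lambda from Eq. (19): lambda = zeta H^n / (6 (Hdot + 2 H^2))
    kl = kappa * zeta * H**n / (6.0 * (Hdot(H) + 2.0 * H**2))
    return kappa * (3.0 * kl - 0.5) / (4.0 * kl - 1.0)

zs = np.linspace(0.0, 3.0, 301)
sol = solve_ivp(dH_dz, (zs[0], zs[-1]), [H0], t_eval=zs, rtol=1e-8)
H = sol.y[0]
q = q_of(H)
G_over_G0 = kappa_G(H) / kappa_G(H0)               # normalized to the present-day value

print(f"q(z=0) = {q[0]:.3f}, q(z=3) = {q[-1]:.3f}")
print("G/G0 at z = 0, 1, 3:", np.round(G_over_G0[[0, 100, 300]], 3))
```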
Indeed, although we only consider a baryonic source for the current density parameter $\Omega_{0}=0.049$ macq , and $H_{0}=67.66$ agh , the obtained behaviors are also achievable for other candidates of $\Omega$ (such as dark matter) and for other values of $H_{0}$ reported in the literature. Hence, a suitable behavior of $q$ is obtainable by considering only the baryonic content of the universe, meaning that the $\zeta H^{n}$ term may play the role of the unknown parts (dark components) of the cosmos. The Dirac hypothesis is also respected during the cosmic evolution. Remarkably, $G$ will take negative values in the future, meaning that gravity will become repulsive, which speeds up the expansion of the universe even further, i.e., $q$ decreases. All of this happens as a consequence of the non-minimal coupling $\lambda$, which varies during the evolution of the universe. In Fig. (3), $H(z)$ far and the distance modulus ama are plotted for both the $\Lambda$CDM model and our model.
Figure 2: The evolution of $q$ and $G/G_{0}$ assuming $w=0$, for the case discussed in Sec. V. The diagrams for $G/G_{0}$ are plotted using the same model parameters as for the $q$ diagrams.
Figure 3: The evolution of $H(z)$ and $\mu(z)$ whenever $w=0$, for the case discussed in Sec. V. The same values of the parameters as in Fig. 2 have been used. The black dashed lines show $H(z)$ and $\mu(z)$ for the $\Lambda$CDM model.
The negative value of $G$ is the direct result of the assumed $\lambda$, and changes in the values of the model parameters do not affect this result. There are also other works that predict negative values for $G$ bar2 ; uza1 ; uza2 . Theoretically, our model shows that a non-minimal coupling between geometry and matter fields can accelerate the universe expansion and is able to satisfy the Dirac hypothesis.
## VI Concluding remarks
After addressing some properties of cosmological models previously introduced in the framework of GRT mor1 ; das , the implications of GRT for obtaining a varying $G$ have been studied by considering the Newtonian limit of the field equations. Thereafter, following a proposal for the Dirac hypothesis introduced in gaz ; smo , the non-minimal coupling required to support the Dirac model was also obtained. Our results show that the dark sectors of the cosmos can be unified into one cosmic fluid which behaves as a pressureless fluid in the high-redshift limit and also accelerates the universe in line with the current observations (Fig. 1). We also proposed a non-minimal coupling (Sec. V) which can play the role of the dark side of the cosmos while satisfying the Dirac hypothesis. Indeed, the present study addresses a deep connection between the non-minimal coupling (between the matter fields and geometry) and the idea of a variable $G$. This translates into saying that one may find the footprints of a non-minimal coupling between the matter fields and geometry by having the observationally confirmed profile of $G$, and conversely. Although, relying on the Rastall hypothesis on the relation between changes in the spacetime curvature and the violation of OCL, we only focused on the implications of the violation of OCL in cosmology and its connection with the Dirac hypothesis, the OCL violation can also be allowed by quantum considerations such as the uncertainty principle, and in the framework of unimodular gravity it produces significant cosmological outcomes jos . Indeed, even in the framework of GR and thanks to the Bianchi identity, OCL is violated as a result of the existence of a non-constant $G$.
In summary, it was our goal to address $i$) probable connection between Dirac hypothesis and non-minimal couplings, and simultaneously, $ii$) the ability of such couplings in being responsible for the unknown parts (dark sides) of cosmos. Therefore, such couplings need to be further studied from both of the theoretical and observational viewpoints. Finally, we would like to mention that, though Rastall gravity and its generalizations provide interesting results, cosmological models based on this theory need to be accurately tested by observations. In the present model, we tried to explore theoretical consequences of a varying G cosmology based on GRT and also briefly examined observational aspects of the theory. However, a full observational treatment of the present model, e.g., in light of Akarsu2020 , needs to be done and work along this line can be considered as an interesting subject for future studies and developments. ## References * (1) Dirac P. A. M., Nature 139 (1937), 323 * (2) Dirac P. A. M., Proc. Roy. Soc. London, Ser. A 165 (1938), 199. * (3) Dirac P. A. M., Nature, 139, (1937), 1001. * (4) Weyl H. Ann. Phys., 59, 129 (1919); Zwicky F., Phys. Rev., 55, 726 (1939); Eddington A. S., The Mathematical Theory of Relativity, Cambridge University Press, London (1923). * (5) Barrow J. D., Tipler F. J., The Anthropic Cosmological Principle, Oxford University Press, Oxford, (1986); J. D. Barrow, Varying G and Other Constants, In: S$\acute{a}$nchez N., Zichichi A. (eds) Current Topics in Astrofundamental Physics: Primordial Cosmology. NATO ASI Series (Series C: Mathematical and Physical Sciences), vol 511. Springer, Dordrecht (1998). * (6) Chandrasekhar S., Nature 139 (1937), 757; Kothari D. S., Nature, 142 (1938), 354. * (7) S. Ray, U. Mukhopadhyay, S. Ray, A. Bhattacharjee, Int. Journal Mod. Phys. D 28 (2019), 1930014. * (8) Rastall P., Can. J. Phys. 54 (1976), 66 * (9) Vinti J. P., Celestial Mechanics 16 (1977), 391 * (10) De Sabbata V., Acta Cosmologica Zesz. 9 (1980), 63 * (11) Baptista J. P., Batista A. B., Fabris J. C., Revista Brasileira de Fisica. 14 (1984), 208 * (12) Beesham A., Int. J. Theo. Phys. 25 (1986), 1295 * (13) Wu Y. S., Wang Z., Phys. Rev. Lett. 57 (1986), 16 * (14) Degl’Innocenti S. et al., A&A 312 (1996), 345 * (15) Barrow J. D., Mon. Not. R. Astron. Soc. 282 (1996), 1397 * (16) Barrow J. D., 1997, arXiv:gr-qc/9711084. * (17) Barrow J. D., The Constants of Nature, (Vintage Books, London, 2002) * (18) Mansouri R., Nasseri F., Khorrami M., Phys. Lett. A 259 (1999), 194 * (19) Gaztañaga E. et al., Phys. Rev. D 65 (2001), 023506 * (20) Clifton T., Mota D., Barrow J. D, Mon. Not. R. Astron. Soc. 358 (2005), 601 * (21) Bronnikov K. A., Kononogov S. A., Metrologia 43 (2006), 1 * (22) Solà J., J. Phys. A: Math. Theor. 41 (2008), 164066 * (23) Uzan J. P., Rev. Mod. Phys. 75 (2003), 403 * (24) Uzan J. P., Liv. Rev. Relativ. 14 (2011), 2 * (25) Smolin L., Class. Quantum Grav. 33 (2016), 025011 * (26) Fritzsch H., Solà J., Nunes R. C., Eur. Phys. J. C 77 (2017), 193 * (27) Leszczyńska K., Da̧browski M. P., Denkiewicz T., Eur. Phys. J. C 79 (2019), 222 * (28) Ellis G. F. R., Maartens R., Maccallum M. A. H., The Constants of Nature, (Cambridge University Press, UK, 2012). * (29) Canuto, V., Adams, P. J., Hsieh, S. H., Tsiang, E., Phy. Rev. D 16, 6 (1977); Wesson, P., Goodson, R. E., Observ. 101, 105 (1981). * (30) C. Brans, R. H. Dicke, Phys. Rev. 124, 925 (1961). * (31) R. H. Dicke, Phys. Rev. 125, 2163 (1962). * (32) R. H. Dicke, Rev. Mod. Phys. 29, 355 (1957). * (33) R. H. 
Dicke, Nature 192, 440 (1961). * (34) Poisson E., A Relativist’s Toolkit, (Cambridge University Press, UK, 2004). * (35) Nojiri S., Odintsov S. D., Phys. Lett. B 599 (2004), 137 * (36) Allemandi G. et al., Phys. Rev. D 72 (2005), 063505 * (37) Koivisto T.,Class. Quant. Grav. 23 (2006), 4289 * (38) Bertolami O. et al., Phys. Rev. D 75 (2007), 104016. * (39) Harko T., Lobo F. S. N., Galaxies 2 (2014), 410. * (40) Carloni S., Phys. Lett. B 766 (2017), 55. * (41) Boehmer C. G., Carloni S.,Phys. Rev. D 98 (2018), 024054. * (42) Rastall P., Phys. Rev. D 6 (1972), 3357. * (43) Moradpour H. et al., The European Physical Journal C, 77 (2017), 259. * (44) De Moraes W. A. G., Santos A. F., Gen. Relativ. Gravit. 51 (2019), 167. * (45) Li R. et al., Mon. Not. R. Astron. Soc. 486 (2019), 2407. * (46) Al-Rawaf A. S., Taha O. M., Phys. Lett. B 366 (1996), 69. * (47) Al-Rawaf A. S., Taha O. M., Gen. Relat. Gravit. 28 (1996), 935. * (48) Al-Rawaf A. S., Int. J. Mod. Phys. D 14 (2005), 1941. * (49) Majernik V., Gen. Relat. Gravit. 35 (2003), 1007. * (50) Arbab A. I., J. Cosmol. Astropart. Phys. 05 (2003), 008. * (51) Abdel-Rahman A. M. M., Astrophys. Space. Sci. 278 (2001), 383. * (52) Abdel-Rahman A. M. M., Hashim M. H. A., Astrophys. Space. Sci. 298 (2005), 519. * (53) Abdel-Rahman A. M. M., Riad I. F., Astron. J. 134 (2007), 1931. * (54) Moradpour H. et al., Phys. Rev. D. 96 (2017), 123504. * (55) Manna T., Rahaman F., Mondal M., Mod. Phys. Lett. A 35 (2020), 2050034. * (56) S. K. Maurya and F. T.-Ortiz, Phys. Dark Univ. 29 (2020), 100577. * (57) H. Shabani and A. H. Ziaie, Europhysics Letters 129, (2020) 20004. * (58) Fabris J. C., Kerner R., Tossa J., Int. J. Mod. Phys. D 9 (2000), 111. * (59) Josset T., Perez A., Phys. Rev. Lett. 118 118 (2017), 021102. * (60) Watson S. et al., J. Cosmol. Astropart. Phys. 07 (2017), 11. * (61) Gamboa J. et al., Phys. Rev. D 96 (2017), 083534. * (62) Das D., Dutta S., Chakraborty S., Eur. Phys. J. C 78 (2018), 810. * (63) Lin K., Qian W. L., Eur. Phys. J. C 80 (2020), 561. * (64) C. E. M. Batista, M. H. Daouda, J. C. Fabris, O. F. Piattella, D. C. Rodrigues, Phys. Rev. D 85, (2012), 084008. * (65) Moradpour H. et al., Mod. Phys. Lett. A 32 (2017), 1750078 * (66) Moradpour H. et al., Adv. High Energy Phys. 2018 (2018), 7124730 * (67) Moradpour H., Sadeghnezhad N., Hendi S. H., Can. J. Phys. 95 (2017), 1257 * (68) T. R. P. Carames. et al., Eur. Phys. J. C74 (2014) 3145. * (69) Darabi F. et al., Eur. Phys. J. C 78 (2018), 25 * (70) Moradpour H. et al., Mod. Phys. Lett. A 33 (2019), 1950096 * (71) Mota C.E., et al., arXiv:2007.01968. * (72) A. H. Ziaie, H. Moradpour, H. Shabani, Eur. Phys. J. Plus 135 (2020), 916. * (73) D. Lohiya, A. Batra, S. Mehra, S. Mahajan and A. Mukherjee, Astron. Astrophys. Trans. 14 (1997), 199. * (74) I. Ayuso, J. P. Mimoso and N. J. Nunes, Galaxies, 7 (2019), 38. * (75) A. B. Batista, J. C. Fabris and S. V. B. Goncalves, Class. Quant. Grav. 18 (2001), 1389. * (76) A. A. Starobinskij, Pisma v Astronomicheskii Zhurnal, 7, (1981), 67; Soviet Astronomy Letters, 7, (1981), 36, Translation. * (77) K. Lin and W.-L. Qian, Eur. Phys. J. C 80 (2020), 561. * (78) Domínguez A. et al., Astrophys. J. 885 (2019), 137. * (79) Macquart J. et al., Nature 581 (2020), 391. * (80) Farooq O. et al., Astrophys. J. 835 (2017), 26. * (81) Aghanim N. et al., A&A 641 (2020), A6. * (82) Amanullah et al., Astrophys. J. 716 (2010), 712. * (83) O. Akarsu, N. Katirci, S. Kumar, R. C. Nunes, B. Ozturk and S. Sharma, Eur. Phys. J. C 80 (2020), 1050.
# PanAf20K: A Large Video Dataset for Wild Ape Detection and Behaviour Recognition

Otto Brookes [1], Majid Mirmehdi [1], Mimi Arandjelovic [2], Hjalmar Kühl [2,9], Tilo Burghardt [1], with Colleen Stephens, Samuel Angedakin, Katherine Corogenes, Dervla Dowd, Paula Dieguez, Thurston C. Hicks, Sorrel Jones, Kevin Lee, Vera Leinert, Juan Lapuente, Maureen S. McCarthy, Amelia Meier, Mizuki Murai, Emmanuelle Normand, Virginie Vergnes, Erin G. Wessling, Roman M. Wittig, Kevin Langergraber, Nuria Maldonado, Xinyu Yang, Klaus Zuberbühler and Christophe Boesch

[1] Department of Computer Science, University of Bristol, United Kingdom
[2] Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
[3] Wild Chimpanzee Foundation, Leipzig, Germany
[4] Department of Human Evolutionary Biology, Harvard University, Massachusetts, USA
[5] School of Psychology & Neuroscience, University of St Andrews, St. Andrews, Scotland
[6] Faculty of Artes Liberales, University of Warsaw, Warsaw, Poland
[7] Institute for Cognitive Sciences Marc Jeannerod, University of Lyon, Lyon, France
[8] Institute of Human Origins, Arizona State University, Arizona, USA
[9] Senckenberg Museum of Natural History Goerlitz, Goerlitz, Germany

###### Abstract
We present the PanAf20K dataset, the largest and most diverse open-access annotated video dataset of great apes in their natural environment. It comprises more than 7 million frames across $\sim$20,000 camera trap videos of chimpanzees and gorillas collected at 14 field sites in tropical Africa as part of the Pan African Programme: The Cultured Chimpanzee. The footage is accompanied by a rich set of annotations and benchmarks, making it suitable for training and testing a variety of challenging and ecologically important computer vision tasks including ape detection and behaviour recognition. Furthering AI analysis of camera trap information is critical given the International Union for Conservation of Nature now lists all species in the great ape family as either Endangered or Critically Endangered. We hope the dataset can form a solid basis for engagement of the AI community to improve performance, efficiency, and result interpretation in order to support assessments of great ape presence, abundance, distribution, and behaviour and thereby aid conservation efforts. The dataset and code are available from the project website: PanAf20K

###### keywords:
animal biometrics, video dataset, behaviour recognition, wildlife, imageomics, conservation technology

## 1 Introduction

Figure 1: PanAf20K Visual Overview. We present the largest and most diverse open-access video dataset of great apes in the wild. It comprises $\sim$20,000 videos and more than 7 million frames extracted from camera traps at 14 study sites spanning 6 African countries. Shown are 25 representative still frames from the dataset highlighting its diversity with respect to many important aspects such as behavioural activities, species, number of apes, habitat, day/night recordings, scene lighting, and more.

Motivation. As the biodiversity crisis intensifies, the survival of many endangered species grows increasingly precarious, evidenced by species diversity continuing to fall at an unprecedented rate (Vié et al, 2009, Ceballos et al, 2020). The great ape family, whose survival is threatened by habitat degradation and fragmentation, climate change, hunting and disease, is a prime example (Carvalho et al, 2021).
The International Union for Conservation of Nature (IUCN) considers all three member species, that is orangutans, gorillas, chimpanzees (including bonobos), to be either endangered or critically endangered. The threat to great apes has far-reaching ecological implications. Great apes contribute to the balance of healthy ecosystems by seed dispersal, consumption of leaves and bark, and shaping habitats by creating canopy gaps and trails (Haurez et al, 2015, Tarszisz et al, 2018, Chappell and Thorpe, 2022). They also form part of complex forest food webs, their removal from which would have cascading consequences for local food chains. In addition, great apes are our closest evolutionary relatives and a key target for anthropological research. We share 97% of our DNA with the phylogenetically most distant orangutans and 98.8% with the closer chimpanzees and bonobos. The study of great apes, including their physiology, genetics, and behaviour, is essential to addressing questions of human nature and evolution (Pollen et al, 2023). Urgent conservation action for the protection and preservation of these emblematic species is therefore essential. The timely and efficient assessment of great ape presence, abundance, distribution, and behaviour is becoming increasingly important in evaluating the effectiveness of conservation policies and intervention measures. The potential of exploiting camera trap imagery for conservation or biological modelling is well recognised (Kühl and Burghardt, 2013, Tuia et al, 2022). However, even small camera networks generate large volumes of data (Fegraus et al, 2011) and the number and complexity of downstream processing tasks required to perform ecological analysis is extensive. Typically, ecologists first need to identify those videos that contain footage of the target study species followed by further downstream analyses, such as estimating the distance of the animals from the camera (i.e., camera trap distance sampling) to calculate species density or identification of ecologically or anthropologically important behaviours, such as tool use or camera reactivity (Houa et al, 2022). Performing these tasks manually is time consuming and limited by the availability of human resources and expertise, becoming infeasible at large scale. This underlines the need for rapid, scalable, and efficient deep learning methods for automating the detection and assessment of great ape populations and analysis of their behaviours. To facilitate the development of methods for automating the interpretation of camera trap data, large-scale, open-access video datasets must be available to the relevant scientific communities, whilst removing geographic details that could potentially threaten the safety of animals (Tuia et al, 2022). Unlike the field of human action recognition and behaviour understanding, where several large, widely acknowledged datasets exist for benchmarking (Kuehne et al, 2011, Soomro et al, 2012, Kay et al, 2017), the number of great ape datasets is limited and those that are currently available lack scale, diversity and rich annotations. Contribution. In this study, we present the PanAf20K dataset, the largest and most diverse open-access video dataset of great apes in the wild – ready for AI training. The dataset comprises footage collected from 14 study sites across 6 African countries, featuring apes in over 20 distinct habitats (i.e., forests, savannahs, and marshes). 
It displays great apes in over 100 individual locations (e.g., trails, termite mounds, and water sources) displaying an extensive range of 18 behaviour categories. A visual overview of the dataset is presented in Fig. 1. The footage is accompanied by a rich set of annotations suitable for a range of ecologically important tasks such as detection, action localisation, fine-grained and multi-label behaviour recognition. Paper Organisation. Following this introduction, Sec. 2 reviews existing animal behaviour datasets and methodologies for great ape detection and behaviour recognition. Sec. 3 describes both parts of the dataset, the PanAf20K and the PanAf500, and details how the data was collected and annotated. Benchmark results for several computer vision tasks are presented in Sec 4. Sec 5 discusses the main findings as well as any limitations alongside future research directions while Sec 6 summarises the dataset and highlights its potential applications. ## 2 Related Work Great Ape Video Datasets for AI Development. While there have been encouraging trends in the creation of new animal datasets (Swanson et al, 2015, Cui et al, 2018, Van Horn et al, 2018, Beery et al, 2021), there is still only a limited number specifically designed for great apes and even fewer suitable for behavioural analysis. In this section, the most relevant datasets are described. Bain et al. (Bain et al, 2021), curated a large camera trap video dataset ($>40$ hours) with fine-grained annotations for two behaviours; buttress drumming and nut cracking. However, the data and corresponding annotations are not yet publicly available and the range of annotations is limited to two audio-visually distinct behaviours. The Animal Kingdom dataset (Ng et al, 2022), created for advancing behavioural understanding, comprises footage sourced from YouTube (50hr, 30K videos) along with annotations that cover a wide range of actions, from eating to fighting. The MammalNet dataset (Chen et al, 2023), which is larger and more diverse, is also composed from YouTube footage (18K videos, 539 hours) and focuses on behavioural understanding across species. It comprises taxonomy-guided annotations for 12 common behaviours, identified through previous animal behaviour studies, for 173 mammal categories. While both datasets are valuable resources for the study of animal behaviour, they contain relatively few great ape videos since these species make up only a small proportion of the overall dataset. Animal Kingdom spans $\sim$100 videos while MammalNet includes $\sim$1000 videos across the whole great ape family, representing $\sim$0.5% and $\sim$5% of all videos, respectively. Other work to curate great ape datasets has focused annotation efforts on age, sex, facial location, and individual identification (Freytag et al, 2016, Brookes and Burghardt, 2020, Schofield et al, 2019), rather than behaviour. For the study of great ape behaviour, the currently available datasets have many limitations. First, they are too small to capture the full breadth of behavioural diversity. This is particularly relevant for great apes, which are a deeply complex species, displaying a range of individual, paired and group behaviours, that are still not well understood (Tennie et al, 2016, Samuni et al, 2021). Secondly, they are not composed of footage captured by sensors commonly used in ecological studies, such as camera traps and drones. 
This means that apes are not observed in their natural environment and the distribution of behaviours will not be representative of the wild (i.e., biased towards 'interesting' or 'entertaining' behaviours). Additionally, the footage may be biased towards captive or human-habituated animals which display altered or unnatural behaviours and are unsuitable for studying their wild counterparts (Clark, 2011, Chappell and Thorpe, 2022). All these factors may limit the ability of trained models to generalise effectively to wild footage of great apes where conservation efforts are most urgently needed. This, in turn, limits their practical and immediate utility. We aim to overcome these limitations by introducing a large-scale, open-access video dataset that enables researchers to develop models for analysing the behaviour of great apes in the wild and evaluate them against established methods.

Great Ape Detection & Individual Recognition. Yang et al. (Yang et al, 2019) developed a multi-frame system capable of accurately detecting the full body location of apes in challenging camera-trap footage. In more recent work, Yang et al. developed a curriculum learning approach that enables the utilisation of large volumes of unlabelled data to improve detection performance (Yang et al, 2023). Several other works focus on facial detection and individual identification. In early research, Freytag et al. (Freytag et al, 2016) applied YOLOv2 (Redmon and Farhadi, 2017) to localise the faces of chimpanzees. They utilised a second deep CNN for feature extraction (AlexNet (Krizhevsky et al, 2012) and VGGFaces (Parkhi et al, 2015)), and a linear support vector machine for identification. Later, Brust et al. (Brust et al, 2017) extended their work utilising a much larger and diverse dataset. Schofield et al. (Schofield et al, 2019) presented a pipeline for identification of 23 chimpanzees across a video archive spanning 14 years. Similar to (Brust et al, 2017), they trained the single-shot object detector SSD to perform initial localisation, and a secondary CNN model to perform individual classification. Brookes et al. (Brookes and Burghardt, 2020) employed YOLOv3 (Redmon and Farhadi, 2018) to perform one-step simultaneous facial detection and individual identification on captive gorillas.

Great Ape Action & Behaviour Recognition. To date, three systems have attempted automated great ape behavioural action recognition. The first (Sakib and Burghardt, 2020) was based on the two-stream convolutional architecture of (Simonyan and Zisserman, 2014) and uses 3D ResNet-18s for feature extraction and LSTM-based fusion of RGB and optical flow features. They reported a strong top-1 accuracy of 73% across the nine behavioural actions alongside a relatively low average per-class accuracy of 42%. The second, proposed by Bain et al. (Bain et al, 2021), utilises both audio and video inputs to detect two specific behaviours: buttress drumming and nut cracking. Their system utilises a 3D ResNet-18 and a 2D ResNet-18 for extraction of visual and audio features, respectively, in different streams. They achieved an average precision of 87% for buttress drumming and 85% for nut cracking on their unpublished dataset. Lastly, Brookes et al. (Brookes et al, 2023) introduced a triple-stream model that utilises RGB, optical flow and DensePose within a metric learning framework, and achieved top-1 and average per-class accuracy of 85% and 65%, respectively.
## 3 Dataset Overview Figure 2: Manually annotated full-body location, species and behavioural action labels. Sample frames extracted from PanAf20K videos with species (row 1) and behavioural action annotations (row 2) displayed. Green bounding boxes indicate the full-body location of an ape. Species and behavioural action annotations are shown in the corresponding text. Task-focused Data Preparation. The PanAf20K dataset consists of two distinct parts. The first includes a large video dataset containing 19,973 videos annotated with multi-label behavioural labels. The second part comprises 500 videos with fine-grained annotations across $\sim$180,000 frames. Videos are recorded at 24 FPS and resolutions of $720\times 404$ for 15 seconds ($\sim$360 frames). In this section, we provide an overview of the dataset, including how the video data was originally collected (see Sec. 3.1) and annotated for both parts (see Sec. 3.2). ### 3.1 Data Acquisition Camera Trapping in the Wild. The PanAf Programme: The Cultured Chimpanzee has 39 research sites and data collection has been ongoing since January 2010. The data included in this paper samples 14 of these sites and the available data were obtained from studies of varying duration (7–22 months). Grids comprising 20 to 96 $1\times 1$ km cells were established for the distribution of sampling units (to cover a minimum of 20–50 km2 in rainforest and 50–100 km2 in woodland savannah). An average of 29 (range 5–41) movement-triggered Bushnell cameras were installed per site. One camera was installed per grid cell where possible. However, in larger grids cameras were placed in alternate cells. If certain grid cells did not contain suitable habitat, such as grassland in forest-savanna mosaic sites, two cameras were placed instead as far away from each other as possible, in cells containing suitable habitat to maximize coverage. In areas where activities of interest (e.g., termite fishing sites) were likely to take place, a second camera was installed to capture the same scene from a different angle. Cameras were placed approx. 1m high above ground, in locations that were frequently used by apes (e.g., trail, fruit trees). This method ensured a strategic installation of cameras, with maximal chance of capturing footage of terrestrial activity of apes. Both GPS location and habitat type for each location was noted. Footage was recorded for 60 seconds with a 1 second interval between triggers and cameras were visited every 1-3 months for maintenance and to download the recorded footage throughout the study periods. ### 3.2 Data Annotation Figure 3: Number of Apes & Bounding Box Size Distribution in the PanAf500 Data. The top row shows the distribution of apes across frames and videos in (a) and (b), respectively, while the distribution of bounding box sizes is shown in (c). The middle row shows still frame examples of videos containing one, two, four and eight apes (viewing from left to right). The bottom row demonstrates still frames with bounding boxes of various sizes; the colour of bounding box and associated number represent the intra-video individual IDs. Fine-grained Annotation of PanAf500. The PanAf500 was ground-truth labelled by users on the community science platform Chimp&See (Arandjelovic et al, 2016) and researchers at the University of Bristol (Yang et al, 2019, Sakib and Burghardt, 2020) (examples are shown in Fig. 2). 
We re-formatted the metadata from these sources into reproducible, comparable benchmarks ready for AI use in computer vision. The dataset includes frame-by-frame annotations for full-body location, intra-video individual identification, and nine behavioural actions (Sakib and Burghardt, 2020) across 500 videos and $\sim$180,000 frames.

Figure 4: Behavioural Actions in the PanAf500 Data. Examples of each one of the nine behavioural action classes (right) and their distribution (left) across 500 videos. The total number of per-frame annotations for each behavioural action class is shown on top of the corresponding bar.

As shown in Fig. 3, the number of individual apes varies significantly, from one to nine, with up to eight individuals appearing together simultaneously. Individuals and pairs occur the most frequently while groups occur less frequently, particularly those exceeding four apes. Bounding boxes are categorised according to the COCO dataset (Lin et al, 2014) (i.e., $>96^{2}$ pixels for large, $32^{2}$–$96^{2}$ for medium, and $<32^{2}$ for small), with small bounding boxes occurring relatively infrequently compared to large and medium boxes. The behavioural action annotations cover 9 basic behavioural actions: sitting, standing, walking, running, climbing up, climbing down, hanging, sitting on back, and camera interaction. We refer to these classes as behavioural actions in recognition of historical traditions in biological and computer vision disciplines, which would consider them behaviours and actions, respectively. Fig. 4 displays the behavioural action classes in focus together with their per-frame distribution. The class distribution is severely imbalanced, with the majority of samples ($>85\%$) belonging to three head classes (i.e., sitting, walking and standing). The remaining behavioural actions are referred to as tail classes. The same imbalance is observed at the clip level, as shown in Tab. 1, although the distribution of classes across clips does not match the per-frame distribution exactly. While behavioural actions with longer durations (e.g., sitting) have more labelled frames, this does not necessarily translate to more clips. For example, there are more clips of walking and standing than sitting, and more clips of climbing up than hanging, although the latter have fewer labelled frames.

Table 1: Behavioural Action Class Statistics. The total number of clips for each behavioural action alongside the average duration in seconds and frames.

Action | Clips | Time (s) | Frames
---|---|---|---
walking | 747 | $3.49\pm 2.94$ | 83.69
standing | 366 | $4.77\pm 4.59$ | 114.57
sitting | 308 | $10.30\pm 5.51$ | 247.13
climbing up | 81 | $2.11\pm 1.67$ | 50.59
hanging | 50 | $7.35\pm 5.24$ | 176.28
climbing down | 35 | $1.73\pm 1.24$ | 41.57
running | 34 | $2.61\pm 2.06$ | 62.59
camera interaction | 32 | $2.52\pm 3.77$ | 60.59
sitting on back | 26 | $3.89\pm 4.04$ | 93.46

Figure 5: PanAf20K Behaviour Examples. Triplets of example frames for six categories (i.e., feeding, travel, camera reaction, social interaction, chimp carrying and tool use) in the PanAf20K dataset are shown. Note that camera reaction, social interaction and chimp carrying have been abbreviated to reaction, social and carrying, respectively.

Figure 6: Behavioural Annotations of the PanAf20K Dataset. The distribution of behaviour categories for the PanAf20K dataset is shown. Figures above each bar represent the dataset proportion (%) of each class.
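For readers re-implementing the PanAf500 benchmarks, the following minimal Python sketch shows the COCO-style size bucketing by pixel area referred to above. The thresholds follow the COCO convention; the function name and interface are our own illustration and are not part of the released code.

```python
def coco_size_bucket(width, height):
    """Bucket a bounding box into COCO-style size categories by pixel area.

    Thresholds follow the COCO convention: small < 32^2, medium < 96^2,
    large >= 96^2 pixels. Illustrative helper only.
    """
    area = width * height
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"
```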
Figure 7: Co-occurrence of Behaviours in the PanAf20K Dataset. A co-occurrence matrix for the PanAf20K behaviours, where each cell reflects the number of times two behaviours occurred together. Diagonal cells are reset to aid visibility.

Figure 8: Examples of Fine-grained and Multi-label Annotations. For videos with fine-grained annotations, full-body locations and behavioural actions are associated with each ape on a frame-by-frame basis (left). In contrast, multi-label behaviour annotations are provided at the video level (right); behaviours are not localised or assigned specifically to each ape.

Multi-label Behavioural Annotation of PanAf20K. Community scientists on the Chimp&See platform provided multi-label behavioural annotations for $\sim$20,000 videos. They were shown 15-second clips at a time and asked to annotate whether animals were present or whether the clip was blank. To balance specificity against keeping the task accessible and interesting to a broad group of people, annotators were presented with a choice of classification categories. These categories allowed focus to be given to ecologically important behaviours such as tool use, camera reaction and bipedalism. Hashtags for behaviours not listed in the classification categories were also permitted, allowing new and interesting behaviours to be added when they were discovered in the videos. The new behaviours were subcategories of the existing behaviours, many of them relating to tool use (e.g., algae scooping and termite fishing in arboreal nests). To ensure annotation quality and consistency, a video was only deemed fully analysed when either three volunteers marked the video as blank, unanimous agreement between seven volunteers was observed, or 15 volunteers annotated the video. These annotations were then extracted and expertly grouped into 18 co-occurring classes, which form the multi-label behavioural annotations presented here. The annotations follow a multi-hot binary format that indicates the presence of one or many behaviours. It should also be noted that behaviours are not assigned to individual apes or temporally localised within each video. Fig. 5 presents examples for several of the most commonly occurring behaviours. Fig. 6 shows the full distribution of behaviours across videos, which is highly imbalanced. Four of the most commonly occurring classes are observed in $>60\%$ of videos, while the least commonly occurring classes are observed in $<1\%$. The relationship between behaviours is shown in Fig. 7, which presents co-occurring classes. The behaviours differ from the behavioural actions included in the PanAf500 dataset, corresponding to higher order behaviours that are commonly monitored in ecological studies. For example, instances of travel refer to videos that contain an individual or group of apes travelling, whereas associated behavioural actions such as walking or running may occur in many contexts (e.g., walking towards another ape during a social interaction or while searching for a tool). Both parts of the dataset are suitable for different computer vision tasks. The PanAf500 supports great ape detection, tracking, action grounding, and multi-class action recognition, while the PanAf20K supports multi-label behaviour recognition. The difference between the two annotation types can be observed in Fig. 8.
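To make the multi-hot format concrete, here is a small Python illustration. The class list is an assumed subset of the 18 PanAf20K categories and the helper name is ours, not part of the released annotation tooling.

```python
BEHAVIOUR_CLASSES = [  # illustrative subset of the 18 PanAf20K categories
    "feeding", "travel", "camera reaction", "social interaction",
    "chimp carrying", "tool use",
]

def to_multi_hot(video_behaviours, classes=BEHAVIOUR_CLASSES):
    """Encode the set of behaviours observed in a video as a multi-hot vector."""
    return [1 if c in video_behaviours else 0 for c in classes]

# e.g. to_multi_hot({"feeding", "tool use"}) -> [1, 0, 0, 0, 0, 1]
```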
Machine Labels for Animal Location and IDs. We generated full-body bounding boxes for apes present in the remaining, unlabelled videos using state-of-the-art (SOTA) object detection models evaluated on the PanAf500 dataset (see Sec. 4). Additionally, we assigned intra-video IDs to detected apes using the multi-object tracker OC-SORT (Cao et al, 2023). Note that these pseudo-labels do not yet associate behaviours with individual bounding boxes.

## 4 Experiments & Results

Table 2: Ape Detection Benchmarks. Detection performance on the PanAf500 dataset. Results are reported for the MegaDetector (Beery et al, 2019), ResNet-101 (+SCM+TCM) (Yang et al, 2019), VarifocalNet (Zhang et al, 2021), Swin Transformer (Liu et al, 2021) and ConvNeXt (Liu et al, 2022). The highest scores for each metric are shown in bold.

Model | mAP All (%) | mAP L (%) | mAP M (%) | mAP S (%) | Precision (%) | Recall (%) | F1 (%)
---|---|---|---|---|---|---|---
MegaDetector | 88.0 | 98.05 | 82.60 | 68.21 | 56.93 | 90.58 | 69.92
ResNet-101 | 81.2 | 77.60 | 88.97 | 88.84 | 42.37 | 88.93 | 57.40
VarifocalNet | 84.1 | 84.50 | 88.73 | 82.07 | 21.73 | 88.57 | 34.90
Swin Transformer | 87.2 | 82.47 | 96.86 | 88.53 | 83.66 | 92.03 | 87.65
ConvNeXt | 86.6 | 83.51 | 95.16 | 81.13 | 81.80 | 91.93 | 86.57

This section describes experiments relating to the PanAf500 and PanAf20K datasets. For the former, we present benchmark results for great ape detection and fine-grained action recognition. For the latter, we present benchmark results for multi-label behavioural classification. For both sets of experiments, several SOTA architectures are used.

### 4.1 PanAf500 Dataset

Baseline Models. We report benchmark results for ape detection and fine-grained behavioural action recognition on the PanAf500 dataset, trained and evaluated on SOTA architectures. For ape detection, this entails the MegaDetector (Beery et al, 2019), ResNet-101 (+SCM+TCM) (Yang et al, 2019), VarifocalNet (VFNet) (Zhang et al, 2021), Swin Transformer (Liu et al, 2021) and ConvNeXt (Liu et al, 2022) architectures. For fine-grained action recognition, we considered X3D (Feichtenhofer, 2020), I3D (Carreira and Zisserman, 2017), 3D ResNet-50 (Tran et al, 2018), TimeSformer (Bertasius et al, 2021) and MViTv2 (Li et al, 2022) architectures. Action recognition models were chosen based on SOTA performance on human action recognition datasets and to be consistent with the best performing models on the AnimalKingdom (Ng et al, 2022) and MammalNet datasets (Chen et al, 2023). In all cases, train-val-test (80:05:15) splits were generated at the video level to ensure generalisation across video/habitat, and splits remained consistent across tasks.

Figure 9: MegaDetector (Beery et al, 2019) achieves higher precision for the majority of cases although ConvNeXt (Liu et al, 2022) and Swin Transformer (Liu et al, 2021) achieve better precision scores at high recall rates ($R_{det}>0.84$).

Figure 10: R101 (+SCM+TCM) (Yang et al, 2019) and VFNet (Zhang et al, 2021) achieve the highest true positive rates at low false positive rates ($FPR<0.15$). At higher false positive rates, R101 (+SCM+TCM) (Yang et al, 2019) performs better.

Great Ape Detection. We initialised all models with pretrained feature extractors. For all models, except the MegaDetector, we utilised MS COCO (Lin et al, 2014) pretrained weights. We used the out-of-the-box MegaDetector implementation since it is pretrained on millions of camera trap images and provides a strong initialisation for camera-trap-specific detection tasks.
We then fine-tuned each model for 50 epochs using SGD with a batch size of 16. Training was carried out using an input image resolution of $416^{2}$ and an Intersection over Union (IoU) threshold of $0.5$ for non-maximum suppression, at an initial learning rate of $1\times 10^{-2}$, which was reduced by $10\%$ at $80\%$ and $90\%$ of the total training epochs. All ape detection models were evaluated using the commonly used object detection metrics: mean average precision (mAP), precision, recall and F1-scores. All metrics follow the open images standard (Krasin et al, 2017) and are considered in combination during evaluation. Performance is provided separately for small ($<32^{2}$), medium ($32^{2}$–$96^{2}$) and large ($>96^{2}$) bounding boxes, as per the COCO object detection standard, in addition to overall performance.

Performance. Tab. 2 shows that the fine-tuned MegaDetector achieves the best mAP score overall and for large bounding boxes, although it is outperformed by the Swin Transformer and ResNet-101 (+Cascade R-CNN+SCM+TCM) on medium and small bounding boxes, respectively. This shows that in-domain pre-training of the feature extractor is valuable for fine-tuning since the MegaDetector is the only model pretrained on a camera trap dataset, rather than the COCO dataset (Lin et al, 2014). Performance across the remaining metrics, precision, recall and F1-score, is dominated by the Swin Transformer, which shows the importance of modelling spatial dependencies for good detection performance. The precision-recall (PR) curve displayed in Fig. 9 shows that most models maintain precision of more than 90% ($P_{det}>0.9$) at lower recall rates ($R_{det}<0.80$), except ResNet-101 (+SCM+TCM) which falls below this at a recall of 78% ($R_{det}=0.78$). The fine-tuned MegaDetector achieves consistently higher precision than other models for recall rates up to 84% ($R_{det}\leq 0.84$), outperforming other models by 5% ($P_{det}=0.05$) on average. However, at higher recall rates ($R_{det}>0.84$) ConvNeXt and Swin Transformer achieve higher precision, with the latter achieving marginally better performance. The ROC curve presented in Fig. 10 shows that VFNet and ResNet-101 (+SCM+TCM) achieve higher true positive rates than all other models at false positive rates less than 5% ($FPR<0.05$) and 40% ($FPR<0.40$), respectively. At higher false positive rates, ConvNeXt and Swin Transformer are competitive with ResNet-101 (+SCM+TCM), with marginally better performance being established by ConvNeXt at very high false positive rates. Figure 11 presents qualitative examples of success and failure cases for the best performing model.

Figure 11: MegaDetector detection examples. A sequence of frames (along each row) extracted from 3 different videos. The ground truth bounding boxes (green) are shown alongside detections (red). The first sequence (row 1) shows successful detections. The second set of sequences (rows 2–4) provides examples of false positive detections. The third set of sequences (rows 5–6) provides examples of false negative detections.

Figure 12: Per-class Distribution vs. Behavioural Thresholds. Distribution of each behavioural action class at various behavioural thresholds. Note that tail classes are affected more significantly by longer thresholds than head classes.

Table 3: Behavioural Action Recognition Benchmarks. Behavioural action recognition performance on the PanAf500 dataset.
Results are reported for X3D (Feichtenhofer, 2020), I3D (Carreira and Zisserman, 2017), 3D ResNet-50 (Hara et al, 2017), MViTV2 (Li et al, 2022), and TimeSformer (Bertasius et al, 2021) models. The highest scores for top-1 and average per-class accuracy are shown in bold.

Model | Top-1 (%) @ t=16 | t=32 | t=64 | t=128 | C-Avg (%) @ t=16 | t=32 | t=64 | t=128
---|---|---|---|---|---|---|---|---
X3D | 80.00 | 80.04 | 79.40 | 74.24 | 50.35 | 56.10 | 53.02 | 40.89
I3D | 79.29 | 78.48 | 76.90 | 67.45 | 42.15 | 48.14 | 31.65 | 24.46
3D ResNet-50 | 77.45 | 76.41 | 74.02 | 73.31 | 55.17 | 33.79 | 38.72 | 36.03
MViTV2 | 78.31 | 81.09 | 79.29 | 79.19 | 40.45 | 54.91 | 48.28 | 41.11
TimeSformer | 78.53 | 79.45 | 79.26 | 78.18 | 45.05 | 56.38 | 48.27 | 41.10

Figure 13: Class-wise Performance vs. Proportion of Data. The per-class accuracy for each behavioural action recognition model is plotted against the proportion of data for each class. All models consistently achieve strong performance on the head classes, whereas performance is variable across tail classes.

Behavioural Action Recognition. We trained all models using the protocol established by Sakib and Burghardt (2020). During training, we imposed a temporal behaviour threshold so that only frame sequences in which a behaviour is exhibited for $t$ consecutive frames are used, in order to retain well-defined behaviour instances. We then sub-sampled 16-frame sequences from clips that satisfy the behaviour threshold. The test threshold is always kept consistent ($t=16$). Fig. 12 shows the effect of different behaviour thresholds on the number of clips available for each class. Higher behaviour thresholds have a more significant effect on minority/tail classes since they occur more sporadically. For example, there are no training clips available for the climbing down class where $t=128$. All models were initialised with feature extractors pre-trained on Kinetics-400 (Kay et al, 2017) and fine-tuned for 200 epochs using the Adam optimiser and a standard cross-entropy loss. We utilised a batch size of 32, momentum of 0.9 and performed linear warm-up followed by cosine annealing using an initial learning rate of $1\times 10^{-5}$ that increases to $1\times 10^{-4}$ over 20 epochs. All behavioural action recognition models were evaluated using top-1 accuracy and average per-class accuracy (C-Avg).

Figure 14: Confusion Matrix & Example Errors. The confusion matrix (left) is shown alongside examples of mis-classified frames (right). For mis-classified examples, ground truth labels are shown on the y-axis (i.e., hanging, running, sitting) and examples of the classes most likely to be incorrectly predicted for the ground truth class are shown on the x-axis. Note that a high proportion of errors are due to predictions made in favour of majority classes.

Performance. Tab. 3 shows the X3D model attains the best top-1 accuracy at behaviour thresholds $t=16$ and $t=64$, although similar performance is achieved by MViTV2 and TimeSformer for the latter threshold. It also achieves the best average per-class performance at $t=64$, while TimeSformer achieves the best performance at $t=32$ and $t=128$. The MViTV2 models realise the best top-1 accuracy at $t=32$ and $t=128$, although they do not achieve the best average per-class performance at any threshold. The 3D ResNet-50 achieves the best average per-class performance at $t=16$. When considering top-1 accuracy, model performance is competitive.
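As a concrete illustration of the temporal behaviour threshold used in this protocol, the following minimal Python sketch (our own, not the released training code) extracts the runs of at least $t$ consecutive frames carrying the same behavioural action from a per-frame label sequence; 16-frame training sequences would then be sub-sampled from the returned runs.

```python
def threshold_clips(frame_labels, t):
    """Return (start, end, label) runs where the same behavioural action
    is exhibited for at least t consecutive frames.

    `frame_labels` is a per-frame list of action labels for one ape track.
    Illustrative only; the original protocol follows Sakib and Burghardt (2020).
    """
    runs, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            if i - start >= t:
                runs.append((start, i, frame_labels[start]))
            start = i
    return runs
```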
At lower behavioural thresholds, i.e., $t=16$ and $t=32$, the difference in top-1 performance is 2.55% and 4.68%, respectively, between the best and worst performing models, although this increases to 5.38% and 11.74% at $t=64$ and $t=128$, respectively. There is greater variation in average per-class performance, and it is rare that a model achieves the best performance across both metrics. Although we observe strong performance with respect to top-1 accuracy, our models exhibit relatively poor average per-class performance. Fig. 13 plots per-class performance against class frequency and shows that the low average per-class performance is driven by poor performance on tail classes. The average per-class accuracy across all models for the head classes is 83.22%, while only 28.33% is achieved for tail classes. There is significant variation in the performance of models; I3D performs well on hanging and climbing up but fails to classify any of the other tail classes correctly. Similarly, X3D performs extremely well on sitting on back but achieves poor results on the other tail classes. None of the models except for TimeSformer correctly classify any instances of running during testing. Fig. 14 presents the confusion matrix calculated on validation data alongside examples of misclassified instances.

### 4.2 PanAf20K Dataset

Table 4: Multi-label Behaviour Recognition Benchmarks. Results are reported for I3D (Carreira and Zisserman, 2017), 3D ResNet-50 (Hara et al, 2017), X3D (Feichtenhofer, 2020), MViTV2 (Li et al, 2022), and TimeSformer (Bertasius et al, 2021) models with focal loss (Cui et al, 2019), logit adjustment (Menon et al, 2020), and focal loss with weight balancing (Alshammari et al, 2022). The highest scores across all metrics are shown in bold.

Model | mAP All (%) | Head (%) | Middle (%) | Tail (%) | Accuracy (%) | Precision (%) | Recall (%)
---|---|---|---|---|---|---|---
I3D | 45.49 | 87.92 | 53.43 | 6.62 | 41.99 | 51.77 | 38.62
+FocalLoss | 46.65 | 87.67 | 53.51 | 10.17 | 42.51 | 60.02 | 37.46
+LogitAdjustment | 46.81 | 87.52 | 52.54 | 12.05 | 43.02 | 57.99 | 38.88
+WeightBalancing | 46.41 | 88.41 | 51.91 | 11.07 | 41.93 | 57.36 | 35.62
3D ResNet-50 | 46.03 | 86.12 | 53.22 | 9.73 | 40.76 | 54.13 | 35.87
+FocalLoss | 47.93 | 87.07 | 54.31 | 13.35 | 43.35 | 57.77 | 38.89
+LogitAdjustment | 48.04 | 87.44 | 53.91 | 13.96 | 41.15 | 58.86 | 36.27
+WeightBalancing | 46.68 | 87.09 | 54.06 | 9.90 | 41.54 | 54.05 | 40.62
X3D | 46.06 | 87.32 | 52.95 | 9.36 | 42.70 | 49.26 | 39.25
+FocalLoss | 47.19 | 89.26 | 52.75 | 11.75 | 41.67 | 50.93 | 36.99
+LogitAdjustment | 47.85 | 88.58 | 53.06 | 13.77 | 42.64 | 59.18 | 36.94
+WeightBalancing | 45.64 | 88.45 | 51.15 | 9.75 | 40.96 | 48.40 | 33.29
MViTV2 | 45.71 | 88.72 | 51.16 | 9.76 | 42.38 | 52.03 | 36.55
+FocalLoss | 45.78 | 89.27 | 50.82 | 10.05 | 42.83 | 56.54 | 36.38
+LogitAdjustment | 45.91 | 88.97 | 50.75 | 10.74 | 41.02 | 57.73 | 38.11
+WeightBalancing | 45.36 | 88.58 | 49.82 | 10.59 | 42.44 | 49.77 | 36.20
TimeSformer | 47.24 | 88.83 | 51.91 | 13.29 | 41.60 | 57.66 | 38.63
+FocalLoss | 48.82 | 88.75 | 52.65 | 17.10 | 42.70 | 68.01 | 37.37
+LogitAdjustment | 49.39 | 88.52 | 53.21 | 18.20 | 42.44 | 71.31 | 35.45
+WeightBalancing | 48.17 | 87.98 | 51.81 | 16.78 | 41.86 | 61.16 | 39.36

Data Setup. We generate train-val-test splits (70:10:20) using iterative stratification (Sechidis et al, 2011, Szymanski and Kajdanowicz, 2019). During training, we uniformly sub-sample $t=16$ frames from each video, equating to $\sim$1 frame per second (i.e., a sample interval of 22.5 frames).
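A minimal sketch of this data setup is given below, assuming a multi-hot label matrix and per-video frame counts; it applies `iterative_train_test_split` from scikit-multilearn (Szymanski and Kajdanowicz, 2019) twice to obtain approximate 70:10:20 splits and uses NumPy for the uniform frame indexing. Variable and function names are our own and may differ from the released code.

```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

def stratified_splits(video_ids, labels):
    """Approx. 70:10:20 train/val/test splits via iterative stratification.

    `video_ids` is an (N, 1) array of video identifiers and `labels` an (N, C)
    multi-hot behaviour matrix. Sketch only, not the released pipeline.
    """
    x_rest, y_rest, x_test, y_test = iterative_train_test_split(
        video_ids, labels, test_size=0.20)
    x_train, y_train, x_val, y_val = iterative_train_test_split(
        x_rest, y_rest, test_size=0.125)  # 0.125 * 0.8 = 0.10 of the full set
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

def uniform_frame_indices(n_frames, t=16):
    """Uniformly sub-sample t frame indices from a clip of n_frames frames.

    For a 15 s clip at 24 FPS (~360 frames) and t=16 this corresponds to a
    sampling interval of roughly 22.5 frames, i.e. about one frame per second.
    """
    return np.linspace(0, n_frames - 1, num=t).round().astype(int)
```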
Baseline Models. To establish benchmark performance for multi-label behaviour recognition, we trained the X3D, I3D, 3D ResNet-50, MViTv2, and TimeSformer models. All models were initialised with feature extractors pre-trained on Kinetics-400 (Kay et al, 2017) and fine-tuned for 200 epochs using the Adam optimiser. We utilised a batch size of 32, momentum of 0.9 and performed linear warm-up followed by cosine annealing using an initial learning rate of $1\times 10^{-5}$ that increases to $1\times 10^{-4}$ over 20 epochs. Models were evaluated using mAP, subset accuracy (i.e., exact match), precision and recall. Behaviour classes were grouped, based on class frequency, into head ($>10\%$), middle ($1\%$–$10\%$) and tail ($<1\%$) segments, and mAP performance is reported for each segment. To address the long-tailed distribution, we replace the standard loss with losses computed using long-tailed recognition techniques. Specifically, we implement (i) focal loss (Cui et al, 2019) $L_{CB}$; (ii) logit adjustment (Menon et al, 2020) $L_{LA}$; and (iii) focal loss with weight balancing via a MaxNorm constraint (Alshammari et al, 2022).

Figure 15: Class-wise Accuracy vs. Proportion of Data. The per-class average precision for the 3D ResNet-50 (Hara et al, 2017) and 3D ResNet-50 (+LogitAdjustment) (Hara et al, 2017, Menon et al, 2020) models is plotted against the proportion of data for each class. In general, better model performance is achieved on classes with high data proportions, and the ResNet-50 (+LogitAdjustment) model shows improved performance on middle and tail classes.

Multi-label Behaviour Recognition. As shown in Tab. 4, performance is primarily dominated by the 3D ResNet-50 and TimeSformer models when coupled with the various long-tailed recognition techniques. The TimeSformer (+LogitAdjustment) attains the highest mAP scores for both overall and tail classes, while the MViTV2 (+FocalLoss) and 3D ResNet-50 (+FocalLoss) demonstrate superior performance in terms of head and middle class mAP, respectively. The 3D ResNet-50 (+FocalLoss) and 3D ResNet-50 (+WeightBalancing) models achieve the best subset accuracy and recall, respectively, while the highest precision is realised by the TimeSformer (+LogitAdjustment) model. Although the 3D ResNet-50 and TimeSformer models perform strongest, it should be noted that the difference in overall mAP across all models is small (i.e., 4.03% between best and worst performing models).

Figure 16: Multi-label Errors. Frames extracted from three videos exhibit success and failure cases of the 3D ResNet-50 model. Behaviour predictions are shown in light boxes on the first frame of each sequence; true positives are green, false positives are blue, and false negatives are red. In the first video (row 1), the model fails to classify feeding by the chimp visible in frames 1 and 2, whereas in the second video (row 2), it fails to classify tool use by the infant chimp in the final frame. Climbing is predicted incorrectly in the final video (row 3).

As demonstrated by the head, middle and tail mAP scores, higher performance is achieved for more frequently occurring classes, with performance deteriorating significantly for middle and tail classes. Across models, the average difference between head and middle, and middle and tail classes is 35.68 ($\pm$1.88)% and 40.55 ($\pm$3.02)%, respectively.
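For concreteness, the sketch below shows standard PyTorch-style forms of two of the long-tailed losses listed under Baseline Models above: a per-label sigmoid focal loss and a binary cross-entropy whose logits are offset by log class priors, in the spirit of logit adjustment. These are our own simplified illustrations; the class-balanced focal formulation of Cui et al (2019) and the exact multi-label adaptation used for PanAf20K may differ in detail.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_bce(logits, targets, class_priors, tau=1.0):
    """BCE with per-label class-prior logit offsets (logit-adjustment style).

    `class_priors` is a (C,) tensor of empirical label frequencies and
    `targets` a float multi-hot matrix. Simplified sketch only.
    """
    adjusted = logits + tau * torch.log(class_priors.clamp_min(1e-12))
    return F.binary_cross_entropy_with_logits(adjusted, targets)

def sigmoid_focal_bce(logits, targets, gamma=2.0, alpha=0.25):
    """Per-label sigmoid focal loss that down-weights easy examples."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```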
The inclusion of long-tailed recognition techniques results in models that consistently attain higher tail class mAP performance than their standard counterparts (i.e., models that do not use long-tailed recognition techniques). The logit adjustment technique consistently results in the best tail class mAP across models, whereas the focal loss results in the best performance on the middle classes for all models except the X3D model. None of the standard models achieve the best performance on any metric. Fig. 15 plots per-class mAP performance of the 3D ResNet-50 and 3D ResNet-50 (+LogitAdjustment) models against the per-class proportion of data. The best performance is observed for the three most commonly occurring classes (i.e., feeding, travel, and no behaviour) whereas the worst performance is obtained by the most infrequently occurring classes (i.e., display, aggression, sex, bipedal, and cross species interaction) with the exception of piloerection. It can also be observed that the ResNet-50 (+LogitAdjustment) model outperforms its standard counterpart on the majority of middle and tail classes, although it is outperformed on some individual tail classes. Examples of success and failure cases by the 3D ResNet-50 model are presented in Fig. 16.

## 5 Discussion & Future Work

Results. The performance of current SOTA methods is not yet sufficient to facilitate the large-scale, automated behavioural monitoring required to support conservation efforts. The conclusions drawn in ecological studies rely on the highly accurate classification of all observed behaviours by expert primatologists. While the current methods achieve strong performance on head classes, relatively poor performance is observed for rare classes. Our results are consistent with recent work on similar datasets (i.e., AnimalKingdom (Ng et al, 2022) and MammalNet (Chen et al, 2023)) which demonstrates the significance of the long-tailed distribution that naturally recorded data exhibits (Liu et al, 2019). Similar to (Ng et al, 2022), our experiments show that current long-tailed recognition techniques can help to improve performance on tail classes, although a large discrepancy between head and middle, and head and tail classes still exists. The extent of this performance gap (see Tab. 4) emphasises the difficulty of tackling long-tailed distributions and highlights an important direction for future work (Perrett et al, 2023). Additionally, the near-perfect performance at training time (i.e., $>95\%$ mAP) highlights the need for methods that can learn effectively from a minimal number of examples.

Community Science & Annotation. Although behavioural annotations are provided by non-expert community scientists, several studies have shown the effectiveness of citizen scientists at performing complex data annotation tasks (Danielsen et al, 2014, McCarthy et al, 2021) typically carried out by researchers (e.g., species classification and individual identification). However, it should be noted that, as highlighted by (Cox et al, 2012), community scientists are more prone to errors relating to rare species. In the case of our dataset, this may translate to simple behaviours being identified correctly (e.g., feeding and tool use) whereas more nuanced or subtle behaviours (e.g., display and aggression) are missed or incorrectly interpreted, amongst other problems. This may occur even though the behaviour categories were predetermined by experts as suitable for non-expert annotation.
The dataset's rich annotations suit various computer vision tasks, despite key differences from other works. Unlike similar datasets (Ng et al, 2022, Chen et al, 2023), behaviours in the PanAf20K dataset are not temporally localised within the videos. However, the videos in our dataset are relatively short (i.e., 15 seconds) in contrast to the long-form videos included in other datasets. Therefore, the time stamping of behaviour may be less significant considering it is possible to utilise entire videos, with a suitably fine-grained sample interval (e.g., 0.5–1 second), as input to standard action recognition models. With that being said, behaviours occur sporadically and chimpanzees are often only in frame for very short periods of time. Therefore, future work will consider augmenting the existing annotations with temporal localisation of actions. Moreover, while our dataset comprises a wide range of behaviour categories, many of them exhibit significant intra-class variation. In the context of ecological/primatological studies, this variation often necessitates the creation of separate ethograms for individual behaviours (Nishida et al, 1999, Zamma and Matsusaka, 2015). For instance, within the tool use behaviour category, we find subcategories like nut cracking (utilizing rock, stone, or wood), termite fishing, and algae fishing. Similarly, within the camera reaction category, distinct subcategories include attraction, avoidance, and fixation. In future, we plan to extend the existing annotations to include more granular subcategories.

Ethics Statement. All data collection, including camera trapping, was done non-invasively, with no animal contact and no direct observation of the animals under study. Full research approval, data collection approval and research and sample permits of national ministries and protected area authorities were obtained in all countries of study. Sample and data export was also carried out with all necessary certificates and export and import permits. All work conformed to the relevant regulatory standards of the Max Planck Society, Germany. All community science work was undertaken according to the Zooniverse User Agreement and Privacy Policy. No experiments or data collection were undertaken with live animals.

## 6 Conclusion

We present by far the largest open-access video dataset of wild great apes, with rich annotations and SOTA benchmarks. The dataset is directly suitable for visual AI training and model comparison. The size of the dataset and extent of labelling across $>$7M frames and $\sim$20K videos (lasting $>$80 hours) now offers the first comprehensive view of great ape populations and their behaviours to AI researchers. Task-specific annotations make the data suitable for a range of associated, challenging computer vision tasks (i.e., animal detection, tracking, and behaviour recognition) which can facilitate the ecological analysis urgently required to support conservation efforts. We believe that given its immediate AI compatibility, scale, diversity, and accessibility, the PanAf20K dataset provides an unmatched opportunity for the many communities working in the ecological, biological, and computer vision domains to benchmark and expand great ape monitoring capabilities. We hope that this dataset can, ultimately, be a step towards better understanding and more effectively conserving these charismatic species.
## Data availability All data and code will be made publicly available from the PanAf20K project website upon publication and is available now upon request from the authors. Acknowledgments We thank the Pan African Programme: ‘The Cultured Chimpanzee’ team and its collaborators for allowing the use of their data for this paper. We thank Amelie Pettrich, Antonio Buzharevski, Eva Martinez Garcia, Ivana Kirchmair, Sebastian Schütte, Linda Gerlach and Fabina Haas. We also thank management and support staff across all sites; specifically Yasmin Moebius, Geoffrey Muhanguzi, Martha Robbins, Henk Eshuis, Sergio Marrocoli and John Hart. Thanks to the team at https://www.chimpandsee.org particularly Briana Harder, Anja Landsmann, Laura K. Lynn, Zuzana Macháčková, Heidi Pfund, Kristeena Sigler and Jane Widness. The work that allowed for the collection of the dataset was funded by the Max Planck Society, Max Planck Society Innovation Fund, and Heinz L. Krekeler. In this respect we would like to thank: Ministre des Eaux et Forêts, Ministère de l’Enseignement supérieur et de la Recherche scientifique in Côte d’Ivoire; Institut Congolais pour la Conservation de la Nature, Ministère de la Recherche Scientifique in Democratic Republic of Congo; Forestry Development Authority in Liberia; Direction Des Eaux Et Forêts, Chasses Et Conservation Des Sols in Senegal; Makerere University Biological Field Station, Uganda National Council for Science and Technology, Uganda Wildlife Authority, National Forestry Authority in Uganda; National Institute for Forestry Development and Protected Area Management, Ministry of Agriculture and Forests, Ministry of Fisheries and Environment in Equatorial Guinea. This work was supported by the UKRI CDT in Interactive AI under grant EP/S022937/1. ## References * * Vié et al (2009) Vié JC, Hilton-Taylor C, Stuart SN (2009) Wildlife in a changing world: an analysis of the 2008 IUCN Red List of threatened species. IUCN * Ceballos et al (2020) Ceballos G, Ehrlich PR, Raven PH (2020) Vertebrates on the brink as indicators of biological annihilation and the sixth mass extinction. _Proceedings of the National Academy of Sciences_ 117(24):13596–13602 * Carvalho et al (2021) Carvalho JS, Graham B, Bocksberger G, et al (2021) Predicting range shifts of african apes under global change scenarios. _Diversity and Distributions_ 27(9):1663–1679 * Haurez et al (2015) Haurez B, Daïnou K, Tagg N, et al (2015) The role of great apes in seed dispersal of the tropical forest tree species dacryodes normandii (burseraceae) in gabon. _Journal of Tropical Ecology_ 31(5):395–402 * Tarszisz et al (2018) Tarszisz E, Tomlinson S, Harrison ME, et al (2018) An ecophysiologically informed model of seed dispersal by orangutans: linking animal movement with gut passage across time and space. _Conservation Physiology_ 6(1):coy013 * Chappell and Thorpe (2022) Chappell J, Thorpe SK (2022) The role of great ape behavioral ecology in one health: Implications for captive welfare and re-habilitation success. _American journal of primatology_ 84(4-5):e23328 * Pollen et al (2023) Pollen AA, Kilik U, Lowe CB, et al (2023) Human-specific genetics: New tools to explore the molecular and cellular basis of human evolution. _Nature Reviews Genetics_ pp 1–25 * Kühl and Burghardt (2013) Kühl HS, Burghardt T (2013) Animal biometrics: quantifying and detecting phenotypic appearance. 
# Rota-Baxter operators on cocommutative Hopf algebras

Maxim Goncharov

###### Abstract We generalize the notion of a Rota-Baxter operator on groups and the notion of a Rota-Baxter operator of weight 1 on Lie algebras, and define and study the notion of a Rota-Baxter operator on a cocommutative Hopf algebra $H$. If $H=F[G]$ is the group algebra of a group $G$ or $H=U(\mathfrak{g})$ the universal enveloping algebra of a Lie algebra $\mathfrak{g}$, then we prove that Rota-Baxter operators on $H$ are in one-to-one correspondence with corresponding Rota-Baxter operators on groups or Lie algebras. Keywords: Rota-Baxter operator, cocommutative Hopf algebra, Rota-Baxter Lie algebra, Rota-Baxter group.

## 1 Introduction

Given an arbitrary algebra $A$ over a field $F$ and a scalar $\lambda\in F$, a linear operator $R\colon A\rightarrow A$ is called a Rota-Baxter operator on $A$ of weight $\lambda$ if for all $x,y\in A$: $R(x)R(y)=R(R(x)y+xR(y)+\lambda xy).$ (1) Then the pair $(A,R)$ is called a Rota-Baxter algebra. If $R$ is a Rota-Baxter operator of weight $\lambda$ and $\alpha\in F$, then $\alpha R$ is a Rota-Baxter operator of weight $\alpha\lambda$. Thus, there are two principal cases: $\lambda=0$ and $\lambda=1$. Rota-Baxter operators for associative algebras first appeared in the paper of G. Baxter as a tool for studying integral operators in the theory of probability and mathematical statistics [2]. The combinatorial properties of (commutative) Rota-Baxter algebras and operators were studied in papers of F.V. Atkinson, P. Cartier, G.-C. Rota and others (see [3]-[6]). For basic results and the main properties of Rota-Baxter algebras see [7]. Independently, in the early 1980s Rota-Baxter operators on Lie algebras appeared naturally in papers of A.A. Belavin, V.G. Drinfeld [8] and M.A. Semenov-Tyan-Shanskii [9] while studying the solutions of the classical Yang-Baxter equation. It turns out that on quadratic Lie algebras skew-symmetric solutions of the classical Yang-Baxter equation are in one-to-one correspondence with skew-symmetric Rota-Baxter operators. If $\mathfrak{g}$ is a simple Lie algebra, then non-skew-symmetric $\mathfrak{g}$-invariant solutions of the classical Yang-Baxter equation (sometimes called solutions of the modified classical Yang-Baxter equation) on $\mathfrak{g}$ are in one-to-one correspondence with pairs $(R,B)$, where $R$ is a Rota-Baxter operator of weight 1 satisfying $R+R^{*}+id=0$ and $B$ is a non-degenerate symmetric bilinear form on $\mathfrak{g}$ [10]. If $\mathfrak{g}$ is not simple, connections between non-skew-symmetric $\mathfrak{g}$-invariant solutions of the classical Yang-Baxter equation and Rota-Baxter operators were considered in [11]. As a consequence of these results we can note that every Lie bialgebra structure on a simple Lie algebra is induced by a Rota-Baxter operator of special type. When one considers the problem of quantization of a Lie bialgebra $(\mathfrak{g},\delta)$, one of the first steps is to extend the comultiplication $\delta$ to a Poisson co-bracket on the universal enveloping algebra $U(\mathfrak{g})$, which can be done uniquely. From this point of view, it is natural to consider the question of extension of a Rota-Baxter operator $R$ from a Lie algebra $\mathfrak{g}$ to some reasonable operator on the universal enveloping algebra $U(\mathfrak{g})$.
Unfortunately, it is not possible to extend $R$ to a Rota-Baxter operator of the algebra $U(\mathfrak{g})$ (that is, to a linear map $B:U(\mathfrak{g})\mapsto U(\mathfrak{g})$ satisfying (1)). Nevertheless, in [12] and [13] it was proved that a structure of a pre- or a post-Lie algebra on a Lie algebra $\mathfrak{g}$ can be extended to some reasonable product on the universal enveloping algebra $U(\mathfrak{g})$. These results can be considered from the point of view of Rota-Baxter operators: it turns out that a Rota-Baxter operator $R$ of weight $\lambda$ on $\mathfrak{g}$ induces on $\mathfrak{g}$ a structure of a pre-Lie algebra (if $\lambda=0$) or a structure of a post-Lie algebra (if $\lambda\neq 0$). Here again we can ask whether we can extend $R$ to an operator $B:U(\mathfrak{g})\mapsto U(\mathfrak{g})$ on the universal enveloping algebra $U(\mathfrak{g})$ in such a way that the extension of the pre- (or post-)Lie algebra structure on $U(\mathfrak{g})$ is somehow induced by $B$. Recently, the notion of a Rota-Baxter operator (of weight 1) on groups was introduced in [1]. If $G$ is a group, then a map $B:G\mapsto G$ is called a Rota-Baxter operator on the group $G$ if for all $g,h\in G$: $B(g)B(h)=B(gB(g)hB(g)^{-1}).$ A group $G$ with a Rota-Baxter operator $B$ is called a Rota-Baxter group. In the same paper it was proved that if $(G,B)$ is a Rota-Baxter Lie group, then the tangent map of $B$ at the identity is a Rota-Baxter operator of weight 1 on the Lie algebra of the Lie group $G$. Also, it was shown that many results that are true for Rota-Baxter operators on algebras have corresponding analogs for Rota-Baxter operators on groups. Lie algebras and groups can be regarded as foundations of two principal examples of cocommutative Hopf algebras. In this paper we in some sense combine the notions of Rota-Baxter operators of weight 1 on Lie algebras and of Rota-Baxter operators on groups, and give the definition of a Rota-Baxter operator (of weight 1) on cocommutative Hopf algebras. Note that there already exist notions of Rota-Baxter coalgebras and bialgebras (see [14] and [15]). These operators are different from the definition that we give. The paper is organised as follows. In section 2 we give the definition of a Rota-Baxter operator on a cocommutative Hopf algebra and obtain some basic results about it that are generalisations of known results for Rota-Baxter operators (of weight 1) on groups and algebras. In section 3, we consider two principal cases of cocommutative Hopf algebras: the universal enveloping algebra $U(\mathfrak{g})$ of a Lie algebra $\mathfrak{g}$ and the group algebra of a group $G$. We prove that Rota-Baxter operators on $U(\mathfrak{g})$ (resp. on $F[G]$) are in one-to-one correspondence with Rota-Baxter operators of weight 1 on $\mathfrak{g}$ (resp. on $G$). Given a Rota-Baxter operator $R$ of weight 1 on a Lie algebra $\mathfrak{g}$, one can define the structure of a post-Lie algebra on $\mathfrak{g}$. In section 4 we first show that this extension of a post-Lie algebra structure to the universal enveloping algebra $U(\mathfrak{g})$ (which was found in [13]) can be defined using the Rota-Baxter Hopf operator $B:U(\mathfrak{g})\mapsto U(\mathfrak{g})$ that is the extension of $R$. Further, we prove that for a given arbitrary Rota-Baxter Hopf algebra $(H,B)$ one can define a new multiplication $*$ and a new antipode $S_{B}$ that define on the space $H$ the structure of a new Hopf algebra, which we call the descendent Hopf algebra.
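As a quick numerical sanity check of identity (1) (our illustration, not part of the original text), the following sketch verifies the classical splitting construction on the associative algebra of $3\times 3$ matrices: writing a matrix as the sum of its upper-triangular and strictly lower-triangular parts (both subalgebras), the map sending a matrix to minus its strictly lower-triangular part is a Rota-Baxter operator of weight 1.

```python
import numpy as np

def R(a):
    # Splitting construction: a = (upper triangular) + (strictly lower triangular);
    # R maps a to minus its strictly lower-triangular part.
    return -np.tril(a, k=-1)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
    lhs = R(x) @ R(y)
    rhs = R(R(x) @ y + x @ R(y) + x @ y)   # identity (1) with weight lambda = 1
    assert np.allclose(lhs, rhs)
print("identity (1) of weight 1 holds on all sampled pairs")
```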
The author is grateful to Vsevolod Gubarev for his helpful and valuable comments and suggestions. ## 2 Basic properties of Rota-Baxter operators on cocommutative Hopf algebras Throughout the paper the characteristic of the ground field $F$ is 0. If $A$ is a vector space over $F$ and $\Delta:A\mapsto A\otimes A$ is a comultiplication on $A$, then we will use the following sumless Sweedler notation for the image of $a\in A$: $\Delta(a)=a_{(1)}\otimes a_{(2)}.$ In a Hopf algebra $H=(H,\mu,\Delta,\eta,\epsilon,S)$ we use the following notation: $\mu:H\otimes H\mapsto H$ is the multiplication, $\Delta:H\mapsto H\otimes H$ is the comultiplication, $\eta:F\mapsto H$ is the unit, $\epsilon:H\mapsto F$ is the counit, and $S:H\mapsto H$ is the antipode. If $(A,\Delta,\epsilon)$ is a coalgebra, then a linear map $\varphi:A\mapsto A$ is called a coalgebra map if for all $x\in A$: $\displaystyle\Delta(\varphi(x))=\varphi(x_{(1)})\otimes\varphi(x_{(2)}).$ $\displaystyle\epsilon(\varphi(x))=\epsilon(x).$ A Hopf algebra $H$ is called cocommutative if for all $x\in H$ $x_{(1)}\otimes x_{(2)}=x_{(2)}\otimes x_{(1)}.$ If $H$ is a cocommutative coalgebra, then the antipode $S:H\mapsto H$ is a coalgebra map. Recall that in an arbitrary Hopf algebra the antipode $S$ is an algebra antihomomorphism, that is, $S(ab)=S(b)S(a)$ for all $a,b\in H$. Definition. Let $(H,\mu,\eta,\Delta,\epsilon,S)$ be a cocommutative Hopf algebra. A coalgebra map $B:H\mapsto H$ is called a Rota-Baxter operator on $H$ if for all $x,y\in H$: $B(x)B(y)=B(x_{(1)}B(x_{(2)})yS(B(x_{(3)}))),$ (2) where $\Delta(x)=x_{(1)}\otimes x_{(2)}$. By a Rota-Baxter Hopf algebra we mean a pair $(H,B)$ of a cocommutative Hopf algebra $H$ and a Rota-Baxter operator $B$ on $H$. As an example of a Rota-Baxter operator on an arbitrary cocommutative Hopf algebra one can consider $B=S$, the antipode (see Corollary 1 below). Remark. Note that if $H$ is a commutative and cocommutative Hopf algebra, then a coalgebra map $B$ is a Rota-Baxter operator if and only if $B$ is an algebra map, that is, $B(xy)=B(x)B(y)$ for all $x,y\in H$. Lemma 1. Let $H$ be a cocommutative Hopf algebra and $B$ be a Rota-Baxter operator on $H$. Then (1) If $g\in H$ is a group-like element, then $B(g)$ is also a group-like element. (2) $B(1)=1$. (3) If $x\in H$ is a primitive element, then $B(x)$ is also a primitive element. Proof. (1) Let $g\in H$ be a group-like element. Since $B$ is a coalgebra map, we have $\Delta(B(g))=(B\otimes B)\Delta(g)=B(g)\otimes B(g).$ So we have two options: $B(g)=0$ or $B(g)$ is a group-like element of $H$. Since $\epsilon(B(g))=\epsilon(g)=1$, we have $B(g)\neq 0$. Therefore, $B(g)$ is a group-like element of $H$. (2) Since 1 is a group-like element, so is $B(1)$. Also, by (2) we have that $B(1)B(1)=B(1B(1)1S(B(1)))=B(1).$ And since $B(1)$ is invertible, we get that $B(1)=1$. (3) Let $x\in H$ be a primitive element. Consider $B(x)$: $\Delta(B(x))=(B\otimes B)\Delta(x)=1\otimes B(x)+B(x)\otimes 1.$ This means that $B(x)$ is a primitive element of $H$. It is well known that if $R:\mathfrak{g}\mapsto\mathfrak{g}$ is a Rota-Baxter operator of weight 1 on a Lie algebra $\mathfrak{g}$, then $-R-id:\mathfrak{g}\mapsto\mathfrak{g}$ is again a Rota-Baxter operator of weight 1 on $\mathfrak{g}$. A similar result for groups was proved in [1]: if $G$ is a group and $B$ is a Rota-Baxter operator on $G$, then $\tilde{B}:G\mapsto G$ defined by $\tilde{B}(g)=g^{-1}B(g^{-1})$ is also a Rota-Baxter operator on $G$.
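A small brute-force illustration (ours, not from [1] or the present paper) of these group-level facts on the symmetric group $S_{3}$: the inversion map, the group counterpart of the antipode example $B=S$ mentioned above, satisfies the group Rota-Baxter identity, and for every Rota-Baxter operator $B$ on $S_{3}$ found by exhaustive search, the map $g\mapsto g^{-1}B(g^{-1})$ is again one. The search over all $6^{6}$ maps takes a few seconds.

```python
from itertools import permutations, product

def comp(p, q):                      # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = list(permutations(range(3)))     # the symmetric group S3 on {0, 1, 2}

def is_rb(B):                        # B(g)B(h) = B(g B(g) h B(g)^{-1})
    return all(comp(B[g], B[h]) ==
               B[comp(comp(comp(g, B[g]), h), inv(B[g]))]
               for g in G for h in G)

# (i) inversion, the group analogue of the antipode example B = S
assert is_rb({g: inv(g) for g in G})

# (ii) brute force over all maps S3 -> S3; for every Rota-Baxter operator B,
#      g -> g^{-1} B(g^{-1}) is again a Rota-Baxter operator, as stated in [1]
rb_ops = []
for vals in product(G, repeat=6):
    B = dict(zip(G, vals))
    if is_rb(B):
        rb_ops.append(B)
assert all(is_rb({g: comp(inv(g), B[inv(g)]) for g in G}) for B in rb_ops)
print(len(rb_ops), "Rota-Baxter operators on S3; the tilde-B construction is verified")
```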
For cocommutative Hopf algebras we can generalise these results: Proposition 1. Let $H$ be a cocommutative Hopf algebra and $B$ be a Rota-Baxter operator on $H$. Define $\tilde{B}:H\mapsto H$ as $\tilde{B}(x)=S(x_{(1)})B(S(x_{(2)})).$ Then $\tilde{B}$ is also a Rota-Baxter operator on $H$. Proof. Clearly, $\tilde{B}$ is a linear map. We first prove that $\tilde{B}$ is a coalgebra map. Indeed, $\displaystyle\Delta(\tilde{B}(x))=\Delta(S(x_{(1)})B(S(x_{(2)})))=(S(x_{(2)})\otimes S(x_{(1)}))(B(S(x_{(4)}))\otimes B(S(x_{(3)})))=$ $\displaystyle=S(x_{(1)})B(S(x_{(2)}))\otimes S(x_{(3)})B(S(x_{(4)}))=\tilde{B}(x_{(1)})\otimes\tilde{B}(x_{(2)}).$ In order to prove that $\tilde{B}$ is a Rota-Baxter operator consider $\displaystyle\tilde{B}(x)\tilde{B}(y)=S(x_{(1)})B(S(x_{(2)}))S(y_{(1)})B(S(y_{(2)}))=$ $\displaystyle=S(x_{(1)})\epsilon(x_{(2)})B(S(x_{(3)}))S(y_{(1)})B(S(y_{(2)}))=$ $\displaystyle=S(x_{(1)})B(S(x_{(2)}))S(y_{(1)})\epsilon(B(S(x_{(3)})))B(S(y_{(2)}))=$ $\displaystyle=S(x_{(1)})B(S(x_{(2)}))S(y_{(1)})S(B(S(x_{(3)})))B(S(x_{(4)}))B(S(y_{(2)}))=$ $\displaystyle=hB(S(x_{(3)}))B(S(y_{(2)})),$ where $h=S(x_{(1)})B(S(x_{(2)}))S(y_{(1)})S(B(S(x_{(3)})))$. For $h$ we have: $\displaystyle h=S(x_{(1)})B(S(x_{(2)}))S(y_{(1)})S(B(S(x_{(3)})))=\tilde{B}(x_{(1)})S(y_{(1)})S(B(S(x_{(2)})))=$ $\displaystyle=\tilde{B}(x_{(1)})S(y_{(1)})S(B(S(x_{(2)})))x_{(3)}S(x_{(4)})=$ $\displaystyle=\tilde{B}(x_{(1)})S(y_{(1)})S(S(x_{(3)})B(S(x_{(2)})))S(x_{(4)})=\tilde{B}(x_{(1)})S(y_{(1)})S(\tilde{B}(x_{(2)}))S(x_{(3)})=$ $\displaystyle=S(x_{(1)}\tilde{B}(x_{(2)})y_{(1)}S(\tilde{B}(x_{(3)}))).$ Now consider $hB(S(x_{(4)}))B(S(y_{(2)}))$. Using similar arguments as above, we can conclude that: $\displaystyle hB(S(x_{(4)}))B(S(y_{(2)}))=hB(S(x_{(4)})B(S(x_{(5)}))S(y_{(2)})S(B(x_{(6)})))=$ $\displaystyle=hB(S(x_{(4)}\tilde{B}(x_{(5)})y_{(2)}S(\tilde{B}(x_{(6)}))))=\tilde{B}(x_{(1)}\tilde{B}(S(x_{(2)}))y_{(2)}S(\tilde{B}(x_{(3)}))).$ And the proposition is proved. Another well-known result says that if a Lie algebra $\mathfrak{g}$ splits into a direct sum of two subalgebras $\mathfrak{g}_{1}$ and $\mathfrak{g}_{2}$: $\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}$, then the map $R$ defined as $R(x_{1}+x_{2})=-x_{2}$, where $x_{i}\in\mathfrak{g}_{i}$, is a Rota-Baxter operator of weight 1. For groups, a similar result ([1]) says that if a group $G$ can be presented as a product of two subgroups $G_{1}$ and $G_{2}$, $G=G_{1}G_{2}$, such that $G_{1}\cap G_{2}=\\{e\\}$, then the map $B$ defined as $B(g_{1}g_{2})=g_{2}^{-1}$, where $g_{i}\in G_{i}$, is a Rota-Baxter operator on $G$. Note that, unlike the Lie algebra case, the inverse of the projection to the first factor is not a Rota-Baxter operator on $G$. We can generalise these results for cocommutative Hopf algebras as: Proposition 2. Let $H$ be a cocommutative Hopf algebra. Suppose $H_{1}$ and $H_{2}$ are two Hopf subalgebras of $H$ such that $H=H_{1}H_{2}$ as a Hopf algebra. Suppose that the product is direct, that is, $H$ is isomorphic to $H_{1}\otimes_{F}H_{2}$ as a vector space. Define a map $B$ as $B(h_{1}h_{2})=\epsilon(h_{1})S(h_{2}),$ where $h_{i}\in H_{i}$. Then $B$ is a Rota-Baxter operator on $H$. Proof. Clearly, $B$ is a well-defined linear map. First we prove that $B$ is a coalgebra map.
For $x=\sum h_{i}g_{i}$, where $h_{i}\in H_{1}$, $g_{i}\in H_{2}$, we have: $\displaystyle\Delta(B(x))=\sum\limits_{i}\epsilon(h_{i})\Delta(S(g_{i}))=\sum\limits_{i}\epsilon(h_{i})S(g_{i(2)})\otimes S(g_{i(1)})=$ $\displaystyle=\sum\limits_{i}\epsilon(h_{i(1)})S(g_{i(1)})\otimes\epsilon(h_{i(2)})S(g_{i(2)})=(B\otimes B)\Delta(x).$ In order to prove that $B$ satisfies (2), consider $x=hg\in H$ and $y=h^{\prime}g^{\prime}$, where $h,h^{\prime}\in H_{1}$, $g,g^{\prime}\in H_{2}$. We have $\displaystyle B(x_{(1)}B(x_{(2)})yS(B(x_{(3)})))=$ $\displaystyle=B((h_{(1)}g_{(1)})(\epsilon(h_{(2)})S(g_{(2)}))(h^{\prime}g^{\prime})(\epsilon(h_{(3)})S(S(g_{(3)}))))=$ $\displaystyle=B(\epsilon(h_{(1)})\epsilon(h_{(2)})h_{(3)}g_{(1)}S(g_{(2)})g^{\prime}g_{(3)})=B(hh^{\prime}g^{\prime}g)=$ $\displaystyle=\epsilon(hh^{\prime})S(g^{\prime}g)=\epsilon(h)\epsilon(h^{\prime})S(g)S(g^{\prime})=B(x)B(y).$ Since $H_{1}H_{2}$ is spanned by elements of the form $hg$, the equation (2) holds for all $x,y\in H$. Remark. Note that, as in the case of groups, the operator $B^{\prime}$ defined as $B^{\prime}(h_{1}h_{2})=S(h_{1})\epsilon(h_{2})$ is not a Rota-Baxter operator on $H$ in general. Corollary 1. If $H$ is a cocommutative Hopf algebra with the antipode $S$, then $B=S$ is a Rota-Baxter operator on $H$. ## 3 Rota-Baxter operators on $F[G]$ and $U(\mathfrak{g})$ In this section we consider two principal examples of cocommutative Hopf algebras: the group algebra of a group $G$ and the universal enveloping algebra $U(\mathfrak{g})$ of a Lie algebra $\mathfrak{g}$. Theorem 1. Let $(G,B)$ be a Rota-Baxter group. Then $B$ can be uniquely extended to a Rota-Baxter operator $B:F[G]\mapsto F[G]$ on the group algebra $F[G]$. Conversely, if $B$ is a Rota-Baxter operator on $F[G]$, then $B(G)\subset G$ and $(G,B|_{G})$ is a Rota-Baxter group, where $B|_{G}$ is the restriction of $B$ to $G$. Proof. Since elements of $G$ form a linear basis of $F[G]$, we can uniquely extend $B$ to $F[G]$ as $B(\sum\alpha_{i}g_{i})=\sum\alpha_{i}B(g_{i}).$ It is easy to see that $B$ is a coalgebra map. We need to check that $(F[G],B)$ is a Rota-Baxter Hopf algebra. Let $x=\sum\alpha_{i}g_{i}\in F[G]$, $y\in G$. Then $\displaystyle B(x)B(y)=\sum\alpha_{i}B(g_{i})B(y)=\sum\alpha_{i}B(g_{i}B(g_{i})yB(g_{i})^{-1})$ $\displaystyle=\sum\alpha_{i}B(g_{i}B(g_{i})yS(B(g_{i})))=B(x_{(1)}B(x_{(2)})yS(B(x_{(3)}))).$ And since elements of $G$ form a basis of $F[G]$, the equation (2) holds for all $x,y\in F[G]$. Conversely, let $B$ be a Rota-Baxter operator on the Hopf algebra $F[G]$. By Lemma 1, if $g\in G$, then $B(g)$ is a group-like element. Therefore, $B(g)\in G$ for every $g\in G$. The rest is obvious. Lemma 2. Let $\mathfrak{g}$ be a Lie algebra and $R$ be a Rota-Baxter operator of weight 1 on $\mathfrak{g}$. Then the map $R$ can be extended to a linear map $B:U(\mathfrak{g})\mapsto U(\mathfrak{g})$ such that 1. the restriction of $B$ to $\mathfrak{g}$ is $B|_{\mathfrak{g}}=R$, and 2. $B$ satisfies (2). Proof. Put $B(1)=1$ and, if $x,x_{1},\ldots,x_{k}\in\mathfrak{g}$, $h=x_{1}x_{2}\ldots x_{k}$, then define $B(xh)=B(x)B(h)-B([B(x),h]).$ (3) First we need to prove that $B$ is well-defined. Consider elements $f,g\in U(\mathfrak{g})$, $x,y\in\mathfrak{g}$. We want to prove that $B(f(xy-yx-[x,y])g)=0$.
If $f=1$, then by the definition $\displaystyle B(xyg)=B(x)B(yg)-B([B(x),yg])=B(x)B(yg)-B([B(x),y]g)-B(y[B(x),g])=$ $\displaystyle=B(x)B(y)B(g)-B(x)B([B(y),g])-B([B(x),y])B(g)+B([B([B(x),y]),g])-$ $\displaystyle-B(y)B([B(x),g])+B([B(y),[B(x),g]]).$ Similarly, $\displaystyle B(xyg)=B(y)B(x)B(g)-B(y)B([B(x),g])-B([B(y),x])B(g)+B([B([B(y),x]),g])-$ $\displaystyle-B(x)B([B(y),g])+B([B(x),[B(y),g]]).$ And $\displaystyle B(xyg-yxg)=[B(x),B(y)]B(g)-B([B(x),y]+[x,B(y)])B(g)-$ $\displaystyle+B(B([B(x),y]+B(B([x,B(y)]-[[B(x),B(y)],g])=$ $\displaystyle=B([x,y])B(g)-B(B[x,y],g])=B([x,y]g).$ Suppose that $f=x_{1}\ldots x_{k}$ for some $x_{i}\in\mathfrak{g}$ and use induction on $k$. Denote by $f_{1}=x_{2}\ldots x_{k}$. We have $\displaystyle B(f(xy-yx-[x,y])g)=B(x_{1}f_{1}(xy-yx-[x,y])g)=$ $\displaystyle=B(x_{1})B(f_{1}(xy-yx-[x,y])g)-B([B(x_{1}),f_{1}(xy- yx-[x,y])g]).$ By the induction hypothesis, $B(f_{1}(xy-yx-[x,y])g)=0$. Consider the second summand. $\displaystyle B([B(x_{1}),f_{1}(xy-yx-[x,y])g])=B([B(x_{1}),f_{1}](xy- yx-[x,y])g)+$ $\displaystyle+B(f_{1}[B(x_{1}),xy-yx-[x,y]]g)+B(f_{1}(xy- yx-[x,y])[B(x_{1}),g]).$ Recall, that $B(x_{1})\in\mathfrak{g}$. Then, by the induction hypothesis, $B([B(x_{1}),f_{1}](xy-yx-[x,y])g)=B(f_{1}(xy-yx-[x,y])[B(x_{1}),g])=0.$ Note that $\displaystyle[B(x_{1}),xy-yx-[x,y]]=([B(x_{1}),x]y-y[B(x_{1}),x]-$ $\displaystyle-[[B(x_{1}),x],y])+(x[B(x_{1}),y]-[B(x_{1}),y]x-[x,[B(x_{1}),y]])$ and here we can also use the induction hypothesis to conclude that $B(f_{1}[B(x_{1}),xy-yx-[x,y]]g)=0$. Therefore $B$ is well defined. Now prove that $(U(\mathfrak{g}),B)$ is a Rota- Baxter Hopf algebra. Take $f=x_{1}\ldots,x_{k}$, $x_{i}\in\mathfrak{g}$ and $q\in U(\mathfrak{g})$ and use induction on $k$. If $k=0$ then clearly $B(1)B(q)=B(q)=B(1\cdot B(1)q\cdot S(B(1))).$ Suppose that $f=xh$ where $h=x_{2},\ldots,x_{k}$. First note that $\Delta^{(2)}(xh)=\sum\limits_{(h)}xh_{(1)}\otimes h_{(2)}\otimes h_{(3)}+h_{(1)}\otimes xh_{(2)}\otimes h_{(3)}+h_{(1)}\otimes h_{(2)}\otimes xh_{(3)}.$ Then $\displaystyle B(f_{(1)}B(f_{(2)})qS(B(f_{(3)})))=B(xh_{(1)}B(h_{(2)})qS(B(h_{(3)})))+B(h_{(1)}B(xh_{(2)})qS(B(h_{(3)})))+$ $\displaystyle+B(h_{(1)}B(h_{(2)})qS(B(xh_{(3)}))).$ Consider the first term. Using 3 and the induction hypotheses, we have $\displaystyle B(xh_{(1)}B(h_{(2)})qS(B(h_{(3)})))=B(x)B(h_{(1)}B(h_{(2)})qS(B(h_{(3)})))-$ $\displaystyle-B([B(x),h_{(1)}B(h_{(2)})qS(B(h_{(3)}))])=B(x)B(h)B(g)-B([B(x),h_{(1)}B(h_{(2)})qS(B(h_{(3)}))])$ Similarly, $\displaystyle B(h_{(1)}B(xh_{(2)})qS(B(h_{(3)})))=$ $\displaystyle=B(h_{(1)}B(x)B(h_{(2)})qS(B(h_{(3)})))-B(h_{(1)}B([B(x,h_{(2)}])qS(B(h_{(3)}))))$ and $\displaystyle B(h_{(1)}B(h_{(2)})qS(B(xh_{(3)})))=$ $\displaystyle=B(h_{(1)}B(h_{(2)})qS(B(x)B(h_{(3)})))-B(h_{(1)}B(h_{(2)})qS(B([B(x),h_{(3)}])))=$ $\displaystyle=-B(h_{(1)}B(h_{(2)})qS(B(h_{(3)}))B(x))-B(h_{(1)}B(h_{(2)})qS(B([B(x),h_{(3)}])))$ Note that $\displaystyle-B([B(x),h_{(1)}B(h_{(2)})qS(B(h_{(3)}))])+B(h_{(1)}B(x)B(h_{(2)})qS(B(h_{(3)})))-$ $\displaystyle-B(h_{(1)}B(h_{(2)})qS(B(h_{(3)}))B(x))=-B([B(x),h_{(1)}]B(h_{(2)})qS(B(h_{(3)}))).$ Summing up the obtained equations, we get $\displaystyle B(f_{(1)}B(f_{(2)})qS(B(f_{(3)})))=$ $\displaystyle=B(x)B(h)B(q)-B([B(x),h_{(1)}]B(h_{(2)})qS(B(h_{(3)})))-$ $\displaystyle-B(h_{(1)}B([B(x,h_{(2)}])qS(B(h_{(3)}))))-B(h_{(1)}B(h_{(2)})qS(B([B(x),h_{(3)}])))$ $\displaystyle=B(x)B(h)B(q)-B([B(x),h])B(q)=B(xh)B(q).$ The lemma is proved. Lemma 3. 
Let $\mathfrak{g}$ be a Lie algebra, $R$ be a Rota-Baxter operator on $\mathfrak{g}$ of weight 1 and $B:U(\mathfrak{g})\mapsto U(\mathfrak{g})$ be the operator from Lemma 2. Then $B$ is a coalgebra map, that is, for all $f\in U(\mathfrak{g})$: $\displaystyle\Delta(B(f))=(B\otimes B)\Delta(f)=B(f_{(1)})\otimes B(f_{(2)}).$ $\displaystyle\epsilon(B(f))=\epsilon(f).$ Proof. First we prove that $B$ preserves the comultiplication. Take $f=x_{1}\ldots x_{k}$, where $x_{i}\in\mathfrak{g}$, and use induction on $k$. The statement is obvious if $f=1$. If $k=1$ we have $\Delta(B(x))=B(x)\otimes 1+1\otimes B(x)=(B\otimes B)(\Delta(x)).$ Suppose that $f=xh$, where $h=x_{2}\ldots x_{k}$. Then $\Delta(B(xh))=\Delta(B(x)B(h)-B([B(x),h]))=\Delta(B(x))\Delta(B(h))-\Delta(B([B(x),h])).$ By the induction hypothesis we have: $\Delta(B(h))=B(h_{(1)})\otimes B(h_{(2)})$ and, since $B(x)\in\mathfrak{g}$: $\Delta(B([B(x),h]))=B([B(x),h_{(1)}])\otimes B(h_{(2)})+B(h_{(1)})\otimes B([B(x),h_{(2)}]).$ Therefore, $\displaystyle\Delta(B(x))\Delta(B(h))-\Delta(B([B(x),h]))=(B(x)B(h_{(1)}))\otimes B(h_{(2)})+B(h_{(1)})\otimes(B(x)B(h_{(2)}))-$ $\displaystyle-B([B(x),h_{(1)}])\otimes B(h_{(2)})-B(h_{(1)})\otimes B([B(x),h_{(2)}])=$ $\displaystyle=B(xh_{(1)})\otimes B(h_{(2)})+B(h_{(1)})\otimes B(xh_{(2)})=(B\otimes B)(\Delta(xh)).$ In order to prove that $B$ preserves the counit one can use similar arguments: take $f=x_{1}\ldots x_{k}$ ($x_{i}\in\mathfrak{g}$) and use induction on $k$. The case $k=1$ is trivial. If $f=x_{1}h$ then $\displaystyle\epsilon(B(xh))=\epsilon(B(x)B(h))-\epsilon(B([B(x),h]))=\epsilon(B(x))\epsilon(B(h))-\epsilon(B([B(x),h]))=$ $\displaystyle=\epsilon(x)\epsilon(h)-\epsilon([B(x),h])=\epsilon(xh).$ The lemma is proved. Lemma 4. Let $U(\mathfrak{g})$ be the universal enveloping algebra of a Lie algebra $\mathfrak{g}$ and $B$ be a Rota-Baxter operator on $U(\mathfrak{g})$. Then $B(\mathfrak{g})\subset\mathfrak{g}$ and the restriction $R=B|_{\mathfrak{g}}$ is a Rota-Baxter operator of weight 1 on $\mathfrak{g}$. Proof. By Lemma 1, if $x\in\mathfrak{g}$, then $B(x)\in\mathfrak{g}$. Now consider the restriction $R=B|_{\mathfrak{g}}$. For arbitrary $x,y\in\mathfrak{g}$ we have $[R(x),R(y)]=[B(x),B(y)]=B(x)B(y)-B(y)B(x)=B(xy+[B(x),y])-B(yx+[B(y),x])=$ $=B([B(x),y]+[x,B(y)]+[x,y])=R([R(x),y]+[x,R(y)]+[x,y]).$ Therefore, $R$ is a Rota-Baxter operator of weight 1 on the Lie algebra $\mathfrak{g}$. Theorem 2. Rota-Baxter operators of weight 1 on a Lie algebra $\mathfrak{g}$ are in one-to-one correspondence with Rota-Baxter operators on the universal enveloping algebra $U(\mathfrak{g})$. Proof. Let $R$ be a Rota-Baxter operator of weight 1 on a Lie algebra $\mathfrak{g}$. It is only left to prove that the extension of $R$ from Lemma 2 is unique. For this we note that if $B$ is a Rota-Baxter operator on $U(\mathfrak{g})$, then from (2) it follows that $B(x)B(a)=B(xa)+B([B(x),a])$ for all $x\in\mathfrak{g}$ and $a\in U(\mathfrak{g})$. The rest can be proved using arguments similar to those in Lemma 2. Example 1. Let $\mathfrak{g}=sl_{2}(F)$ and let $x,h,y$ be a basis of $sl_{2}(F)$ with the following multiplication table: $[h,x]=2x,\quad[h,y]=-2y,\quad[x,y]=h.$ Consider the map $R$ defined as $R(x)=0,\quad R(h)=-\frac{h}{2},\quad R(y)=-y.$ Then $R$ is a Rota-Baxter operator of weight 1 on $\mathfrak{g}$ ([10]). Consider the extension $B$ of $R$ to $U(\mathfrak{g})$. Let $a=xb$, where $b\in U(\mathfrak{g})$.
By the definition of $B$: $B(a)=B(xb)=B(x)B(b)-B([B(x),b])=0.$ Similarly, if $a=yb$, we have: $B(a)=B(yb)=B(y)B(b)-B([B(y),b])=-yB(b)+B([y,b])=-yB(b)+B(yb)-B(by).$ Therefore, $B(by)=-yB(b)$. Monomials $x^{i}h^{j}y^{k}$ form a linear basis of $U(\mathfrak{g})$. For these monomials we obtain $B(x^{i}h^{j}y^{k})=\left\\{\begin{matrix}0,\ \text{if}\ i>0\\\ (-1)^{j+k}\frac{y^{k}h^{j}}{2^{j}},\ \ \text{if}\ i=0.\end{matrix}\right.$ ## 4 The descendent Hopf algebra Definition [16]. Let $(\mathfrak{g},[,])$ be a Lie algebra and $\cdot$ a bilinear operation on $\mathfrak{g}$. If for all $x,y,z\in\mathfrak{g}$: $\displaystyle[x,y]\cdot z=(y\cdot x)\cdot z-y\cdot(x\cdot z)-(x\cdot y)\cdot z+x\cdot(y\cdot z),$ $\displaystyle x\cdot[y,z]=[x\cdot y,z]+[y,x\cdot z],$ then $(\mathfrak{g},[,],\cdot)$ is called a post-Lie algebra. Given a post-Lie algebra $(\mathfrak{g},[,],\cdot)$, one can define a new Lie bracket on $\mathfrak{g}$ by the formula $\\{x,y\\}=x\cdot y-y\cdot x+[x,y].$ (4) If $\mathfrak{g}$ is a Lie algebra and $R$ is a Rota-Baxter operator of weight 1 on $\mathfrak{g}$, then one can define the following structure of a post-Lie algebra: $x\cdot y=[R(x),y]$ for all $x,y\in\mathfrak{g}$ [17]. In this case the multiplication (4) is equal to $\\{x,y\\}=[R(x),y]+[x,R(y)]+[x,y].$ Definition [1]. If $R$ is a Rota-Baxter operator of weight 1 on a Lie algebra $\mathfrak{g}$, then the pair $(\mathfrak{g},\\{,\\})$ is called the descendent Lie algebra of the Rota-Baxter Lie algebra $(\mathfrak{g},R)$. In the same paper the following was proved: Statement 1 [1]. If $(G,B)$ is a Rota-Baxter group, then $G$ with the product $g*h=gB(g)hB(g)^{-1}$ is again a group, called the descendent group $G_{B}$ of the Rota-Baxter group $(G,B)$. The inverse of $g\in G_{B}$ in the descendent group is equal to $B(g)^{-1}g^{-1}B(g)$. Let $(\mathfrak{g},[,],\cdot)$ be a post-Lie algebra. In [13] it was proved that there is a unique extension of the post-Lie product $\cdot$ to the universal enveloping algebra $U(\mathfrak{g})$ given by $\displaystyle 1\cdot f=f,$ (5) $\displaystyle xf\cdot g=x\cdot(f\cdot g)-(x\cdot f)\cdot g,$ (6) $\displaystyle f\cdot(gh)=(f_{(1)}\cdot g)(f_{(2)}\cdot h)$ (7) for all $x\in\mathfrak{g}$, $f,g,h\in U(\mathfrak{g})$. Define a new multiplication on $U(\mathfrak{g})$ by $f*g=f_{(1)}(f_{(2)}\cdot g)$ where $f,g\in U(\mathfrak{g})$. In the same paper it was proved that $(U(\mathfrak{g}),*)$ is isomorphic to the universal enveloping algebra of $(\mathfrak{g},\\{,\\})$. Proposition 3. Let $(\mathfrak{g},[,])$ be a Lie algebra, $R$ be a Rota-Baxter operator on $\mathfrak{g}$ of weight 1, $(\mathfrak{g},[,],\cdot)$ be the corresponding post-Lie algebra and $(U(\mathfrak{g}),B)$ be the enveloping Rota-Baxter algebra of $(\mathfrak{g},R)$. Then the extension $\cdot$ of the post-Lie product on $U(\mathfrak{g})$ can be defined as $f\cdot g=B(f_{(1)})gS(B(f_{(2)})).$ Proof. Since the extension defined by (5)-(7) is unique, it is enough to prove that our product satisfies (5)-(7). Let $x\in\mathfrak{g}$, $f,g,h\in U(\mathfrak{g})$. The first equation is obvious. Consider (6). We have $\displaystyle x\cdot(f\cdot g)-(x\cdot f)\cdot g=x\cdot(B(f_{(1)})gS(B(f_{(2)})))-[B(x),f]\cdot g=$ $\displaystyle=[B(x),B(f_{(1)})gS(B(f_{(2)}))]-B([B(x),f_{(1)}])gS(B(f_{(2)}))-$ $\displaystyle-B(f_{(1)})gS([B(x),B(f_{(2)})])=B(xf_{(1)})gS(B(f_{(2)}))+B(f_{(1)})gS(B(xf_{(2)}))=(xf)\cdot g.$ Consider (7).
We have $\displaystyle(f_{(1)}\cdot g)(f_{(2)}\cdot h)=(B(f))_{(1)}gS((B(f))_{(2)})(B(f))_{(3)}hS((B(f))_{(4)})=$ $\displaystyle=(B(f))_{(1)}g\epsilon((B(f))_{(2)})hS((B(f))_{(3)})=(B(f))_{(1)}ghS((B(f))_{(2)})=f\cdot(gh).$ The proposition is proved. Corollary 2. Let $(\mathfrak{g},[,])$ be a Lie algebra, $R$ be a Rota-Baxter operator of weight 1 on $\mathfrak{g}$ and $B$ be the extension of $R$ to $U(\mathfrak{g})$ from Lemma 2. Define a new multiplication $*:U(\mathfrak{g})\otimes U(\mathfrak{g})\mapsto U(\mathfrak{g})$ as $f*g=f_{(1)}B(f_{(2)})gS(B(f_{(3)})).$ Then $(U(\mathfrak{g}),*,\Delta,\eta,\epsilon)$ is a bialgebra isomorphic to the universal enveloping algebra of the Lie algebra $(\mathfrak{g},\\{,\\})$, where the product $\\{,\\}$ is defined as $\\{x,y\\}=[R(x),y]+[x,R(y)]+[x,y]$ for all $x,y\in\mathfrak{g}$. Remark. As we will see in Theorem 4, the antipode $S_{B}$ on $(U(\mathfrak{g}),*,\Delta,\eta,\epsilon)$ is defined as $S_{B}(x)=S(B(x_{(1)}))S(x_{(2)})B(x_{(3)})$ for all $x\in U(\mathfrak{g})$. Here, $S$ is the "old" antipode of the universal enveloping algebra $U(\mathfrak{g})$ of $\mathfrak{g}$. We want to generalise Corollary 2 to an arbitrary cocommutative Rota-Baxter Hopf algebra. Let $(H,\mu,\Delta,\eta,\epsilon,S)$ be a cocommutative Hopf algebra and $B$ a Rota-Baxter operator on $H$. Define a new operation on $H$: for all $x,y\in H$ put $x*y=x_{(1)}B(x_{(2)})yS(B(x_{(3)})).$ Proposition 4. We have that $\displaystyle\Delta(x*y)=(x_{(1)}*y_{(1)})\otimes(x_{(2)}*y_{(2)}),$ $\displaystyle\epsilon(x*y)=\epsilon(x)\epsilon(y).$ Proof. Consider $x*y=x_{(1)}B(x_{(2)})yS(B(x_{(3)}))$. Since $\Delta$ preserves the multiplication, we have $\Delta(x*y)=\Delta(x_{(1)}B(x_{(2)})yS(B(x_{(3)})))=$ $=(x_{(1)}\otimes x_{(2)})(B(x_{(3)})\otimes B(x_{(4)}))(y_{(1)}\otimes y_{(2)})(S(B(x_{(6)}))\otimes S(B(x_{(5)})))$. Since the comultiplication is cocommutative, we can rewrite the last term as $x_{(1)}B(x_{(2)})y_{(1)}S(B(x_{(3)}))\otimes x_{(4)}B(x_{(5)})y_{(2)}S(B(x_{(6)}))=x_{(1)}*y_{(1)}\otimes x_{(2)}*y_{(2)}.$ Now consider $\displaystyle\epsilon(x*y)=\epsilon(x_{(1)}B(x_{(2)})yS(B(x_{(3)})))=\epsilon(x_{(1)})\epsilon(B(x_{(2)}))\epsilon(y)\epsilon(S(B(x_{(3)})))=$ $\displaystyle=\epsilon(x_{(1)})\epsilon(B(x_{(2)})S(B(x_{(3)})))\epsilon(y)=\epsilon(x_{(1)})\epsilon(B(x_{(2)}))\epsilon(y)=\epsilon(x_{(1)}\epsilon(x_{(2)}))\epsilon(y)=$ $\displaystyle=\epsilon(x)\epsilon(y).$ Define a linear map $S_{B}:H\mapsto H$ as $S_{B}(x)=S(B(x_{(1)}))S(x_{(2)})B(x_{(3)})$ for all $x\in H$. We will need the following Proposition 5. For all $x\in H$ we have $\epsilon(x)1=B(x_{(1)})B(S_{B}(x_{(2)})).$ Proof. Indeed, $\displaystyle B(x_{(1)})B(S_{B}(x_{(2)}))=B(x_{(1)}B(x_{(2)})S_{B}(x_{(3)})S(B(x_{(4)})))=$ $\displaystyle=B(x_{(1)}[B(x_{(2)})S(B(x_{(3)}))]S(x_{(4)})[B(x_{(5)})S(B(x_{(6)}))])=$ $\displaystyle=B(x_{(1)}\epsilon(B(x_{(2)}))S(x_{(3)})\epsilon(B(x_{(4)})))=B(x_{(1)}S(x_{(2)}))=\epsilon(x)B(1)=\epsilon(x)1.$ Theorem 4. $H_{B}=(H,*,\Delta,\eta,\epsilon,S_{B})$ is a cocommutative Hopf algebra. Proof. Note that since $B$ is a Rota-Baxter operator on $H$, we have that $B(x*y)=B(x)B(y)$ for all $x,y\in H$. First we prove that $(H,*)$ is an associative algebra. Indeed, take $x,y,z\in H$.
Using Proposition 4 and Proposition 5, we have $\displaystyle(x*y)*z=(x_{(1)}*y_{(1)})B(x_{(2)}*y_{(2)})zS(B(x_{(3)}*y_{(3)}))=$ $\displaystyle=x_{(1)}B(x_{(2)})y_{(1)}[S(B(x_{(3)}))B(x_{(4)})]B(y_{(2)})zS(B(y_{(3)}))S(B(x_{(5)}))=$ $\displaystyle=x_{(1)}B(x_{(2)})y_{(1)}[\epsilon(B(x_{(3)}))]B(y_{(2)})zS(B(y_{(3)}))S(B(x_{(4)}))=$ $\displaystyle=x_{(1)}B(x_{(2)})y_{(1)}B(y_{(2)})zS(B(y_{(3)}))S(B(x_{(3)}))=x*(y*z).$ Since $B(1)=1$, it is easy to see that $1*x=x*1=x$ for all $x\in H$. By Proposition 4, $(H,*,\eta,\Delta,\epsilon)$ is a bialgebra. It is left to prove that $S_{B}$ is the antipode of $(H,*,\Delta,\eta,\epsilon)$. We need to prove that for all $x\in H$: $x_{(1)}*S_{B}(x_{(2)})=S_{B}(x_{(1)})*x_{(2)}=\epsilon(x)1.$ A direct computation shows $\displaystyle x_{(1)}*S_{B}(x_{(2)})=x_{(1)}[B(x_{(2)})S(B(x_{(3)}))]S(x_{(4)})[B(x_{(5)})S(B(x_{(6)}))]=$ $\displaystyle=x_{(1)}\epsilon(x_{(2)})S(x_{(3)})\epsilon(x_{(4)})=x_{(1)}S(x_{(2)})=\epsilon(x)1.$ Note that $B(x_{(1)}*S_{B}(x_{(2)}))=B(x_{(1)})B(S_{B}(x_{(2)}))$. Then we get the equality $B(x_{(1)})B(S_{B}(x_{(2)}))=\epsilon(x)1.$ (8) Also, we have that $\displaystyle B(S_{B}(x_{(1)}))B(x_{(2)})=B(S_{B}(x_{(1)}))\epsilon(B(x_{(2)}))B(x_{(3)})=$ $\displaystyle=\epsilon(B(x_{(1)}))B(S_{B}(x_{(1)}))B(x_{(3)})=S(B(x_{(1)}))[B(x_{(2)})B(S_{B}(x_{(3)}))]B(x_{(4)})=$ $\displaystyle=S(B(x_{(1)}))\epsilon(x_{(2)})B(x_{(3)})=\epsilon(x)1.$ That is, we proved that $B(S_{B}(x_{(1)}))B(x_{(2)})=\epsilon(x)1.$ (9) Now consider the second equality. Using (8) and (9) we compute: $\displaystyle S_{B}(x_{(1)})*x_{(2)}=S_{B}(x_{(1)})B(S_{B}(x_{(2)}))x_{(3)}S(B(S_{B}(x_{(4)})))=$ $\displaystyle=S(B(x_{(1)}))S(x_{(2)})[B(x_{(3)})B(S_{B}(x_{(4)}))]x_{(5)}S(B(S_{B}(x_{(6)})))=$ $\displaystyle=S(B(x_{(1)}))S(x_{(2)})[\epsilon(x_{(3)})]x_{(4)}S(B(S_{B}(x_{(5)})))=$ $\displaystyle=S(B(x_{(1)}))S(x_{(2)})x_{(3)}S(B(S_{B}(x_{(4)})))=S(B(x_{(1)}))S(B(S_{B}(x_{(2)})))=$ $\displaystyle=S(B(S_{B}(x_{(2)}))B(x_{(1)}))=\epsilon(x)1.$ And the theorem is proved. Proposition 6. $B$ is a homomorphism of Hopf algebras $H$ and $H_{B}$ and is a Rota-Baxter operator on the Hopf algebra $H_{B}$. Proof. First we note that, by the definition of the product $*$ and by (2), for all $x,y\in H$ we have $B(x*y)=B(x)B(y)$. It is left to prove that $B\circ S_{B}=S\circ B$. For this, note that $S(B(x_{(1)}))B(x_{(2)})=B(x_{(1)})S(B(x_{(2)}))=\epsilon(B(x))=\epsilon(x)1.$ This means that the map $S\circ B$ is the inverse of the map $B$ in $(\mathrm{End}(H),\star,\eta\circ\epsilon)$, where $\star$ is the convolution product defined as $(f\star g)(x)=f(x_{(1)})g(x_{(2)})$ for all $f,g\in\mathrm{End}(H),\ x\in H$. On the other hand, for every $x\in H$: $B(S_{B}(x_{(1)}))B(x_{(2)})=B(S_{B}(x_{(1)})*x_{(2)})=B(\epsilon(x)1)=\epsilon(x)1.$ Similarly, $B(x_{(1)})B(S_{B}(x_{(2)}))=\epsilon(x)1$ and $B\circ S_{B}$ is also the inverse of $B$ in $(\mathrm{End}(H),\star,\eta\circ\epsilon)$. Therefore, $B\circ S_{B}=S\circ B$ and $B$ is a homomorphism of Hopf algebras $H$ and $H_{B}$. For the second statement we compute: $\displaystyle B(x_{(1)}*B(x_{(2)})*y*S(B(x_{(3)})))=B(x_{(1)})B(B(x_{(2)}))B(y)B(S(B(x_{(3)})))=$ $\displaystyle=B(x)*B(y).$ By analogy with Lie algebras and groups, we may give the following Definition. The Hopf algebra $H_{B}$ is called the descendent Hopf algebra of the Rota-Baxter Hopf algebra $(H,B)$. Remark.
Corollary 2 says that any descendent Hopf algebra of the universal enveloping algebra $U(\mathfrak{g})$ of a Lie algebra $\mathfrak{g}$ is the universal enveloping algebra of the corresponding descendent Lie algebra. And it is easy to see that if $H=F[G]$ is the group algebra of a group $G$, then any descendent Hopf algebra $H_{B}$ is the group algebra of the corresponding descendent group $G_{B}$. ## Acknowledgements The work was supported by the Russian Science Foundation (project No. 19-11-00039). ## References * [1] Guo L., Lang H., Sheng Y., Integration and geometrization of Rota-Baxter Lie algebras, arXiv preprint. * [2] Baxter G., An analytic problem whose solution follows from a simple algebraic identity, Pacific J. Math., 10 (1960), 731-742. * [3] Atkinson F.V., Some aspects of Baxter's functional equation, J. Math. Anal. Appl., 7 (1963), 1-30. * [4] Rota G.C., Baxter algebras and combinatorial identities I and II, Bull. Amer. Math. Soc., 75 (1969), 325-334. * [5] Miller J.B., Some properties of Baxter operators, Acta Math. Acad. Sci. Hungar., 17 (1966), 387-400. * [6] Cartier P., On the structure of free Baxter algebras, Adv. Math., 9 (1972), 253-265. * [7] Guo L., An Introduction to Rota-Baxter Algebra, Surveys of Modern Mathematics, Somerville, MA: International Press; Beijing: Higher Education Press, 4 (2012). * [8] Belavin A.A., Drinfeld V.G., Solutions of the classical Yang-Baxter equation for simple Lie algebras, Funct. Anal. Appl., 16:3 (1982), 159-180. * [9] Semenov-Tyan-Shanskii M.A., What a classical r-matrix is, Funct. Anal. Appl., 17:4 (1983), 259-272. * [10] Goncharov M.E., On Rota-Baxter operators of non-zero weight arisen from the solutions of the classical Yang-Baxter equation, Sib. El. Math. Rep., 14 (2017), 1533-1544. * [11] Goncharov M.E., Rota-Baxter operators and non-skew-symmetric solutions of the classical Yang-Baxter equation on quadratic Lie algebras, Sib. El. Math. Rep., 16 (2019), 2098-2109. * [12] Oudom J.-M., Guin D., On the Lie enveloping algebra of a pre-Lie algebra, Journal of K-theory: K-theory and its Applications to Algebra, Geometry, and Topology, 2 (2008), 147-167. * [13] Ebrahimi-Fard K., Lundervold A., Munthe-Kaas H.Z., On the Lie enveloping algebra of a post-Lie algebra, Journal of Lie Theory, 25 (2015), 4, 1139-1165. * [14] Jian R.Q., Zhang J., Rota-Baxter coalgebras, 2014, arXiv:1409.3052. * [15] Ma T., Liu L., Rota-Baxter coalgebras and Rota-Baxter bialgebras, Linear and Multilinear Algebra, 64 (2016), 968-979. * [16] Vallette B., Homology of generalized partition posets, J. Pure Appl. Algebra, 208 (2007), 2, 699-725. * [17] Bai C., Guo L., Ni X., Nonabelian generalized Lax pairs, the classical Yang-Baxter equation and PostLie algebras, Comm. Math. Phys., 297 (2010), 2, 553-596. Maxim Goncharov, Novosibirsk State University, Sobolev Institute of Mathematics, Novosibirsk, Russia.
# Accounting for survey design in Bayesian disaggregation of survey-based areal estimates of proportions: an application to the American Community Survey

Marco H., Veronica J., Roderick J. Nationwide Children's Hospital, Center for Injury Research and Policy, 575 Children's Crossroad, Columbus, OH 43205; Department of Statistics, School of Information and Computer Sciences, Donald Bren Hall, University of California, Irvine, Irvine, CA 92697; Department of Biostatistics, School of Public Health, 1415 Washington Heights, University of Michigan, Ann Arbor, MI 48109. t1 Post-doctoral Scientist. t2 Associate Professor. t3 Richard D. Remington Distinguished University Professor.

###### Abstract Understanding the effects of social determinants of health on health outcomes requires data on characteristics of the neighborhoods in which subjects live. However, estimates of these characteristics are often aggregated over space and time in a fashion that diminishes their utility. Take, for example, estimates from the American Community Survey (ACS), a multi-year nationwide survey administered by the U.S. Census Bureau: estimates for small municipal areas are aggregated over 5-year periods, whereas 1-year estimates are only available for municipal areas with populations $>$65,000. Researchers may wish to use ACS estimates in studies of population health to characterize neighborhood-level exposures. However, 5-year estimates may not properly characterize temporal changes or align temporally with other data in the study, while the coarse spatial resolution of the 1-year estimates diminishes their utility in characterizing neighborhood exposure. To circumvent this issue, in this paper we propose a modeling framework to disaggregate estimates of proportions derived from sampling surveys which explicitly accounts for the survey design effect. We illustrate the utility of our model by applying it to the ACS data, generating estimates of poverty for the state of Michigan at fine spatio-temporal resolution.

###### keywords: Spatio-temporal change of support problem, Bayesian hierarchical model, Multi-resolution approximation, Latent spatio-temporal process, American Community Survey, Survey-based estimates.

## 1 Introduction Interest and attention in the social determinants of health, that is, the social and economic factors that characterize where and how people live, have soared in the last 20 years (Braverman, Egerter and Williams, 2011; Marmot et al., 2012) as awareness of health disparities within countries’ populations has become more prevalent. Discussions on social determinants of health have also been at the forefront of national news (TV, newspapers, magazines, etc.) during the first months of the current COVID-19 pandemic, as various social determinants of health – poverty, homelessness, smoke exposure, etc. – are suspected to worsen COVID-19 outcomes (Abrams and Szefler, 2020; Rollston and Galea, 2020; Singu et al., 2020). A public source of information on social determinants of health is the American Community Survey (ACS), a multi-year national survey administered by the United States Census Bureau. Sampling annually approximately 3.5 million Americans, including those residing in unincorporated territories (U.S. Census Bureau, 2008), the ACS releases every year up-to-date, timely, and accurate population and housing information to the general public and to data-users.
Due to privacy concerns and sample size limitations, these estimates are often aggregated over space and/or time. Currently, ACS estimates for small municipal sub-divisions, such as census tracts, are aggregated over 5-year time periods, whereas estimates of neighborhood characteristics corresponding to 1-year time periods are only available for municipal sub-divisions with populations greater than 65,000. This aggregation, though justified by privacy and statistical considerations, can result in estimates whose spatial and/or temporal resolution is misaligned with the target spatial and/or temporal resolution of a research study. As an example, a researcher who wishes to incorporate an ACS estimate of poverty (e.g., the proportion of households living in poverty) in an epidemiological analysis is typically faced with a choice: (a) utilize estimates with fine spatial resolution whose 5-year temporal resolution is unlikely to conform to other data sources and, in the case of longitudinal studies, fails to properly characterize yearly changes; or (b) utilize 1-year estimates, whose aggregation over large areal units diminishes their ability to characterize neighborhoods in a meaningful way. Having access to estimates at fine spatial and temporal resolution would eliminate these problems. This is the goal of our paper. Taking the ACS as a case study, we propose a Bayesian hierarchical spatio-temporal model that aims to generate estimates of certain social indicators at fine spatial and temporal resolution starting from estimates - the ACS estimates - that are either available at fine resolution in space but not in time, or are temporally resolved but not in space. Hence, we offer a model that solves the so-called spatio-temporal _change of support problem_ (COSP), that is, the problem of performing inference about a spatial or spatio-temporal process at a resolution (or support) that differs from that of the data, in the case of multi-year estimates of proportions derived from a complex survey. The COSP is one of the most common problems in spatial statistics, and reviews of methods to address it can be found in Banerjee, Carlin and Gelfand (2004) and Gotway and Young (2002). Gelfand, Zhu and Carlin (2001) offer an extension to the space-time setting. Our model is not the first attempt at solving the COSP for ACS data, nor is it the first to model these data spatially: Bradley, Wikle and Holan (2015), Bradley, Wikle and Holan (2016), Bradley, Holan and Wikle (2016), Savitsky (2016), and Simpson et al. (2019) have all contributed to this literature. In particular, Bradley, Wikle and Holan (2015) were the first to present a statistical model that derives estimates of a socio-economic indicator at a different spatial support than that of the ACS data, namely over three different Native American reservations. Our model differs from previous work in several ways. First, it deals with spatio-temporal estimates of proportions: previous efforts considered either variables that could be modeled using a Gaussian distribution or dealt with estimates that referred to counts, and thus could be modeled as Poisson random variables. As we show in Section 5.3, applications and adaptations of the aforementioned models to handle proportions, while yielding point-level estimates that are for the most part in agreement with the ACS estimates, tend to underrepresent the estimates' uncertainty, leading to credible intervals that, when validated with hold-out data, do not achieve the correct nominal coverage.
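The nominal-coverage criterion just mentioned is straightforward to compute. The following sketch (ours, with hypothetical variable names, not the authors' validation code) checks the empirical coverage of 95% credible intervals against hold-out direct estimates.

```python
import numpy as np

def empirical_coverage(lower, upper, held_out):
    """Fraction of hold-out values that fall inside their credible intervals.

    lower, upper: interval endpoints for each held-out areal unit;
    held_out:     the corresponding direct (hold-out) estimates.
    A well-calibrated model should return a value close to the nominal level.
    """
    lower, upper, held_out = map(np.asarray, (lower, upper, held_out))
    inside = (held_out >= lower) & (held_out <= upper)
    return inside.mean()

# hypothetical example with 4 held-out tracts
print(empirical_coverage([0.05, 0.10, 0.02, 0.20],
                         [0.15, 0.25, 0.08, 0.40],
                         [0.12, 0.30, 0.05, 0.25]))   # -> 0.75
```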
Another key distinction of our modeling approach is that it explicitly accounts for the survey design effect, thus merging survey methodology with spatial statistical modeling frameworks. Although Bradley, Wikle and Holan (2016) did account for the sampling design when modeling ACS estimates for a given year, they only did so when specifying a COSP model for estimates of count data: in that case, they provided a model for both the ACS estimates and the ACS sampling-based variance, leveraging the known relationship between the mean and the variance of a Poisson distribution. No model for the ACS sampling-based variance was formulated in the case of Gaussian-distributed indicators (see Bradley, Wikle and Holan (2015)): rather, the ACS variance was taken as known and used as the variance of the normal likelihood. Our model proposes to account for the sampling design in two ways: first, by including the design effect (Kish, 1965), building upon the work of Korn and Graubard (1998), Ghitza and Gelman (2013), Mercer et al. (2014), and Chen, Wakefield and Lumley (2014), secondly by introducing random effects specified at the spatial resolution of the sampling frame. Specifically, using both the ACS estimates of proportions and their sampling based variance, we create two working variables - the _effective number of cases_ (ENC) and the _effective sample size_ (ESS) - which we use in a Binomial likelihood. Furthermore, since the ACS sampling design uses counties as sampling frames, to account for the fact that estimates relative to administrative areal units within the same county might be more strongly correlated than estimates relative to areal units that are spatially close but within different counties, our model introduces county-level random effects. As in Bradley, Wikle and Holan (2015) and Bradley, Wikle and Holan (2016), we handle the COSP by assuming that the true area-level proportions result from the aggregation of an underlying, point-referenced spatio-temporal process over the specified area. As in Bradley, Wikle and Holan (2015) and Bradley, Wikle and Holan (2016), such specification allows us to derive estimates over spatio-temporal resolutions that are equal or larger than the smallest spatial and temporal resolution for which we have data. In our application, we focus on generating estimates at the 1-year time scale and at census tract level, but our modeling framework could be applied to generate estimates over any type of areal unit. To handle the large number of areal units for which we have data, another contribution of the paper is to introduce an approximation that alleviates computation when trying to infer upon a point-referenced, spatio-temporal process. The approximation, called the Spatio-Temporal Multi- Resolution Approximation (ST-MRA), is achieved through a novel basis function expansion, which builds upon the Multi-Resolution Approximation (MRA) of a Gaussian process presented by Katzfuss (2017). In its focus on yielding estimates of socio-economic indicators over areal units, our model shares similarities with other efforts within the rich small- area estimation (SAE) literature. In particular, of the two broad classes of methods within SAE (Pfeffermann, 2013), our model fits within the class of model-based methods. The latter comprises statistical approaches where a stochastic formulation is offered for the sample data, and optimal predictors, or approximately optimal predictors, are used to derive estimates of the quantity of interest. 
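To make the design-effect adjustment described earlier in this section concrete before returning to the comparison with the SAE literature, the following sketch (ours; the paper's formal definitions appear later) builds the effective sample size and effective number of cases from a direct survey estimate and its design-based variance, following the construction used in Korn and Graubard (1998), Mercer et al. (2014), and Chen, Wakefield and Lumley (2014) cited above.

```python
def effective_sample_size(p_hat, design_var):
    # Effective sample size implied by the design-based variance of p_hat:
    # the size of a simple random sample that would yield the same variance.
    return p_hat * (1.0 - p_hat) / design_var

def effective_number_of_cases(p_hat, design_var):
    # ENC = p_hat * ESS; ENC and ESS can then play the roles of the
    # "successes" and "trials" in a Binomial likelihood for the true proportion.
    return p_hat * effective_sample_size(p_hat, design_var)

# hypothetical ACS-style inputs: estimate 0.12 with design-based standard error 0.02
p_hat, se = 0.12, 0.02
ess = effective_sample_size(p_hat, se**2)          # ~264 effective respondents
enc = effective_number_of_cases(p_hat, se**2)      # ~31.7 effective cases
print(round(ess, 1), round(enc, 1))
```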
In using the ACS estimates as data and in specifying a hierarchical model, we follow the same approach as Fay and Herriot (1979), however, differently from the latter, we account directly for the spatial dependence in the estimates. Including spatial random effects into SAE model- based methods is not unheard of: Singh, Shukla and Kundu (2005), Pratesi and Salvati (2008), Pereira and Coelho (2010), and Porter et al. (2014), to name a few, have all explicitly accounted for spatial correlation in the estimates. However, differently from us, these models do not adjust for the sampling design, nor do they explicitly address the change of support problem in multi- year survey estimates, which is the raison d’être of our modeling effort. We apply our model to ACS multi-year estimates of the proportion of families in Michigan living in poverty, and we show the ability of our model to generate estimates with high precision, highlighting the potential for this model to become a tool that can be used by epidemiological researchers to derive reliable, fine-scale estimates of socioeconomic indicators. These estimates can be subsequently incorporated into health studies examining the role of social determinants of health on various health outcomes. The remainder of this paper is organized as follows. Section 2 provides more detailed background information on the ACS. Sections 3.1 to 3.5 describe our modeling framework whereas Section 3.6 provides a succinct description of alternative models that we apply to survey-based estimates of proportions. Section 4 illustrates the capabilities of our model in simulation experiments, while Section 5 presents results for the proportion of families living in poverty in Michigan from 2006 through 2016. In both cases, the predictive performance of our model is compared to that of alternative models. The paper concludes with a discussion in Section 6. ## 2 Data In this section, we provide general information on the American Community Survey (ACS) and we present results of an exploratory data analysis performed on the ACS estimates of the proportion of families living in poverty in Michigan between 2006 and 2016. ### 2.1 The American Community Survey The American Community Survey is an ongoing survey conducted by the U.S. Census Bureau (U.S. Census Bureau, 2008, 2014). It replaced the Census long form in the 2000 Census. It samples approximately 3.5 million households annually, collecting data on social, housing, economic, and other community characteristics. In contrast to the Census long form, for which data were gathered every 10 years, the ACS surveys are administered continuously, allowing for the timely dissemination of up-to-date community information that are statistically representative of the time period during which the surveys were administered. A comprehensive report on the ACS sampling methodology is available in (U.S. Census Bureau, 2014). Here we provide a brief overview and focus on the sampling of housing units rather than group quarters (e.g. college dormitories or correctional facilities). The ACS sampling procedure is broken up into two phases: the first phase consists of the initial sample selection while the second phase deals with follow-up surveys being sent to unmailable and non- responding addresses. Housing units are sampled into the ACS independently for each county in the US. 
To ensure that no household is selected for the ACS more than once in a 5-year period, the sampling frame within each county is subdivided into five disjoint sub-frames, which are rotated through every five years. For example, ACS surveys from 2006, 2011, and 2016 are all selected from the same sub-frame. Each year, the first phase of the ACS sampling begins by sorting any new housing units into one of the five sub-frames. The ACS sampling rate varies depending on the characteristics of the neighborhood in which a housing unit resides. Housing units belong to several municipal sub-divisions of varying sizes, or sampling entities, for example, the unit’s city, census tract, or school district. Each of these sampling entities is provided with a measure of size (MOS), which is approximated based on the number of addresses contained within the entity. Blocks of housing units are stratified based on the MOS of the smallest sampling entity that contains that unit, which is referred to as the unit’s smallest entity’s measure of size (SEMOS). The sampling rates for the ACS are inversely proportional to the housing units’ SEMOS. Tables 4-1 and 4-2 in U.S. Census Bureau (2014) provide details on the ACS sampling rates. Once the initial sample is selected, each address is assigned a month in which it will receive the survey. In the second phase of sampling, follow-up surveys are sent to a set of randomly selected, non-responding households, with higher sampling fractions for populations with high rates of non-response.

Much like sample selection, computation of the ACS sampling weights takes place in several stages. The first stage provides a housing unit with a so-called basic sampling weight, which is inversely related to the unit’s probability of selection. A series of additional calibrations then occurs, including adjustments to ensure that the weighted estimates derived from the ACS conform to the Census Bureau’s Population Estimates Program (PEP). Weighted estimates of neighborhood characteristics are weighted functions of survey responses within a neighborhood and time period. Margins of error are computed using successive differences replication (U.S. Census Bureau, 2014).

Given statistical accuracy, precision and privacy concerns, ACS estimates are released at varying spatial and temporal resolution. Specifically, estimates for small municipal subdivisions, such as census tracts, are aggregated and provided in the form of averages over a 5-year time period, whereas yearly estimates are provided for administrative regions that have over 65,000 inhabitants. While certain counties meet this criterion, a sizeable number of counties in the US have fewer than 65,000 residents and are therefore excluded from the 1-year ACS estimates. An alternative to using county-level estimates is to use 1-year estimates at the Public Use Microdata Areas (PUMA) level, that is, collections of contiguous counties and/or census tracts whose total population exceeds 100,000 people.

### 2.2 Families in poverty in Michigan

The proportion of families in poverty in an area is one of the indicators that the US Census Bureau employs to measure poverty in the population. A family is deemed to live in poverty if the total income of all the family members living together is lower than a predetermined threshold. There are multiple poverty thresholds (currently, a total of 48) depending on the size of the family and the age of the family members.
Thresholds do not vary geographically across the U.S. but are updated annually for inflation. In this paper, we consider data on the proportion of families living in poverty in Michigan in the period 2006-2016. Specifically, we will utilize 1-year PUMA-level estimates (for a total of 68 PUMAs in Michigan) and 5-year census tract estimates. We will use both sets of estimates to derive census tract-level, 1-year estimates of the proportion of families living in poverty in Michigan for every year from 2006 to 2016. Of the 2,813 census tracts in Michigan, 84 (3%) did not have enough data to provide estimates due to a low number of residential buildings. Hence, these census tracts were not considered in the analysis.

Our exploratory data analysis started with an inspection of the 1-year estimates at the PUMA level, which showed considerable spatial variability in poverty across Michigan. While in some PUMAs only 1.4% of the families lived in poverty, in others that percentage rose to about 45%. However, when averaged across Michigan, the average proportion of families living in poverty in Michigan’s PUMAs varied between 12.1% and 12.8% in the period 2006-2016. Investigating whether the level of poverty changed over time, we fitted a linear mixed model to the entire time series of estimates. We considered both a model with a linear time trend and a model with linear and quadratic terms of time. In both models, the temporal correlation was accounted for through the inclusion of PUMA-specific intercepts, which were assumed to be independent and identically distributed according to a common normal distribution. The model with the quadratic trend fitted the data better and indicated, on average, a growing level of poverty among families in Michigan from 2006 until 2011 followed by a gradual decline.

As the ACS estimates of the proportion of families in poverty are multi-year estimates, another goal of our exploratory data analysis was to investigate the type of spatio-temporal dependence in the data. As we discuss in Section 3.2, our model assumes an underlying, continuous in space, discrete in time spatio-temporal process driving the true areal proportions. Thus, to examine the nature of the spatio-temporal dependence, we took the centroids of the Michigan PUMAs as observation sites, treated the data as geostatistical data, and we used two approaches: (i) we conducted a formal Likelihood Ratio Test (LRT) to assess separability of space and time; and (ii) we performed a more exploratory investigation based on comparing yearly variograms. As the boundaries of the PUMAs in Michigan changed following the 2010 Census, in assessing space-time separability we split the 2006-2016 data into two sets: one consisting of data relative to the 2006-2011 pre-boundary-changes period, and one made of estimates relative to the 2012-2016 time period. Working on the log scale, and performing the test of separability proposed by Mitchell, Genton and Gumpertz (2005) on the two sets of data individually, we obtained LRT values of $7.4\times 10^{-8}$ and $2.4\times 10^{-5}$, respectively, suggesting time and space separability. We reached a similar conclusion when comparing the empirical semi-variograms and the associated parameters, derived using the log of the ACS estimates of the proportion of families in poverty for each year. Despite some annual variation, the estimates of the marginal variance and decay parameter were generally similar over time.
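For readers who wish to reproduce this kind of check, the yearly semi-variograms can be computed along the following lines. This is only a minimal Python sketch using the classical Matheron estimator, assuming arrays of PUMA centroid coordinates and log-transformed 1-year estimates; it is not necessarily the exact procedure used in our analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist

def empirical_semivariogram(coords, values, n_bins=10):
    # Classical (Matheron) estimator: gamma(h) = mean of 0.5 * (z_i - z_j)^2
    # over pairs whose separation falls in each distance bin.
    d = pdist(coords)
    g = 0.5 * pdist(values[:, None], metric="sqeuclidean")
    bins = np.linspace(0.0, d.max(), n_bins + 1)
    idx = np.digitize(d, bins[1:-1])
    centers = 0.5 * (bins[:-1] + bins[1:])
    gamma = np.array([g[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(n_bins)])
    return centers, gamma

# Usage (illustrative): one semivariogram per year, using the PUMA centroids as
# observation sites and the log of the 1-year estimates as the variable.
# centers, gamma_2006 = empirical_semivariogram(puma_centroids, np.log(z_2006))
```

Comparing the fitted marginal variance and decay parameter across years, as done above on the real data, then gives an informal check of temporal stationarity of the spatial dependence.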
In light of these results, available in the Supplementary Material, when modeling the underlying process driving the true, areal proportion of families living in poverty in Michigan, we decided to adopt a separable space-time covariance function.

## 3 Modeling Approach

Our model uses both the 5-year ACS estimates at census tract level, and the 1-year ACS estimates at the PUMA level. Following Bradley, Wikle and Holan (2015), we denote by $z_{t}^{(l)}(A)$ the estimate of a proportion corresponding to areal unit $A$ for the $l$-year time period ending in time $t$. Thus, $z_{t}^{(5)}(A_{ig})$ indicates the ACS estimate for the 5-year time period ending at year $t$ for census tract $g$, $g=1,\ldots,G_{i}$, within PUMA $i$, $i=1,\ldots,N$, whereas $z_{t}^{(1)}(A_{i})$ refers to the 1-year ACS estimate for PUMA $i$ at year $t$. We denote by $\tau^{2(5)}_{t}(A_{ig})$ and $\tau^{2(1)}_{t}(A_{i})$ the design-based variance of $z_{t}^{(5)}(A_{ig})$ and $z_{t}^{(1)}(A_{i})$, respectively, derived from the margins of error provided in the ACS dataset.

### 3.1 Modeling survey-based estimates of areal proportions accounting for the design effect

Following Bradley, Wikle and Holan (2015), we assume that the survey-based estimate of the proportion corresponding to areal unit $A$ over the $l$-unit time period ($l=1$ or $5$), ending at time $t$, $z_{t}^{(l)}(A)$, is related to the true proportion, $\pi_{t}^{(l)}(A)$, through some distribution function. A first idea would be to model the ACS estimate $z_{t}^{(l)}(A)$ as following a normal distribution with mean equal to the true proportion, $\pi_{t}^{(l)}(A)$, and variance equal to the design-based variance $\tau^{2(l)}_{t}(A)$. However, as also noted in another context by Chen, Wakefield and Lumley (2014), such a modeling choice would be inaccurate for small samples and would not ensure that the estimated $\pi_{t}^{(l)}(A)$ belongs to the interval $[0,1]$. For this reason, building upon the work of Korn and Graubard (1998), and following Mercer et al. (2014) and Chen, Wakefield and Lumley (2014), we introduce a working likelihood for a random variable $q^{*(l)}_{t}(A)$ that we construct from the ACS estimate $z_{t}^{(l)}(A)$ and from the effective sample size $m^{*(l)}_{t}(A)$. The latter represents the sample size that a simple random sample (SRS) should have in order to yield an estimator of the proportion whose variance matches the design-based variance of the ACS estimate. To derive the effective sample size, we use the notion of design effect $d$ introduced by Kish (1965), who calls a survey’s design effect the ratio of the sampling-based variance of a survey-based estimator to the variance of an estimator under SRS. By setting the design effect equal to 1 and solving the equation for the SRS sample size $m_{t}^{(l)}(A)$, we obtain the sample size of the SRS that will yield an estimator with variance corresponding to the ACS design-based variance. We call this sample size the effective sample size (ESS). Including the ESS in the distribution function that relates $q^{*(l)}_{t}(A)$ to the true proportion $\pi_{t}^{(l)}(A)$ allows us to account for the survey’s design effect in our modeling framework. More specifically, in the case of $z_{t}^{(l)}(A)$, for a SRS of size $m_{t}^{(l)}(A)$, the estimated variance of $z_{t}^{(l)}(A)$ would be equal to $\frac{z_{t}^{(l)}(A)(1-z_{t}^{(l)}(A))}{m_{t}^{(l)}(A)}$.
Setting the survey’s design effect $d$ equal to 1 yields the equation $\tau^{2(l)}_{t}(A)=\frac{z_{t}^{(l)}(A)(1-z_{t}^{(l)}(A))}{m_{t}^{(l)}(A)},$ leading to the following expression for the effective sample size, $m_{t}^{*(l)}(A)$: $m_{t}^{*(l)}(A)=\left[\frac{z_{t}^{(l)}(A)(1-z_{t}^{(l)}(A))}{\tau^{2(l)}_{t}(A)}\right],$ (3.1) with $\left[\cdot\right]$ denoting rounding to the nearest integer. Although not strictly needed in (3.1), we introduce rounding to ensure that the effective sample size is an integer. We then use the effective sample size $m_{t}^{*(l)}(A)$ and the ACS estimate $z_{t}^{(l)}(A)$ for areal unit $A$ and for the $l$-year time period ending in year $t$ to derive the effective number of cases, $q_{t}^{*(l)}(A)$, for the same areal unit and for the same time period, that is: $q_{t}^{*(l)}(A):=\left[m_{t}^{*(l)}(A)\cdot z_{t}^{(l)}(A)\right],$ (3.2) again rounded to the nearest integer. This quantity represents the number of cases that we would need to have observed in a SRS of size $m_{t}^{*(l)}(A)$ to obtain an estimate of the proportion $\pi_{t}^{(l)}(A)$ that is equal to the ACS estimate $z_{t}^{(l)}(A)$ and with the same variance as the design-based variance $\tau^{2(l)}_{t}(A)$. Using now the effective number of cases in our working likelihood, our Bayesian hierarchical model specifies at the first stage a Binomial likelihood for $q_{t}^{*(l)}(A)$ with number of trials equal to $m_{t}^{*(l)}(A)$ and success probability equal to the true proportion, $\pi_{t}^{(l)}(A)$, our parameter of interest, i.e.: $q_{t}^{*(l)}(A)|\;\pi_{t}^{(l)}(A)\sim\text{Binomial}\left(m_{t}^{*(l)}(A),\pi_{t}^{(l)}(A)\right).$ As we fit our model to 5-year and 1-year ACS estimates of areal proportions, from (3.1) and (3.2), we derive the corresponding effective numbers of cases and effective sample sizes - $q_{t}^{*(5)}(A_{ig}),m_{t}^{*(5)}(A_{ig})$ and $q_{t}^{*(1)}(A_{i}),m_{t}^{*(1)}(A_{i})$ - which we employ in our working likelihood, made of the following two components $\displaystyle q_{t}^{*(5)}(A_{ig})|\;\pi_{t}^{(5)}(A_{ig})$ $\displaystyle\stackrel{{\scriptstyle ind}}{{\sim}}$ $\displaystyle\text{Binomial}\left(m_{t}^{*(5)}(A_{ig}),\pi_{t}^{(5)}(A_{ig})\right)$ $\displaystyle q_{t}^{*(1)}(A_{i})|\;\pi_{t}^{(1)}(A_{i})$ $\displaystyle\stackrel{{\scriptstyle ind}}{{\sim}}$ $\displaystyle\text{Binomial}\left(m_{t}^{*(1)}(A_{i}),\pi_{t}^{(1)}(A_{i})\right)$ (3.3) with $g=1,\ldots,G_{i}$ and $i=1,\ldots,N$. We note that in (3.3) we are following the tradition of spatial generalized linear models (see Diggle, Tawn and Moyeed (1998) for details), where spatial dependence in the data is accounted for by assuming that the model parameters are spatially correlated. To disaggregate the ACS estimates, we link $\pi_{t}^{(5)}(A_{ig})$ and $\pi_{t}^{(1)}(A_{i})$ to $\pi_{t}^{(1)}(A_{ig})$, $g=1,\ldots,G_{i};i=1,\ldots,N$, the true proportions at our desired spatial and temporal resolution, via: $\displaystyle\pi_{t}^{(5)}(A_{ig})$ $\displaystyle=$ $\displaystyle\frac{1}{5}\sum_{k=t-4}^{t}\pi_{k}^{(1)}(A_{ig})$ $\displaystyle\pi_{t}^{(1)}(A_{i})$ $\displaystyle=$ $\displaystyle\frac{1}{N_{t}(A_{i})}\sum_{h=1}^{G_{i}}N_{t}(A_{ih})\pi_{t}^{(1)}(A_{ih})$ (3.4) where $N_{t}(A)$ generally denotes the number of households in areal unit $A$ at time $t$.
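To make (3.1) and (3.2) concrete, the two working variables can be computed directly from a published estimate and its design-based variance. The sketch below uses purely hypothetical numbers; the resulting ESS and ENC then enter (3.3) as the Binomial number of trials and number of successes, respectively.

```python
import numpy as np

def effective_sample_size(z, tau2):
    # Eq. (3.1): ESS = round( z (1 - z) / tau^2 ), with z the ACS estimate of the
    # proportion and tau2 its design-based variance.
    return np.rint(z * (1.0 - z) / tau2).astype(int)

def effective_number_of_cases(z, tau2):
    # Eq. (3.2): ENC = round( ESS * z ).
    m_star = effective_sample_size(z, tau2)
    return np.rint(m_star * z).astype(int), m_star

# Hypothetical tract-level estimates with their design-based variances (which in
# practice would be derived from the published margins of error).
z = np.array([0.12, 0.35, 0.02])
tau2 = np.array([0.0004, 0.0009, 0.0001])

q_star, m_star = effective_number_of_cases(z, tau2)
print("ESS:", m_star)   # Binomial number of trials in (3.3)
print("ENC:", q_star)   # Binomial number of "successes" in (3.3)
```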
### 3.2 Addressing the Change of Support Problem (COSP)

In practice, we may want to infer about proportions over areal units that are not conveniently comprised of combinations of $A_{ig}$ and/or $A_{i}$. To this end, we further decompose $\pi_{t}^{(1)}(A_{ig})$, i.e. the true proportion at one-year and census tract resolution. Following in the tradition of models handling the spatial and spatio-temporal COSP, we assume that a random variable for an areal unit and a time period can be expressed as the aggregation over the areal unit and the time period of a point-referenced spatio-temporal process, continuous in space and discrete in time. Because the process is discrete in time: $\pi_{t}^{(l)}(A_{ig})=\frac{1}{l}\sum_{k=t-l+1}^{t}\pi_{k}^{(1)}(A_{ig})\qquad g=1,\ldots,G_{i};\;i=1,\ldots,N.$ To allow the flexibility to work over any areal unit, we link $\pi_{t}^{(1)}(A_{ig})$ to an underlying point-referenced spatio-temporal process $\zeta_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S}$, via the probit link function, $\Phi^{-1}(\cdot)$, thus yielding $\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)=\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\zeta_{t}(\mathbf{s})d\mathbf{s}+\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig}),$ (3.5) with $\epsilon_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,\tau^{2}_{\epsilon})$ and $\xi(C_{A_{ig}})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,\tau^{2}_{C})$. In (3.5), the $\epsilon_{t}(A_{ig})$ denote i.i.d. error terms that account for model specification error in linking $\pi^{(1)}_{t}(A_{ig})$ to the latent process $\zeta_{t}(\mathbf{s})$, whereas $\xi(C_{A_{ig}})$ denotes a random effect defined at the same areal unit level as the clustering units of the sampling survey. In (3.5), $C_{A_{ig}}$ indicates the cluster that contains areal unit $A_{ig}$. The cluster-level random effect, $\xi(C_{A_{ig}})$, is introduced to enforce stronger dependence among certain estimates in a way that is reflective of the survey sampling design.

To provide an interpretation of the spatio-temporal process $\zeta_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, in (3.5), we consider the application to the proportion of families in poverty in any areal unit $A$. In this case, $\zeta_{t}(\mathbf{s})$ represents a function of the likelihood that a family living at location $\mathbf{s}\in A$ is in poverty in year $t$. Decomposing the point-referenced spatio-temporal process $\zeta_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, into a large-scale spatio-temporal trend, $\mu_{t}(\mathbf{s})$, representing the mean of the process, and a spatio-temporal random effect, $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, (3.5) becomes: $\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)=\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\left(\mu_{t}(\mathbf{s})+w_{t}(\mathbf{s})\right)d\mathbf{s}+\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig}).$ (3.6) In light of the results of our exploratory data analysis, discussed in Section 2.2, we model the spatio-temporal random effect, $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, as a Gaussian spatio-temporal process with a separable space-time covariance function with an AR(1) structure in time and a spatial dependence encoded through the covariance function $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$, $\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}$. For computations involving a large number of areal units, we approximate the spatio-temporal process $w_{t}(\mathbf{s}),\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, with a linear combination of spatial basis functions with appropriate spatio-temporal basis function weights.
Given the nested geographies of the ACS, we elect to use the basis functions implied by the Multi-Resolution Approximation (MRA; Katzfuss (2017)), which are also characterized by a nested structure.

### 3.3 The Spatio-Temporal Multi-Resolution Approximation (ST-MRA)

As the MRA is defined only in a spatial context, here we extend it to the spatio-temporal setting. Let $w_{t}(\mathbf{s}),\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, denote a mean-zero spatio-temporal Gaussian process defined on a spatial domain $\mathcal{S}$ with a separable space-time covariance function that invokes a first-order autoregressive structure in time and spatial covariance function $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$, $\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}$. As in the MRA, we start by introducing a first set of $r$ knots on the spatial domain $\mathcal{S}$ (level 0). Then, at each level $m$ ($m=1,\ldots,M$), we recursively partition the spatial domain $\mathcal{S}$ into $J^{m}$ non-overlapping subregions, in each of which we introduce $r$ knots. Let $S^{*}_{m,j}$ denote the set of $r$ knots defined on partition $j$ of level $m$. We define the basis functions $\mathbf{b}_{m,j}(\mathbf{s})$, for $j=1,\ldots,J^{m};m=0,\ldots,M$, recursively as: $\displaystyle v_{0}(\mathbf{s}_{1},\mathbf{s}_{2})$ $\displaystyle=$ $\displaystyle C(\mathbf{s}_{1},\mathbf{s}_{2};\boldsymbol{\theta})$ $\displaystyle\mathbf{b}_{m,j}(\mathbf{s})$ $\displaystyle:=$ $\displaystyle v_{m}(\mathbf{s},S^{*}_{m,j})$ $\displaystyle\mathbf{K}^{-1}_{m,j}$ $\displaystyle:=$ $\displaystyle v_{m}(S^{*}_{m,j},S^{*}_{m,j})$ $\displaystyle v_{m+1}(\mathbf{s}_{1},\mathbf{s}_{2})$ $\displaystyle=$ $\displaystyle 0\qquad\text{ if }\mathbf{s}_{1}\text{ and }\mathbf{s}_{2}\text{ are in different regions at resolution $m$}$ (3.7) $\displaystyle v_{m+1}(\mathbf{s}_{1},\mathbf{s}_{2})$ $\displaystyle:=$ $\displaystyle v_{m}(\mathbf{s}_{1},\mathbf{s}_{2})-\mathbf{b}_{m,j}(\mathbf{s}_{1})^{\prime}\mathbf{K}_{m,j}\mathbf{b}_{m,j}(\mathbf{s}_{2})\qquad\text{ otherwise.}$ In the MRA construction (Katzfuss, 2017), the basis function weights $\boldsymbol{\eta}_{m,j}$ are specified to follow a multivariate normal distribution $\boldsymbol{\eta}_{m,j}\sim N_{r}(\mathbf{0},\mathbf{K}_{m,j})$. With this specification for the basis functions and the basis function weights, the linear combination $\sum_{m=0}^{M}\sum_{j=1}^{J^{m}}\mathbf{b}_{m,j}(\mathbf{s})\boldsymbol{\eta}_{m,j}$ yields an $M$-level approximation to a mean-zero Gaussian process with covariance function $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$. For our spatio-temporal process $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, we let the basis function weights $\boldsymbol{\eta}_{t,m,j}$ vary in time, modeling them with a stationary, first-order autoregressive structure (Gelfand, Banerjee and Gamerman, 2005).
Hence, at time $t=1$ we assume that $\boldsymbol{\eta}_{1,m,j}\sim N_{r}(\mathbf{0},\mathbf{K}_{m,j})$, while for $t=2,\dots,T$: $\displaystyle\boldsymbol{\eta}_{t,m,j}|\boldsymbol{\eta}_{t-1,m,j},\boldsymbol{\eta}_{t-2,m,j},\ldots,\boldsymbol{\eta}_{1,m,j}$ $\displaystyle\sim$ $\displaystyle N_{r}(\alpha\boldsymbol{\eta}_{t-1,m,j},\mathbf{U}_{m,j}),$ (3.8) $\displaystyle\mathbf{U}_{m,j}$ $\displaystyle=$ $\displaystyle(1-\alpha^{2})\mathbf{K}_{m,j}.$ We call $w_{t,M}(\mathbf{s}):=\sum_{m=0}^{M}\sum_{j=1}^{J^{m}}\mathbf{b}_{m,j}(\mathbf{s})\boldsymbol{\eta}_{t,m,j}$ (3.9) with basis functions $\mathbf{b}_{m,j}(\mathbf{s})$ defined as in (3.7), the $M$-level ST-MRA approximation of the separable, spatio-temporal process $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, with AR(1) dependence in time and spatial covariance function $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$. Section 1 of the Supplementary Material (Benedetti, Berrocal and Little, 2021) shows that the above expression does indeed provide an approximation of the desired spatio-temporal dependence structure.

### 3.4 The Bayesian spatio-temporal disaggregation model

Combining the formulations in Sections 3.2 and 3.3, we obtain: $\displaystyle\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)$ $\displaystyle=$ $\displaystyle\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\left(\mu_{t}(\mathbf{s})+w_{t}(\mathbf{s})\right)d\mathbf{s}+\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig})$ (3.10) $\displaystyle\approx$ $\displaystyle\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\left(\mu_{t}(\mathbf{s})+w_{t,M}(\mathbf{s})\right)d\mathbf{s}+\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig})$ $\displaystyle=$ $\displaystyle\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\left(\mu_{t}(\mathbf{s})+\sum_{m=0}^{M}\sum_{j=1}^{J^{m}}\mathbf{b}_{m,j}(\mathbf{s})\boldsymbol{\eta}_{t,m,j}\right)d\mathbf{s}+\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig}),$ where the spatio-temporal random effect $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S}$, has been replaced by its $M$-level ST-MRA approximation $w_{t,M}(\mathbf{s})$ defined in (3.9).
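To illustrate the recursion in (3.7), the following is a rough numerical sketch of a one-level ($M=1$) construction on a one-dimensional toy domain, with an exponential covariance standing in for $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$; the knot locations, partition and parameter values are arbitrary and only meant to show how the basis functions and the matrices $\mathbf{K}_{m,j}$ are formed.

```python
import numpy as np

def cov(s1, s2, sigma2=1.0, phi=0.2):
    # Exponential covariance standing in for C(s, s'; theta).
    return sigma2 * np.exp(-np.abs(s1[:, None] - s2[None, :]) / phi)

# Level-0 knots on [0, 1]; level-1 partition into J = 2 subregions, r = 5 knots each.
knots0 = np.linspace(0.05, 0.95, 5)
regions = [(0.0, 0.5), (0.5, 1.0)]
knots1 = [np.linspace(a + 0.04, b - 0.04, 5) for a, b in regions]

v0 = cov                                     # v_0 = C
K0 = np.linalg.inv(v0(knots0, knots0))       # K_{0,1}: inverse of v_0(S*_0, S*_0)
b0 = lambda s: v0(s, knots0)                 # b_{0,1}(s) = v_0(s, S*_0)

def v1(s1, s2):
    # Remainder covariance within a level-1 region (set to zero across regions).
    return v0(s1, s2) - b0(s1) @ K0 @ b0(s2).T

# Covariance implied by the 1-level approximation sum_m sum_j b_{m,j} eta_{m,j}.
s = np.linspace(0.0, 1.0, 60)
approx = b0(s) @ K0 @ b0(s).T                # level-0 contribution
for j, (a, b) in enumerate(regions):
    inside = (s >= a) & (s < b) if j == 0 else (s >= a)
    K1 = np.linalg.inv(v1(knots1[j], knots1[j]))   # K_{1,j}
    b1 = v1(s[inside], knots1[j])                  # b_{1,j}(s), supported on region j
    approx[np.ix_(inside, inside)] += b1 @ K1 @ b1.T

print("max |C - C_approx| =", np.abs(cov(s, s) - approx).max())
```

In the full ST-MRA, the weights attached to these basis functions are then given the AR(1) prior in (3.8), so that a draw of $w_{t,M}(\mathbf{s})$ is simply the basis matrix evaluated at the locations of interest multiplied by the current weights.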
As the sampling frames in the ACS survey consist of counties, denoting by $C_{A_{ig}}$ the county containing census tract $A_{ig}$, our model for disaggregating spatially and temporally the ACS estimates of proportions has the following hierarchical specification: $\displaystyle q_{t}^{*(5)}(A_{ig})|\;\pi_{t}^{(5)}(A_{ig})$ $\displaystyle\stackrel{{\scriptstyle ind}}{{\sim}}$ $\displaystyle\text{Binomial}\left(m_{t}^{*(5)}(A_{ig}),\pi_{t}^{(5)}(A_{ig})\right)$ $\displaystyle q_{t}^{*(1)}(A_{i})|\;\pi_{t}^{(1)}(A_{i})$ $\displaystyle\stackrel{{\scriptstyle ind}}{{\sim}}$ $\displaystyle\text{Binomial}\left(m_{t}^{*(1)}(A_{i}),\pi_{t}^{(1)}(A_{i})\right)$ $\displaystyle\pi_{t}^{(5)}(A_{ig})$ $\displaystyle=$ $\displaystyle\frac{1}{5}\sum_{k=t-4}^{t}\pi_{k}^{(1)}(A_{ig})$ (3.11) $\displaystyle\pi_{t}^{(1)}(A_{i})$ $\displaystyle=$ $\displaystyle\frac{1}{N_{t}(A_{i})}\sum_{h=1}^{G_{i}}N_{t}(A_{ih})\pi_{t}^{(1)}(A_{ih})$ $\displaystyle\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)$ $\displaystyle\approx$ $\displaystyle\frac{1}{|A_{ig}|}\int_{\mathbf{s}\in A_{ig}}\left(\mu_{t}(\mathbf{s})+\sum_{m=0}^{M}\sum_{j=1}^{J^{m}}\mathbf{b}_{m,j}(\mathbf{s})\boldsymbol{\eta}_{t,m,j}\right)d\mathbf{s}$ $\displaystyle+$ $\displaystyle\xi(C_{A_{ig}})+\epsilon_{t}(A_{ig})$ $\displaystyle\xi(C_{A_{ig}})$ $\displaystyle\stackrel{{\scriptstyle iid}}{{\sim}}$ $\displaystyle N(0,\tau_{C}^{2})$ $\displaystyle\epsilon_{t}(A_{ig})$ $\displaystyle\stackrel{{\scriptstyle iid}}{{\sim}}$ $\displaystyle N(0,\tau^{2}_{\epsilon})$ with $q_{t}^{*(5)}(A_{ig})$, $q_{t}^{*(1)}(A_{i})$, $m_{t}^{*(5)}(A_{ig})$, and $m_{t}^{*(1)}(A_{i})$ defined as in (3.2) and (3.1), respectively. The county-level random effects in the expression of $\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)$ allow the ACS estimates for census tracts within the same county to exhibit greater dependence with one another than with ACS estimates for census tracts in different counties, even when the distances between those tracts are the same. We speculate that this will account for the fact that factors such as sampling procedure or response rate within a county-wide sampling frame might systematically affect ACS estimates corresponding to most or all of the census tracts within that county. The integral in (3.10) can be re-expressed as: $\displaystyle\Phi^{-1}\left(\pi_{t}^{(1)}(A_{ig})\right)$ $\displaystyle\approx$ $\displaystyle\mu_{t}(A_{ig})+\sum_{m=0}^{M}\sum_{j=1}^{J^{m}}\mathbf{b}_{m,j}(A_{ig})\boldsymbol{\eta}_{t,m,j}+\xi(C_{A_{ig}})+\tilde{\epsilon}_{t}(A_{ig})$ $\displaystyle\tilde{\epsilon}_{t}(A_{ig})$ $\displaystyle\stackrel{{\scriptstyle iid}}{{\sim}}$ $\displaystyle N(0,\tau^{2}),$ (3.12) where $\mu_{t}(A_{ig})$ and $\mathbf{b}_{m,j}(A_{ig})$ denote the integrals, normalized by $|A_{ig}|$, of $\mu_{t}(\mathbf{s})$ and of the basis functions $\mathbf{b}_{m,j}(\mathbf{s})$, $m=0,\ldots,M$; $j=1,\ldots,J^{m}$, as $\mathbf{s}$ varies in areal unit $A_{ig}$, with $g=1,\ldots,G_{i};i=1,\ldots,N$. The term $\tilde{\epsilon}_{t}(A_{ig})$ in (3.12) accounts for errors due to model misspecification and aggregation, as well as any error that occurs as a result of the multi-resolution space-time approximation.
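In practice, the areal terms $\mathbf{b}_{m,j}(A_{ig})$ and $\mu_{t}(A_{ig})$ in (3.12) can be approximated by averaging the point-level quantities over a fine set of locations inside each census tract. The snippet below is only a minimal sketch; the grid of points and the `basis_fn` evaluator are assumed to be supplied by the user, and the dummy objects are purely illustrative.

```python
import numpy as np

def areal_average(basis_fn, points_in_A):
    # Approximate (1/|A|) * integral over A of b_{m,j}(s) ds by the average of the
    # basis functions evaluated at a dense set of points inside areal unit A.
    # points_in_A: (n_points, 2) array of locations; basis_fn returns (n_points, r).
    return basis_fn(points_in_A).mean(axis=0)

# Example with a dummy basis evaluator and a dummy cloud of points in one tract.
dummy_points = np.random.default_rng(0).uniform(size=(200, 2))
dummy_basis = lambda pts: np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
print(areal_average(dummy_basis, dummy_points))   # r-vector playing the role of b_{m,j}(A_ig)
```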
### 3.5 Prior distributions

Our Bayesian model includes Inverse Gamma prior distributions for the error variance parameter $\tau^{2}$ in (3.12), and for the variance of the county-level random effects, $\tau^{2}_{C}$. We assume $\mu_{t}(\mathbf{s})\equiv\mu_{t},t=1,2,...,T$, and model these spatially constant temporal trend terms as independent a priori, with an improper prior $p(\mu_{t})\propto 1,\forall t$. This modeling choice implies that, $\forall t=1,2,\ldots,T$, the spatio-temporal random effect $w_{t}(\mathbf{s}),\mathbf{s}\in\mathcal{S}$, accounts for all the spatial variation in the ACS estimates. We investigated whether allowing the mean terms $\mu_{t}$ to vary in space would lead to significantly different results in terms of model fit, but we did not observe any meaningful change. We assign a Uniform$([0,1])$ prior to the autoregressive parameter $\alpha$ of the basis function weights in (3.8), while the definition of the ST-MRA basis functions is determined once we choose the spatial covariance function $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})$. Here we take it to be the stationary Matérn covariance function with parameters $\sigma^{2}$, $\phi$ and $\nu$: $C(\mathbf{s},\mathbf{s}^{\prime};\boldsymbol{\theta})=\frac{\sigma^{2}}{2^{\nu-1}\Gamma(\nu)}\left(\frac{||\mathbf{s}-\mathbf{s}^{\prime}||}{\phi}\right)^{\nu}\mathcal{K}_{\nu}\left(\frac{||\mathbf{s}-\mathbf{s}^{\prime}||}{\phi}\right)$ (3.13) where $\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}$ and $\mathcal{K}_{\nu}(\cdot)$ is the modified Bessel function of the second kind. We specify a non-informative Inverse Gamma prior on the marginal variance parameter $\sigma^{2}$, while we place a Gamma$(1,1)$ prior on the range parameter $\phi$ and a Uniform$((0,2))$ prior on the smoothness parameter $\nu$, as suggested by Finley, Banerjee and Carlin (2007). As the latter is notoriously difficult to estimate, an alternative specification would entail the use of penalized complexity priors as described in Simpson et al. (2017).

### 3.6 Other models

We succinctly describe alternative models that we compare with our model in Sections 4 and 5. More details are available in Section 3 of the Supplementary Material (Benedetti, Berrocal and Little, 2021). To evaluate the utility of the _effective sample size_ and _effective number of cases_, a first competing model specifies a “standard” Binomial likelihood for the number of cases, obtained by multiplying the ACS estimate, $z^{(l)}_{t}(A)$, by the number of survey responses, $m_{t}^{(l)}(A)$, obtained in areal unit $A$ over the $l$-unit time period ending in year $t$. Calling this product $q^{(l)}_{t}(A)$, the _standard Binomial model_ for disaggregation applied to the 1-year and 5-year ACS data assumes that $\begin{array}[]{rcl}q_{t}^{(1)}(A_{i})|\pi^{(1)}_{t}(A_{i})&\sim&\mbox{Binomial}\left(m^{(1)}_{t}(A_{i}),\pi^{(1)}_{t}(A_{i})\right)\\\ q^{(5)}_{t}(A_{ig})|\pi^{(5)}_{t}(A_{ig})&\sim&\mbox{Binomial}\left(m^{(5)}_{t}(A_{ig}),\pi^{(5)}_{t}(A_{ig})\right).\end{array}$ We keep the other levels of this model exactly as in the Bayesian hierarchical model in (3.11).

The second and third models we consider are extensions and adaptations of models proposed by Bradley, Wikle and Holan (2016) and Bradley, Wikle and Holan (2015), respectively, for analyzing Poisson spatial-only and Gaussian space-time ACS data. The _BWH Poisson space-time model_ extends the model for count data proposed by Bradley, Wikle and Holan (2016) to the space-time setting.
Interpreting the counts $q^{(1)}_{t}(A_{i})$ and $q^{(5)}_{t}(A_{ig})$ as Poisson random variables, we assume a latent process $Y_{t}(A_{ig})$, $t=1,\ldots,T$, defined at the census tract level, such that $\begin{array}[]{rcl}q^{(1)}_{t}(A_{i})|\left\\{Y_{t}(A_{ig});g=1,\ldots,G_{i},i=1,\ldots,N\right\\}&\sim&\mbox{Poisson}\left(\sum_{h=1}^{G_{i}}\exp\left(Y_{t}(A_{ih})\right)\right)\\\ q^{(5)}_{t}(A_{ig})|\left\\{Y_{k}(A_{ig});k=1,\ldots,T\right\\}&\sim&\mbox{Poisson}\left(\frac{1}{5}\sum_{k=t-4}^{t}\exp\left(Y_{k}(A_{ig})\right)\right)\end{array}$ for $g=1,\ldots,G_{i};i=1,\ldots,N$ and $t=1,\ldots,T$. For each $t=1,\ldots,T$, following Bradley, Wikle and Holan (2016), $Y_{t}(A_{ig})$ is decomposed as: $Y_{t}(A_{ig})=\beta_{t}+\boldsymbol{\psi}\boldsymbol{\vartheta}+\varsigma_{t}(A_{ig})$ (3.14) with $\boldsymbol{\psi}$ denoting Moran’s basis functions, $\boldsymbol{\vartheta}$ the basis function weights defined as in Bradley, Wikle and Holan (2016), and $\varsigma_{t}(A_{ig})$ error terms that account for aggregation and other types of errors. Differently from Bradley, Wikle and Holan (2016), here we are dealing with estimates over multiple years: to accommodate this added dimension, in (3.14) we allow the intercept terms $\beta_{t}$ to vary in time, hence representing a temporal trend. Similarly, we allow the error terms $\varsigma_{t}(A_{ig})$ to change in time. Finally, as in Bradley, Wikle and Holan (2016), the _BWH Poisson space-time model_ provides a stochastic formulation for the ACS design-based variances $\tau^{2(1)}_{t}(A_{i})$ and $\tau^{2(5)}_{t}(A_{ig})$, each assumed to follow a lognormal distribution: $\begin{array}[]{rcl}\log\left(\tau^{2(1)}_{t}(A_{i})\right)&\sim&N\left(\log\left(\sum_{h=1}^{G_{i}}\exp\left(Y_{t}(A_{ih})\right)\right),\sigma^{2(1)}(A_{i})\right)\\\ \log\left(\tau^{2(5)}_{t}(A_{ig})\right)&\sim&N\left(\log\left(\frac{1}{5}\sum_{k=t-4}^{t}\exp\left(Y_{k}(A_{ig})\right)\right),\sigma^{2(5)}(A_{ig})\right).\end{array}$ The specification of the _BWH Poisson space-time model_ is completed by the following prior distributions, which we take directly from Bradley, Wikle and Holan (2016): $\beta_{t}\stackrel{{\scriptstyle iid}}{{\sim}}N(0,10^{15}),\forall t=1,\ldots,T$; $\varsigma_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,\sigma^{2}_{\varsigma}),\forall t=1,\ldots,T$, $g=1,\ldots,G_{i},i=1,\ldots,N$; $\sigma^{2(1)}(A_{i})\stackrel{{\scriptstyle iid}}{{\sim}}\mbox{Gamma}(1,1),\forall i=1,\ldots,N$; $\sigma^{2(5)}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}\mbox{Gamma}(1,1),\forall g=1,\ldots,G_{i},i=1,\ldots,N$; and $\sigma^{2}_{\varsigma}\sim\mbox{Gamma}(1,1)$.

The last model we consider is an adaptation of the spatio-temporal model proposed by Bradley, Wikle and Holan (2015) for Gaussian-distributed ACS variables to ACS estimates of proportions. To frame the ACS estimates of proportions, $z^{(l)}_{t}(A)$, within a Gaussian likelihood, we apply a logit transformation to them, thus obtaining variables defined on $\mathbb{R}$. As $\tau^{2(l)}_{t}(A)$ is the design-based variance of $z^{(l)}_{t}(A)$, we employ the delta method to derive the expression of the variance of $\log\left(\frac{z^{(l)}_{t}(A)}{(1-z^{(l)}_{t}(A))}\right)$ for every areal unit $A$ and $l$-unit time period.
Thus, the first stage of this new Bayesian hierarchical model, which we call the _BWH Gaussian Delta method model_, is given by: $\begin{array}[]{rcl}\log\left(\frac{z^{(1)}_{t}(A_{i})}{(1-z^{(1)}_{t}(A_{i}))}\right)|\pi^{(1)}_{t}(A_{i})&\sim&N\left(\log\left[\frac{\pi^{(1)}_{t}(A_{i})}{(1-\pi^{(1)}_{t}(A_{i}))}\right],\frac{\tau^{2(1)}_{t}(A_{i})}{\left[z^{(1)}_{t}(A_{i})(1-z^{(1)}_{t}(A_{i}))\right]^{2}}\right)\\\ \\\ \log\left(\frac{z^{(5)}_{t}(A_{ig})}{(1-z^{(5)}_{t}(A_{ig}))}\right)|\pi^{(5)}_{t}(A_{ig})&\sim&N\left(\log\left[\frac{\pi^{(5)}_{t}(A_{ig})}{(1-\pi^{(5)}_{t}(A_{ig}))}\right],\frac{\tau^{2(5)}_{t}(A_{ig})}{\left[z^{(5)}_{t}(A_{ig})(1-z^{(5)}_{t}(A_{ig}))\right]^{2}}\right)\end{array}$ for $i=1,\ldots,N$, $g=1,\ldots,G_{i}$, $t=1,\ldots,T$. Calling $\tilde{y}^{(1)}_{t}(A_{i}):=\log\left[\frac{\pi^{(1)}_{t}(A_{i})}{(1-\pi^{(1)}_{t}(A_{i}))}\right]$ and $\tilde{y}^{(5)}_{t}(A_{ig}):=\log\left[\frac{\pi^{(5)}_{t}(A_{ig})}{(1-\pi^{(5)}_{t}(A_{ig}))}\right]$, we achieve their disaggregation in time through the following equality $\tilde{y}^{(5)}_{t}(A_{ig})=\frac{1}{5}\sum_{k=t-4}^{t}\tilde{y}^{(1)}_{k}(A_{ig})\qquad i=1,\ldots,N;g=1,\ldots,G_{i}$ whereas their disaggregation in space is handled, for any areal unit $A$, through $\tilde{y}^{(1)}_{t}(A)=\frac{1}{|A|}\int_{s\in A}\zeta_{t}(\mathbf{s})d\mathbf{s}\qquad\forall t=1,\ldots,T$ with $\zeta_{t}(\mathbf{s})$ a spatio-temporal Gaussian process. Following a similar approach as discussed in Section 3.2, for all $t=1,\ldots,T$, we decompose $\zeta_{t}(\mathbf{s})$ into the sum of a spatio-temporal trend term $\mu_{t}(\mathbf{s})$ and spatio-temporal random effects $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, and we replace $\zeta_{t}(\mathbf{s})$ with $\mu_{t}(\mathbf{s})+w_{t}(\mathbf{s})$ under the integral. As in Section 3.2, $w_{t}(\mathbf{s})$ is assumed to have a separable space-time covariance function. In fitting this model to data we use a dimension reduction approach, and we approximate the spatio-temporal random effects $w_{t}(\mathbf{s})$, $\mathbf{s}\in\mathcal{S};t=1,\ldots,T$, using the ST-MRA approximation discussed in Section 3.3. Bradley, Wikle and Holan (2015) also handled the large dimensionality of the data through an approximation that involved a basis function expansion. However, they used bisquare basis functions rather than the basis functions we employ here. We believe that the difference in basis functions employed in the approximation should not result in drastic changes in terms of model performance.

### 3.7 Computation

We fit our Bayesian hierarchical model and all the other competing models using Markov Chain Monte Carlo (MCMC) algorithms, with Gibbs sampling and Metropolis-Hastings steps. For our model, posterior sampling exploits the data augmentation method of Albert and Chib (1993) to sample the MRA basis function coefficients $\boldsymbol{\eta}_{t,m,j}$ via Gibbs sampling, whereas posterior samples of the Matérn covariance function parameters, $\phi$ and $\nu$, are generated using a Metropolis-Hastings algorithm. We assess convergence of the MCMC algorithms both visually, by inspecting trace plots, and numerically, using Geweke’s diagnostic for Markov chains (Geweke, 1992). We run each MCMC algorithm for a number of iterations large enough that the effective sample size post burn-in for each model parameter exceeds 1,000. The proposal distributions used in the Metropolis-Hastings steps are tuned during burn-in to achieve desirable acceptance rates (Roberts, Gelman and Gilks, 1997).
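For concreteness, the core Albert and Chib (1993) augmentation step can be sketched for a plain probit regression with Gaussian coefficients. This is only a toy illustration of the Gibbs mechanics (truncated-normal latent variables followed by a conjugate Gaussian update), not the full sampler for (3.11), which additionally updates the ST-MRA weights, the county-level random effects and the covariance parameters; all data and prior values below are made up.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy probit data: y_i | beta ~ Bernoulli(Phi(x_i' beta)).
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-0.5, 1.0, 0.7])
y = rng.binomial(1, norm.cdf(X @ beta_true))

prior_prec = np.eye(p) / 100.0          # beta ~ N(0, 100 I)
beta = np.zeros(p)
draws = np.empty((2000, p))

for it in range(draws.shape[0]):
    # (1) Latent normals z_i ~ N(x_i' beta, 1), truncated to z_i > 0 iff y_i = 1
    #     (the Albert-Chib augmentation), drawn by inverting the normal CDF.
    mu = X @ beta
    lo = np.where(y == 1, norm.cdf(-mu), 0.0)
    hi = np.where(y == 1, 1.0, norm.cdf(-mu))
    z = mu + norm.ppf(lo + rng.uniform(size=n) * (hi - lo))
    # (2) Conjugate Gaussian update of beta given z.
    Q = prior_prec + X.T @ X                       # posterior precision
    m = np.linalg.solve(Q, X.T @ z)                # posterior mean
    beta = m + np.linalg.solve(np.linalg.cholesky(Q).T, rng.normal(size=p))
    draws[it] = beta

print("posterior means:", draws[500:].mean(axis=0).round(2), "truth:", beta_true)
```

In our setting, the same two-step structure applies after expanding each Binomial observation in (3.11) into its effective number of Bernoulli trials, with the regression coefficients replaced by the basis function weights and random effects.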
## 4 Simulation Studies

We now report results for two simulation studies: Simulation study 1 evaluates the ability of the proposed model to spatially and temporally disaggregate areal-level estimates of proportions, even when the data are not generated according to our model specifications. Simulation study 2, on the other hand, gauges the need to account for the design effect.

### 4.1 Generating the true proportions

In both simulation studies, we use very similar data generating mechanisms, all very different from our modeling framework. As a spatial domain we envision a geographical configuration that is analogous to that of census tracts and PUMAs. Specifically, we consider a $10\times 10$ square grid made of 100 areal units, all assumed to have the same population size. The 100 areal units are in turn grouped into 4 distinct regions, each with the same population size and each containing 25 areal units (see Figure 1). To simulate data, we proceed as follows: we generate the true 1-year proportions $\pi^{(1)}_{t}(A_{ig})$ for each time $t=1,\ldots,10$ and for each subregion $g$, $g=1,\ldots,25$, nested within region $i$, $i=1,\ldots,4$. We repeat the procedure thirty times, yielding a total of 30 simulated datasets per simulation setting, and we consider 4 different data generating mechanisms. This allows us to assess the performance of our model in settings that differ from our modeling framework.

Figure 1: Areal units utilized in the simulation studies.

Under each simulation setting we assume that in each subregion $A_{ig}$ there is a latent covariate $x(A_{ig})$, not varying in time, distributed according to a standard normal distribution. This latent covariate drives the true proportion $\pi^{(1)}_{t}(A_{ig})$, $g=1,2,\ldots,25$; $i=1,\ldots,4$. In the first simulation setting, for each subregion $A_{ig}$ and at each time point $t$, the true proportion $\pi^{(1)}_{t}(A_{ig})$ is obtained by applying the _expit_ (inverse logit) function to the sum of the latent covariate $x(A_{ig})$ and a randomly generated white noise term $e_{t}(A_{ig})$, thus allowing for temporal and spatial variability in the true proportions. Although the true proportions $\pi^{(1)}_{t}(A_{ig})$ will not be the same across space and time, they are independent in space and time. To induce spatial correlation in the true proportions, in the second simulation setting we introduce a point-referenced spatial process, $\lambda(\mathbf{s})$, with a Matérn covariance function with unit marginal variance (i.e. $\sigma^{2}_{\lambda}=1$), and range and smoothness parameters ($\phi_{\lambda}$ and $\nu_{\lambda}$, respectively) equal to 0.5 and 1. This implies that the effective range of the spatial process $\lambda(\mathbf{s})$ is between 1 and 2, resulting in true proportions for neighboring subregions that are spatially dependent. The true proportion, $\pi^{(1)}_{t}(A_{ig})$, for areal unit $A_{ig}$ at time $t$ is obtained by applying the expit function to the sum of the latent covariate $x(A_{ig})$, the spatial process $\lambda(\mathbf{s}_{ig})$ evaluated at the centroid $\mathbf{s}_{ig}$ of areal unit $A_{ig}$, and the white noise term $e_{t}(A_{ig})$. Although this second data generating mechanism yields spatially correlated true proportions, they are independent over time. The third data generating mechanism allows for a temporal trend in the true proportions by introducing a linear time trend $\alpha_{0}+\alpha_{1}t$.
Thus, the true proportion for areal unit $A_{ig}$ at time $t$ is now obtained by applying the expit function to the sum of the latent covariate $x(A_{ig})$, the linear trend, $\alpha_{0}+\alpha_{1}t$, and the white noise $e_{t}(A_{ig})$. We employ $\alpha_{0}=-1.0$ and $\alpha_{1}=0.2$ in the linear temporal trend, resulting in a noticeable increase over time of the true proportions. Despite the temporal dependence in the true proportions generated under the third simulation setting, the $\pi^{(1)}_{t}(A_{ig})$’s are not spatially correlated. To address this shortcoming, the fourth data generating mechanism combines the second and third data generating mechanisms together, yielding true proportions $\pi^{(1)}_{t}(A_{ig})$ that display a temporal trend and are correlated in space. Thus, in short: $\pi^{(1)}_{t}(A_{ig})=\frac{\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}}{1+\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}}\qquad i=1,...,4;\;g=1,...,25$ with $\mathbf{X}=\\{x(A_{ig})\\}_{i=1,...,4;g=1,...,25}$, $x(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,1)$, and $\boldsymbol{\lambda}=\\{\lambda(\mathbf{s}_{ig})\\}_{i=1,...,4;g=1,...,25}$, $\boldsymbol{\lambda}\sim\text{MVN}\left(0,\Sigma(\boldsymbol{\theta}_{\lambda})\right)$, where $\Sigma(\boldsymbol{\theta}_{\lambda})$ is the 100$\times$100 covariance matrix induced by the Matérn covariance function in (3.13) with $\boldsymbol{\theta}_{\lambda}=\left(\sigma^{2}_{\lambda},\phi_{\lambda},\nu_{\lambda}\right)^{\prime}=\left(1.0,\;0.5,\;1.0\right)^{\prime}$. The data generating mechanism used in each of the four simulation settings is summarized in Table 1; a sketch of the fourth setting is given after the table.

Table 1: Data generating mechanism used in each of the four simulation settings of both simulation studies.

Setting | Equation
---|---
1 | $\displaystyle\pi^{(1)}_{t}(A_{ig})=\frac{\exp\\{x(A_{ig})+e_{t}(A_{ig})\\}}{1+\exp\\{x(A_{ig})+e_{t}(A_{ig})\\}},\qquad e_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,0.2^{2})$
2 | $\displaystyle\pi^{(1)}_{t}(A_{ig})=\frac{\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+e_{t}(A_{ig})\\}}{1+\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+e_{t}(A_{ig})\\}},\qquad e_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,0.2^{2})$
3 | $\displaystyle\pi^{(1)}_{t}(A_{ig})=\frac{\exp\\{x(A_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}}{1+\exp\\{x(A_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}},\qquad e_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,0.2^{2})$
4 | $\displaystyle\pi^{(1)}_{t}(A_{ig})=\frac{\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}}{1+\exp\\{x(A_{ig})+\lambda(\mathbf{s}_{ig})+\alpha_{0}+\alpha_{1}t+e_{t}(A_{ig})\\}},\qquad e_{t}(A_{ig})\stackrel{{\scriptstyle iid}}{{\sim}}N(0,0.2^{2})$
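As a sketch of the fourth data generating mechanism (the other three are obtained by dropping terms), the true proportions can be simulated as follows. The Matérn parameters, trend coefficients and noise standard deviation are those stated above; the grid coordinates (unit spacing) and the random seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

rng = np.random.default_rng(0)

def matern(d, sigma2=1.0, phi=0.5, nu=1.0):
    # Matern covariance as in (3.13); distances of zero are handled separately.
    d = np.asarray(d, dtype=float)
    dd = np.where(d == 0.0, 1.0, d)  # avoid evaluating the Bessel function at 0
    val = sigma2 / (2 ** (nu - 1) * gamma(nu)) * (dd / phi) ** nu * kv(nu, dd / phi)
    return np.where(d == 0.0, sigma2, val)

expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# 10 x 10 grid of subregion centroids (unit spacing, illustrative), T = 10 years.
coords = np.array([(i + 0.5, j + 0.5) for i in range(10) for j in range(10)])
T, n = 10, coords.shape[0]

x = rng.normal(size=n)                                   # latent covariate x(A_ig)
Sigma = matern(cdist(coords, coords)) + 1e-8 * np.eye(n)
lam = np.linalg.cholesky(Sigma) @ rng.normal(size=n)     # spatial process lambda(s_ig)
alpha0, alpha1 = -1.0, 0.2                               # linear time trend
e = rng.normal(scale=0.2, size=(T, n))                   # white noise e_t(A_ig)

# Setting 4 of Table 1: covariate + spatial process + trend + noise, on the expit scale.
pi_true = expit(x + lam + alpha0 + alpha1 * np.arange(1, T + 1)[:, None] + e)
print(pi_true.shape, round(pi_true.min(), 3), round(pi_true.max(), 3))
```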
### 4.2 Generating the observed estimates

Having generated the true proportions $\pi^{(1)}_{t}(A_{ig})$ under the 4 data generating mechanisms, we proceed to simulate the corresponding “_observed_” 5-year and 1-year estimates, $z^{(5)}_{t}(A_{ig})$ and $z_{t}^{(1)}(A_{i})$, respectively. These estimates play the equivalent role to the ACS estimates, in that they represent the data to which our model is fit. They are obtained by adding to the true proportions additional random error, which represents the survey-based error. Specifically, we first generate the 1-year subregional estimates $z_{t}^{(1)}(A_{ig})$ by adding error to the true proportions on the logit scale: $\begin{array}[]{rcl}\log\left(\frac{z^{(1)}_{t}(A_{ig})}{1-z^{(1)}_{t}(A_{ig})}\right)&=&\text{logit}\left(\pi^{(1)}_{t}(A_{ig})\right)+\tilde{e}_{t}(A_{ig})\\\ \\\ \tilde{e}_{t}(A_{ig})&\stackrel{{\scriptstyle iid}}{{\sim}}&N(0,v_{t}(A_{ig})).\end{array}$ From these we then derive the 1-year regional and the 5-year subregional observed estimates, $z^{(1)}_{t}(A_{i})$ and $z^{(5)}_{t}(A_{ig})$, as follows: $\displaystyle z_{t}^{(5)}(A_{ig})$ $\displaystyle=$ $\displaystyle\frac{1}{5}\sum_{k=t-4}^{t}z^{(1)}_{k}(A_{ig})$ $\displaystyle z_{t}^{(1)}(A_{i})$ $\displaystyle=$ $\displaystyle\frac{1}{25}\sum_{h=1}^{25}z^{(1)}_{t}(A_{ih}).$ We use two different strategies to determine the magnitude of the variances $v_{t}(A_{ig})$. In simulation study 1, the $v_{t}(A_{ig})$’s are fixed across the four simulation settings and are chosen so that the variation in the simulated observed estimates at adjacent time periods resembles the year-to-year variation in the ACS estimates of the proportion of families in poverty. This is achieved when $v_{t}(A_{ig})=0.15^{2}$ for $t=1,2,\ldots,10$; $i=1,\ldots,4$; $g=1,2,\ldots,25$. In simulation study 2, we derive the magnitude of the $v_{t}(A_{ig})$’s as a function of the design effect $d$. Since the goal of this simulation study is to evaluate the inferential gain obtained by working with the _effective sample size_ and _effective number of cases_, rather than with the _observed number of cases_ and the _observed sample size_, we let $d$ vary. This results in different values of the $v_{t}(A_{ig})$’s. The relationship between the $v_{t}(A_{ig})$’s and the design effect $d$ can be determined based on the following consideration: the $v_{t}(A_{ig})$’s ought to be such that, conditional on the true proportions $\pi^{(1)}_{t}(A_{ig})$, $\displaystyle\text{Var}\left(z^{(1)}_{t}(A_{ig})|\pi^{(1)}_{t}(A_{ig})\right)$ $\displaystyle=$ $\displaystyle\mbox{Var}\left[\text{expit}\left\\{\text{logit}\left(\pi^{(1)}_{t}(A_{ig})\right)+\tilde{e}_{t}(A_{ig})\right\\}\right]$ (4.1) $\displaystyle=$ $\displaystyle d\cdot\mbox{Var}^{(1)}_{SRS,t}(A_{ig}),$ with $\mbox{Var}^{(1)}_{SRS,t}(A_{ig})$ the variance of the estimator $\hat{\pi}^{(1)}_{SRS,t}(A_{ig})$ of the true proportion $\pi^{(1)}_{t}(A_{ig})$ based on a simple random sample (SRS). This leads to the following expression for $v_{t}(A_{ig})$: $\displaystyle v^{(1)}_{t}(A_{ig})$ $\displaystyle=$ $\displaystyle d\times\frac{\left(\exp\left\\{\text{logit}\left(\pi^{(1)}_{t}(A_{ig})\right)\right\\}+1\right)^{4}\pi^{(1)}_{t}(A_{ig})\left(1-\pi^{(1)}_{t}(A_{ig})\right)}{m^{(1)}_{t}(A_{ig})\exp\left\\{2\times\text{logit}(\pi^{(1)}_{t}(A_{ig}))\right\\}},$ (4.2) which simplifies to $v^{(1)}_{t}(A_{ig})=d/\left[m^{(1)}_{t}(A_{ig})\,\pi^{(1)}_{t}(A_{ig})\left(1-\pi^{(1)}_{t}(A_{ig})\right)\right]$. Letting the sample sizes $m^{(1)}_{t}(A_{ig})$ for each subregion $A_{ig}$, $g=1,\ldots,25;i=1,\ldots,4$, be equal to 100 for each time $t$, we derive from (4.2) the values of the $v_{t}(A_{ig})$’s. In simulation study 1, we fit our Bayesian hierarchical model to the “observed” estimates $z_{t}^{(5)}(A_{ig})$ and $z_{t}^{(1)}(A_{i})$, whereas in simulation study 2, we fit to them both our Bayesian hierarchical model and the standard Binomial model for disaggregation. In each case, we run the MCMC algorithms for 10,000 iterations, discarding the first 2,000 for burn-in.
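Continuing the sketch above, the pseudo-survey estimates for simulation study 2 can be produced by perturbing the true proportions on the logit scale with the variance in (4.2) and then aggregating. The snippet below uses a placeholder `pi_true` array (in practice it would come from the Section 4.1 mechanisms) and assumes columns are ordered by region.

```python
import numpy as np

rng = np.random.default_rng(1)
logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

# True 1-year subregional proportions (T x 100); columns 0-24 form region 1,
# 25-49 region 2, and so on. Placeholder values for illustration only.
T, n = 10, 100
pi_true = expit(rng.normal(-1.5, 0.7, size=(T, n)))

# Logit-scale noise variance implied by design effect d, eq. (4.2), which for an
# SRS of size m reduces to d / (m * pi * (1 - pi)).
d, m = 4.0, 100
v = d / (m * pi_true * (1 - pi_true))
z1_sub = expit(logit(pi_true) + rng.normal(scale=np.sqrt(v)))   # 1-year subregional estimates

# Observed data fed to the models: 5-year subregional averages, 1-year regional means.
z5_sub = np.stack([z1_sub[t - 4:t + 1].mean(axis=0) for t in range(4, T)])
z1_reg = z1_sub.reshape(T, 4, 25).mean(axis=2)
print(z5_sub.shape, z1_reg.shape)   # (6, 100) and (10, 4)
```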
### 4.3 Simulation Results

Simulation study 1. Taking the posterior means of the $\pi^{(1)}_{t}(A_{ig})$’s as estimates of the true proportions, and denoting them by $\hat{\pi}^{(1)}_{t}(A_{ig})$, $i=1,\ldots,4$; $g=1,2,\ldots,25$, we summarize the performance of our model by evaluating, for each $\pi^{(1)}_{t}(A_{ig})$, the magnitude of the errors and the empirical coverage of the 50% and the 95% pointwise and joint credible intervals. Tables 2 and 3 present results from our simulation studies, including the Mean Squared Error (MSE) and the Mean Absolute Error (MAE), defined as the mean squared difference and the mean absolute difference between the $\hat{\pi}^{(1)}_{t}(A_{ig})$’s and the true values. In addition, Tables 2 and 3 present the mean squared relative error (MSRE) and the mean absolute relative error (MARE), defined respectively as: $\displaystyle MSRE$ $\displaystyle=$ $\displaystyle\frac{1}{100}\sum_{i=1}^{4}\sum_{g=1}^{25}\frac{(\hat{\pi}^{(1)}_{t}(A_{ig})-\pi^{(1)}_{t}(A_{ig}))^{2}}{\pi^{(1)}_{t}(A_{ig})},$ $\displaystyle MARE$ $\displaystyle=$ $\displaystyle\frac{1}{100}\sum_{i=1}^{4}\sum_{g=1}^{25}\frac{|\hat{\pi}^{(1)}_{t}(A_{ig})-\pi^{(1)}_{t}(A_{ig})|}{\pi^{(1)}_{t}(A_{ig})}.$ (4.3) The 50% and 95% pointwise credible intervals, computed by taking the appropriate percentiles of the posterior samples for each $\pi^{(1)}_{t}(A_{ig})$, contain the true values between 46.1% and 53.4% of the time, and between 90.7% and 94.8% of the time, respectively. Similarly, the 50% and the 95% joint credible intervals, constructed using the method of Sørbye and Rue (2011), yield nearly nominal coverage. We observe that in both cases, the credible intervals corresponding to the middle of the time series ($t=3,4,5,6,7$) have the highest coverage probabilities. The low values for the squared and absolute errors indicate successful recovery of the true $\pi^{(1)}_{t}(A_{ig})$’s. Figure 2 presents scatterplots of the true proportions against the $\hat{\pi}^{(1)}_{t}(A_{ig})$’s: all plots illustrate our model’s ability to disaggregate survey-based estimates of areal proportions regardless of the data generating mechanism.
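The per-year summaries reported in Tables 2 and 3 can be computed along the following lines. This is a minimal sketch assuming arrays of posterior means, true values and (optionally) posterior samples; it uses simple equal-tail quantiles for the pointwise intervals and does not reproduce the joint intervals of Sørbye and Rue (2011).

```python
import numpy as np

def summarize(pi_hat, pi_true, draws=None, level=0.95):
    # pi_hat, pi_true: (T, n) arrays of posterior means and true proportions;
    # draws (optional): (S, T, n) posterior samples used for pointwise intervals.
    err = pi_hat - pi_true
    out = {
        "MSE":  np.mean(err ** 2, axis=1),
        "MAE":  np.mean(np.abs(err), axis=1),
        "MSRE": np.mean(err ** 2 / pi_true, axis=1),     # eq. (4.3)
        "MARE": np.mean(np.abs(err) / pi_true, axis=1),  # eq. (4.3)
    }
    if draws is not None:
        lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2], axis=0)
        out[f"coverage_{level}"] = np.mean((pi_true >= lo) & (pi_true <= hi), axis=1)
    return out
```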
Table 2: Simulation study 1. Results corresponding to 30 simulated datasets generated under the first two of the four settings described in Table 1. For each time $t$, $t=1,\ldots,10$, the table reports: (i) the average empirical coverage of the 95% pointwise credible interval for $\pi^{(1)}_{t}(A_{ig})$, $i=1,\ldots,4$, $g=1,\ldots,25$, averaged across the 100 subregions $A_{ig}$; (ii) the average empirical coverage of the 50% pointwise credible interval for $\pi^{(1)}_{t}(A_{ig})$; (iii) the average empirical coverage of the 95% simultaneous credible interval for $\boldsymbol{\pi}^{(1)}(\mathcal{S})=\left\\{\pi^{(1)}_{t}(A_{ig}):i=1,\ldots,4;g=1,\ldots,25\right\\}$ averaged across the 30 simulated datasets; (iv) the average empirical coverage of the 50% simultaneous credible interval for $\boldsymbol{\pi}^{(1)}(\mathcal{S})$; (v) the mean squared error (MSE); (vi) the mean absolute error (MAE); (vii) the mean squared relative error (MSRE); and (viii) the mean absolute relative error (MARE) as defined in (4.3).

(a) Simulation study 1, setting 1

$t$ | Avg. coverage, 95% pointwise CI | Avg. coverage, 50% pointwise CI | Avg. coverage, 95% joint CI | Avg. coverage, 50% joint CI | MSE $\times 10^{3}$ | MAE $\times 10^{2}$ | MSRE $\times 10^{2}$ | MARE $\times 10^{2}$
---|---|---|---|---|---|---|---|---
1 | 91.9% | 47.7% | 90.0% | 46.7% | 10.3 | 8.3 | 4.2 | 25.2
2 | 92.9% | 48.0% | 90.0% | 46.7% | 8.5 | 7.3 | 2.6 | 22.0
3 | 92.5% | 48.9% | 93.3% | 50.0% | 7.3 | 6.8 | 2.1 | 18.8
4 | 92.6% | 49.4% | 96.7% | 50.0% | 6.7 | 6.5 | 1.9 | 16.9
5 | 93.5% | 49.6% | 96.7% | 50.0% | 6.4 | 6.3 | 1.7 | 16.1
6 | 93.5% | 50.3% | 96.7% | 50.0% | 6.2 | 6.2 | 1.7 | 16.1
7 | 92.4% | 49.8% | 93.3% | 50.0% | 6.6 | 6.4 | 1.8 | 17.0
8 | 91.8% | 50.3% | 93.3% | 50.0% | 7.0 | 6.5 | 2.0 | 19.3
9 | 91.6% | 48.9% | 90.0% | 46.7% | 8.1 | 7.2 | 2.4 | 21.4
10 | 90.7% | 47.4% | 90.0% | 46.7% | 11.0 | 8.5 | 3.8 | 26.1

(b) Simulation study 1, setting 2

$t$ | Avg. coverage, 95% pointwise CI | Avg. coverage, 50% pointwise CI | Avg. coverage, 95% joint CI | Avg. coverage, 50% joint CI | MSE $\times 10^{3}$ | MAE $\times 10^{2}$ | MSRE $\times 10^{2}$ | MARE $\times 10^{2}$
---|---|---|---|---|---|---|---|---
1 | 91.3% | 47.7% | 90.0% | 46.7% | 9.7 | 8.1 | 4.0 | 24.8
2 | 92.0% | 49.7% | 93.3% | 46.7% | 6.3 | 6.4 | 2.3 | 22.3
3 | 92.3% | 51.9% | 93.3% | 50.0% | 5.4 | 5.8 | 1.9 | 21.4
4 | 94.8% | 52.9% | 93.3% | 50.0% | 4.8 | 5.3 | 1.5 | 19.0
5 | 94.4% | 53.4% | 96.7% | 50.0% | 5.0 | 5.4 | 1.5 | 17.4
6 | 94.0% | 53.0% | 96.7% | 50.0% | 5.0 | 5.4 | 1.4 | 16.2
7 | 93.9% | 52.8% | 93.3% | 50.0% | 5.0 | 5.5 | 1.6 | 17.9
8 | 94.0% | 51.0% | 93.3% | 50.0% | 5.3 | 5.8 | 1.8 | 19.4
9 | 92.8% | 50.4% | 93.3% | 50.0% | 6.1 | 6.3 | 2.2 | 20.6
10 | 91.2% | 47.2% | 90.0% | 46.7% | 9.4 | 8.0 | 4.0 | 25.6

Table 3: Simulation study 1. Results corresponding to 30 simulated datasets generated under the last two of the four settings described in Table 1. For each time $t$, $t=1,\ldots,10$, the table reports: (i) the average empirical coverage of the 95% pointwise credible interval for $\pi^{(1)}_{t}(A_{ig})$, $i=1,\ldots,4$, $g=1,\ldots,25$, averaged across the 100 subregions $A_{ig}$; (ii) the average empirical coverage of the 50% pointwise credible interval for $\pi^{(1)}_{t}(A_{ig})$; (iii) the average empirical coverage of the 95% simultaneous credible interval for $\boldsymbol{\pi}^{(1)}(\mathcal{S})=\left\\{\pi^{(1)}_{t}(A_{ig}):i=1,\ldots,4;g=1,\ldots,25\right\\}$ averaged across the 30 simulated datasets; (iv) the average empirical coverage of the 50% simultaneous credible interval for $\boldsymbol{\pi}^{(1)}(\mathcal{S})$; (v) the mean squared error (MSE); (vi) the mean absolute error (MAE); (vii) the mean squared relative error (MSRE); and (viii) the mean absolute relative error (MARE) as defined in (4.3).
(c) Simulation study 1, setting 3

$t$ | Avg. coverage, 95% pointwise CI | Avg. coverage, 50% pointwise CI | Avg. coverage, 95% joint CI | Avg. coverage, 50% joint CI | MSE $\times 10^{3}$ | MAE $\times 10^{2}$ | MSRE $\times 10^{2}$ | MARE $\times 10^{2}$
---|---|---|---|---|---|---|---|---
1 | 91.9% | 46.8% | 90.0% | 46.7% | 9.2 | 7.8 | 5.8 | 36.1
2 | 92.5% | 48.1% | 90.0% | 46.7% | 7.1 | 6.7 | 3.0 | 31.0
3 | 91.7% | 48.9% | 90.0% | 46.7% | 6.4 | 6.4 | 2.4 | 25.8
4 | 92.9% | 51.1% | 93.3% | 53.3% | 6.2 | 6.1 | 1.9 | 20.6
5 | 92.9% | 50.9% | 93.3% | 53.3% | 6.0 | 6.1 | 1.6 | 17.9
6 | 93.0% | 50.6% | 93.3% | 50.0% | 6.0 | 6.1 | 1.6 | 14.3
7 | 93.0% | 50.8% | 93.3% | 50.0% | 6.2 | 6.2 | 1.5 | 15.2
8 | 92.6% | 48.3% | 93.3% | 46.7% | 6.7 | 6.5 | 1.6 | 15.9
9 | 91.2% | 48.6% | 90.0% | 46.7% | 7.2 | 6.8 | 1.6 | 16.1
10 | 91.3% | 47.9% | 90.0% | 46.7% | 8.9 | 7.7 | 1.8 | 17.6

(d) Simulation study 1, setting 4

$t$ | Avg. coverage, 95% pointwise CI | Avg. coverage, 50% pointwise CI | Avg. coverage, 95% joint CI | Avg. coverage, 50% joint CI | MSE $\times 10^{3}$ | MAE $\times 10^{2}$ | MSRE $\times 10^{2}$ | MARE $\times 10^{2}$
---|---|---|---|---|---|---|---|---
1 | 91.8% | 45.3% | 90.0% | 46.7% | 9.1 | 7.7 | 5.2 | 34.8
2 | 91.3% | 49.3% | 90.0% | 46.7% | 6.5 | 6.4 | 3.3 | 30.9
3 | 91.1% | 49.8% | 90.0% | 50.0% | 5.7 | 6.0 | 2.5 | 25.1
4 | 92.2% | 51.8% | 93.3% | 53.3% | 5.3 | 5.6 | 1.8 | 20.1
5 | 92.6% | 52.0% | 93.3% | 53.3% | 5.1 | 5.5 | 1.5 | 16.3
6 | 93.0% | 51.9% | 93.3% | 53.3% | 5.2 | 5.5 | 1.5 | 14.7
7 | 92.7% | 51.7% | 93.3% | 50.0% | 5.3 | 5.6 | 1.5 | 15.0
8 | 92.2% | 49.8% | 93.3% | 46.7% | 5.9 | 6.1 | 1.6 | 15.3
9 | 92.0% | 48.7% | 93.3% | 46.7% | 6.7 | 6.5 | 1.7 | 16.8
10 | 91.9% | 46.1% | 90.0% | 46.7% | 9.2 | 7.8 | 1.8 | 18.1

Figure 2: Simulation study 1. Scatterplots of the true $\pi^{(1)}_{t}(A_{ig})$’s against their corresponding estimates, $\hat{\pi}^{(1)}_{t}(A_{ig})$, at selected times, $t=2,5,9$, with one panel for each of the four simulation settings described in Table 1 at each selected time. The simulated values and their estimates refer to all the 30 simulated datasets generated under the corresponding simulation setting.

Simulation study 2. Here we compare the performance of our proposed model to that of the standard Binomial model for disaggregation. Taking again the posterior means of the $\pi^{(1)}_{t}(A_{ig})$’s as our estimates, $\hat{\pi}^{(1)}_{t}(A_{ig})$’s, $g=1,\ldots,25;i=1,\ldots,4;t=1,\ldots,10$, Table 5 presents, for each model, the average mean squared error and the average mean absolute error, averaged over areal units, time points, and simulated datasets.
Conversely, Table 4 provides the empirical coverage probabilities of the 50% and 95% pointwise and joint credible intervals. We inspect the difference in inference provided by the two models as the design effect varies. When $d=2$, there is little difference between the standard Binomial model and our model. However, when $d=4,6$ or $8$, the standard Binomial model has an inferior performance with respect to each of the metrics considered, suggesting that by ignoring the design effect, the standard Binomial model places too much certainty in the pseudo-survey-estimates that we have generated. On the other hand, by correctly propagating the uncertainty of the pseudo-survey-estimates through the use of the effective sample size and the effective number of cases, our proposed model achieves lower mean squared and lower mean absolute error, as well as near nominal coverage probability.

Table 4: Simulation study 2. Average probability that a 95% (respectively, 50%) pointwise or joint credible interval covers the true value. Averages are taken over areal units, time points, datasets, and simulation settings for the pointwise credible intervals, whereas they are taken over time points, datasets and simulation settings for the joint credible intervals.

$d$ | Coverage, 95% pointwise CI - Proposed model | Coverage, 95% pointwise CI - Standard Binomial | Coverage, 50% pointwise CI - Proposed model | Coverage, 50% pointwise CI - Standard Binomial | Coverage, 95% joint CI - Proposed model | Coverage, 95% joint CI - Standard Binomial | Coverage, 50% joint CI - Proposed model | Coverage, 50% joint CI - Standard Binomial
---|---|---|---|---|---|---|---|---
2 | 93.2% | 89.7% | 53.3% | 46.7% | 93.1% | 93.1% | 53.8% | 53.8%
4 | 95.4% | 87.4% | 53.7% | 44.7% | 92.1% | 88.7% | 51.4% | 45.2%
6 | 93.2% | 83.6% | 52.8% | 42.6% | 91.2% | 83.6% | 50.1% | 41.0%
8 | 93.9% | 80.4% | 52.3% | 40.1% | 93.1% | 79.3% | 48.8% | 39.8%

Table 5: Simulation study 2. Average mean squared error (MSE) $\times 10^{3}$ and average mean absolute error (MAE) $\times 10^{2}$, computed by taking, respectively, the squared difference and the absolute value of the difference between the estimated $\hat{\pi}^{(1)}_{t}(A_{ig})$ and the true value $\pi^{(1)}_{t}(A_{ig})$, for $i=1,\ldots,4;g=1,\ldots,25;t=1,\ldots,10$. For each modeling framework, averages are taken over areal units, time points, datasets, and simulation settings.

$d$ | MSE $\times 10^{3}$ - Proposed model | MSE $\times 10^{3}$ - Standard Binomial | MAE $\times 10^{2}$ - Proposed model | MAE $\times 10^{2}$ - Standard Binomial
---|---|---|---|---
2 | 5.7 | 5.9 | 5.2 | 5.2
4 | 6.0 | 12.3 | 5.7 | 7.3
6 | 8.6 | 14.7 | 6.2 | 9.0
8 | 9.0 | 21.9 | 7.0 | 11.7

## 5 Data Analysis

### 5.1 Families in poverty

We apply the model in Sections 3.1 to 3.5 to the ACS estimates of the proportion of families in Michigan living in poverty from 2006 to 2016, with the goal of obtaining annual estimates at the census tract level. We present results from our model in a variety of ways, including a comparison of the mean and variance of our model-based estimates to those provided in the ACS dataset. We also compare the out-of-sample predictive performance of our model to that of the three competing models described in Section 3.6, which we also fit to the 2006-2016 ACS data.
However, our primary focus is on the disaggregated estimates for selected neighborhoods in Detroit. Here, we chose a set of census tracts in Midtown, a mixed-use area in Detroit located north of downtown and comprising several business districts, Wayne State University, and some residential neighborhoods. Some of the census tracts in Midtown have very high poverty, while others host various sporting arenas and other downtown attractions, and thus exhibit considerably lower poverty rates. Some of the high-poverty tracts have been subject to gentrification and development in recent years (Moehlman and Robins-Somerville, 2016; Aguilar, 2015). Due to these spatial inhomogeneities and temporal changes, we believe that these tracts illustrate the need for fine-scale spatio-temporal estimates in order to properly characterize neighborhood surroundings.

#### 5.1.1 Comparison of 5-year estimates from our model to ACS 5-year estimates

(a) Model-based vs. ACS estimates. (b) Posterior SD vs. ACS SE's. (c) ACS estimates and SE's for the tracts highlighted in (a):

| Symbol | Tract ID | ACS Estimate | ACS SE |
|---|---|---|---|
| $\blacktriangle$ | 26163517200 | 0.00 | 0.19 |
| $\bullet$ | 26161400100 | 0.57 | 0.20 |
| $\blacktriangle$ | 26163550800 | 0.00 | 0.04 |
| $\blacktriangle$ | 26073000700 | 0.63 | 0.11 |
| $\blacktriangle$ | 26057000400 | 0.00 | 0.61 |
| $\blacktriangle$ | 26037011200 | 0.00 | 0.17 |
| $\blacktriangle$ | 26163500400 | 0.64 | 0.10 |
| $\blacktriangle$ | 26163512900 | 0.72 | 0.11 |

Figure 3: (a) Model-based estimates of the 5-year average proportion of families in poverty in Michigan at the census tract level for years 2009-2013 against corresponding ACS estimates. Census tracts deviating greatly from the identity line are denoted by triangles. (b) Posterior standard deviation for the 5-year average proportion of families in poverty in Michigan at the census tract level as yielded by our model against the ACS standard error of the corresponding estimates. (c) Tabulation of Tract IDs, ACS estimates, and ACS standard errors for census tracts for which our model-based estimate deviates greatly from the ACS estimate.

Our model is intended for spatial and temporal disaggregation, but we can also generate 5-year census tract estimates, which should resemble the corresponding estimates from the ACS. Figure 3(a) shows a scatter plot comparing these estimates. The points tend to fall around the identity line, indicating good agreement. Figure 3(b) compares the standard errors of the ACS 5-year census tract estimates with the posterior standard deviations from our model. As our model borrows information from neighboring census tracts and from the 1-year PUMA-level estimates, it yields estimates with smaller posterior standard deviations than the ACS standard errors. Many of the points that deviate from the identity line in Figure 3(a) correspond to census tracts with zero-valued ACS estimates, which have large ACS standard errors compared to the average of 0.041 over all tracts (Figure 3(c)). An example of such a deviating census tract is displayed in panels (a) and (b) of Figure 4 along with its neighboring tracts. Census tract 26161400100 is located in downtown Ann Arbor and, according to the ACS estimate, has an average poverty rate of 0.59 for the 5-year time period from 2009 to 2013. This estimate deviates greatly from those of the neighboring tracts. In addition, it has a design-based standard error around 5 times the average standard error for ACS estimates of poverty in Michigan.
As our model borrows information from neighboring census tracts, the model-based estimate for this tract is pushed towards the average of the neighboring tracts more than towards the raw ACS estimate. We observe this kind of shrinkage of a model-based estimate towards the average of its neighbors in situations where the design-based standard error is quite large.

Figure 4: (a) ACS estimate for the 5-year average proportion of families in poverty in Ann Arbor census tract 26161400100 and (b) our corresponding model-based estimate. (c) ACS estimate for the 5-year average proportion of families in poverty in a census tract in Romulus and (d) our model-based estimate. Here, despite the lower poverty level in the neighboring census tracts, our model-based estimate is not smoothed towards the poverty level of the neighboring tracts. (e) Posterior densities of (i) the 5-year average proportion and (ii) the 1-year proportions of families living in poverty in census tract 26163563500 according to our model, as well as the truncated normal density function obtained using as mean and standard deviation, respectively, the 5-year ACS estimate and its corresponding standard error.

To illustrate this phenomenon, Figure 4 shows in panels (c) and (d) a census tract in Romulus also characterized by high poverty despite being surrounded by census tracts with lower poverty levels, according to the ACS. In this case, since the ACS standard error is much lower, our model-based estimate of poverty still reflects the spatial heterogeneity in the ACS estimates and is not smoothed towards the average of the neighboring tracts.

#### 5.1.2 Posterior density of true population proportions

ACS estimates are provided with margins of error based on standard errors, derived from an iterative estimation procedure (see U.S. Census Bureau (2014) for details), multiplied by standard normal quantiles. These may in turn be used to construct confidence intervals for the estimates by adding and subtracting the margins of error, with users being cautioned to use "logical boundaries when creating confidence bounds from the margins of error" (U.S. Census Bureau, 2008), i.e. zero and one for proportions. This implies a truncated normal distribution centered at the ACS estimate with variance depending on the standard error. Through our Bayesian modeling framework, we obtain the posterior distribution of the true proportions at any spatial and temporal scale without imposing assumptions of symmetry or truncation. For example, Figure 4(e) plots the posterior density of the average proportion of families living in poverty in census tract 26163563500, located in Midtown Detroit, for the 5-year period 2009-2013. For this census tract, the confidence interval for the 5-year ACS estimate for 2009-2013 would be subject to truncation at zero. Figure 4(e) shows the truncated normal density with mean and standard deviation equal, respectively, to the ACS estimate and its standard error. To facilitate direct comparison to the ACS estimates, Figure 4(e) also presents the posterior density of the 5-year average proportion of families living in poverty as provided by our model, as well as the posterior densities for the 1-year proportions for years 2009, 2010, 2011, 2012 and 2013.
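The truncated normal implied by the ACS margin of error can be written down directly and set against posterior draws from the model. A minimal sketch follows, assuming illustrative values for the ACS estimate and standard error (not the published figures for tract 26163563500) and a placeholder sample in place of the actual MCMC output.

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical ACS 5-year estimate and standard error for a single tract.
acs_est, acs_se = 0.05, 0.04

# Truncated normal on [0, 1] centered at the ACS estimate.
a, b = (0.0 - acs_est) / acs_se, (1.0 - acs_est) / acs_se
grid = np.linspace(0.0, 1.0, 501)
# Density evaluated on a grid, e.g. for plotting against the posterior density.
acs_density = truncnorm.pdf(grid, a, b, loc=acs_est, scale=acs_se)

# Posterior draws of the 5-year average proportion would come from the MCMC
# output; a placeholder Beta sample stands in for them here.
posterior_draws = np.random.default_rng(1).beta(8, 120, size=5000)

print("ACS-implied 95% interval:",
      truncnorm.interval(0.95, a, b, loc=acs_est, scale=acs_se))
print("posterior 95% interval:",
      np.quantile(posterior_draws, [0.025, 0.975]))
```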
As the figure shows, thanks to the borrowing of information from neighboring census tracts, the posterior density of the 5-year average proportion is characterized by smaller uncertainty than the truncated normal density centered at the ACS estimate.

#### 5.1.3 Disaggregated estimates of poverty for Detroit

Disaggregating the ACS estimates allows us to examine yearly trends in poverty for individual census tracts, as well as for combinations of census tracts that do not form a PUMA or are not part of a highly populated county. In both cases, temporal trends cannot be assessed using the ACS estimates alone. Figure 5 presents various maps of the disaggregated estimates of the proportion of families in poverty in Michigan from our model. Panel (a) displays census tract estimates for all of Michigan for the year 2010, while panel (b) presents the same results for Wayne County, which contains areas of extreme poverty as well as some of the wealthiest neighborhoods in Michigan. Panels (c)-(k) of Figure 5 present yearly results over time for a subset of census tracts in Wayne County located in Midtown Detroit. This set of census tracts was selected because the poverty rates exhibit spatial heterogeneity, with certain pairs of neighboring census tracts differing by over 20%, so a single poverty estimate for this area would not properly characterize the neighborhood conditions of its residents. Recent changes in these census tracts have been well documented (Moehlman and Robins-Somerville, 2016), particularly in the Cass Corridor, an area of downtown Detroit that has faced high crime and poverty, but has recently experienced sudden gentrification (Aguilar, 2015). The 5-year ACS estimates at the census tract level may not properly characterize yearly changes in these census tracts.

(Figure 5 panels: (a) model-based estimates, Michigan 2010; (b) model-based estimates, Wayne County 2010; (c)-(k) Midtown Detroit, 2007-2015.)

Figure 5: Disaggregated estimates of the proportion of families in poverty in Midtown Detroit.

Figure 6(a) shows the changes over time of poverty rates for a set of census tracts in the Midtown area of Detroit. Consistent with national trends, Figure 6(b) indicates that, on average, the area experienced an increase in poverty following the 2008 financial crisis in the US, with an eventual improvement in later years. Figure 5, panels (c)-(k), highlights a census tract of particular interest, indicated in green in Figure 6(a), which did not experience a decrease in poverty until 2014. Year to year, it has had among the highest poverty rates in Detroit. However, recent developments such as the groundbreaking of Little Caesars Arena in 2014 and an influx of newly built restaurants and bars might have contributed to the drop in poverty in that census tract from 2014 to 2015 (Moehlman and Robins-Somerville, 2016).

Figure 6: (a) Spaghetti plot displaying the estimated proportions of families in poverty over time, with the census tract shown in Figure 5 highlighted, and (b) the estimated average poverty rate across census tracts in Midtown Detroit between 2006 and 2016.

### 5.2 Out-of-Sample Prediction

To assess our model's out-of-sample predictive performance, we consider the county-level proportion of families in poverty during the 3-year time periods 2010-2012 and 2011-2013.
We generate 3-year county-level predictions by first computing, for each year, a weighted average of the disaggregated estimates for the census tracts within each county, with weights proportional to the number of families living in each tract; those yearly county estimates are then averaged over the corresponding 3-year time periods. Our "true values" are the 3-year ACS estimates, which we did not use for model fitting. This allows us to assess our model's ability to predict over time periods and areal units that are not utilized in model fitting. Figure 7(a) compares the estimated 3-year proportions yielded by our model with the ACS estimates. As the figure shows, the two sets of estimates tend to be very similar, indicating the strong predictive performance of our model. The mean squared and mean absolute prediction errors are, respectively, 4.16$\times 10^{-5}$ and 4.83$\times 10^{-3}$, whereas the mean squared relative prediction error is 3.13$\times 10^{-4}$ and the mean absolute relative prediction error is 4.18$\times 10^{-2}$. These values indicate a small predictive error for our modeling framework.

### 5.3 Comparison to other models

We also generate out-of-sample predictions of the proportions of families in poverty at the 3-year county resolution for the three competing models presented in Section 3.6. Details on how these predictions are derived are provided in Section 2 of the Supplementary Material (Benedetti, Berrocal and Little, 2021). We evaluate the quality of these out-of-sample predictions by validating them against the ACS estimates: Figure 7(b), (c) and (d) show scatter plots of the predicted proportion of families in poverty as yielded by each of the three competing models against the ACS estimates. The standard Binomial model and the BWH Poisson model produce estimates that are fairly in line with the ACS values, while there is a larger discrepancy between the estimates yielded by the BWH Gaussian Delta Method model and the ACS 3-year estimates. Numerically, we compare the predictive performance of our proposed model to that of the other three models in terms of average predictive bias, mean squared predictive error, mean absolute predictive error, and coverage of the 50% and 95% prediction intervals. These statistics are reported in Table 6, with the first three summary statistics all functions of the difference between the predicted proportions and the ACS 3-year estimates. As the table shows, our model performs almost equivalently to the standard Binomial model in terms of predictive accuracy, while yielding coverage slightly closer to the nominal level for both the 50% and the 95% prediction intervals. However, the difference is minimal: we attribute this to the large sample size and careful sampling design of the ACS, which limit the impact of the design effect on the model's performance. Moving on to the BWH Poisson space-time model, even though this model produces accurate predictions, our model is slightly more accurate. The main differences between the two models are with respect to the posterior predictive standard deviations: the BWH Poisson model has much smaller posterior predictive standard deviations and thus much lower coverage probabilities than our model. Finally, the BWH Gaussian Delta Method model offers a poorer predictive performance than our model with respect to all metrics.
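A minimal sketch of the county-level aggregation and prediction-error summaries described in Section 5.2 follows, using hypothetical arrays in place of the model output and of the ACS 3-year estimates; the relative errors are taken with respect to the ACS value, which is an assumption about the exact definition used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior-mean tract estimates for one county:
# rows = the three years of a 3-year period, columns = census tracts.
tract_estimates = rng.uniform(0.05, 0.30, size=(3, 6))
families_per_tract = np.array([820, 1450, 960, 2100, 640, 1180])

# Step 1: weighted average across tracts, one value per year.
weights = families_per_tract / families_per_tract.sum()
yearly_county = tract_estimates @ weights

# Step 2: average the yearly county estimates over the 3-year period.
county_prediction = yearly_county.mean()

# Prediction errors against the (hypothetical) ACS 3-year estimate.
acs_3yr = 0.17
sq_err = (county_prediction - acs_3yr) ** 2    # contributes to MSPE when averaged
abs_err = abs(county_prediction - acs_3yr)     # contributes to MAPE when averaged
rel_sq_err = sq_err / acs_3yr ** 2             # contributes to MSRE when averaged
rel_abs_err = abs_err / acs_3yr                # contributes to MARE when averaged
print(county_prediction, sq_err, abs_err, rel_sq_err, rel_abs_err)
```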
We acknowledge that the models that we have attributed to Bradley, Wikle, and Holan are not necessarily the approaches that the authors would have taken to model the ACS spatio-temporal estimates of proportions. Rather, they constitute our best effort to adapt the methods presented in Bradley, Wikle and Holan (2015) and Bradley, Wikle and Holan (2016) to model our data. While we had to modify both models to accommodate estimates of proportions, we took care to do so in a way that would not needlessly favor our model.

Figure 7: Comparison to other models. Predicted proportion of families in poverty vs. ACS 3-year estimates of the proportion of families in poverty in Michigan counties for the periods 2010-2012 and 2011-2013 as yielded by: (a) our proposed model, (b) the standard Binomial model, (c) the BWH Poisson space-time model and (d) the BWH Gaussian Delta Method model.

Table 6: Comparison to other models. Bias, mean squared predictive error (MSPE), mean absolute predictive error (MAPE) of the out-of-sample predictions, as well as empirical coverage of the 50% and the 95% prediction intervals (PI) for our proposed model and the three competing models.

| Model | Bias $\times 10^{2}$ | MSPE $\times 10^{5}$ | MAPE $\times 10^{3}$ | Coverage 50% PI | Coverage 95% PI |
|---|---|---|---|---|---|
| Our proposed model | 0.03 | 4.16 | 4.83 | 52.3% | 93.0% |
| Standard Binomial | $-$0.03 | 3.69 | 4.39 | 46.9% | 91.4% |
| BWH Poisson space-time | $-$0.70 | 13.91 | 9.52 | 9.4% | 23.4% |
| BWH Gaussian delta method | $-$1.70 | 52.49 | 18.11 | 26.6% | 52.3% |

## 6 Discussion

This paper proposes a spatio-temporal Bayesian hierarchical model to disaggregate estimates of proportions over areal units derived from sampling surveys while accounting for the survey design. Prior to our work, Bradley, Wikle and Holan (2016) formulated a stochastic model for ACS estimates distributed according to a Poisson distribution. The model explicitly accounted for the survey design, as it specified a lognormal distribution for the ACS design-based variance; however, it focused only on addressing the change of support problem in a spatial setting for count variables. Other work by Bradley, Wikle and Holan (2015) considered the space-time setting, but it did not incorporate design effects, as it postulated a Gaussian likelihood with the ACS design-based variance taken as known and set equal to the variance of the normal distribution. The main motivation for the development of our modeling framework is the ability to generate data on socio-economic indicators at fine spatial and temporal resolution, thus responding to the needs of health researchers investigating the effect of social determinants of health on health outcomes. We have demonstrated the utility of our Bayesian hierarchical modeling framework by applying it to the ACS estimates of the proportion of families in poverty. This application highlighted several advantages of our model, among them the fact that it generates annual estimates at the census tract spatial resolution. In addition, due to the borrowing of information from neighboring units and from ACS estimates at different spatial and temporal resolutions, these estimates are characterized by smaller uncertainty. We use our disaggregated estimates to examine trends over time of poverty in Michigan, focusing on Detroit, for which we could highlight yearly changes at a small spatial scale.
These changes could not be detected easily using the 5-year ACS census tract estimates. We recognize that a standard Binomial disaggregation model that does not explicitly account for the design effect, applied to the same data, yields very comparable, or slightly better, predictive performance than our model when evaluated in terms of mean squared predictive error, mean absolute predictive error, and empirical coverage of the 50% and 95% pointwise prediction intervals. However, our simulation study 2 also indicated that, while such performance by the standard Binomial model is expected for a small design effect, its predictive performance worsens as the design effect increases. In these situations, our model is preferable. While it is true that the ACS design effect for the estimates of the proportion of families in poverty in Michigan during the period considered – 2006-2016 – is estimated to be around 2.5 on average, our model has not been developed only for handling estimates resulting from the ACS. Rather, our model has wider applicability: it has been formulated to disaggregate spatially and temporally any set of survey-based multi-year estimates of proportions. Other surveys typically used in epidemiological studies are characterized by larger design effects than the ACS. For example, the national Behavioral Risk Factor Surveillance System (BRFSS) in 2013 had an average design effect of 4.45, with state BRFSS surveys having design effects ranging from 1.47 to 5.16 (Iachan et al., 2016). For estimates provided by these surveys, our model is expected to yield better results than the standard Binomial model. Our model is not the only one using the concept of design effect to yield small-scale spatio-temporal estimates: Li et al. (2019) applied the model of Mercer et al. (2014) to smooth the spatial distribution of 1-year estimates of the under-5 mortality rate over 35 countries in Africa, producing subnational estimates at the 1-year resolution. Although Li et al. (2019) are also concerned with generating small-scale spatio-temporal estimates, their work did not address the spatio-temporal change of support problem in the same way we do here. Specifically, Li et al. (2019) introduce a spatio-temporal process that is discrete both in space and time, whereas our model employs a point-referenced spatio-temporal process, discrete in time but continuous in space. Additionally, our model accounts for the clustering units of the survey (e.g. counties in our application) by including random effects specified at the county-level spatial resolution. To deal with the large dimensionality of the data, we approximate the latent spatio-temporal process driving the true population proportions via a basis function expansion. Multiple choices are available to alleviate the computational burden associated with fitting a spatial statistical model to large spatial datasets, as reviewed by Heaton et al. (2019). Here, acknowledging the nested and multi-resolution geography of the ACS data, also noted by Savitsky (2016), we choose the Multi-Resolution Approximation (MRA) of Katzfuss (2017), extending it to the space-time setting, an additional contribution of our paper. However, other basis functions could be employed, for example wavelets, radial basis functions, or Moran's basis functions as in Bradley, Wikle and Holan (2016).
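The covariance structure that the space-time MRA is meant to approximate (a separable covariance, Matérn in space and AR(1) in time, as described in Section 1 of the Supplementary Material) can be written down directly. The following is a minimal sketch, with a Matérn smoothness of 3/2 and hypothetical range and autocorrelation parameters chosen purely for illustration; it is not the MRA itself.

```python
import numpy as np

def matern32(dist, sigma2=1.0, rho=0.5):
    """Matern covariance with smoothness 3/2 and range parameter rho."""
    u = np.sqrt(3.0) * dist / rho
    return sigma2 * (1.0 + u) * np.exp(-u)

def ar1_corr(lags, phi=0.8):
    """AR(1) correlation at integer time lags."""
    return phi ** np.abs(lags)

# Hypothetical point-referenced locations (continuous space) and yearly times.
rng = np.random.default_rng(0)
locs = rng.uniform(0.0, 1.0, size=(20, 2))
times = np.arange(10)

# Spatial and temporal pieces of the separable covariance.
dists = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
C_space = matern32(dists)                            # 20 x 20
C_time = ar1_corr(times[:, None] - times[None, :])   # 10 x 10

# Separable space-time covariance: Kronecker product (time-by-space ordering).
C = np.kron(C_time, C_space)                         # 200 x 200
print(C.shape, np.allclose(C, C.T))
```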
In our model, partly for computational considerations, we use a probit link to relate the true areal-level proportions to the underlying Gaussian spatio-temporal process, and we employ the data augmentation algorithm of Albert and Chib (1993) for posterior computation. We believe that it is possible to devise an MCMC algorithm based on the skew-normal posterior results for probit regressions derived by Durante (2019). Additionally, we remark that one could replace the probit link with a logit link. In this case, we encourage readers to employ a Pólya-Gamma augmentation scheme (Polson, Scott and Windle, 2013) for greater computational efficiency. Much of our predictive performance evaluation is based on the empirical probability that credible and/or prediction intervals cover the true value, which inherently conflates a frequentist property (empirical coverage probability) with Bayesian modeling frameworks. This type of assessment is in line with the notion of calibrated Bayes (Little, 2006) and is recommended in a predictive context (Dawid, 1982); moreover, it is the authors' experience that coverage probabilities are frequently used in assessing Bayesian models, particularly in a spatial context where prediction is the main goal (see Entezari, Brown and Rosenthal (2019); Gilani, Berrocal and Batterman (2019); Berrocal, Gelfand and Holland (2010) as examples). We note a potential abuse of terminology in calling "out-of-sample validation" the comparison of the 3-year county-level proportions yielded by our model with the corresponding ACS estimates. Even though the 3-year county-level ACS estimates were not used in fitting the model, the microdata that are leveraged to derive such ACS estimates are also employed to calculate the 1-year and 5-year estimates of proportions to which our model was fit. In adopting this terminology, we follow previous examples in the literature on this topic; see Bradley, Wikle and Holan (2015), where this type of assessment was performed and this nomenclature was used. Finally, a characteristic of our model is the assumption of conditional independence between the 1-year ACS PUMA-level estimates and the 5-year census tract estimates, conditional on the true areal proportions. Since both sets of estimates are derived using the same microdata, it is possible that the assumption of conditional independence is not realistic. Not having access to the actual microdata, we have no means to determine whether this assumption is violated. Future work could be devoted to relaxing the assumption of conditional independence.

## Supplementary Information

In the Supplementary Material, we derive statistical properties for the ST-MRA method, provide details on how predictions were derived, and present results of the exploratory data analysis described in Section 2.2. Specifically: Section 1 shows that the ST-MRA expression presented in Section 3.3 provides an approximation to a Gaussian spatio-temporal process with a separable covariance function, with an AR(1) structure in time and a dependence structure in space encoded by a Matérn covariance function. Section 2 discusses how to derive out-of-sample predictions under the alternative models discussed in Section 3.6, while Section 3 shows results of the exploratory data analysis that supports our modeling choices. Finally, Section 4 concludes the Supplementary Material by presenting results for the city of Flint.

## References

* Abrams and Szefler (2020) Abrams, E. M. and Szefler, S. J. (2020).
COVID-19 and the impact of social determinants of health. The Lancet Respiratory Medicine 8 659–661.
* Aguilar (2015) Aguilar, L. (2015). Detroit's Cass Corridor makes way for new era. The Detroit News, published April 2015.
* Albert and Chib (1993) Albert, J. H. and Chib, S. (1993). Bayesian analysis of binary and polychotomous data. Journal of the American Statistical Association 88 669–679.
* Banerjee, Carlin and Gelfand (2004) Banerjee, S., Carlin, B. P. and Gelfand, A. E. (2004). Hierarchical Modeling and Analysis for Spatial Data. Chapman & Hall/CRC, Boca Raton, FL.
* Benedetti, Berrocal and Little (2021) Benedetti, M. H., Berrocal, V. J. and Little, R. (2021). Supplement to "Accounting for survey design in Bayesian disaggregation of survey-based areal estimates of proportions: an application to the American Community Survey".
* Berrocal, Gelfand and Holland (2010) Berrocal, V. J., Gelfand, A. E. and Holland (2010). A bivariate space-time downscaler under space and time misalignment. The Annals of Applied Statistics 4 1942–1975.
* Bradley, Holan and Wikle (2016) Bradley, J. R., Holan, S. H. and Wikle, C. K. (2016). Multivariate spatio-temporal survey fusion with application to the American Community Survey and local area unemployment statistics. Stat 5 224–233.
* Bradley, Wikle and Holan (2015) Bradley, J. R., Wikle, C. K. and Holan, S. H. (2015). Spatio-temporal change of support with application to American Community Survey multi-year period estimates. Stat 4 255–270.
* Bradley, Wikle and Holan (2016) Bradley, J. R., Wikle, C. K. and Holan, S. H. (2016). Bayesian spatial change of support for count-valued survey data with application to the American Community Survey. Journal of the American Statistical Association 111 472–487.
* Braverman, Egerter and Williams (2011) Braverman, P., Egerter, S. and Williams, D. R. (2011). The social determinants of health: coming of age. Annual Review of Public Health 32 381–398.
* U.S. Census Bureau (2008) U.S. Census Bureau (2008). A Compass for Understanding and Using American Community Survey Data: What General Data Users Need to Know. U.S. Government Printing Office, Washington, DC.
* U.S. Census Bureau (2014) U.S. Census Bureau (2014). American Community Survey Design and Methodology. U.S. Government Printing Office, Washington, DC.
* Chen, Wakefield and Lumley (2014) Chen, C., Wakefield, J. and Lumley, T. (2014). The use of sampling weights in Bayesian hierarchical models for small area estimation. Spatial and Spatio-temporal Epidemiology 11 33–43.
* Dawid (1982) Dawid, A. P. (1982). The well-calibrated Bayesian. Journal of the American Statistical Association 77 605–610.
* Diggle, Tawn and Moyeed (1998) Diggle, P. J., Tawn, J. A. and Moyeed, R. A. (1998). Model-based geostatistics. Journal of the Royal Statistical Society: Series C (Applied Statistics) 47 299–350.
* Durante (2019) Durante, D. (2019). Conjugate Bayes for probit regression via unified skew-normal distributions. Biometrika 106 765–779.
* Entezari, Brown and Rosenthal (2019) Entezari, R., Brown, P. E. and Rosenthal, J. S. (2019). Bayesian spatial analysis of hardwood tree counts via MCMC. Environmetrics 31 e2608.
* Fay and Herriot (1979) Fay, R. and Herriot, R. (1979).
Estimates of income for small places: an application of James-Stein procedure to census data. Journal of the American Statistical Association 74 269–277.
* Finley, Banerjee and Carlin (2007) Finley, A., Banerjee, S. and Carlin, B. P. (2007). spBayes: an R package for univariate and multivariate hierarchical point-referenced spatial models. Journal of Statistical Software 19 1–24.
* Gelfand, Banerjee and Gamerman (2005) Gelfand, A. E., Banerjee, S. and Gamerman, D. (2005). Spatial process modelling for univariate and multivariate dynamic spatial data. Environmetrics 16 465–479.
* Gelfand, Zhu and Carlin (2001) Gelfand, A. E., Zhu, L. and Carlin, B. P. (2001). On the change of support problem for spatio-temporal data. Biostatistics 2 31–45.
* Geweke (1992) Geweke, J. (1992). Evaluating the accuracy of sampling-based approaches to calculate posterior moments. In Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 169–193. Clarendon Press.
* Ghitza and Gelman (2013) Ghitza, Y. and Gelman, A. (2013). Deep interaction with MRP: election turnout and voting patterns among small electoral subgroups. American Journal of Political Science 57 762–776.
* Gilani, Berrocal and Batterman (2019) Gilani, O., Berrocal, V. J. and Batterman, S. (2019). Nonstationary spatiotemporal Bayesian data fusion for pollutants in the near-road environment. Environmetrics 30 1–19.
* Gotway and Young (2002) Gotway, C. A. and Young, L. J. (2002). Combining incompatible spatial data. Journal of the American Statistical Association 97 632–648.
* Heaton et al. (2019) Heaton, M. J., Datta, A., Finley, A. O., Furrer, R., Guinness, J., Guhaniyogi, R., Gerber, F., Gramacy, R. B., Hammerling, D., Katzfuss, M., Lindgren, F., Nychka, D. W., Sun, F. and Zammit-Mangion, A. (2019). A case study competition among methods for analyzing large spatial data. Journal of Agricultural, Biological and Environmental Statistics 24 398–425.
* Iachan et al. (2016) Iachan, R., Pierannunzi, C., Healey, K., Greenlund, K. and Town, M. (2016). National weighting of data from the Behavioral Risk Factor Surveillance System (BRFSS). BMC Medical Research Methodology 16 1–12.
* Katzfuss (2017) Katzfuss, M. (2017). A Multi-Resolution Approximation for massive spatial datasets. Journal of the American Statistical Association 112 201–214.
* Kish (1965) Kish, L. (1965). Survey Sampling. John Wiley & Sons, New York.
* Kish (1995) Kish, L. (1995). Methods for Design Effects. Journal of Official Statistics 11 55–77.
* Korn and Graubard (1998) Korn, E. L. and Graubard, B. I. (1998). Confidence intervals for proportions with small expected number of positive counts estimated from survey data. Survey Methodology 24 193–201.
* Li et al. (2019) Li, Z., Hsiao, Y., Godwin, J., Martin, B. D., Wakefield, J. and Clark, S. J. (2019). Changes in the spatial distribution of the under-five mortality rate: Small-area analysis of 122 DHS surveys in 262 subregions of 35 countries in Africa. PLOS ONE 14 1–17.
* Little (2006) Little, R. J. (2006). Calibrated Bayes: A Bayes/frequentist roadmap. The American Statistician 60 213–223.
* Marmot et al.
(2012) Marmot, M., Allen, J., Bell, R., Bloomer, E. and Goldblatt, P., on behalf of the Consortium for the European Review of Social Determinants of Health and the Health Divide (2012). WHO European review of social determinants of health and the health divide. Lancet 380 1011–1029.
* Mercer et al. (2014) Mercer, L., Wakefield, J., Chen, C. and Lumley, T. (2014). A comparison of spatial smoothing methods for small area estimation with sampling weights. Spatial Statistics 8 69–85.
* Mitchell, Genton and Gumpertz (2005) Mitchell, M. W., Genton, M. G. and Gumpertz, M. L. (2005). Testing for separability of space-time covariances. Environmetrics 16 819–831.
* Moehlman and Robins-Somerville (2016) Moehlman, L. and Robins-Somerville, M. (2016). The new Detroit: How gentrification has changed Detroit's economic landscape. Michigan Daily, published September 2016.
* Pereira and Coelho (2010) Pereira, L. N. and Coelho, P. (2010). Small area estimation of habitation transaction using time-series and cross-sectional areal-level models. Journal of Applied Statistics 37 651–666.
* Pfeffermann (2013) Pfeffermann, D. (2013). New important developments in small area estimation. Statistical Science 28 40–68.
* Polson, Scott and Windle (2013) Polson, N. G., Scott, J. G. and Windle, J. (2013). Bayesian inference for logistic models using Pólya-Gamma latent variables. Journal of the American Statistical Association 108 1339–1349.
* Porter et al. (2014) Porter, A. T., Holan, S. H., Wikle, C. K. and Cressie, N. (2014). Spatial Fay-Herriot models for small area estimation with functional covariates. Statistical Methods and Applications 10 27–42.
* Pratesi and Salvati (2008) Pratesi, M. and Salvati, N. (2008). Small area estimation: the EBLUP estimator based on spatially correlated random area effects. Statistical Methods and Applications 17 113–141.
* Roberts, Gelman and Gilks (1997) Roberts, G. O., Gelman, A. and Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability 7 110–120.
* Rollston and Galea (2020) Rollston, R. and Galea, S. (2020). COVID-19 and the social determinants of health. American Journal of Health Promotion 34 687–689.
* Savitsky (2016) Savitsky, T. D. (2016). Bayesian nonparametric multiresolution estimation for the American Community Survey. Annals of Applied Statistics 10 2157–2181.
* Simpson et al. (2017) Simpson, D., Rue, H., Riebler, A., Martins, T. G. and Sørbye, S. H. (2017). Penalising model component complexity: a principled, practical approach to constructing priors. Statistical Science 32 1–28.
* Simpson et al. (2019) Simpson, M., Holan, S. H., Wikle, C. K. and Bradley, J. R. (2019). Interpolating distributions for populations in nested geographies using public-use data with application to the American Community Survey. Preprint available at: arXiv:1802.02626.
* Singh, Shukla and Kundu (2005) Singh, B., Shukla, G. and Kundu, D. (2005). Spatio-temporal models in small-area estimation. Survey Methodology 31 183–195.
* Singu et al. (2020) Singu, S., Acharya, A., Challagundla, K. and Byareddy, S. B. (2020).
Impact of social determinants of health on the emerging COVID-19 pandemic in the United States. Frontiers in Public Health 8 406.
* Sørbye and Rue (2011) Sørbye, S. H. and Rue, H. (2011). Simultaneous Credible Bands for Latent Gaussian Models. Scandinavian Journal of Statistics 38 712–725.
$\gamma\mapsto\vec{\Omega}(\gamma)\cdot\overline{\ell}$ is identically zero, contradicting Lemma 4.2-1, since $\overline{\ell}\neq 0$. Case (b). $(j_{m})_{m\in{\mathbb{N}}}$ is bounded and $|j_{m}^{\prime}|\to\infty$ (or vice versa): this case is excluded by the momentum condition $\vec{\jmath}\cdot\ell_{m}+j_{m}-j_{m}^{\prime}=0$ in (4.22) and since $(\ell_{m})$ is bounded. Case (c). Both $(j_{m})_{m\in{\mathbb{N}}}$, $(j_{m}^{\prime})_{m\in{\mathbb{N}}}$ are bounded: for $m$ large enough we have $j_{m}=\overline{\jmath}$ and $j_{m}^{\prime}=\overline{\jmath}^{\prime}$, with $\overline{\jmath},\overline{\jmath}^{\prime}\in{\mathbb{S}}_{0}^{c}$ and, since $j_{m}\neq j_{m}^{\prime}$, $\overline{\jmath}\neq\overline{\jmath}^{\prime}\,.$ (4.25) Therefore (4.22) becomes, in the limit $m\rightarrow\infty$, $\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)-\Omega_{\overline{\jmath}^{\prime}}(\gamma)\big{)}_{|\gamma=\overline{\gamma}}=0\,,\ \forall\,n\in{\mathbb{N}}_{0}\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}-\overline{\jmath}^{\prime}=0\,.$ By analyticity, we obtain that $\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)-\Omega_{\overline{\jmath}^{\prime}}(\gamma)=0\quad\forall\,\gamma\in\Gamma\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}-\overline{\jmath}^{\prime}=0\,.$ (4.26) We distinguish several cases: * • Let $\overline{\jmath},\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and $|\overline{\jmath}|\neq|\overline{\jmath}^{\prime}|$. By (4.26) the vector $(\vec{\Omega}(\gamma),\Omega_{\overline{\jmath}}(\gamma),\Omega_{\overline{\jmath}^{\prime}}(\gamma))$ is degenerate with $c:=(\overline{\ell},1,-1)\neq 0$, contradicting Lemma 4.2-4. * • Let $\overline{\jmath},\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and $\overline{\jmath}^{\prime}=-\overline{\jmath}$. In view of (4.1), the first equation in (4.26) becomes $\vec{\omega}(\gamma)\cdot\overline{\ell}+\frac{\gamma}{2}\Big{(}\sum_{a=1}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}+2\frac{G_{\overline{\jmath}}(0)}{\overline{\jmath}}\Big{)}=0\quad\forall\gamma\in\Gamma\,.$ By Lemma 4.3-1 the vector $(\vec{\omega}(\gamma),\gamma)$ is non-degenerate, thus $\overline{\ell}=0$ and $2\frac{G_{\overline{\jmath}}(0)}{\overline{\jmath}}=0$, which is a contradiction. * • Let $\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and $\overline{\jmath}\in-{\mathbb{S}}$. With no loss of generality suppose $\overline{\jmath}=-\overline{\jmath}_{1}$. In view of (4.1), the first equation in (4.26) implies that, for any $\gamma\in\Gamma$, $(\overline{\ell}_{1}+1)\omega_{\overline{\jmath}_{1}}(\gamma)+\sum_{a=2}^{\nu}\overline{\ell}_{a}\omega_{\overline{\jmath}_{a}}(\gamma)-\omega_{\overline{\jmath}^{\prime}}(\gamma)+\frac{\gamma}{2}\Big{(}(\overline{\ell}_{1}-1)\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}+\sum_{a=2}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}-\frac{G_{\overline{\jmath}^{\prime}}(0)}{\overline{\jmath}^{\prime}}\Big{)}=0\,.$ By Lemma 4.3-2 the vector $\big{(}\vec{\omega}(\gamma),\omega_{\overline{\jmath}^{\prime}}(\gamma),\gamma\big{)}$ is non-degenerate, which is a contradiction. * • Last, let $\overline{\jmath},\overline{\jmath}^{\prime}\in-{\mathbb{S}}$ and $\overline{\jmath}\neq\overline{\jmath}^{\prime}$, by (4.25). With no loss of generality suppose $\overline{\jmath}=-\overline{\jmath}_{1}$ and $\overline{\jmath}^{\prime}=-\overline{\jmath}_{2}$.
Then the first equation in (4.26) reads, for any $\gamma\in\Gamma$, $\displaystyle(\overline{\ell}_{1}+1)\omega_{\overline{\jmath}_{1}}(\gamma)+\left(\overline{\ell}_{2}-1\right)\omega_{\overline{\jmath}_{2}}(\gamma)+\sum_{a=3}^{\nu}\overline{\ell}_{a}\omega_{\overline{\jmath}_{a}}(\gamma)$ $\displaystyle\ \ \ \ +\frac{\gamma}{2}\Big{(}(\overline{\ell}_{1}-1)\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}+(\overline{\ell}_{2}+1)\frac{G_{\overline{\jmath}_{2}}(0)}{\overline{\jmath}_{2}}+\sum_{a=3}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}\Big{)}=0\,.$ Since the vector $(\vec{\omega}(\gamma),\gamma)$ is non-degenerate by Lemma 4.3-1, this implies $\overline{\ell}_{1}=-1$, $\overline{\ell}_{2}=1$, $\overline{\ell}_{3}=\ldots=\overline{\ell}_{\nu}=0$. Inserting these values in the momentum condition in (4.26) we obtain $-2\overline{\jmath}_{1}+2\overline{\jmath}_{2}=0$. This contradicts $\overline{\jmath}\neq\overline{\jmath}^{\prime}$. Step 2. We finally consider the case when $(\ell_{m})_{m\in{\mathbb{N}}}$ is unbounded. Up to subsequences $|\ell_{m}|\rightarrow\infty$ as $m\rightarrow\infty$ and $\lim_{m\to\infty}\ell_{m}/\braket{\ell_{m}}=:\overline{c}\neq 0$. By (4.1), Lemma 4.4, (4.23), we have, for any $n\geq 1$, $\displaystyle\partial_{\gamma}^{n}\frac{1}{\braket{\ell_{m}}}\Big{(}\Omega_{j_{m}}(\gamma)-\Omega_{j_{m}^{\prime}}(\gamma)\Big{)}_{|\gamma=\gamma_{m}}$ $\displaystyle=\partial_{\gamma}^{n}\Big{(}\frac{1}{\braket{\ell_{m}}\sqrt{g}}\Big{(}\frac{c_{j_{m}}(\gamma)}{|j_{m}|^{\frac{1}{2}}}-\frac{c_{j_{m}^{\prime}}(\gamma)}{|j_{m}^{\prime}|^{\frac{1}{2}}}\Big{)}$ $\displaystyle\qquad+\frac{\gamma}{2\braket{\ell_{m}}}\Big{(}\frac{G_{j_{m}}(0)}{j_{m}}-\frac{G_{j_{m}^{\prime}}(0)}{j_{m}^{\prime}}\Big{)}_{|\gamma=\gamma_{m}}\Big{)}\to 0$ as $m\to\infty$. Therefore, for any $n\geq 1$, taking $m\rightarrow\infty$ in (4.22) we get $\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\overline{c}\big{)}_{|\gamma=\overline{\gamma}}=0$. By analyticity this implies $\vec{\Omega}(\gamma)\cdot\overline{c}=\overline{d}$, for all $\gamma\in\Gamma$, contradicting Lemma 4.2-2, since $\overline{c}\neq 0$. Proof of (4.18). It follows as for (4.17), and we omit it. ∎ ###### Remark 4.6. For the irrotational gravity water waves equations (1.3) with $\gamma=0$, quasi-periodic traveling wave solutions exist for most values of the _depth_ ${\mathtt{h}}\in[{\mathtt{h}}_{1},{\mathtt{h}}_{2}]$. In detail, the non-degeneracy of the linear frequencies with respect to the parameter ${\mathtt{h}}$ as in Lemma 4.2 is proved precisely in Lemma 3.2 in [2], whereas the transversality properties hold by restricting the bounds in Lemma 3.4 in [2] to the Fourier sites satisfying the momentum conditions. We are not able to use ${\mathtt{h}}$ as a parameter for any value of $\gamma\neq 0$ (in this case we do not know if the non-degeneracy properties of Lemma 4.2 hold with respect to ${\mathtt{h}}$).
## 5 Proof of Theorem 1.2 Under the rescaling $(\eta,\zeta)\mapsto(\varepsilon\eta,\varepsilon\zeta)$, the Hamiltonian system (2.10) transforms into the Hamiltonian system generated by ${\mathcal{H}}_{\varepsilon}(\eta,\zeta):=\varepsilon^{-2}{\mathcal{H}}(\varepsilon\eta,\varepsilon\zeta)={\mathcal{H}}_{L}(\eta,\zeta)+\varepsilon P_{\varepsilon}(\eta,\zeta)\,,$ (5.1) where ${\mathcal{H}}$ is the water waves Hamiltonian (2.9) expressed in the Wahlén coordinates (2.7), ${\mathcal{H}}_{L}$ is defined in (2.15) and, denoting ${\mathcal{H}}_{\geq 3}:={\mathcal{H}}-{\mathcal{H}}_{L}$ the cubic part of the Hamiltonian, $P_{\varepsilon}(\eta,\zeta):=\varepsilon^{-3}{\mathcal{H}}_{\geq 3}(\varepsilon\eta,\varepsilon\zeta)\,.$ We now study the Hamiltonian system generated by the Hamiltonian ${\mathcal{H}}_{\varepsilon}(\eta,\zeta)$, in the action-angle and normal coordinates $(\theta,I,w)$ defined in Section 2.2. Thus we consider the Hamiltonian $H_{\varepsilon}(\theta,I,w)$ defined by $H_{\varepsilon}:={\mathcal{H}}_{\varepsilon}\circ A=\varepsilon^{-2}{\mathcal{H}}\circ\varepsilon A$ (5.2) where $A$ is the map defined in (2.2). The associated symplectic form is given in (2.43). By (2.47) (see also (2.30), (2.38)), in the variables $(\theta,I,w)$ the quadratic Hamiltonian ${\mathcal{H}}_{L}$ defined in (2.15) simply reads, up to a constant, ${\mathcal{N}}:={\mathcal{H}}_{L}\circ A=\vec{\Omega}(\gamma)\cdot I+\tfrac{1}{2}\left({\bf{\Omega}}_{W}w,w\right)_{L^{2}}$ where $\vec{\Omega}(\gamma)\in{\mathbb{R}}^{\nu}$ is defined in (1.20) and ${\bf{\Omega}}_{W}$ in (2.14). Thus the Hamiltonian $H_{\varepsilon}$ in (5.2) is $H_{\varepsilon}={\mathcal{N}}+\varepsilon P\qquad{\rm with}\qquad P:=P_{\varepsilon}\circ A\,.$ (5.3) We look for an embedded invariant torus $i:{\mathbb{T}}^{\nu}\rightarrow{\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,\quad{\varphi}\mapsto i({\varphi}):=(\theta({\varphi}),I({\varphi}),w({\varphi}))\,,$ of the Hamiltonian vector field $X_{H_{\varepsilon}}:=(\partial_{I}H_{\varepsilon},-\partial_{\theta}H_{\varepsilon},\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J\nabla_{w}H_{\varepsilon})$ filled by quasi-periodic solutions with frequency vector $\omega\in{\mathbb{R}}^{\nu}$. ### 5.1 Nash-Moser theorem of hypothetical conjugation Instead of looking directly for quasi-periodic solutions of $X_{H_{\varepsilon}}$ we look for quasi-periodic solutions of the family of modified Hamiltonians, where $\alpha\in{\mathbb{R}}^{\nu}$ are additional parameters, $H_{\alpha}:={\mathcal{N}}_{\alpha}+\varepsilon P\,,\quad{\mathcal{N}}_{\alpha}:=\alpha\cdot I+\tfrac{1}{2}\left(w,{\bf{\Omega}}_{W}w\right)_{L^{2}}\,.$ (5.4) We consider the nonlinear operator $\displaystyle{\mathcal{F}}(i,\alpha)$ $\displaystyle:={\mathcal{F}}(\omega,\gamma,\varepsilon;i,\alpha):=\omega\cdot\partial_{\varphi}i({\varphi})-X_{H_{\alpha}}(i({\varphi}))$ $\displaystyle=\begin{pmatrix}\omega\cdot\partial_{\varphi}\theta({\varphi})&-\alpha-\varepsilon\partial_{I}P(i({\varphi}))\\\ \omega\cdot\partial_{\varphi}I({\varphi})&+\varepsilon\partial_{\theta}P(i({\varphi}))\\\ \omega\cdot\partial_{\varphi}w({\varphi})&-\,\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J({\bf{\Omega}}_{W}w({\varphi})+\varepsilon\nabla_{w}P(i({\varphi})))\end{pmatrix}\,.$ (5.5) If ${\mathcal{F}}(i,\alpha)=0$, then the embedding ${\varphi}\mapsto i({\varphi})$ is an invariant torus for the Hamiltonian vector field $X_{H_{\alpha}}$, filled with quasi-periodic solutions with frequency $\omega$. 
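Spelled out, the last claim is a one-line computation: if ${\mathcal{F}}(i,\alpha)=0$, then for every ${\varphi}_{0}\in{\mathbb{T}}^{\nu}$ the curve $t\mapsto i(\omega t+{\varphi}_{0})$ solves the Hamiltonian system, since by the chain rule $\frac{d}{dt}\,i(\omega t+{\varphi}_{0})=\omega\cdot\partial_{\varphi}i(\omega t+{\varphi}_{0})=X_{H_{\alpha}}\big{(}i(\omega t+{\varphi}_{0})\big{)}\,,$ so the torus $i({\mathbb{T}}^{\nu})$ is invariant and is filled by the quasi-periodic solutions $t\mapsto i(\omega t+{\varphi}_{0})$ with frequency $\omega$.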
Each Hamiltonian $H_{\alpha}$ in (5.4) is invariant under the involution $\vec{\mathcal{S}}$ and the translations $\vec{\tau}_{\varsigma}$, $\varsigma\in{\mathbb{R}}$, defined respectively in (2.40) and in (2.41): $H_{\alpha}\circ\vec{\mathcal{S}}=H_{\alpha}\,,\qquad H_{\alpha}\circ\vec{\tau}_{\varsigma}=H_{\alpha}\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$ (5.6) We look for a reversible traveling torus embedding $i(\varphi)=$ $(\theta({\varphi}),I({\varphi}),w({\varphi}))$, namely satisfying $\vec{\mathcal{S}}i({\varphi})=i(-{\varphi})\,,\qquad\vec{\tau}_{\varsigma}i({\varphi})=i({\varphi}-\vec{\jmath}\varsigma)\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$ (5.7) Note that, by (5.1) and (5.6), the operator ${\mathcal{F}}(\cdot,\alpha)$ maps a reversible, respectively traveling, wave into an anti-reversible, respectively traveling, wave variation, according to Definition 3.26. The norm of the periodic components of the embedded torus ${\mathfrak{I}}({\varphi}):=i({\varphi})-({\varphi},0,0):=\left(\Theta({\varphi}),I({\varphi}),w({\varphi})\right)\,,\quad\Theta({\varphi}):=\theta({\varphi})-{\varphi}\,,$ (5.8) is $\left\|{\mathfrak{I}}\right\|_{s}^{k_{0},\upsilon}:=\left\|\Theta\right\|_{H_{\varphi}^{s}}^{k_{0},\upsilon}+\left\|I\right\|_{H_{\varphi}^{s}}^{k_{0},\upsilon}+\left\|w\right\|_{s}^{k_{0},\upsilon}$, where $k_{0}:=m_{0}+2$ (5.9) and $m_{0}\in{\mathbb{N}}$ is the index of non-degeneracy provided by Proposition 4.5, which only depends on the linear unperturbed frequencies. We will often omit to write the dependence of the various constants with respect to $k_{0}$, which is considered as an absolute constant. We look for quasi- periodic solutions of frequency $\omega$ belonging to a $\delta$-neighbourhood (independent of $\varepsilon$) ${\mathtt{\Omega}}:=\big{\\{}\omega\in{\mathbb{R}}^{\nu}\ :\ \operatorname{dist}\big{(}\omega,\vec{\Omega}[\gamma_{1},\gamma_{2}]\big{)}<\delta\big{\\}}\,,\quad\delta>0\,,$ of the curve $\vec{\Omega}[\gamma_{1},\gamma_{2}]$ defined by (1.20). The next theorem, whose proof is based on an implicit function iterative scheme of Nash-Moser type, provides, for $\varepsilon$ small enough, a solution $(i_{\infty},\alpha_{\infty})(\omega,\gamma;\varepsilon)$ of the nonlinear operator ${\cal F}(\varepsilon,\omega,\gamma;i,\alpha)=0$ for all the values of $(\omega,\gamma)$ in the Cantor like set ${\cal C}_{\infty}^{\upsilon}$ below. ###### Theorem 5.1. (Theorem of hypothetical conjugation) There exist positive constants ${\rm a_{0}},\varepsilon_{0},C$ depending on ${\mathbb{S}}$, $k_{0}$ and $\tau\geq 1$ such that, for all $\upsilon=\varepsilon^{\rm a}$, ${\rm a}\in(0,{\rm a}_{0})$ and for all $\varepsilon\in(0,\varepsilon_{0})$, there exist 1. 1. a $k_{0}$-times differentiable function of the form $\alpha_{\infty}:\,{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]\mapsto{\mathbb{R}}^{\nu}$, $\displaystyle\alpha_{\infty}(\omega,\gamma):=\omega+r_{\varepsilon}(\omega,\gamma)\quad\text{ with }\quad|r_{\varepsilon}|^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-1}\,;$ (5.10) 2. 2. a family of embedded reversible traveling tori $i_{\infty}({\varphi})$ (cfr. (5.7)), defined for all $(\omega,\gamma)\in{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]$, satisfying $\|i_{\infty}({\varphi})-({\varphi},0,0)\|_{s_{0}}^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-1}\,;$ (5.11) 3. 3. 
a sequence of $k_{0}$-times differentiable functions $\mu_{j}^{\infty}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\rightarrow{\mathbb{R}}$, $j\in{\mathbb{S}}_{0}^{c}={\mathbb{Z}}\,\setminus\,({\mathbb{S}}\cup\\{0\\})$, of the form $\mu_{j}^{\infty}(\omega,\gamma)={\mathtt{m}}_{1}^{\infty}(\omega,\gamma)j+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\omega,\gamma)\Omega_{j}(\gamma)-{\mathtt{m}}_{0}^{\infty}(\omega,\gamma){\rm sgn}(j)+{\mathfrak{r}}_{j}^{\infty}(\omega,\gamma)\,,$ (5.12) with $\Omega_{j}(\gamma)$ defined in (1.13), satisfying $|{\mathtt{m}}_{1}^{\infty}|^{k_{0},\upsilon}\leq C\varepsilon\,,\ |{\mathtt{m}}_{\frac{1}{2}}^{\infty}-1|^{k_{0},\upsilon}+|{\mathtt{m}}_{0}^{\infty}|^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-1}\,,\quad\sup_{j\in{\mathbb{S}}_{0}^{c}}|j|^{\frac{1}{2}}|{\mathfrak{r}}_{j}^{\infty}|^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-3}\,,$ (5.13) such that, for all $(\omega,\gamma)$ in the Cantor-like set $\displaystyle{\mathcal{C}}_{\infty}^{\upsilon}:=$ $\displaystyle\Big{\\{}(\omega,\gamma)\in{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]\ :\ |\omega\cdot\ell|\geq\ 8\upsilon\langle\ell\rangle^{-\tau}\,,\ \ \forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,,$ (5.14) $\displaystyle\ \left|\omega\cdot\ell-{\mathtt{m}}_{1}^{\infty}(\omega,\gamma)j\right|\geq 8\upsilon\braket{\ell}^{-\tau}\,,\ \forall\,\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\text{ with }\vec{\jmath}\cdot\ell+j=0;$ (5.15) $\displaystyle\ \left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)\right|\geq 4\upsilon\left|j\right|^{\frac{1}{2}}\braket{\ell}^{-\tau}\,,\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\text{ with }\vec{\jmath}\cdot\ell+j=0\,;$ (5.16) $\displaystyle\ \left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)-\mu_{j^{\prime}}^{\infty}(\omega,\gamma)\right|\geq 4\upsilon\,\braket{\ell}^{-\tau}\,,$ (5.17) $\displaystyle\ \quad\quad\forall\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c},\,(\ell,j,j^{\prime})\neq(0,j,j)\text{ with }\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,$ $\displaystyle\ \left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)+\mu_{j^{\prime}}^{\infty}(\omega,\gamma)\right|\geq 4\upsilon\,\big{(}\left|j\right|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}\,,$ (5.18) $\displaystyle\ \quad\quad\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\text{ with }\vec{\jmath}\cdot\ell+j+j^{\prime}=0\,\Big{\\}}\,,$ the function $i_{\infty}({\varphi}):=i_{\infty}(\omega,\gamma,\varepsilon;{\varphi})$ is a solution of ${\mathcal{F}}(\omega,\gamma,\varepsilon;(i_{\infty},\alpha_{\infty})(\omega,\gamma))=0$. As a consequence, the embedded torus ${\varphi}\mapsto i_{\infty}({\varphi})$ is invariant for the Hamiltonian vector field $X_{H_{\alpha_{\infty}(\omega,\gamma)}}$ as it is filled by quasi-periodic reversible traveling wave solutions with frequency $\omega$. Note that the Cantor-like set ${\cal C}_{\infty}^{\upsilon}$ in (5.14)-(5.18) is defined in terms of the functions ${\mathtt{m}}_{1}^{\infty}(\omega,\gamma)$ and the “final" perturbed normal frequencies $\mu_{j}^{\infty}(\omega,\gamma)$, $j\in{\mathbb{S}}_{0}^{c}$, which are defined for all the values of the parameters $(\omega,\gamma)$. This formulation completely decouples the Nash-Moser implicit function theorem construction of $(\alpha_{\infty},i_{\infty})(\omega,\gamma)$ (in Sections 6-9) from the discussion about the measure of the parameters where all the required “non-resonance" conditions are verified (Section 5.2). 
This approach considerably simplifies the presentation because the measure estimates required to build $(i_{\infty},\alpha_{\infty})(\omega,\gamma)$ are not verified at each step along the Nash-Moser iteration (the set ${\cal C}_{\infty}^{\upsilon}$ in (5.14)-(5.18) could be empty; in such a case the functions $(\alpha_{\infty},i_{\infty})(\omega,\gamma)$ constructed in Theorem 5.1 are obtained by just finitely many sums). In order to define the extended functions $(i_{\infty},\alpha_{\infty})$ for all the values of $(\omega,\gamma)$, preserving the weighted norm $\|\ \|^{k_{0},\upsilon}$, we use the Whitney extension theory reported in Section 3. We also recall that the conditions on the indexes in (5.14)-(5.18) (where $\vec{\jmath}\in{\mathbb{Z}}^{\nu}$ is the vector in (2.42)) are due to the fact that we look for traveling wave solutions. These restrictions are essential to prove the measure estimates of the next section. ###### Remark 5.2. The Diophantine condition (5.14) could be weakened by requiring only $|\omega\cdot\ell|\geq\ \upsilon\langle\ell\rangle^{-\tau}$ for any $\ell\cdot\vec{\jmath}=0$. In such a case the vector $\omega$ could admit one non-trivial resonance, i.e. $\overline{\ell}\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ such that $\omega\cdot\overline{\ell}=0$, thus the orbit $\\{\omega t\\}_{t\in{\mathbb{R}}}$ would densely fill a ($\nu-1$)-dimensional torus, orthogonal to $\overline{\ell}$. In any case $\vec{\jmath}\cdot\overline{\ell}\neq 0$ (otherwise $|\omega\cdot\overline{\ell}|\geq\upsilon\langle\overline{\ell}\rangle^{-\tau}>0$, contradicting that $\omega\cdot\overline{\ell}=0$) and then the closure of the set $\\{\omega t-\vec{\jmath}x\\}_{t\in{\mathbb{R}},x\in{\mathbb{R}}}$ is dense in ${\mathbb{T}}^{\nu}$. This is the natural minimal requirement to look for traveling quasi-periodic solutions $U(\omega t-\vec{\jmath}x)$ (Definition 3.1). The next goal is to deduce Theorem 1.2 from Theorem 5.1. ### 5.2 Measure estimates: proof of Theorem 1.2 We now want to prove the existence of quasi-periodic solutions of the original Hamiltonian system $H_{\varepsilon}$ in (5.3), which is equivalent after a rescaling to (2.10), and not just of the Hamiltonian system generated by the modified Hamiltonian $H_{\alpha_{\infty}}$. We proceed as follows. By (5.10), the function $\alpha_{\infty}(\,\cdot\,,\gamma)$ from ${\mathtt{\Omega}}$ into its image $\alpha_{\infty}({\mathtt{\Omega}},\gamma)$ is invertible and $\displaystyle\beta=\alpha_{\infty}(\omega,\gamma)=\omega+r_{\varepsilon}(\omega,\gamma)\ \Leftrightarrow$ (5.19) $\displaystyle\omega=\alpha_{\infty}^{-1}(\beta,\gamma)=\beta+\breve{r}_{\varepsilon}(\beta,\gamma)\,,\quad\left|\breve{r}_{\varepsilon}\right|^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-1}\,.$ Then, for any $\beta\in\alpha_{\infty}({\mathcal{C}}_{\infty}^{\upsilon})$, Theorem 5.1 proves the existence of an embedded invariant torus filled by quasi-periodic solutions with Diophantine frequency $\omega=\alpha_{\infty}^{-1}(\beta,\gamma)$ for the Hamiltonian $H_{\beta}=\beta\cdot I+\tfrac{1}{2}(w,{\bf{\Omega}}_{W}w)_{L^{2}}+\varepsilon P\,.$ Consider the curve of the unperturbed tangential frequency vector $\vec{\Omega}(\gamma)$ in (1.20).
In Theorem 5.3 below we prove that for "most" values of $\gamma\in[\gamma_{1},\gamma_{2}]$ the vector $(\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma),\gamma)$ is in ${\mathcal{C}}_{\infty}^{\upsilon}$, obtaining an embedded torus for the Hamiltonian $H_{\varepsilon}$ in (5.2), filled by quasi-periodic solutions with Diophantine frequency vector $\omega=\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma)$, denoted ${\widetilde{\Omega}}$ in Theorem 1.2. Thus $\varepsilon A(i_{\infty}({\widetilde{\Omega}}t))$, where $A$ is defined in (2.2), is a quasi-periodic traveling wave solution of the water waves equations (2.10) written in the Wahlén variables. Finally, going back to the original Zakharov variables via (2.7) we obtain solutions of (1.3). This proves Theorem 1.2 together with the following measure estimates. ###### Theorem 5.3. (Measure estimates) Let $\upsilon=\varepsilon^{\rm a}\,,\quad 0<{\rm a}<\min\\{{\rm a}_{0},1/(4m_{0}^{2})\\}\,,\quad\tau>m_{0}(2m_{0}\nu+\nu+2)\,,$ (5.20) where $m_{0}$ is the index of non-degeneracy given in Proposition 4.5 and $k_{0}:=m_{0}+2$. Then, for $\varepsilon\in(0,\varepsilon_{0})$ small enough, the measure of the set ${\mathcal{G}}_{\varepsilon}:=\big{\\{}\gamma\in[\gamma_{1},\gamma_{2}]\ :\ \big{(}\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma),\gamma\big{)}\in{\mathcal{C}}_{\infty}^{\upsilon}\big{\\}}$ (5.21) satisfies $|{\mathcal{G}}_{\varepsilon}|\rightarrow\gamma_{2}-\gamma_{1}$ as $\varepsilon\rightarrow 0$. The rest of this section is devoted to prove Theorem 5.3. By (5.19) we have $\vec{\Omega}_{\varepsilon}(\gamma):=\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma)=\vec{\Omega}(\gamma)+\vec{r}_{\varepsilon}\,,$ (5.22) where $\vec{r}_{\varepsilon}(\gamma):=\breve{r}_{\varepsilon}(\vec{\Omega}(\gamma),\gamma)$ satisfies $|\partial_{\gamma}^{k}{\vec{r}}_{\varepsilon}(\gamma)|\leq C\varepsilon\upsilon^{-(1+k)}\,,\quad\forall\,\left|k\right|\leq k_{0}\,,\ \text{uniformly on }[\gamma_{1},\gamma_{2}]\,.$ (5.23) We also denote, with a small abuse of notation, for all $j\in{\mathbb{S}}_{0}^{c}$, $\mu_{j}^{\infty}(\gamma):=\mu_{j}^{\infty}\big{(}\vec{\Omega}_{\varepsilon}(\gamma),\gamma\big{)}:={\mathtt{m}}_{1}^{\infty}(\gamma)j+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)\Omega_{j}(\gamma)-{\mathtt{m}}_{0}^{\infty}(\gamma){\rm sgn}(j)+{\mathfrak{r}}_{j}^{\infty}(\gamma)\,,$ (5.24) where ${\mathtt{m}}_{1}^{\infty}(\gamma):={\mathtt{m}}_{1}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$, ${\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma):={\mathtt{m}}_{\frac{1}{2}}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$, ${\mathtt{m}}_{0}^{\infty}(\gamma):={\mathtt{m}}_{0}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$ and ${\mathfrak{r}}_{j}^{\infty}(\gamma):={\mathfrak{r}}_{j}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$. 
By (5.13) and (5.23) we get the estimates $\displaystyle|\partial_{\gamma}^{k}{\mathtt{m}}_{1}^{\infty}(\gamma)|\leq C\varepsilon\upsilon^{-k}\,,\,\big{|}\partial_{\gamma}^{k}\big{(}{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)-1\big{)}\big{|}+|\partial_{\gamma}^{k}{\mathtt{m}}_{0}^{\infty}(\gamma)|\leq C\varepsilon\upsilon^{-k-1},$ (5.25) $\displaystyle\sup_{j\in{\mathbb{S}}_{0}^{c}}|j|^{\frac{1}{2}}\left|\partial_{\gamma}^{k}{\mathfrak{r}}_{j}^{\infty}(\gamma)\right|\leq C\varepsilon\upsilon^{-3-k}\,,\quad\forall\,0\leq k\leq k_{0}\,.$ (5.26) Recalling (5.14)-(5.18), the Cantor set in (5.21) becomes $\displaystyle{\mathcal{G}}_{\varepsilon}:=$ $\displaystyle\Big{\\{}\gamma\in[\gamma_{1},\gamma_{2}]\ :\ |\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\geq 8\upsilon\braket{\ell}^{-\tau}\,,\ \forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$ $\displaystyle\ \ |(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq 8\upsilon\braket{\ell}^{-\tau}\,,\ \forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$ $\displaystyle\ \ |\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)|\geq 4\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-\tau}\,,\ \forall\,\ell\in{\mathbb{Z}}^{\nu}\,,\,j\in{\mathbb{S}}_{0}^{c}\,,\text{ with }\vec{\jmath}\cdot\ell+j=0\,;$ $\displaystyle\ \ |\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq 4\upsilon\,\braket{\ell}^{-\tau}\,,$ $\displaystyle\ \ \forall\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c},\,(\ell,j,j^{\prime})\neq(0,j,j)\text{ with }\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,;$ $\displaystyle\ \ |\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|\geq 4\upsilon\,\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}\,,$ $\displaystyle\ \ \forall\,\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\text{ with }\vec{\jmath}\cdot\ell+j+j^{\prime}=0\Big{\\}}\,.$ We estimate the measure of the complementary set $\displaystyle{\mathcal{G}}_{\varepsilon}^{c}$ $\displaystyle:=[\gamma_{1},\gamma_{2}]\setminus{\mathcal{G}}_{\varepsilon}$ (5.27) $\displaystyle=\left(\bigcup_{\ell\neq 0}R_{\ell}^{(0)}\cup R_{\ell}^{(T)}\right)\cup\left(\bigcup_{\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\atop\vec{\jmath}\cdot\ell+j=0}R_{\ell,j}^{(I)}\right)\cup\left(\bigcup_{(\ell,j,j^{\prime})\neq(0,j,j),j\neq j^{\prime}\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)\cup\left(\bigcup_{\ell\in{\mathbb{Z}}^{\nu},j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\atop\vec{\jmath}\cdot\ell+j+j^{\prime}=0}Q_{\ell,j,j^{\prime}}^{(II)}\right)\,,$ where the “nearly-resonant sets" are, recalling the notation $\Gamma=[\gamma_{1},\gamma_{2}]$, $\displaystyle R_{\ell}^{(0)}:=R_{\ell}^{(0)}(\upsilon,\tau):=$ $\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|<8\upsilon\braket{\ell}^{-\tau}\big{\\}}\,,$ (5.28) $\displaystyle R_{\ell}^{(T)}:=R_{\ell}^{(T)}(\upsilon,\tau):=$ $\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|<8\upsilon\braket{\ell}^{-\tau}\big{\\}}\,,$ (5.29) $\displaystyle R_{\ell,j}^{(I)}:=R_{\ell,j}^{(I)}(\upsilon,\tau):=$ $\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)|<4\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-\tau}\big{\\}}\,,$ (5.30) $\displaystyle 
R_{\ell,j,j^{\prime}}^{(II)}:=R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau):=$ $\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|<4\upsilon\,\braket{\ell}^{-\tau}\big{\\}}\,,$ (5.31) $\displaystyle Q_{\ell,j,j^{\prime}}^{(II)}:=Q_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau):=$ $\displaystyle\Big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|<\frac{4\upsilon\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}}{\braket{\ell}^{\tau}}\Big{\\}}\,.$ (5.32) Note that in the third union in (5.27) we may require $j\neq j^{\prime}$ because $R_{\ell,j,j}^{(II)}\subset R_{\ell}^{(0)}$. In the sequel we shall always assume the momentum conditions on the indexes $\ell,j,j^{\prime}$ written in (5.27). Some of the above sets are empty. ###### Lemma 5.4. For $\varepsilon\in(0,\varepsilon_{0})$ small enough, if $Q_{\ell,j,j^{\prime}}^{(II)}\neq\emptyset$ then $|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\leq C\braket{\ell}$. ###### Proof. If $Q_{\ell,j,j^{\prime}}^{(II)}\neq\emptyset$ then there exists $\gamma\in[\gamma_{1},\gamma_{2}]$ such that $|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|<4\upsilon\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}$; since $|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\leq C|\ell|$ by (5.22)-(5.23), it follows that $\left|\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)\right|<\frac{4\upsilon\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}}{\braket{\ell}^{\tau}}+C|\ell|\,.$ (5.33) By (5.24) we have $\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)={\mathtt{m}}_{1}^{\infty}(\gamma)(j+j^{\prime})+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)(\Omega_{j}(\gamma)+\Omega_{j^{\prime}}(\gamma))-{\mathtt{m}}_{0}^{\infty}(\gamma)({\rm sgn}(j)+{\rm sgn}(j^{\prime}))+{\mathfrak{r}}_{j}^{\infty}(\gamma)+{\mathfrak{r}}_{j^{\prime}}^{\infty}(\gamma)\,.$ Then, by (5.25)-(5.26) with $k=0$, Lemma 4.4 and the momentum condition $j+j^{\prime}=-\vec{\jmath}\cdot\ell$, we deduce, for $\varepsilon$ small enough, $\displaystyle|\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|$ $\displaystyle\geq-C\varepsilon|\ell|+\tfrac{\sqrt{g}}{2}\,\big{|}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{|}-C^{\prime}-C\varepsilon\upsilon^{-3}\,.$ (5.34) Combining (5.33) and (5.34), we deduce $||j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}|\leq C\braket{\ell}$, for $\varepsilon$ small enough. ∎ In order to estimate the measure of the sets (5.28)-(5.32), the key point is to prove that the perturbed frequencies satisfy transversality properties similar to the ones (4.15)-(4.18) satisfied by the unperturbed frequencies. By Proposition 4.5, (5.22), and the estimates (5.23), (5.25)-(5.26) we deduce the following lemma (cf. Lemma 5.5 in [7]). ###### Lemma 5.5. 
(Perturbed transversality) For $\varepsilon\in(0,\varepsilon_{0})$ small enough and for all $\gamma\in[\gamma_{1},\gamma_{2}]$, $\displaystyle\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\quad\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$ (5.35) $\displaystyle\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\quad\forall\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ (5.36) $\displaystyle\begin{cases}\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\\\ \vec{\jmath}\cdot\ell+j=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\ j\in{\mathbb{S}}_{0}^{c}\,;\end{cases}$ (5.37) $\displaystyle\begin{cases}\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\\\ \vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\ j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\ (\ell,j,j^{\prime})\neq(0,j,j)\,;\end{cases}$ (5.38) $\displaystyle\begin{cases}\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\\\ \vec{\jmath}\cdot\ell+j+j^{\prime}=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\ j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,.\end{cases}$ (5.39) The transversality estimates (5.35)-(5.39) and an application of Rüssmann's Theorem 17.1 in [32] directly imply the following bounds for the sets in (5.27) (cf. Lemma 5.6 in [7]). ###### Lemma 5.6. (Estimates of the resonant sets) The measures of the sets (5.28)-(5.32) satisfy $\displaystyle|R_{\ell}^{(0)}|,|R_{\ell}^{(T)}|\lesssim(\upsilon\braket{\ell}^{-(\tau+1)})^{\frac{1}{m_{0}}}\,,$ $\displaystyle\quad|R_{\ell,j}^{(I)}|\lesssim\big{(}\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,,$ $\displaystyle|R_{\ell,j,j^{\prime}}^{(II)}|\lesssim\big{(}\upsilon\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,,$ $\displaystyle\quad|Q_{\ell,j,j^{\prime}}^{(II)}|\lesssim\big{(}\upsilon\,\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,.$ We now estimate the measure of all the sets in (5.27). 
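For orientation, we recall schematically the sublevel-set estimate behind Lemma 5.6 (an informal statement, with constants left implicit; the precise formulation is Rüssmann's Theorem 17.1 in [32], in the form used in Lemma 5.6 of [7]): if $f$ is sufficiently smooth on $[\gamma_{1},\gamma_{2}]$ and satisfies the non-degeneracy property $\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}f(\gamma)|\geq\beta>0$ for all $\gamma\in[\gamma_{1},\gamma_{2}]$, then, for any $\eta>0$, $\big{|}\big{\\{}\gamma\in[\gamma_{1},\gamma_{2}]\,:\,|f(\gamma)|<\eta\big{\\}}\big{|}\lesssim(\eta/\beta)^{\frac{1}{m_{0}}}\,,$ with an implicit constant which, in the applications below, is uniform in $\ell,j,j^{\prime}$ thanks to the bounds (5.23), (5.25)-(5.26). For instance, applying this with $f(\gamma):=\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)$, $\beta:=\frac{\rho_{0}}{2}\braket{\ell}$ (admissible by (5.38)) and $\eta:=4\upsilon\braket{\ell}^{-\tau}$ gives the bound $|R_{\ell,j,j^{\prime}}^{(II)}|\lesssim(\upsilon\braket{\ell}^{-(\tau+1)})^{\frac{1}{m_{0}}}$ of Lemma 5.6; the other estimates are obtained analogously. 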
By Lemma 5.6, and the choice of $\tau$ in (5.20), we have $\displaystyle\Big{|}\bigcup_{\ell\neq 0}R^{(0)}_{\ell}\cup R^{(T)}_{\ell}\Big{|}\leq\sum_{\ell\neq 0}|R^{(0)}_{\ell}|+|R^{(T)}_{\ell}|\lesssim\sum_{\ell\neq 0}\Big{(}\frac{\upsilon}{\braket{\ell}^{\tau+1}}\Big{)}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,,$ (5.40) $\displaystyle\left|\bigcup_{\ell\neq 0,j=-\vec{\jmath}\cdot\ell}R_{\ell,j}^{(I)}\right|\leq\sum_{\ell\neq 0}|R_{\ell,-\vec{\jmath}\cdot\ell}^{(I)}|\lesssim\sum_{\ell}\Big{(}\frac{\upsilon}{\braket{\ell}^{\tau+\frac{1}{2}}}\Big{)}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,,$ (5.41) and using also Lemma 5.4, $\displaystyle\left|\bigcup_{\ell,\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\atop\vec{\jmath}\cdot\ell+j+j^{\prime}=0}Q_{\ell,j,j^{\prime}}^{(II)}\right|\leq\sum_{\ell,\left|j\right|\leq C\braket{\ell}^{2},\atop j^{\prime}=-\vec{\jmath}\cdot\ell-j}|Q_{\ell,j,j^{\prime}}^{(II)}|\lesssim\sum_{\ell,\left|j\right|\leq C\braket{\ell}^{2}}\left(\frac{\upsilon}{\braket{\ell}^{\tau}}\right)^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,.$ (5.42) Note that the series in (5.40)-(5.42) converge since, by the choice of $\tau$ in (5.20), $\frac{\tau}{m_{0}}-2>\nu$ and, a fortiori, $\frac{\tau+1}{m_{0}}>\nu$. We are left with estimating the measure of $\displaystyle\bigcup_{(\ell,j,j^{\prime})\neq(0,j,j),j\neq j^{\prime}\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}\\!\\!\\!\\!\\!\\!\\!\\!R_{\ell,j,j^{\prime}}^{(II)}$ $\displaystyle=\left(\bigcup_{j\neq j^{\prime}\,,\ j\cdot j^{\prime}<0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)\cup\left(\bigcup_{j\neq j^{\prime}\,,\ j\cdot j^{\prime}>0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)=:{\mathtt{I}}_{1}\cup{\mathtt{I}}_{2}\,.$ (5.43) We first estimate the measure of ${\mathtt{I}}_{1}$. For $j\cdot j^{\prime}<0$, the momentum condition reads $j-j^{\prime}={\rm sgn}(j)(|j|+|j^{\prime}|)=-\vec{\jmath}\cdot\ell$, thus $|j|,|j^{\prime}|\leq C\left\langle\ell\right\rangle$. Hence, by Lemma 5.6 and the choice of $\tau$ in (5.20), we have $\displaystyle|{\mathtt{I}}_{1}|\leq\sum_{\ell,|j|\leq C\left\langle\ell\right\rangle,j^{\prime}=j+\vec{\jmath}\cdot\ell}|R_{\ell,j,j^{\prime}}^{(II)}|\lesssim\sum_{\ell,\left|j\right|\leq C\braket{\ell}}\left(\frac{\upsilon}{\braket{\ell}^{\tau+1}}\right)^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,.$ (5.44) Then we estimate the measure of ${\mathtt{I}}_{2}$ in (5.43). The key step is given in the next lemma. Recall the definitions of the sets $R_{\ell,j,j^{\prime}}^{(II)}$ and $R_{\ell}^{(T)}$ in (5.31) and (5.29). ###### Lemma 5.7. Let $\upsilon_{0}\geq\upsilon$ and $\tau\geq\tau_{0}\geq 1$. There is a constant $C_{1}>0$ such that, for $\varepsilon$ small enough, for any $\vec{\jmath}\cdot\ell+j-j^{\prime}=0$, $j\cdot j^{\prime}>0$, $\min\\{|j|,|j^{\prime}|\\}\geq C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}\quad\Longrightarrow\quad R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\subset\bigcup_{\ell\neq 0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})\,.$ (5.45) ###### Proof. 
If $\gamma\in[\gamma_{1},\gamma_{2}]\setminus\bigcup_{\ell\neq 0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})$, then $|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq 8\upsilon_{0}\braket{\ell}^{-\tau_{0}}\,,\quad\forall\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,.$ (5.46) Then, by (5.24), the momentum condition $j-j^{\prime}=-\vec{\jmath}\cdot\ell$, (5.25), (5.26), Lemma 4.4, the condition $j\cdot j^{\prime}>0$, (4.23), and (5.46), we deduce that $\displaystyle|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+{\mathtt{m}}_{1}^{\infty}(j-j^{\prime})|-|{\mathtt{m}}_{\frac{1}{2}}^{\infty}||\Omega_{j}(\gamma)-\Omega_{j^{\prime}}(\gamma)|-|{\mathfrak{r}}_{j}^{\infty}(\gamma)-{\mathfrak{r}}_{j^{\prime}}^{\infty}(\gamma)|$ $\displaystyle\geq|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}\vec{\jmath})\cdot\ell|-(1-C\varepsilon\upsilon^{-1})\big{|}|j|^{\frac{1}{2}}-|j^{\prime}|^{\frac{1}{2}}\big{|}-C\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}-C\frac{\varepsilon}{\upsilon^{3}}\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}$ $\displaystyle\geq\frac{8\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}-\frac{1}{2}\frac{|j-j^{\prime}|}{|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}}-C\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}\geq\frac{8\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}-C\Big{(}\frac{\braket{\ell}}{|j|^{\frac{1}{2}}}+\frac{\braket{\ell}}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}$ $\displaystyle\geq\frac{4\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}$ for any $|j|,|j^{\prime}|>C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}$, for $C_{1}>C^{2}/64$. Since $\upsilon_{0}\geq\upsilon$ and $\tau\geq\tau_{0}$ we deduce that $|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq 4\upsilon\braket{\ell}^{-\tau}\,,$ namely that $\gamma\not\in R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)$. ∎ Note that the set of indexes $(\ell,j,j^{\prime})$ such that $\vec{\jmath}\cdot\ell+j-j^{\prime}=0$ and $\min\\{|j|,|j^{\prime}|\\}<C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}$ is included, for $\upsilon_{0}$ small enough, into the set ${\cal I}_{\ell}:=\Big{\\{}(\ell,j,j^{\prime})\ :\,\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\ |j|,|j^{\prime}|\leq\upsilon_{0}^{-3}\langle\ell\rangle^{2(\tau_{0}+1)}\Big{\\}}$ (5.47) because $\max\\{|j|,|j^{\prime}|\\}\leq\min\\{|j|,|j^{\prime}|\\}+|j-j^{\prime}|<C_{1}\upsilon_{0}^{-2}\langle\ell\rangle^{2(\tau_{0}+1)}+C\langle\ell\rangle\leq\upsilon_{0}^{-3}\langle\ell\rangle^{2(\tau_{0}+1)}$. As a consequence, by Lemma 5.7 we deduce that $\displaystyle{\mathtt{I}}_{2}=\bigcup_{j\neq j^{\prime}\,,\ j\cdot j^{\prime}>0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\subset\Big{(}\bigcup_{\ell\neq 0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})\Big{)}\bigcup\Big{(}\bigcup_{(\ell,j,j^{\prime})\in{\cal I}_{\ell}}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\Big{)}\,.$ (5.48) ###### Lemma 5.8. Let $\tau_{0}:=m_{0}\nu$ and $\upsilon_{0}=\upsilon^{\frac{1}{4m_{0}}}$. Then $|{\mathtt{I}}_{2}|\leq C\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$ (5.49) ###### Proof. 
By (5.40) (applied with $\upsilon_{0},\tau_{0}$ instead of $\upsilon,\tau$) and $\tau_{0}=m_{0}\nu$, we have $\Big{|}\bigcup_{\ell\neq 0}R^{(T)}_{\ell}(\upsilon_{0},\tau_{0})\Big{|}\lesssim\upsilon_{0}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$ (5.50) Moreover, recalling (5.47), $\Big{|}\bigcup_{(\ell,j,j^{\prime})\in{\cal I}_{\ell}}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\Big{|}\lesssim\sum_{\ell\in{\mathbb{Z}}^{\nu}\atop|j|\leq C_{1}\upsilon_{0}^{-3}\braket{\ell}^{2(\tau_{0}+1)}}\left(\frac{\upsilon}{\braket{\ell}^{\tau+1}}\right)^{\frac{1}{m_{0}}}\lesssim\sum_{\ell\in{\mathbb{Z}}^{\nu}}\frac{\upsilon^{\frac{1}{m_{0}}}\upsilon_{0}^{-3}}{\braket{\ell}^{\frac{\tau+1}{m_{0}}-2(\tau_{0}+1)}}\leq C\upsilon^{\frac{1}{4m_{0}}}\,,$ (5.51) by the choice of $\tau$ in (5.20) and $\upsilon_{0}$. The bound (5.49) follows by (5.50) and (5.51). ∎ ###### Proof of Theorem 5.3 completed. By (5.27), (5.40), (5.41), (5.42), (5.43), (5.44) and (5.49) we deduce that $\left|{\mathcal{G}}_{\varepsilon}^{c}\right|\leq C\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$ For $\upsilon=\varepsilon^{\rm a}$ as in (5.20), we get $|{\mathcal{G}}_{\varepsilon}|\geq\gamma_{2}-\gamma_{1}-C\varepsilon^{{\rm a}/(4m_{0}^{2})}$. The proof of Theorem 5.3 is concluded. ∎ ###### Remark 5.9. We have actually imposed in Lemma 5.8 the stronger non-resonance condition (5.15) with $\upsilon_{0}=\upsilon^{\frac{1}{4m_{0}}}>\upsilon$. Since it is of no significant importance for Lemma 7.7, we keep $\upsilon$. ## 6 Approximate inverse In order to implement a convergent Nash-Moser scheme that leads to a solution of ${\mathcal{F}}(i,\alpha)=0$, where ${\mathcal{F}}(i,\alpha)$ is the nonlinear operator defined in (5.1), we construct an _almost approximate right inverse_ of the linearized operator ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0},\alpha_{0})[\widehat{\imath},{\widehat{\alpha}}]=\omega\cdot\partial_{\varphi}\widehat{\imath}-{\rm d}_{i}X_{H_{\alpha}}\left(i_{0}({\varphi})\right)[\widehat{\imath}]-\left({\widehat{\alpha}},0,0\right)\,.$ Note that ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0},\alpha_{0})={\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})$ is independent of $\alpha_{0}$. We assume that the torus $i_{0}({\varphi})=(\theta_{0}({\varphi}),I_{0}({\varphi}),w_{0}({\varphi}))$ is reversible and traveling, according to (5.7). In the sequel we shall assume the smallness condition, for some ${\mathtt{k}}:={\mathtt{k}}(\tau,\nu)>0$, $\varepsilon\upsilon^{-{\mathtt{k}}}\ll 1$. We closely follow the strategy presented in [6] and implemented for the water waves equations in [9, 2, 7]. As shown in [7], this construction preserves the momentum-preserving properties needed for the search of traveling waves, and the estimates are very similar. Thus we shall be brief. First of all, we state tame estimates for the composition operator induced by the Hamiltonian vector field $X_{P}=(\partial_{I}P,-\partial_{\theta}P,\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J\nabla_{w}P)$ in (5.1) (see Lemma 6.1 of [7]). ###### Lemma 6.1. (Estimates of the perturbation $P$) Let ${\mathfrak{I}}({\varphi})$ in (5.8) satisfy $\left\|{\mathfrak{I}}\right\|_{3s_{0}+2k_{0}+5}^{k_{0},\upsilon}\leq 1$. 
Then, for any $s\geq s_{0}$, $\left\|X_{P}(i)\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}1+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+3}^{k_{0},\upsilon}$, and, for all $\widehat{\imath}:=({\widehat{\theta}},{\widehat{I}},{\widehat{w}})$, $\displaystyle\left\|{\rm d}_{i}X_{P}(i)[\widehat{\imath}]\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+1}^{k_{0},\upsilon}+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+4}^{k_{0},\upsilon}\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon}\,,$ $\displaystyle\left\|{\rm d}_{i}^{2}X_{P}(i)[\widehat{\imath},\widehat{\imath}]\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+1}^{k_{0},\upsilon}\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon}+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+5}^{k_{0},\upsilon}(\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon})^{2}\,.$ Along this section, we assume the following hypothesis, which is verified by the approximate solutions obtained at each step of the Nash-Moser Theorem 9.1. * • ANSATZ. The map $(\omega,\gamma)\mapsto{\mathfrak{I}}_{0}(\omega,\gamma)=i_{0}({\varphi};\omega,\gamma)-({\varphi},0,0)$ is $k_{0}$-times differentiable with respect to the parameters $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and, for some $\mu:=\mu(\tau,\nu)>0$, $\upsilon\in(0,1)$, $\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu}^{k_{0},\upsilon}+\left|\alpha_{0}-\omega\right|^{k_{0},\upsilon}\leq C\varepsilon\upsilon^{-1}\,.$ (6.1) We first modify the approximate torus $i_{0}({\varphi})$ to obtain a nearby isotropic torus $i_{\delta}({\varphi})$, namely such that the pull-back 1-form $i_{\delta}^{*}\Lambda$ is closed, where $\Lambda$ is the Liouville 1-form defined in (2.44). Consider the pull-back $1$-form $\displaystyle i_{0}^{*}\Lambda$ $\displaystyle=\sum_{k=1}^{\nu}a_{k}({\varphi}){\rm d}{\varphi}_{k}\,,\quad a_{k}({\varphi}):=-\big{(}[\partial_{\varphi}\theta_{0}({\varphi})]^{\top}I_{0}({\varphi})\big{)}_{k}+\tfrac{1}{2}\big{(}J_{\angle}^{-1}w_{0}({\varphi}),\partial_{{\varphi}_{k}}w_{0}({\varphi})\big{)}_{L^{2}}\,,$ (6.2) and define $A_{kj}({\varphi}):=\partial_{{\varphi}_{k}}a_{j}({\varphi})-\partial_{{\varphi}_{j}}a_{k}({\varphi})$. The next Lemma follows as in Lemma 5.3 in [2] and Lemma 6.2 in [7]. Let $Z({\varphi}):={\mathcal{F}}(i_{0},\alpha_{0})({\varphi})=\omega\cdot\partial_{\varphi}i_{0}({\varphi})-X_{H_{\alpha_{0}}}(i_{0}({\varphi}))$. ###### Lemma 6.2. (Isotropic torus) The torus $i_{\delta}({\varphi}):=(\theta_{0}({\varphi}),I_{\delta}({\varphi}),w_{0}({\varphi}))$, defined by $I_{\delta}({\varphi}):=I_{0}({\varphi})+[\partial_{\varphi}\theta_{0}({\varphi})]^{-\top}\rho({\varphi})\,,\quad\rho=(\rho_{j})_{j=1,\ldots,\nu}\,,\quad\rho_{j}({\varphi}):=\Delta_{\varphi}^{-1}\sum_{k=1}^{\nu}\partial_{{\varphi}_{k}}A_{kj}({\varphi})\,,$ is isotropic. 
Moreover, there is $\sigma:=\sigma(\nu,\tau)$ such that, for all $s\geq s_{0}$, $\displaystyle\left\|I_{\delta}-I_{0}\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{s}\left\|{\mathfrak{I}}_{0}\right\|_{s+1}^{k_{0},\upsilon}\,,\quad\left\|I_{\delta}-I_{0}\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\upsilon^{-1}\big{(}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\big{)}$ (6.3) $\displaystyle\left\|{\mathcal{F}}(i_{\delta},\alpha_{0})\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{s}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\,,\quad\left\|{\rm d}_{i}(i_{\delta})[\widehat{\imath}]\right\|_{s_{1}}\lesssim_{s_{1}}\left\|\widehat{\imath}\right\|_{s_{1}+1}\,,$ (6.4) for $s_{1}\leq s_{0}+\mu$ (cfr. (6.1)). Furthermore $i_{\delta}({\varphi})$ is a reversible and traveling torus, cfr. (5.7). We first find an approximate inverse of the linearized operator ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$. We introduce the symplectic diffeomorphism $G_{\delta}:(\phi,y,{\mathtt{w}})\rightarrow(\theta,I,w)$ of the phase space ${\mathbb{T}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, $\begin{pmatrix}\theta\\\ I\\\ w\end{pmatrix}:=G_{\delta}\begin{pmatrix}\phi\\\ y\\\ {\mathtt{w}}\end{pmatrix}:=\begin{pmatrix}\theta_{0}(\phi)\\\ I_{\delta}(\phi)+\left[\partial_{\phi}\theta_{0}(\phi)\right]^{-\top}y+\left[(\partial_{\theta}{\widetilde{w}}_{0})(\theta_{0}(\phi))\right]^{\top}J_{\angle}^{-1}{\mathtt{w}}\\\ w_{0}(\phi)+{\mathtt{w}}\end{pmatrix}\,,$ (6.5) where ${\widetilde{w}}_{0}(\theta):=w_{0}(\theta_{0}^{-1}(\theta))$. It is proved in Lemma 2 of [6] that $G_{\delta}$ is symplectic, because the torus $i_{\delta}$ is isotropic (Lemma 6.2). In the new coordinates, $i_{\delta}$ is the trivial embedded torus $(\phi,y,{\mathtt{w}})=(\phi,0,0)$. Moreover the diffeomorphism $G_{\delta}$ in (6.5) is reversibility and momentum preserving, in the sense that (Lemma 6.3 in [7]) $\vec{\mathcal{S}}\circ G_{\delta}=G_{\delta}\circ\vec{\mathcal{S}}$, $\vec{\tau}_{\varsigma}\circ G_{\delta}=G_{\delta}\circ\vec{\tau}_{\varsigma}$, $\forall\,\varsigma\in{\mathbb{R}}$, where $\vec{\mathcal{S}}$ and $\vec{\tau}_{\varsigma}$ are defined respectively in (2.40), (2.41). Under the symplectic diffeomorphism $G_{\delta}$, the Hamiltonian vector field $X_{H_{\alpha}}$ changes into $X_{K_{\alpha}}=\left(DG_{\delta}\right)^{-1}X_{H_{\alpha}}\circ G_{\delta}\qquad{\rm where}\qquad K_{\alpha}:=H_{\alpha}\circ G_{\delta}$ is reversible and momentum preserving, in the sense that $K_{\alpha}\circ\vec{\mathcal{S}}=K_{\alpha}$, $K_{\alpha}\circ\vec{\tau}_{\varsigma}=K_{\alpha}$, $\forall\,\varsigma\in{\mathbb{R}}$. The Taylor expansion of $K_{\alpha}$ at the trivial torus $(\phi,0,0)$ is $\displaystyle K_{\alpha}(\phi,y,{\mathtt{w}})=$ $\displaystyle\ K_{00}(\phi,\alpha)+K_{10}(\phi,\alpha)\cdot y+(K_{01}(\phi,\alpha),{\mathtt{w}})_{L^{2}}+\tfrac{1}{2}K_{20}(\phi)y\cdot y$ (6.6) $\displaystyle+(K_{11}(\phi)y,{\mathtt{w}})_{L^{2}}+\tfrac{1}{2}(K_{02}(\phi){\mathtt{w}},{\mathtt{w}})_{L^{2}}+K_{\geq 3}(\phi,y,{\mathtt{w}})\,,$ where $K_{\geq 3}$ collects all terms at least cubic in the variables $(y,{\mathtt{w}})$. 
Here $K_{00}\in{\mathbb{R}}$, $K_{10}\in{\mathbb{R}}^{\nu}$, $K_{01}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, whereas $K_{20}$ is a $\nu\times\nu$ symmetric matrix, $K_{11}\in{\mathcal{L}}({\mathbb{R}}^{\nu},\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle})$ and $K_{02}$ is a self-adjoint operator acting on $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. The Hamilton equations associated to (6.6) are $\begin{cases}\dot{\phi}=K_{10}(\phi,\alpha)+K_{20}(\phi)y+[K_{11}(\phi)]^{\top}{\mathtt{w}}+\partial_{y}K_{\geq 3}(\phi,y,{\mathtt{w}})\\\ \dot{y}=-\partial_{\phi}K_{00}(\phi,\alpha)-[\partial_{\phi}K_{10}(\phi,\alpha)]^{\top}y-[\partial_{\phi}K_{01}(\phi,\alpha)]^{\top}{\mathtt{w}}\\\ \ \ \ \ \ -\partial_{\phi}\left(\tfrac{1}{2}K_{20}(\phi)y\cdot y+\left(K_{11}(\phi)y,{\mathtt{w}}\right)_{L^{2}}+\tfrac{1}{2}\left(K_{02}(\phi){\mathtt{w}},{\mathtt{w}}\right)_{L^{2}}+K_{\geq 3}(\phi,y,{\mathtt{w}})\right)\\\ \dot{\mathtt{w}}=J_{\angle}\,\left(K_{01}(\phi,\alpha)+K_{11}(\phi)y+K_{02}(\phi){\mathtt{w}}+\nabla_{{\mathtt{w}}}K_{\geq 3}(\phi,y,{\mathtt{w}})\right)\,,\end{cases}$ (6.7) where $\partial_{\phi}K_{10}^{\top}$ is the $\nu\times\nu$ transposed matrix and $\partial_{\phi}K_{01}^{\top},K_{11}^{\top}:\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\rightarrow{\mathbb{R}}^{\nu}$ are defined by the duality relation $(\partial_{\phi}K_{01}[{\widehat{\phi}}],{\mathtt{w}})_{L^{2}}={\widehat{\phi}}\cdot[\partial_{\phi}K_{01}]^{\top}{\mathtt{w}}$ for any ${\widehat{\phi}}\in{\mathbb{R}}^{\nu}$, ${\mathtt{w}}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. The terms $K_{00},K_{01}$, $K_{10}-\omega$ in the Taylor expansion (6.6) vanish at an exact solution: indeed, arguing as in Lemma 5.4 in [2], there is $\sigma:=\sigma(\nu,\tau)>0$, such that, for all $s\geq s_{0}$, $\left\|\partial_{\phi}K_{00}(\cdot,\alpha_{0})\right\|_{s}^{k_{0},\upsilon}+\left\|K_{10}(\cdot,\alpha_{0})-\omega\right\|_{s}^{k_{0},\upsilon}+\left\|K_{01}(\cdot,\alpha_{0})\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\,.$ (6.8) Under the linear change of variables $DG_{\delta}({\varphi},0,0)\begin{pmatrix}{\widehat{\phi}}\\\ {\widehat{y}}\\\ {\widehat{{\mathtt{w}}}}\end{pmatrix}:=\begin{pmatrix}\partial_{\phi}\theta_{0}({\varphi})&0&0\\\ \partial_{\phi}I_{\delta}({\varphi})&[\partial_{\phi}\theta_{0}({\varphi})]^{-\top}&[(\partial_{\theta}{\widetilde{w}}_{0})(\theta_{0}({\varphi}))]^{\top}J_{\angle}^{-1}\\\ \partial_{\phi}w_{0}({\varphi})&0&{\rm Id}\end{pmatrix}\begin{pmatrix}{\widehat{\phi}}\\\ {\widehat{y}}\\\ {\widehat{{\mathtt{w}}}}\end{pmatrix}\,,$ the linearized operator ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$ is approximately transformed into the one obtained when one linearizes the Hamiltonian system (6.7) at $(\phi,y,{\mathtt{w}})=({\varphi},0,0)$, differentiating also in $\alpha$ at $\alpha_{0}$ and changing $\partial_{t}\rightsquigarrow\omega\cdot\partial_{\varphi}$, namely $\begin{pmatrix}\widehat{\phi}\\\ \widehat{y}\\\ \widehat{\mathtt{w}}\\\ \widehat{\alpha}\end{pmatrix}\mapsto\begin{pmatrix}\omega\cdot\partial_{\varphi}{\widehat{\phi}}-\partial_{\phi}K_{10}({\varphi})[{\widehat{\phi}}]-\partial_{\alpha}K_{10}({\varphi})[{\widehat{\alpha}}]-K_{20}({\varphi}){\widehat{y}}-[K_{11}({\varphi})]^{\top}{\widehat{{\mathtt{w}}}}\\\ 
\omega\cdot\partial_{\varphi}{\widehat{y}}+\partial_{\phi\phi}K_{00}({\varphi})[{\widehat{\phi}}]+\partial_{\alpha}\partial_{\phi}K_{00}({\varphi})[{\widehat{\alpha}}]+[\partial_{\phi}K_{10}({\varphi})]^{\top}{\widehat{y}}+[\partial_{\phi}K_{01}({\varphi})]^{\top}{\widehat{{\mathtt{w}}}}\\\ \omega\cdot\partial_{\varphi}{\widehat{{\mathtt{w}}}}-J_{\angle}\,\big{(}\partial_{\phi}K_{01}({\varphi})[{\widehat{\phi}}]+\partial_{\alpha}K_{01}({\varphi})[{\widehat{\alpha}}]+K_{11}({\varphi}){\widehat{y}}+K_{02}({\varphi}){\widehat{{\mathtt{w}}}}\big{)}\end{pmatrix}.$ (6.9) In order to construct an “almost approximate" inverse of (6.9), we need that ${\mathcal{L}}_{\omega}:=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}\left(\omega\cdot\partial_{\varphi}-JK_{02}({\varphi})\right)|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}$ (6.10) is "almost invertible" (on traveling waves) up to remainders of size $O(N_{{\mathtt{n}}-1}^{-{{\mathtt{a}}}})$, where, for ${\mathtt{n}}\in{\mathbb{N}}_{0}$ $N_{\mathtt{n}}:=K_{\mathtt{n}}^{p}\,,\quad K_{\mathtt{n}}:=K_{0}^{\chi^{\mathtt{n}}}\,,\quad\chi=3/2\,.$ (6.11) The $(K_{\mathtt{n}})_{{\mathtt{n}}\geq 0}$ is the scale used in the nonlinear Nash-Moser iteration of Section 9 and $(N_{\mathtt{n}})_{{\mathtt{n}}\geq 0}$ is the one in Lemma 7.7 and Theorem 8.2. Let $H_{\angle}^{s}({\mathbb{T}}^{\nu+1}):=H^{s}({\mathbb{T}}^{\nu+1})\cap\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. * (AI) Almost invertibility of ${\mathcal{L}}_{\omega}$: There exist positive real numbers $\sigma$, $\mu({\mathtt{b}})$, ${\mathtt{a}}$, $p$, $K_{0}$ and a subset ${\mathtt{\Lambda}}_{o}\subset{\mathtt{D}}{\mathtt{C}}(\upsilon,\tau)\times[\gamma_{1},\gamma_{2}]$ such that, for all $(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, the operator ${\mathcal{L}}_{\omega}$ may be decomposed as ${\mathcal{L}}_{\omega}={\mathcal{L}}_{\omega}^{<}+{\mathcal{R}}_{\omega}+{\mathcal{R}}_{\omega}^{\perp}\,,$ (6.12) where, for any traveling wave function $g\in H_{\angle}^{s+\sigma}({\mathbb{T}}^{\nu+1},{\mathbb{R}}^{2})$ and for any $(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, there is a traveling wave solution $h\in H_{\angle}^{s}({\mathbb{T}}^{\nu+1},{\mathbb{R}}^{2})$ of ${\mathcal{L}}_{\omega}^{<}h=g$ satisfying, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\sigma$, $\left\|({\mathcal{L}}_{\omega}^{<})^{-1}g\right\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\left\|g\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|g\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({{\mathtt{b}}})+\sigma}^{k_{0},\upsilon}\big{)}\,.$ (6.13) In addition, if $g$ is anti-reversible, then $h$ is reversible. 
Moreover, for any $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\sigma$, for any traveling wave $h\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, the operators ${\mathcal{R}}_{\omega},{\mathcal{R}}_{\omega}^{\perp}$ satisfy the estimates $\displaystyle\left\|{\mathcal{R}}_{\omega}h\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\big{(}\left\|h\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({\mathtt{b}})+\sigma}^{k_{0},\upsilon}\big{)}\,,$ (6.14) $\displaystyle\left\|{\mathcal{R}}_{\omega}^{\perp}h\right\|_{s_{0}}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}K_{\mathtt{n}}^{-{\rm b}}\big{(}\left\|h\right\|_{s_{0}+{\rm b}+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu({\mathtt{b}})+\sigma+{\rm b}}^{k_{0},\upsilon}\big{)}\,,\ \forall\,{\rm b}>0\,,$ (6.15) $\displaystyle\left\|{\mathcal{R}}_{\omega}^{\perp}h\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}\left\|h\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({\mathtt{b}})+\sigma}^{k_{0},\upsilon}\,.$ (6.16) This assumption shall be verified by Theorem 8.9 at each step of the Nash- Moser iteration. In order to find an almost approximate inverse of the linear operator in (6.9) (and so of ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$), it is sufficient to invert the operator ${\mathbb{D}}\big{[}{\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}}\big{]}:=\begin{pmatrix}\omega\cdot\partial_{\varphi}{\widehat{\phi}}-\partial_{\alpha}K_{10}({\varphi})[{\widehat{\alpha}}]-K_{20}({\varphi}){\widehat{y}}-K_{11}^{\top}({\varphi}){\widehat{{\mathtt{w}}}}\\\ \omega\cdot\partial_{\varphi}{\widehat{y}}+\partial_{\alpha}\partial_{\phi}K_{00}({\varphi})[{\widehat{\alpha}}]\\\ {\mathcal{L}}_{\omega}^{<}{\widehat{{\mathtt{w}}}}-J_{\angle}\left(\partial_{\alpha}K_{01}({\varphi})[{\widehat{\alpha}}]+K_{11}({\varphi}){\widehat{y}}\right)\end{pmatrix}$ (6.17) obtained neglecting in (6.9) the terms $\partial_{\phi}K_{10}$, $\partial_{\phi\phi}K_{00}$, $\partial_{\phi}K_{00}$, $\partial_{\phi}K_{01}$ (they vanish at an exact solution by (6.8)) and the small remainders ${\mathcal{R}}_{\omega}$, ${\mathcal{R}}_{\omega}^{\perp}$ appearing in (6.12). As in section 6 of [7] we have the following result, where we denote $\|(\phi,y,{\mathtt{w}},\alpha)\|_{s}^{k_{0},\upsilon}:=\max\big{\\{}\|(\phi,y,{\mathtt{w}})\|_{s}^{k_{0},\upsilon},\left|\alpha\right|^{k_{0},\upsilon}\big{\\}}$ (see [7, Proposition 6.5]): ###### Proposition 6.3. Assume (6.1) (with $\mu=\mu({{\mathtt{b}}})+\sigma$) and (AI). Then, for all $(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, for any anti-reversible traveling wave variation $g=(g_{1},g_{2},g_{3})$, there exists a unique solution ${\mathbb{D}}^{-1}g:=({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}})$ of ${\mathbb{D}}({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}})=g$ where $({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}})$ is a reversible traveling wave variation. Moreover, for any $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\sigma$, $\|{\mathbb{D}}^{-1}g\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\sigma}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({{\mathtt{b}}})+\sigma}^{k_{0},\upsilon}\|g\|_{s_{0}+\sigma}^{k_{0},\upsilon}\big{)}$. 
Finally we conclude that the operator ${\bf T}_{0}:={\bf T}_{0}(i_{0}):=(D{\widetilde{G}}_{\delta})({\varphi},0,0)\circ{\mathbb{D}}^{-1}\circ(DG_{\delta})({\varphi},0,0)^{-1}$ (6.18) is an almost approximate right inverse for ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})$, where ${\widetilde{G}}_{\delta}(\phi,y,{\mathtt{w}},\alpha):=\left(G_{\delta}(\phi,y,{\mathtt{w}}),\alpha\right)$ is the identity on the $\alpha$-component. Arguing exactly as in Theorem 6.6 in [7] we deduce the following. ###### Theorem 6.4. (Almost approximate inverse) Assume (AI). Then there is $\overline{\sigma}:=\overline{\sigma}(\tau,\nu,k_{0})>0$ such that, if (6.1) holds with $\mu=\mu({\mathtt{b}})+\overline{\sigma}$, then, for all $(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$ and for any anti-reversible traveling wave variation $g:=(g_{1},g_{2},g_{3})$, the operator ${\bf T}_{0}$ defined in (6.18) satisfies, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\overline{\sigma}$, $\|{\bf T}_{0}g\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,.$ (6.19) Moreover, the first three components of ${\bf T}_{0}g$ form a reversible traveling wave variation. Finally, ${\bf T}_{0}$ is an almost approximate right inverse of ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})$, namely ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})\circ{\bf T}_{0}-{\rm Id}={\mathcal{P}}(i_{0})+{\mathcal{P}}_{\omega}(i_{0})+{\mathcal{P}}_{\omega}^{\perp}(i_{0})\,,$ where, for any traveling wave variation $g$, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\overline{\sigma}$, $\displaystyle\|{\mathcal{P}}g\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}\upsilon^{-1}\Big{(}\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}$ (6.20) $\displaystyle\qquad+\,\big{(}\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\Big{)}\,,$ (6.21) $\displaystyle\|{\mathcal{P}}_{\omega}g\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}\varepsilon\upsilon^{-4}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,,$ (6.22) $\displaystyle\|{\mathcal{P}}_{\omega}^{\perp}g\|_{s_{0}}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S,b}\upsilon^{-1}K_{\mathtt{n}}^{-b}\left(\|g\|_{s_{0}+\overline{\sigma}+b}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s_{0}+\mu({\mathtt{b}})+b+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\right)\,,\quad\forall\,b>0\,,$ (6.23) $\displaystyle\|{\mathcal{P}}_{\omega}^{\perp}g\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,.$ (6.24) ## 7 The linearized operator in the normal subspace We now write an explicit expression of the linear operator ${\mathcal{L}}_{\omega}$ defined in (6.10). 
As in Lemma 7.1 in [7], since the diffeomorphism $G_{\delta}$ in (6.5) is just a translation along the infinite dimensional normal variable $w$, we have the following structural result. ###### Lemma 7.1. The Hamiltonian operator ${\mathcal{L}}_{\omega}$ defined in (6.10), acting on the normal subspace $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, has the form ${\mathcal{L}}_{\omega}=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}({\mathcal{L}}-\varepsilon JR)|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}\,,$ (7.1) where: 1\. ${\mathcal{L}}$ is the Hamiltonian operator ${\mathcal{L}}:=\omega\cdot\partial_{\varphi}-J\partial_{u}\nabla_{u}{\mathcal{H}}(T_{\delta}(\varphi))\,,$ (7.2) where ${\mathcal{H}}$ is the water waves Hamiltonian in the Wahlén variables defined in (2.9), evaluated at the reversible traveling wave $T_{\delta}(\phi):=\varepsilon A(i_{\delta}(\phi))=\varepsilon A\left(\theta_{0}(\phi),I_{\delta}(\phi),w_{0}(\phi)\right)=\varepsilon v^{\intercal}\left(\theta_{0}(\phi),I_{\delta}(\phi)\right)+\varepsilon w_{0}(\phi)\,,$ (7.3) the torus $i_{\delta}({\varphi}):=(\theta_{0}({\varphi}),I_{\delta}({\varphi}),w_{0}({\varphi}))$ is defined in Lemma 6.2 and $A(\theta,I,w)$, $v^{\intercal}(\theta,I)$ in (2.2); 2\. $R(\phi)$ has the ‘finite rank" form $R(\phi)[h]=\sum_{j=1}^{\nu}\left(h,g_{j}\right)_{L^{2}}\chi_{j}\,,\quad\forall\,h\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,$ (7.4) for functions $g_{j},\chi_{j}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ which satisfy, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$, for all $j=1,\ldots,\nu$, for all $s\geq s_{0}$, $\displaystyle\left\|g_{j}\right\|_{s}^{k_{0},\upsilon}+\left\|\chi_{j}\right\|_{s}^{k_{0},\upsilon}$ $\displaystyle\lesssim_{s}1+\left\|{\mathfrak{I}}_{\delta}\right\|_{s+\sigma}^{k_{0},\upsilon}\,,$ (7.5) $\displaystyle\left\|{\rm d}_{i}g_{j}[\widehat{\imath}]\right\|_{s}+\left\|{\rm d}_{i}\chi_{j}[\widehat{\imath}]\right\|_{s}$ $\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+\sigma}+\left\|\widehat{\imath}\right\|_{s_{0}+\sigma}\left\|{\mathfrak{I}}_{\delta}\right\|_{s+\sigma}\,.$ The operator ${\mathcal{L}}_{\omega}$ is reversible and momentum preserving. In order to compute $dX$ we use the "shape derivative" formula, see e.g. [25], $G^{\prime}(\eta)[{\widehat{\eta}}]\psi:=\lim_{\epsilon\rightarrow 0}\tfrac{1}{\epsilon}\big{(}G(\eta+\epsilon{\widehat{\eta}})\psi-G(\eta)\psi\big{)}=-G(\eta)(B{\widehat{\eta}})-\partial_{x}(V{\widehat{\eta}})\,,$ (7.6) where $B(\eta,\psi):=\frac{G(\eta)\psi+\eta_{x}\psi_{x}}{1+\eta_{x}^{2}}\,,\quad V(\eta,\psi):=\psi_{x}-B(\eta,\psi)\eta_{x}\,.$ (7.7) Then, recalling (2.9), (2.7), (1.6) and (7.6) the operator ${\mathcal{L}}$ in (7.2) is given by $\displaystyle{\mathcal{L}}=\omega\cdot\partial_{\varphi}$ $\displaystyle+\begin{pmatrix}\partial_{x}{\widetilde{V}}+G(\eta)B&-G(\eta)\\\ g+B{\widetilde{V}}_{x}+BG(\eta)B&{\widetilde{V}}\partial_{x}-BG(\eta)\end{pmatrix}$ (7.8) $\displaystyle+\frac{\gamma}{2}\begin{pmatrix}-G(\eta)\partial_{x}^{-1}&0\\\ \partial_{x}^{-1}G(\eta)B-BG(\eta)\partial_{x}^{-1}-\frac{\gamma}{2}\partial_{x}^{-1}G(\eta)\partial_{x}^{-1}&-\partial_{x}^{-1}G(\eta)\end{pmatrix}\,,$ where ${\widetilde{V}}:=V-\gamma\eta\,,$ (7.9) and the functions $B:=B(\eta,\psi)$, $V:=V(\eta,\psi)$ in (7.8)-(7.9) are evaluated at the reversible traveling wave $(\eta,\psi):=WT_{\delta}({\varphi})$ where $T_{\delta}({\varphi})$ is defined in (7.3). Notation. 
In (7.8) and hereafter the function $B$ is identified with the corresponding multiplication operators $h\mapsto Bh$, and, where there is no parenthesis, composition of operators is understood. For example $BG(\eta)B$ means $B\circ G(\eta)\circ B$. ###### Remark 7.2. We consider the operator ${\mathcal{L}}$ in (7.8) acting on (a dense subspace of) the whole $L^{2}({\mathbb{T}})\times L^{2}({\mathbb{T}})$. In particular we extend the operator $\partial_{x}^{-1}$ to act on the whole $L^{2}({\mathbb{T}})$ as in (3.22). The following algebraic properties are a direct consequence of the reversible and space-invariance properties of the water waves equations explained in Section 2 and the fact that the approximate solution $(\eta,\zeta)=T_{\delta}({\varphi})$ is a reversible traveling wave (cfr. Lemma 7.3 in [7]). ###### Lemma 7.3. The functions $(\eta,\zeta)=T_{\delta}({\varphi})$ and $B,{\widetilde{V}}$ defined in (7.7), (7.9) are quasi-periodic traveling waves. The functions $(\eta,\zeta)=T_{\delta}({\varphi})$ are $({\rm even}({\varphi},x),{\rm odd}({\varphi},x))$, $B$ is ${\rm odd}({\varphi},x)$ and ${\widetilde{V}}$ is ${\rm even}({\varphi},x)$. The Hamiltonian operator ${\mathcal{L}}$ is reversible and momentum preserving. For the sequel we will always assume the following ansatz (satisfied by the approximate solutions obtained along the nonlinear Nash-Moser iteration of Section 9): for some constants $\mu_{0}:=\mu_{0}(\tau,\nu)>0$, $\upsilon\in(0,1)$, (cfr. Lemma 6.2) $\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu_{0}}^{k_{0},\upsilon}\,,\ \left\|{\mathfrak{I}}_{\delta}\right\|_{s_{0}+\mu_{0}}^{k_{0},\upsilon}\leq 1\,.$ (7.10) In order to estimate the variation of the eigenvalues with respect to the approximate invariant torus, we need also to estimate the variation with respect to the torus $i({\varphi})$ in another low norm $\left\|\ \right\|_{s_{1}}$ for all Sobolev indexes $s_{1}$ such that $s_{1}+\sigma_{0}\leq s_{0}+\mu_{0}\,,\quad\text{ for some }\ \sigma_{0}:=\sigma_{0}(\tau,\nu)>0\,.$ (7.11) Thus, by (7.10), we have $\left\|{\mathfrak{I}}_{0}\right\|_{s_{1}+\sigma_{0}}^{k_{0},\upsilon}$, $\left\|{\mathfrak{I}}_{\delta}\right\|_{s_{1}+\sigma_{0}}^{k_{0},\upsilon}\leq 1$. The constants $\mu_{0}$ and $\sigma_{0}$ represent the _loss of derivatives_ accumulated along the reduction procedure of the next sections. What is important is that they are independent of the Sobolev index $s$. In the following sections we shall denote by $\sigma:=\sigma(\tau,\nu,k_{0})>0$, $\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},\tau,\nu,k_{0})$, $\sigma_{M}:=\sigma_{M}(k_{0},\tau,\nu)>0$, $\aleph_{M}(\alpha)$ constants (which possibly increase from lemma to lemma) representing losses of derivatives along the finitely many steps of the reduction procedure. ###### Remark 7.4. In the next sections $\mu_{0}:=\mu_{0}(\tau,\nu,M,\alpha)>0$ will depend also on indexes $M,\alpha$, whose maximal values will be fixed depending only on $\tau$ and $\nu$ (and $k_{0}$ which is however considered an absolute constant along the paper). In particular $M$ is fixed in (8.5), whereas the maximal value of $\alpha$ depends on $M$, as explained in Remark 7.16. ###### Remark 7.5. Starting from Section 7.2, we introduce in the estimates upper bounds on the regularity $s\geq s_{0}$. 
We shall control the terms in Sobolev spaces $H^{s}$ with $s_{0}\leq s\leq S-\sigma$, where $\sigma$ denotes a loss of derivatives of the finitely many steps of the reduction (possibly increasing along the steps), whereas $S>s_{0}+k_{0}$ is any finite Sobolev index. The index $S$ has to be taken finite in view of Lemma 7.7 (see also Appendix A). The largest regularity index $S$ will be fixed in (9.3). In particular, it is compatible with the condition (7.11), namely $s_{1}+\sigma_{0}\leq s_{0}+\mu_{0}<S$. As a consequence of Moser composition Lemma 3.2 and (6.3), the Sobolev norm of the function $u=T_{\delta}({\varphi})$ defined in (7.3) satisfies for all $s\geq s_{0}$ $\left\|u\right\|_{s}^{k_{0},\upsilon}=\left\|\eta\right\|_{s}^{k_{0},\upsilon}+\left\|\zeta\right\|_{s}^{k_{0},\upsilon}\leq\varepsilon C(s)\big{(}1+\left\|{\mathfrak{I}}_{0}\right\|_{s}^{k_{0},\upsilon}\big{)}$ (7.12) (the map $A$ defined in (2.2) is smooth). Similarly, using (6.4), $\left\|\Delta_{12}u\right\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{2}-i_{1}\right\|_{s_{1}}\,,\quad\text{ where }\quad\Delta_{12}u:=u(i_{2})-u(i_{1})\,.$ We finally recall that ${\mathfrak{I}}_{0}={\mathfrak{I}}_{0}(\omega,\gamma)$ is defined for all $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and that the functions $B,{\widetilde{V}}$ and $c$ appearing in ${\mathcal{L}}$ in (7.8) are ${\mathcal{C}}^{\infty}$ in $({\varphi},x)$, as $u=(\eta,\zeta)=T_{\delta}({\varphi})$ is. In Sections 7.1-7.6 we are going to make several transformations, whose aim is to conjugate the operator ${\cal L}$ in (7.8) to a constant coefficients Fourier multiplier, up to a pseudo-differential operator of order $-1/2$ plus a remainder that satisfies tame estimates, see ${\cal L}_{8}$ in (7.138). Finally, in Section 7.7 we shall conjugate the restricted operator ${\cal L}_{\omega}$ in (7.1). ### 7.1 Linearized good unknown of Alinhac The first step is to conjugate the linear operator ${\mathcal{L}}$ in (7.8) by the symplectic (Definition 3.18) multiplication matrix operator ${\mathcal{Z}}:=\left(\begin{matrix}{\rm Id}&0\\\ B&{\rm Id}\end{matrix}\right)\ ,\qquad{\mathcal{Z}}^{-1}=\left(\begin{matrix}{\rm Id}&0\\\ -B&{\rm Id}\end{matrix}\right)\,,$ obtaining $\displaystyle{\mathcal{L}}_{1}$ $\displaystyle:={\mathcal{Z}}^{-1}{\mathcal{L}}{\mathcal{Z}}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&-G(\eta)\\\ a&{\widetilde{V}}\partial_{x}\end{pmatrix}-\frac{\gamma}{2}\begin{pmatrix}G(\eta)\partial_{x}^{-1}&0\\\ \frac{\gamma}{2}\partial_{x}^{-1}G(\eta)\partial_{x}^{-1}&\partial_{x}^{-1}G(\eta)\end{pmatrix}\,,$ (7.13) where $a$ is the function $a:=g+{\widetilde{V}}B_{x}+\omega\cdot\partial_{\varphi}B\,.$ (7.14) The matrix ${\mathcal{Z}}$ amounts to introduce, as in [25] and [9, 2], a linearized version of the “good unknown of Alinhac". ###### Lemma 7.6. The maps ${\mathcal{Z}}^{\pm 1}-{\rm Id}$ are ${\mathcal{D}}^{k_{0}}$-tame with tame constants satisfying, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$, for all $s\geq s_{0}$, ${\mathfrak{M}}_{{\mathcal{Z}}^{\pm 1}-{\rm Id}}(s)\,,\ {\mathfrak{M}}_{({\mathcal{Z}}^{\pm 1}-{\rm Id})^{*}}(s)\lesssim_{s}\varepsilon\big{(}1+\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\big{)}\,.$ The function $a$ in (7.14) is a quasi-periodic traveling wave ${\rm even}({\varphi},x)$. 
There is $\sigma:=\sigma(\tau,\nu,k_{0})>0$ such that, for all $s\geq s_{0}$, $\left\|a-g\right\|_{s}^{k_{0},\upsilon}+\|{\widetilde{V}}\|_{s}^{k_{0},\upsilon}+\left\|B\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\big{)}\,.$ (7.15) Moreover, for any $s_{1}$ as in (7.11), $\displaystyle\left\|\Delta_{12}a\right\|_{s_{1}}+\|\Delta_{12}{\widetilde{V}}\|_{s_{1}}+\left\|\Delta_{12}B\right\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$ (7.16) $\displaystyle\|\Delta_{12}({\mathcal{Z}}^{\pm 1})h\|_{s_{1}},\|\Delta_{12}({\mathcal{Z}}^{\pm 1})^{*}h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}}\,.$ (7.17) The operator ${\mathcal{L}}_{1}$ is Hamiltonian, reversible and momentum preserving. ###### Proof. The estimates follow from the expressions of $B,\widetilde{V},a$ in (7.7), (7.9), (7.14), the composition estimates of Lemma 3.2, (3.7) and the bounds for the Dirichlet-Neumann operator in Lemma 3.10. Since $B$ is an ${\rm odd}(\varphi,x)$ quasi-periodic traveling wave, the matrix operator ${\mathcal{Z}}$ is reversibility and momentum preserving (Definitions 3.19 and 3.22). ∎ ### 7.2 Almost-straightening of the first order transport operator We now write the operator ${\mathcal{L}}_{1}$ in (7.13) as ${\mathcal{L}}_{1}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&0\\\ 0&{\widetilde{V}}\partial_{x}\end{pmatrix}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-G(0)\\\ a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}+{\bf R}_{1}\,,$ (7.18) where, using the decomposition (3.32) of the Dirichlet-Neumann operator, ${\bf R}_{1}:=-\begin{pmatrix}\frac{\gamma}{2}{\mathcal{R}}_{G}(\eta)\partial_{x}^{-1}&{\mathcal{R}}_{G}(\eta)\\\ \left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}{\mathcal{R}}_{G}(\eta)\partial_{x}^{-1}&\frac{\gamma}{2}\partial_{x}^{-1}{\mathcal{R}}_{G}(\eta)\end{pmatrix}$ (7.19) is a small remainder in ${\rm OP}S^{-\infty}$. The aim of this section is to conjugate the variable coefficients quasi-periodic transport operator ${\mathcal{L}}_{\rm TR}:=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&0\\\ 0&{\widetilde{V}}\partial_{x}\end{pmatrix}$ (7.20) to a constant coefficients transport operator $\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\,\partial_{y}$, up to an exponentially small remainder, see (7.28)-(7.29), where $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and the scale $(N_{{\mathtt{n}}})_{{\mathtt{n}}\in{\mathbb{N}}_{0}}$ is defined, for $N_{0}>1$, by $N_{{\mathtt{n}}}:=N_{0}^{\chi^{{\mathtt{n}}}}\,,\quad\chi=3/2\,,\quad N_{-1}:=1\,.$ (7.21) Such a small remainder is left because we assume only finitely many non-resonance conditions, see (7.26). This enables us to deduce Lemma 7.9, and then to formulate the non-resonance condition (5.15), stated in terms of the “final” function ${\mathtt{m}}_{1}^{\infty}(\omega,\gamma)$, which implies (7.26) at any step of the nonlinear Nash-Moser iteration of Section 9. 
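To make concrete how fast the scale (7.21) grows, and hence how small the remainder bound in (7.29) becomes along the iteration, the following minimal numerical sketch may help. It is purely illustrative: the values of $N_{0}$, $\tau_{1}$ and $\varepsilon$ below are sample choices, not the ones fixed by the scheme, and Python is used only as a convenient calculator.

```python
# Illustration only: super-exponential growth of the scale N_n = N_0^(chi^n), chi = 3/2 (cf. (7.21)),
# and the resulting size of the remainder bound eps * N_{n-1}^{-a} appearing in (7.29).
# N0, tau1 and eps are sample values, not those fixed in the paper.
chi = 1.5
N0 = 10.0
tau1 = 5.0               # sample value; (7.24) sets tau1 := k0 + (k0 + 1) * tau
a = 3.0 * (tau1 + 1.0)   # the exponent a := 3 * (tau1 + 1) of (7.24)
eps = 1e-3               # sample size of the perturbation

def N(n):
    """Scale of (7.21): N_n = N0^(chi^n) for n >= 0, with N_{-1} := 1."""
    return 1.0 if n == -1 else N0 ** (chi ** n)

for n in range(6):
    print(f"n = {n}:  N_n = {N(n):.3e},  eps * N_(n-1)^(-a) = {eps * N(n - 1) ** (-a):.3e}")
```

The decay in $\overline{\mathtt{n}}$ is super-exponential, $N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}}=\exp\big{(}-{\mathtt{a}}\,(\log N_{0})\,\chi^{\overline{\mathtt{n}}-1}\big{)}$ for $\overline{\mathtt{n}}\geq 1$, which is the sense in which the remainder in (7.27)-(7.29) is exponentially small.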
In the next lemma we conjugate ${\mathcal{L}}_{\rm TR}$ by a _symplectic_ (Definition 3.18) transformation ${\mathcal{E}}:=\begin{pmatrix}(1+\beta_{x}({\varphi},x))\circ{\mathcal{B}}&0\\\ 0&{\mathcal{B}}\end{pmatrix}\,,\quad{\mathcal{E}}^{-1}:=\begin{pmatrix}{\mathcal{B}}^{-1}\circ(1+\beta_{x}({\varphi},x))^{-1}&0\\\ 0&{\mathcal{B}}^{-1}\end{pmatrix}$ (7.22) where the composition operator $({\mathcal{B}}u)({\varphi},x):=u\left({\varphi},x+\beta({\varphi},x)\right)$ (7.23) is induced by a ${\varphi}$-dependent diffeomorphism $y=x+\beta({\varphi},x)$ of the torus ${\mathbb{T}}_{x}$, for some small quasi-periodic traveling wave $\beta:{\mathbb{T}}_{\varphi}^{\nu}\times{\mathbb{T}}_{x}\to{\mathbb{R}}$, ${\rm odd}({\varphi},x)$. Let ${\mathtt{b}}:=[{\mathtt{a}}]+2\in{\mathbb{N}}\,,\quad{\mathtt{a}}:=3(\tau_{1}+1)\geq 1\,,\quad\tau_{1}:=k_{0}+(k_{0}+1)\tau\,.$ (7.24) ###### Lemma 7.7. (Almost-Straightening of the transport operator) There exists $\tau_{2}(\tau,\nu)>\tau_{1}(\tau,\nu)+1+{\mathtt{a}}$ such that, for all $S>s_{0}+k_{0}$, there are $N_{0}:=N_{0}(S,{\mathtt{b}})\in{\mathbb{N}}$ and $\updelta:=\updelta(S,{\mathtt{b}})\in(0,1)$ such that, if $N_{0}^{\tau_{2}}\varepsilon\upsilon^{-1}<\updelta$ the following holds true. For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$: 1\. There exist a constant ${\mathtt{m}}_{1,\overline{\mathtt{n}}}:={\mathtt{m}}_{1,\overline{\mathtt{n}}}(\omega,\gamma)\in{\mathbb{R}}$, where ${\mathtt{m}}_{1,0}=0$, defined for any $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, and a quasi-periodic traveling wave $\beta({\varphi},x):=\beta_{\overline{\mathtt{n}}}({\varphi},x)$, ${\rm odd}({\varphi},x)$, satisfying, for some $\sigma=\sigma(\tau,\nu,k_{0})>0$, the estimates $|{\mathtt{m}}_{1,\overline{\mathtt{n}}}|^{k_{0},\upsilon}\lesssim\varepsilon\,,\quad\|\beta\|_{s}^{k_{0},\upsilon}\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,,\quad\forall\,s_{0}\leq s\leq S\,,$ (7.25) independently of $\overline{\mathtt{n}}$; 2\. For any $(\omega,\gamma)$ in $\displaystyle{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$ $\displaystyle:={\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)$ (7.26) $\displaystyle:=\Big{\\{}(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\,:\,|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\vec{\jmath})\cdot\ell|\geq 2\upsilon\braket{\ell}^{-\tau}\,\ \forall\,0<|\ell|\leq N_{\overline{\mathtt{n}}}\Big{\\}}$ the operator ${\mathcal{L}}_{\rm TR}$ in (7.20) is conjugated to ${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm TR}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\,\partial_{y}+{\bf P}_{2}^{\perp}\,,$ (7.27) where ${\bf P}_{2}^{\perp}:=\begin{pmatrix}\partial_{y}p_{\overline{\mathtt{n}}}&0\\\ 0&p_{\overline{\mathtt{n}}}\partial_{y}\end{pmatrix}\,,$ (7.28) and the real, quasi-periodic traveling wave function $p_{\overline{\mathtt{n}}}({\varphi},y)$, ${\rm even}({\varphi},y)$, satisfies, for some $\sigma=\sigma(\tau,\nu,k_{0})>0$ and for any $s_{0}\leq s\leq S$, $\|p_{\overline{\mathtt{n}}}\|_{s}^{k_{0},\upsilon}\lesssim_{s,{\mathtt{b}}}\varepsilon\,N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,;$ (7.29) 3\. 
The operators ${\mathcal{E}}^{\pm}$ are ${\mathcal{D}}^{k_{0}}$-$(k_{0}+1)$-tame, the operators ${\mathcal{E}}^{\pm 1}-{\rm Id}$, $({\mathcal{E}}^{\pm 1}-{\rm Id})^{*}$ are ${\mathcal{D}}^{k_{0}}$-$(k_{0}+2)$-tame with tame constants satisfying, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$ and for all $s_{0}\leq s\leq S-\sigma$, $\displaystyle{\mathfrak{M}}_{{\mathcal{E}}^{\pm 1}}(s)\lesssim_{S}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\,,\ {\mathfrak{M}}_{{\mathcal{E}}^{\pm 1}-{\rm Id}}(s)+{\mathfrak{M}}_{\left({\mathcal{E}}^{\pm 1}-{\rm Id}\right)^{*}}(s)\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,.$ (7.30) 4\. Furthermore, for any $s_{1}$ as in (7.11), $\displaystyle|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}|\lesssim\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\quad\|\Delta_{12}\beta\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\upsilon^{-1}\|i_{1}-i_{2}\|_{s_{1}+\sigma+{\mathtt{b}}}\,,$ (7.31) $\displaystyle\|\Delta_{12}({\mathcal{A}})h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma+{\mathtt{b}}}\left\|h\right\|_{s_{1}+\sigma+{\mathtt{b}}}\,,\quad{\mathcal{A}}\in\\{{\mathcal{E}}^{\pm 1},({\mathcal{E}}^{\pm 1})^{*}\\}\,.$ (7.32) ###### Proof. We apply Theorem A.2 and Corollary A.4 to the transport operator $X_{0}=\omega\cdot\partial_{\varphi}+\widetilde{V}\partial_{x}$, which has the form (A.1) with $p_{0}=\widetilde{V}$. By (7.15) and (7.10), the smallness conditions (A.3) and (A.10) hold for $N_{0}^{\tau_{2}}\varepsilon\upsilon^{-1}$ sufficiently small. Therefore there exist a constant ${\mathtt{m}}_{1,\overline{\mathtt{n}}}\in{\mathbb{R}}$ and a quasi-periodic traveling wave $\beta({\varphi},x):=\beta_{\overline{\mathtt{n}}}({\varphi},x)$, ${\rm odd}({\varphi},x)$, such that, for any $(\omega,\gamma)$ in ${\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)\subseteq{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,\rm T}\subseteq{\mathtt{\Lambda}}_{\overline{\mathtt{n}}}^{\upsilon,\rm T}$ (see Corollary A.3) we have ${\mathcal{B}}_{\overline{\mathtt{n}}}^{-1}(\omega\cdot\partial_{\varphi}+\widetilde{V}\partial_{x}){\mathcal{B}}_{\overline{\mathtt{n}}}=\omega\cdot\partial_{\varphi}+({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}}({\varphi},y))\partial_{y}$ where the function $p_{\overline{\mathtt{n}}}$ satisfies (7.29) by (A.5) and (7.15). The estimates (A.6), (A.15), (7.15) imply (7.25) and (7.30). The conjugated operator of ${\mathcal{L}}_{\rm TR}$ in (7.20) is ${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm TR}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}A_{1}&0\\\ 0&({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}})\partial_{y}\end{pmatrix}$ where $\omega\cdot\partial_{\varphi}+A_{1}={\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\big{(}\omega\cdot\partial_{\varphi}+\partial_{x}{\widetilde{V}}\big{)}(1+\beta_{x}){\mathcal{B}}$. Since ${\mathcal{L}}_{\rm TR}$ is Hamiltonian (Definition 3.18), and the map ${\mathcal{E}}$ is symplectic, we have that ${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm TR}{\mathcal{E}}$ is Hamiltonian as well. In particular $A_{1}=-(({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}})\partial_{y})^{*}={\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{y}+\partial_{y}p_{\overline{\mathtt{n}}}$. This proves (7.27)-(7.28). The estimates (7.31)-(7.32) follow by (A.11)-(A.12), the bound for $\|\Delta_{12}\beta_{\overline{\mathtt{n}}}\|_{s_{1}}$ in Corollary A.4 and (7.16)-(7.17). 
∎ ###### Remark 7.8. Actually, for any $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$ in (7.26), Theorem A.2 and Corollary A.3 would imply also the conjugation of ${\mathcal{L}}_{\rm TR}$ to the operator $\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}+1}\partial_{y}$ for some ${\mathtt{m}}_{1,\overline{\mathtt{n}}+1}\in{\mathbb{R}}$, up to a remainder ${\bf P}_{2}^{\perp}=O(\varepsilon N_{\overline{\mathtt{n}}}^{-{\mathtt{a}}})$. For simplicity we stated only the conjugation in (7.27). We shall use the non-resonance condition in (7.26) also later in Sections 7.5, 7.6 . The next lemma is needed in order to prove the inclusion of the Cantor sets associated to two nearby approximate solutions. ###### Lemma 7.9. Let $i_{1},i_{2}$ be close enough and $0<2\upsilon-\rho<2\upsilon<1$. Then $\varepsilon C(s_{1})N_{\overline{\mathtt{n}}}^{\tau+1}\|i_{1}-i_{2}\|_{s_{1}+\sigma}\leq\rho\quad\Rightarrow\quad{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)(i_{1})\subseteq{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon-\rho,\tau)(i_{2})\,.$ ###### Proof. For any $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)(i_{1})$, using also (7.31), we have, for any $\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$, $|\ell|\leq N_{\overline{\mathtt{n}}}$, $\displaystyle|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}(i_{2})\vec{\jmath})\cdot\ell|$ $\displaystyle\geq|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}(i_{1})\vec{\jmath})\cdot\ell|-C|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}||\ell|$ $\displaystyle\geq\frac{2\upsilon}{\braket{\ell}^{\tau}}-C(s_{1})\varepsilon N_{\overline{\mathtt{n}}}\|i_{1}-i_{2}\|_{s_{1}+\sigma}\geq\frac{2\upsilon-\rho}{\braket{\ell}^{\tau}}\,.$ We conclude that $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon-\rho,\tau)(i_{2})$. ∎ We now conjugate the whole operator ${\mathcal{L}}_{1}$ in (7.18)-(7.19) by the operator ${\mathcal{E}}$ in (7.22). 
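Before performing this computation, it is perhaps worth recording, purely for the reader's orientation, the elementary conjugation identity used repeatedly below: for a ${\varphi}$-dependent invertible block-diagonal map ${\mathcal{E}}({\varphi})={\rm diag}({\mathcal{E}}_{1},{\mathcal{E}}_{2})$ one has ${\mathcal{E}}^{-1}\Big{(}\omega\cdot\partial_{\varphi}+\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\Big{)}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+{\mathcal{E}}^{-1}(\omega\cdot\partial_{\varphi}{\mathcal{E}})+\begin{pmatrix}{\mathcal{E}}_{1}^{-1}A\,{\mathcal{E}}_{1}&{\mathcal{E}}_{1}^{-1}B\,{\mathcal{E}}_{2}\\\ {\mathcal{E}}_{2}^{-1}C\,{\mathcal{E}}_{1}&{\mathcal{E}}_{2}^{-1}D\,{\mathcal{E}}_{2}\end{pmatrix}\,,$ where the term ${\mathcal{E}}^{-1}(\omega\cdot\partial_{\varphi}{\mathcal{E}})$ is generated by the ${\varphi}$-dependence of the transformation. For the operator ${\mathcal{E}}$ in (7.22) this transport contribution is precisely the one already handled in Lemma 7.7, see (7.27), so that only the entrywise conjugation of the matrix part of ${\mathcal{L}}_{1}$ remains to be computed.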
We first compute the conjugation of the matrix $\displaystyle{\mathcal{E}}^{-1}$ $\displaystyle\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-G(0)\\\ a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}{\mathcal{E}}$ $\displaystyle=\begin{pmatrix}-\frac{\gamma}{2}{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}&-{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0){\mathcal{B}}\\\ {\mathcal{B}}^{-1}\big{(}a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}(1+\beta_{x}){\mathcal{B}}&-\frac{\gamma}{2}{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}}\end{pmatrix}\,.$ The multiplication operator by $a({\varphi},x)$ is transformed into the multiplication operator by the function ${\mathcal{B}}^{-1}a(1+\beta_{x}){\mathcal{B}}={\mathcal{B}}^{-1}\big{(}a(1+\beta_{x})\big{)}\,.$ (7.33) We write the Dirichlet-Neumann operator $G(0)$ in (1.9) as $G(0)=G(0,{\mathtt{h}})=\partial_{x}{\mathcal{H}}T({\mathtt{h}})\,,$ (7.34) where ${\mathcal{H}}$ is the Hilbert transform defined in (3.21) and $T({\mathtt{h}}):=\begin{cases}\tanh({\mathtt{h}}|D|)={\rm Id}+{\rm Op}(r_{\mathtt{h}})&\text{ if }{\mathtt{h}}<+\infty\,,\qquad r_{{\mathtt{h}}}(\xi):=-\frac{2}{1+e^{2{\mathtt{h}}|\xi|\chi(\xi)}}\in S^{-\infty}\,,\\\ {\rm Id}&\text{ if }{\mathtt{h}}=\infty\,.\end{cases}$ (7.35) We have the conjugation formula (see formula (7.42) in [2]) ${\mathcal{B}}^{-1}G(0){\mathcal{B}}=\left\\{{\mathcal{B}}^{-1}(1+\beta_{x})\right\\}G(0)+{\mathcal{R}}_{1}\,,$ (7.36) where ${\mathcal{R}}_{1}:=\left\\{{\mathcal{B}}^{-1}(1+\beta_{x})\right\\}\partial_{y}\big{(}{\mathcal{H}}\left({\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})\right)+\left({\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}\right)({\mathcal{B}}^{-1}T({\mathtt{h}}){\mathcal{B}})\big{)}\,.$ The operator ${\mathcal{R}}_{1}$ is in ${\rm OP}S^{-\infty}$ because both ${\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})$ and ${\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}$ are in ${\rm OP}S^{-\infty}$ and there is ${\sigma}>0$ such that, for any $m\in{\mathbb{N}}$, $\alpha\in{\mathbb{N}}_{0}$ and $s\geq s_{0}$, $\displaystyle\|{\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|\beta\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,,$ (7.37) $\displaystyle\|{\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|\beta\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,.$ The first estimate is given in Lemmata 2.36 and 2.32 in [9], whereas the second one follows from the fact that $r_{\mathtt{h}}\in S^{-\infty}$ (see (7.35)), Lemma 2.18 in [2] and Lemmata 2.34 and 2.32 in [9]. Therefore by (7.36) we obtain ${\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0){\mathcal{B}}=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}{\mathcal{B}}^{-1}G(0){\mathcal{B}}=G(0)+{\mathcal{R}}_{B}\,,$ (7.38) where ${\mathcal{R}}_{B}:=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}\,{\mathcal{R}}_{1}\,.$ (7.39) Next we transform $G(0)\partial_{x}^{-1}$. 
By (7.34) and using the identities ${\mathcal{H}}\partial_{x}\partial_{x}^{-1}={\mathcal{H}}$ and ${\mathcal{H}}T({\mathtt{h}})=\partial_{y}^{-1}G(0)$ on the periodic functions, we have that $\displaystyle{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}=G(0)\partial_{y}^{-1}+{\mathcal{R}}_{A}$ (7.40) $\displaystyle{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}}=\partial_{y}^{-1}G(0)+{\mathcal{R}}_{D}\,,$ where $\displaystyle{\mathcal{R}}_{D}$ $\displaystyle=({\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}})({\mathcal{B}}^{-1}T({\mathtt{h}}){\mathcal{B}})+{\mathcal{H}}\big{(}{\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})\big{)}\,,$ (7.41) $\displaystyle{\mathcal{R}}_{A}$ $\displaystyle=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}\big{[}{\mathcal{H}}T({\mathtt{h}}),\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}-1\big{]}$ $\displaystyle\ \ +\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}{\mathcal{R}}_{D}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}\,.$ The operator ${\mathcal{R}}_{D}$ is in ${\rm OP}S^{-\infty}$ by (7.37), (7.35). Also ${\mathcal{R}}_{A}$ is in ${\rm OP}S^{-\infty}$ using that, by Lemma 2.35 of [9] and (7.35), there is ${\sigma}>0$ such that, for any $m\in{\mathbb{N}}$, $s\geq s_{0}$, and $\alpha\in{\mathbb{N}}_{0}$, $\|[{\mathcal{H}}T({\mathtt{h}}),{\widetilde{a}}]\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|{\widetilde{a}}\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,.$ (7.42) Finally we conjugate $\partial_{x}^{-1}G(0)\partial_{x}^{-1}$. By the Egorov Proposition 3.9 applied to $\partial_{x}^{-1}$, we have that, for any $N\in{\mathbb{N}}$, ${\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}={\mathcal{B}}^{-1}\partial_{x}^{-1}{\mathcal{B}}\,\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}=\partial_{y}^{-1}+P^{(1)}_{-2,N}(\varphi,x,D)+{\mathtt{R}}_{N}\,,$ (7.43) where $P^{(1)}_{-2,N}(\varphi,x,D)\in{\rm OP}S^{-2}$ is given by $P^{(1)}_{-2,N}(\varphi,x,D):=\big{[}\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\},\partial_{y}^{-1}\big{]}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}+\sum_{j=1}^{N}p_{-1-j}\partial_{y}^{-1-j}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}$ with functions $p_{-1-j}(\lambda;\varphi,y)$, $j=0,\ldots,N$, satisfying (3.30) and ${\mathtt{R}}_{N}$ is a regularizing operator satisfying the estimate (3.31). So, using (7.40) and (7.43), we obtain $\displaystyle{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}$ $\displaystyle=({\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}})({\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}})$ (7.44) $\displaystyle=\partial_{y}^{-1}G(0)\partial_{y}^{-1}+P_{-2,N}^{(2)}+{\mathtt{R}}_{2,N}$ where $\displaystyle P_{-2,N}^{(2)}$ $\displaystyle:=\partial_{y}^{-1}G(0)P^{(1)}_{-2,N}(\varphi,x,D)\in{\rm OP}S^{-2}$ (7.45) and ${\mathtt{R}}_{2,N}$ is the regularizing operator ${\mathtt{R}}_{2,N}:={\mathcal{R}}_{D}({\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}})+G(0)\partial_{y}^{-1}{\mathtt{R}}_{N}\,.$ (7.46) In conclusion, by Lemma 7.7, (7.33), (7.38), (7.40) and (7.44) we obtain the following lemma, which summarizes the main result of this section. ###### Lemma 7.10. Let $N\in{\mathbb{N}}$. 
For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and for all $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$, the operator ${\mathcal{L}}_{1}$ in (7.18) is conjugated to the real, Hamiltonian, reversible and momentum preserving operator $\displaystyle{\mathcal{L}}_{2}:={\mathcal{E}}^{-1}{\mathcal{L}}_{1}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{y}\,+$ $\displaystyle\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{y}^{-1}&-G(0)\\\ a_{1}-\left(\frac{\gamma}{2}\right)^{2}\partial_{y}^{-1}G(0)\partial_{y}^{-1}&-\frac{\gamma}{2}\partial_{y}^{-1}G(0)\end{pmatrix}$ (7.47) $\displaystyle+\begin{pmatrix}0&0\\\ -\left(\frac{\gamma}{2}\right)^{2}P_{-2,N}^{(2)}&0\end{pmatrix}+{\bf R}_{2}^{\Psi}+{\bf T}_{2,N}+{\bf P}_{2}^{\perp}\,,$ defined for any $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where: 1\. The constant ${\mathtt{m}}_{1,\overline{\mathtt{n}}}={\mathtt{m}}_{1,\overline{\mathtt{n}}}(\omega,\gamma)\in{\mathbb{R}}$ satisfies $|{\mathtt{m}}_{1,\overline{\mathtt{n}}}|^{k_{0},\upsilon}\lesssim\varepsilon$, independently on $\overline{\mathtt{n}}$; 2\. The real quasi-periodic traveling wave $a_{1}:={\mathcal{B}}^{-1}\big{(}a(1+\beta_{x})\big{)}$, ${\rm even}({\varphi},x)$, satisfies, for some $\sigma:=\sigma(k_{0},\tau,\nu)>0$ and for all $s_{0}\leq s\leq S-\sigma$, $\|a_{1}-g\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$ (7.48) 3\. The operator $P_{-2,N}^{(2)}$ is a pseudodifferential operator in ${\rm OP}S^{-2}$, reversibility and momentum preserving, and satisfies, for some $\sigma_{N}:=\sigma_{N}(\tau,\nu,N)>0$, for finitely many $0\leq\alpha\leq\alpha(M)$ (fixed in Remark 7.16) and for all $s_{0}\leq s\leq S-\sigma_{N}-\alpha$, $\|P_{-2,N}^{(2)}\|_{-2,s,\alpha}^{k_{0},\upsilon}\lesssim_{s,N,\alpha}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\alpha}^{k_{0},\upsilon})\,;$ (7.49) 4\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with $|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with $n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+2$, the operator $\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}({\bf R}_{2}^{\Psi}(\varphi)+{\bf T}_{2,N}({\varphi}))\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame with a tame constant satisfying, for some $\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},k_{0},\tau,\nu)>0$ and for any $s_{0}\leq s\leq S-\sigma_{N}({\mathtt{q}}_{0})$, ${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}({\bf R}_{2}^{\Psi}(\varphi)+{\bf T}_{2,N}({\varphi}))\langle D\rangle^{n_{2}}}(s)\lesssim_{S,N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\big{)}\,;$ (7.50) 5\. The operator ${\bf P}_{2}^{\perp}$ is defined in (7.28) and the function $p_{\overline{\mathtt{n}}}$ satisfies (7.29); 6\. 
Furthermore, for any $s_{1}$ as in (7.11), finitely many $0\leq\alpha\leq\alpha(M)$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with $\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and $n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq N-{\mathtt{q}}_{0}+1$, $\displaystyle|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}|\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\ \|\Delta_{12}a_{1}\|_{s_{1}}\lesssim\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$ (7.51) $\displaystyle\|\Delta_{12}P_{-2,N}^{(2)}\|_{-2,s_{1},\alpha}\lesssim_{s_{1},N,\alpha}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\alpha}\,,$ (7.52) $\displaystyle\left\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}({\bf R}_{2}^{\Psi}(\varphi)+{\bf T}_{2,N}({\varphi}))\braket{D}^{n_{2}}\right\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\,.$ (7.53) ###### Proof. Item 1 follows by Lemma 7.7. The function $a_{1}$ satisfies (7.48) by (7.14), (3.7), (7.15), (7.30), (7.25). The estimate (7.49) follows by (7.45), Proposition 3.9 and Lemmata 3.5, 3.6, 3.8, 7.7. The operators ${\bf R}_{2}^{\Psi}$ and ${\bf T}_{2,N}$ in (7.47) are ${\bf R}_{2}^{\Psi}:=-\begin{pmatrix}\frac{\gamma}{2}{\mathcal{R}}_{A}&{\mathcal{R}}_{B}\\\ 0&\frac{\gamma}{2}{\mathcal{R}}_{D}\end{pmatrix}+{\mathcal{E}}^{-1}{\bf R}_{1}{\mathcal{E}}\,,\qquad{\bf T}_{2,N}:=-\left(\frac{\gamma}{2}\right)^{2}\begin{pmatrix}0&0\\\ {\mathtt{R}}_{2,N}&0\end{pmatrix}\,,$ where ${\mathcal{R}}_{B}$, ${\mathcal{R}}_{A}$, ${\mathcal{R}}_{D}$, are defined in (7.39), (7.41), and ${\bf R}_{1}$, ${\mathtt{R}}_{2,N}$ in (7.19), (7.46). Thus the estimate (7.50) holds by Lemmata 3.12, 3.13, 7.7, (7.37), (7.42), Proposition 3.9, Lemma 3.10, (7.25) and Lemmata 2.34, 2.32 in [9]. The estimates (7.51)-(7.53) are proved similarly. ∎ ### 7.3 Symmetrization of the order $1/2$ The goal of this section is to symmetrize the order $1/2$ of the quasi- periodic Hamiltonian operator ${\mathcal{L}}_{2}$ in (7.47). From now on, we neglect the contribution of the operator ${\bf P}_{2}^{\perp}$, which will be conjugated in Section 7.7. For simplicity of notation we denote such operator ${\mathcal{L}}_{2}$ as well. Step 1: We first conjugate the operator ${\mathcal{L}}_{2}$ in (7.47), where we relabel the space variable $y\rightsquigarrow x$, by the real, symplectic, reversibility preserving and momentum preserving transformation ${\widetilde{{\mathcal{M}}}}:=\begin{pmatrix}\Lambda&0\\\ 0&\Lambda^{-1}\end{pmatrix}\,,\quad{\widetilde{{\mathcal{M}}}}^{-1}:=\begin{pmatrix}\Lambda^{-1}&0\\\ 0&\Lambda\end{pmatrix}\,,$ (7.54) where $\Lambda\in{\rm OP}S^{\frac{1}{4}}$ is the Fourier multiplier $\Lambda:=\tfrac{1}{\sqrt{g}}\pi_{0}+M(D)\,,\quad\text{with inverse}\quad\Lambda^{-1}:=\sqrt{g}\pi_{0}+M(D)^{-1}\in{\rm OP}S^{-\frac{1}{4}}\,,$ (7.55) with $\pi_{0}$ defined in (3.23) and (cfr. (2.16)) $M(D):=G(0)^{\frac{1}{4}}\big{(}g-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}^{-\frac{1}{4}}\in{\rm OP}S^{\frac{1}{4}}\,.$ (7.56) We have the identities $\Lambda^{-1}G(0)\Lambda^{-1}=\omega(\gamma,D)$ and $\Lambda\big{(}g-\big{(}\tfrac{\gamma}{2}\big{)}^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda=\Lambda^{-1}G(0)\Lambda^{-1}+\pi_{0}=\omega(\gamma,D)+\pi_{0}\,,$ (7.57) where $\omega(\gamma,D)\in{\rm OP}S^{\frac{1}{2}}$ is defined in (2.18). 
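As a sanity check, which we record only for the reader's convenience, the first identity in (7.57) can be verified directly at the level of symbols. On the nonzero Fourier modes all the operators involved are Fourier multipliers, hence commute; since $\partial_{x}^{-1}$ has symbol $({\rm i}\xi)^{-1}$, the operator $g-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}$ has symbol $g+\tfrac{\gamma^{2}}{4}\tfrac{G(0;\xi)}{\xi^{2}}$, and therefore, by (7.56), $\Lambda^{-1}G(0)\Lambda^{-1}$ has symbol $M(\xi)^{-2}G(0;\xi)=G(0;\xi)^{-\frac{1}{2}}\big{(}g+\tfrac{\gamma^{2}}{4}\tfrac{G(0;\xi)}{\xi^{2}}\big{)}^{\frac{1}{2}}G(0;\xi)=\sqrt{G(0;\xi)\big{(}g+\tfrac{\gamma^{2}}{4}\tfrac{G(0;\xi)}{\xi^{2}}\big{)}}\,,$ which is precisely the symbol of the linear dispersion operator $\omega(\gamma,D)$ (cfr. (2.18); an explicit formula for $\omega(\gamma,\xi)$ is recalled in the proof of Lemma 7.12 below), while the projector $\pi_{0}$ in (7.55) takes care of the zero mode. The second identity in (7.57) follows by the same computation.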
By (7.47) we compute $\displaystyle{\mathcal{L}}_{3}:={\widetilde{{\mathcal{M}}}}^{-1}{\mathcal{L}}_{2}{\widetilde{{\mathcal{M}}}}=$ $\displaystyle\ \omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-\Lambda^{-1}G(0)\Lambda^{-1}\\\ \Lambda\big{(}a_{1}-(\frac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda&-\frac{\gamma}{2}G(0)\partial_{x}^{-1}\end{pmatrix}$ (7.58) $\displaystyle+\begin{pmatrix}0&0\\\ -(\frac{\gamma}{2})^{2}\Lambda P_{-2,N}^{(2)}\Lambda&0\end{pmatrix}+{\widetilde{{\mathcal{M}}}}^{-1}{\bf R}_{2}^{\Psi}{\widetilde{{\mathcal{M}}}}+{\widetilde{{\mathcal{M}}}}^{-1}{\bf T}_{2,N}{\widetilde{{\mathcal{M}}}}\,.$ By (7.57), (7.55) and (7.56), we get $\displaystyle\Lambda\big{(}a_{1}-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda=\Lambda\big{(}g-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda+\Lambda(a_{1}-g)\Lambda$ (7.59) $\displaystyle\ \ \ \ \ \ \ =\omega(\gamma,D)+(a_{1}-g)\Lambda^{2}+[\Lambda,a_{1}]\Lambda+\pi_{0}$ $\displaystyle\ \ \ \ \ \ \ =\big{(}1+\tfrac{a_{1}-g}{g}\big{)}\omega(\gamma,D)+\tfrac{a_{1}-g}{g}\big{(}g\Lambda^{2}-\omega(\gamma,D)\big{)}+[\Lambda,a_{1}]\Lambda+\pi_{0}$ $\displaystyle\ \ \ \ \ \ \ =a_{2}^{2}\omega(\gamma,D)+\tfrac{a_{1}-g}{g}(\tfrac{\gamma}{2})^{2}M(D)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}+[\Lambda,a_{1}]\Lambda+\pi_{0}+\tfrac{a_{1}-g}{g}\pi_{0}$ where $a_{2}$ is the real quasi-periodic traveling wave function (with $a_{1}$ defined in Lemma 7.10) $a_{2}:=\sqrt{\tfrac{a_{1}}{g}}=\sqrt{1+\tfrac{a_{1}-g}{g}}\,,\quad{\rm even}({\varphi},x)\,\,.$ (7.60) Therefore, by (7.58), (7.57), (7.59) we obtain $\displaystyle{\mathcal{L}}_{3}$ $\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-\omega(\gamma,D)\\\ a_{2}\omega(\gamma,D)a_{2}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}+\begin{pmatrix}0&0\\\ \pi_{0}&0\end{pmatrix}$ (7.61) $\displaystyle+\begin{pmatrix}0&0\\\ C_{3}&0\end{pmatrix}+{\bf R}_{3}^{\Psi}+{\bf T}_{3,N}\,,$ where $C_{3}:=a_{2}[a_{2},\omega(\gamma,D)]+\tfrac{a_{1}-g}{g}(\tfrac{\gamma}{2})^{2}M(D)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}+[\Lambda,a_{1}]\Lambda-(\tfrac{\gamma}{2})^{2}\Lambda P_{-2,N}^{(2)}\Lambda$ (7.62) is in ${\rm OP}S^{-\frac{1}{2}}$ and ${\bf R}_{3}^{\Psi}:={\widetilde{{\mathcal{M}}}}^{-1}{\bf R}_{2}^{\Psi}{\widetilde{{\mathcal{M}}}}+\begin{pmatrix}0&0\\\ (\tfrac{a_{1}}{g}-1)\pi_{0}&0\end{pmatrix}\,,\quad{\bf T}_{3,N}:={\widetilde{{\mathcal{M}}}}^{-1}{\bf T}_{2,N}{\widetilde{{\mathcal{M}}}}\,.$ (7.63) The operator ${\mathcal{L}}_{3}$ in (7.61) is Hamiltonian, reversible and momentum preserving. Step 2: We now conjugate the operator ${\mathcal{L}}_{3}$ in (7.61) with the symplectic matrix of multiplication operators ${\mathcal{Q}}:=\begin{pmatrix}q&0\\\ 0&q^{-1}\end{pmatrix}\ ,\qquad{\mathcal{Q}}^{-1}:=\begin{pmatrix}q^{-1}&0\\\ 0&q\end{pmatrix}\,,$ where $q$ is a real function, close to $1$, to be determined, see (7.69). 
We have that $\displaystyle{\mathcal{L}}_{4}:={\mathcal{Q}}^{-1}{\mathcal{L}}_{3}{\mathcal{Q}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}A&B\\\ C&D\end{pmatrix}+{\mathcal{Q}}^{-1}({\bf R}_{3}^{\Psi}+{\bf T}_{3,N}){\mathcal{Q}}\,,$ (7.64) where (actually $D=-A^{*}$, see Definition 3.18) $\displaystyle A:=-\tfrac{\gamma}{2}q^{-1}G(0)\partial_{x}^{-1}q+{\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}+q^{-1}(\omega\cdot\partial_{\varphi}q)\,,$ (7.65) $\displaystyle B:=-q^{-1}\omega(\gamma,D)q^{-1}\,,$ (7.66) $\displaystyle C:=qa_{2}\omega(\gamma,D)a_{2}q+q\pi_{0}q+qC_{3}q\,,$ (7.67) $\displaystyle D:=-\tfrac{\gamma}{2}q\partial_{x}^{-1}G(0)q^{-1}-{\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}-q^{-1}(\omega\cdot\partial_{\varphi}q)\,.$ (7.68) We choose the function $q$ so that the coefficients of the highest order terms of the off-diagonal self-adjoint operators $B$ and $C$ satisfy $q^{-1}=qa_{2}$, namely as the real quasi-periodic traveling wave, ${\rm even}({\varphi},x)$ $q({\varphi},x):=a_{2}({\varphi},x)^{-\frac{1}{2}}\,.$ (7.69) Thus ${\mathcal{Q}}$ is reversibility and momentum preserving. In view of (7.65)-(7.68) and (7.69) the operator ${\mathcal{L}}_{4}$ in (7.64) becomes $\displaystyle{\mathcal{L}}_{4}$ $\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-a_{2}^{\frac{1}{2}}\omega(\gamma,D)a_{2}^{\frac{1}{2}}\\\ a_{2}^{\frac{1}{2}}\omega(\gamma,D)a_{2}^{\frac{1}{2}}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}$ (7.70) $\displaystyle\ \ \ +\begin{pmatrix}0&0\\\ \pi_{0}&0\end{pmatrix}+\begin{pmatrix}a_{3}&0\\\ C_{4}&-a_{3}\end{pmatrix}+{\bf R}_{4}^{\Psi}+{\bf T}_{4,N}\,,$ where $a_{3}$ is the real quasi-periodic traveling wave function, ${\rm odd}({\varphi},x)$, $\displaystyle a_{3}$ $\displaystyle:={\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}+q^{-1}(\omega\cdot\partial_{\varphi}q)\,,\quad C_{4}:=qC_{3}q\in{\rm OP}S^{-\frac{1}{2}}\,,$ (7.71) and ${\bf R}_{4}^{\Psi},{\bf T}_{4,N}$ are the smoothing remainders (recall that $G(0)\partial_{x}^{-1}={\mathcal{H}}T({\mathtt{h}})$) $\displaystyle{\bf R}_{4}^{\Psi}:=\begin{pmatrix}-\frac{\gamma}{2}q^{-1}[{\mathcal{H}}T({\mathtt{h}}),q-1]&0\\\ q\pi_{0}q-\pi_{0}&-\frac{\gamma}{2}[q-1,{\mathcal{H}}T({\mathtt{h}})]q^{-1}\end{pmatrix}+{\mathcal{Q}}^{-1}{\bf R}_{3}^{\Psi}{\mathcal{Q}}\in{\rm OP}S^{-\infty}\,,$ (7.72) $\displaystyle{\bf T}_{4,N}:={\mathcal{Q}}^{-1}{\bf T}_{3,N}{\mathcal{Q}}\,.$ The operator ${\mathcal{L}}_{4}$ in (7.70) is Hamiltonian, reversible and momentum preserving. Step 3: We finally move in complex coordinates, conjugating the operator ${\mathcal{L}}_{4}$ in (7.70) via the transformation ${\mathcal{C}}$ defined in (2.19). The main result of this section is the following lemma. ###### Lemma 7.11. Let $N\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$. We have that $\displaystyle{\mathcal{L}}_{5}:=$ $\displaystyle\,({\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}})^{-1}{\mathcal{L}}_{2}{\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}}$ (7.73) $\displaystyle=$ $\displaystyle\,\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}({\varphi},x){\bf{\Omega}}(\gamma,D)+{\rm i}\,{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{5}^{(-\frac{1}{2},d)}+{\bf R}_{5}^{(0,o)}+{\bf T}_{5,N}\,,$ where: 1\. 
The real quasi-periodic traveling wave $a_{2}({\varphi},x)$ defined in (7.60), ${\rm even}({\varphi},x)$, satisfies, for some $\sigma=\sigma(k_{0},\tau,\nu)>0$ and for any $s_{0}\leq s\leq S-\sigma$, $\|a_{2}-1\|_{s}^{k_{0},\upsilon}\lesssim\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$ (7.74) 2\. ${\bf{\Omega}}(\gamma,D)$ is the matrix of Fourier multipliers (see (2.20), (2.21)) ${\bf{\Omega}}(\gamma,D)=\begin{pmatrix}\Omega(\gamma,D)&0\\\ 0&-\overline{\Omega(\gamma,D)}\end{pmatrix},\quad\Omega(\gamma,D)=\omega(\gamma,D)+{\rm i}\,\frac{\gamma}{2}\partial_{x}^{-1}G(0)\,;$ (7.75) 3\. The operator ${\bf{\Pi}}_{0}$ is defined as ${\bf{\Pi}}_{0}:=\frac{1}{2}\begin{pmatrix}\pi_{0}&\pi_{0}\\\ -\pi_{0}&-\pi_{0}\end{pmatrix}\,.$ 4\. The real quasi-periodic traveling wave $a_{4}({\varphi},x):=\tfrac{\gamma}{2}(a_{2}({\varphi},x)-1)$, ${\rm even}({\varphi},x)$, satisfies, for some $\sigma:=\sigma(k_{0},\tau,\nu)>0$ and for all $s_{0}\leq s\leq S-\sigma$, $\|a_{4}\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$ (7.76) 5\. ${\bf R}_{5}^{(-\frac{1}{2},d)}\in{\rm OP}S^{-\frac{1}{2}}$ and ${\bf R}_{5}^{(0,o)}\in{\rm OP}S^{0}$ are pseudodifferential operators of the form $\displaystyle{\bf R}_{5}^{(-\frac{1}{2},d)}:=\begin{pmatrix}r_{5}^{(d)}({\varphi},x,D)&0\\\ 0&\overline{r_{5}^{(d)}({\varphi},x,D)}\end{pmatrix},\quad{\bf R}_{5}^{(0,o)}:=\begin{pmatrix}0&r_{5}^{(o)}({\varphi},x,D)\\\ \overline{r_{5}^{(o)}({\varphi},x,D)}&0\end{pmatrix}\,,$ reversibility and momentum preserving, satisfying, for some $\sigma_{N}:=\sigma(\tau,\nu,N)>0$, for finitely many $0\leq\alpha\leq\alpha(M)$ (fixed in Remark 7.16), and for all $s_{0}\leq s\leq S-\sigma_{N}-3\alpha$, $\|{\bf R}_{5}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}+\|{\bf R}_{5}^{(0,o)}\|_{0,s,\alpha}^{k_{0},\upsilon}\lesssim_{s,N,\alpha}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+3\alpha}^{k_{0},\upsilon})\,;$ (7.77) 6\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with $|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with $n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the operator $\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf T}_{5,N}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame with a tame constant satisfying, for some $\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},k_{0},\tau,\nu)>0$ and for any $s_{0}\leq s\leq S-\sigma_{N}({\mathtt{q}}_{0})$, ${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf T}_{5,N}(\varphi)\langle D\rangle^{n_{2}}}(s)\lesssim_{S,N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\big{)}\,;$ (7.78) 7\. The operators ${\mathcal{Q}}^{\pm 1}$, ${\mathcal{Q}}^{\pm 1}-{\rm Id}$, $({\mathcal{Q}}^{\pm 1}-{\rm Id})^{*}$ are ${\mathcal{D}}^{k_{0}}$-tame with tame constants satisfying, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$ and for all $s_{0}\leq s\leq S-\sigma$, $\displaystyle{\mathfrak{M}}_{{\mathcal{Q}}^{\pm 1}}(s)\lesssim_{S}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\,,\ \ {\mathfrak{M}}_{{\mathcal{Q}}^{\pm 1}-{\rm Id}}(s)+{\mathfrak{M}}_{\left({\mathcal{Q}}^{\pm 1}-{\rm Id}\right)^{*}}(s)\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$ (7.79) 8\. 
Furthermore, for any $s_{1}$ as in (7.11), finitely many $0\leq\alpha\leq\alpha(M)$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with $\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and $n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq N-{\mathtt{q}}_{0}+\frac{1}{2}$, $\displaystyle\|\Delta_{12}({\mathcal{A}})h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}+\sigma}\,,\quad{\mathcal{A}}\in\\{{\mathcal{Q}}^{\pm 1}=({\mathcal{Q}}^{\pm 1})^{*}\\}\,,$ (7.80) $\displaystyle\|\Delta_{12}a_{2}\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\ \|\Delta_{12}a_{4}\|_{s_{1}}\lesssim\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$ (7.81) $\displaystyle\|\Delta_{12}{\bf R}_{5}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},\alpha}+\|\Delta_{12}{\bf R}_{5}^{(0,o)}\|_{0,s_{1},\alpha}\lesssim_{s_{1},N,\alpha}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+2\alpha}\,,$ (7.82) $\displaystyle\left\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf T}_{5,N}({\varphi})\braket{D}^{n_{2}}\right\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\,.$ (7.83) The real operator ${\mathcal{L}}_{5}$ is Hamiltonian, reversible and momentum preserving. ###### Proof. By the expression of ${\mathcal{L}}_{4}$ in (7.70) and (3.17) we obtain that ${\cal L}_{5}$ has the form (7.73) with $\displaystyle r_{5}^{(d)}$ $\displaystyle:=\tfrac{\gamma}{2}(a_{2}-1){\mathcal{H}}(T({\mathtt{h}})-1)+{\rm i}\big{(}\tfrac{1}{2}C_{4}+a_{2}^{\frac{1}{2}}[\omega(\gamma,D),a_{2}^{\frac{1}{2}}]\big{)}\in{\rm OP}S^{-\frac{1}{2}}\,,$ (7.84) $\displaystyle r_{5}^{(o)}$ $\displaystyle:=a_{3}+\tfrac{{\rm i}}{2}C_{4}\in{\rm OP}S^{0}$ (with $C_{4}$ given in (7.71)) and ${\bf T}_{5,N}:={\mathcal{C}}^{-1}({\bf R}_{4}^{\Psi}+{\bf T}_{4,N}){\mathcal{C}}$. The function $q$ defined in (7.69), with $a_{2}$ in (7.60), satisfies, by (7.48) and Lemma 3.2, for all $s_{0}\leq s\leq S-\sigma$, $\|q^{\pm 1}-1\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$ (7.85) The estimates (7.74) and (7.76) follows by (7.60) and (7.85). The estimate (7.77) follows by (7.84), (7.74), (7.69), (7.62), (7.60), (7.48), (7.49), (7.71), (7.55), (2.16), Lemma 7.10. The estimate (7.78) follows by (7.72), (7.63), (7.42), (7.50), (7.48) Lemmata 3.12, 3.13, (7.85). The estimates (7.79) follow by Lemmata 3.13 and (7.85). The estimates (7.80)- (7.83) are proved similarly. ∎ ### 7.4 Symmetrization up to smoothing remainders The goal of this section is to transform the operator ${\mathcal{L}}_{5}$ in (7.73) into the operator ${\mathcal{L}}_{6}$ in (7.88) which is block diagonal up to a regularizing remainder. From this step we do not preserve any further the Hamiltonian structure, but only the reversible and momentum preserving one (it is sufficient for proving Theorem 5.1). ###### Lemma 7.12. Fix ${\mathfrak{m}},N\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$. 
There exist real, reversibility and momentum preserving operator matrices $\\{{\bf X}_{k}\\}_{k=1}^{{\mathfrak{m}}}$ of the form ${\bf X}_{k}:=\begin{pmatrix}0&\chi_{k}({\varphi},x,D)\\\ \overline{\chi_{k}({\varphi},x,D)}&0\end{pmatrix},\qquad\chi_{k}({\varphi},x,\xi)\in S^{-\frac{k}{2}}\,,$ (7.86) such that, conjugating the operator ${\mathcal{L}}_{5}$ in (7.73) via the map ${\bf{\Phi}}_{{\mathfrak{m}}}:=e^{{\bf X}_{1}}\circ\cdots\circ e^{{\bf X}_{{\mathfrak{m}}}}\,,$ (7.87) we obtain the real, reversible and momentum preserving operator $\displaystyle{\mathcal{L}}_{6}$ $\displaystyle:={\mathcal{L}}_{6}^{({\mathfrak{m}})}:={\bf{\Phi}}_{{\mathfrak{m}}}^{-1}\,{\mathcal{L}}_{5}\,{\bf{\Phi}}_{{\mathfrak{m}}}$ (7.88) $\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}{\bf{\Omega}}(\gamma,D)+{\rm i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{6}^{(-\frac{1}{2},d)}+{\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}+{\bf T}_{6,N}\,,$ where: 1\. ${\bf R}_{6}^{(-\frac{1}{2},d)}$ is a block-diagonal operator $\displaystyle{\bf R}_{6}^{(-\frac{1}{2},d)}:={\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}$ $\displaystyle:=\begin{pmatrix}r_{6}^{(d)}({\varphi},x,D)&0\\\ 0&\overline{r_{6}^{(d)}({\varphi},x,D)}\end{pmatrix}\in{\rm OP}S^{-\frac{1}{2}}\,,$ ${\bf R}_{6}^{(-\frac{{\mathtt{m}}}{2},o)}$ is a smoothing off diagonal remainder $\displaystyle{\bf R}_{6}^{(-\frac{{\mathtt{m}}}{2},o)}:={\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}$ $\displaystyle:=\begin{pmatrix}0&r_{6}^{(o)}({\varphi},x,D)\\\ \overline{r_{6}^{(o)}({\varphi},x,D)}&0\end{pmatrix}\in{\rm OP}S^{-\frac{{\mathfrak{m}}}{2}}\,,$ (7.89) satisfying for finitely many $0\leq\alpha\leq\alpha({\mathfrak{m}})$ (fixed in Remark 7.16), for some $\sigma_{N}:=\sigma_{N}(k_{0},\tau,\nu,N)>0$, $\aleph_{{\mathfrak{m}}}(\alpha)>0$ and for all $s_{0}\leq s\leq S-\sigma_{N}-\aleph_{{\mathfrak{m}}}(\alpha)$, $\displaystyle\|{\bf R}_{6}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}+\|{\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}\|_{-\frac{{\mathfrak{m}}}{2},s,\alpha}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},N,\alpha}\varepsilon{\upsilon^{-1}}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}}(\alpha)}^{k_{0},\upsilon}\big{)}\,.$ (7.90) Both ${\bf R}_{6}^{(-\frac{1}{2},d)}$ and ${\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ are reversible and momentum preserving; 2\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with $|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with $n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the operator $\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf T}_{6,N}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame with a tame constant satisfying, for some $\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}(k_{0},\tau,\nu,{\mathtt{q}}_{0})$, for any $s_{0}\leq s\leq S-\sigma_{N}({\mathtt{q}}_{0})-\aleph_{{\mathfrak{m}}}(0)$, ${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf T}_{6,N}(\varphi)\langle D\rangle^{n_{2}}}(s)\lesssim_{S,{\mathfrak{m}},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})+\aleph_{{\mathfrak{m}}}(0)}^{k_{0},\upsilon})\,.$ (7.91) 3\. 
The conjugation map ${\bf{\Phi}}_{{\mathfrak{m}}}$ in (7.87) satisfies, for all $s_{0}\leq s\leq S-\sigma_{N}-\aleph_{{\mathfrak{m}}}(0)$, $\|{\bf{\Phi}}_{{\mathfrak{m}}}^{\pm 1}-{\rm Id}\|_{0,s,0}^{k_{0},\upsilon}+\|\left({\bf{\Phi}}_{{\mathfrak{m}}}^{\pm 1}-{\rm Id}\right)^{*}\|_{0,s,0}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},N}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}}(0)}^{k_{0},\upsilon})\,.$ (7.92) 4\. Furthermore, for any $s_{1}$ as in (7.11), finitely many $0\leq\alpha\leq\alpha({\mathfrak{m}})$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with $\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and $n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq N-{\mathtt{q}}_{0}+\frac{1}{2}$, we have $\displaystyle\|\Delta_{12}{\bf R}_{6}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},\alpha}+\|\Delta_{12}{\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}\|_{-\frac{{\mathfrak{m}}}{2},s_{1},\alpha}\lesssim_{s_{1},{\mathfrak{m}},N,\alpha}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\aleph_{{\mathfrak{m}}}(\alpha)}\,,$ (7.93) $\displaystyle\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf T}_{6,N}\braket{D}^{n_{2}}\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},{\mathfrak{m}},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})+\aleph_{{\mathfrak{m}}}(0)}\,,$ (7.94) $\displaystyle\|\Delta_{12}{\bf{\Phi}}_{{\mathfrak{m}}}^{\pm 1}\|_{0,s_{1},0}+\|\Delta_{12}({\bf{\Phi}}_{{\mathfrak{m}}}^{\pm 1})^{*}\|_{0,s_{1},0}\lesssim_{s_{1},{\mathfrak{m}},N}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\aleph_{{\mathfrak{m}}}(0)}\,.$ (7.95) ###### Proof. The proof is inductive. The operator ${\mathcal{L}}_{6}^{(0)}:={\mathcal{L}}_{5}$ satisfies (7.90)-(7.91) with $\aleph_{0}(\alpha):=3\alpha$, by (7.77)-(7.78). Suppose we have done already ${\mathfrak{m}}$ steps obtaining an operator ${\mathcal{L}}_{6}^{({\mathfrak{m}})}$ as in (7.88) with ${\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}:={\bf R}_{6}^{(-\frac{1}{2},d)}$ and ${\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},o)}:={\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ and the remainder ${\bf\Phi}_{{\mathfrak{m}}}^{-1}{\bf T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}}$, instead of ${\bf T}_{6,N}$. We now show how to define ${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$. Let $\chi_{{\mathfrak{m}}+1}({\varphi},x,\xi):=-\big{(}2{\rm i}\,a_{2}({\varphi},x)\omega(\gamma,\xi)\big{)}^{-1}r_{6,{\mathfrak{m}}}^{(o)}({\varphi},x,\xi)\chi(\xi)\in S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}\,,$ (7.96) where $\chi$ is the cut-off function defined in (3.11) and $\omega(\gamma,\xi)$ is the symbol (cfr. (2.18)) $\omega(\gamma,\xi):=\sqrt{G(0;\xi)\Big{(}g+\frac{\gamma^{2}}{4}\frac{G(0;\xi)}{\xi^{2}}\Big{)}}\in S^{\frac{1}{2}}\,,\ \ G(0;\xi):=\begin{cases}\chi(\xi)|\xi|\tanh({\mathtt{h}}|\xi|)\,,\ {\mathtt{h}}<+\infty\cr\chi(\xi)|\xi|\,,\qquad\qquad\ \,\ {\mathtt{h}}=+\infty\,.\end{cases}$ Note that $\chi_{{\mathfrak{m}}+1}$ in (7.96) is well defined because $\omega(\gamma,\xi)$ is positive on the support of $\chi(\xi)$ and $a_{2}({\varphi},x)$ is close to 1. We conjugate the operator ${\mathcal{L}}_{6}^{({\mathfrak{m}})}$ in (7.88) by the flow generated by ${\bf X}_{{\mathfrak{m}}+1}$ of the form (7.86) with $\chi_{{\mathfrak{m}}+1}(\varphi,x,\xi)$ defined in (7.96). 
By (7.90) and (7.75), for suitable constants $\aleph_{{\mathfrak{m}}+1}(\alpha)>\aleph_{{\mathfrak{m}}}(\alpha)$, for finitely many $\alpha\in{\mathbb{N}}_{0}$ and for any $s_{0}\leq s\leq S-\sigma_{N}-\aleph_{{\mathfrak{m}}+1}(\alpha)$, $\|{\bf X}_{{\mathfrak{m}}+1}\|_{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},\alpha}\varepsilon{\upsilon^{-1}}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}+1}(\alpha)}^{k_{0},\upsilon}\big{)}\,.$ (7.97) Therefore, by Lemmata 3.7, 3.5 and the induction assumption (7.92) for ${\bf{\Phi}}_{{\mathfrak{m}}}$, the conjugation map ${\bf{\Phi}}_{{\mathfrak{m}}+1}:={\bf{\Phi}}_{{\mathfrak{m}}}e^{{\bf X}_{{\mathfrak{m}}+1}}$ is well defined and satisfies estimate (7.92) with ${\mathfrak{m}}+1$. By the Lie expansion (3.18) we have $\displaystyle{\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ $\displaystyle:=e^{-{\bf X}_{{\mathfrak{m}}+1}}\,{\mathcal{L}}_{6}^{({\mathfrak{m}})}\,e^{{\bf X}_{{\mathfrak{m}}+1}}$ (7.98) $\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}a_{2}{\bf{\Omega}}(\gamma,D)+{\rm i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}$ $\displaystyle-\big{[}{\bf X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}{\bf{\Omega}}(\gamma,D)\big{]}+{\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}+{\bf\Phi}_{{\mathfrak{m}}+1}^{-1}{\bf T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}+1}$ $\displaystyle-\int_{0}^{1}e^{-\tau{\bf X}_{{\mathfrak{m}}+1}}\big{[}{\bf X}_{{\mathfrak{m}}+1}\,,\,\omega\cdot\partial_{\varphi}+{\rm i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}\big{]}e^{\tau{\bf X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}$ (7.99) $\displaystyle-\int_{0}^{1}e^{-\tau{\bf X}_{{\mathfrak{m}}+1}}\left[{\bf X}_{{\mathfrak{m}}+1},{\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}\right]e^{\tau{\bf X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}$ (7.100) $\displaystyle+\int_{0}^{1}(1-\tau)e^{-\tau{\bf X}_{{\mathfrak{m}}+1}}\left[{\bf X}_{{\mathfrak{m}}+1},\left[{\bf X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}{\bf{\Omega}}(\gamma,D)\right]\right]e^{\tau{\bf X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}\,.$ (7.101) In view of (7.86), (7.75) and (7.89), we have that $-\big{[}{\bf X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}{\bf{\Omega}}(\gamma,D)\big{]}+{\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}=\begin{pmatrix}0&Z_{{\mathfrak{m}}+1}\\\ \overline{Z_{{\mathfrak{m}}+1}}&0\end{pmatrix}=:{\bf Z}_{{\mathfrak{m}}+1}\,,$ where, denoting for brevity $\chi_{{\mathfrak{m}}+1}:=\chi_{{\mathfrak{m}}+1}({\varphi},x,\xi)$, it results $\displaystyle Z_{{\mathfrak{m}}+1}$ $\displaystyle={\rm i}\left({\rm Op}(\chi_{{\mathfrak{m}}+1})a_{2}\,\omega(\gamma,D)+a_{2}\,\omega(\gamma,D){\rm Op}(\chi_{{\mathfrak{m}}+1})\right)$ $\displaystyle\quad+\left[{\rm Op}(\chi_{{\mathfrak{m}}+1}),-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+a_{2}\,\tfrac{\gamma}{2}\partial_{x}^{-1}G(0)\right]+{\rm Op}(r_{6,{\mathfrak{m}}}^{(o)})\,.$ By (3.24), (3.26) and since $\chi_{{\mathfrak{m}}+1}\in S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}$ by (7.96), we have that ${\rm Op}(\chi_{{\mathfrak{m}}+1})a_{2}\omega(\gamma,D)+a_{2}\omega(\gamma,D){\rm Op}(\chi_{{\mathfrak{m}}+1})={\rm Op}\big{(}2a_{2}\omega(\gamma,\xi)\chi_{{\mathfrak{m}}+1}\big{)}+{\mathtt{r}}_{{\mathfrak{m}}+1}\,,$ where 
${\mathtt{r}}_{{\mathfrak{m}}+1}$ is in ${\rm OP}S^{-\frac{{\mathfrak{m}}}{2}-1}$. By (7.96) and (7.4) $Z_{{\mathfrak{m}}+1}={\rm i}{\mathtt{r}}_{{\mathfrak{m}}+1}+\left[{\rm Op}(\chi_{{\mathfrak{m}}+1}),-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+a_{2}\tfrac{\gamma}{2}\partial_{x}^{-1}G(0)\right]+{\rm Op}(r_{6,{\mathfrak{m}}}^{(o)}(1-\chi(\xi)))\in{\rm OP}S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}\,.$ The remaining pseudodifferential operators in (7.99)-(7.101) belong to ${\rm OP}S^{-\frac{{\mathfrak{m}}+1}{2}}$. Therefore the operator ${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ in (7.98) has the form (7.88) at ${\mathfrak{m}}+1$ with ${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{1}{2},d)}+{\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}:={\bf R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}+{\bf Z}_{{\mathfrak{m}}+1}+(7.99)+(7.100)+(7.101)$ (7.102) and a smoothing remainder ${\bf\Phi}_{{\mathfrak{m}}+1}^{-1}{\bf T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}+1}$. By Lemmata 3.5, 3.6, (7.90), (7.97), (7.76), we conclude that ${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{1}{2},d)}$ and ${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}$ satisfy (7.90) at order ${\mathfrak{m}}+1$ for suitable constants $\aleph_{{\mathfrak{m}}+1}(\alpha)>\aleph_{{\mathfrak{m}}}(\alpha)$. Moreover the operator ${\bf{\Phi}}_{{\mathfrak{m}}+1}^{-1}{\bf T}_{5,N}{\bf{\Phi}}_{{\mathfrak{m}}+1}$ satisfies (7.91) at order ${\mathfrak{m}}+1$ by Lemmata 3.12, 3.13 and (7.78), (7.92). Estimates (7.93)-(7.95) follow similarly. By (7.96), Lemmata 3.20, 3.24, and the induction assumption that ${\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}$ is reversible and momentum preserving, we get that ${\bf X}_{{\mathfrak{m}}+1}$ is reversibility and momentum preserving, and so are $e^{\pm{\bf X}_{{\mathfrak{m}}+1}}$. We deduce that ${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ is reversible and momentum preserving, in particular ${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}$ in (7.102). ∎ ###### Remark 7.13. The number of regularizing iterations ${\mathfrak{m}}\in{\mathbb{N}}$ will be fixed by the KAM reduction scheme in Section 8; more precisely, we take ${\mathfrak{m}}=2M$ with $M$ in (8.5). Note that it is independent of the Sobolev index $s$. So far the operator ${\mathcal{L}}_{6}$ of Lemma 7.12 depends on two indices ${\mathfrak{m}},N$, which determine, respectively, the order of the regularizing off-diagonal remainder ${\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ and of the smoothing tame operator ${\bf T}_{6,N}$. From now on we fix ${\mathfrak{m}}:=2M\,,\ M\in{\mathbb{N}}\,,\quad N=M\,.$ (7.103) ### 7.5 Reduction of the order 1/2 The goal of this section is to transform the operator ${\mathcal{L}}_{6}$ in (7.88) with ${\mathfrak{m}}:=2M$, $N=M$ (cfr. (7.103)), into the operator ${\mathcal{L}}_{7}$ in (7.117) whose coefficient in front of ${\bf{\Omega}}(\gamma,D)$ is constant. 
First we rewrite ${\mathcal{L}}_{6}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}P_{6}&0\\\ 0&\overline{P_{6}}\end{pmatrix}+{\rm i}{\bf{\Pi}}_{0}+{\bf R}_{6}^{(-M,o)}+{\bf T}_{6,M}\,,$ having denoted $P_{6}:=P_{6}({\varphi},x,D):={\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}a_{2}({\varphi},x)\Omega(\gamma,D)+a_{4}{\mathcal{H}}+r_{6}^{(d)}({\varphi},x,D)\,.$ (7.104) We conjugate ${\mathcal{L}}_{6}$ through the real operator ${\bf{\Phi}}({\varphi}):=\begin{pmatrix}\Phi({\varphi})&0\\\ 0&\overline{\Phi}({\varphi})\end{pmatrix}$ (7.105) where $\Phi({\varphi}):=\Phi^{\tau}({\varphi})|_{\tau=1}$ is the time $1$-flow of the PDE $\begin{cases}\partial_{\tau}\Phi^{\tau}({\varphi})={\rm i}A({\varphi})\Phi^{\tau}({\varphi})\,,\\\ \Phi^{0}({\varphi})={\rm Id}\,,\end{cases}\qquad A({\varphi}):=b({\varphi},x)|D|^{\frac{1}{2}}\,,$ (7.106) and $b({\varphi},x)$ is a real quasi-periodic traveling wave, ${\rm odd}({\varphi},x)$, chosen later, see (7.114). Thus ${\rm i}b({\varphi},x)|D|^{\frac{1}{2}}$ is reversibility and momentum preserving as well as ${\bf{\Phi}}({\varphi})$. Moreover $\Phi\pi_{0}=\pi_{0}=\Phi^{-1}\pi_{0}$, which implies ${\bf{\Phi}}^{-1}{\bf{\Pi}}_{0}{\bf{\Phi}}={\bf{\Pi}}_{0}{\bf{\Phi}}\,.$ (7.107) By the Lie expansion (3.18) we have $\displaystyle\Phi^{-1}P_{6}\Phi$ $\displaystyle=P_{6}-{\rm i}[A,P_{6}]-\frac{1}{2}[A,[A,P_{6}]]+\sum_{n=3}^{2M+1}\frac{(-{\rm i})^{n}}{n!}{\rm ad}_{A({\varphi})}^{n}(P_{6})+T_{M}\,,$ (7.108) $\displaystyle T_{M}$ $\displaystyle:=\frac{(-{\rm i})^{2M+2}}{(2M+1)!}\int_{0}^{1}(1-\tau)^{2M+1}\Phi^{-\tau}({\varphi})\,{\rm ad}_{A({\varphi})}^{2M+2}(P_{6})\,\Phi^{\tau}({\varphi}){\rm d}\tau\,,$ and, by (3.19), $\displaystyle\Phi^{-1}\circ\omega\cdot\partial_{\varphi}\circ\Phi$ $\displaystyle=\omega\cdot\partial_{\varphi}+{\rm i}(\omega\cdot\partial_{\varphi}A)+\frac{1}{2}[A,\omega\cdot\partial_{\varphi}A]-\sum_{n=3}^{2M+1}\frac{(-{\rm i})^{n}}{n!}{\rm ad}_{A({\varphi})}^{n-1}(\omega\cdot\partial_{\varphi}A({\varphi}))+T_{M}^{\prime}\,,$ $\displaystyle T_{M}^{\prime}$ $\displaystyle:=-\frac{(-{\rm i})^{2M+2}}{(2M+1)!}\int_{0}^{1}(1-\tau)^{2M+1}\Phi^{-\tau}({\varphi})\,{\rm ad}_{A({\varphi})}^{2M+1}(\omega\cdot\partial_{\varphi}A({\varphi}))\,\Phi^{\tau}({\varphi}){\rm d}\tau\,.$ (7.109) Note that ${\rm ad}_{A({\varphi})}^{2M+2}(P_{6})$ and ${\rm ad}_{A({\varphi})}^{2M+1}(\omega\cdot\partial_{\varphi}A({\varphi}))$ are in ${\rm OP}S^{-M}$. We now determine the pseudo-differential term of order $1/2$ in (7.108)-(7.109). We use the expansion of the linear dispersion operator $\Omega(\gamma,D)$, defined by (4.1), (1.10), and, since $j\to c_{j}(\gamma)\in S^{0}$ (see (4.14)), $\Omega(\gamma,D)=\sqrt{g}|D|^{\frac{1}{2}}+{\rm i}\,\tfrac{\gamma}{2}{\mathcal{H}}+r_{-\frac{1}{2}}(\gamma,D)\,,\quad r_{-\frac{1}{2}}(\gamma,D)\in{\rm OP}S^{-\frac{1}{2}}\,,$ (7.110) where ${\mathcal{H}}$ is the Hilbert transform in (3.21). By (7.104), that $A=b|D|^{\frac{1}{2}}$, (3.27), (7.110) we get $\displaystyle[A,P_{6}]$ $\displaystyle=\big{[}b|D|^{\frac{1}{2}},{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x}+{\rm i}\,\sqrt{g}a_{2}|D|^{\frac{1}{2}}+(a_{4}-\tfrac{\gamma}{2}a_{2}){\mathcal{H}}+r_{6}^{(d)}(x,D)+{\rm i}\,a_{2}r_{-\frac{1}{2}}(\gamma,D)\big{]}$ $\displaystyle=-{\mathtt{m}}_{1,\overline{\mathtt{n}}}b_{x}|D|^{\frac{1}{2}}-{\rm i}\tfrac{\sqrt{g}}{2}(b_{x}a_{2}-(a_{2})_{x}b){\mathcal{H}}+{\rm Op}(r_{b,-\frac{1}{2}})\,,$ (7.111) where $r_{b,-\frac{1}{2}}\in S^{-\frac{1}{2}}$ is small with $b$. 
As a consequence, the contribution at order $\frac{1}{2}$ of the operator ${\rm i}\,\omega\cdot\partial_{\varphi}A+P_{6}-{\rm i}[A,P_{6}]$ is ${\rm i}\big{(}\omega\cdot\partial_{\varphi}b+{\mathtt{m}}_{1,\overline{\mathtt{n}}}b_{x}+\sqrt{g}\,a_{2})|D|^{\frac{1}{2}}$. We choose $b({\varphi},x)$ as the solution of $(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})b+\sqrt{g}\,\Pi_{N_{\overline{\mathtt{n}}}}\,a_{2}=\sqrt{g}\,{\mathtt{m}}_{\frac{1}{2}}$ (7.112) where ${\mathtt{m}}_{\frac{1}{2}}$ is the average (see (3.6)) ${\mathtt{m}}_{\frac{1}{2}}:=\braket{a_{2}}_{{\varphi},x}\,.$ (7.113) We define $b({\varphi},x)$ to be the real, ${\rm odd}({\varphi},x)$, quasi- periodic traveling wave $b({\varphi},x):=-\sqrt{g}(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})_{\rm ext}^{-1}\big{(}\Pi_{N_{\overline{\mathtt{n}}}}a_{2}({\varphi},x)-{\mathtt{m}}_{\frac{1}{2}}\big{)}$ (7.114) recall (3.10). Note that $b({\varphi},x)$ and ${\mathtt{m}}_{\frac{1}{2}}$ are defined for any $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and that, for any $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$ defined in (7.26), it solves (7.112). We deduce by (7.108), (7.109), (7.104), (7.5)-(7.114), that, for any $(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$, $\displaystyle L_{7}$ $\displaystyle:=\Phi^{-1}({\varphi})\left(\omega\cdot\partial_{\varphi}+P_{6}\right)\Phi({\varphi})$ $\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,{\mathtt{m}}_{\frac{1}{2}}\Omega(\gamma,D)+a_{5}{\mathcal{H}}+{\rm Op}(r_{7}^{(d)})+T_{M}+T_{M}^{\prime}+{\rm i}\sqrt{g}(\Pi_{N_{\overline{\mathtt{n}}}}^{\perp}a_{2})|D|^{\frac{1}{2}}\,,$ where $a_{5}({\varphi},x)$ is the real function (using that $a_{4}=\frac{\gamma}{2}(a_{2}-1)$) $\displaystyle a_{5}:=$ $\displaystyle\,\tfrac{\gamma}{2}({\mathtt{m}}_{\frac{1}{2}}-1)-\tfrac{\sqrt{g}}{2}(b_{x}a_{2}-(a_{2})_{x}b)$ (7.115)
# New insights on the near-infrared veiling of young stars using CFHT/SPIRou data

A. P. Sousa (1), J. Bouvier (1), S. H. P. Alencar (2), J.-F. Donati (3), C. Dougados (1), E. Alecian (1), A. Carmona (1), L. Rebull (4), N. Cook (5), E. Artigau (5), P. Fouqué (3), R. Doyon (5), and the SLS consortium

(1) Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France (e-mail: <EMAIL_ADDRESS>)
(2) Departamento de Física-Icex-UFMG, Antônio Carlos, 6627, 31270-901, Belo Horizonte, MG, Brazil
(3) Univ. de Toulouse, CNRS, IRAP, 14 avenue Belin, 31400 Toulouse, France
(4) Infrared Science Archive (IRSA), IPAC, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA, 91125, USA
(5) Université de Montréal, Département de Physique, IREX, Montréal, QC H3C 3J7, Canada

Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. Based on observations obtained with SPIRou, an international project led by Institut de Recherche en Astrophysique et Planétologie, Toulouse, France.

###### Abstract Context. Veiling is ubiquitous at different wavelength ranges in classical T Tauri stars. However, the origin of the veiling in the infrared (IR) domain is not well understood at present. The accretion spot alone is not enough to explain the shallow photospheric IR lines in accreting systems, suggesting that another source is contributing to the veiling in the near-infrared (NIR). The inner disk is often quoted as the additional emitting source meant to explain the IR veiling. Aims. In this work, we aim to measure and discuss the NIR veiling to understand its origins and variability timescale. Methods. We used a sample of 14 accreting stars observed with the CFHT/SPIRou spectrograph, within the framework of the SPIRou Legacy Survey, to measure the NIR veiling along the $YJHK$ bands. We compared the veiling measurements with accretion and inner disk diagnostics. We also analyzed circumstellar emission lines and photometric observations from the literature. Results. The measured veiling grows from the $Y$ to the $K$ band for most of the targets in our sample. The IR veiling agrees with NIR emission excess obtained using photometric data. However, we also find a linear correlation between the veiling and the accretion properties of the system, showing that accretion contributes to the inner disk heating and, consequently, to the inner disk emission excess. We also show a connection between the NIR veiling and the system’s inclination with respect to our line of sight. This is probably due to the reduction of the visible part of the inner disk edge, where the NIR emission excess is expected to arise, as the inclination of the system increases. Our search for periods on the veiling variability showed that the IR veiling is not clearly periodic in the typical timescale of stellar rotation – which, again, is broadly consistent with the idea that the veiling comes from the inner disk region. The NIR veiling appears variable on a timescale of a day, showing the night-by-night dynamics of the optical veiling variability. In the long term, the mean NIR veiling seems to be stable for most of the targets on timescales of a month to a few years. 
However, during occasional episodes of high accretion in classical T Tauri stars, which affect the system’s dynamics, the veiling also seems to become much more prominent, as we found in the case of the target RU Lup. Conclusions. We provide further evidence that, for most targets in our sample, the veiling that mainly occurs in the $JHK$ bands arises from dust in the inner disk. ###### Key words: Stars: pre-main sequence – Accretion, accretion disks – Stars: variables: T Tauri ## 1 Introduction The photospheric lines of young low-mass accreting systems, commonly referred to as Classical T Tauri stars (CTTS), are shallower and present smaller equivalent widths than those of non-accreting stars with a similar spectral type. This phenomenon is known as the veiling of the photospheric lines (e.g., Hartigan et al., 1991; Valenti et al., 1993; Folha & Emerson, 1999; Fischer et al., 2011). The presence of veiling suggests an additional emitting source beyond the stellar photosphere, contributing to the spectra of the targets, that is responsible for filling in the photospheric lines (e.g., Hartmann & Kenyon, 1990; Gullbring et al., 1998; Calvet & Gullbring, 1998; Johns-Krull & Valenti, 2001). The veiling at optical and IR wavelengths has been studied using different approaches with the aim of understanding its origins and variability (e.g., Basri & Batalha, 1990; Edwards et al., 2006; Fischer et al., 2011; Antoniucci et al., 2017; Gully-Santiago et al., 2017; Ingleby et al., 2013; McClure et al., 2013; Kidder et al., 2021). The veiling variability along the stellar spectra depends on wavelength (e.g., Fischer et al., 2011; Faesi et al., 2012; McClure et al., 2013; Rei et al., 2018), and optical veiling is often associated with the accretion process, the accretion shock being thought to be at the origin of an additional continuum emitting source in the stellar spectrum (e.g., Gullbring et al., 1998). Usually, the emission contribution of the accretion spot peaks around the ultraviolet domain and decreases as the wavelength increases (e.g., Calvet & Gullbring, 1998). Nevertheless, we do not expect a significant contribution of the accretion spot continuum emission at IR wavelengths; therefore, the accretion spot alone cannot explain the veiling in the IR domain. In the IR region, the veiling increases with wavelength and, in some cases, becomes greater than the veiling in the optical domain (e.g., McClure et al., 2013). The central star illuminates the inner disk, and this region absorbs photons from the star, the accretion spot, and even the accretion funnel, and re-emits them in the infrared (IR) as the system rotates (e.g., Chiang & Goldreich, 1997, 1999). Therefore, the inner disk is suggested as the origin of the additional continuum emission that is essential to explaining the near-infrared (NIR) veiling, although the measured veiling is often too large to be explained as coming merely from the disk emission, based on model predictions (e.g., Folha & Emerson, 1999; Johns-Krull & Valenti, 2001). Many authors have used different techniques to connect the observed veiling with inner disk emission, such as measuring the temperature of the region where the veiling comes from and using a blackbody fit to the veiling. For most of these systems, they found temperatures compatible with the dust temperature in the inner disk (e.g., Fischer et al., 2011; Antoniucci et al., 2017; Alcalá et al., 2021). 
For a few targets, the blackbody temperature measured using veiling is too high for dust to survive in the inner disk; this would indicate that the veiling should arise from the gas in the inner disk, inside the star-disk co-rotation radius (e.g., Antoniucci et al., 2017; Alcalá et al., 2021). However, McClure et al. (2013) found no evidence of hot gas inside the inner disk that would be responsible for the NIR veiling. Instead, they explained the IR veiling as the combined emission from the accretion shock on the stellar surface and dust around the sublimation rim of the inner disk. The veiling around $1\,\mu$m and the veiling in the $K$ band and beyond can also have different origins. While the accretion spot emission contribution is not very substantial around $1\,\mu$m, we do not expect a significant contribution from the inner disk either. The origin of the veiling in this spectral domain is poorly understood. However, significant veiling has been measured around $1\,\mu$m, primarily for systems with high accretion rates (e.g., Edwards et al., 2006; Fischer et al., 2011; Ingleby et al., 2013). In the literature, only a few plausible explanations are given for this veiling, such as a contribution from emission lines that fill in the photospheric lines, or an origin in the accretion shock (e.g., Sicilia-Aguilar et al., 2015; Dodin & Lamzin, 2013). Even if we do not detect these extra emission lines directly, they can contribute to making the photospheric lines of the stellar spectra shallower, which increases the veiling measurement. We cannot exclude other possible explanations for the IR veiling, such as an envelope around the star, which can also be a source of additional emission to the photospheric continuum, with emission compatible with the NIR veiling (e.g., Calvet et al., 1997). However, CTTSs are usually Class II stars, and we would not expect a significant contribution from a dusty envelope. Furthermore, the veiling in the $K$ band is higher than the dust envelope emission can explain (Folha & Emerson, 1999). In this work, we study the veiling and its variability in the NIR, using a sample of young accreting stars observed with the Canada-France-Hawaii Telescope SPectropolarimètre InfraROUge (CFHT/SPIRou). Our sample comprises stars with different properties, such as the mass accretion rate, spectral type, and inclination with respect to our line of sight. We computed the veiling for the $YJHK$ bands and compared the results with accretion and inner disk diagnostics. The paper is organized as follows. In Sect. 2, we present the sample of stars used in this work and describe the data. In Sect. 3, we describe the procedures used to measure the veiling. We present the results obtained from the veiling measurements in Sect. 4. In Sect. 5, we discuss the possible origin of the veiling and compare our results with those of previous works. In Sect. 6, we present our conclusions. ## 2 Observations and target selection Table 1: Sample of stars and number of observations per observational period. Notes: (a) the inclination $i_*$ between the stellar rotation axis and our line of sight, obtained from v$\sin{i}$, period, and radius of each target, except for DO Tau and DG Tau, whose inclinations were obtained from the outer disk parameters (see references); (b) references for the SpT, v$\sin{i}$, Av, and inclination ($i$), respectively.

Star | SpT | v$\sin{i}$ (km/s) | Av (mag) | i∗ (°) (a) | 2018 | 2019a | 2019b | 2020a | 2020b | 2021a | 2021b | 2022a | References (b)
---|---|---|---|---|---|---|---|---|---|---|---|---|--- | | (km/s) | (mag) | (∘) | | | | | | | | | Accreting systems | CI Tau | K4 | 9.5$\pm$0.5 | 0.65 | 55${}^{+35}_{-10}$ | 2 | 6 | 26 | 5 | 39 | - | - | - | (1),(2),(2),(2) DoAr 44 | K2-K3 | 17.0$\pm$1.1 | 2.0$\pm$0.2 | 30$\pm$5 | - | 8 | - | - | - | - | - | - | (3),(3),(3),(3) GQ Lup | K7 | 5$\pm$1 | 0.7 | $\sim$30 | - | 8 | - | 18 | 6 | - | - | - | (4),(5),(4),(5) TW Hya | K6 | 6.0$\pm$1.2 | 0.0 | 18$\pm$10 | - | 12 | - | 14 | - | 27 | - | - | (1),(1),(6),(7) V2129 Oph | K5 | 14.5$\pm$0.3 | 0.6 | 60 | 9 | - | - | 17 | 8 | - | - | - | (1),(1),(8),(9) BP Tau | K5 | 9.0$\pm$0.5 | 0.45 | $\sim$45 | - | - | 21 | - | - | - | 34 | - | (10),(11),(6),(11) V347 Aur | M2-M3 | 11.7${}^{+0.16}_{-0.24}$ | 3.4 | 40 | - | - | 18 | - | 13 | 12 | 22 | - | (12),(13),(12),(14) DG Tau | K6 | 24.7$\pm$0.7 | 1.60$\pm$0.15 | 38$\pm$2 | - | - | - | 2 | 29 | - | - | - | (10),(10),(6),(15) RU Lup | K7 | 8.5$\pm$4.8 | 0.0 | 24 | - | - | - | 9 | - | 17 | 13 | 1 | (16),(17),(14),(18) V2247 Oph | M0 | 20.5$\pm$0.5 | 0.98$\pm$0.02 | 45$\pm$10 | - | - | - | 9 | 7 | - | - | - | (19),(20),(19),(20) DO Tau | M0 | 14.3$\pm$0.5 | 0.75 | 37.0$\pm$3.7 | - | - | - | 3 | 8 | - | - | - | (6),(21),(6),(15) J1604∗ | K3 | 17.3$\pm$0.4 | 1.0 | ¿61 | - | - | - | 12 | - | - | - | - | (22),(22),(24),(23) PDS 70 | K7 | 16.0$\pm$0.5 | 0.01$\pm$0.07 | 50$\pm$8 | - | - | - | 4 | - | - | - | 6 | (19),(25),(19),(25) GM Aur | K4-K5 | 14.9$\pm$0.3 | 0.3$\pm$0.3 | $\geq$63 | - | - | - | - | 2 | - | 34 | - | (26),(26),(26),(26) Non-accreting systems | V819 Tau | K4 | 9.5 | | | - | - | - | 1 | - | - | - | - | (27),(27) TWA 25 | M0.5 | 12.9$\pm$1.2 | | | - | - | 25 | 14 | - | - | - | - | (6),(1) TWA 9A | K6 | 7$\pm$3 | | | - | - | - | 1 | - | - | - | - | (6),(1) 111 $*$$*$footnotetext: RX J1604.3-2130A, elsewhere in the text, we used J1604 for brevity. (1) Torres et al. (2006); (2) Donati et al. (2020a); (3) Bouvier et al. (2020); (4) Alcalá et al. (2017); (5) Donati et al. (2012); (6) Herczeg & Hillenbrand (2014); (7) Alencar & Batalha (2002); (8) Donati et al. (2007); (9) Alencar et al. (2012); (10) Nguyen et al. (2012); (11) Donati et al. (2008); (12) Connelley & Greene (2010); (13) Flores et al. (2019); (14) Alecian et al. (in prep.); (15) Simon et al. (2016); (16) Alcalá et al. (2014); (17) Frasca et al. (2017); (18) Stempels et al. (2007); (19) Pecaut & Mamajek (2016); (20) Donati et al. (2010); (21) Kounkel et al. (2019); (22) Dahm et al. (2012); (23) Davies (2019); (24) Preibisch & Zinnecker (1999); (25) Thanathibodee et al. (2020); (26) Bouvier et al. (in prep.); (27) Donati et al. (2015). The sample of stars used in this work is composed of well-known young stars that are part of the SPIRou Legacy Survey-SLS science program: ”Magnetic PMS star/planet survey” of some 50 class I, II, and III stars. The SPectropolarimètre InfraROUge (SPIRou) is a high-resolution velocimeter and polarimeter spectrograph ($R\sim 75\,000$) covering the NIR wavelength range $\sim 0.98-2.35\,\mu$m, corresponding to the spectral domain of the $YJHK$ bands (Donati et al., 2018). The main science goals of CFHT/SPIRou Legacy Survey are the search for and characterization of planets around low-mass stars, and investigating the impact of the stellar magnetic field on planet and star formation in young systems (Donati et al., 2020b). We aim to investigate the veiling of accreting young stars. 
Therefore, we selected, among the sample of stars observed by the SLS, 13 stars reported as accreting systems in the literature and for which we had a reasonable number of observations compared to the stellar rotation period. Most of these targets are classified as Class II and CTTS systems, and only V347 Aur is a Class I target. In addition, we added the T Tauri star J1604 (RX J1604.3-2130A) to the sample; it is not part of the SLS program, but its CFHT/SPIRou observations are available. In Table 1, we show the list of young stars analyzed in this work and the number of observations that we have for each target and each observational period. We also list three non-accreting T Tauri stars that cover our sample's spectral types and are also slow rotators ($v\sin{i}<15\,$km/s). We used these stars as templates to compute the veiling and the residual profiles, as described in the following sections. Each CFHT/SPIRou observation consists of four sub-exposures to measure Stokes V, taken at different orientations of the polarimeter and used to compute the non-polarized and the circularly polarized profiles. As the focus of this work is to analyze only the non-polarized component of the spectrum, we averaged the four sub-exposures to increase the signal-to-noise ratio (S/N) of the spectra obtained on each night. For the non-accreting systems, we averaged all the observations, obtaining mean spectra that were used as templates. The CFHT/SPIRou data were reduced, and the telluric-corrected spectra were obtained, using the data reduction system APERO, versions 0.6.131, 0.6.132, and 0.7.232 (Cook et al., 2022). The spectra were corrected for the barycentric velocity and locally normalized to the continuum level, using a polynomial function to fit the continuum. ## 3 Procedures to measure the veiling Figure 1: Examples of the four spectral regions used to measure the veiling of CI Tau. In each panel, we show two different nights, representing a small and a high veiling estimated for this target. We show the CI Tau spectrum in black and the residual profile (in red) obtained after subtracting the veiled template. The V819 Tau template spectra are also displayed before (orange) and after (blue) applying the veiling correction. The photospheric lines were removed in the residual profiles, showing that the veiling was accurately determined for most nights. We computed the IR veiling of the targets following the method described by Hartigan et al. (1989), where we compare the spectra of the target with the spectrum of a non-accreting T Tauri star of a similar spectral type. The Zeeman broadening of the photospheric lines can affect the veiling measurements. Therefore, we used weak-line T Tauri stars (WTTSs) as templates, since they should present magnetic activity similar to that of the CTTSs (e.g., Johns-Krull et al., 2000). The WTTSs also present physical properties comparable to those of the CTTSs, such as chromospheric activity and surface gravity, which makes WTTSs suitable for measuring the veiling in accreting systems. We list the templates applied to each star in Table 2. Before comparing the target and template spectra, we shifted and broadened the template spectra to match the target spectra, using the radial velocities of the targets.222 For most targets, the radial velocities were computed using the CCF profiles generated by the SPIRou pipeline, implementing a numerical mask corresponding to the target spectral type.
For TW Hya, we computed the radial velocity by cross-correlating the target spectra with a WTTS of similar spectral type. We also used the literature $v\sin{i}$ values of the targets for the broadening. Due to the IR veiling wavelength dependence (e.g., Alcalá et al., 2021), we measured the veiling in four different spectral regions, 10710Å-10810Å, 12840Å-12910Å, 16120Å-16220Å, and 22600Å-22690Å, which we call $r_{Y}$, $r_{J}$, $r_{H}$, and $r_{K}$, representing the $YJHK$ veilings, respectively. The stars RU Lup, DO Tau, and DG Tau present many emission lines along their spectra, probably originating from the accretion shock and characteristic of high mass-accretion-rate systems; these lines prevented us from using the same $Y$ and $J$ spectral regions for the veiling calculations. For these targets, we therefore used the spectral regions 10760Å-10785Å and 12400Å-12550Å to measure the $r_{Y}$ and $r_{J}$ veilings, respectively. On some nights, the 10710Å-10810Å region of the TW Hya and V2247 Oph spectra presented features that made veiling measurements impossible, and we had to use the 10864Å-10920Å region to measure the $r_{Y}$ veiling instead. We determined the best veiling value for each target spectrum through a $\chi^{2}$ minimization routine. The veiling was defined as the ratio of the continuum excess flux to the stellar photospheric flux ($r_{\lambda}=F_{\lambda_{Excess}}/F_{\lambda_{Phot}}$); zero veiling therefore means the system has no flux excess above the stellar photosphere. In the spectral type range of our sample (K2 to M3), the choice of template does not strongly affect the computed veiling, as shown by Folha & Emerson (1999), since the difference between templates of different spectral types is small and produces an almost null relative veiling, within the uncertainties. We also measured the systematic veiling between our templates, which corresponds to the average veiling obtained when comparing each template with the others. We found it to be $r_{Y}=0.098\pm 0.068$, $r_{J}=0.02\pm 0.02$, $r_{H}=0.05\pm 0.02$, and $r_{K}=0.04\pm 0.02$, which should only affect the veiling determination of stars with very small or no veiling. In Fig. 1, we present the results for the four photospheric regions used to measure the veiling for CI Tau from two nights, representing spectra with smaller and higher veiling values. We show the target spectrum, and the unveiled and veiled template spectra. We also computed the residual profile, obtained by subtracting the veiled template from the target's spectrum. Most of the residual profiles show almost no features at the location of photospheric lines, indicating that the veiling measurements were correctly determined for most nights, and that the photospheric lines were correctly removed. However, some of the spectra were quite noisy and this affected the veiling determinations. The veiling error obtained for each night, written in the plot, comes from the $\chi^{2}$ minimization process, where we compare the target with the template spectra. Besides taking into account the noise of the target and template spectra333 We used as the spectral noise the standard deviation measured in the continuum of each night, in the spectral region used to measure the veiling. to compute the veiling, there are other error sources, such as those associated with the normalization process, that we did not consider in estimating the veiling error. ## 4 Results We present in Fig.
2 the NIR average veiling over the observation nights, measured for all the four regions, referred to as $r_{Y}$, $r_{J}$, $r_{H}$, and $r_{K}$. For readability, we split the targets into two groups; systems with $r_{K}$ higher and smaller than 1. We also show the averaged veiling obtained for each target in Table 2. The veiling values increase from the $Y$ to the $K$ band for most of the targets, similar to the results found in previous works (e.g., McClure et al., 2013; Sousa et al., 2021; Alcalá et al., 2021). Besides this general result, the average veiling for some individual targets remains the same from the $Y$ to $K$ band. For example, the veiling measured for V2247 Oph. However, this target does not present significant veiling in any band, probably due to the fact that it is a more evolved system, where the accretion and the dust in the inner disk are too faint to be detected. Furthermore, this M-type star makes detecting excess in the IR difficult due to the low contrast between the stellar photosphere and the inner disk emission (e.g., Ercolano et al., 2009). Another example is CI Tau, where the average veiling decreases from the $Y$ to the $H$ band, followed by an increase to the $K$ band. In that case, each band’s veiling variability is high and the difference in the $Y$ to $H$ average veiling is smaller than the standard deviation. Figure 2: Average NIR veiling (left) and the veiling variability diagnostic (right) measured in different wavelength regions. The top and bottom panels show the targets with $r_{K}$ higher and lower than 1, respectively. The veiling increases from the $Y$ to the $K$ band for most of the targets. The error bars in the left panel represent the standard deviation of the average veiling over all the observation nights. In this figure and the following figures, the color and symbol codes in the panels identify each target. Figure 3: Veiling variability diagnostic as a function of the average veiling. The rms value refers to the root-mean-square of the veiling variability and $\sigma$ is the average error on the veiling measurements. Each panel shows the veiling $r_{\lambda}$ measured in a different band ($YJHK$). Systems with higher veiling also present higher veiling variability. Due to the small number of photospheric lines and the lower S/N values in the $Y$ region of the spectra, the veiling in this region was not as well determined, compared to the other bands. For some nights or targets, the veiling is even slightly negative, which has no physical meaning – in such cases, we can assume that the system in this region has zero veiling. We also checked the variability of the veiling measured in each band, computing for each target the RMS of the $YJHK$ veilings, using all the observed nights. To characterize the variability over the noise, we subtracted the average error of the veiling ($\sigma$) from the RMS. Then, we computed $S=\sqrt{rms^{2}-\sigma^{2}}$, as a veiling variability diagnostic, following Cody et al. (2014). We show the results in Figs. 2 and 3 as a function of the band and veiling measured, respectively. Most of the targets present variability above the error level. The veiling in the $H$ band appears to be less variable than the other bands, while the $K$ band presents the highest variability, driven by the systems with the highest veilings, which are also the most variable ones (see Fig. 3). The average veiling measured in RU Lup is relatively high compared to the other targets, mainly in the $K$ band. 
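For reference, the veiling fit of Sect. 3 and the variability diagnostic defined above can be summarized in the following minimal sketch. It assumes continuum-normalized, velocity-shifted, and rotationally broadened spectra stored as NumPy arrays; the grid of trial veiling values, the simple Δχ²=1 uncertainty estimate, and the array and function names are illustrative choices, not the actual pipeline used for our measurements.

```python
import numpy as np

def fit_veiling(target_flux, template_flux, noise, r_grid=np.linspace(-0.2, 8.0, 821)):
    """Estimate the veiling r by chi-square minimization (Hartigan et al. 1989).

    For continuum-normalized spectra, a veiled template is (template + r) / (1 + r),
    since the excess continuum fills in the photospheric lines.
    """
    chi2 = np.array([np.sum(((target_flux - (template_flux + r) / (1.0 + r)) / noise) ** 2)
                     for r in r_grid])
    i_best = int(np.argmin(chi2))
    # crude 1-sigma interval from the grid points with chi2 <= chi2_min + 1
    within = r_grid[chi2 <= chi2[i_best] + 1.0]
    sigma_r = 0.5 * (within.max() - within.min()) if within.size > 1 else np.nan
    return r_grid[i_best], sigma_r

def variability_diagnostic(nightly_veilings, nightly_errors):
    """S = sqrt(rms^2 - sigma^2) of Cody et al. (2014): veiling scatter above the noise."""
    rms = np.std(nightly_veilings)    # scatter of the nightly veilings about their mean
    sigma = np.mean(nightly_errors)   # average per-night veiling uncertainty
    return np.sqrt(max(rms ** 2 - sigma ** 2, 0.0))
```

In practice, the fit is performed independently in each of the four spectral windows, and slightly negative best-fit values are interpreted as zero veiling, as noted above.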
The high average veiling of RU Lup is primarily due to the veiling measured in the 2021a period of observations (see Sect. 4.3). In that period, RU Lup also presented an increase in the photometric AAVSO $V$ band brightness (about $0.3$ mag brighter than in the next observational period, when the veiling starts to go down). Therefore, the average veiling of RU Lup is probably overestimated and does not represent a quiescent value. ### 4.1 Veiling compared to inner disk diagnostics Figure 4: Average NIR veiling ($r_{\lambda}$) as a function of the spectral energy distribution slope between 2 and 8 $\mu$m ($\alpha_{2-8}$), which we used as the NIR disk emission diagnostic. The solid line is the linear fit to the data, and the corresponding slope is given in each panel. The NIR veiling seems to scale with the SED slope. The emission from the inner disk is claimed to contribute to the veiling. In such a case, we would expect a correlation between the veiling and other inner disk emission diagnostics obtained from photometric data. The slope of the spectral energy distribution (SED) is often used as a disk emission diagnostic (e.g., Lada et al., 2006; Muzerolle et al., 2010; Teixeira et al., 2012). Using photometric data from different surveys, such as SDSS (Gunn et al., 1998), Gaia DR2 (Gaia Collaboration et al., 2018), 2MASS, WISE (Wright et al., 2010), Spitzer (Fazio et al., 2004; Rieke et al., 2004), Herschel (PACS), and Akari (IRC and FIS), we constructed the SED of the targets and measured the slope of the SED between 2 and 8 $\mu$m ($\alpha_{2-8}$), which is the spectral range that traces significant inner disk emission. The $\alpha_{2-8}$ slope is smaller (more negative) for systems with little or no inner disk emission, and is higher (and even positive) for systems that present an inner disk emission excess (e.g., Lada et al., 2006; Muzerolle et al., 2010). We list the $\alpha_{2-8}$ slope computed for the targets in Table 2. We show in Fig. 4 the NIR veiling in the four spectral regions ($YJHK$), averaged over all the observation nights, as a function of $\alpha_{2-8}$. The veiling presents a clear linear correlation with the SED slope, mainly from the $J$ to the $K$ band. The $Y$ band veiling values are still correlated with the SED slope, but less than the other bands, probably because the contribution from the inner disk emission becomes more important at longer wavelengths. The $W_{1}-W_{2}$ ([3.4]-[4.6]) color index from the WISE telescope (Wright et al., 2010), not shown here, is also correlated with the NIR veiling, presenting results similar to those for $\alpha_{2-8}$. Figure 5: Comparison between the color excess computed using the average NIR veiling and the 2MASS photometry. left: $(H-K_{\rm s})_{excess}$, right: $(J-K_{\rm s})_{excess}$. See the text for the color excess definition. The dashed line represents a slope equal to 1. The NIR color excesses computed using the average veiling and the 2MASS magnitudes agree for most targets. We would expect the NIR excess computed using the veiling to scale with the emission excess from NIR photometric data, which is often used as an inner disk emission indicator (Hillenbrand et al., 1998; Rebull, 2001; Rebull et al., 2002). We computed the $(H-K_{\rm s})_{excess}$ using the observed $H-K_{\rm s}$ color from 2MASS, corrected for extinction using the $A_{V}$ quoted in Table 1 and the SVO Filter (Rodrigo et al., 2012; Rodrigo & Solano, 2020) $A_{\lambda}/A_{V}$ relations to obtain the $A_{H}$ and $A_{K}$ extinctions.
Then, we compared this dereddened color excess to the intrinsic color $(H-K_{\rm s})_{o}$ expected for an object with the same spectral type (Pecaut & Mamajek, 2013). The color excess is $(H-K_{\rm s})_{excess}=(H-K_{\rm s})_{obs,dred}-(H-K_{\rm s})_{o}$. We also relate the color excess and the excess flux, leading to $(H-K_{\rm s})_{excess}=-2.5\log{[(1+r_{H})/(1+r_{K})]}$, where we used the veiling definition as $r_{\lambda}=F_{\lambda_{excess}}/F_{\lambda_{Phot}}$. Then we can directly compare the color excess computed using the veiling and using the photometric measurements. We compare the two sides of this equation in Fig. 5, which shows a linear tendency and similar values considering the error of the measurements. Performing similar procedures, we computed the $(J-K_{\rm s})_{excess}$, and the results are also presented in Fig. 5. The color excess computed using the 2MASS photometry is dependent on the $A_{v}$ of the systems, and the targets V347 Aur and CI Tau present discrepant $A_{v}$ values in the literature. CI Tau presents $A_{v}=0.65\,$mag (Donati et al., 2020a) and $A_{V}=1.90\,$mag (Herczeg & Hillenbrand, 2014), while the $(J-K_{\rm s})_{excess}$ computed using the largest $A_{V}$ better agrees with the color excess calculated using veiling, the $(H-K_{\rm s})_{excess}$, and the mass accretion rate computed in the next section seem to be in better agreement with $A_{V}=0.65\,$mag; thus, we used this extinction value in the paper. The $A_{V}$ range of V347 Aur is even larger, with values from $2.2$ to $7.5\,$mag (e.g., Dahm & Hillenbrand, 2020). The $A_{V}=3.4\,$mag computed using NIR colors by Connelley & Greene (2010) seems to better represent the value obtained from the veiling calculations. The 2MASS photometric magnitudes used for this analysis were obtained years before our observations. However, we do not expect a significant change in the NIR magnitudes over the next few years, aside from the daily timescale variation (e.g., Carpenter et al., 2001). It is likely that RU Lup will stand as an exception, as the average veiling was measured in a non-quiescent period. Then, the color excess computed using the veiling is higher than that obtained using the 2MASS magnitudes. All these relations between the NIR veiling and the inner disk emission diagnostics show that a higher veiled system also presents higher inner disk emission, which is expected if the veiling has a contribution from the inner disk. However, to draw this conclusion, we assumed that these inner disk indicators, such as the slope of the spectral energy distribution, are suitable inner disk diagnostics. Kidder et al. (2021) showed that some of the targets classified as Class III using these disk indicators still show some inner disk emission based on the $K$ band excess. They checked the emission excess of V819 Tau, which we used as a template to compute the veiling, but the $H$ and $K$ excesses found were very small, similar to the systematic veiling we obtained in Sect. 3. The relation between veiling and inner disk emission is clear for the $K$ band and less for other bands, probably due to the influence of another additional continuum source for the other spectral regions and/or a smaller contribution from the inner disk to these shorter wavelengths. ### 4.2 Veiling compared to accretion diagnostics Figure 6: Accretion diagnostics as a function of average NIR veiling. 
From left to right: Average equivalent width corrected from the veiling of ${\rm{He\textsc{I}}}$ (10830 Å), ${\rm Pa}\beta$ , and ${\rm Br}\gamma$ lines. We show a connection between the system’s accretion diagnostic and the NIR veiling. We know that CTTSs are still accreting gas from the disk and accreting systems typically present strong and variable emission lines that form in the accretion funnel or in the disk wind (e.g., Muzerolle et al., 1998; White & Basri, 2003; Edwards et al., 2003; Kwan & Fischer, 2011; Alencar et al., 2012). The CFHT/SPIRou wavelength range includes some emission lines from hydrogen and helium, such as ${\rm Pa}\beta$ and ${\rm Br}\gamma$ as well as the ${\rm{He\textsc{I}}}$ (10830 Å) triplet; in particular, the latter is very sensitive to accretion and ejection processes (e.g., Kwan et al., 2007; Sousa et al., 2021). The dynamics of the circumstellar lines for this sample of stars will be analyzed in an accompanying paper (Sousa et al. in prep.). We measured the equivalent width of the circumstellar lines, and the average over all observing nights is listed in Table 2. First, we used the equivalent width as an accretion diagnostic (Alcalá et al., 2017), as systems that present larger equivalent widths are supposed to present higher mass accretion rates as well. Table 2: Parameters of the targets derived in this work. Star | Std | $\mathrm{v_{rad}}$ | $\mathrm{r_{Y}}$ | $\mathrm{r_{J}}$ | $\mathrm{r_{H}}$ | $\mathrm{r_{K}}$ | $\mathrm{EW}_{\mathrm{HeI}\ }$aa$a$We used the convention of positive equivalent width for emission lines, and negative values for absorption lines. | $\mathrm{EW}_{\mathrm{Pa\beta}}$aa$a$We used the convention of positive equivalent width for emission lines, and negative values for absorption lines. | $\mathrm{EW}_{\mathrm{Br\gamma}}$aa$a$We used the convention of positive equivalent width for emission lines, and negative values for absorption lines. | $\mathrm{\dot{M}_{\mathrm{Pa\beta}}}$ | $\mathrm{\dot{M}_{\mathrm{Br\gamma}}}$ | $\alpha_{2\\_8}$bb$b$ Slope of the spectral energy distribution measured between 2 and 8 $\mu$m. 
---|---|---|---|---|---|---|---|---|---|---|---|--- | | | | | | | | | | $\times 10^{-8}$ | $\times 10^{-8}$ | | | ($\mathrm{km}\ \mathrm{s}^{-1})$ | | | | | $(\mathring{\mathrm{A}})$ | $(\mathring{\mathrm{A}})$ | $(\mathring{\mathrm{A}})$ | $(M_{\odot}\mathrm{yr}^{-1})$ | $(M_{\odot}\mathrm{yr}^{-1})$ | CI Tau | V819TAU | 16.3 $\pm$ 0.5 | 0.53 $\pm$ 0.28 | 0.49 $\pm$ 0.17 | 0.39 $\pm$ 0.09 | 0.92 $\pm$ 0.23 | 9.5 $\pm$ 3.2 | 14.0 $\pm$ 2.6 | 7.9 $\pm$ 1.8 | 2.2 | 4.2 | -0.84 DoAr 44 | V819TAU | -6.1 $\pm$ 0.7 | 0.20 $\pm$ 0.09 | 0.48 $\pm$ 0.10 | 0.60 $\pm$ 0.06 | 1.28 $\pm$ 0.05 | 5.4 $\pm$ 1.4 | 9.1 $\pm$ 0.8 | 3.8 $\pm$ 0.5 | 1.7 | 1.5 | -1.09 GQ Lup | TWA9A | -3.0 $\pm$ 0.3 | 0.19 $\pm$ 0.09 | 0.58 $\pm$ 0.11 | 0.78 $\pm$ 0.14 | 1.87 $\pm$ 0.26 | 0.9 $\pm$ 2.0 | 6.3 $\pm$ 1.6 | 2.0 $\pm$ 0.7 | 2.5 | 1.9 | -0.89 TW Hya | TWA25 | 12.4 $\pm$ 0.1 | 0.08 $\pm$ 0.07 | 0.09 $\pm$ 0.06 | 0.10 $\pm$ 0.05 | 0.23 $\pm$ 0.11 | 5.1 $\pm$ 4.9 | 15.9 $\pm$ 6.2 | 6.8 $\pm$ 2.6 | 0.6 | 0.3 | -1.92 V2129 Oph | V819TAU | -7.1 $\pm$ 0.6 | 0.05 $\pm$ 0.10 | 0.11 $\pm$ 0.06 | 0.25 $\pm$ 0.07 | 0.76 $\pm$ 0.16 | 0.9 $\pm$ 1.1 | 1.0 $\pm$ 0.7 | 0.5 $\pm$ 0.3 | 0.2 | 0.1 | -1.62 BP Tau | V819TAU | 15.2 $\pm$ 0.6 | 0.22 $\pm$ 0.09 | 0.36 $\pm$ 0.06 | 0.36 $\pm$ 0.09 | 0.78 $\pm$ 0.10 | 3.9 $\pm$ 1.4 | 7.7 $\pm$ 1.4 | 3.2 $\pm$ 0.8 | 1.3 | 1.1 | -1.69 DG Tau | TWA9A | 15.1 $\pm$ 3.4 | 0.29 $\pm$ 0.22 | 0.80 $\pm$ 0.20 | 1.09 $\pm$ 0.16 | 2.90 $\pm$ 0.51 | 11.7 $\pm$ 2.7 | 16.3 $\pm$ 2.1 | 6.8 $\pm$ 1.0 | 6.4 | 7.6 | -0.26 RU Lup | TWA9A | -1.3 $\pm$ 1.4 | 1.14 $\pm$ 0.51 | 1.70 $\pm$ 0.65 | 1.65 $\pm$ 0.44 | 6.27 $\pm$ 2.97 | 23.2 $\pm$ 4.0 | 30.1 $\pm$ 3.0 | 12.2 $\pm$ 1.5 | 7.8 | 11.9 | -0.21 V2247 Oph | TWA25 | -5.8 $\pm$ 0.5 | -0.02 $\pm$ 0.07 | -0.06 $\pm$ 0.01 | 0.01 $\pm$ 0.03 | -0.03 $\pm$ 0.02 | 0.0 $\pm$ 0.2 | 0.1 $\pm$ 0.1 | -0.1 $\pm$ 0.2 | 0.009 | - | -2.54 DO Tau | TWA25 | 16.1 $\pm$ 1.0 | 0.50 $\pm$ 0.40 | 0.78 $\pm$ 0.41 | 1.07 $\pm$ 0.32 | 2.33 $\pm$ 0.61 | 1.0 $\pm$ 1.8 | 12.7 $\pm$ 3.9 | 4.0 $\pm$ 1.9 | 2.3 | 3.4 | -0.54 V347 Aur | TWA25 | 8.1 $\pm$ 0.6 | 0.31 $\pm$ 0.19 | 0.46 $\pm$ 0.26 | 0.37 $\pm$ 0.15 | 1.12 $\pm$ 0.31 | -0.9 $\pm$ 1.1 | 6.7 $\pm$ 2.1 | 2.9 $\pm$ 1.0 | 7.1 | 7.3 | -0.62 J1604 | V819TAU | -5.8 $\pm$ 0.8 | -0.14 $\pm$ 0.11 | 0.20 $\pm$ 0.07 | 0.22 $\pm$ 0.13 | 0.81 $\pm$ 0.23 | -2.6 $\pm$ 1.8 | -0.1 $\pm$ 0.2 | 0.3 $\pm$ 0.1 | - | 0.013 | -2.81 PDS 70 | TWA9A | 5.0 $\pm$ 0.4 | 0.02 $\pm$ 0.11 | 0.18 $\pm$ 0.04 | 0.19 $\pm$ 0.07 | 0.34 $\pm$ 0.08 | -0.7 $\pm$ 0.9 | 0.2 $\pm$ 0.2 | 0.1 $\pm$ 0.2 | 0.007 | 0.003 | - GM Aur | TWA9A | 14.5 $\pm$ 0.3 | -0.00 $\pm$ 0.06 | 0.13 $\pm$ 0.09 | 0.11 $\pm$ 0.06 | 0.31 $\pm$ 0.10 | 3.4 $\pm$ 2.8 | 10.2 $\pm$ 3.2 | 4.7 $\pm$ 1.9 | 1.3 | 1.0 | - 444 We corrected the equivalent width for the veiling as $EW=EW_{measured}(r_{\lambda}+1)$, where the $r_{\lambda}$ represents the veiling computed close to each emission line. In Fig. 6, we show the veiling as a function of the veiling corrected equivalent width of the circumstellar emission lines. We see a clear relationship between the NIR veiling and the accretion diagnostics. It means that higher mass-accretion rate systems also present a higher degree of veiling; this is a similar result to that found by Folha & Emerson (1999), demonstrating that although the veiling shows a contribution from the inner disk emission, it also suggests a connection with the accretion process. 
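The veiling correction applied to the equivalent widths in Table 2 (see the table footnote) is a one-line operation; the snippet below is a minimal illustration, and the numerical values in the usage example are arbitrary placeholders rather than measurements from this work.

```python
def veiling_corrected_ew(ew_measured, r_lambda):
    """Correct a measured equivalent width for veiling: EW = EW_measured * (r_lambda + 1).

    The continuum excess dilutes the lines, so the measured EW underestimates the
    intrinsic one; r_lambda is the veiling computed close to the emission line.
    """
    return ew_measured * (r_lambda + 1.0)

# placeholder values: a measured EW of 9.4 A and a veiling of 0.49 near the line
print(veiling_corrected_ew(9.4, 0.49))  # ~14.0 A
```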
We do not have photometric data simultaneous with our spectra to accurately compute the mass accretion rates using the equivalent width of the emission lines. However, most of our systems’ NIR magnitudes are relatively long-term stable. We used the 2MASS $J$ and $K$ magnitudes to estimate the continuum flux and then calculate the mass accretion rate using the ${\rm Pa}\beta$ and ${\rm Br}\gamma$ lines, respectively. The star V347 Aur is known to present long-term photometric variations (e.g., Dahm & Hillenbrand, 2020), and we did not compute the mass accretion rate of this target. We followed the procedures described by Gullbring et al. (1998) to compute the line flux and luminosity. The stellar parameters used are listed in Table 3, and we used the stellar distance from the Gaia collaboration (Gaia Collaboration et al., 2021). We dereddened the 2MASS magnitudes using the same method described in the previous section. To compute the accretion luminosity, we used the fits proposed by Alcalá et al. (2017), which show the relation between the line and accretion luminosities. Then, we determined the accretion rate setting the system inner radius as $5R_{\ast}$ (Gullbring et al., 1998). In Table 2, we show the individual mass accretion rates computed using the ${\rm Pa}\beta$ and ${\rm Br}\gamma$ lines. In Fig. 7, we show the average mass accretion rate as a function of the $Y$ to $K$ band veiling. Once again, we can connect the highest accreting system with the highest NIR veiling computed in the four bands. Figure 7: Average mass accretion rate as a function of average NIR veiling. The average mass accretion was computed using the line fluxes of ${\rm Pa}\beta$ and ${\rm Br}\gamma$. The error bar is the standard deviation between the two measurements. We see an association between the veiling and the mass accretion rate. Table 3: Stellar parameters Star | M∗ | R∗ | ref ---|---|---|--- | (M☉) | (R☉) | CI Tau | 0.90 | 2.0 | (1) DoAr 44 | 1.20 | 2.0 | (2) GQ Lup | 0.86 | 2.26 | (3) TW Hya | 0.80 | 1.1 | (4) V2129 Oph | 1.35 | 2.1 | (5) BP Tau | 0.70 | 1.99 | (6) DG Tau | 0.65 | 2.05 | (6) RU Lup | 1.15 | 2.39 | (7) V2247 Oph | 0.35 | 2.0 | (8) DO Tau | 0.60 | 2.0 | (9) J1604 | 1.24 | 1.4 | (10) PDS 70 | 0.76 | 1.26 | (11) GM Aur | 0.95 | 1.71 | (12) (1) Donati et al. (2020a); (2) Bouvier et al. (2020); (3) Alcalá et al. (2017); (4) Donati et al. (2011); (5) Alencar et al. (2012); (6) Johns-Krull (2007); (7) Alcalá et al. (2014); (8) Donati et al. (2010); (9) Ricci et al. (2010); (10) Sicilia-Aguilar et al. (2020); (11) Müller et al. (2018); (12) Bouvier et. al. (in prep.) ### 4.3 Veiling night-to-night variability In this study, we have access to observations obtained on different nights and sometimes different observational periods for our sample of stars. This allowed us to analyze the night-to-night veiling variation and a possible long-term veiling variability on a timescale of two years. In Fig. 8, we show the veiling measured as a function of the observation dates. We used the modified Lomb-Scargle periodogram (Horne & Baliunas, 1986) to study a possible periodicity of the veiling variations. We performed the periodogram analysis of the veiling in two ways: using all the observed nights and computing one periodogram per observational period. We searched for periods from 2 to 15 days or limited the search to the number of observed days, when the target was observed for less than 15 days. However, we did not find a significant periodic signal for most systems. 
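For reference, a period search like the one described above can be reproduced with a generalized Lomb-Scargle implementation such as the one provided by astropy. We used the modified periodogram of Horne & Baliunas (1986), so the snippet below is only a sketch of the procedure; the frequency grid, the false-alarm-probability level, and the function and variable names are assumptions for illustration.

```python
import numpy as np
from astropy.timeseries import LombScargle

def search_veiling_period(times_bjd, veiling, veiling_err,
                          p_min=2.0, p_max=15.0, fap=0.01):
    """Search for a periodic veiling signal between p_min and p_max days.

    Returns the best trial period (days), its periodogram power, and the power
    level corresponding to the requested false-alarm probability.
    """
    # limit the longest trial period to the time span actually covered
    p_max = min(p_max, times_bjd.max() - times_bjd.min())
    frequency = np.linspace(1.0 / p_max, 1.0 / p_min, 2000)  # cycles per day
    ls = LombScargle(times_bjd, veiling, veiling_err)
    power = ls.power(frequency)
    threshold = ls.false_alarm_level(fap, minimum_frequency=frequency.min(),
                                     maximum_frequency=frequency.max())
    best = int(np.argmax(power))
    return 1.0 / frequency[best], power[best], threshold
```

In a search of this kind, a peak is typically considered significant only if its power exceeds the false-alarm threshold; as noted above, we did not find such a signal for most systems.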
We also phased the veiling using the known stellar rotation period of the targets, and again, most of the systems do not seem to vary in phase with the stellar rotation. In Sect. 4, we used the veiling error as a threshold of the veiling variability, using the RMS as a variability diagnostic, see Fig. 2. Then, any point above this limit represents a true variability of the veiling; below this threshold, the variability is at the same level as the errors associated with the veiling. We can see that the veiling RMS presents values above the threshold of veiling variability for most of the targets and veiling bands. Only V2247 Oph shows most values below the threshold limit, while a few bands of TW Hya and DoAr 44 present most of the RMS values below the threshold. Then, for V2247 Oph, we cannot attribute a variability to the veiling, which is consistent with this target presenting a small veiling and weak level of accretion and inner disk emission diagnostics. We know the veiling is generally variable and we can trust the detection of veiling variability for most targets. Figure 8 shows that for all the veiling variable targets, the veiling varies on a timescale of at least one day and the veiling computed in the four spectral regions presents the same variability timescale. These results show that whatever region the IR veiling comes from, this region is dynamic and its flux changes on a timescale of days. Figure 8: Night-by-night veiling values measured in four different spectral regions. We show each observational period per target in an individual panel. A missing veiling point in a specific band means that the spectral regions were subject to effects that prevented the veiling from being measured. In addition to the variability in terms of veiling, it is also not clearly periodic, at least on the timescale of stellar rotation. For more details, see text. We also investigated veiling variability on a timescale of months to a few years; a change in the veiling in that timescale can have a different origin from the day-scale veiling variability, discussed above. The latter can be associated with the dynamic of the system’s rotation, while the possible long- term veiling variability reflects a change in the system’s accretion and/or inner disk conditions. In Figure 9, we present the averaged veiling measured at each observational period. This plot displays nine systems that were observed in more than one observational season. Most targets do not present a significant difference in the veiling along the observational period. The $K$ band veiling of CI Tau and the $YJH$ veiling of DO Tau show a possible small change in this timescale. Then, we conclude that the veiling variability on a timescale of months-to-years is on the same order of magnitude as the day-to- day variability. RU Lup is the unique target with an evident change in the veiling along the observational periods in the four bands, but much more pronounced in the $K$ band, along with a high standard deviation. We associate this change in the veiling with an occasional high accretion episode that occurred in 2021a, and despite the veiling still being high in 2021b, it seems to start to diminish later on. In 2022a, it is even smaller, but we have only one observation to serve as the basis for this assumption. The circumstellar emission lines’ equivalent widths corroborate with this assumption, as they increase in 2021a and start to decrease in the subsequent observation periods, similarly to the veiling. 
In contrast, the average veiling is stable, at least for most of the stars we analyzed, except for very highly accreting systems, such as RU Lup, which can present episodes of high veiling. Furthermore, within the same observational period, some targets show a few episodes of increased veiling, such as V347 Aur (Fig. 8); however, the average veiling values are still sustained. Figure 9: Average of the NIR veiling computed at each observational period, with the error bar as the standard deviation of the computed mean veiling. We merged the measured veiling in 2020a and 2020b of the stars V2129 Oph and GQ Lup, as they are successive observations. The mean NIR veiling seems to be stable for most of the targets over a few months or years. ## 5 Discussion The dependence of veiling on wavelength is ubiquitous from the UV to the NIR range. The veiling in the optical domain decreases from the blue to the red part of the spectrum, which is an effect of the decrease of the accretion spot continuum contribution (e.g., Calvet & Gullbring, 1998); moreover, the veiling does not vary over some wavelength ranges (e.g., Basri & Batalha, 1990). On the other hand, in the IR range, the veiling increases with wavelength, as seen in Figs. 2 and 8, in agreement with similar results in the literature (e.g., Fischer et al., 2011; Alcalá et al., 2021). Figure 10: $YJH$ veiling as a function of the $K$ band veiling. Each veiling value corresponds to the mean of all the observed nights, and the error bar is its standard deviation. The solid line is the linear fit to the data, and the fitted line's slope is written in each panel. The correlation between the $K$ band veiling and the $YJH$ band veilings increases from the $Y$ to the $H$ band. The average veiling value for the entire sample of CTTSs is $\left<r_{Y}\right>=0.2\pm 0.3$, $\left<r_{J}\right>=0.4\pm 0.4$, $\left<r_{H}\right>=0.5\pm 0.5$, and $\left<r_{K}\right>=1.4\pm 1.6$. We note that the average veiling is the lowest in the $Y$ band. Over these wavelengths, from $Y$ to $K$, the veiling can have contributions from different sources. For example, we expect the veiling in the $K$ band to present a larger contribution from the inner disk than in the $Y$ band. In Fig. 10, we show the $YJH$ veiling as a function of the $K$ band veiling. We can see that the $J$ and $H$ veilings seem to increase as the $K$ band veiling increases; however, there is a weaker correlation with the $Y$ band veiling. These results are supported by the analysis of the correlation between the veiling samples and by the linear fit shown in Fig. 10. We computed the linear correlation coefficient (r) between two samples, where r=1 represents a perfect correlation and r=0 no correlation. The $Y$ and $K$ band veilings present a correlation coefficient of 0.87, while the coefficients between the $J$ and $K$ bands and between the $H$ and $K$ bands are 0.98 and 0.96, respectively. Similar results were found by Cieza et al. (2005), who compared the excess in the $J$ and $H$ bands with the $K$ band excess, showing that both present a linear correlation with the $K$ band excess; this was explained as the $JHK$ excess arising from the same region. The NIR veiling should be the result of a combination of physical processes. Alcalá et al. (2021) computed the NIR veiling at several wavelengths for a sample of very high mass-accretion-rate systems, including DG Tau (which is also in our sample).
These authors fitted the veiling as a function of wavelength using a blackbody function and found temperatures compatible with the presence of dust in the inner disk. However, the temperature was too high ($>2000\,K$) for the dust to survive in a few of their cases. They also argued that the veiling should have a contribution from the hot gas inside the disk sublimation radius, and similar results were found by Fischer et al. (2011). To investigate this proposition, we looked at the CO bandhead at 2.3$\mu$m of the CFHT/SPIRou data. This band, when in emission, is expected to form in the hot gas in the inner disk. Using the $K$ band veiling, we veiled the template and removed the photospheric lines of the CO bandhead to obtain the residual CO profiles. Most targets do not present clear signs of CO emission, showing that this band is strictly photospheric. However, a few residual profiles of V347 Aur, which is a Class I object, along with DO Tau and RU Lup, which are strong accretors, present CO emission in some observations, indicating the presence of hot gas in the inner disk. In particular, RU Lup presents these hints of CO emission in the observational period when the veiling was high, and the system probably ensued a high episodic accretion. A further analysis of the CO bandhead is beyond the scope of this paper and it will be carried out in a dedicated paper exploring the significance of these CO emissions. In the previous section, we show that the NIR veiling, mainly $r_{K}$, presents a good correlation with the inner disk emission diagnostics obtained from the NIR photometric data and SED fit, demonstrating that the NIR veiling has an important contribution from the inner disk. However, we also see a correlation between veiling and accretion diagnostics. A high accretion-rate system presents larger veiling values in the IR. This shows that high-mass accretion rate systems should feature higher inner disk heating, thus higher temperatures and stronger inner disk emission excess as a consequence. Espaillat et al. (2022) fit most of the continuum spectra from NUV to NIR of the accreting star CVSO 109A quite well, using a combination of emission from the accretion shock (multiple funnel flow model) on the stellar surface and emission from the irradiated inner edge of the dusty disk. However, the inner disk and accretion shock model do not adequately reproduce the continuum excess in the $Y$ to $J$ band. Indeed, while the veiling in the $J$ to $K$ band seems to point to a significant contribution on the part of the inner disk emission excess, the $Y$ band veiling origin is still unknown. Dodin & Lamzin (2013) predicted significant veiling in the $Y$ band from the accretion spot. They argued that the accretion spot (continuum emission and emission lines formed in the accretion shock) could account for optical and near IR veiling up to the $J$ band. Unfortunately, our $Y$ band veiling was not as well computed as for the other bands due to several issues in this spectral region and given the photospheric lines are not quite so prominent. Despite these obstacles, the Y band veiling agrees on a smaller scale with both accretion and inner disk diagnostics. Figure 11: Average NIR veiling as a function of the system’s inclination with respect to our line of sight. The solid line is the data’s linear fit, and the fitted line’s slope is given in each panel. We do not consider the discrepant targets (TW Hya and V2247 Oph) to fit the data. 
The NIR veilings tend to be anti-correlated with the system's inclination (see text). We checked whether the inclination of the system has any impact on the measured veiling values. In Fig. 11, we show the NIR veiling as a function of the inclination of the system with respect to our line of sight, listed in Table 1. Apart from two discrepant systems (TW Hya and V2247 Oph), we can see an anti-correlation between the veiling and the system's inclination. This anti-correlation is not pronounced, as we can see in the linear fit slope, due to the spread of points (the correlation coefficients between the inclination and the $YJHK$ veilings are -0.64, -0.76, -0.80, and -0.70, respectively), but the decreasing tendency of veiling with inclination is clear. We also caution that the inclinations used for DG Tau and DO Tau are the outer disk inclinations, and the disk and stellar inclinations are not necessarily the same (Bohn et al., 2022). If confirmed, this anti-correlation can be due to a geometric effect: the more inclined the system is, the less we see of the inner disk edge, where the NIR veiling is supposed to arise. The two targets that do not seem to follow this tendency, TW Hya and V2247 Oph, no longer have dust in the inner disk and are known to have gaps or holes in their inner disks (Calvet et al., 2002; Pontoppidan et al., 2008). In that case, independently of the system's inclination, we would not expect to detect IR veiling, assuming that the IR veiling is due to dust emission in the inner disk. ## 6 Conclusion In this work, we analyze the NIR veiling computed using high-resolution data from CFHT/SPIRou for a sample of 14 low-mass young stars. We found the veiling to increase from the $Y$ to the $K$ band, as a result of the increasing emission contribution from the inner disk with wavelength. The veiling correlates with other photometric inner disk diagnostics, such as the color excess and the slope of the spectral energy distribution, mainly in the $JHK$ bands, providing further evidence that the NIR veiling arises from hot dust in the inner disk. We also found a linear correlation between the veiling and the accretion properties of the system. This shows that accretion contributes to inner disk heating and, consequently, to the inner disk emission excess. This effect is enhanced in high mass-accretion-rate systems, which also present a denser inner disk and higher inner disk emission (e.g., Sullivan & Kraus, 2022). We analyzed the NIR veiling variability through the modified Lomb-Scargle periodogram and did not find any significant periodic signal in the four bands on timescales typical of stellar rotation ($<15\,$days), which also suggests that the veiling comes from the dust emission in the inner disk. However, we show that the veiling is variable for most targets on a timescale of at least one day. Besides the night-by-night veiling variability, the mean NIR veiling per season appears to be mostly stable for most targets on timescales of several months to years. ###### Acknowledgements. We thank the referee for the suggestions that helped to clarify this paper. We want to thank Claire Moutou, Sylvie Cabrit, Nicolas Grosso, and Konstantin Grankin for carefully reading the manuscript and giving suggestions to improve the paper.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 742095; SPIDI: Star-Planets-Inner Disk- Interactions; http://www.spidi-eu.org and grant agreement No. 740651 NewWorlds). We acknowledge financial support from CNPq, CAPES and Fapemig. We acknowledge funding from the French National Research Agency (ANR) under contract number ANR-18-CE31-0019 (SPlaSH). This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AYA2017-84089. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of MaunaKea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. ## References * Alcalá et al. (2021) Alcalá, J. M., Gangi, M., Biazzo, K., et al. 2021, A&A, 652, A72 * Alcalá et al. (2017) Alcalá, J. M., Manara, C. F., Natta, A., et al. 2017, A&A, 600, A20 * Alcalá et al. (2014) Alcalá, J. M., Natta, A., Manara, C. F., et al. 2014, A&A, 561, A2 * Alencar & Batalha (2002) Alencar, S. H. P. & Batalha, C. 2002, ApJ, 571, 378 * Alencar et al. (2012) Alencar, S. H. P., Bouvier, J., Walter, F. M., et al. 2012, A&A, 541, A116 * Antoniucci et al. (2017) Antoniucci, S., Nisini, B., Biazzo, K., et al. 2017, A&A, 606, A48 * Basri & Batalha (1990) Basri, G. & Batalha, C. 1990, ApJ, 363, 654 * Bohn et al. (2022) Bohn, A. J., Benisty, M., Perraut, K., et al. 2022, A&A, 658, A183 * Bouvier et al. (2020) Bouvier, J., Alecian, E., Alencar, S. H. P., et al. 2020, A&A, 643, A99 * Calvet et al. (2002) Calvet, N., D’Alessio, P., Hartmann, L., et al. 2002, ApJ, 568, 1008 * Calvet & Gullbring (1998) Calvet, N. & Gullbring, E. 1998, ApJ, 509, 802 * Calvet et al. (1997) Calvet, N., Hartmann, L., & Strom, S. E. 1997, ApJ, 481, 912 * Carpenter et al. (2001) Carpenter, J. M., Hillenbrand, L. A., & Skrutskie, M. F. 2001, AJ, 121, 3160 * Chiang & Goldreich (1997) Chiang, E. I. & Goldreich, P. 1997, ApJ, 490, 368 * Chiang & Goldreich (1999) Chiang, E. I. & Goldreich, P. 1999, ApJ, 519, 279 * Cieza et al. (2005) Cieza, L. A., Kessler-Silacci, J. E., Jaffe, D. T., Harvey, P. M., & Evans, Neal J., I. 2005, ApJ, 635, 422 * Cody et al. (2014) Cody, A. M., Stauffer, J., Baglin, A., et al. 2014, AJ, 147, 82 * Connelley & Greene (2010) Connelley, M. S. & Greene, T. P. 2010, AJ, 140, 1214 * Cook et al. (2022) Cook, N. J., Artigau, É., Doyon, R., et al. 2022, arXiv e-prints, arXiv:2211.01358 * Dahm & Hillenbrand (2020) Dahm, S. E. & Hillenbrand, L. A. 2020, AJ, 160, 278 * Dahm et al. (2012) Dahm, S. E., Slesnick, C. L., & White, R. J. 2012, ApJ, 745, 56 * Davies (2019) Davies, C. L. 2019, MNRAS, 484, 1926 * Dodin & Lamzin (2013) Dodin, A. V. & Lamzin, S. A. 2013, Astronomy Letters, 39, 389 * Donati et al. (2007) Donati, J., Jardine, M. M., Gregory, S. G., et al. 2007, MNRAS, 380, 1297 * Donati et al. (2020a) Donati, J. F., Bouvier, J., Alencar, S. H., et al. 2020a, MNRAS, 491, 5660 * Donati et al. (2011) Donati, J. F., Gregory, S. G., Alencar, S. H. P., et al. 2011, MNRAS, 417, 472 * Donati et al. (2012) Donati, J. F., Gregory, S. G., Alencar, S. H. P., et al. 2012, MNRAS, 425, 2948 * Donati et al. (2015) Donati, J. F., Hébrard, E., Hussain, G. 
A. J., et al. 2015, MNRAS, 453, 3706 * Donati et al. (2008) Donati, J. F., Jardine, M. M., Gregory, S. G., et al. 2008, MNRAS, 386, 1234 * Donati et al. (2018) Donati, J.-F., Kouach, D., Lacombe, M., et al. 2018, in Handbook of Exoplanets, ed. H. J. Deeg & J. A. Belmonte, 107 * Donati et al. (2020b) Donati, J. F., Kouach, D., Moutou, C., et al. 2020b, MNRAS, 498, 5684 * Donati et al. (2010) Donati, J. F., Skelly, M. B., Bouvier, J., et al. 2010, MNRAS, 402, 1426 * Edwards et al. (2006) Edwards, S., Fischer, W., Hillenbrand, L., & Kwan, J. 2006, ApJ, 646, 319 * Edwards et al. (2003) Edwards, S., Fischer, W., Kwan, J., Hillenbrand, L., & Dupree, A. K. 2003, ApJL, 599, L41 * Ercolano et al. (2009) Ercolano, B., Clarke, C. J., & Robitaille, T. P. 2009, MNRAS, 394, L141 * Espaillat et al. (2022) Espaillat, C. C., Herczeg, G. J., Thanathibodee, T., et al. 2022, AJ, 163, 114 * Faesi et al. (2012) Faesi, C. M., Covey, K. R., Gutermuth, R., et al. 2012, PASP, 124, 1137 * Fazio et al. (2004) Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, ApJS, 154, 10 * Fischer et al. (2011) Fischer, W., Edwards, S., Hillenbrand, L., & Kwan, J. 2011, ApJ, 730, 73 * Flores et al. (2019) Flores, C., Connelley, M. S., Reipurth, B., & Boogert, A. 2019, ApJ, 882, 75 * Folha & Emerson (1999) Folha, D. F. M. & Emerson, J. P. 1999, A&A, 352, 517 * Frasca et al. (2017) Frasca, A., Biazzo, K., Alcalá, J. M., et al. 2017, A&A, 602, A33 * Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1 * Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1 * Gullbring et al. (1998) Gullbring, E., Hartmann, L., Briceño, C., & Calvet, N. 1998, APJ, 492, 323 * Gully-Santiago et al. (2017) Gully-Santiago, M. A., Herczeg, G. J., Czekala, I., et al. 2017, ApJ, 836, 200 * Gunn et al. (1998) Gunn, J. E., Carr, M., Rockosi, C., et al. 1998, AJ, 116, 3040 * Hartigan et al. (1989) Hartigan, P., Hartmann, L., Kenyon, S., Hewett, R., & Stauffer, J. 1989, ApJS, 70, 899 * Hartigan et al. (1991) Hartigan, P., Kenyon, S. J., Hartmann, L., et al. 1991, ApJ, 382, 617 * Hartmann & Kenyon (1990) Hartmann, L. W. & Kenyon, S. J. 1990, ApJ, 349, 190 * Herczeg & Hillenbrand (2014) Herczeg, G. J. & Hillenbrand, L. A. 2014, ApJ, 786, 97 * Hillenbrand et al. (1998) Hillenbrand, L. A., Strom, S. E., Calvet, N., et al. 1998, AJ, 116, 1816 * Horne & Baliunas (1986) Horne, J. H. & Baliunas, S. L. 1986, APJ, 302, 757 * Ingleby et al. (2013) Ingleby, L., Calvet, N., Herczeg, G., et al. 2013, ApJ, 767, 112 * Johns-Krull (2007) Johns-Krull, C. M. 2007, ApJ, 664, 975 * Johns-Krull & Valenti (2001) Johns-Krull, C. M. & Valenti, J. A. 2001, ApJ, 561, 1060 * Johns-Krull et al. (2000) Johns-Krull, C. M., Valenti, J. A., & Linsky, J. L. 2000, ApJ, 539, 815 * Kidder et al. (2021) Kidder, B., Mace, G., López-Valdivia, R., et al. 2021, ApJ, 922, 27 * Kounkel et al. (2019) Kounkel, M., Covey, K., Moe, M., et al. 2019, AJ, 157, 196 * Kwan et al. (2007) Kwan, J., Edwards, S., & Fischer, W. 2007, ApJ, 657, 897 * Kwan & Fischer (2011) Kwan, J. & Fischer, W. 2011, MNRAS, 411, 2383 * Lada et al. (2006) Lada, C. J., Muench, A. A., Luhman, K. L., et al. 2006, ApJ, 131, 1574 * McClure et al. (2013) McClure, M. K., Calvet, N., Espaillat, C., et al. 2013, ApJ, 769, 73 * Müller et al. (2018) Müller, A., Keppler, M., Henning, T., et al. 2018, A&A, 617, L2 * Muzerolle et al. (2010) Muzerolle, J., Allen, L. E., Megeath, S. T., Hernández, J., & Gutermuth, R. A. 
2010, ApJ, 708, 1107 * Muzerolle et al. (1998) Muzerolle, J., Hartmann, L., & Calvet, N. 1998, AJ, 116, 455 * Nguyen et al. (2012) Nguyen, D. C., Brandeker, A., van Kerkwijk, M. H., & Jayawardhana, R. 2012, ApJ, 745, 119 * Pecaut & Mamajek (2013) Pecaut, M. J. & Mamajek, E. E. 2013, ApJS, 208, 9 * Pecaut & Mamajek (2016) Pecaut, M. J. & Mamajek, E. E. 2016, MNRAS, 461, 794 * Pontoppidan et al. (2008) Pontoppidan, K. M., Blake, G. A., van Dishoeck, E. F., et al. 2008, ApJ, 684, 1323 * Preibisch & Zinnecker (1999) Preibisch, T. & Zinnecker, H. 1999, AJ, 117, 2381 * Rebull (2001) Rebull, L. M. 2001, AJ, 121, 1676 * Rebull et al. (2002) Rebull, L. M., Makidon, R. B., Strom, S. E., et al. 2002, AJ, 123, 1528 * Rei et al. (2018) Rei, A. C. S., Petrov, P. P., & Gameiro, J. F. 2018, A&A, 610, A40 * Ricci et al. (2010) Ricci, L., Testi, L., Natta, A., et al. 2010, A&A, 512, A15 * Rieke et al. (2004) Rieke, G. H., Young, E. T., Engelbracht, C. W., et al. 2004, ApJS, 154, 25 * Rodrigo & Solano (2020) Rodrigo, C. & Solano, E. 2020, in XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, 182 * Rodrigo et al. (2012) Rodrigo, C., Solano, E., & Bayo, A. 2012, SVO Filter Profile Service Version 1.0, IVOA Working Draft 15 October 2012 * Sicilia-Aguilar et al. (2015) Sicilia-Aguilar, A., Fang, M., Roccatagliata, V., et al. 2015, A&A, 580, A82 * Sicilia-Aguilar et al. (2020) Sicilia-Aguilar, A., Manara, C. F., de Boer, J., et al. 2020, A&A, 633, A37 * Simon et al. (2016) Simon, M. N., Pascucci, I., Edwards, S., et al. 2016, ApJ, 831, 169 * Sousa et al. (2021) Sousa, A. P., Bouvier, J., Alencar, S. H. P., et al. 2021, A&A, 649, A68 * Stempels et al. (2007) Stempels, H. C., Gahm, G. F., & Petrov, P. P. 2007, A&A, 461, 253 * Sullivan & Kraus (2022) Sullivan, K. & Kraus, A. L. 2022, ApJ, 928, 134 * Teixeira et al. (2012) Teixeira, P. S., Lada, C. J., Marengo, M., & Lada, E. A. 2012, A&A, 540, A83 * Thanathibodee et al. (2020) Thanathibodee, T., Molina, B., Calvet, N., et al. 2020, ApJ, 892, 81 * Torres et al. (2006) Torres, C. A. O., Quast, G. R., da Silva, L., et al. 2006, A&A, 460, 695 * Valenti et al. (1993) Valenti, J. A., Basri, G., & Johns, C. M. 1993, AJ, 106, 2024 * White & Basri (2003) White, R. J. & Basri, G. 2003, ApJ, 582, 1109 * Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
# Improving Factual Accuracy of Neural Table-to-Text Output by Addressing Input Problems in ToTTo Barkavi Sundararajan Somayajulu Sripada Ehud Reiter Department of Computing Science, University of Aberdeen {b.sundararajan.21, yaji.sripada<EMAIL_ADDRESS> ###### Abstract Neural Table-to-Text models tend to hallucinate, producing texts that contain factual errors. We investigate whether such errors in the output can be traced back to problems with the input. We manually annotated 1,837 texts generated by multiple models in the _politics_ domain of the ToTTo dataset. We identify the input problems that are responsible for many output errors and show that fixing these inputs reduces factual errors by between 52% and 76% (depending on the model). In addition, we observe that models struggle in processing tabular inputs that are structured in a non-standard way, particularly when the input lacks distinct row and column values or when the column headers are not correctly mapped to corresponding values. Improving Factual Accuracy of Neural Table-to-Text Output by Addressing Input Problems in ToTTo Barkavi Sundararajan and Somayajulu Sripada and Ehud Reiter Department of Computing Science, University of Aberdeen {b.sundararajan.21, yaji.sripada, <EMAIL_ADDRESS> ## 1 Introduction Table-to-Text generation refers to the task of generating natural language descriptions from tabular data (Parikh et al., 2020; Chen et al., 2020a, b) and is widely used in several application domains such as medical diagnosis (Pauws et al., 2018), financial (Zhu et al., 2023) and weather reporting (Sripada et al., 2002; Gkatzia et al., 2017; Upadhyay and Massie, 2022) and sports summaries (Thomson et al., 2020). Neural language models are known to generate fluent texts (Ji et al., 2023) but may generate outputs that are factually incorrect or unrelated to the provided input data. Such undesirable generation is called ‘hallucination’ (Wang and Sennrich, 2020; Raunak et al., 2021; Ji et al., 2023). Previous studies on Table-to-Text tasks adopt traditional seq2seq methods to generate table descriptions (Wiseman et al., 2017; Puduppully et al., 2019; Rebuffel et al., 2019). Recently, Transformer based models (Devlin et al., 2019; Raffel et al., 2020; OpenAI, 2023) have shown remarkable progress in language generation from textual input (Badaro et al., 2023), however tabular data still needs more improvement to control hallucinations (Rebuffel et al., 2021). Neural models struggle with tabular data, especially when the inputs do not have distinct cell values from rows and columns mapped along with their respective headers. These input problems lead the model to generate more factual errors (Kasner and Dušek, 2024). Using the ToTTo tabular dataset (Parikh et al., 2020), we identify and address input problems that are responsible for factual errors. Some common tabular input problems in the ToTTo are i. ‘non-atomic’ cell values, where a column contains multiple values such as leader name, party name and % of votes in one cell rather than a single indivisible value, ii. missing important cell values in the input (see Table 1(b)) and iii. nested column headers and row headers in the Wikipedia tables111https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Accessibility/Data_tables_tutorial#Column_headers:_bad_example that lead to incorrect mapping of the cell values. 
(a) Original cells highlighted in Yellow as Input 1996 United States House of Representatives election --- Party | Candidate | Votes | % Democratic | Eleanor Holmes Norton (inc.) | 134,996 | 90.00 Republican | Sprague Simonds | 11,306 | 7.54 Llama 2-13B Output for tabular input a: --- Eleanor Holmes Norton (inc.) won with 7.54%U of the vote. Sprague Simonds was the Republican candidate and received 22.38%U of the vote. (b) Corrected Tabular data by including relevant cells 1996 United States House of Representatives election --- Party | Candidate | Votes | % Democratic | Eleanor Holmes Norton (inc.) | 134,996 | 90.00 Republican | Sprague Simonds | 11,306 | 7.54 Llama 2-13B Output for tabular input b: --- Democratic Party candidate Eleanor Holmes Norton won with 90% of the vote. Republican Party candidate Sprague Simonds received 7.54%. Table 1: ToTTo example: Highlighted cells in yellow are passed as input to the model. Passing the appropriate cells (i.e., % votes and party name) as input, as shown in Table 1(b) fixes the factual errors. Compare the Table 1(b) output (with no errors) to the Table 1(a) output (with NUMBER errors denoted by a superscript U). Table 2: Linearized Input from (a) and (b) is passed as input to Llama 2-13B model. For example, we present the Linearized Input for Table 1(a) here. The corresponding input for Table 1(b) is not shown, but it will include all related cells highlighted in yellow in (b). <page_title> 1996 United States House of Representatives election </page_title> <table> <cell> Eleanor Holmes Norton (inc.) <col_header> Candidate </col_header> </cell> <cell> Republican <col_header> Party </col_header> </cell> <cell> Sprague Simonds <col_header> Candidate </col_header> </cell> <cell> 7.54 <col_header> % </col_header> </cell> </table> Table 1(b) presents a sample from ToTTo. Only the highlighted cells from Table 1(a) are passed to the model (as shown in Table 1(b)). Passing Norton’s % of votes and her party name (compare Table 1(b) to Table 1(a)) eliminates the hallucinated % of votes (see output for Table 1(b)); this correction is based on the input problem described in Sec. 5.2. In this paper, we score the quality of output texts by manually annotating output errors instead of using automatic evaluation metric scores such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), PARENT (Dhingra et al., 2019) and BLEURT (Sellam et al., 2020). We conducted a pilot study, where we fine-tuned T5-base and T5-large models (Raffel et al., 2020), analysing 1,677 _politics_ domain texts from the ToTTo dataset through manual error analysis adopted from Thomson and Reiter (2020). These manual error annotations allowed us to identify patterns of errors in the generated text which were then traced back to input problems. Our approach is summarised as follows: 1. I. We systematically correct the tabular inputs for the _politics_ domain in ToTTo to adhere to a standard form to ensure the generation of factual texts from neural models. The correction procedure is elaborated in Sec. 5.2 and is supplemented by pseudocode in App. D. 1. (a) We apply this correction to a larger subset of 210 samples, resulting in a 62% decrease in factual errors for T5-base and a 57% decrease for T5-large in the generated text (Sec. 6.1). 2. (b) We conduct experiments on Llama 2-7B and Llama 2-13B models (Touvron et al., 2023) with 40 challenging samples selected from the previous 210 samples. 
Tailoring zero-shot prompts for specific input and error annotation on 160 texts showed that correcting input reduces factual errors by 52% in Llama 2-7B and 76% in Llama 2-13B (Sec. 6.2). 2. II. The manual error annotation methodology adopted from Thomson and Reiter (2020) is detailed in App. B; this builds on the work of Sundararajan et al. (2022) for ToTTo _politics_ domain outputs222Error annotation guidelines and sample annotations from our human evaluation is available at https://github.com/BarkaviSJ/totto_politics_human_annotations. The inter- annotation agreement on the error annotation was good, with a Fleiss’ Kappa of 0.622 (Sec. 7). ### 1.1 Table to Text dataset, ToTTo ToTTo is an open-domain English language dataset (Parikh et al., 2020), where the input $X$ is taken from Wikipedia table $T$, which includes the table’s metadata (such as title) and a set of highlighted cells, along with their headers. This structured information is flattened to create a linearized text representation of the table, as mentioned in Table 1(b). This crowdsourced dataset is paired with a natural language description, denoted as output $Y$, comprising a sequence of tokens $y_{1},y_{2},\ldots,y_{n}$, which provides a coherent summary of the content present in the input table $X$. These input-output pairs from the ToTTo dataset can be used for fine-tuning or prompting the neural language models. As shown in Table 1(a), the Input X is often observed to be problematic and fixing these problems is the main focus of this paper. ## 2 Related Work Prior work on ToTTo: Wang et al. (2022) proposed a framework, LATTICE, that preserves the structural information of the cell values (tokens) within the same rows or columns, and by removing the attention mechanism flow for other unrelated tokens. Hu et al. (2023) incorporates content planning and generation from Su et al. (2021) and synthetically added noisy cells in their fine-tuning regime. While these approaches are model agnostic and improved automatic metric scores such as BLEU (Papineni et al., 2002), PARENT (Dhingra et al., 2019) and BLEURT (Sellam et al., 2020) by few points in the leaderboard 333https://github.com/google-research-datasets/ToTTo, the fundamental problem with the tabular input remains. Chen et al. (2022) acknowledged that the target cells in the ToTTo tabular input are not always highlighted in their work titled ‘Table Structure Understanding and Text Deliberation Approach’. They used a template to extract all facts from the raw table for 1.2K training samples (only for the inputs with rows and columns fewer than 8) and employed hierarchical multi-head attention to capture the structural information in their fine-tuning process. Though this approach promises to retain the facts from raw tables, it only addresses simpler tables with fewer than 8 rows and columns, still has limitations for longer tables, complex tabular structures and non-atomic tabular cells. Our focus on correcting inputs aiming to achieve factually correct outputs aligns with the work of Dušek et al. (2019) on the E2E dataset (Novikova et al., 2017). Their study also demonstrated that improving inputs in an NLG dataset helps in improving model outputs. Error Analysis: While the automatic metric scores such as BLEU, PARENT and BLEURT help evaluate the model’s performance at a high level, relying solely on these metrics will not address specific weaknesses i.e., lower metric scores do not provide insights into specific error types in the output. 
We follow guidelines from van Miltenburg et al. (2021) to perform error analysis in NLG systems and investigate errors in the output at a more granular level by adopting the manual error annotation approach from Thomson and Reiter (2020); Sundararajan et al. (2022); Thomson et al. (2023). Maynez et al. (2020) also emphasized automatic metrics are not sufficient to study the hallucination problem and provided a detailed study on intrinsic and extrinsic hallucination in abstractive summarization. We studied hallucination in our evaluation scheme by annotating different categories of errors in the output tokens (single token or group of tokens). Mapping our adopted methodology to Maynez et al. (2020)’s work, intrinsic is the main error category occurring when generated outputs are not faithful to the given input. It includes WORDW, NAMEN, DATE_DIMENSIOND, NUMBERU, CONTEXTC and OTHERO) from our error categories. Extrinsic refers to our ADDITIONA category (see Sec. B.3). LLM prompts: Empirical evaluation of prompting strategies on the three large language models (LLMs) by Sivarajkumar et al. (2023) in clinical NLP found that tailoring task-specific prompt is crucial for achieving accuracy. In a study on Text-to-SQL, Chang and Fosler-Lussier (2023) investigated zero-shot prompting strategies, highlighting the significance of table representation. Their findings indicated that normalized database prompts outperformed unnormalized ones; this motivates our initial step of correcting tabular inputs to a standard form. In our work, we leveraged a recent LLM, Llama 2 (Touvron et al., 2023) and tailored our zero-shot prompt (Kojima et al., 2022) specific to the content of each table. ## 3 Pilot Study ### 3.1 Methodology We only look at the _politics_ domain on the ToTTo validation set. We build upon the work of Sundararajan et al. (2022) to identify the causes of output errors in T5 models. In this paper, we go beyond error annotations to fix these errors in our main study, both in T5 and Llama 2 models (detailed in Sec. 5). The error categories mentioned in Sundararajan et al. (2022) are: WORDW, NAMEN, DATE_DIMENSIOND, NUMBERU, OTHERO, CONTEXTC, NOT_CHECKABLENC and NON_ENGLISHNE. In this work, we excluded NOT_CHECKABLENC and introduced a new error category, ADDITIONA, which is used when the generated text has added words or phrases that diverge from the input. Definitions for all error categories are provided in Sec. B.3. ### 3.2 Insights from our Pilot Study For the pilot study, we fine-tuned both the T5-base (T5-b) and the T5-large (T5-l) models on the ToTTo dataset by following the baseline approach (Kale and Rastogi, 2020). The hyperparameters and fine-tuning details for these two models are shown in App. A. Category | T5-base (T5-b) | T5-large (T5-l) ---|---|--- | Count | % | Count | % No Error | 358 | 47 | 450 | 60 Omissions | 272 | 36 | 218 | 29 Errors | 124 | 17 | 86 | 11 Total Count | 754 | 754 Table 3: Pilot Study analysis: T5-b and T5-l Models in ToTTo Politics Domain. ‘Errors’ category is the main focus of our work; ‘omissions’ are excluded. Our analysis (presented in Table 3) shows that: No Error: 47% of the samples from T5-b and 60% of the samples from T5-l are error-free. Omissions: Omissions occur when the generated text fails to mention some information from the input (González Corbelle et al., 2022) without making any factual errors. If the output has errors and omissions, we classify it as an error. The T5-b had 36% omissions and T5-l had 29%. 
Errors: Our analysis revealed that T5-b made factual errors in 17% and T5-l made factual errors in 11% of the total samples.
Hypothesis: Based on the insights from this analysis, we hypothesize that when tabular input data is structured in non-standard ways, models struggle to interpret these ambiguous inputs, leading them to generate factually inaccurate outputs. We test this hypothesis by addressing input problems related to non-standard tabular structures.
## 4 Input Problems
Due to the practical challenges involved in improving the tabular input for the entire dataset, which includes unique headers and tabular structures for each input, our focus is on analyzing a subset of samples containing errors. We examined 124 error samples from the T5-b and 86 error samples from T5-l models within the ToTTo _politics_ domain, as identified in Table 3. We aim to segregate the errors originating from non-standard or illogical nested table structures. We categorize these input problems into two broad categories, elaborated in Sec. 4.1 and Sec. 4.2.
### 4.1 Generic Input Problems
Non-atomic tabular cell values: When a table cell contains multiple atomic values (see Table 4(c)). Examples of such non-atomic forms include multiple leaders’ names, votes, term dates, or election years all in a single cell. We further categorize these problems into ‘single record lacking atomicity’ and ‘multiple records lacking atomicity’ (shown in Fig. 2 in App. F) to demonstrate how the models struggle when records of multiple leaders lack atomicity.
Complex table type: When a table contains election results in sentence form, models struggle to interpret and generate meaningful texts because the sentence-form data lacks the needed context (see Table 11(d) in App. F).
Insufficient input: In some cases, the necessary cells are not highlighted in the tabular input, resulting in incorrect outputs. Our analyses in Table 1(b) and Table 5(c) demonstrate that outputs become factually correct when relevant cells are included.
Longer table input: Models often struggle to generate accurate texts for lengthy table inputs, especially when the data is not in a standard form and lacks clear cell relationships.
### 4.2 Politics Domain-Specific Input Problems
Politics specific headers: In ToTTo, the use of symbols, for example ‘+’ or ‘-’, instead of clear semantic terms like _‘swing percentage’ or ‘% change compared to previous election’_ as column headers caused 5% of errors. This lack of semantic guidance in the input made it difficult for models to accurately generate the correct output text (see Table 10(c) in App. F).
List of leader names in the input: In the _politics_ domain, we observed a specific issue when input data contains a list of leader names. Models tend to favour the leader whose name appears first in the list (for example, either from the table title or as the first leader in highlighted cells), even if they have lost the election (see Table 13(c), Table 14, Table 15(c) in App. F). This becomes even more challenging when leader names are associated either with missing values (opponent leader’s name or vote count) in other columns in the input or when the tabular cell values are non-atomic. The manual fixes we applied to each of these input problems with examples are detailed in Sec. 5.2.
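Although all corrections in this work were applied manually (Sec. 5.2), the generic problem types above can also be surfaced programmatically to decide which tables need attention. The sketch below is an illustrative triage pass under simple assumptions: a crude ‘Name (Party) XX.X%’ regex as the non-atomicity signal and the 20-row/10-column limit later used in Sec. 5.2.4. It is not the procedure we actually followed.

```python
import re

# Heuristic signal for embedded party/percentage data inside a single cell,
# e.g. "Dan Sullivan (Republican) 48.0%" (illustrative assumption only).
EMBEDDED_RECORD = re.compile(r"\([^)]+\)\s*\d+(?:\.\d+)?%")

def triage_table(rows, max_rows=20, max_cols=10):
    """rows: list of dicts mapping column header -> cell value.
    Returns a list of suspected input problems for manual inspection."""
    problems = []
    if len(rows) > max_rows or any(len(r) > max_cols for r in rows):
        problems.append("longer table input")
    for i, row in enumerate(rows):
        for header, value in row.items():
            if EMBEDDED_RECORD.search(str(value)):
                problems.append(f"row {i}: non-atomic cell under '{header}'")
            if value in ("", None):
                problems.append(f"row {i}: empty cell under '{header}' (possible insufficient input)")
    return problems
```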
## 5 Main Study
(a) Input Table with Original Cells highlighted in Yellow
Page Title: 2014 United States Senate elections
Section Title: Elections leading to the next Congress
State | Candidates
---|---
Alaska | Dan Sullivan (Republican) 48.0% Mark Begich (Democratic) 45.8% Mark Fish (Libertarian) 3.7% Ted Gianoutsos (Independent) 2.0%
Virginia | Mark Warner (Democratic) 49.1% Ed Gillespie (Republican) 48.3% Robert Sarvis (Libertarian) 2.4%
(b) Input Table with Corrected Cells highlighted in Yellow
State | Candidates | Party | % votes
---|---|---|---
Alaska | Dan Sullivan | Republican | 48.0%
Alaska | Mark Begich | Democratic | 45.8%
Virginia | Mark Warner | Democratic | 49.1%
Virginia | Ed Gillespie | Republican | 48.3%
(c) Outputs Before and After Input Data Correction
T5-base before fix: Alaska: Dan Sullivan (R) defeated Mark Begich (D) 45.8% toW 3.7%U and Ted Gianoutsos (Independent) 2.0%, and Virginia: Mark Warner won 49.1% toW 2.4%U.
T5-base after fix: Alaska was won by Dan Sullivan (R) with 48.0% of the vote, and Virginia was won by Mark Warner (D) with 49.1% of the vote.
Llama 2-7B before fix: 2014 United States Senate elections were held on November 4A, 2014. The Republican Party gained control of the Senate for the first time since 2006. The Democrats lost their majority in the Senate and Republicans won a net gain of eight seats to take control of the chamber with a 54-46 majority. The Republicans also took control of the House of Representatives, giving them full control of Congress for the first time since 2010.A
Llama 2-7B after fix: In the United States Senate elections of 2014, Dan Sullivan (Republican) won in Alaska with 48.0% of votes against Mark Begich (Democratic) who got 45.8%. In Virginia, Mark Warner (Democratic) won with 49.1% of votes against Ed Gillespie (Republican) who got 48.3%.
Table 4: Fixing multiple leaders’ votes that are not atomic, as detailed in Sec. 5.2.1. ‘Before fix’ shows the errors produced by models with Table 4(a) as input, and ‘After fix’ shows that both models did not produce any factual error with Table 4(b) as input. The error tokens in outputs are annotated with the respective error code in superscript (W for WORD, U for NUMBER and A for ADDITION errors).
### 5.1 Llama 2 Models
In our main study, in addition to T5 models, we included Llama 2-7B (L-7B) and Llama 2-13B (L-13B) models to test our hypothesis on non-standard tabular inputs. We received the official model weights from Meta (downloaded from https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and then quantized our Llama 2 models following the installation instructions at https://github.com/ggerganov/llama.cpp. We ran experiments using the full model weights and the model with 4-bit integer quantization for the ToTTo dataset (_politics_ domain). Both produced comparable output quality, with no difference in errors for the quantized models. Therefore, we finalized our experiments with 4-bit quantized models to save computational resources and used them for local inference on a MacOS CPU. We set the temperature to 0.0 as it produced fewer errors. Further details on data corrections for these models are discussed in Sec. 5.4.
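For reproducibility, the inference setup just described can be approximated locally along the following lines. This sketch uses the llama-cpp-python bindings rather than invoking llama.cpp directly; the model file name, context size and token limit are illustrative assumptions, and only the zero-shot prompt (one of the App. C templates) and the temperature of 0.0 follow the setup above.

```python
# Sketch: local zero-shot inference with a 4-bit quantized Llama 2 model via
# llama-cpp-python (illustrative; not the exact command-line setup we ran).
from llama_cpp import Llama

llm = Llama(model_path="llama-2-13b.Q4_0.gguf", n_ctx=2048)  # file name is an assumption

linearized_table = "<page_title> Minnesota Amendment 2 </page_title> <table> ... </table>"
prompt = (
    "The task is to summarize the information from the given input table data "
    "into a single coherent sentence. Use only the information mentioned in the "
    "input table data. Input table data is: " + linearized_table
)

out = llm(prompt, max_tokens=128, temperature=0.0)  # temperature 0.0 as in Sec. 5.1
print(out["choices"][0]["text"].strip())
```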
(a) Input Table with Original Cells highlighted in Yellow
Page Title: Minnesota Amendment 2
Section Title: Results
Choice | Votes | % Votes
---|---|---
Referendum failed No | 1,588,752 | 53.84
Yes | 1,362,030 | 46.16
(b) Input Table with Corrected Cells highlighted in Yellow
Choice | Votes | % Votes
---|---|---
Referendum failed No | 1,588,752 | 53.84
Yes | 1,362,030 | 46.16
(c) Outputs Before and After Input Data Correction
T5-base before fix: The Minnesota Amendment 2 was approvedW by 53.84 percent of voters, with a turnout ofC 46.16 percent.
T5-base after fix: Minnesota Amendment 2 was defeated by 53.84 percent of voters, while 46.16 percent voted in favor of the measure.
Llama 2-7B before fix: 53.84% of Minnesota voters voted forW Amendment 2.
Llama 2-7B after fix: 53.84% of the voters voted no on Amendment 2, while 46.16% voted yes.
Table 5: Fixing Insufficient input problems, as detailed in Sec. 5.2.3. ‘Before’ shows WORD and CONTEXT errors due to missing choices of votes. ‘After’ shows these errors are resolved when these cell values are provided.
### 5.2 Manual Fixes for Input Problems
We followed a systematic procedure to apply manual fixes to specific input problems, as discussed from Sec. 5.2.1 to Sec. 5.2.6. Supplementary pseudocode for these fixes is also provided in App. D. Algorithm 1 takes the tabular data and title (metadata) from ToTTo as input parameters and executes six functions to gather insights and return the corrected tabular data. The first two functions, DataSizeManageableForLongTable and IdentifyLeaderNameOrder, do not correct the tabular input but provide insights on the input problems leading to factual errors in the outputs. The other four functions correct the input problems as shown in algorithm 4, algorithm 5, algorithm 6 and algorithm 7.
#### 5.2.1 Non-Atomic Cell Values
Correction: We corrected the input to store individual leader data, including votes, party, and election details, as a separate column variable. In Table 4(a), where the ‘candidates’ column is not atomic, all models made errors for this tabular data. The corrections involved separating each leader’s party and votes into separate columns (see Table 4(b)); a compact sketch of this transformation is given after Sec. 5.2.2 below. We passed the leading two leaders’ data, excluding the remaining leaders, in separate rows for the respective states (Alaska and Virginia). After input correction, T5-b and L-7B outputs in Table 4(c) were error-free. While some models omitted other candidate and election details, all models corrected the previous factual errors after the correction.
#### 5.2.2 Complex Table Type
Correction: Complex tabular structures were generally difficult to correct because the election results are in sentence form, e.g., see the ‘Results’ column in Table 11(d) in App. F. We could only make corrections to the ‘non-atomic’ cell values in the ‘candidates’ column. Three models generated ‘the incumbent senator Coe I. Crawford lost renomination to Edwin S. Johnson, a Democrat candidateC.’ Losing renomination means the current senator did not lose the seat to the opponent, but rather failed to be nominated in the primary to stand for reelection. This resulted in a CONTEXTC error, which occurs when the models make unsupported assumptions about the given input data, as defined in App. B. Even after correction, all models except L-13B made this CONTEXTC error (see Table 11(d)).
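The manual split applied in Sec. 5.2.1 follows a regular pattern in these election tables, so the same transformation can be expressed compactly. The sketch below assumes the ‘Name (Party) XX.X%’ layout of Table 4(a) and keeps only the two leading candidates per state, as in our correction; it is an illustration of the target format, not a replacement for the manual checks described above.

```python
import re

# One candidate record embedded in a non-atomic "Candidates" cell,
# e.g. "Dan Sullivan (Republican) 48.0%" (pattern is an assumption of this sketch).
CANDIDATE = re.compile(r"(?P<name>[^()%]+?)\s*\((?P<party>[^)]+)\)\s*(?P<pct>\d+(?:\.\d+)?)%")

def split_candidates_cell(state, cell, keep=2):
    """Turn one non-atomic cell into atomic rows like Table 4(b)."""
    rows = [{"State": state,
             "Candidate": m["name"].strip(),
             "Party": m["party"],
             "% votes": m["pct"] + "%"}
            for m in CANDIDATE.finditer(cell)]
    return rows[:keep]  # only the two leading candidates were kept per state

cell = ("Dan Sullivan (Republican) 48.0% Mark Begich (Democratic) 45.8% "
        "Mark Fish (Libertarian) 3.7% Ted Gianoutsos (Independent) 2.0%")
print(split_candidates_cell("Alaska", cell))
```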
#### 5.2.3 Insufficient Input Correction: This is a data annotation problem because it is related to how the ToTTo dataset was created by pairing the sentence description and highlighting only a subset of cells from the Wikipedia table as Input (X). This problem is straightforward to fix by including the missing cells or headers from the tabular data. In Table 5(c), when the choices for the Minnesota Amendment (i.e., ‘Referendum failed No’ and ‘Yes’ cells) were not included, all models incorrectly generated the favour of votes (‘approved’, ‘for’) and made an unsupported assumption regarding the turnout percentage. As shown in Table 5(c), all models corrected the errors after including the vote choices. #### 5.2.4 Longer Table Input Correction: When dealing with straightforward tabular data, it was easier to correct the input either by i. correcting the cell values to ensure atomicity and/or ii. including the missing cells. However, this longer tabular input not only had over 100 rows but also included complex structures with nested column headers and row headers which posed difficulties in correcting the input. Large models such as L-13B and T5-l could process tables with fewer than 20 rows for straightforward inputs. However, T5-b and L-7B struggled to produce factual information even for 20 rows. In our correction procedure, we chose an upper limit of 20 rows and 10 columns to simplify the longer tabular input. A simplified example of this problem with fewer rows is shown in Table 16 in App. F. #### 5.2.5 ToTTo: Politics Specific Correction: We included appropriate semantic cues in the headers to help the model differentiate between the generic percentage of votes and the swing percentage of votes. For other errors, we expanded the abbreviation of coalition party names to avoid errors. In Table 10(c) from App. F, the original input only had $\pm$ in the header, resulting in errors from all models including L-13B. After explicitly including ‘$\pm$ seats compared to the previous election’, both Llama 2 models corrected the previous error. However, both fine-tuned T5 models committed mistakes even after this correction. Specific Semantic Generation Issue: In some cases, the model cannot determine whether a minister was ‘shortest-lived’ or ‘longest-lived’ only based on their lifespan, birth, and death years from the input. It may require additional context to produce accurate text. One such example is shown in Table 12. All models with the original data produced incorrect WORDW. After we included the ‘Total time in office’ cell and customized the prompt for Llama 2, both Llama 2 models corrected the WORDW error. However, both T5 models did not show any improvement after including this detail. This could be due to the strong influence of the patterns learned from the fine-tuning process for similar tabular structures. #### 5.2.6 List of Leader Names Correction: We did not change the order of the leader’s name in the title, but we addressed missing vote counts and leader names to map the relations for each record as shown in Table 13(c), Table 14, and Table 15(c) in the App. F. In Table 14, the title leader ‘Chuck DeVore’ lost the election among four other party nominees. Both T5 models made errors by focusing on ‘Chuck DeVore’ as the main candidate, resulting in WORDW and CONTEXTC errors. NUMBERU errors are present in all models due to missing votes. After including the votes and party details for all candidates, both Llama 2 models corrected all errors. 
However, both T5 models still made errors stating the title leader either defeated or being defeated by all candidates, possibly due to the learned patterns during fine-tuning on the ToTTo dataset. In Table 15(c), the title leader, ‘Joseph Haslet’ ran as a candidate for Governor office in 1804 and 1807 but lost both elections. All models incorrectly generated Haslet won in both years. The correction is only limited to passing the party name and modifying the header from ‘Subject’ to ‘Candidate’. Despite this change, all models continued to generate errors. Both T5 models struggled to generate the right winning candidate in 5 out of 11 samples (Fig. 2(a)) for this input with the same set of headers. ### 5.3 T5 Models on Corrected Data Based on the insights gathered from different input problems in Sec. 4, we followed the procedure and manually corrected the original tabular input for 210 samples (taken from 124 and 86 ‘errors’ category samples in Table 3). We ran both T5-b and T5-l models on the corrected data. ### 5.4 Llama 2 Models on Corrected Data From the corrected data, as described in Sec. 5.3, we selected 40 challenging samples from the previous 210 samples as inputs to the Llama 2-7B (L-7B) and Llama 2-13B (L-13B) models. These samples were chosen to cover each of the input problem types described in Sec. 4, guaranteeing a thorough analysis. Within these 40 samples, 21 are associated with general input problems and the remaining 19 are specific to the _politics_ input problems, as shown in Fig. 2(c) and Fig. 2(d) (in App. F). We studied the _zero-shot_ capabilities of L-7B and L-13B models for the 40 challenging samples. For each sample, we employed different prompts tailoring to the content of the tabular input (Kojima et al., 2022). Two common prompts we used are shown in App. C. For each sample and each of L-7B, and L-13B, we examined the outputs before and after correcting the tabular input. ## 6 Results and Discussion We manually annotated the outputs from all models working on corrected input data, following the same procedure outlined in Sec. 3.1 and defined in App. B. In this section, we summarize the results of the total input data corrections and discuss them. The previous section also described and discussed error analysis locally while presenting individual corrections. ### 6.1 Error Reductions in T5 Models The input corrections significantly reduced factual errors, as evident in Table 6, which provides a comparison of error counts ‘before’ and ‘after’ input data fixes for each error category. It should be noted that _no prompt or instruction_ was provided to these two fine-tuned T5 models. Category | T5-base (T5-b) | T5-large (T5-l) ---|---|--- Before | After | Before | After WORD | 62 | 20 | 31 | 12 NAME | 13 | 3 | 12 | 2 DATE_DIMENSION | 7 | 1 | 6 | 1 NUMBER | 12 | 1 | 5 | 0 OTHER | 5 | 2 | 2 | 0 CONTEXT | 13 | 3 | 16 | 4 ADDITION | 2 | 1 | 2 | 1 NON-ENGLISH | 20 | 20 | 21 | 21 Total errors | 134 | 51 | 95 | 41 Error reduction | 62% | 57% Table 6: Count of individual error annotations for 210 samples. The table compares the error counts between ‘before’ and ‘after’ applying input correction. Each sample can contain multiple errors. 
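The ‘error reduction’ row in Table 6 (and in Table 7 below) is the relative reduction in total error counts after input correction; with the totals above it can be recomputed as follows (a small sketch using the Table 6 totals).

```python
# Recomputing the Table 6 error-reduction figures from the total error counts.
totals = {"T5-base": (134, 51), "T5-large": (95, 41)}  # (before, after)

for model, (before, after) in totals.items():
    reduction = 100 * (before - after) / before
    print(f"{model}: {reduction:.0f}% reduction")  # T5-base ~62%, T5-large ~57%
```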
Category | T5-base (T5-b) baseline | T5-large (T5-l) baseline | Llama 2-7B (L-7B) | Llama 2-13B (L-13B)
---|---|---|---|---
 | Before | After | Before | After | Before | After | Before | After
WORD | 27 | 18 | 21 | 12 | 23 | 15 | 22 | 5
NAME | 7 | 3 | 6 | 2 | 2 | 1 | 3 | 0
DATE_DIM | 2 | 1 | 1 | 1 | 2 | 0 | 1 | 0
CONTEXT | 8 | 3 | 7 | 2 | 6 | 2 | 4 | 0
NUMBER | 4 | 1 | 1 | 0 | 7 | 1 | 6 | 1
OTHER | 0 | 0 | 0 | 0 | 5 | 0 | 1 | 0
ADDITION | 0 | 0 | 0 | 0 | 5 | 5 | 7 | 5
Total errors | 48 | 26 | 36 | 17 | 50 | 24 | 45 | 11
Error reduction (%) | 46% | 53% | 52% | 76%
Table 7: Individual error count for the 40 challenging samples selected from the previous 210 in Table 6. It shows the comparison of the errors annotated for original data versus corrected data in T5 and Llama 2 models.
Category | T5-base (T5-b) baseline | T5-large (T5-l) baseline | Llama 2-7B (L-7B) | Llama 2-13B (L-13B)
---|---|---|---|---
 | Before | After | Before | After | Before | After | Before | After
No error (Higher is better) | 0 | 12 | 3 | 16 | 2 | 17 | 5 | 23
Omissions | 0 | 6 | 3 | 8 | 2 | 4 | 4 | 8
Table 8: Comparison of ‘No error’ and ‘Omissions’ unique count for the same 40 samples. i. An increase in the ‘No error’ count indicates error-free outputs after addressing input problems. ii. Some outputs stopped making factual errors after correction but instead omitted part of the input content, resulting in a higher omission count post-fix.
### 6.2 Error Reductions in Llama 2 Models
In Table 7, we present the error analysis of the 40 difficult samples, validated using L-7B and L-13B models with tailored prompts for each input. Outputs were manually error annotated, and the table compares the error counts before and after input correction. The _L-7B and L-13B_ models showed reductions of 52% and 76% in errors after addressing tabular input issues. The _T5-b and T5-l models_ demonstrated error reductions of 46% and 53% respectively for the same 40 difficult samples, included for comparison purposes. The insights gathered from the individual error categories after input correction are mentioned below.
* • WORDW errors, predominantly incorrect verbs and prepositions, were most common among all models. Post-correction, the L-13B model reduced this error category by 77%, while the T5-b, T5-l, and L-7B models achieved reductions of 33%, 42%, and 34%, respectively.
* • Input correction led to a reduction in NAMEN and CONTEXTC errors across all models.
* • The original input data exhibited more NUMBERU errors in Llama 2 models compared to T5, which were significantly reduced after correction. Both Llama 2 models completely resolved DATE_DIMENSIOND and OTHERO errors.
* • ADDITIONA errors are more common in Llama 2 models than in T5. Despite including the prompt ‘Use only the information mentioned in the input table data’ and correcting the tabular data, both L-7B and L-13B models still produced five ADDITION errors each.
Table 8 shows two rows of data from our analysis of the 40 challenging samples. The first row shows that corrected data leads to an increased number of samples with ‘no error’ (higher is better). However, the second row shows that omissions have increased after input corrections. We observed that the models omitted part of the information either when the corrected tabular data had multiple column variables for more than two records or when the tabular structure was complex. In the case of the fine-tuned models, they learned to omit some information during the fine-tuning process. In our future work, we plan to study the reasons why this is happening.
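The tailored prompts referred to above combine one of the App. C templates with the linearized table for each sample. As a rough illustration of how such a content-tailored zero-shot prompt can be assembled (the parameterisation by field names is an assumption of this sketch, not a description of a script we ran):

```python
def build_tailored_prompt(linearized_table,
                          fields=("party name", "candidate name", "number of votes")):
    """Assemble a content-tailored zero-shot prompt in the style of App. C."""
    field_list = ", ".join(fields)
    return (
        "Given the input table data, the task is to:\n"
        f"(i). Identify the {field_list} received by each candidate.\n"
        "(ii). Determine the winner based on the highest number of votes. "
        "Then, put together the gathered information from (i) and (ii) into a "
        "single coherent sentence.\n"
        f"Input table data: {linearized_table}"
    )
```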
ERROR | ALL AGREE | WORD | NAME | DATE_DIM | NUMBER | OTHER | CONTEXT | ADDITION | NO ERROR | TOTAL
---|---|---|---|---|---|---|---|---|---|---
WORD | 17 | 0 | 0 | 0 | 2 | 0 | 9 | 3 | 0 | 31
NAME | 3 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 5
DATE_DIM | 1 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 3
NUMBER | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4
OTHER | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 3
CONTEXT | 7 | 9 | 2 | 2 | 0 | 0 | 0 | 6 | 0 | 26
ADDITION | 12 | 1 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 18
NO ERROR | 15 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 16
Fleiss’ kappa overall agreement for three annotators, $\kappa={(\bar{P}_{a}-\bar{P}_{e})}/{(1-\bar{P}_{e})}=0.622$
Table 9: Fleiss’ Kappa coefficient: overall agreement on 60 samples among three annotators. The ‘All agree’ column signifies unanimous agreement on error types, while other columns show unique error selections.
### 6.3 Model-Specific Results for Different Input Problems
In App. E, Fig. 2 presents how the four models are performing before (left bars) and after input corrections (right bars) for each input problem type.
Non-atomic cell values: T5-l and L-13B models corrected over 90% of the errors for this problem type (single record and multiple records lacking atomicity; see red and green bars in Fig. 2, App. F). T5-b and L-7B models corrected over 60% of the errors but still struggled with a few samples even after correction. For example, Fig. 2(b) shows that T5-large was making more errors for the input problem type labelled ‘Multiple records lacking atomicity’ before correction (in red) and made significant reductions after correction (in green).
Complex table: Due to the limitations in some tabular inputs, which require additional context, T5-b could not fix the errors. T5-l omitted the error for one sample, while L-7B and L-13B models partially fixed this input problem (see Fig. 2).
Insufficient input: Correcting this data annotation problem fixed all factual errors in T5-b, T5-l and L-13B, except for L-7B, which added additional information for one sample.
Longer table input: Large models such as L-13B, L-7B and T5-l corrected factual errors for straightforward inputs, especially for tables with fewer than 20 rows. However, T5-b struggled the most to produce factual information.
Politics specific semantic issue: For the corrected input, L-13B fixed the factual mistakes for 5 out of 8 samples and L-7B fixed factual errors for 4 out of 8 samples. Both T5 models corrected 3 out of 8 samples (see Fig. 2).
List of leader names: T5 models struggled the most to correct factual errors for this problem. After correction, the L-7B model corrected three samples but produced factual errors for the remaining eight samples. L-13B made factual errors only for two samples (see Fig. 2).
While some models struggle with specific inputs, particularly regarding leader name order, tables requiring additional context for complex tabular structures, and semantic issues with symbols, our overall results indicate that correcting tabular inputs improves model outputs.
## 7 Inter-Annotation Experiment
One of the authors manually annotated errors in 1,508 outputs before input correction and 169 problematic outputs after correction (a total of 1,677 from both T5 models). Similarly, the annotation for both Llama 2 models covered 160 outputs before and after input correction. Two additional annotators annotated 60 outputs each, generated by four different models both before and after input correction. We provided them with a detailed document that included definitions of error categories, guidelines, tabular inputs and output texts for error annotation (we release our error annotations from our human evaluation at https://github.com/BarkaviSJ/totto_politics_human_annotations).
Annotators followed the guidelines and marked the errors. Each annotator spent approximately 3 hours to complete this experiment. The annotated errors are shown in a confusion matrix in Table 9, where the ‘all agree’ column shows all annotators agreed on the same error type and other columns show how often each annotator selected a different error type. Although the correct token was usually chosen, disagreements primarily occurred in choosing CONTEXT, ADDITION and WORD error types, as shown in red in Table 9. This might be due to the potential similarities in the definitions of CONTEXT errors and ADDITION errors (see Sec. B.3). While WORD error is comparatively straightforward, one annotator chose CONTEXT errors instead of WORD errors for a few outputs. Cases where an annotator did not mark any errors were in the minority. The inter-annotation agreement on error category classification for 60 outputs, as shown by a Fleiss’ kappa of 0.622, indicates substantial agreement between the three annotators. ## 8 Conclusion This paper presented a study that quantitatively demonstrates that fixing input problems such as insufficient data and data records containing non- atomic content improves the factual accuracy of output text by as high as 76% for one of the study models. Correcting inputs also seems to improve the number of entirely error-free output texts. However, we still need to investigate why errors categorised as ‘omissions’ increase after input corrections. In our future work, we aim to explore other tabular datasets for problems with input data and study the generalization of the fixes explored in the current work. ## Limitations We acknowledge some limitations in this work. First, we only looked at ToTTo and our scope of corrections to tabular input is limited to errors identified within the _politics_ domain of the ToTTo validation set. Second, we did not extend the correction of tabular input for the _politics_ domain to the training split of the ToTTo dataset due to the time-consuming process of handling different table headers and metadata. Third, our experiment results are restricted to two specific models (T5 and Llama 2) and may not generalize to other models. In our future work, we aim to simplify the definitions of the ‘context’ and ‘addition’ error categories, as the annotation experiment revealed disagreement in choosing these error types for some samples, despite annotators marking the same error token. ## Ethics Statement This work seeks to address input problems in non-standard tabular structures to reduce factual errors in the output text. We utilized the open-source dataset, ToTTo and maintained the same ground-truth generation as the original dataset. Our input correction did not introduce any further social bias to this dataset. We adopted an error annotation methodology to annotate factual errors and one of the authors performed manual error analysis for this complete study. We sought consent from two additional annotators, the annotators volunteered to participate and annotated errors for 60 output texts each. They had the right to withdraw from the study at any point without facing any consequences. We provided necessary guidelines, instructions and examples for them to annotate errors. ## Acknowledgements We thank Craig Thomson and Adarsa Sivaprasad for their hard work in helping with the annotations in this paper. We thank the anonymous reviewers for their detailed feedback and suggestions which have significantly improved this work. 
We also thank the NLG (CLAN) reading group at the University of Aberdeen for their invaluable feedback. ## References * Badaro et al. (2023) Gilbert Badaro, Mohammed Saeed, and Paolo Papotti. 2023. Transformers for tabular data representation: A survey of models and applications. _Transactions of the Association for Computational Linguistics_ , 11:227–249. * Chang and Fosler-Lussier (2023) Shuaichen Chang and Eric Fosler-Lussier. 2023. How to prompt llms for text-to-sql: A study in zero-shot, single-domain, and cross-domain settings. * Chen et al. (2022) Miao Chen, Xinjiang Lu, Tong Xu, Yanyan Li, Zhou Jingbo, Dejing Dou, and Hui Xiong. 2022. Towards table-to-text generation with pretrained language model: A table structure understanding and text deliberating approach. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 8199–8210, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Chen et al. (2020a) Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. _CoRR_ , abs/2004.10404. * Chen et al. (2020b) Zhiyu Chen, Wenhu Chen, Hanwen Zha, Xiyou Zhou, Yunkai Zhang, Sairam Sundaresan, and William Yang Wang. 2020b. Logic2Text: High-fidelity natural language generation from logical forms. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 2096–2111, Online. Association for Computational Linguistics. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _North American Chapter of the Association for Computational Linguistics_. * Dhingra et al. (2019) Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4884–4895, Florence, Italy. Association for Computational Linguistics. * Dušek et al. (2019) Ondřej Dušek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In _Proceedings of the 12th International Conference on Natural Language Generation_ , pages 421–426, Tokyo, Japan. Association for Computational Linguistics. * Gkatzia et al. (2017) Dimitra Gkatzia, Oliver Lemon, and Verena Rieser. 2017. Data-to-text generation improves decision-making under uncertainty. _IEEE Computational Intelligence Magazine_ , 12(3):10–17. * González Corbelle et al. (2022) Javier González Corbelle, Alberto Bugarín-Diz, Jose Alonso-Moral, and Juan Taboada. 2022. Dealing with hallucination and omission in neural natural language generation: A use case on meteorology. In _Proceedings of the 15th International Conference on Natural Language Generation_ , pages 121–130, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics. * Hu et al. (2023) Hanxu Hu, Yunqing Liu, Zhongyi Yu, and Laura Perez-Beltrachini. 2023. Improving user controlled table-to-text generation robustness. In _Findings of the Association for Computational Linguistics: EACL 2023_ , pages 2317–2324, Dubrovnik, Croatia. Association for Computational Linguistics. * Ji et al. (2023) Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. 
Survey of hallucination in natural language generation. _ACM Computing Surveys_ , 55(12):1–38. * Kale and Rastogi (2020) Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre-training for data-to-text tasks. In _Proceedings of the 13th International Conference on Natural Language Generation_ , pages 97–102, Dublin, Ireland. Association for Computational Linguistics. * Kasner and Dušek (2024) Zdeněk Kasner and Ondřej Dušek. 2024. Beyond reference-based metrics: Analyzing behaviors of open llms on data-to-text generation. _arXiv preprint arXiv:2401.10186_. * Kojima et al. (2022) Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In _Advances in Neural Information Processing Systems_ , volume 35, pages 22199–22213. * Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_ , pages 74–81, Barcelona, Spain. Association for Computational Linguistics. * Maynez et al. (2020) Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1906–1919, Online. Association for Computational Linguistics. * Novikova et al. (2017) Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In _Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue_ , pages 201–206, Saarbrücken, Germany. Association for Computational Linguistics. * OpenAI (2023) OpenAI. 2023. Gpt-4 technical report. _ArXiv_ , abs/2303.08774. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. * Parikh et al. (2020) Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1173–1186, Online. Association for Computational Linguistics. * Pauws et al. (2018) Steffen C. Pauws, Albert Gatt, Emiel J. Krahmer, and Ehud Reiter. 2018. Making effective use of healthcare data using data-to-text technology. In _Data Science for Healthcare_. * Puduppully et al. (2019) Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 33(01):6908–6915. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21(140):1–67. * Raunak et al. (2021) Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt. 2021. The curious case of hallucinations in neural machine translation. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1172–1183, Online. Association for Computational Linguistics. * Rebuffel et al. 
(2021) Clément Rebuffel, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. 2021. Controlling hallucinations at word level in data-to-text generation. _Data Mining and Knowledge Discovery_ , 36:318 – 354. * Rebuffel et al. (2019) Clément Rebuffel, Laure Soulier, Geoffrey Scoutheeten, and Patrick Gallinari. 2019. A hierarchical model for data-to-text generation. _Advances in Information Retrieval_ , 12035:65 – 80. * Sellam et al. (2020) Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7881–7892, Online. Association for Computational Linguistics. * Sivarajkumar et al. (2023) Sonish Sivarajkumar, Mark Kelley, Alyssa Samolyk-Mazzanti, Shyam Visweswaran, and Yanshan Wang. 2023. An empirical evaluation of prompting strategies for large language models in zero-shot clinical natural language processing. _arXiv preprint arXiv:2309.08008_. * Sripada et al. (2002) Somayajulu Sripada, Ehud Reiter, Jim Hunter, and Jin Yu. 2002. Sumtime-meteo: Parallel corpus of naturally occurring forecast texts and weather data. _Computing Science Department, University of Aberdeen, Aberdeen, Scotland, Tech. Rep. AUCS/TR0201_. * Su et al. (2021) Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, and Nigel Collier. 2021. Plan-then-generate: Controlled data-to-text generation via planning. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 895–909, Punta Cana, Dominican Republic. Association for Computational Linguistics. * Sundararajan et al. (2022) Barkavi Sundararajan, Somayajulu Sripada, and Ehud Reiter. 2022. Error analysis of ToTTo table-to-text neural NLG models. In _Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)_ , pages 456–470, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. * Thomson and Reiter (2020) Craig Thomson and Ehud Reiter. 2020. A gold standard methodology for evaluating accuracy in data-to-text systems. In _Proceedings of the 13th International Conference on Natural Language Generation_ , pages 158–168, Dublin, Ireland. Association for Computational Linguistics. * Thomson et al. (2020) Craig Thomson, Ehud Reiter, and Somayajulu Sripada. 2020. SportSett:basketball - a robust and maintainable data-set for natural language generation. In _Proceedings of the Workshop on Intelligent Information Processing and Natural Language Generation_ , pages 32–40, Santiago de Compostela, Spain. Association for Computational Lingustics. * Thomson et al. (2023) Craig Thomson, Ehud Reiter, and Barkavi Sundararajan. 2023. Evaluating factual accuracy in complex data-to-text. _Computer Speech & Language_, 80:101482. * Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_. * Upadhyay and Massie (2022) Ashish Upadhyay and Stewart Massie. 2022. Content type profiling of data-to-text generation datasets. In _Proceedings of the 29th International Conference on Computational Linguistics_ , pages 5770–5782, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. * van Miltenburg et al. 
(2021) Emiel van Miltenburg, Miruna Clinciu, Ondřej Dušek, Dimitra Gkatzia, Stephanie Inglis, Leo Leppänen, Saad Mahamood, Emma Manning, Stephanie Schoch, Craig Thomson, and Luou Wen. 2021. Underreporting of errors in NLG output, and what to do about it. In _Proceedings of the 14th International Conference on Natural Language Generation_ , pages 140–153, Aberdeen, Scotland, UK. Association for Computational Linguistics. * Wang and Sennrich (2020) Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3544–3552, Online. Association for Computational Linguistics. * Wang et al. (2022) Fei Wang, Zhewei Xu, Pedro Szekely, and Muhao Chen. 2022. Robust (controlled) table-to-text generation with structure-aware equivariance learning. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5037–5048, Seattle, United States. Association for Computational Linguistics. * Wiseman et al. (2017) Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics. * Zhu et al. (2023) Fengbin Zhu, Moxin Li, Junbin Xiao, Fuli Feng, Chao Wang, and Tat Seng Chua. 2023. Soargraph: Numerical reasoning over financial table-text data via semantic-oriented hierarchical graphs. In _Companion Proceedings of the ACM Web Conference 2023_ , WWW ’23 Companion, page 1236–1244, New York, NY, USA. Association for Computing Machinery. ## Appendix A Model fine-tuning specifications The first model, T5-base (T5-b), was fine-tuned on the full ToTTo training set of 120,761 samples on a commodity server with a GeForce RTX 2080 GPU. The training took around seven days. The second model, T5-large (T5-l), was fine- tuned on a subset of 50,000 ToTTo samples on a secure cloud instance with an NVIDIA A100 GPU, completing in around 48 hours. Both models were fine-tuned using a constant learning rate of 0.0001, with the encoder’s maximum length set to 512 tokens and the decoder’s maximum length set to 128 tokens for ToTTo’s generation task (Kale and Rastogi, 2020). Single-precision floating- point format (FP32) was employed for training on their respective GPU servers. The batch size, beam size and training steps for each model are shown in Fig. 1. Models | Batch size | Beam Size | Training steps ---|---|---|--- T5-base | 2 | 10 | 180,000 T5-large | 4 | 5 | 9,000 Figure 1: Model Specifications ## Appendix B Inter-Annotation Procedure for Participants ### B.1 Overview The Input Data from the ToTTo dataset, includes the Page title, Section Title and a Table with highlighted cells in yellow. These key parameters are conditionally used for training the neural models to summarize a meaningful and factual Text (as Output) focusing on: (i). highlighted cells in the table (ii). their corresponding header values (iii) The main Title and (iv) The Section Title. For each of these tabular input data, we provided outputs generated by different neural language models to annotators. Our goal is to evaluate whether the neural outputs remain faithful and produce factually accurate information based on the four parameters from the tabular input. 
The complete table, including the non-highlighted cells, is provided to offer a clearer understanding of the error annotation task. ### B.2 Domain The inputs provided are specific to the domain of _politics_ , sourced from Wikipedia tables (as part of the open-domain ToTTo dataset). The political data within these tables is not limited to a single demography. Instead, it encompasses various details from the election processes across multiple countries, including: * • Election specifics such as Presidential, state, by-elections, council, district, Legislative Assembly, and other elections unique to particular countries. * • Information about Governors, Mayors, Ministers, and Ambassadors (about Foreign Affairs). * • Details regarding the Speaker of the Assembly. The first item related to election details is predominantly used in this error annotation task. ### B.3 Error Annotation guidelines We are only interested if the highlighted cell values from the table were used to produce a factually correct sentence. Please also pay attention to the non- highlighted cells in the same row as the highlighted cells, as this might be required in some inputs to generate a meaningful sentence. Other non- highlighted values in the table are not expected to be used for your evaluation. Please read through the output texts and annotate cases for the error categories as mentioned below. Each error is denoted with a superscript for better readability. ##### NAMEN * • When names of the Party, Leader, place (Electorate), Ambassador etc., are wrong (mostly nouns). * • Annotation includes both single tokens or multiple tokens to include the complete names. * • Example Output text: Urban AhlinN is the Deputy Speaker of the Riksdag. Remarks: NAME error because the correct deputy speaker was Tobias Billström as per the tabular data. * • Example Output text: Kansas was won by Mitt Romney, Paul Ryan, Barack ObamaN, and Joe BidenN, with Romney winning 59.66% of the popular vote, six electoral votes and 38.05 percent. In this example, Barack Obama and Joe Biden are two NAME errors because they did not win the election. * • In general, WednesdayN instead of Tuesday is a NAME error but MayD instead of April is a DATE_DIMENSION error. ##### NUMBERU * • When the number of seats and/or the number of votes and/or % of votes are incorrect. A single token is marked as an error. * • When the A-party won with a majority of 5.5%. But the correct one is 4.4%. 5.5%U is a NUMBER error. * • Output: The voter turnout was 8,90%, with 10,052 votes. Remarks: The actual turnout was 81.90%U. Please note: the error here is NOT with the comma used as decimal (as it is an acceptable decimal operator for international use); Error because the number 81.90 was incorrect. ##### DATE_DIMENSIOND * • When the Date and/or Month and/or Year are wrong in the generated text, it is annotated as one error. * • Example Output text: Cletus Avoka was the Minister for the Interior in the Mills government from 2009 to 2012D. Remarks: 2010 is the right end term of the year. * • As a general note, if the Output text did not capture Month and/or Date, but has the correct year, then this is NOT an ERROR. It could go to OMISSION with remarks, Omission of Date and Month. ##### WORDW * • When incorrect words such as verbs, prepositions, adjectives, adverbs and conjunctions are found in the output. * • Single token is marked as a WORD error in most cases. 
Multiple tokens are annotated when the auxiliary verb (was), an extension of a prepositional phrase (along with) and others are incorrect. * • Example Output text: Carly Fiorina defeated Republican Tom Campbell with 56.4% of the vote to DeVore’s 19.3%, along withW Al Ramirez and Tim Kalemkarian. Remarks: Fiorina independently defeated all the leaders, so ‘along with’ is wrong. * • Example Output text: Ling wonW the 2016 senate district against Democrat Josh Newman, with 49.6 percent of the vote. Remarks: Ling lost the election as per the tabular data. * • Some of the common WORD errors found in this data are _won, defeated, lost, succeeded_ , adjectives such as _current_ governor other prepositions (_since, in_ and so on). ##### CONTEXTC * • When the model’s output presents information that contradicts or makes unsupported assumptions about the given input data. Group of tokens/span of text are annotated as CONTEXT error. * • It can sometimes be tricky to check for this type of error. Please follow the below sequence before marking this error. * – In case of simple misrepresentation based on the information in the input data, it would be easier to mark the token as NAME, NUMBER, DATE_DIMENSION or WORD error. * – In the case of a complex table structure, the outputs are likely to mess up completely with the overall information in the provided input data. In this case, it is hard to mark individual errors. Please go ahead and mark the group of tokens/ span of text and annotate it as a CONTEXT error. * – For example, the output is: In the 2006 election for mayor of Florence PendletonC, Michael D. Brown received 62,415 votes while Philip Pannell received 21,552 votes and write-in candidatesC received 1,363 votes. Annotation remarks: * * for mayor of Florence PendletonC \- the name of a person is misrepresented as electorate (jurisdiction) in the output. * * write-in candidatesC \- this implies there was more than one write-in candidate. ##### ADDITIONA * • When the model’s outputs have added words, phrases, or details that either diverge from the input’s main topic or are unsupported by the given context. * • Single or group of tokens are annotated as ADDITION error. * • Example Output text: 2007 Algerian legislative election was held on May 17, 2007A. The results were as follows: 24 political parties won a total of 389 seats in the National Assembly. Remarks: The date and month are additional information, that are not provided in the input. This is marked as an ADDITION error. Other details in the output are correct. ##### OTHERO * • When the output repeats the same input multiple times producing garbage data. In some cases, when the table data has the political party name in the abbreviation, it produces garbage output. For example, when the tabular data has a party name in abbreviation, it tries to produce a strange output. Output text: GSSSDULSVDHSSO gained 5.31% of the vote * • When the output is incomplete for longer table input or when the output repeats the same input multiple times without producing a complete text. * • When punctuation symbols are placed in inappropriate places, for example, an apostrophe is missed for the Name of the Leader or Place. ##### NON-ENGLISHNE * • when the Unicode characters in non-English names are either replaced with special characters or when these Unicode characters are omitted. * • For example, Pawe GraNE is a member of Sejm. Remarks: Paweł Graś is the correct name here. 
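To make the scheme above concrete, an annotated output can be viewed as the generated text plus a list of (error span, category) pairs. The structure below is purely an illustration of that view, using the Table 1(a) output as the example; it is not the format of the released annotations.

```python
# Illustrative representation of one span-level error annotation (App. B.3 categories);
# the released annotation files on GitHub may use a different format.
annotation = {
    "model": "Llama 2-13B",
    "output": ("Eleanor Holmes Norton (inc.) won with 7.54% of the vote. "
               "Sprague Simonds was the Republican candidate and received 22.38% of the vote."),
    "errors": [
        {"span": "7.54%",  "category": "NUMBER"},   # Norton actually won with 90.00%
        {"span": "22.38%", "category": "NUMBER"},   # Simonds actually received 7.54%
    ],
    "omissions": [],
}
```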
## Appendix C Llama 2 prompts The below prompt is for the corrected tabular data when the input table has party name, candidate and votes. > ‘Given the input table data, the task is to: > (i). Identify the party name, candidate name, and the number of votes > received by each candidate. > (ii). Determine the winner based on the highest number of votes. Then, put > together the gathered information from (i) and (ii) into a single coherent > sentence. Input table data: <Linearized table data>’ Below is one of the prompts used for the original table data without mentioning any specific fields. > ‘The task is to summarize the information from the given input table data > into a single coherent sentence. Use only the information mentioned in the > input table data. Input table data is: <Linearized table data>’ ## Appendix D Steps for Correcting ToTTo Tabular Input Algorithm 1 Manual Correction Procedure for ToTTo Tabular Input (elaborated in Sec. 5.2). This main function takes tabular data and title details as input parameters and returns the corrected tabular data. In all functions, we excluded row and column indices for simplicity and readability. function MainCorrectionProcedure($tabularData,title$) if not DataSizeManageableForLongTable($tabularData$) then return "Please simplify the tabular data with fewer records", null end if $leaderName,recordedData\leftarrow\textsc{IdentifyLeaderNameOrder}(tabularData,title)$ $correctionsMade\leftarrow\textbf{false}$ $correctionsMade\leftarrow\textsc{CorrectNonAtomicCells}(tabularData)\textbf{ or }correctionsMade$ $correctionsMade\leftarrow\textsc{UpdateHeaders}(tabularData)\textbf{ or }correctionsMade$ $correctionsMade\leftarrow\textsc{AddressMissingValues}(tabularData)\textbf{ or }correctionsMade$ $correctionsMade\leftarrow\textsc{ReplaceSymbols}(tabularData)\textbf{ or }correctionsMade$ if $correctionsMade$ then return $(tabularData$, "Leader Data: " + $recordedData)$ $\triangleright$ Returns corrected input and leader data else return $(tabularData$, "Leader Data: " + $recordedData)$ $\triangleright$ Returns original input (if no issues) and leader data end if end function Algorithm 2 Verify the number of rows and columns in Longer Tabular Input (discussed in Sec. 5.2.4). This function validates the maximum allowable number of rows and columns for the tabular data and returns true or false to the main function. function DataSizeManageableForLongTable($tabularData$) $maxRows\leftarrow 20$ $maxCols\leftarrow 10$ return $(\text{length}(\textit{tabularData})\leq maxRows)\textbf{ and }(\text{length}(\textit{tabularData}[0])\leq maxCols)$ end function Algorithm 3 Identify Leader Name Order in Title and Table Rows (discussed in Sec. 5.2.6). This function verifies three main scenarios for the order of leader names in the input. It returns a tuple with title information for leader name and a list of leader names from tabular input depending on the scenario. Description for each scenario is briefly commented. 
Algorithm 3 Identify Leader Name Order in Title and Table Rows (discussed in Sec. 5.2.6). This function verifies three main scenarios for the order of leader names in the input. It returns a tuple with the title information for the leader name and a list of leader names from the tabular input, depending on the scenario. Each scenario is briefly described in the comments.

    function IdentifyLeaderNameOrder(tabularData, title)
        leaderNameFromTitle ← ExtractLeaderNameFromTitle(title)
        leaderNamesFromRows ← [ ]
        for each row in tabularData do
            for each cell in the current row do
                if leader_name is found in cell then
                    Add leader_name to leaderNamesFromRows  ▷ Record leader names from rows
                end if
            end for
        end for
        ▷ Scenario 1: Leader’s name found in title
        if leaderNameFromTitle is not None then
            recordedData ← [ ]  ▷ To store the sequential order of leader names
            Add leaderNameFromTitle to recordedData
            for each leader_name in leaderNamesFromRows do
                Add leader_name to recordedData
            end for
            return (leaderNameFromTitle, recordedData)  ▷ e.g., this could return ("b", ["a", "b", "c"])
        else
            ▷ Scenario 2: Leader’s name not found in title
            if length of leaderNamesFromRows > 0 then  ▷ if the leader’s name is in at least one row
                recordedData ← "Leader name not in title"
                for each leader_name in leaderNamesFromRows do
                    Add leader_name to recordedData
                end for
                return (leaderNameFromTitle, recordedData)  ▷ e.g., this could return (None, ["Leader name not in title", "a", "b", "c"])
            else
                ▷ Scenario 3: Leader name neither in title nor in rows
                return (leaderNameFromTitle, "Leader name not in table")  ▷ e.g., this could return (None, ["Leader name not in table"])
            end if
        end if
    end function

Algorithm 4 Correct Non-Atomic Cells (discussed in Sec. 5.2.1). If a non-atomic cell is found, the CorrectCell function separates the multiple atomic values into individual columns. We follow our manual correction procedure in CellIsNonAtomic and CorrectCell. The function returns the corrected tabular data along with a flag indicating whether any corrections were made.

    function CorrectNonAtomicCells(tabularData)
        correctionsMade ← false
        for all row in tabularData do
            for all cell in row do
                if CellIsNonAtomic(cell) then
                    correctedCell ← CorrectCell(cell)  ▷ Separate multiple values into individual columns
                    tabularData[cell] ← correctedCell  ▷ Update the corrected cell value
                    correctionsMade ← true
                end if
            end for
        end for
        return (tabularData, correctionsMade)
    end function

Algorithm 5 Update Column and Row Headers to Atomic Cells (discussed in Sec. 5.2.4). For each cell, the tabular input has header_value set to true or false. When the header value is true, we manually verify the cell with HeaderIsIncorrect and correct it in UpdateHeader(cell). The function returns the corrected tabular data with a corrections-made flag.

    function UpdateHeaders(tabularData)
        correctionsMade ← false
        for all row in tabularData do
            for all cell in row do
                if cell.header_value = true then
                    if HeaderIsIncorrect(cell) then
                        correctedHeader ← UpdateHeader(cell)  ▷ Update column and row headers
                        tabularData[cell] ← correctedHeader  ▷ Update the corrected header
                        correctionsMade ← true
                    end if
                end if
            end for
        end for
        return (tabularData, correctionsMade)
    end function
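For illustration, a simplified version of the cell-splitting step behind Algorithm 4 above is sketched below. In the paper this correction was performed manually; the regular expression here assumes one specific non-atomic pattern ("Name (Party) votes%") seen in the ToTTo examples, so it is a sketch rather than a general solution.

```python
import re

# Hypothetical pattern covering cells such as "Edwin S. Johnson (Democratic) 48.32%".
NON_ATOMIC = re.compile(r"^(?P<name>[^(]+)\((?P<party>[^)]+)\)\s*(?P<votes>[\d.]+%)$")

def cell_is_non_atomic(cell):
    return bool(NON_ATOMIC.match(cell.strip()))

def correct_cell(cell):
    """Split one non-atomic cell into separate candidate/party/votes columns."""
    m = NON_ATOMIC.match(cell.strip())
    return {"Candidate": m.group("name").strip(),
            "Party": m.group("party").strip(),
            "Candidate votes %": m.group("votes")}

print(correct_cell("Edwin S. Johnson (Democratic) 48.32%"))
# {'Candidate': 'Edwin S. Johnson', 'Party': 'Democratic', 'Candidate votes %': '48.32%'}
```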
Algorithm 6 Address Missing Cell Values (discussed in Sec. 5.2.3). For each row, we verify the missing cell values in RowHasMissingValues and supply the right cell values in FillCellValues through a manual process. The function returns the corrected tabular data with a corrections-made flag.

    function AddressMissingValues(tabularData)
        correctionsMade ← false
        for all row in tabularData do
            if RowHasMissingValues(row) then
                correctedRow ← FillCellValues(row)  ▷ Fill the missing cells in the row
                tabularData[row] ← correctedRow
                correctionsMade ← true
            end if
        end for
        return (tabularData, correctionsMade)
    end function

Algorithm 7 Replace Politics Domain-Specific Symbols and Abbreviated Party Names with Correct Semantics/Words (discussed in Sec. 5.2.5). For all cells in the table, we check the containsDomainSpecificSymbols and containsPartyAbbreviations functions and correct the values using substituteSymbolWithEquivalent and abbreviatePartyNames through a manual process. The function returns the corrected tabular data with a corrections-made flag.

    function ReplaceSymbols(tabularData)
        correctionsMade ← false
        for all cell in tabularData do
            if containsDomainSpecificSymbols(cell) then
                correctedValue ← substituteSymbolWithEquivalent(cell)  ▷ Replace symbols with words
                tabularData[cell] ← correctedValue  ▷ Update the corrected cell value
                correctionsMade ← true
            else if containsPartyAbbreviations(cell) then
                correctedValue ← abbreviatePartyNames(cell)  ▷ Abbreviated party names replaced with words
                tabularData[cell] ← correctedValue  ▷ Update the corrected cell value
                correctionsMade ← true
            end if
        end for
        return (tabularData, correctionsMade)
    end function

In addition to the correction procedure detailed in Sec. 5.2, which provides examples of the corrections applied to each tabular input problem, we present supplementary pseudocode in this section. In Algorithm 1, we have a generic function that takes tabular data and the title (metadata) from ToTTo as input parameters. This main algorithm executes six different functions. The first two functions, DataSizeManageableForLongTable and IdentifyLeaderNameOrder, provide insights into the input problems leading to factual errors in the outputs but do not correct the tabular input. The other four functions, namely CorrectNonAtomicCells, UpdateHeaders, AddressMissingValues, and ReplaceSymbols, correct the input problems as presented in Algorithm 4, Algorithm 5, Algorithm 6, and Algorithm 7. We provide brief descriptions of each algorithm in its caption and comments.

## Appendix E Input Problem Types for four models

Fig. 2 shows how each of the four models performs before and after data corrections for each of the problem types. The left bars for each input problem type represent output scores before data correction, while the right bars represent scores after data correction. Colour coding of output scores helps to identify and understand errors: green indicates no error, red indicates an error, and yellow indicates an omission. The number of samples for each model differs because we focus on outputs only when the model had ‘errors’ or ‘omissions’ before correction. This approach emphasizes the actual improvement in corrections.
For example, T5-large had errors in 37 samples before correction (left bars), and the after-correction results (right bars) show improvements for the same 37 samples.

[Figure 2 panels: (a) T5-base: Error Analysis of 40 samples; (b) T5-large: Error Analysis of 37 samples; (c) Llama 2-7b: Error Analysis of 38 samples; (d) Llama 2-13b: Error Analysis of 35 samples. Each panel plots the samples count against the input problem types (Single record lacking atomicity, Multiple records lacking atomicity, Complex table type, Insufficient input, Longer input, ToTTo specific, List of Leader names), with before-data-fix (left bar) versus after-data-fix (right bar) bars coloured by No error, Omissions, and Errors.]

Figure 2: Comparison of different input problem types before (original data) and after data correction across four models. It shows the count of unique samples for three categories: (i) errors, (ii) omissions, and (iii) no errors.

## Appendix F Input Data Corrections and Output Text

In this section, the input tabular data and the corresponding outputs supplement the examples elaborated in Sec. 5.2 for each of the input problems, as presented from Table 10 to Table 16.

(a) Input Table with Original Cells highlighted in Yellow

Page Title: 2012 Cardiff South and Penarth by-election
Section Title: By-election result

| Party | % | $\pm$ |
| --- | --- | --- |
| Conservative | 19.9 | 8.4 |
| Liberal Democrat | 10.8 | -11.5 |

(b) Input Table with Corrected Cells highlighted in Yellow

Page Title: 2012 Cardiff South and Penarth by-election
Section Title: By-election result

| Party | % | $\pm$ % difference with previous election |
| --- | --- | --- |
| Conservative | 19.9 | 8.4 |
| Liberal Democrat | 10.8 | -11.5 |

(c) Outputs Before and After Input Data Correction

T5-large before fix: In the 2012 Cardiff South and Penarth by-election, the Liberal Democrats wonW 11.5% of the vote.
T5-large after fix: In the 2012 Cardiff South and Penarth by-election, the Liberal Democrats lost byW 11.5%.
Llama 2-13B before fix: 2012 Cardiff South and Penarth by-election was wonW by the Liberal Democrats with a marginW of -11.5%.
Llama 2-13B after fix: 2012 Cardiff South and Penarth by-election saw Liberal Democrat candidate lose by 11.5% of the vote compared to the previous election.

Table 10: ToTTo specific: Swing percentage difference with previous election, as detailed in Sec. 5.2.5. Before fix represents the output generated from input Table 10(a) and after fix represents the output generated after explicitly changing the header, as shown in input Table 10(b).
(a) Input Table with Original Cells highlighted in Yellow Page Title: 1914 United States Senate elections --- Section Title: Races leading to the 64th Congress State | Incumbent | Results | Candidates | Senator | Party | | South Dakota | Coe I.Crawford | Republican | Incumbent lost renomination. New senator elected. Democratic gain. | Edwin S. Johnson (Democratic) 48.32% Charles H. Burke (Republican) 44.47% (b) Outputs with Original Input before Input Data Correction T5-base: In South Dakota, Republican Coe I. Crawford lost renomination to Democrat Edwin S. JohnsonC. --- T5-large: South Dakota: Republican incumbent Coe I. Crawford lost his bid for the seat to Democratic challenger Edwin S. JohnsonC. Llama 2-7B: 1914 United States Senate elections in South Dakota were held on November 3A, 1914. Incumbent Republican Senator Coe I. Crawford lost renomination to Democratic candidate Edwin S. JohnsonC. Johnson defeated Charles H. DunniganN by a margin ofW 48.32% to 47.56%U. This was a Democratic gain from the previous election. Llama 2-13B: 1914 United States Senate elections were held on November 3A, 1914 to elect members of the United States Senate in various states. The Republican Party lost one seat and the Democratic Party gained one seat. (c) Input Table with Corrected Cells highlighted in Yellow Page Title: 1914 United States Senate elections | ---|--- Section Title: Races leading to the 64th Congress | State | Incumbent Senator | Incumbent Party | Results | Candidate Senator | Candidate Party | Candidate votes % South Dakota | Coe I.Crawford | Republican | Incumbent lost renomination. New senator elected. Democratic gain. | Edwin S. Johnson | Democratic | 48.32% South Dakota | Coe I.Crawford | Republican | Incumbent lost renomination. New senator elected. Democratic gain. | Charles H. Burke | Republican | 44.47% (d) Outputs After Input Data Correction T5-base: In South Dakota, Republican Coe I. Crawford lost renomination to Democrat Edwin S. JohnsonC. --- T5-large: South Dakota: Republican incumbent Coe I. Crawford lost his bid for the seat to Democratic challenger Edwin S. JohnsonC. Llama 2-7B: 1914 United States Senate elections were held on November 3,A 1914\. In South Dakota, Coe I. Crawford, the Republican incumbent Senator lost renomination to Edwin S. JohnsonC, a Democratic candidate. Charles H., the Republican candidate also ran for the seat but he was defeated by Johnson. Therefore, it is a Democratic gain in this election. Llama 2-13B: 1914 United States Senate elections were held on November 3A, 1914 to elect members of the United States Senate in various states. Races leading to the 64th Congress included South Dakota where Coe I. Crawford (Republican) lost renomination and Edwin S. Johnson (Democratic) was elected. Table 11: Complex Table input problems, as detailed in Sec. 5.2.2. Difficult to fix this complex input type, yet Llama 2-13B produced most of the facts right except for ADDITION error. Lost renomination ‘to Democrat candidate’ is a CONTEXT error, because the model made unsupported assumptions about the given input. Losing renomination is when the current senator failed to be nominated in the primary to stand for reelection. 
(a) Input Table with Original Cells highlighted in Yellow Page Title: List of Japanese prime ministers by longevity --- Section Title: Longevity of Japanese prime ministers Prime Minister | Total time in Office | Date of Death | Lifespan Sanetomi Sanjō | 60 days | 28 Feb 1891 | 53 years, 352 days (b) Input Table with Corrected Cells highlighted in Yellow Page Title: List of Japanese prime ministers by longevity --- Section Title: Longevity of Japanese prime ministers Prime Minister | Total time in Office | Date of Death | Lifespan Sanetomi Sanjō | 60 days | 28 Feb 1891 | 53 years, 352 days (c) Outputs Before and After Input Data Correction T5-base and T5-large before and after fix (no improvement): Sanj SanetomiNE was the longest-livedW prime minister in Japan at 53 years, 352 days. --- Llama 2-7B before fix: The longest-servingW Japanese prime minister was Sanjō Sanetomi who served for 53 years and 352 days. Llama 2-7B after data fix: Sanjō Sanetomi was the Japanese prime minister and served for a total of 60 days in office. He died at age 53 years and 352 days. Llama 2-13B before data fix: The longest-livedW Japanese prime minister was Sanjō Sanetomi, who served for 53 years and 352 days. Llama 2-13B after data fix: Sanjō Sanetomi was the prime minister of Japan for a total of 60 days. He lived for 53 years and 352 days. Table 12: ToTTo specific - WORD error made by all models before fix, as detailed in Sec. 5.2.5. After including ‘Total time in Office’, Llama 2 models fixed the ‘longest-lived’ word error and introduced other addition errors. We then revised the prompt and fixed the addition errors. T5 models had no improvements after including information. (a) Input Table with Original Cells highlighted in Yellow Page Title: Ling Ling Chang --- Section Title: 2016 Senate run California’s 29th State Senate district election Party | Candidate | % Votes Democratic | Josh Newman | 50.4 Republican | Ling Ling Chang | 49.6 (b) Input Table with Corrected Cells highlighted in Yellow Page Title: Ling Ling Chang --- Section Title: 2016 Senate run California’s 29th State Senate district election Party | Candidate | % Votes Democratic | Josh Newman | 50.4 Republican | Ling Ling Chang | 49.6 (c) Outputs Before and After Input Data Correction T5-base and T5-large before fix: Chang defeatedW Democrat Josh Newman with 49.6% of the vote. --- T5-base and T5-large after fix: Ling Ling Chang was defeated by Josh Newman, a Democrat, with 50.4% of the vote to Chang’s 49.6%. Llama 2-13B before fix: Ling Ling Chang wonW the 2016 Senate run with 49.6% of the vote. Llama 2-13B after fix: Ling Ling Chang, who was running for the Senate seat in California, received 49.6% of votes while Josh Newman, her opponent, received 50.4%. Table 13: Fixing ToTTo specific input problems: List of leader names for two leaders. ‘Before’ shows WORD errors due to missing vote percentages and party names for a leader. ‘After’ shows these errors are resolved by all models when the missing values are provided. Key point is T5 model did not produce error even though the title leader ‘Ling Ling Chang’ lost the election. (a) Input Table with Original Cells highlighted in Yellow Page Title: Chuck DeVore --- Section Title: As U.S. 
Senate Candidate Party | Candidate | Votes | % Republican | Carly Fiorina | 1,315,429 | 56.4 Republican | Tom Campbell | 504,289 | 21.7 Republican | Chuck DeVore | 452,577 | 19.3 Republican | Al Ramirez | 42,149 | 1.8 Republican | Tim Kalemkarian | 19,598 | 0.8 (b) Input Table with Corrected Cells highlighted in Yellow Page Title: Chuck DeVore --- Section Title: As U.S. Senate Candidate Party | Candidate | Votes | % Republican | Carly Fiorina | 1,315,429 | 56.4 Republican | Tom Campbell | 504,289 | 21.7 Republican | Chuck DeVore | 452,577 | 19.3 Republican | Al Ramirez | 42,149 | 1.8 Republican | Tim Kalemkarian | 19,598 | 0.8 (c) Outputs Before and After Input Data Correction T5-base before fix: DeVore defeatedW Republican Tom Campbell with 56.4%U of the vote, Al Ramirez with 19.3%, and Tim Kalemkarian with 19.3%C. --- T5-base after fix: DeVore defeatedW Republican nominees Carly Fiorina (56.4%) , Tom Campbell (21.7%), Al Ramirez (1.8%) and Tim Kalemkarian (0.8%)C. T5-large before fix: DeVore was defeated by Republican Tom Campbell (56.4%)U, Al Ramirez (19.3%), and Tim Kalemkarian (21.7%)C. T5-large after fix: DeVore was defeated in the U.S. Senate race by Republican nominees Carly Fiorina (56.4%), Tom Campbell (21.7%) , Al Ramirez (1.8%) and Tim Kalemkarian (0.8%).C Llama 2-7B before fix: Chuck DeVore is a Republican candidate for the U.S. Senate in California. He received 19.3% of the vote in the primary election, which was held on June 8th, 2010. The other candidates were Carly Fiorina (56.4%), Tom Campbell (24.1%)U and Al Ramirez (10.2%)U. Tim Kalemkarian did not run for this officeC. Llama 2-7B after fix: The number of votes for each leader is as follows: Carly Fiorina - 56.4% Tom Campbell - 21.7% Chuck DeVore - 19.3% Al Ramirez - 1.8% Tim Kalemkarian - 0.8%. Carly Fiorina won the Republican primary for U.S. Senate in California with 56.4 percent of the vote, followed by Tom Campbell at 21.7 percent and Chuck DeVore at 19.3 percent. Llama 2-13B before fix: Carly Fiorina is the leading candidate for U.S. Senate with 56.4% of the vote, followed by Tom Campbell with 20.3%U, Chuck DeVore with 19.3%, and Al Ramirez with 4.0%U. Llama 2-13B after fix: The Republican Party candidate who won the most votes in the U.S. Senate election was Carly Fiorina with 56.4% of the total votes. Table 14: ToTTo specific input problems: List of Leader names for five leaders, as detailed in Sec. 5.2.6. All models swapped the defeated leader names and/or % of votes, resulting in WORD, NUMBER and CONTEXT errors. After including the party names and % of votes for all leaders, Llama 2 models corrected the errors. Fine-tuned T5 models still made errors. This could be because of the pattern learned from the fine-tuned training data of ToTTo. 
(a) Input Table with Original Cells highlighted in Yellow Page Title: Joseph Haslet --- Section Title: Almanac Year | Office | Subject | Party | % Votes | Opponent | Party | % Votes 1804 | Governor | Joseph Haslet | Republican | 48% | Nathaniel Mitchell | Federalist | 52% 1807 | Governor | Joseph Haslet | Republican | 48% | George Truitt | Federalist | 52% (b) Input Table with Corrected Cells highlighted in Yellow Page Title: Joseph Haslet --- Section Title: Almanac Year | Office | Candidate | Party | % Votes | Opponent | Party | % Votes 1804 | Governor | Joseph Haslet | Republican | 48% | Nathaniel Mitchell | Federalist | 52% 1807 | Governor | Joseph Haslet | Republican | 48% | George Truitt | Federalist | 52% (c) Outputs Before and After Input Data Correction T5-base before fix: Joseph Haslet was electedW Governor in 1804, defeatingW Federalist Party candidate Nathaniel MitchellN in the election of 1807. --- T5-base after fix: Joseph Haslet was electedW Governor ofW the Republican Party in 1804 and 1807, defeatingW the Federalist Party’s Nathaniel Mitchell and George Truitt. T5-large before fix: Joseph Haslet was electedW Governor in 1804, defeatingW Federalist Party candidate Nathaniel Mitchell in the election of 1807. T5-large after fix: In 1804 and 1807, Republican Joseph Haslet defeatedW Federalist Nathaniel Mitchell and Federalist George Truitt. Llama 2-13B before fix: 1804, Joseph Haslet was electedW governor with 48% of the vote. In 1807, he was re-electedW with 48% of the vote. Llama 2-13B after fix: 1804: Joseph Haslet (Republican) wonW with 48% of the votes. 1807: George Truitt (Federalist) won with 52% of the votes. Table 15: ToTTo specific input problems: List of Leader names, as detailed in Sec. 5.2.6. Correction for this input is including the party name and modifying the header ‘Subject’ to ‘Candidate’. All models generated Joseph Haslet to be the winning candidate including the Llama 2 models that had specific instructions. (a) Input Table with Original Cells highlighted in Yellow Page Title: List of members of the United States House of Representatives in the 67th Congress by seniority --- Section Title: List of Representatives by seniority Rank | Representative | Party | District | Seniority date Twenty-three non-consecutive terms 1 | Joseph G. Cannon | R | IL-18 | March 4, 1915 Fifteen consecutive terms 2 | Frederick H. Gillett | R | MA-2 | March 4, 1893 Fourteen non-consecutive terms 3 | Henry A. Cooper | R | WI-1 | March 4, 1921 One term 4 | … | … | … | … 5 | … | … | … | … 433 | … | … | … | … 434 | … | … | … | … 435 | Adam M. Wyant | PA-22 | WI-1 | March 4, 1921 Members joining the House, after the start of the Congress … | … | … | … | … (b) Outputs with Original Input before Input Data Correction T5-large before fix: The 435th member of the United States House of Representatives in the 65th Congress was a member who joined the House after the start of CongressC. --- Llama2-13B before fix: List of Representatives by seniority The table below lists the 100.O (c) Input Table with Corrected Cells highlighted in Yellow Page Title: List of members of the United States House of Representatives in the 67th Congress by seniority --- Section Title: List of Representatives by seniority Rank | Representative | Party | District | Seniority date | Term details 1 | Joseph G. Cannon | R | IL-18 | March 4, 1915 | Twenty-three non-consecutive terms 2 | Frederick H. Gillett | R | MA-2 | March 4, 1893 | Fifteen consecutive terms 3 | Henry A. 
Cooper | R | WI-1 | March 4, 1921 | Fourteen non-consecutive terms 4 | … | … | … | … | One term 5 | … | … | … | … | One term 433 | … | … | … | … | One term 434 | … | … | … | … | One term 435 | Adam M. Wyant | PA-22 | WI-1 | March 4, 1921 | One term (d) Outputs After Input Data Correction T5-large after fix: There are 435 members of the United States House of Representatives in the 65th Congress. --- Llama2-13B after fix: The table shows that there are 435 representatives in the 65th Congress, and they are ranked according to their seniority. Table 16: Longer Table input problems, as detailed in Sec. 5.2.4. (a). Original tabular input had 435 rows with term details and other details from nested headers (irrelevant details). (b). The fine-tuned T5-large model hallucinated the output text with the irrelevant header information (nested header issue). Llama 2-13B also struggled to produce the right text and generated incomplete output. Llama 2-7B produced longer garbage output. (c). First, we fixed the nested header by creating a separate column. Then, we only passed 20 records (first 10 rank and 426 to 435 rank) along with title information. We did not pass the irrelevant term details to the corrected data. (d). This simplified input records with relevant header details (Rank) fixed the errors in both models.
# Determination of the asymptotic limits of adaptive photon counting measurements for coherent-state optical phase estimation M. A. Rodríguez-García Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de. México 04510, Mexico M. T. DiMario Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, New Mexico 87131 Joint Quantum Institute, National Institute of Standards and Technology and the University of Maryland, College Park, Maryland 20742 P. Barberis-Blostein Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de. México 04510, Mexico F. E. Becerra <EMAIL_ADDRESS>Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, New Mexico 87131 ###### Abstract Physical realizations of the canonical phase measurement for the optical phase are unknown. Single-shot phase estimation, which aims to determine the phase of an optical field in a single shot, is critical in quantum information processing and metrology. Here we present a family of strategies for single- shot phase estimation of coherent states based on adaptive non-Gaussian, photon counting, measurements with coherent displacements that maximize information gain as the measurement progresses, which have higher sensitivities over the best known adaptive Gaussian strategies. To gain understanding about their fundamental characteristics and demonstrate their superior performance, we develop a comprehensive statistical analysis based on Bayesian optimal design of experiments, which provides a natural description of these non-Gaussian strategies. This mathematical framework, together with numerical analysis and Monte Carlo methods, allows us to determine the asymptotic limits in sensitivity of strategies based on photon counting designed to maximize information gain, which up to now had been a challenging problem. Moreover, we show that these non-Gaussian phase estimation strategies have the same functional form as the canonical phase measurement in the asymptotic limit differing only by a scaling factor, thus providing the highest sensitivity among physically-realizable measurements for single-shot phase estimation of coherent states known to date. This work shines light into the potential of optimized non-Gaussian measurements based on photon counting for optical quantum metrology and phase estimation. ## I Introduction Optical phase estimation is ubiquitous in many fundamental and practical problems ranging from quantum state preparation [1, 2, 3], sensing [4], communications [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], and information processing [15]. In a photonic metrology problem [16, 17, 18, 19], an optical probe state interacts with a physical system to interrogate its properties. This interaction maps parameters of the system to the state of the optical probe, where an optimal readout can be performed [17, 18, 15, 19]. When the physical property of the system is mapped onto the phase of the optical probe, the optimal quantum measurement is the canonical phase measurement [20], which consists of projections onto phase eigenstates [21]. However, while theoretically the canonical phase measurement exists, the physical realization of projections onto phase eigenstates are not physically known [22]. 
Thus in practical estimation problems in quantum metrology one seeks to determine the limits in precision of physically realizable measurements, and the degree with which they approach to the fundamental quantum limit in sensitivity [23, 24, 25, 26, 27, 17]. Physically realizable measurements of the optical phase have been widely investigated [20, 28, 21] for diverse metrological problems with quantum and classical fields [29, 30, 31, 32, 33] including sensing small deviations from a known phase [29, 30, 31, 34, 32, 33] and estimation with repeated sampling [35, 36] and measurements [30, 37, 38, 39, 40, 41]. Beyond these specific estimation problems, measurements of the phase of a single optical mode in a single-shot are central for quantum state preparation and detection [42, 43], waveform estimation and sensing [44, 45, 46, 47], and quantum information processing [48, 49, 50]. The standard measurement for optical phase estimation is the heterodyne measurement [21], which samples both quadratures of the electromagnetic field simultaneously from which the phase can be estimated. However, the achievable sensitivity of the heterodyne measurement [21] is far below the ultimate measurement sensitivity allowed by physics, given by the canonical phase measurement [51, 21]. Adaptive measurement techniques based on homodyne detection, a Gaussian measurement, can be used to align the phase quadrature of the optical field with the measurement setting where the homodyne measurement provides maximum sensitivity [32]. Adaptive homodyne has been theoretically shown to surpass the heterodyne limit and get closer to the canonical phase measurement for optical phase estimation of coherent states [20, 21], providing the most sensitive Gaussian measurement of the optical phase so far [21]. In a complementary measurement paradigm, quantum measurements of coherent states based on photon counting, displacement operations, and feedback [8, 2, 9, 10, 11, 12, 13, 6, 52, 14] have enabled state discrimination below the Gaussian sensitivity limits and approaching the Helstrom bound [5]. Recently, some of the authors proposed and demonstrated a non-Gaussian measurement strategy for single-shot phase estimation of coherent states, able to surpass the heterodyne limit and approach the sensitivity limit of a canonical phase measurement in the presence of loss and noise of real systems [2]. These measurement strategies are based on realizing coherent displacements of the input field and monitoring the output field with photon number resolving detection. The information from the detection outcomes is then used to implement real time feedback of displacement operations optimized to maximize the measurement sensitivity of the phase of the input state. This estimation strategy is the most sensitive single-shot non-Gaussian measurement of a completely unknown phase encoded in optical coherent states so far [2]. In this strategy the displacement operation optimization is realized by maximizing either the information gain in subsequent adaptive steps of the measurement or the sharpness of the posterior phase distribution. While these cost functions are functionally different, both perform similarly and get close to the ultimate sensitivity allowed by physics, the Crámer-Rao lower bound (CRLB), in the limit of many photons and many adaptive measurements. While the work in Ref. 
[2] demonstrated the potential of non-Gaussian measurements for single-shot phase estimation, the superiority over adaptive homodyne detection was not proven. A deeper understanding of the properties of convergence and ultimate limits of the estimators produced by non-Gaussian measurements is still missing. This is an open problem due to the complexity associated with the analysis of these adaptive non-Gaussian strategies. In this article we investigate a family of adaptive non-Gaussian strategies based on photon counting for single-shot optical phase estimation, and assess their performance compared to Gaussian measurements and the canonical phase measurement. To analyze these non-Gaussian strategies, we use the mathematical framework of Bayesian optimal design of experiments, which provides a natural description of non-Gaussian adaptive strategies, allowing us to investigate their fundamental characteristics and determine the limits in sensitivity, which up to now has been a challenging problem. Our work provides a comprehensive statistical analysis of adaptive non-Gaussian measurements for parameter estimation, and the requirements to approach optimal bounds in the asymptotic limit. We show that strategies based on photon counting and feedback for single-shot phase estimation of coherent states provide superior sensitivity over the best known adaptive Gaussian strategies, having the same functional form as the canonical phase measurement in the asymptotic limit, differing only by a scaling factor. This work provides a deep insight into the potential of optimized non-Gaussian measurements for quantum communication, metrology, sensing, and information processing. ## II Results ### II.1 Holevo Variance of Non-Gaussian Estimation Strategies The non-Gaussian phase estimation strategies investigated here are based on photon counting, displacement operations, and feedback, and are optimized by maximizing a specific cost function. These strategies maximize either the estimation precision (by minimizing the Holevo variance [51]), or the information gain about the unknown parameter based on entropy measures including mutual information, the Kullback-Lieber divergence, and the conditional entropy [53, 54]. We note that these cost functions produce non- identifiable likelihood functions that do not allow to correctly estimate a cyclic parameter, such as the phase [55]. To address this problem, these non- Gaussian strategies use the Fisher information to optimize the displacement operations, which are the dynamical control variable in the strategy, to guarantee that these cost functions provide identifiable likelihood functions, and to enable optical phase estimation with near optimal performance. In the problem of single-shot phase estimation with coherent states, an electromagnetic field in a coherent state $\rho_{0}=\left|\alpha\right\rangle\left\langle\alpha\right|$ interacts with a physical system and experiences a unitary transformation $e^{\mathrm{i}\phi\hat{n}}$, where $\hat{n}$ is the number operator. The phase $\phi$ induced in the coherent state carries information about the system, which can be extracted by a measurement of the output state $\rho(\phi)=e^{\mathrm{i}\phi\hat{n}}\rho_{0}e^{-\mathrm{i}\phi\hat{n}}=\left|e^{\mathrm{i}\phi}\alpha\right\rangle\left\langle e^{-\mathrm{i}\phi}\alpha\right|\,$. 
Measurements onto $\rho(\phi)$, together with an estimator $\widehat{\phi}$ on the measurement outcomes, provide an estimate of $\phi$, and a measurement strategy aims to obtain the best estimation of the physical parameter. The efficiency of such a strategy is characterized by the estimation variance as a function of the number of photons in the coherent state. The most efficient strategy provides a variance at the highest convergence rate towards zero as the number of photons increase. The standard measurement paradigm for phase estimation of Gaussian states is the heterodyne measurement (a Gaussian measurement), with an estimator variance of $\text{Var}\left[\widehat{\phi}_{\text{Het}}\right]=1/2\lvert\alpha\rvert^{2}$. Within the paradigm of Gaussian measurements, adaptive homodyne strategies optimized to minimize the Holevo variance have achieved the best performance among Gaussian measurements for single-shot phase estimation of coherent states [20]. The best adaptive Gaussian measurement reported to date, termed the Adaptive Mark-II (MKII), achieves a Holevo variance in the limit of large number of photons of: $\text{Var}\left[\widehat{\phi}_{\text{MKII}}\right]\approx\frac{1}{4\lvert\alpha\rvert^{2}}+\frac{1}{8\lvert\alpha\rvert^{3}}\,.$ (1) While this optimized Gaussian strategy surpasses the heterodyne limit, it has an error of order $1/|\alpha|^{3}$ above the Cramér-Rao Lower Bound ($1/4\lvert\alpha\rvert^{2}$), and does not reach the performance of the canonical phase measurement [21]: $\text{Var}\left[\widehat{\phi}_{\text{CPM}}\right]\approx\frac{1}{4\lvert\alpha\rvert^{2}}+\frac{5}{32\lvert\alpha\rvert^{4}}\,.$ (2) In this work, we numerically show that non-Gaussian strategies for single-shot phase estimation based on photon counting, optimized displacement operations, and real-time feedback achieve an estimator variance smaller than Gaussian strategies with an asymptotic scaling in the limit of high mean photon numbers of: $\text{Var}_{\text{H}}\left[\widehat{\phi}\right]\approx\frac{0.250\pm 0.001}{\lvert\alpha\rvert^{2}}+\frac{0.520\pm 0.010}{\lvert\alpha\rvert^{4}}\,.$ (3) Figure 1A summarizes the main result comparing the three asymptotic variances for the canonical phase measurement (solid blue), MKII (solid red), and non- Gaussian strategies (solid green and points). Figure 1B, shows the excess Holevo variance compared to the canonical phase measurement ($\text{Var}[\cdot]-\text{Var}[\hat{\phi}_{\text{CPM}}]$) for Heterodyne (purple), MKII (red), and non-Gaussian strategies (green). Fig. 1: Asymptotic variances for different phase measurements. a: Holevo variance in the limit of large mean photon (MPN) $\lvert{\alpha}\rvert^{2}$ for the canonical phase measurement in Eq. (2), the most sensitive Gaussian measurement known to date (MKII) [21] in Eq. (1), and the non-Gaussian strategies in Eq. (3). The points show numerically values for the non-Gaussian strategies in a region where the analytical expression is not valid. Error bars represent a 1-$\sigma$ standard deviation. b: Excess phase variance ($\text{Var}[\cdot]$ \- $\text{Var}[\widehat{\phi}_{\text{CPM}}]$) for Heterodyne, MKII, and non-Gaussian strategies. These non-Gaussian strategies implement a series of adaptive steps with displacement operations optimized to maximize information gain, while ensuring efficient phase estimators in the asymptotic limit. These strategies surpass the best known Gaussian strategy in Eq. 
(1), and have the same functional form as the canonical phase measurement in the asymptotic limit, differing only by a scaling factor, thus providing the highest sensitivity among physically- realizable measurements for single-shot phase estimation of coherent states known to date. Achieving this performance with non-Gaussian estimation strategies, however, requires a deep understanding of the measurement process. To gain this understanding, we use the mathematical framework of Bayesian optimal experimental design, which provides a natural description of adaptive non- Gaussian measurements. This allows us to optimize these strategies for single- shot phase estimation with a Holevo variance given by Eq. (3). ### II.2 Bayesian optimal design of experiments Phase estimation of coherent states based on photon counting with adaptive coherent displacement operations can be defined in the context of Bayesian optimal design of experiments. Optimal design of experiments allows for improving statistical inferences about quantities of interest by appropriately selecting the values of control variables known as designs [54, 56, 57]. In this framework, it is assumed that the experimental data $y$ (the measurement outcomes) can be modeled as an element of the set $\mathcal{P}_{\Phi}=\left\\{p(y\mid\mathbf{d},\mathbf{\phi}),\,\,\,\mathbf{d}\in\mathcal{D}\,\,\,\mathbf{\phi}\in\Phi\right\\},$ (4) where $\mathbf{d}$ is a parameter called design chosen from some set $\mathcal{D}$ called design space, $\bm{\phi}\in\Phi$ is an unknown parameter to be estimated, and the data $y$ comes from a sample space $\mathcal{Y}\subseteq\mathbb{R}$. In this paradigm, the experimenter has full control over the designs $\mathbf{d}$ and the ability to adjust them prior to making a measurement. This allows for optimizing such measurement for estimating the unknown parameter $\bm{\phi}$. Bayesian optimal design of experiments goes beyond standard methods for parameter estimation based on Bayesian statistical inference [58, 59, 60, 61, 62], by providing the suitable mathematical framework to ensure optimal designs to find efficient estimators for a general parameter space [63, 64, 65, 66]. In the Bayesian approach of optimal design [54, 56], the initial lack of knowledge about $\bm{\phi}$ is modeled as a prior probability distribution $p(\bm{\phi})$. The measurement aims to reduce the uncertainty of $\bm{\phi}$ as much as possible by the use of Bayes’ theorem over the prior distribution. In an estimation problem, the optimal choice for the designs $\mathbf{d}\in\mathcal{D}$ maximize the expected value of a cost function $U(\bm{d},\bm{\phi},y)$ with respect to the possible outcomes of $y$ and $\bm{\phi}$: $\begin{split}\mathbf{d}^{\text{opt}}&=\arg\max_{\mathbf{d}\in\mathcal{D}}\text{E}\left[U(\mathbf{d},\bm{\phi},y)\right]\\\ &=\arg\max_{\mathbf{d}\in\mathcal{D}}\int_{\mathcal{Y}}\int_{\Phi}U(\mathbf{d},\bm{\phi},y)p(\bm{\phi}\mid\mathbf{d},y)d\bm{\phi}p(y\mid\mathbf{d})dy\\\ &=\arg\max_{\mathbf{d}\in\mathcal{D}}\int_{\mathcal{Y}}\int_{\Phi}U(\mathbf{d},\bm{\phi},y)p\left(\bm{\phi},y\mid\mathbf{d}\right)d\bm{\phi}dy\,.\end{split}$ (5) A standard approach in optimal design of experiments considers choosing $\mathbf{d}^{\text{opt}}$ at the beginning of the experiment an then sample data from $p(y\mid\mathbf{d}^{\text{opt}},\bm{\phi})$ for all subsequent trials. An alternative approach considers dynamically updating $\mathbf{d}^{\text{opt}}$ on each trial, as more data is collected. 
The advantage of this approach is that adaptive estimation strategies are never less efficient than non-adaptive ones [67]. The implementation of Bayesian optimal design of experiments requires a cost function, such as the Kullback-Leibler divergence between the prior and posterior distributions [68]: $\begin{split}U(\mathbf{d},y)&=D_{\text{KL}}\left[p(\bm{\phi}\mid\mathbf{d},y)\mid\mid p(\bm{\phi})\right]\\\ &=\int_{\Phi}p(\bm{\phi}\mid\mathbf{d},y)\log\left[\frac{p(\bm{\phi}\mid\mathbf{d},y)}{p(\bm{\phi})}\right]d\bm{\phi}.\end{split}$ (6) The Kullback-Leibler divergence provides a distance between the probability distributions $p(\bm{\phi}\mid\mathbf{d},y)$ and $p(\bm{\phi})$ [53]. If $p(\bm{\phi}\mid\mathbf{d},y)$ is equal to $p(\bm{\phi})$, then $U(\mathbf{d},y)=0$ and there is no gain of information about $\bm{\phi}$ from measuring with design $\mathbf{d}$ and obtaining outcome $y$. Another cost function considered in optimal design is the conditional entropy between the plausible values of $\bm{\phi}$ and $y$, $\begin{split}U(\mathbf{d})&=-H(\bm{\phi}\mid Y)\\\ &=-\sum_{y\in\mathcal{Y}}p(y)\int_{\Phi}p(\bm{\phi}\mid\mathbf{d},y)\log\left[p(\bm{\phi}\mid\mathbf{d},y)\right]d\bm{\phi},\end{split}$ (7) which is a measure of how much information is needed to describe the outcomes of the random variable $\bm{\phi}$ given that the value of another random variable $Y$ is known [53]. However, we note that the cost functions in Eq. (6) and Eq. (7) can both be related to the mutual information [53]: $\begin{split}U(\mathbf{d})=I(\bm{\phi};Y)&=\text{E}_{y}\left[D_{\text{KL}}\left[p(\bm{\phi}\mid\mathbf{d},y)\mid\mid p(\bm{\phi})\right]\right]\\\ &=H(p(\bm{\phi}))-H(\bm{\phi}\mid Y).\end{split}$ (8) As a result, designs $\mathbf{d}$ maximizing any of the cost functions in Eq. (6), Eq. (7), or Eq. (8) are equivalent [54, 69, 56]. Moreover, in the asymptotic limit, maximizing these cost functions is equivalent to minimizing the determinant of the covariance matrix (the D-optimality design criterion); that is, in the asymptotic limit, optimizing these cost functions is equivalent to optimizing any member within the family of D-optimal designs [54, 68]. Our goal is to apply the theory of Bayesian optimal design of experiments to the problem of phase estimation of coherent states with photon counting and adaptive coherent displacement operations. The adaptive non-Gaussian estimation strategy consists of several parts: i) in the first adaptive step, it uses a specific cost function and the prior information to choose the design; ii) then, it performs a measurement; iii) based on the measurement outcome, it uses Bayes’ theorem to update the probability distribution; iv) lastly, it proceeds recursively, with this posterior probability distribution becoming the prior for the next iteration of step i). The estimate of $\bm{\phi}$ at each adaptive step is obtained from the maximum a posteriori (MAP) estimator of the posterior probability distribution. This approach requires that the MAP estimator converge to the true value of the parameter as the number of measurements increases.
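As a rough numerical sketch of steps i)–iii), the code below discretizes the phase on a grid, scores candidate displacements by the expected Kullback-Leibler gain of Eq. (6) (equivalently the mutual information of Eq. (8)), and then applies Bayes’ theorem to an observed photon count. It uses the Poisson photon-counting likelihood introduced later in Eq. (23); the numerical values, the grid sizes, the fixed candidate amplitude, and the truncation of the photon-number sum at the detector resolution are illustrative simplifications, not the settings used in the paper.

```python
import numpy as np
from math import factorial

# Illustrative parameters (assumed here for the sketch, not taken from the paper).
alpha, L, pnr = 2.0, 10, 5                 # field amplitude, adaptive steps, PNR(m)
phi_grid = np.linspace(0, 2*np.pi, 720, endpoint=False)

def rate(phi, beta):
    """Poisson rate |alpha*exp(i*phi)/sqrt(L) - beta|^2, cf. Eq. (23)."""
    return np.abs(alpha*np.exp(1j*phi)/np.sqrt(L) - beta)**2

def likelihood(n, phi, beta):
    lam = rate(phi, beta)
    return np.exp(-lam)*lam**n/factorial(n)

def expected_info_gain(prior, beta):
    """Expected KL divergence between posterior and prior, Eqs. (6)-(8), truncated at PNR."""
    gain = 0.0
    for n in range(pnr):
        joint = likelihood(n, phi_grid, beta)*prior     # p(n|phi,beta) p(phi)
        p_n = joint.sum()                               # marginal probability of n
        if p_n < 1e-12:
            continue
        post = joint/p_n
        gain += p_n*np.sum(post*np.log((post + 1e-30)/(prior + 1e-30)))
    return gain

# One adaptive step: pick the design (here, the phase of beta at fixed amplitude)
# that maximizes the expected information gain, then update via Bayes' theorem.
prior = np.full_like(phi_grid, 1/len(phi_grid))          # flat prior p(phi)
candidates = [0.5*np.exp(1j*t) for t in np.linspace(0, 2*np.pi, 36)]
beta_opt = max(candidates, key=lambda b: expected_info_gain(prior, b))
n_obs = 1                                                # detected photon number
posterior = likelihood(n_obs, phi_grid, beta_opt)*prior
posterior /= posterior.sum()
phi_map = phi_grid[np.argmax(posterior)]                 # MAP estimate of the phase
```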
In the adaptive mathematical framework of optimal experimental design, Paninski [67] proved that, under a set of regularity conditions on the model and in the case when $\phi\in\mathbb{R}$, cost functions based on mutual information allow for designs that lead to asymptotically consistent and efficient MAP estimators with variance $\sigma^{2}_{\text{INFO}}=\left(\max_{C\in co\left(F(\phi;\mathbf{d})\right)}\left\lvert C\right\rvert\right)^{-1}.$ (9) Here $co\left(F(\phi;\mathbf{d})\right)$ denotes the convex closure of the set of Fisher information functions $F(\phi;\mathbf{d})$. In other words, the estimates produced by $\widehat{\phi}$ converge to $\phi$ (consistency), and the distribution of $\widehat{\phi}$ tends to a normal distribution with mean $\phi$ and variance given by Eq. (9) when the number of adaptive steps tends to infinity (efficiency). Formally, the regularity conditions introduced in [67] can be stated as follows:

1. The parameter space $\Phi$ is a compact metric space.
2. The log likelihood $\log\left(p(y\mid\phi,\mathbf{d})\right)$ is uniformly Lipschitz in $\phi$ with respect to some dominating measure on $\mathcal{Y}$.
3. The likelihood function is identifiable for $\phi$, that is, the likelihood function has a unique global maximum.
4. The prior distribution assigns a positive probability to any neighborhood of the true value of $\phi$.
5. The Fisher information functions $F(\phi;\mathbf{d})$ are well defined for any $\phi\in\Phi$ and $\mathbf{d}\in\mathcal{D}$.
6. The maximum of the convex closure of the set of Fisher information functions $\left\\{F(\phi,\mathbf{d})\mid\mathbf{d}\in\mathcal{D},\,\phi\in\Phi\right\\}$ must be positive-definite, i.e., $\max_{C\in co\left(F(\phi;\mathbf{d})\right)}\left\lvert C\right\rvert>0$.

We note that in the case of estimation of a scalar parameter, the maximization of mutual information is equivalent to the minimization of the mean square error (MSE) [67]: $\begin{split}\mathrm{MSE}(\widehat{\phi})&=\mathrm{E}\left[\left(\widehat{\phi}-\phi\right)^{2}\right]\\\ &=\text{Var}\left(\widehat{\phi}\right)+\left(\mathrm{E}\left[\widehat{\phi}\right]-\phi\right)^{2}.\end{split}$ (10) This shows that the MSE is a trade-off between the estimator’s variance and its bias. As a result, since the phase is a scalar quantity, in the asymptotic limit both the mutual information in Eq. (8) and the mean square error in Eq. (10) are appropriate cost functions for finding optimal estimation strategies. Moreover, for unbiased estimators (such as the MAP estimator in the asymptotic limit), the MSE reduces to the estimator variance, so the maximization of mutual information is equivalent to the minimization of the estimator variance. In practice, however, an estimation strategy should use a cost function that can be calculated efficiently and with a high rate of convergence. Phase estimation in coherent states. Optimal phase estimation of coherent states of light aims to obtain the best estimate, from the outcomes of a physical measurement, of an unknown phase $\phi\in[0,2\pi)$ encoded in a coherent state $\rho(\phi)=\left|e^{i\phi}\alpha\right\rangle\left\langle e^{-i\phi}\alpha\right|\,$. The most general description of a physical measurement is given by a POVM, a positive operator-valued measure.
A measurement $M$ with a discrete set of outcomes $\left\\{m\,|\,m\in S\subseteq\mathbb{Z}\right\\}$, can be represented as a POVM $M=\left\\{M(m)\,|\,m\in S\right\\}$, where the operators $M(m)$ are positive bounded $M(m)>0$ and resolve the identity $\sum_{m}M(m)=I$, ${\forall m\in S}$ [70]. By the Born rule, the probability for $m$ conditioned to $\phi$ is $\text{ Tr}\left[M(m)\rho(\phi)\right]=p\left(m\mid\phi\right)\,.$ (11) According to information theory, if an estimator $\widehat{\phi}$ of ${\phi}$ is constructed using a sample from a POVM $M$, the limit in the accuracy of $\widehat{\phi}$ is given by the Crámer-Rao Bound [71, 72] $\text{Var}_{\phi}\left[\widehat{\phi}\right]\geq\frac{1}{F_{M}(\phi)}.$ (12) Here $F_{M}(\phi)$ is the Fisher information of $M$ about $\phi$, which quantifies how much information about $\phi$ is carried in a sample from $M$: $F_{M}(\phi)=\text{E}_{\phi}\left[\left(\frac{\partial}{\partial\phi}\left[\log\left(p\left(m\mid\phi\right)\right)\right]\right)^{2}\right].$ (13) Since the Fisher information in Eq. (13) depends on the POVM $M$, the maximization of the Fisher information over all POVMs provides the lowest possible Cramér-Rao bound. This maximum Fisher information over all POVMs is known as the quantum Fisher information $F_{\text{Q}}$ (QFI), and the lowest possible bound in the accuracy of $\widehat{\phi}$ is known as the quantum Cramér-Rao bound (QCRB) [5, 73, 72]. In the case of phase estimation of coherent states $F_{\text{Q}}=4\lvert\alpha\rvert^{2}$, and the QCRB is: $\text{Var}_{\phi}\left[\widehat{\phi}\right]\geq\frac{1}{4\lvert\alpha\rvert^{2}}.$ (14) In the limit of large photon number $\lvert\alpha\rvert\gg 1$, the QCRB is saturated by the canonical phase measurement (the optimal phase measurement), which is described by the POVM [51]: $M(\widehat{\phi})=\frac{1}{2\pi}\sum_{n,m=0}^{\infty}\left|n\right\rangle\left\langle m\right|e^{\mathrm{i}\left(n-m\right)\widehat{\phi}},$ (15) where $\left|n\right\rangle$ is an eigenstate of the number operator $\hat{n}$. The operator $M(\widehat{\phi})$ is an element of the canonical phase measurement whose outcome is a number $\widehat{\phi}\in\left[0,2\pi\right)$, which can be used as an estimation for $\phi$. The optimality of the canonical phase measurement indicates that its Holevo variance $V_{\text{CPM}}=\left\lvert e^{-\alpha^{2}}\sum_{n=0}^{\infty}\frac{\alpha^{2n+1}}{n!\sqrt{n+1}}\right\rvert^{2}-1\,,$ (16) is the fundamental bound of estimation precision [51, 20]. Although there are proposals that attempt to implement this POVM, they have not been able to reach the fundamental bound Eq. (16), or Eq. (2). For instance, in [74] it was possible to obtain the canonical measurement distribution as the marginal of a joint measurement in phase space producing a worse performance in the context of phase estimation. Moreover, the canonical phase measurement was implemented for the case of one-photon wave packet using quantum feedback [22]. However, for the case of higher dimensional states, such as coherent states, this problem remains open. While there is not a satisfactory known method to implement the canonical phase measurement, Gaussian strategies serve as a standard of physically realizable measurement techniques for phase estimation of coherent states. The natural benchmark in the Gaussian strategies is the heterodyne detection, whose variance is lower bounded by $\text{Var}\left[\widehat{\phi}_{\text{Het}}\right]=1/2\lvert\alpha\rvert^{2}$ [21]. 
Several adaptive Gaussian schemes have been shown to exceed the lower bound for heterodyne detection. The most efficient Gaussian measurement reported to date, termed the Adaptive Mark-II (MkII) strategy [20], has a variance in the limit of $\lvert\alpha\rvert\gg 1$ given by Eq. (1). Nonetheless, these adaptive Gaussian strategies cannot reach the performance for the canonical phase measurement in Eq. (2) [21]. The proposed non-Gaussian strategies for single-shot phase estimation of coherent states are based on optimized adaptive measurements with photon number resolving detection. These non-Gaussian strategies are able to outperform the best known Gaussian strategies and closely follow the performance of the canonical phase measurement in the limit of large photon number. ### II.3 Adaptive non-Gaussian phase estimation The proposed non-Gaussian adaptive estimation strategies based on adaptive photon counting [2] aim to estimate the phase $\phi\in\Phi=[0,2\pi)$ of a coherent state $\rho(\phi)=\left|e^{\mathrm{i}\phi}\alpha\right\rangle\left\langle e^{-\mathrm{i}\phi}\alpha\right|\,$ with mean photon number $\text{E}\left[\hat{n}\right]=\lvert{\alpha}\rvert^{2}$ using a finite number of adaptive measurement steps, and based on the prior information $p(\phi)$ about $\phi$. In every adaptive step $l=1,2,\cdots,L$, the input coherent state with energy $\lvert{\alpha}\rvert^{2}/L$ interferes with a local oscillator field, which implements a displacement operation $\hat{D}\left(\beta\right)\left|\alpha\right\rangle=\left|\alpha+\beta\right\rangle,\,\beta\in\mathbb{C},$ with phase and amplitude chosen by some rule, in general depending on previous measurement outcomes. This is followed by a photon number detection measurement with a given photon number resolution (PNR) $m$ of the detectors [6], $m\in\mathbb{N}$. In practice, since the energy in each adaptive step is $\lvert{\alpha}\rvert^{2}/L$, the strategy will only require moderate PNR resolution ($m<10$) as $L$ increases. In the first adaptive measurement $l=1$ [2], the strategy makes a random guess hypothesis $\beta_{0}\in\mathbb{C}$ about the optimal $\beta$, and applies the POVM $\displaystyle\left\\{\hat{D}(\beta_{0})\left|n\right\rangle\left\langle n\right|\hat{D}^{\dagger}(\beta_{0})\right\\}_{n=0}^{m-1}\cup$ $\displaystyle\left\\{\mathbb{I}-\sum_{n=0}^{m-1}\hat{D}(\beta_{0})\left|n\right\rangle\left\langle n\right|\hat{D}^{\dagger}(\beta_{0})\right\\}$ (17) over the state $\left|e^{\mathrm{i}\phi}\alpha/\sqrt{L}\right\rangle$. In the POVM in Eq. (II.3), the sum over Fock states $\left|n\right\rangle\left\langle n\right|$ models the photon detection on the displaced state with a detector with PNR($m$) [6]. The corresponding Wigner function describing a photon- number resolving detector shows non-Gaussian features with negative values. For this reason, these adaptive estimation techniques are called “non- Gaussian”, despite that the estimator produced is asymptotically normal (this result will be proved in the remainder of this section). 
Given the detection outcome $n_{1}$ in $l=1$, the posterior probability distribution becomes $p\left(\phi\mid n_{1};\beta_{0}\right)\propto\mathcal{L}(\phi\mid n_{1};\beta_{0})p(\phi),$ (18) where $\mathcal{L}\left(\phi\mid n_{1};\beta_{0}\right)$ is the likelihood function given by $\begin{split}\mathcal{L}\left(\phi\mid n_{1};\beta_{0}\right)&=p\left(n_{1}\mid\phi;\beta_{0}\right)\\\ &=\operatorname{Tr}\left[\hat{D}(\beta_{0})\left|n_{1}\right\rangle\left\langle n_{1}\right|\hat{D}^{\dagger}(\beta_{0})\rho(\phi)\right]\,.\end{split}$ (19) The phase estimate $\phi_{1}$ in this adaptive step corresponds to the MAP estimator $\widehat{\phi}_{\text{MAP}}$, $\phi_{1}=\widehat{\phi}_{\text{MAP}}(n_{1})$, with the posterior probability distribution in Eq. (18). Using the posterior phase distribution in Eq. (18) as the prior for the next adaptive step $l=2$, the strategy optimizes a cost function $U(\beta)$ to obtain the next value of $\beta$ denoted as $\beta_{1}$, and implements the POVM in Eq. (II.3) with $\beta_{1}$. The Bayesian updating procedure is repeated at each step $l\geq 2$. After $l$ adaptive measurements the posterior probability distribution becomes $\begin{split}p(\phi\mid\mathbf{n},\bm{\beta})&=p(\phi\mid n_{l},...,n_{1},\beta_{l-1},...,\beta_{0})\\\ &\propto\prod_{i=1}^{l}p(n_{i}\mid\phi,\beta_{i-1})p(\phi).\end{split}$ (20) Here $n_{i}$ is the observed photon detection at step $i$. Using the MAP on this phase distribution, we obtain the $lth$ estimation $\widehat{\phi}_{l}$. The procedure is repeated until the last measurement step $L$. This parameter estimation strategy is a particular case of Bayesian optimal design of experiments, where the parameters $\beta\in\mathbb{C}$ are the designs, and which are optimized to estimate a phase $\phi\in[0,2\pi)$. Since the optimal value for $\beta$ on each adaptive step depends on all previous detection results, the cost function to be optimized is a function of the posterior distribution in Eq. (20). A suitable choice of cost function, such as the mutual information or the estimator variance, can provide a sequence of estimations $\widehat{\phi}_{n}$ that approaches the true value of $\phi$ [2]. In the case of estimation of cyclic parameters, such as phase estimation, the posterior distribution in Eq. (20) is $2\pi$ periodic, and the moments of $\widehat{\phi}$ cannot be calculated as in the linear case [51]. In such situations, the first moment of the cyclic random variable $X$ is defined as $\rm{E}\left[e^{\mathrm{i}X}\right]$, and the dispersion of an estimator $\widehat{\phi}$ is calculated using the Holevo Variance [51]: $\text{Var}_{\text{H}}\left[\widehat{\phi}\right]=\frac{1}{\left\lvert\rm{E}\left[e^{\mathrm{i}\widehat{\phi}}\right]\right\rvert^{2}}-1\,,$ (21) which is the analogous to the mean square error. The minimization of the uncertainty about $\phi$ (positive square root of Eq. (21)), requires maximization of $S(\beta,m)=\left\lvert\rm{E}\left[e^{\mathrm{i}\widehat{\phi}}\right]\right\rvert$, known as the sharpness of the distribution. Then, the suitable cost function for the adaptive protocol is the average sharpness: $\bar{S}(\beta,m)=\sum_{n=0}^{m}p(n)\left\lvert\int_{\Phi}e^{\mathrm{i}\phi}p\left(\phi\mid n,\beta\right)d\phi\right\rvert.$ (22) Identifiability of Likelihood. To guarantee a consistent asymptotic estimator the optimized estimation strategies require to satisfy the regularity conditions 1-6 described in Sec. II.2. 
For phase estimation, the regularity conditions $1\text{ and }2$ are satisfied given that $\phi$ is an interior point of $\Phi=[0,2\pi)$. Moreover, given that the probability $\begin{split}&p(n\mid\phi;\beta)=\text{Tr}\left[\left|\frac{\alpha e^{\mathrm{i}\phi}}{\sqrt{L}}\right\rangle\left\langle\frac{\alpha e^{\mathrm{i}\phi}}{\sqrt{L}}\right|\hat{D}(\beta)\left|n\right\rangle\left\langle n\right|\hat{D}^{\dagger}(\beta)\right]\\\ &=\frac{\exp\left(-\lvert\frac{\alpha e^{\mathrm{i}\phi}}{\sqrt{L}}-\beta\rvert^{2}\right)\lvert\frac{\alpha e^{\mathrm{i}\phi}}{\sqrt{L}}-\beta\rvert^{2n}}{n!},\,\alpha\in\mathbb{R}_{+},\,\beta\in\mathbb{C}\end{split}$ (23) is well defined and twice differentiable, the conditions $4$, $5$, and $6$ are directly satisfied. However, condition $3$ is not satisfied in general. If one chooses $\beta$ as the value that maximizes the mutual information (8) or the average sharpness (22), the resulting likelihood function can in general have two maxima, that is, it can be non-identifiable [30]. In that case it is not possible to guarantee the existence of a consistent estimator. To address the challenge of working with non-identifiable likelihood functions, we use designs with a fixed relation between the phase of $\beta$ and the amplitude $|\beta|$, given by $\beta=f(\theta)e^{\mathrm{i}\theta}$, with $\theta\in[0,2\pi)$ and $f(\theta)$ a real function. These experimental designs result in a cost function $U$ that is a function of $\theta$. To see how this method solves the problem of non-identifiability, we observe that the probability $p(n\mid\phi;\beta)$ in Eq. (23) is Poisson distributed, $\begin{split}p(n\mid\phi;\beta=\lvert\beta\rvert e^{\mathrm{i}\theta})&=\frac{e^{-\lambda}\cdot\lambda^{n}}{n!},\end{split}$ (24) with $\lambda=\lvert\alpha\rvert^{2}/L+\lvert\beta\rvert^{2}-2\lvert\alpha\rvert\lvert\beta\rvert\cos\left(\phi-\theta\right)/\sqrt{L}$. Then, for $L$ adaptive steps with results $\mathbf{n}=(n_{1},...,n_{L})$, the likelihood function is the product of $L$ probability functions of the form of Eq. (24): $\begin{split}\mathcal{L}(\mathbf{n},\phi)=\prod_{i=1}^{L}p(n_{i}\mid\phi;\beta_{i}=\lvert\beta\rvert e^{\mathrm{i}\theta_{i}})&=\prod_{i=1}^{L}\frac{e^{-\lambda_{i}}\cdot\lambda_{i}^{n_{i}}}{n_{i}!}\,.\end{split}$ (25) Here, each $\theta_{i}$ depends on the cost function and previous POVM outcomes. The choice of the experimental designs with $|\beta|=f(\theta)$ can force the adaptive strategy to change $\theta_{i}$ in each step. In this case, the likelihood function in Eq. (25) becomes identifiable because the product of probability functions with different $\theta_{i}$ produces a likelihood with a unique global maximum. To see this, note that if $\theta_{i}=\theta$ is fixed, even if the parameter $|\beta|$ is different in each adaptive step of the protocol, the likelihood function from a sequence of independent random variables with probability function given by Eq. (24) has two maxima over $\Phi$. One of the maxima will always be around $\phi$ and the other will depend on the value of $\theta$. On the other hand, if the strategy allows $\theta$ to change at each adaptive step, the second maximum is suppressed, since the functions whose product constitutes the likelihood in Eq. (25) will each have their second maximum at a different position, $\theta_{i}\neq\theta_{j}\,(i\neq j)$. As a result, with the experimental designs $\beta=f(\theta)\exp(\mathrm{i}\theta)$, the likelihood functions satisfy all the regularity conditions described in Sec. II.2.
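This mechanism can be checked directly on the expected log-likelihood of the Poisson model in Eqs. (24)-(25): with a fixed $\theta$ it is exactly symmetric about $\theta$, so the true phase and its mirror image are equally likely, whereas changing $\theta$ from step to step leaves a single global maximum. A minimal sketch (Julia; all parameter values and names are illustrative assumptions, not the authors' code):

```julia
# Minimal sketch: non-identifiability for a fixed displacement phase θ versus a varied θ.
λmean(ϕ, α, L, β) = abs2(α * cis(ϕ) / sqrt(L) - β)        # Poisson mean, Eq. (24)

# Expected log-likelihood Σᵢ [ λᵢ(ϕ0) log λᵢ(ϕ) − λᵢ(ϕ) ], ϕ-independent terms dropped
expected_loglik(ϕ, ϕ0, α, L, βs) =
    sum(λmean(ϕ0, α, L, β) * log(λmean(ϕ, α, L, β)) - λmean(ϕ, α, L, β) for β in βs)

α, L, ϕ0, θ = 2.0, 20, 1.0, 2.4
mirror = mod(2θ - ϕ0, 2π)                                  # phase indistinguishable from ϕ0 at fixed θ

β_fixed  = [0.5 * cis(θ) for _ in 1:L]                     # same θ in every step
β_varied = [0.5 * cis(2π * i / L) for i in 1:L]            # θ changes from step to step

for βs in (β_fixed, β_varied)
    gap = expected_loglik(ϕ0, ϕ0, α, L, βs) - expected_loglik(mirror, ϕ0, α, L, βs)
    println("log-likelihood gap between ϕ0 and its mirror: ", gap)
end
# Fixed θ: gap ≈ 0 (two equally good phases).  Varied θ: gap > 0 (unique maximum at ϕ0).
```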
Bayesian optimal design of $|\beta|$. Any viable estimation strategy should aim to achieve the QCRB (14). Therefore, a natural choice for $\lvert{\beta}\rvert$ is the one that maximizes the Fisher information. For a discrete random variable, the Fisher information is given by [72, 75]: $F(\phi)=\sum_{n=0}^{\infty}p(n\mid\phi)\left(\frac{\partial}{\partial\phi}\log\left(p(n\mid\phi)\right)\right)^{2}.$ (26) The Fisher information for a particular design $\beta=\lvert{\beta}\rvert e^{\mathrm{i}\theta}$ for the Poisson distribution in Eq. (24) results in $F(\phi;\beta)=\frac{4\lvert\alpha\rvert^{2}\lvert\beta\rvert^{2}\sin^{2}(\phi-\theta)}{\lvert\alpha\rvert^{2}+L\lvert\beta\rvert^{2}-2\lvert\alpha\rvert\lvert\beta\rvert\cos(\phi-\theta)\sqrt{L}}.$ (27) Optimizing over $|\beta|$, the Fisher information becomes: $F(\phi,\beta_{\text{opt}})=4\lvert\alpha\rvert^{2}/L,$ (28) where $\beta_{\text{opt}}=\frac{\lvert\alpha\rvert}{\sqrt{L}\cos(\phi-\theta)}$ (29) is the value of $|\beta|$ that maximizes the Fisher information. Unfortunately, since $\beta_{\text{opt}}$ depends on $\phi$, its implementation is not practical, because it would require a priori knowledge of the very parameter one wants to estimate. To address this problem, the optimal Bayesian design $\beta_{\text{opt}}$ can be estimated as: $\widehat{\beta}_{\text{opt}}=\frac{\lvert\alpha\rvert}{\sqrt{L}\cos(\widehat{\phi}-\theta)},$ (30) where $\widehat{\phi}$ is the MAP estimator. As a result, the non-Gaussian estimation strategy now has only one design, the phase $\theta$. With $\beta=\widehat{\beta}_{\text{opt}}$, the Fisher information becomes: $F(\Delta,\widehat{\beta}_{\text{opt}})=\frac{4\,\sin^{2}(\Delta)\,\alpha^{2}}{L(\cos^{2}\left(\delta+\Delta\right)-2\,\cos(\Delta)\,\cos\left(\delta+\Delta\right)+1)}\,,$ (31) where $\delta=\widehat{\phi}-\phi$ and $\Delta=\phi-\theta$. Note that $F(\Delta,\widehat{\beta}_{\text{opt}})$ becomes a random variable with outcomes depending on $\widehat{\phi}$ through $\widehat{\beta}_{\text{opt}}$. Moreover, for a random initial design $\theta_{1}$ and a cost function given by the expected sharpness or the mutual information with $\widehat{\beta}_{\text{opt}}$ in Eq. (30), the likelihood function in Eq. (25) becomes identifiable. As a result, the non-Gaussian strategy leads to consistent and efficient MAP estimators [67]. Asymptotic behavior. In general $\delta=\widehat{\phi}-\phi\neq 0$, and the Fisher information has a strong dependence on $\theta$. For example, if $\theta\to\phi$ then $F(\Delta=0,\widehat{\beta}_{\text{opt}})\to 0$, and negligible information is gained when the system is measured. Therefore, the optimal value of $\theta$ should result in the value of $\Delta$ that maximizes the expected value of $F(\Delta,\widehat{\beta}_{\text{opt}})$. By observing that the expected value $\rm{E}[\delta^{2n+1}]=0$, with $n\in\mathbb{N}$, we see that the optimal value of $\Delta$ is $\pi/2$. As a result, an efficient adaptive strategy would make $\Delta_{L}=\phi-\theta_{L}$ tend to $\pi/2$ as $L$ increases. However, in the limit of $\Delta\to\pi/2$ and $\delta\to 0$, $|\widehat{\beta}_{\text{opt}}|\to\infty$ (see Eq. (30)), and we thus expect $\Delta<\pi/2$ when the strategy is implemented.
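As a quick numerical check of Eqs. (27)-(29), one can scan $|\beta|$ at a fixed phase difference $\phi-\theta$ and verify that the Fisher information of the Poisson model peaks at $\beta_{\text{opt}}$ with the value $4|\alpha|^{2}/L$. A minimal sketch (Julia; the parameter values are arbitrary and for illustration only):

```julia
# Sketch: the Fisher information of the Poisson model (24) peaks at β_opt, Eq. (29),
# with the maximal value 4|α|²/L, Eq. (28).

function fisher(ϕ, θ, βmag, α, L)
    λ  = α^2 / L + βmag^2 - 2α * βmag * cos(ϕ - θ) / sqrt(L)   # Poisson mean, Eq. (24)
    dλ = 2α * βmag * sin(ϕ - θ) / sqrt(L)                      # ∂λ/∂ϕ
    return dλ^2 / λ                                            # Fisher information of a Poisson model
end

α, L  = 2.0, 10
ϕ, θ  = 1.0, 0.2
βgrid = range(0.01, 5.0, length = 2000)
F     = [fisher(ϕ, θ, b, α, L) for b in βgrid]

βopt = α / (sqrt(L) * cos(ϕ - θ))                              # Eq. (29)
println("argmax over grid: ", βgrid[argmax(F)], "   vs   β_opt = ", βopt)
println("max F = ", maximum(F), "   vs   4|α|²/L = ", 4α^2 / L)
```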
These findings are consistent with our numerical simulations of the non-Gaussian adaptive strategy, where we observe that for $L\gg 1$, $\Delta\to\pi/2-\epsilon$, with $\epsilon$ a small positive real number, and $|\beta|$ stays finite around a value that the PNR detector in the strategy can resolve. Note that $\hat{\beta}_{\text{opt}}$, which maximizes the Fisher information given $\theta$, does not necessarily maximize the cost function $U(\beta)$ for $\beta\in\mathbb{C}$. However, restricting $\beta$ to the set that maximizes the Fisher information makes the phase of $\beta$ change in each step, forcing the likelihood to be identifiable. Moreover, when $\hat{\phi}\to\phi$, $\hat{\beta}_{\text{opt}}$ tends to the value that maximizes $U$ [2]. Given that the Fisher information is additive, for any $L$ measurements made using the optimal design, $\beta_{\text{opt}}$ in Eq. (29), the total Fisher information for this design corresponds to the quantum Fisher information $4|\alpha|^{2}$. However, since $\beta_{\text{opt}}$ is not known, it is not possible to choose a design $\beta$ whose Fisher information equals the quantum Fisher information. This is already implied by the fact that the canonical phase measurement does not saturate the Cramér-Rao bound. To show this, we observe that for an estimator very close to the true value, $\delta=\widehat{\phi}-\phi\ll 1$, for $L$ adaptive measurements the Fisher information (Eq. (31)) is $F(\phi,\widehat{\beta}_{\text{opt}})\approx 4\,|\alpha|^{2}(1-\delta^{2})\,.$ (32) On the other hand, the best possible estimator of $\delta$ in each step satisfies $E\left[\delta^{2}\right]\geq 1/4|\alpha|^{2}$, so that $F(\phi,\widehat{\beta}_{\text{opt}})\lesssim 4\,|\alpha|^{2}-1\,,$ (33) independently of $L$. We conclude that the adaptive non-Gaussian strategies do not saturate the Cramér-Rao bound for finite $|\alpha|$. Nevertheless, these adaptive estimation schemes outperform the most sensitive Gaussian strategy known to date, and show an asymptotic scaling similar to that of the canonical phase measurement.
### II.4 Performance of Non-Gaussian Adaptive Strategies
We numerically investigate the performance of the estimator produced by non-Gaussian adaptive strategies for phase estimation for different numbers of adaptive steps $L$, photon number resolution PNR($m$), and average photon number $|\alpha|^{2}$. To assess the performance of these strategies, we compare our results with the best known Gaussian measurement for phase estimation, termed the Mark II (MKII) strategy [21], and with the canonical phase measurement. As discussed in Sec. II.2, the performances of non-Gaussian adaptive strategies using cost functions including the Kullback-Leibler divergence Eq. (6), conditional entropy Eq. (7), mutual information Eq. (8), and expected sharpness Eq. (22) are equivalent in the asymptotic limit. In our numerical simulations, however, we use the expected sharpness in Eq. (22) as the cost function for the optimization of the strategy, because it substantially reduces the number of operations for this optimization and the computational overhead. In our numerical analysis, we use Monte Carlo simulations and generate sufficient numerical data samples to reduce statistical uncertainties. Fig. 2A shows the Holevo variance for the non-Gaussian adaptive strategy, as a function of the number of adaptive steps $L$ for $|\alpha|^{2}=1$, for different PNR: $m=1,3,6$.
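Both figures of merit used in this analysis, the Holevo variance of Eq. (21) and the average sharpness cost of Eq. (22), reduce to a few lines of code. The sketch below (Julia; a uniform prior, a simple phase grid, and all names and values are illustrative assumptions, not the authors' code) evaluates them for the displaced photon-counting POVM of Eq. (17):

```julia
# Illustrative sketch of the two figures of merit: Holevo variance (Eq. (21)) and
# average sharpness cost (Eq. (22)), for the PNR(m) POVM of Eq. (17).

poisson_pmf(n, λ) = exp(-λ) * λ^n / factorial(n)

# Likelihood of outcome n for the POVM of Eq. (17); n = m labels the "≥ m" element
function outcome_lik(n, ϕ, α, L, β, m)
    λ = abs2(α * cis(ϕ) / sqrt(L) - β)
    return n < m ? poisson_pmf(n, λ) : 1 - sum(poisson_pmf(k, λ) for k in 0:m-1)
end

# Holevo variance of a set of phase estimates, Eq. (21)
holevo_var(estimates) = 1 / abs2(sum(cis, estimates) / length(estimates)) - 1

# Average sharpness S̄(β, m), Eq. (22): note p(n)|∫ e^{iϕ} p(ϕ|n,β) dϕ| = |∫ e^{iϕ} p(n|ϕ;β) p(ϕ) dϕ|
function avg_sharpness(β, m, α, L, ϕgrid, prior)
    dϕ, Sbar = step(ϕgrid), 0.0
    for n in 0:m
        Sbar += abs(sum(cis(ϕ) * outcome_lik(n, ϕ, α, L, β, m) * p
                        for (ϕ, p) in zip(ϕgrid, prior)) * dϕ)
    end
    return Sbar
end

ϕgrid = range(0, 2π, length = 512)
prior = fill(1 / 2π, length(ϕgrid))
println("S̄    = ", avg_sharpness(0.5 + 0.2im, 3, 1.0, 10, ϕgrid, prior))
println("VarH = ", holevo_var([0.10, 0.12, 0.09, 0.11]))   # small spread → small Holevo variance
```

In a full simulation, `avg_sharpness` would be maximized over the remaining design $\theta$ (through $\beta=f(\theta)e^{\mathrm{i}\theta}$ with Eq. (30)) before every adaptive step, and `holevo_var` would be applied to the final MAP estimates of many Monte Carlo runs.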
We observe that the adaptive non-Gaussian scheme with PNR($1$) and $L\geq 100$ (green dots with error bars) outperforms the most sensitive Gaussian strategy known to date, the MKII (red dashed line). Moreover, strategies with PNR($3$) (yellow dots with error bars) and PNR($6$) (light blue dots with error bars) outperform the MKII strategy with only $L\approx 30$ and $L\approx 20$, respectively, achieving a smaller Holevo variance with fewer adaptive measurements compared to PNR(1). We have observed similar behavior for non-Gaussian strategies optimizing different cost functions, such as the mutual information. To investigate the asymptotic behavior of the adaptive non-Gaussian strategy, we assume an exponential dependence of the Holevo variance on $L$ (solid lines in Fig. 2A): $y(L,\alpha)=\text{A}e^{-\text{B}\cdot L}+\text{C}.$ (34) The exponential model is a few-parameter model that allows us to quantify asymptotic trends when the datasets have rapidly decaying tails. This is a widely used model for studying the asymptotic scaling of estimators as a function of resources in diverse metrological problems [64, 76, 77]. We fit the numerical data from Monte Carlo simulations to Eq. (34) to estimate the constants $A$, $B$, and $C$. Our results show that the asymptotic constant is $C=0.751\pm 0.002$ for PNR(1), $C=0.719\pm 0.004$ for PNR(3), and $C=0.714\pm 0.003$ for PNR(6). We observe that these values are smaller than the asymptote of the MKII Gaussian strategy, but larger than the one for the canonical phase measurement ($0.673$, blue dashed line). We note that while $C$ for PNR(3) is larger than for PNR(6), the two values are statistically equal within their uncertainties. This prevents us from drawing any conclusions for larger values of $m$. Fig. 2: Asymptotic limit for the estimator of Holevo variance and optimal design. a: Holevo variance for $\alpha=1$ and PNR $m=1,3,6$ as a function of adaptive steps $L$. Note that the non-Gaussian strategy surpasses the MKII strategy (red dashed line at $y=0.767$) with $L>100$, $L>30$ and $L>20$ for PNR($1$), PNR($3$) and PNR($6$), respectively. The lines are obtained by fitting the exponential model Eq. (34). Estimates for these strategies result in $(A,B,C,\mathrm{RSE})=(0.11\pm 0.02,0.059\pm 0.014,0.7145\pm 0.003,0.00058)$ for PNR($6$), $(A,B,C,\mathrm{RSE})=(0.22\pm 0.04,0.082\pm 0.0157,0.719\pm 0.004,0.00076)$ for PNR($3$), and $(A,B,C,\mathrm{RSE})=(0.41\pm 0.05,0.117\pm 0.013,0.7517\pm 0.0027,0.00052)$ for PNR($1$). The shaded region represents one standard deviation. b: Optimal design as a function of $L$. As $L$ increases the optimal design tends to $\pi/2$. Numerical data are obtained with $10000$ Monte Carlo sequences. Error bars represent a 1-$\sigma$ standard deviation. Figure 2b shows the design $\Delta=\widehat{\phi}-\theta$, the phase of the displacement field, as a function of $L$. We observe that for non-Gaussian adaptive strategies with increasing PNR, $\Delta$ tends to $\pi/2$ for large $L$ ($L=200$). This observation is consistent with the theoretical framework of optimal design of experiments (Sec. II.3), which states that $\Delta_{optimal}=\pi/2$ for $L\to\infty$. Moreover, as PNR increases, the strategies show a faster convergence to the asymptotic value of the Holevo variance, which translates into a smaller estimation error (see Fig. 2a). These observations further support our theoretical model of non-Gaussian strategies for phase estimation. Fig. 3: Estimator variance produced by the non-Gaussian strategy.
Holevo variance of adaptive non-Gaussian strategies as a function of $\lvert\alpha\rvert^{2}$, normalized to the QCRB, for different adaptive measurement steps (solid lines), together with the canonical phase measurement (blue dashed line). For $L=100$ the Holevo variance differs from the QCRB by $3\%$, exemplifying the tendency toward the ultimate precision limit in the regime of large photon numbers in the asymptotic limit. Simulations consist of $10000$ Monte Carlo sequences with PNR $m=3$. We investigate the asymptotic performance of the estimator variance produced by the non-Gaussian strategy for large $|\alpha|^{2}$. Figure 3 shows the Holevo variance for the non-Gaussian adaptive strategy as a function of $|\alpha|^{2}$ for different $L$ normalized to the QCRB. We observe that for large $|\alpha|^{2}$ (with $L=100$), the adaptive non-Gaussian strategy tends to the CRLB. To build a model for the performance of this strategy for large $|\alpha|^{2}$, we propose three candidate models for the Holevo variance for $L\gg 1$: $\displaystyle\widehat{y}_{1}$ $\displaystyle=$ $\displaystyle\frac{A_{1}}{\lvert\alpha\rvert^{2}}+\frac{A_{2}}{\lvert\alpha\rvert^{3}}+\frac{A_{3}}{\lvert\alpha\rvert^{4}},$ (35a) $\displaystyle\widehat{y}_{2}$ $\displaystyle=$ $\displaystyle\frac{A_{1}}{\lvert\alpha\rvert^{2}}+\frac{A_{2}}{\lvert\alpha\rvert^{3}},$ (35b) $\displaystyle\widehat{y}_{3}$ $\displaystyle=$ $\displaystyle\frac{A_{1}}{\lvert\alpha\rvert^{2}}+\frac{A_{3}}{\lvert\alpha\rvert^{4}}\,,$ (35c) to describe our numerical observations in Fig. 3 based on the Monte Carlo simulations. The model that best describes our observations allows us to determine, with a certain degree of confidence, the performance of non-Gaussian adaptive strategies, and compare them with the best Gaussian strategy, Eq. (1), and the canonical phase measurement, Eq. (2). We discriminate among plausible candidate models using the technique of backward elimination [78]. We start with the candidate $\widehat{y}_{1}$, Eq. (35a), and test the deletion of each variable $A_{i}$ using the $p$-value of a hypothesis testing procedure ($H_{0}:\,A_{i}=0$, $H_{A}:\,A_{i}\neq 0$). Given that $p>0.1$ for $A_{2}$ and $p<0.001$ for $A_{1}$ and $A_{3}$, we conclude, with confidence larger than $99\%$, that the model reflecting the behavior of the data presented in Fig. 3 is $\widehat{y}_{3}$, Eq. (35c). To find $A_{1}$ and $A_{3}$ in the limit $L\to\infty$ in model $\widehat{y}_{3}$, we fit the Holevo variance as a function of $|\alpha|^{2}$ for $\lvert{\alpha}\rvert^{2}>5$ to the model $\widehat{y}_{3}$ for different values of $L$. Given a number of adaptive steps $L$, each fitting provides a set of coefficients $\left\{A_{1},A_{3}\right\}$. Hence, to obtain the trend of each coefficient $A_{1}$ and $A_{3}$ as $L$ increases, we fit them to an exponential model of the form $A_{i}=D_{i}\exp\left(-E_{i}L\right)+F_{i}$. Fig. 4 shows the coefficients $A_{1}$ and $A_{3}$ as a function of $L$. Fitting this exponential model and observing that in the limit $L\gg 1$ we have $A_{i}\rightarrow F_{i}$, we obtain $A_{1}=0.250\pm 0.001$ and $A_{3}=0.52\pm 0.01$. Therefore, we conclude with a $99\%$ confidence level that the Holevo variance for the adaptive non-Gaussian strategy in the limit $L\to\infty$ for large $|\alpha|$ is: $\text{Var}_{\text{H}}\left[\widehat{\phi}\right]\approx\frac{0.250\pm 0.001}{\lvert\alpha\rvert^{2}}+\frac{0.520\pm 0.010}{\lvert\alpha\rvert^{4}}\,.$ (36) This equation is the main result of this work, also shown in Eq. (3).
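Because $\widehat{y}_{3}$ in Eq. (35c) is linear in the coefficients $(A_{1},A_{3})$, each fit at fixed $L$ is an ordinary least-squares problem. The sketch below (Julia) shows the two-column linear solve behind such a fit; the data points are synthetic, generated from Eq. (36) itself purely to illustrate the mechanics, and are not the paper's Monte Carlo data.

```julia
# Illustrative sketch of the fit behind Eq. (36): model ŷ₃ = A₁/|α|² + A₃/|α|⁴ (Eq. (35c))
# fitted by ordinary least squares.  The "data" are synthetic, for illustration only.

αsq = collect(5.0:1.0:30.0)                      # |α|² values (illustrative, |α|² > 5)
y   = 0.250 ./ αsq .+ 0.520 ./ αsq.^2            # synthetic Holevo variances from Eq. (36)

X = hcat(1 ./ αsq, 1 ./ αsq.^2)                  # regressor columns: 1/|α|², 1/|α|⁴
A = X \ y                                        # least-squares estimates (A₁, A₃)
println("A₁ = ", A[1], ",  A₃ = ", A[2])         # recovers 0.250 and 0.520
```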
We observe that the Holevo variance for the adaptive non-Gaussian strategy has a dependence on $|\alpha|$ similar to that of the canonical phase measurement, differing only in the scaling of the correction term of order $1/|\alpha|^{4}$, see Eq. (2). Moreover, we note that the best Gaussian strategy known to date, the MKII adaptive homodyne, has a correction term of order $1/|\alpha|^{3}$ [20], see Eq. (1). Then, for large $|\alpha|$, the non-Gaussian estimation strategies show a much better scaling in the Holevo variance than the best known Gaussian strategy, and closely follow the canonical phase measurement. Furthermore, these non-Gaussian phase estimation strategies can be implemented with current technologies [2], and our work demonstrates their superior performance over all the physically realizable strategies for single-shot phase estimation of coherent states reported to date. Fig. 4: Coefficients of model $\widehat{y}_{3}$ in Eq. (35c) for the Holevo Variance $\text{Var}_{\text{H}}[\widehat{\phi}]$, as a function of $L$. Note that in the limit of $L\to\infty$ the coefficient $A_{1}$ of the term $1/\lvert\alpha\rvert^{2}$ tends to $0.25$, and $A_{3}$ of the term $1/\lvert\alpha\rvert^{4}$ tends to $0.52$. Curves are fits to an exponential model $A_{i}=D_{i}\exp(-E_{i}L)+F_{i}$ with coefficients $(D,E,F,\mathrm{RSE})=(0.065\pm 0.013,0.060\pm 0.010,0.250\pm 0.001,1.87\times 10^{-7})$ for the left panel, and $(D,E,F,\mathrm{RSE})=(0.095\pm 0.015,0.279\pm 0.119,0.522\pm 0.010,1.82\times 10^{-5})$ for the right panel. The shaded regions represent a 1-$\sigma$ standard deviation. The optimized non-Gaussian adaptive strategies based on photon counting analyzed in this work are not the only possible strategies for single-shot phase estimation, and there could be other strategies based on photon counting with better performance. In this work, we studied estimation strategies using cost functions that are consistent with D-optimal designs. However, there may be other cost functions that could provide further improvements to non-Gaussian strategies. Moreover, we note that the relation in Eq. (30) between the phase and the magnitude of the displacement field was used to obtain identifiable likelihood functions and ensure efficient estimators in the asymptotic limit. While we chose the relation in Eq. (30) because it maximizes the Fisher information, there is no mathematical proof that this choice is optimal, or that other choices for this relation would not provide higher sensitivities. Finally, in the presented adaptive non-Gaussian strategies for phase estimation, the displacement operations were optimized one adaptive step at a time, i.e., using local optimizations in the adaptive steps. We note, however, that locally optimal strategies do not necessarily ensure global optimality [13]. Strategies with global optimizations, where all adaptive steps are optimized simultaneously, could possibly lead to better performance. However, the computational overhead required for performing global optimizations beyond $L=10$ adaptive steps prevents us from investigating globally optimized strategies. To overcome these limitations, future investigations could make use of machine learning methods, such as neural networks and reinforcement learning [79], to lower the complexity of these calculations.
## III Discussion
Non-Gaussian measurement strategies for phase estimation approaching the quantum limits in sensitivity, as set by the canonical phase measurement, can become a tool for enhancing the performance of diverse protocols in quantum sensing, communications, and information processing. Optical phase estimation approaching the quantum limit can be used to prepare highly squeezed atomic spin states based on measurement backaction [42] for quantum sensing and metrology [80]; to enhance phase-contrast imaging of biological samples with optical probes at the few-photon level to avoid photodamage and ensure the integrity of the sample; and to enhance fidelities in quantum communications with phase coherent states that require phase tracking close to the quantum level between receiver and transmitter with few-photon pulses [81, 82], while allowing for quantum receivers to decode information encoded in coherent states below the quantum noise limit [83, 13, 52]. As a direct application for quantum information processing, non-Gaussian measurements based on photon counting and displacement operators can be used for full reconstruction of quantum states with on-off detectors [84] and PNR detectors [85] in a multi-copy state setting by sampling phase space with non-Gaussian projections. The theoretical methods to assess the performance of adaptive non-Gaussian measurements for phase estimation described in this work could be used to study methods for adaptive quantum tomography [86, 87] based on photon counting for high-dimensional quantum states, and to investigate their asymptotic advantages over adaptive homodyne tomography [88]. From the practical point of view, our work provides insight into the design of practical, highly efficient measurement strategies for phase estimation based on photon counting. It shows that non-Gaussian strategies optimizing any cost function within the family of D-optimal designs are equally efficient, and demonstrates the advantages of higher photon number resolution in the strategies to reduce estimation errors. This understanding allows the experimenter to choose cost functions that can be efficiently calculated and optimized to achieve higher convergence rates, while selecting a PNR given the target error budget of the estimation problem for specific applications. This knowledge will be critical for the design and development of future sensors based on photon counting for diverse applications in communication, phase-contrast imaging, metrology, and information processing. In conclusion, we investigate a family of non-Gaussian strategies for single-shot phase estimation of optical coherent states. These strategies are based on adaptive photon counting measurements with a finite number of adaptive steps implementing coherent displacement operations, optimized to maximize information gain as the measurement progresses [2]. We develop a comprehensive statistical analysis based on Bayesian optimal design of experiments that provides a natural description of adaptive non-Gaussian strategies. This theoretical framework gives a fundamental understanding of how to optimize these strategies to enable efficient estimators with a high degree of convergence towards the ultimate limit, the Cramér-Rao lower bound. We use numerical simulations to show that optimized adaptive non-Gaussian strategies producing an asymptotically efficient normal estimator achieve a much higher sensitivity than the best Gaussian strategy known to date, which is based on adaptive homodyne detection [20].
Moreover, we show that the Holevo variance of the estimator for the adaptive non-Gaussian strategy has a dependence similar to that of the canonical phase measurement in the asymptotic limit of large photon numbers, differing by a scaling factor in the second-order correction term. Our work complements the work on single-shot phase measurements for single-photon wave packets in two dimensions [22] using quantum feedback with Gaussian measurements, and paves the way for the realization of optimized feedback measurements approaching the canonical phase measurement for higher-dimensional states based on non-Gaussian operations. DATA AVAILABILITY The data that support the findings of this study are available from the authors upon request. CODE AVAILABILITY Our numerical results have been implemented using MATLAB and the Julia language. Functions to reproduce the numerical results are publicly available in the repository [89]. To obtain a set of estimates produced by the non-Gaussian strategy, use the program `Bootstrap_phase_estimation.jl`, setting the desired values of $L$, bootstrap size, and MPN in the `Adapt_Steps`, `Boot_Reps`, and `MPNS` variables. ACKNOWLEDGMENTS We thank Laboratorio Universitario de Cómputo de Alto Rendimiento (LUCAR) of IIMAS-UNAM for their information-processing services. This research was supported by Grant No. UNAM-DGAPA-PAPIIT IG101421 and the National Science Foundation (NSF) grants # PHY-1653670 and PHY-2210447. AUTHOR CONTRIBUTIONS F.E.B. and P.B. conceived the idea and supervised the work. M.A.R. and P.B. conducted the theoretical and statistical study. M.T.D. and M.A.R. contributed to the code for simulations. All authors contributed to the analysis of the theoretical and numerical results and contributed to writing the manuscript. COMPETING INTERESTS The authors declare that there are no competing interests. ## References * [1] Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: Beating the standard quantum limit. _Science_ 306, 1330–1336 (2004). URL https://science.sciencemag.org/content/306/5700/1330. * [2] DiMario, M. T. & Becerra, F. E. Single-shot non-gaussian measurements for optical phase estimation. _Phys. Rev. Lett._ 125, 120505 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.125.120505. * [3] Demkowicz-Dobrzański, R., Kołodyński, J. & Guţă, M. The elusive heisenberg limit in quantum-enhanced metrology. _Nat. Commun._ 3, 1–8 (2012). * [4] Abbott, B. P. e. a. Observation of gravitational waves from a binary black hole merger. _Phys. Rev. Lett._ 116, 061102 (2016). URL https://link.aps.org/doi/10.1103/PhysRevLett.116.061102. * [5] Helstrom, C. _Quantum Detection and Estimation Theory_. Mathematics in Science and Engineering: a series of monographs and textbooks (Academic Press, 1976). URL https://books.google.com.mx/books?id=fv9SAAAAMAAJ. * [6] Becerra, F. E., Fan, J. & Migdall, A. Photon number resolution enables quantum receiver for realistic coherent optical communications. _Nat. Photonics_ 9, 48–53 (2015). * [7] Dolinar, S. J. An optimum receiver for the binary coherent state quantum channel. Research Laboratory of Electronics, MIT, Quarterly Progress Report No. 111 (1973), p. 115. * [8] DiMario, M. T. & Becerra, F. E. Channel-noise tracking for sub-shot-noise-limited receivers with neural networks. _Phys. Rev. Res._ 3, 013200 (2021). URL https://link.aps.org/doi/10.1103/PhysRevResearch.3.013200. * [9] DiMario, M. T. & Becerra, F. E. Phase tracking for sub-shot-noise-limited receivers. _Phys. Rev. Res._ 2, 023384 (2020).
URL https://link.aps.org/doi/10.1103/PhysRevResearch.2.023384. * [10] DiMario, M., Kunz, L., Banaszek, K. & Becerra, F. Optimized communication strategies with binary coherent states over phase noise channels. _npj Quantum Inf._ 5, 1–7 (2019). * [11] DiMario, M. T. & Becerra, F. E. Robust measurement for the discrimination of binary coherent states. _Phys. Rev. Lett._ 121, 023603 (2018). URL https://link.aps.org/doi/10.1103/PhysRevLett.121.023603. * [12] DiMario, M. T., Carrasco, E., Jackson, R. A. & Becerra, F. E. Implementation of a single-shot receiver for quaternary phase-shift keyed coherent states. _J. Opt. Soc. Am. B_ 35, 568–574 (2018). URL http://josab.osa.org/abstract.cfm?URI=josab-35-3-568. * [13] Ferdinand, A., DiMario, M. & Becerra, F. Multi-state discrimination below the quantum noise limit at the single-photon level. _npj Quantum Inf._ 3, 1–7 (2017). * [14] Becerra, F. _et al._ Experimental demonstration of a receiver beating the standard quantum limit for multiple nonorthogonal state discrimination. _Nat. Photonics_ 7, 147–152 (2013). * [15] Slussarenko, S. & Pryde, G. J. Photonic quantum information processing: A concise review. _Appl. Phys. Rev._ 6, 041303 (2019). * [16] Plenio, M. B. Logarithmic negativity: A full entanglement monotone that is not convex. _Phys. Rev. Lett._ 95, 090503 (2005). URL https://link.aps.org/doi/10.1103/PhysRevLett.95.090503. * [17] Polino, E., Valeri, M., Spagnolo, N. & Sciarrino, F. Photonic quantum metrology. _AVS Quantum Science_ 2, 024703 (2020). * [18] Giovannetti, V., Lloyd, S. & Maccone, L. Advances in quantum metrology. _Nat. Photonics_ 5, 222–229 (2011). * [19] Flamini, F., Spagnolo, N. & Sciarrino, F. Photonic quantum information processing: a review. _Rep. Prog. Phys._ 82, 016001 (2018). URL https://doi.org/10.1088/1361-6633/aad5b2. * [20] Wiseman, H. M. Adaptive phase measurements of optical modes: Going beyond the marginal $q$ distribution. _Phys. Rev. Lett._ 75, 4587–4590 (1995). URL https://link.aps.org/doi/10.1103/PhysRevLett.75.4587. * [21] Wiseman, H. M. & Killip, R. B. Adaptive single-shot phase measurements: The full quantum theory. _Phys. Rev. A_ 57, 2169–2185 (1998). * [22] Martin, L. S., Livingston, W. P., Hacohen-Gourgy, S., Wiseman, H. M. & Siddiqi, I. Implementation of a canonical phase measurement with quantum feedback. _Nat. Phys._ 16, 1046–1049 (2020). URL https://doi.org/10.1038/s41567-020-0939-0. * [23] Braunstein, S. L. & Caves, C. M. Statistical distance and the geometry of quantum states. _Phys. Rev. Lett._ 72, 3439–3443 (1994). * [24] Demkowicz-Dobrzański, R., Jarzyna, M. & Kołodyński, J. Quantum limits in optical interferometry. _Prog. Opt._ 60, 345–435 (2015). * [25] Genoni, M. G., Olivares, S. & Paris, M. G. A. Optical phase estimation in the presence of phase diffusion. _Phys. Rev. Lett._ 106, 153603 (2011). * [26] Lee, C., Oh, C., Jeong, H., Rockstuhl, C. & Lee, S.-Y. Using states with a large photon number variance to increase quantum fisher information in single-mode phase estimation. _J. Phys. Commun._ 3, 115008 (2019). URL https://doi.org/10.1088/2399-6528/ab524a. * [27] Bradshaw, M., Lam, P. K. & Assad, S. M. Ultimate precision of joint quadrature parameter estimation with a gaussian probe. _Phys. Rev. A_ 97, 012106 (2018). * [28] Wiseman, H. M. & Killip, R. B. Adaptive single-shot phase measurements: A semiclassical approach. _Phys. Rev. A_ 56, 944–957 (1997). * [29] Anisimov, P. M. _et al._ Quantum metrology with two-mode squeezed vacuum: Parity detection beats the heisenberg limit. _Phys. Rev. 
Lett._ 104, 103602 (2010). * [30] Huang, Z., Motes, K. R., Anisimov, P. M., Dowling, J. P. & Berry, D. W. Adaptive phase estimation with two-mode squeezed vacuum and parity measurement. _Phys. Rev. A_ 95, 053837 (2017). URL https://link.aps.org/doi/10.1103/PhysRevA.95.053837. * [31] Anderson, B. E. _et al._ Phase sensing beyond the standard quantum limit with a variation on the su(1,1) interferometer. _Optica_ 4, 752–756 (2017). * [32] Izumi, S. _et al._ Optical phase estimation via the coherent state and displaced-photon counting. _Phys. Rev. A_ 94, 033842 (2016). * [33] Slussarenko, S. _et al._ Unconditional violation of the shot noise limit in photonic quantum metrology. _Nat. Photonics_ 11, 700–703 (2017). * [34] Anderson, B. E., Schmittberger, B. L., Gupta, P., Jones, K. M. & Lett, P. D. Optimal phase measurements with bright- and vacuum-seeded su(1,1) interferometers. _Phys. Rev. A_ 95, 063843 (2017). * [35] Daryanoosh, S., Slussarenko, S., Berry, D. W., Wiseman, H. M. & Pryde, G. J. Experimental optical phase measurement approaching the exact heisenberg limit. _Nat. Commun._ 9, 4606 (2018). * [36] Higgins, B. L., Berry, D. W., Bartlett, S. D., Wiseman, H. M. & Pryde, G. J. Entanglement-free heisenberg-limited phase estimation. _Nature_ 450, 393–396 (2007). * [37] Hentschel, A. & Sanders, B. C. Machine learning for precise quantum measurement. _Phys. Rev. Lett._ 104, 063603 (2010). * [38] Hou, Z. _et al._ Control-enhanced sequential scheme for general quantum parameter estimation at the heisenberg limit. _Phys. Rev. Lett._ 123, 040501 (2019). * [39] Larson, W. & Saleh, B. E. A. Supersensitive ancilla-based adaptive quantum phase estimation. _Phys. Rev. A_ 96, 042110 (2017). * [40] Lumino, A. _et al._ Experimental phase estimation enhanced by machine learning. _Phys. Rev. Applied_ 10, 044033 (2018). * [41] Zheng, K., Xu, H., Zhang, A., Ning, X. & Zhang, L. Ab initio phase estimation at the shot noise limit with on–off measurement. _Quantum Inf. Process._ 18, 329 (2019). * [42] Deutsch, I. H. & Jessen, P. S. Quantum control and measurement of atomic spins in polarization spectroscopy. _Opt. Commun._ 283, 681–694 (2010). URL https://www.sciencedirect.com/science/article/pii/S0030401809010517. Quo vadis Quantum Optics? * [43] Bouchoule, I. & Mølmer, K. Preparation of spin-squeezed atomic states by optical-phase-shift measurement. _Phys. Rev. A_ 66, 043811 (2002). * [44] Iwasawa, K. _et al._ Quantum-limited mirror-motion estimation. _Phys. Rev. Lett._ 111, 163602 (2013). * [45] Tsang, M. Quantum metrology with open dynamical systems. _New Journ. of Phys._ 15, 073005 (2013). * [46] Aasi, A. J. A. B. e. a., J. Enhanced sensitivity of the ligo gravitational wave detector by using squeezed states of light. _Nat. Photonics_ 7, 613–619 (2013). * [47] Nair, R., Yen, B. J., Guha, S., Shapiro, J. H. & Pirandola, S. Symmetric $m$-ary phase discrimination using quantum-optical probe states. _Phys. Rev. A_ 86, 022306 (2012). * [48] van Loock, P., Lütkenhaus, N., Munro, W. J. & Nemoto, K. Quantum repeaters using coherent-state communication. _Phys. Rev. A_ 78, 062319 (2008). * [49] Munro, W. J., Nemoto, K. & Spiller, T. P. Weak nonlinearities: a new route to optical quantum computation. _New J. of Phys._ 7, 137 (2005). * [50] Nemoto, K. & Munro, W. J. Nearly deterministic linear optical controlled-not gate. _Phys. Rev. Lett._ 93, 250502 (2004). * [51] Holevo, A. S. _Probabilistic and Statistical Aspects of Quantum Theory; 2nd ed._ Publications of the Scuola Normale Superiore. 
Monographs (Springer, Dordrecht, 2011). URL https://cds.cern.ch/record/1414149. * [52] Becerra, F., Fan, J. & Migdall, A. Photon number resolution enables quantum receiver for realistic coherent optical communications. _Nat. Photonics_ 9, 48–53 (2015). * [53] Cover, T. M. & Thomas, J. A. _Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)_ (Wiley-Interscience, USA, 2006). * [54] Chaloner, K. & Verdinelli, I. Bayesian Experimental Design: A Review. _Stat. Sci._ 10, 273 – 304 (1995). URL https://doi.org/10.1214/ss/1177009939. * [55] Rodríguez-García, M. A., Castillo, I. P. & Barberis-Blostein, P. Efficient qubit phase estimation using adaptive measurements. _Quantum_ 5, 467 (2021). URL https://doi.org/10.22331/q-2021-06-04-467. * [56] Ryan, E., Drovandi, C., McGree, J. & Pettitt, T. A review of modern computational algorithms for bayesian optimal design. _Int. Stat. Rev._ 84, 128–154 (2016). URL https://eprints.qut.edu.au/75000/. * [57] Suzuki, J. Quantum-state estimation problem via optimal design of experiments. _Int. J. Quantum Inf._ 19, 2040007 (2021). URL https://doi.org/10.1142/S0219749920400079. eprint https://doi.org/10.1142/S0219749920400079. * [58] Morelli, S., Usui, A., Agudelo, E. & Friis, N. Bayesian parameter estimation using gaussian states and measurements. _Quantum Sci. Technol._ 6, 025018 (2021). URL https://doi.org/10.1088/2058-9565/abd83d. * [59] Martínez-García, F., Vodola, D. & Müller, M. Adaptive bayesian phase estimation for quantum error correcting codes. _New J. Phys._ 21, 123027 (2019). URL https://doi.org/10.1088/1367-2630/ab5c51. * [60] Berni, A. A. _et al._ Ab initio quantum-enhanced optical phase estimation using real-time feedback control. _Nat. Photonics_ 9, 577–581 (2015). URL https://doi.org/10.1038/nphoton.2015.139. * [61] Genoni, M. G. _et al._ Optical interferometry in the presence of large phase diffusion. _Phys. Rev. A_ 85, 043817 (2012). URL https://link.aps.org/doi/10.1103/PhysRevA.85.043817. * [62] Oh, C. & Son, W. Sub shot-noise frequency estimation with bounded a priori knowledge. _J. Phys. A Math. Theor._ 48, 045304 (2014). URL https://doi.org/10.1088/1751-8113/48/4/045304. * [63] Macieszczak, K., Fraas, M. & Demkowicz-Dobrzański, R. Bayesian quantum frequency estimation in presence of collective dephasing. _New J. Phys._ 16, 113002 (2014). URL https://doi.org/10.1088/1367-2630/16/11/113002. * [64] Glatthard, J. _et al._ Optimal cold atom thermometry using adaptive bayesian strategies (2022). URL https://arxiv.org/abs/2204.11816. * [65] McMichael, R. D., Dushenko, S. & Blakley, S. M. Sequential bayesian experiment design for adaptive ramsey sequence measurements. _J. Appl. Phys._ 130, 144401 (2021). URL https://doi.org/10.1063/5.0055630. eprint https://doi.org/10.1063/5.0055630. * [66] Kleinegesse, S. & Gutmann, M. U. Efficient bayesian experimental design for implicit models. In _AISTATS_ (2019). * [67] Paninski, L. Asymptotic theory of information-theoretic experimental design. _Neural Comput._ 17, 1480–1507 (2005). URL https://doi.org/10.1162/0899766053723032. * [68] Verdinelli, I. A note on bayesian design for the normal linear model with unknown error variance. _Biometrika_ 87, 222–227 (2000). URL http://www.jstor.org/stable/2673577. * [69] Ryan, E., Drovandi, C., Thompson, H. & Pettitt, T. Towards bayesian experimental design for nonlinear models that require a large number of sampling times 70, 45–60 (2014). URL https://eprints.qut.edu.au/56522/. * [70] Nielsen, M. A. & Chuang, I. L. 
_Quantum Computation and Quantum Information_ (Cambridge University Press, 2000). * [71] Braunstein, S. L. & Caves, C. M. Statistical distance and the geometry of quantum states. _Phys. Rev. Lett._ 72, 3439–3443 (1994). URL https://link.aps.org/doi/10.1103/PhysRevLett.72.3439. * [72] Paris, M. G. Quantum estimation for quantum technology. _Int. J. Quantum Inf._ 7, 125–137 (2009). * [73] Holevo, A. Statistical decision theory for quantum systems. _J. Multivar. Anal._ 3, 337 – 394 (1973). URL http://www.sciencedirect.com/science/article/pii/0047259X73900286. * [74] Pellonpää, J.-P. & Schultz, J. Measuring the canonical phase with phase-space measurements. _Phys. Rev. A_ 88, 012121 (2013). URL https://link.aps.org/doi/10.1103/PhysRevA.88.012121. * [75] Escher, B. M., de Matos Filho, R. L. & Davidovich, L. General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology. _Nat. Phys._ 7, 406–411 (2011). URL https://doi.org/10.1038/nphys1958. * [76] Wang, P. _et al._ Single ion qubit with estimated coherence time exceeding one hour. _Nat. Commun._ 12, 233 (2021). URL https://doi.org/10.1038/s41467-020-20330-w. * [77] Hall, M. J. W. & Wiseman, H. M. Does nonlinear metrology offer improved resolution? answers from quantum information theory. _Phys. Rev. X_ 2, 041006 (2012). URL https://link.aps.org/doi/10.1103/PhysRevX.2.041006. * [78] Young, D. S. _Handbook of regression methods_. A Chapman and Hall Book (2017). * [79] DiMario, M. T. & Becerra, F. E. Channel-noise tracking for sub-shot-noise-limited receivers with neural networks. _Phys. Rev. Res._ 3, 013200 (2021). URL https://link.aps.org/doi/10.1103/PhysRevResearch.3.013200. * [80] Pezzè, L., Smerzi, A., Oberthaler, M. K., Schmied, R. & Treutlein, P. Quantum metrology with nonclassical states of atomic ensembles. _Rev. Mod. Phys._ 90, 035005 (2018). URL https://link.aps.org/doi/10.1103/RevModPhys.90.035005. * [81] Qi, B., Lougovski, P., Pooser, R., Grice, W. & Bobrek, M. Generating the local oscillator “locally” in continuous-variable quantum key distribution based on coherent detection. _Phys. Rev. X_ 5, 041009 (2015). URL https://link.aps.org/doi/10.1103/PhysRevX.5.041009. * [82] Soh, D. B. S. _et al._ Self-referenced continuous-variable quantum key distribution protocol. _Phys. Rev. X_ 5, 041010 (2015). URL https://link.aps.org/doi/10.1103/PhysRevX.5.041010. * [83] DiMario, M. T. & Becerra, F. E. Robust measurement for the discrimination of binary coherent states. _Phys. Rev. Lett._ 121, 023603 (2018). * [84] Allevi, A. _et al._ State reconstruction by on/off measurements. _Phys. Rev. A_ 80, 022114 (2009). URL https://link.aps.org/doi/10.1103/PhysRevA.80.022114. * [85] Nehra, R. _et al._ State-independent quantum state tomography by photon-number-resolving measurements. _Optica_ 6, 1356–1360 (2019). URL http://opg.optica.org/optica/abstract.cfm?URI=optica-6-10-1356. * [86] Huszár, F. & Houlsby, N. M. T. Adaptive bayesian quantum tomography. _Phys. Rev. A_ 85, 052120 (2012). URL https://link.aps.org/doi/10.1103/PhysRevA.85.052120. * [87] Granade, C., Ferrie, C. & Flammia, S. T. Practical adaptive quantum tomography. _New J. Phys._ 19, 113017 (2017). URL https://doi.org/10.1088/1367-2630/aa8fe6. * [88] D’Ariano, G. M. & Paris, M. G. A. Adaptive quantum homodyne tomography. _Phys. Rev. A_ 60, 518–528 (1999). URL https://link.aps.org/doi/10.1103/PhysRevA.60.518. * [89] Rodríguez-García, M. A. adaptive_photon_counting_for_coherent-state (2022). 
URL https://github.com/Gateishion/adaptive_photon_counting_for_coherent-state.
# Energy-efficient spin injector into semiconductors driven by elastic waves Andrei V. Azovtsev∗ Andrei I. Nikitchenko Nikolay A. Pertsev Ioffe Institute 194021 St. Petersburg Russia <EMAIL_ADDRESS> ###### Abstract Generation of significant spin imbalance in nonmagnetic semiconductors is crucial for the functioning of many spintronic devices, such as magnetic diodes and transistors, spin-based logic gates, and spin-polarized lasers. An attractive design of spin injectors into semiconductors is based on a spin pumping from a precessing ferromagnet, but the classical excitation of magnetization precession by a microwave magnetic field leads to the high power consumption of the device. Here we describe theoretically a spin injector with greatly reduced energy losses, in which the magnetic dynamics is excited by an elastic wave generated in a ferromagnet-semiconductor heterostructure by an attached piezoelectric transducer. To demonstrate the efficient functioning of such an injector, we first perform micromagnetoelastic simulations of the coupled elastic and magnetic dynamics in $\mathrm{Ni}$ films and $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers traversed by plane longitudinal and shear waves. For thick $\mathrm{Ni}$ films, it is shown that a monochromatic acoustic wave generates a spin wave with the same frequency and wavelength, which propagates together with the driving wave over distances of several micrometers at the excitation frequencies $\nu\approx 10$ GHz close to the frequency of ferromagnetic resonance. The simulations of $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers with $\mathrm{Ni}$ thicknesses comparable to the wavelength of the injected acoustic wave demonstrate the development of a steady-state magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface. The amplitude of such a precession has a maximum at $\mathrm{Ni}$ thickness amounting to three quarters of the wavelength of the elastic wave, which is explained by an analytical model. Using simulation data obtained for the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface, we evaluate the spin current pumped into $\mathrm{GaAs}$ and calculate the spin accumulation in the semiconducting layer by solving the spin diffusion equation. Then the electrical signals resulting from the spin flow and the inverse spin Hall effect are determined via the numerical solution of the Laplace’s equation. It is shown that amplitudes of these ac signals near the interface are large enough for experimental measurement, which indicates an efficient acoustically driven spin pumping into $\mathrm{GaAs}$ and rather high spin accumulation in this semiconductor. ## I Introduction Semiconductors are attractive for the development of spintronic devices due to their large spin diffusion lengths in comparison with transition metals Hägele _et al._ (1998); Kikkawa and Awschalom (1999), long spin relaxation times Bhat and Kumar (2014), and the possibility of manipulating the electrons’ spin by polarized light Putikka and Joynt (2004); Kokurin _et al._ (2013). However, the application of conventional nonmagnetic semiconductors in spintronics requires the generation of an internal spin imbalance by an external stimulus or via an attached magnetic material Hirohata _et al._ (2020). 
The simplest method to create such an imbalance would be the direct injection of a spin- polarized charge current from a metallic ferromagnet through Ohmic contact, but the conductance mismatch at the semiconductor-metal interface makes this method inefficient Schmidt _et al._ (2000). The presence of a thin insulating interlayer acting as a tunnel barrier solves the mismatch problem Hanbicki and Jonker (2002); Jiang _et al._ (2005); Dash _et al._ (2009); Kamerbeek _et al._ (2014), but requires the fabrication of a high quality interlayer unless the formation of a natural Schottky barrier with the appropriate parameters occurs Hanbicki and Jonker (2002). Alternatively, the spin imbalance in the semiconductor can be created by bringing it into a direct contact with a precessing ferromagnet Brataas _et al._ (2002); Tserkovnyak _et al._ (2005). The resulting spin pumping into the nonmagnetic semiconductor is due to the modulation of the interface scattering matrix by the coherent precession of the magnetization Brataas _et al._ (2002). Typically, in spin pumping experiments magnetization dynamics is excited by an external microwave magnetic field with the frequency matching that of the ferromagnetic resonance. Efficient generation of spin currents in normal metals by this technique has been demonstrated experimentally Heinrich _et al._ (2003); Saitoh _et al._ (2006); Bell _et al._ (2008); Mosendz _et al._ (2010); Czeschka _et al._ (2011); Tashiro _et al._ (2015). The spin pumping into semiconductors from metallic ferromagnets Ando _et al._ (2011); Shikoh _et al._ (2013); Lee _et al._ (2014); Wang _et al._ (2017) and ferrimagnetic insulators Mendes _et al._ (2018) subjected to microwave radiation has been revealed as well. However, the power consumption associated with the generation of microwave magnetic fields appears to be rather high, which impedes applications of magnetically driven spin injectors in low-power spintronics. For this reason, alternative spin pumping techniques have been studied during the past decade, one of which is based on the excitation of magnetization dynamics in ferromagnets by injected elastic waves Weiler _et al._ (2011); Uchida _et al._ (2011a); Weiler _et al._ (2012); Kamra _et al._ (2015); Polzikova _et al._ (2016); Azovtsev and Pertsev (2016, 2017); Polzikova _et al._ (2018); Azovtsev and Pertsev (2019); Alekseev _et al._ (2020). Since such waves can be generated by a piezoelectric transducer coupled to the ferromagnet and subjected to an ac electric field, the power consumption of elastically driven spin injectors is expected to be comparatively low Alekseev _et al._ (2020); Cherepov _et al._ (2014); Bhaskar _et al._ (2020). The experimental and theoretical studies have demonstrated an efficient generation of spin currents in normal metals by surface and bulk acoustic waves, but the strain-driven spin pumping into semiconductors was not investigated so far. In this paper, we theoretically describe a spin injector into nonmagnetic semiconductors, which employs the spin pumping generated by a dynamically strained ferromagnetic film. The injector has the form of a ferromagnet- semiconductor bilayer coupled to a piezoelectric transducer excited by a microwave voltage. Such a transducer creates a bulk elastic wave propagating across the bilayer, which induces a radio-frequency magnetization precession providing efficient spin pumping into the semiconducting layer. 
To quantify the elastically driven magnetic dynamics in the ferromagnetic film, we employ state-of-the-art numerical simulations allowing for the two-way coupling between spins and strains (see Sec. II). The simulations are performed for (001)-oriented $\mathrm{Ni}$ films and $\mathrm{Ni/GaAs}$ bilayers traversed by plane longitudinal and transverse acoustic waves. For thick $\mathrm{Ni}$ films, tightly coupled elastic and magnetic dynamics are described (Sec. III), which involve the generation of a spin wave carried by the propagating elastic wave. In Sec. IV, we report the results of numerical simulations performed for $\mathrm{Ni/GaAs}$ bilayers with the $\mathrm{Ni}$ thickness comparable to the wavelength of the propagating elastic wave and discuss the influence of the thickness of the ferromagnetic layer and the excitation frequency on the amplitude of the magnetization precession at the $\mathrm{Ni|GaAs}$ interface. Numerical results obtained for the steady-state magnetization precession at the interface are then used to calculate the spin pumping into the $\mathrm{GaAs}$ film and to determine the spin accumulation in the semiconductor by solving the spin diffusion equation (Sec. V). It is shown that the proposed injector has a high efficiency ensuring a significant spin flux in $\mathrm{GaAs}$, which can be detected experimentally via the inverse spin Hall effect. Figure 1: Ferromagnet-semiconductor heterostructure comprising $\mathrm{Ni}$ and $\mathrm{GaAs}$ layers. An elastic wave (longitudinal or transverse) with the wave vector k is injected into the $\mathrm{Ni}$ layer by the attached piezoelectric transducer. The thicknesses of the $\mathrm{Ni}$ and $\mathrm{GaAs}$ layers are denoted by $t_{\text{F}}$ and $t_{\text{N}}$, respectively. Precessing magnetization in Ni creates a spin imbalance in GaAs, which then produces a measurable voltage $V_{\text{s}}$ between an attached iron probe and a nonmagnetic contact.
## II Modeling of magnetoelastic phenomena in ferromagnetic heterostructures
Owing to the magnetoelastic coupling between spins and strains, the excitation of an elastic wave in a ferromagnetic material can induce a precessional motion of the magnetization and the generation of a spin wave Weiler _et al._ (2011); Uchida _et al._ (2011b); Weiler _et al._ (2012); Thevenard _et al._ (2014); Janušonis _et al._ (2015); Gowtham _et al._ (2015); Casals _et al._ (2020). The backaction of the induced magnetization precession on the strain state of the ferromagnet may significantly affect the propagation of the driving elastic wave and lead to the appearance of additional “secondary” waves Azovtsev and Pertsev (2017, 2019). Therefore, the two-way interplay between elastic and magnetic variables Akhiezer _et al._ (1958) should be fully taken into account for an accurate modeling of the magnetoelastic phenomena in ferromagnets. Such micromagnetoelastic modeling can be realized via the numerical solution of the system of differential equations comprising the elastodynamic equation for the mechanical displacement u and the Landau-Lifshitz-Gilbert (LLG) equation for the magnetization M Chen _et al._ (2017); Azovtsev and Pertsev (2017, 2019, 2020).
The elastodynamic equation should allow for the magnetoelastic contribution $\delta\sigma_{ij}^{\text{ME}}$ to the mechanical stresses $\sigma_{ij}$ in the ferromagnet, which can be calculated as $\delta\sigma_{ij}^{\text{ME}}=\partial F_{\text{ME}}/\partial\varepsilon_{ij}$, where $F_{\text{ME}}$ is the magnetoelastic energy density, and $\varepsilon_{ij}$ are the elastic strains ($i,j=x,y,z$). The influence of strains on the magnetization orientation can be quantified by adding a magnetoelastic term $\textbf{H}_{\text{ME}}=-(1/\mu_{0})\partial F_{\text{ME}}/\partial\textbf{M}$ to the effective magnetic field $\textbf{H}_{\text{eff}}$ involved in the LLG equation ($\mu_{0}$ being the magnetic constant). For cubic ferromagnets such as nickel, the magnetoelastic contribution to the total energy density F can be written as $\displaystyle F_{\text{ME}}$ $\displaystyle=B_{1}[(m_{x}^{2}-\frac{1}{3})\varepsilon_{xx}+(m_{y}^{2}-\frac{1}{3})\varepsilon_{yy}+(m_{z}^{2}-\frac{1}{3})\varepsilon_{zz}]$ (1) $\displaystyle+2B_{2}\left[m_{x}m_{y}\varepsilon_{xy}+m_{x}m_{z}\varepsilon_{xz}+m_{y}m_{z}\varepsilon_{yz}\right],$ where $\textbf{m}=\textbf{M}/M_{s}$ is the unit vector in the magnetization direction, $M_{s}$ is the saturation magnetization regarded as a strain- independent quantity, and $B_{1}$, $B_{2}$ are the magnetoelastic coupling constants Kittel (1949). In this work, we performed micromagnetoelastic simulations of $\mathrm{Ni}$ films and $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers subjected to a periodic displacement $\textbf{u}(x=0,t)=\textbf{u}_{0}(t)$ imposed at the $\mathrm{Ni}$ surface $x=0$ (Fig. 1). Such a displacement models the action of a piezoelectric transducer coupled to $\mathrm{Ni}$ film and generates a plane elastic wave traversing the heterostructure Azovtsev and Pertsev (2017, 2019). To excite a longitudinal wave characterized by the strain $\varepsilon_{xx}(x,t)$, we introduced the surface displacement with the components $u^{0}_{y}=u^{0}_{z}=0$ and $u^{0}_{x}=u_{\text{max}}\sin{(2\pi\nu t)}$, while a transverse wave with the shear strain $\varepsilon_{xz}(x,t)$ was created by setting $u^{0}_{x}=u^{0}_{y}=0$ and $u^{0}_{z}=u_{\text{max}}\sin{(2\pi\nu t)}$. The excitation frequency $\nu$ was varied in a wide range spanning the resonance frequency $\nu_{\text{res}}$ of the coherent magnetization precession in the unstrained $\mathrm{Ni}$ film, which was determined by simulations of the magnetization relaxation to its equilibrium orientation. To ensure the same maximal strain in the elastic wave at any excitation frequency $\nu$, the displacement amplitude $u_{\text{max}}$ was taken to be inversely proportional to $\nu$ Azovtsev and Pertsev (2019). Namely, we used the relations $u_{\text{max}}=\varepsilon_{xx}^{\text{max}}/k_{L}$ and $u_{\text{max}}=2\varepsilon_{xz}^{\text{max}}/k_{T}$ for longitudinal and transverse waves, respectively, where $k_{L}=2\pi\nu/c_{L}$ and $k_{T}=2\pi\nu/c_{T}$ are the wave numbers of these waves having velocities $c_{L}$ and $c_{T}$. The magnetization dynamics in the $\mathrm{Ni}$ film was quantified using the LLG equation with the effective magnetic field $\textbf{H}_{\text{eff}}$ comprising contributions resulting from the exchange interaction, cubic magnetocrystalline anisotropy, magnetoelastic coupling, Zeeman energy, and dipolar interactions between oscillating spins Azovtsev and Pertsev (2016). 
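For concreteness, the two coupling terms just described, the effective field $\textbf{H}_{\text{ME}}$ entering the LLG equation and the stress contribution $\delta\sigma_{ij}^{\text{ME}}$ entering the elastodynamic equation, can be written out explicitly for the cubic energy density of Eq. (1). The sketch below is illustrative only (it is not the authors' simulation software); it assumes SI units, follows the definition $\textbf{H}_{\text{ME}}=-(1/\mu_{0})\partial F_{\text{ME}}/\partial\textbf{M}$ given above, and uses the $\mathrm{Ni}$ parameters listed below in this section.

```julia
# Illustrative sketch: magnetoelastic field and stresses from the energy density of Eq. (1).
# Assumes SI units; m is the unit magnetization and ε the symmetric strain tensor.

const μ0 = 4π * 1e-7            # T m A⁻¹
const Ms = 4.78e5               # A m⁻¹   (Ni saturation magnetization)
const B1 = 9.2e6                # J m⁻³   (Ni magnetoelastic constants)
const B2 = 10.7e6               # J m⁻³

# m = (mx, my, mz); ε = (εxx, εyy, εzz, εyz, εxz, εxy)
function magnetoelastic_field(m, ε)
    mx, my, mz = m
    εxx, εyy, εzz, εyz, εxz, εxy = ε
    dFdm = (2B1 * mx * εxx + 2B2 * (my * εxy + mz * εxz),    # ∂F_ME/∂mx
            2B1 * my * εyy + 2B2 * (mx * εxy + mz * εyz),    # ∂F_ME/∂my
            2B1 * mz * εzz + 2B2 * (mx * εxz + my * εyz))    # ∂F_ME/∂mz
    return (-1 / (μ0 * Ms)) .* dFdm                          # H_ME in A m⁻¹
end

function magnetoelastic_stress(m, ε)
    mx, my, mz = m
    return (B1 * (mx^2 - 1/3), B1 * (my^2 - 1/3), B1 * (mz^2 - 1/3),  # δσxx, δσyy, δσzz
            2B2 * my * mz,     2B2 * mx * mz,     2B2 * mx * my)      # δσyz, δσxz, δσxy
end

# Example: a shear strain εxz = 1e-4 acting on the equilibrium orientation quoted in Sec. III
m0 = (0.137, 0.70044, 0.70044)
ε  = (0.0, 0.0, 0.0, 0.0, 1e-4, 0.0)
println(magnetoelastic_field(m0, ε))
println(magnetoelastic_stress(m0, ε))
```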
For numerical calculations, we characterized $\mathrm{Ni}$ by the saturation magnetization $M_{s}=4.78\times 10^{5}$ A m-1 Niitsu (2020), exchange constant $A_{\text{ex}}=0.85\times 10^{-11}$ J m-1 Niitsu (2020), magnetocrystalline anisotropy constants $K_{1}=-5.7\times 10^{3}$ J m-3, $K_{2}=-2.3\times 10^{3}$ J m-3 Stearns (1986), magnetoelastic constants $B_{1}=9.2\times 10^{6}$ J m-3, $B_{2}=10.7\times 10^{6}$ J m-3 Stearns (1986), and Gilbert damping parameter of 0.045 Walowski _et al._ (2008). The elastic stiffnesses $c_{11}$, $c_{44}$, and mass densities $\rho$ of $\mathrm{Ni}$ and $\mathrm{GaAs}$, which are involved in their elastodynamic equations of motion, were taken from Ref. Haynes (2016) and are listed in Table 1 together with the velocities $c_{L}$ and $c_{T}$ of elastic waves in these materials. No elastic damping was added to the elastodynamic equation in our simulations for the following reasons. First, we are interested in investigating the purely magnetic damping of elastic waves in $\mathrm{Ni}$, and the introduction of the intrinsic elastic damping would obscure the simulation results described in Sec. III. Second, in Sec. IV we consider $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers comprising $\mathrm{Ni}$ films much thinner than the decay lengths of longitudinal and transverse elastic waves in $\mathrm{Ni}$, which are measured to be 5.8 and 29 $\mu$m, respectively, at the relevant frequency of 9.4 GHz Homer _et al._ (1987). Micromagnetoelastic simulations were performed with the aid of homemade software operating with a finite ensemble of nanoscale computational cells. Our software solves the elastodynamic equations of $\mathrm{Ni}$ and $\mathrm{GaAs}$ films by a finite-difference technique with a midpoint derivative approximation and numerically integrates the LLG equation by the projective Runge-Kutta algorithm. We employed a fixed integration step $\delta t=100$ fs and set the size of cubic computation cells to $2$ nm, which is smaller than the exchange length $l_{\text{ex}}=\sqrt{2A_{\text{ex}}/(\mu_{0}M_{s}^{2})}\approx 7.7$ nm of $\mathrm{Ni}$. The system of partial differential equations was supplemented by appropriate boundary conditions. At the free surface of the $\mathrm{GaAs}$ layer, the stresses $\sigma_{ix}$ were set to zero, and the “free-surface” condition $\partial\textbf{m}/\partial x=0$ was imposed at both boundaries of the $\mathrm{Ni}$ layer. Since a unified ensemble of computational cells covering the whole $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer was employed in the simulations, the continuity conditions at the $\mathrm{Ni}|\mathrm{GaAs}$ interface were satisfied automatically. The layers comprising the heterostructure were considered infinite in the $y-z$ plane and the dynamical quantities were allowed to change only along the $x$ direction.

|  | $\mathrm{Ni}$ | $\mathrm{GaAs}$ |
|---|---|---|
| $c_{11}$ ($10^{11}$ J m-3) | 2.481 | 1.188 |
| $c_{12}$ ($10^{11}$ J m-3) | 1.549 | 0.537 |
| $c_{44}$ ($10^{11}$ J m-3) | 1.242 | 0.594 |
| $\rho$ (kg m-3) | 8910 | 5317 |
| $c_{L}$ (m s-1) | 5277 | 4726 |
| $c_{T}$ (m s-1) | 3734 | 3344 |

Table 1: Elastic stiffnesses and mass densities of $\mathrm{Ni}$ and $\mathrm{GaAs}$ Haynes (2016) used in numerical calculations. Velocities $c_{L}=\sqrt{c_{11}/\rho}$ and $c_{T}=\sqrt{c_{44}/\rho}$ of longitudinal and transverse elastic waves in these materials are also given for information.
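As a quick consistency check of Table 1 and of the discretization choices quoted above, the sound velocities, the corresponding wavelengths at the resonance frequency used in Sec. III, and the exchange length of $\mathrm{Ni}$ follow directly from the listed parameters. The snippet below (Julia) is illustrative and not part of the authors' software:

```julia
# Consistency check: sound velocities, wavelengths at ν = 9.6 GHz, and Ni exchange length.
μ0  = 4π * 1e-7                 # T m A⁻¹
Aex = 0.85e-11                  # J m⁻¹  (Ni exchange constant)
Ms  = 4.78e5                    # A m⁻¹  (Ni saturation magnetization)

c11_Ni, c44_Ni, ρ_Ni = 2.481e11, 1.242e11, 8910.0
ν = 9.6e9                       # Hz, resonance excitation frequency used in Sec. III

cL = sqrt(c11_Ni / ρ_Ni)        # ≈ 5277 m s⁻¹ (Table 1)
cT = sqrt(c44_Ni / ρ_Ni)        # ≈ 3734 m s⁻¹ (Table 1)
println("λ_L = ", cL / ν * 1e9, " nm,  λ_T = ", cT / ν * 1e9, " nm")   # ≈ 550 and 389 nm

lex = sqrt(2Aex / (μ0 * Ms^2))  # exchange length of Ni
println("l_ex ≈ ", lex * 1e9, " nm")                                   # ≈ 7.7 nm, > 2 nm cell size
```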
## III Magnetic dynamics excited by longitudinal and transverse elastic waves in thick Ni films

An elastic wave perturbs the ferromagnetic state when it creates a nonzero magnetoelastic torque $\textbf{T}_{\text{ME}}=\textbf{M}\times\textbf{H}_{\text{ME}}$ acting on the magnetization M. In the case of the longitudinal wave $\varepsilon_{xx}(x,t)$, the effective field $\textbf{H}_{\text{ME}}$ has the only nonzero component $H_{x}^{\text{ME}}=-2B_{1}\varepsilon_{xx}m_{x}/M_{s}$. Therefore, an external magnetic field H creating a nonzero direction cosine $m_{x}$ in the otherwise in-plane magnetized $\mathrm{Ni}$ film should be applied to generate the magnetization dynamics. To additionally stabilize the single-domain state, we introduced the field along the [111] crystallographic direction (easy axis), taking $H_{x}=H_{y}=H_{z}=1000$ Oe. At this field, the magnetization in the unstrained $\mathrm{Ni}$ film has an elevation angle $\psi\approx 46^{\circ}$ ($m_{x}=0.137$) and equal projections on the [010] and [001] directions ($m_{y}=m_{z}=0.70044$). The same magnetic field was used in the simulations of magnetic dynamics induced by the shear acoustic wave $\varepsilon_{xz}(x,t)$, which facilitates the comparison of results obtained for the two types of elastic perturbations. It should be noted that shear waves impose a nonzero magnetoelastic torque $\textbf{T}_{\text{ME}}$ even on in-plane magnetized ferromagnetic films. Indeed, the effective field $\textbf{H}_{\text{ME}}$ created by the strain $\varepsilon_{xz}$ has two components, $H_{z}^{\text{ME}}=-2B_{2}\varepsilon_{xz}m_{x}/M_{s}$ and $H_{x}^{\text{ME}}=-2B_{2}\varepsilon_{xz}m_{z}/M_{s}$, and the latter differs from zero even at $m_{x}=0$ when $m_{z}\neq 0$. However, simulations show that the amplitude of the magnetization precession increases significantly when both $H_{x}^{\text{ME}}$ and $H_{z}^{\text{ME}}$ differ from zero. At the chosen magnetic field H, the resonance frequency $\nu_{\text{res}}$ of the unstrained $\mathrm{Ni}$ film with in-plane dimensions much larger than the film thickness $t_{\text{F}}$ was found to be $9.6$ GHz. Accordingly, the excitation frequency was varied in a wide range around 10 GHz. By selecting the appropriate amplitudes $u_{\text{max}}(\nu)$ of the surface displacements $u_{x}^{0}(t)$ or $u_{z}^{0}(t)$, we created the maximal strains $\varepsilon^{\text{max}}_{xx}=\varepsilon^{\text{max}}_{xz}=10^{-4}$ in the excited elastic waves near the $\mathrm{Ni}$ surface $x=0$. Simulations were first performed for $\mathrm{Ni}$ films with thicknesses $t_{\text{F}}$ much larger than the wavelengths $\lambda_{L}=c_{L}/\nu$ and $\lambda_{T}=c_{T}/\nu$ of the pure elastic waves, which amount to $\lambda_{L}=550$ nm and $\lambda_{T}=389$ nm at the resonance excitation $\nu=\nu_{\text{res}}$. This allows the observation of several wave periods inside the ferromagnetic film. Since in this section we concentrate on the propagation of the elastic waves in $\mathrm{Ni}$ and their interaction with the magnetic subsystem, the simulation time was limited to the time needed for the wave to reach the opposite boundary of the $\mathrm{Ni}$ film. The effects of the wave reflection from the $\mathrm{Ni}|\mathrm{GaAs}$ interface are discussed in Section IV.

Figure 2: Spatial distributions of strains in the driving longitudinal (a) and transverse (b) elastic waves and strain-induced variations of the magnetization direction cosines $m_{i}$ in the 2-$\mu$m-thick $\mathrm{Ni}$ film.
Excitation frequency $\nu$ is equal to the resonance frequency $\nu_{\text{res}}=9.6$ GHz, and snapshots are taken at 0.37 ns (a) and 0.52 ns (b).

The simulations of the coupled elastic and magnetic dynamics in thick $\mathrm{Ni}$ films confirmed the creation of periodic, almost sinusoidal elastic waves at all studied excitation frequencies $\nu$ (see Fig. 2). The wave emerges at the $\mathrm{Ni}$ surface and propagates away with the velocity $c_{L}$ or $c_{T}$ characteristic of a pure elastic wave despite an inhomogeneous magnetization precession excited by the magnetoelastic torque $\textbf{T}_{\text{ME}}$ (see Fig. 2). However, the strain-induced precession manifests itself in the generation of additional elastic waves caused by the magnetoelastic feedback (see Fig. 3). These secondary strain waves were already revealed by the micromagnetoelastic simulations performed for Fe81Ga19 and CoFe2O4 films Azovtsev and Pertsev (2017, 2019), but were not detected in the simulations of the propagation of longitudinal elastic waves in $\mathrm{Ni}$ Chen _et al._ (2017). When the driving wave is a longitudinal one, two transverse secondary waves with the strains $\varepsilon_{xy}(x,t)$ and $\varepsilon_{xz}(x,t)$ having amplitudes $\sim 10^{-7}$ appear in $\mathrm{Ni}$. Their profiles depend on the position in the film, exhibiting a peculiar behavior similar to that of the secondary waves arising in CoFe2O4 excited by longitudinal elastic waves Azovtsev and Pertsev (2019). This behavior is caused by the interference of the two components of each secondary wave, which have the form of a shear wave with the wavelength $\lambda_{T}$ and velocity $c_{T}$ freely propagating from the $\mathrm{Ni}$ surface and a forced shear wave with the wavelength $\lambda_{L}$ and velocity $c_{L}$ generated throughout the region spanned by the driving longitudinal wave. When the driving wave is a transverse one [$\varepsilon_{xz}(x,t)$], a longitudinal secondary wave $\varepsilon_{xx}(x,t)$ and another shear wave $\varepsilon_{xy}(x,t)$ with amplitudes $\sim 10^{-6}$ are generated by the magnetization precession. Similarly to the aforementioned situation, the longitudinal wave appears to be a superposition of a free wave with the wavelength $\lambda_{L}$ and velocity $c_{L}$ and a forced wave with the parameters $\lambda_{T}$ and $c_{T}$. In contrast, the secondary shear wave can be regarded as a single wave because its wavelength $\lambda_{T}$ and velocity $c_{T}$ match those of the driving wave.

Figure 3: Secondary elastic waves generated by the magnetization precession induced by the primary longitudinal (a) and transverse (b) waves in the 2-$\mu$m-thick $\mathrm{Ni}$ film. Snapshots are taken at 0.37 ns (a) and 0.35 ns (b).

The magnetization precession induced by the primary elastic wave also affects its propagation at long distances from the $\mathrm{Ni}$ surface. A careful evaluation of the local strain amplitudes $\varepsilon_{xx}^{\text{max}}$ and $\varepsilon_{xz}^{\text{max}}$ in the driving longitudinal and transverse waves reveals that they decrease with increasing distance $x$ from the $\mathrm{Ni}$ surface. This decay is caused by the energy transfer to the magnetic subsystem, where the strain-driven magnetization precession is hindered by the Gilbert damping Azovtsev and Pertsev (2019).
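The position-dependent profiles of the secondary waves described above follow from simple two-wave interference: two components of the same frequency but different wavenumbers produce a spatially modulated local amplitude. The minimal sketch below (Python) assumes equal partial amplitudes purely for illustration and shows that the resulting beating period of the free and forced shear components is on the micrometer scale, i.e., visible within a 2-$\mu$m-thick film.

```python
import numpy as np

# Free (k_T) and forced (k_L) components of a secondary shear wave generated by a
# longitudinal driving wave at the resonance frequency; equal partial amplitudes
# are assumed purely for illustration.
nu = 9.6e9                                 # Hz
cL, cT = 5277.0, 3734.0                    # m/s (Table 1)
kL, kT = 2*np.pi*nu/cL, 2*np.pi*nu/cT

x = np.linspace(0.0, 2.0e-6, 2001)         # positions across a 2-um Ni film
# maximum over time of sin(kT*x - w*t) + sin(kL*x - w*t) for unit partial amplitudes
local_amplitude = 2.0*np.abs(np.cos(0.5*(kT - kL)*x))

beat_length = 2*np.pi/abs(kT - kL)
print(f"spatial beating period ~ {beat_length*1e6:.2f} um")   # ~1.3 um
```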
The analysis of the simulation results shows that the dependences $\varepsilon_{xx}^{\text{max}}(x)$ and $\varepsilon_{xz}^{\text{max}}(x)$ can be fitted by an exponential function $e^{-x/L_{\text{dec}}}$, where the decay length $L_{\text{dec}}$ depends on the wave frequency $\nu$. At the resonance excitation $\nu=9.6$ GHz, $L_{\text{dec}}$ amounts to approximately 350 $\mu$m for the longitudinal wave and about 19 $\mu$m for the shear wave. This finding explains why no damping of magnetic origin was detected in simulations of the propagation of longitudinal elastic waves in $\mathrm{Ni}$ over a short distance of 300 nm Chen _et al._ (2017). At the same time, it was shown experimentally that surface acoustic waves (SAWs) can propagate in a $\mathrm{Ni}$ film over distances of several millimeters Casals _et al._ (2020). This absence of significant damping observed for the studied SAWs with frequencies not exceeding 500 MHz is very different from the results of Homer _et al._ (1987), who reported the decay length $L_{\text{dec}}\approx 5.8$ $\mu$m for the longitudinal wave with the frequency of 9.4 GHz in $\mathrm{Ni}$. The reason for such a difference most probably lies in a drastic reduction of damping, which should happen when the frequency of the elastic wave decreases from about 10 GHz to several hundred MHz. As for the damping of transverse elastic waves in $\mathrm{Ni}$, our results show that the magnetic damping of elastic waves ($L_{\text{dec}}\approx 19$ $\mu$m) could be stronger than the damping of electronic origin (measured $L_{\text{dec}}\approx 29$ $\mu$m Homer _et al._ (1987)) at wave frequencies around 10 GHz. Most importantly, the magnetoelastic interaction leads to the formation of a spin wave tightly coupled to the driving elastic wave. The spin waves predicted by our simulations have sinusoidal time dependences under both types of elastic excitation, which differs from a non-sinusoidal time dependence reported in Ref. Chen _et al._ (2017) for the spin wave generated by the longitudinal acoustic wave having a near-resonance frequency of 10 GHz. Regarding the spin wave amplitude, the transverse elastic wave appears to be much more efficient for the generation of spin waves in $\mathrm{Ni}$ than the longitudinal wave at the chosen equilibrium magnetization orientation [compare panels (a) and (b) in Fig. 2]. Similarly to elastically generated spin waves in Fe81Ga19 and CoFe2O4 films Azovtsev and Pertsev (2017, 2019), the spin wave propagating in a thick $\mathrm{Ni}$ film has the same frequency and wavelength as the driving strain wave. Since both waves (spin and elastic) travel with the same velocity $c_{L}$ or $c_{T}$ and obey a purely elastic dispersion relation $k_{L}=2\pi\nu/c_{L}$ or $k_{T}=2\pi\nu/c_{T}$, the driving wave acts as a carrier of the spin wave having a forced character. Furthermore, the decay length of the spin wave carried by the longitudinal acoustic wave matches the decay length $L_{\text{dec}}\approx 350$ $\mu$m of the latter in our simulations, which agrees with the behavior predicted for CoFe2O4 films Azovtsev and Pertsev (2019). However, in the case of excitation by a shear wave, the spin-wave decay length is significantly smaller than that of the driving wave, being roughly 9 $\mu$m instead of 19 $\mu$m.
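The fitting procedure mentioned above is a one-parameter least-squares problem. The following is a minimal sketch (Python/SciPy) in which a synthetic amplitude profile stands in for the simulation output; only the fitting step itself is meant to be illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, eps0, L_dec):
    """Exponential decay of the local strain amplitude, eps0*exp(-x/L_dec)."""
    return eps0*np.exp(-x/L_dec)

# Placeholder data standing in for the strain amplitudes extracted from the
# simulated wave profiles (synthetic profile with L_dec = 19 um and weak noise).
x = np.linspace(0.0, 5.0e-6, 50)
rng = np.random.default_rng(1)
amp = 1e-4*np.exp(-x/19e-6)*(1.0 + 0.005*rng.standard_normal(x.size))

(eps0_fit, L_fit), _ = curve_fit(decay, x, amp, p0=(1e-4, 1e-5))
print(f"fitted decay length: {L_fit*1e6:.1f} um")
```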
Despite this reduced decay length, the acoustically driven spin waves with frequencies $\nu\approx 10$ GHz can still propagate in Ni over long distances of several micrometers, which is important for magnon spintronics.

## IV Magnetoelastic dynamics in Ni/GaAs bilayers

Now we turn our attention to $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers comprising relatively thin $\mathrm{Ni}$ layers with the thickness $t_{\text{F}}$ comparable to the wavelength of the driving elastic wave with the frequency $\nu\sim\nu_{\text{res}}$, which are most suitable for applications in miniature spin injectors (see Sec. V).

Figure 4: Time dependence of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface excited by longitudinal (a) or transverse (b) elastic waves with the frequency $\nu=\nu_{\text{res}}=9.6$ GHz. The thickness $t_{\text{F}}$ of the $\mathrm{Ni}$ film in each case is equal to the corresponding wavelength $\lambda_{L}$ or $\lambda_{T}$.

The magnetoelastic dynamics in such bilayers is of a more complicated character due to reflections of the elastic waves from the $\mathrm{Ni}|\mathrm{GaAs}$ interface and the $\mathrm{GaAs}$ free surface. Fortunately, owing to similar acoustic impedances of $\mathrm{Ni}$ and $\mathrm{GaAs}$, the transmittance of the driving longitudinal or transverse wave through the $\mathrm{Ni}|\mathrm{GaAs}$ interface is close to unity (about 0.9 with respect to energy). In contrast, the driving wave fully reflects from the $\mathrm{GaAs}$ free surface, and the reflected wave strongly disturbs the magnetization dynamics when it penetrates back into the $\mathrm{Ni}$ layer. In order to avoid this complication, we imparted a strong artificial elastic damping to $\mathrm{GaAs}$, which is sufficient to force any elastic wave to vanish before it reaches the free surface, but does not significantly change the strain dynamics at the $\mathrm{Ni}|\mathrm{GaAs}$ interface. The simulations demonstrated that the elastically driven magnetization dynamics in $\mathrm{Ni}$ layers with thicknesses $t_{\text{F}}$ close to $\lambda_{L}$ or $\lambda_{T}$ remains highly inhomogeneous at the resonance excitation $\nu=\nu_{\text{res}}$. Initially the magnetic dynamics has the form of a spin wave, but it assumes a complex character after several reflections of the driving elastic wave from the boundaries of the $\mathrm{Ni}$ layer. However, near the interface the magnetization precesses with a constant frequency and amplitude in a steady-state regime, which settles in after a transition period of about 1 ns (Fig. 4). Performing a series of simulations at different thicknesses of $\mathrm{Ni}$ layers, we found that the amplitude of the magnetization precession at the interface has local maxima at Ni thicknesses amounting to 0.25, 0.75, 1.25, and 1.75 of the wavelength $\lambda_{L}$ or $\lambda_{T}$ (Fig. 5). This result differs from that obtained for Fe81Ga19/Au and CoFe2O4/Pt bilayers in our previous works Azovtsev and Pertsev (2017, 2019), where this amplitude is maximal at a ferromagnet thickness equal to one wavelength of the driving elastic wave. Figure 5: Amplitude of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface as a function of the $\mathrm{Ni}$ thickness $t_{\text{F}}$ normalized by the wavelength $\lambda$ of the driving longitudinal or transverse elastic wave. The plots show the maximal change $\Delta m_{x}$ of the out-of-plane direction cosine $m_{x}(x=t_{\text{F}},t)$ normalized by the largest value of $\Delta m_{x}$ in the studied thickness range.
The excitation frequency equals $\nu_{\text{res}}=9.6$ GHz. To understand the revealed behavior of ferromagnetic-nonmagnetic (F/N) bilayers, we investigated the dependence of the strain amplitude at the interface on the thickness $t_{\text{F}}$ of the F layer. The analysis of the results of simulations showed that in all studied bilayers the precession amplitude in the steady-state regime becomes maximal whenever the strain amplitude maximizes. Therefore, we considered a general elasticity problem of finding the strain distribution in an elastic F/N bilayer subjected to a periodic surface displacement $u_{i}^{\text{F}}(x=0,t)=u_{\text{max}}e^{-i\omega t}$. Despite multiple reflections of the elastic waves from the boundaries of the F layer, the steady-state solution for the elastic displacement $u_{i}^{\text{F}}(x,t)$ inside this layer can be written as a superposition of two waves with the same frequency $\omega=2\pi\nu$. Indeed, due to the principle of superposition in linear elasticity any number of interfering sinusoidal waves with the same frequency but different amplitudes and phases produce another sinusoidal wave of the same frequency with its own amplitude and phase Sadd (2005). Hence, we can write $u_{i}^{\text{F}}(x,t)=A_{i}^{\text{F}}e^{i(k_{i}^{\text{F}}x-\omega t)}+B_{i}^{\text{F}}e^{-i(k_{i}^{\text{F}}x+\omega t)},$ (2) where the first term corresponds to the waves propagating towards the F$|$N interface, while the second term describes the waves reflected from the F$|$N interface; $A_{i}^{\text{F}}$ and $B_{i}^{\text{F}}$ are the unknown amplitudes of these waves, and $k_{i}^{\text{F}}$ is the wavenumber of the longitudinal ($i=x$) or transverse ($i=y$ or $z$) wave in the F layer. Figure 6: Expected thickness dependences of the strain amplitudes at the interface between $\mathrm{Ni}$ and $\mathrm{GaAs}$ layers and CoFe2O4 and Pt layers. The amplitude of each strain is normalized by its maximal value. The thickness $t_{\text{F}}$ of the ferromagnetic layer is normalized by the wavelength $\lambda$ of the excited longitudinal or transverse elastic wave. The excitation frequency equals 9.6 GHz ($\mathrm{Ni}$/$\mathrm{GaAs}$) and 11 GHz (CoFe2O4/Pt). Since we neglect the reflections from the free surface of the N layer, only the transmitted elastic wave exists in it, and the displacement $u_{i}^{\text{N}}(x,t)$ has the form $u_{i}^{\text{N}}(x,t)=A_{i}^{\text{N}}e^{i(k_{i}^{\text{N}}x-\omega t)},$ (3) where $A_{i}^{\text{N}}$ and $k_{i}^{\text{N}}$ are the amplitude and wavenumber of the transmitted wave. The mechanical boundary conditions at the F$|$N interface $x=t_{\text{F}}$ yield the displacement continuity $u_{i}^{\text{F}}(x=t_{\text{F}},t)=u_{i}^{\text{N}}(x=t_{\text{F}},t)$ and the stress continuity $\sigma_{ix}^{\text{F}}(x=t_{\text{F}},t)=\sigma_{ix}^{\text{N}}(x=t_{\text{F}},t)$. In our model case, the stresses are given by the relations $\sigma_{ix}^{\text{F}}(x,t)=c_{\alpha\alpha}^{\text{F}}(1/\sqrt{\alpha})\partial/\partial x[u_{i}^{\text{F}}(x,t)]$ and $\sigma_{ix}^{\text{N}}(x,t)=c_{\alpha\alpha}^{\text{N}}(1/\sqrt{\alpha})\partial/\partial x[u_{i}^{\text{N}}(x,t)]$, where $c_{\alpha\alpha}^{\text{F}}$ and $c_{\alpha\alpha}^{\text{N}}$ are the elastic stiffnesses of the F and N layers, respectively ($\alpha=1$ at $i=x$ and $\alpha=4$ at $i=y$ or $z$). Combining the boundary conditions at the F$|$N interface and the F surface $x=0$ and using Eqs. 
(2) and (3), one can derive analytic relations for the unknown amplitudes $A_{i}^{\text{F}}$, $B_{i}^{\text{F}}$, and $A_{i}^{\text{N}}$. The substitution of these relations back into Eqs. (2) and (3) yields the formulae for the displacements $u_{i}^{\text{F}}(x,t)$ and $u_{i}^{\text{N}}(x,t)$, which make it possible to calculate the strains $\varepsilon_{ix}^{\text{F}}=(1/\sqrt{\alpha})\partial u_{i}^{\text{F}}/\partial x$ and $\varepsilon_{ix}^{\text{N}}=(1/\sqrt{\alpha})\partial u_{i}^{\text{N}}/\partial x$ in the F and N layers. For the strains $\varepsilon_{ix}^{\text{F}}(x=t_{\text{F}},t)$ at the F$|$N interface, after some mathematical manipulations we obtain $\varepsilon_{ix}^{\text{F}}(x=t_{\text{F}},t)=\frac{2i\varepsilon_{ix}^{\text{max}}Z_{\alpha}e^{-i\omega t}}{(1+Z_{\alpha})F^{-}+(1-Z_{\alpha})F^{+}},$ (4) where $Z_{\alpha}=\sqrt{\frac{c_{\alpha\alpha}^{\text{N}}\rho_{\text{N}}}{c_{\alpha\alpha}^{\text{F}}\rho_{\text{F}}}}$, $F^{+}=e^{ik_{i}^{\text{F}}t_{\text{F}}}$, and $F^{-}=e^{-ik_{i}^{\text{F}}t_{\text{F}}}$. Equation (4) shows that the amplitude of $\varepsilon_{ix}^{\text{F}}(x=t_{\text{F}},t)$ depends on the input strain $\varepsilon_{ix}^{\text{max}}=(1/\sqrt{\alpha})u_{\text{max}}k_{i}^{\text{F}}$, the relative thickness $t_{\text{F}}/\lambda_{L}$ or $t_{\text{F}}/\lambda_{T}$ of the F layer, and the dimensionless parameter $Z_{\alpha}$ of the F/N bilayer, which is governed by the elastic stiffnesses and densities of the involved materials. Using Eq. (4), we calculated the dependences of the discussed strain amplitudes on the relative thickness of the F layer for $\mathrm{Ni}$/$\mathrm{GaAs}$ and CoFe2O4/Pt bilayers subjected to the resonance excitation $\nu=\nu_{\text{res}}$. The results presented in Fig. 6 show that, for a given bilayer, the amplitudes of $\varepsilon_{xx}^{\text{F}}(x=t_{\text{F}},t)$ and $\varepsilon_{zx}^{\text{F}}(x=t_{\text{F}},t)$ normalized by their maximal values follow similar curves (almost identical in the case of $\mathrm{Ni}$/$\mathrm{GaAs}$) when plotted as a function of $t_{\text{F}}/\lambda_{L}$ and $t_{\text{F}}/\lambda_{T}$, respectively. However, the maximal strain amplitude is reached at thicknesses $t_{\text{F}}=(0.25+0.5n)\lambda$ in $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers and at $t_{\text{F}}=(0.5+0.5n)\lambda$ in CoFe2O4/Pt ones ($\lambda=\lambda_{L}$ or $\lambda_{T}$, $n=0,1,2,3$…). These conditions explain the dissimilar results of our micromagnetoelastic simulations performed for $\mathrm{Ni}$/$\mathrm{GaAs}$ and CoFe2O4/Pt bilayers, which showed that the precession amplitude at the interface has a maximum at $t_{\text{F}}=0.75\lambda$ ($\mathrm{Ni}$/$\mathrm{GaAs}$) and $t_{\text{F}}=\lambda$ (CoFe2O4/Pt). Furthermore, the analysis of Eq. (4) reveals that the character of the strain-amplitude thickness dependence is governed by the magnitude of the parameter $Z_{\alpha}$. Namely, when $Z_{\alpha}<1$, the strain amplitude maximizes at $t_{\text{F}}=(0.25+0.5n)\lambda$ as it happens in the $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers ($Z_{1}\approx Z_{4}\approx 0.53$), whereas at $Z_{\alpha}>1$ the optimal thicknesses satisfy the condition $t_{\text{F}}=(0.5+0.5n)\lambda$ holding for the CoFe2O4/Pt bilayers ($Z_{1}\approx 2.34$, $Z_{4}\approx 1.91$). These simple criteria make it possible to predict the optimal thickness of the ferromagnetic layer that maximizes the strain and precession amplitudes at the interface for any F/N bilayer.
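Equation (4) can be checked numerically in a few lines. The sketch below (Python) evaluates the normalized interface strain amplitude as a function of $t_{\text{F}}/\lambda$ for the two $Z_{\alpha}$ regimes, using the values quoted in this section, and recovers the two families of optimal thicknesses.

```python
import numpy as np

def strain_amplitude(tF_over_lambda, Z):
    """Normalized interface strain amplitude following from Eq. (4).

    With theta = k*t_F = 2*pi*t_F/lambda, |eps(x=t_F)| is proportional to
    Z/sqrt(cos(theta)**2 + Z**2*sin(theta)**2).
    """
    theta = 2.0*np.pi*np.asarray(tF_over_lambda)
    amp = Z/np.sqrt(np.cos(theta)**2 + Z**2*np.sin(theta)**2)
    return amp/amp.max()

t = np.linspace(0.01, 2.0, 2000)
for label, Z in (("Ni/GaAs", 0.53), ("CoFe2O4/Pt, longitudinal", 2.34)):
    a = strain_amplitude(t, Z)
    # positions of the local maxima of the interface strain amplitude
    peaks = t[1:-1][(a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])]
    print(label, "maxima at t_F/lambda ~", np.round(peaks, 2))
# Z < 1 (Ni/GaAs):    maxima at 0.25, 0.75, 1.25, 1.75
# Z > 1 (CoFe2O4/Pt): maxima at 0.5, 1.0, 1.5, ...
```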
Figure 7: Dependence of the amplitude of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface on the excitation frequency $\nu$. The points show the maximal deviation $\Delta m_{x}(\nu)$ of the out-of-plane magnetization direction cosine $m_{x}$ from its equilibrium value. The thickness of the $\mathrm{Ni}$ layer equals three quarters of the wavelength of the driving longitudinal or transverse elastic wave.

To conclude this section, we discuss the dependence of the amplitude of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface on the excitation frequency $\nu$. For the optimal $\mathrm{Ni}$ thickness $t_{\text{F}}=0.75\lambda$ and the driving waves with the initial strain amplitudes $\varepsilon_{xx}^{\text{max}}=\varepsilon_{xz}^{\text{max}}=10^{-4}$, the simulations predict that the maximal deviation $\Delta m_{x}(\nu)$ of the magnetization direction cosine $m_{x}$ from the equilibrium value varies with the frequency as shown in Fig. 7. It can be seen that $\Delta m_{x}(\nu)$ reaches a peak at a frequency $\nu_{\text{max}}$ slightly higher than the resonance frequency $\nu_{\text{res}}=9.6$ GHz. Namely, $\nu_{\text{max}}$ amounts to 9.9 GHz for the precession excited by the longitudinal elastic waves and to 10 GHz for that induced by the transverse waves. In agreement with the results described in Sec. III, the shear waves appear to be much more efficient for the excitation of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface (see Fig. 7).

## V Spin pumping into GaAs layer

The magnetization precession occurring near the interface between the ferromagnet and a nonmagnetic conductor generates spin pumping into the latter Tserkovnyak _et al._ (2005). Using the results obtained for the magnetization dynamics $\mathbf{m}(x=t_{\mathrm{F}},t)$ induced by the elastic waves at the $\mathrm{Ni}|\mathrm{GaAs}$ interface, we can calculate the spin current flowing in the $\mathrm{GaAs}$ layer. The spin-current density $\mathbf{J}_{s}(x,t)$ is a second-rank tensor characterizing the direction of spin flow and the orientation and magnitude of the carried spin polarization per unit volume Dyakonov and Perel (1971). In the vicinity of the $\mathrm{Ni}|\mathrm{GaAs}$ interface, the density $\mathbf{J}_{\mathrm{SP}}(x=t_{\mathrm{F}},t)$ of the spin current pumped into $\mathrm{GaAs}$ can be evaluated via the approximate relation $\mathbf{e}_{n}\cdot\mathbf{J}_{\mathrm{SP}}\simeq(\hbar/4\pi)\mathrm{Re}[g^{r}_{\uparrow\downarrow}]\mathbf{m}\times\dot{\mathbf{m}}$, where $\mathbf{e}_{n}$ is the unit vector normal to the interface and pointing into $\mathrm{GaAs}$, $\hbar$ is the reduced Planck constant, $g^{r}_{\uparrow\downarrow}$ is the reflection spin-mixing conductance per unit area, and a small contribution caused by the imaginary part of $g^{r}_{\uparrow\downarrow}$ is neglected Zwierzycki _et al._ (2005). Since $\mathrm{Re}[g^{r}_{\uparrow\downarrow}]$ may be set equal to $1.5\times 10^{17}$ m$^{-2}$ for the $\mathrm{Ni}|\mathrm{GaAs}$ interface Ando _et al._ (2011), the above relation and the simulation data on the temporal variation of $\mathbf{m}(x=t_{\mathrm{F}},t)$ enable us to evaluate the spin-current density $\mathbf{J}_{\mathrm{SP}}(x=t_{\mathrm{F}},t)$.
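The spin-pumping relation above is straightforward to evaluate once the interface magnetization trajectory is known. The sketch below (Python) shows this bookkeeping for an illustrative small-angle circular precession; the cone angle and precession axis are placeholders, not simulation output, and only the evaluation of $(\hbar/4\pi)\mathrm{Re}[g^{r}_{\uparrow\downarrow}]\,\mathbf{m}\times\dot{\mathbf{m}}$ from sampled data is meant to be shown.

```python
import numpy as np

HBAR = 1.054571817e-34     # J s
g_r = 1.5e17               # Re[g_r] of the Ni|GaAs interface (m^-2), value quoted above

def spin_pumping(m_t, dt):
    """e_n . J_SP ~ (hbar/4pi) Re[g_r] m x dm/dt from sampled m(t) at the interface."""
    dmdt = np.diff(m_t, axis=0)/dt
    return (HBAR/(4.0*np.pi))*g_r*np.cross(m_t[:-1], dmdt)

# Illustrative input: a small-angle circular precession (placeholder for the
# simulated trajectory m(x=t_F, t)).
dt = 1e-13
t = np.arange(0.0, 1e-9, dt)
omega, theta = 2*np.pi*9.9e9, 0.01
m = np.stack([np.sin(theta)*np.cos(omega*t),
              np.sin(theta)*np.sin(omega*t),
              np.cos(theta)*np.ones_like(t)], axis=1)

J_sp = spin_pumping(m, dt)
print("peak |J_SP| per component (J m^-2):", np.abs(J_sp).max(axis=0))
```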
Figure 8 shows time dependences of three nonzero components $J^{\mathrm{SP}}_{xj}$ of the tensor $\mathbf{J}_{\mathrm{SP}}$, which establish themselves in the steady-state regime of the magnetization precession at the excitation frequency $\nu=\nu_{\mathrm{max}}$. It can be seen that the transverse wave creates much stronger spin pumping into $\mathrm{GaAs}$ than the longitudinal one. For both types of elastic excitations, the amplitude of $J^{\mathrm{SP}}_{xx}$ is about two times larger than the almost equal amplitudes of $J^{\mathrm{SP}}_{xy}$ and $J^{\mathrm{SP}}_{xz}$.

Figure 8: Time dependences of the spin-current densities $J_{xj}^{\mathrm{SP}}$ pumped into $\mathrm{GaAs}$ by the longitudinal (a) and transverse (b) elastic waves with the frequencies 9.9 GHz and 10 GHz, respectively. The time period shown in the figure corresponds to the steady-state regime of the magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface. The thickness of the $\mathrm{Ni}$ layer equals three quarters of the wavelength of the driving longitudinal or transverse elastic wave.

The averaging over the period $1/\nu$ of (almost sinusoidal) spin-current variations shows that $\langle J^{\mathrm{SP}}_{xx}\rangle_{t}$ is negligible, whereas there are very small nonzero dc components $\langle J^{\mathrm{SP}}_{xy}\rangle_{t}=\langle J^{\mathrm{SP}}_{xz}\rangle_{t}$ of the pumped spin current. It should be noted that, owing to the relatively small reflection spin-mixing conductance of the $\mathrm{Ni}|\mathrm{GaAs}$ interface, the spin pumping into $\mathrm{GaAs}$ does not significantly increase the effective damping of the magnetization precession in $\mathrm{Ni}$ Nikitchenko and Pertsev (2020). The pumped spin current generates non-equilibrium spin accumulation $\bm{\mu}_{s}(x,t)$, which gives rise to a spin backflow at the interface with the density $\mathbf{J}_{\mathrm{SB}}(x=t_{\mathrm{F}},t)$ amounting to $\mathbf{e}_{n}\cdot\mathbf{J}_{\mathrm{SB}}\approx-\mathrm{Re}[g^{r}_{\uparrow\downarrow}]\bm{\mu}_{s}/4\pi$ Tserkovnyak _et al._ (2005). The overall spin-current density $\mathbf{J}_{s}(x,t)$ decays inside the $\mathrm{GaAs}$ layer due to spin relaxation and diffusion. The spatial distribution of the density $\mathbf{J}_{s}$ depends on that of the spin accumulation $\bm{\mu_{s}}$ Tserkovnyak and Brataas (2002), being defined in our one-dimensional model by the relation $\mathbf{e}_{n}\cdot\mathbf{J}_{s}(x,t)=-[\sigma\hbar/(4e^{2})]\partial\bm{\mu}_{s}(x,t)/\partial x$, where $e$ is the elementary positive charge, and $\sigma$ is the electrical conductivity, which amounts to $3.68\times 10^{4}$ S m$^{-1}$ for n+-$\mathrm{GaAs}$ Kikkawa and Awschalom (1998); Nikitchenko and Pertsev (2020). We find the spin accumulation $\bm{\mu}_{s}(x,t)$ by solving the diffusion equation Tserkovnyak and Brataas (2002) supplemented with the boundary conditions for the spin currents at the $\mathrm{Ni}|\mathrm{GaAs}$ interface $x=t_{\mathrm{F}}$ and the $\mathrm{GaAs}$ free surface $x=t_{\mathrm{F}}+t_{\mathrm{N}}$, which read $\mathbf{J}_{s}(x=t_{\mathrm{F}})=\mathbf{J}_{\mathrm{SP}}(x=t_{\mathrm{F}})+\mathbf{J}_{\mathrm{SB}}(x=t_{\mathrm{F}})$ and $\mathbf{J}_{s}(x=t_{\mathrm{F}}+t_{\mathrm{N}})=0$.
The calculation yields $\bm{\mu}_{s}^{\omega}=\frac{4\pi e^{2}\cosh{[\kappa(t_{\mathrm{N}}+t_{\mathrm{F}}-x)]}}{e^{2}\mathrm{Re}[g^{r}_{\uparrow\downarrow}]\cosh{(\kappa t_{\mathrm{N}})}+\pi\sigma\hbar\kappa\sinh{(\kappa t_{\mathrm{N}})}}\mathbf{e}_{n}\cdot\mathbf{J}_{\mathrm{SP}}^{\omega},$ (5) where $\bm{\mu}_{s}^{\omega}$ and $\mathbf{J}_{\mathrm{SP}}^{\omega}$ denote the complex amplitudes of the harmonics having the angular frequency $\omega$, which represent the Fourier components of the spin accumulation $\bm{\mu}_{s}(x,t)$ and spin pumping density $\mathbf{J}_{\mathrm{SP}}(x=t_{\mathrm{F}},t)$, and the parameter $\kappa=\lambda_{\mathrm{sd}}^{-1}\sqrt{1+i\omega\tau_{\mathrm{sf}}}$ depends on the spin-diffusion length $\lambda_{\mathrm{sd}}$ and the spin-flip relaxation time $\tau_{\mathrm{sf}}$. Equation (5) differs from a similar relation derived in Ref. Tserkovnyak and Brataas (2002) by accounting for the spin backflow. In the case of $\mathrm{GaAs}$, the spin backflow cannot be neglected because it appears to be rather strong at $\lambda_{\mathrm{sd}}=2.32$ $\mu$m Kikkawa and Awschalom (1998) and $\tau_{\mathrm{sf}}=0.9$ ns Bhat and Kumar (2014). Since the elastically driven spin pumping is almost monochromatic in our setting, Eq. (5) is valid for the sought relation between $\bm{\mu}_{s}(x,t)$ and $\mathbf{J}_{\mathrm{SP}}(x=t_{\mathrm{F}},t)$ as well, which enables us to calculate the overall spin-current density $\mathbf{J}_{s}(x,t)$. Owing to the inverse spin Hall effect (ISHE), the spin current in the $\mathrm{GaAs}$ layer generates a charge current with the density $\mathbf{J}_{c}^{\mathrm{ISHE}}$ defined by the formula Mosendz _et al._ (2010) $\mathbf{J}_{c}^{\mathrm{ISHE}}(x,t)=\alpha_{\mathrm{SH}}(2e/\hbar)\mathbf{e}_{n}\times[\mathbf{e}_{n}\cdot\mathbf{J}_{s}(x,t)],$ (6) where $\alpha_{\mathrm{SH}}=0.007$ is the spin Hall angle of $\mathrm{GaAs}$ Ando _et al._ (2011). Under the considered open-circuit electrical boundary conditions, the transverse charge current $\mathbf{J}_{c}^{\mathrm{ISHE}}$ flowing along the interface should create a charge accumulation at the lateral boundaries of the $\mathrm{GaAs}$ film. Such an accumulation induces an electric field $\mathbf{E}$ in $\mathrm{GaAs}$, which causes a drift current with the density $\mathbf{J}_{c}^{\mathrm{drift}}=\sigma\mathbf{E}$. To calculate the spatial distribution of the electric potential $\varphi$ in the $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer and the total charge current density $\mathbf{J}_{c}=\mathbf{J}_{c}^{\mathrm{ISHE}}+\mathbf{J}_{c}^{\mathrm{drift}}$, we numerically solve Laplace’s equation $\nabla^{2}\varphi=0$ with the appropriate boundary conditions. The latter follow from the absence of charge current across the outer surfaces of the bilayer and the absence of $\mathbf{J}_{c}^{\mathrm{ISHE}}$ inside $\mathrm{Ni}$. It should be noted that the potential $\varphi$ should be regarded as a complex quantity since the parameter $\kappa$ affecting the spin-current density $\mathbf{J}_{s}$ involved in Eq. (6) has a substantial imaginary part at $\omega\tau_{\mathrm{sf}}\gg 1$.
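Equation (5) can be evaluated directly. The short sketch below (Python) computes its spatial factor normalized to the interface value, using the GaAs parameters quoted above; the absolute scale, set by $\mathrm{Re}[g^{r}_{\uparrow\downarrow}]$, $\sigma$, and the simulated spin-pumping amplitude, cancels in this ratio and is therefore left out. The result illustrates both the decay of $|\bm{\mu}_{s}|$ and the rapid phase variation that makes the complex treatment of $\varphi$ necessary.

```python
import numpy as np

# Spin-transport parameters of n+-GaAs quoted above
lam_sd, tau_sf = 2.32e-6, 0.9e-9      # spin-diffusion length (m), spin-flip time (s)
t_N = 5e-6                            # GaAs thickness considered below (m)
omega = 2*np.pi*10e9                  # excitation frequency (rad/s)

kappa = np.sqrt(1.0 + 1j*omega*tau_sf)/lam_sd

# Spatial factor of Eq. (5) normalized to its interface value
x_rel = np.linspace(0.0, t_N, 501)                 # distance x - t_F into GaAs
mu_norm = np.cosh(kappa*(t_N - x_rel))/np.cosh(kappa*t_N)
mu_norm = mu_norm/mu_norm[0]

i = np.searchsorted(x_rel, 0.5e-6)
print("at 0.5 um:  |mu_s| ratio =", round(abs(mu_norm[i]), 2),
      ", phase shift (rad) =", round(float(np.angle(mu_norm[i])), 2))
```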
In the numerical calculations, we consider only the component $J^{s}_{xy}$ of the elastically generated spin current $\mathbf{J}_{s}$, because $J^{s}_{xx}$ does not create any charge flow, and the components $J^{s}_{xy}$ and $J^{s}_{xz}$ have almost equal magnitudes and can be probed independently via transverse voltages $V_{z}^{\mathrm{ISHE}}=\varphi(z=w_{\mathrm{N}}/2)-\varphi(z=-w_{\mathrm{N}}/2)$ and $V_{y}^{\mathrm{ISHE}}=\varphi(y=0)-\varphi(y=-h_{\mathrm{N}})$, respectively (Fig. 1). Figure 9 shows the amplitude $\delta V_{z}^{\mathrm{ISHE}}(x)$ of the oscillating voltage $V_{z}^{\mathrm{ISHE}}(x,t)$ calculated at the excitation frequency $\nu=\nu_{\mathrm{max}}$ for the $\mathrm{GaAs}$ films with the thickness $t_{\mathrm{N}}=5$ $\mu$m. Figure 9: Amplitude $\delta V_{z}^{\mathrm{ISHE}}(x)$ of the ac voltage between the lateral sides of the $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer excited by longitudinal (a) and transverse (b) elastic waves with the frequencies 9.9 GHz and 10 GHz, respectively. The thickness of the $\mathrm{GaAs}$ layer equals 5 $\mu$m, and its width $w_{\mathrm{N}}$ is indicated in the figure. It can be seen that $\delta V_{z}^{\mathrm{ISHE}}$ varies nonmonotonically with the distance $x-t_{\mathrm{F}}$ from the $\mathrm{Ni}|\mathrm{GaAs}$ interface, reaching its maximum inside the semiconductor at $x-t_{\mathrm{F}}=100-200$ nm. The voltage amplitude $\delta V_{z}^{\mathrm{ISHE}}(x)$ grows with the increasing width $w_{\mathrm{N}}$ of the $\mathrm{GaAs}$ film, and the voltage peak becomes much higher at the excitation of magnetization dynamics by the shear elastic wave (see Fig. 9). Importantly, the transverse ac voltage $V_{z}^{\mathrm{ISHE}}$ characterizing the spin pumping induced by either type of elastic waves is high enough for the experimental measurement near the $\mathrm{Ni}|\mathrm{GaAs}$ interface. Another method to evaluate the spin pumping into a normal metal or semiconductor experimentally is known as a nonlocal spin detection scheme Johnson and Silsbee (1985); Lou _et al._ (2007). This scheme measures a voltage $V_{s}$ between a ferromagnetic probe and a nonmagnetic electrode brought into contact with the semiconductor. Since the voltage $V_{s}$ is directly proportional to the product $\bm{\mu}_{s}\cdot\mathbf{M}_{\mathrm{probe}}$, where $\mathbf{M}_{\mathrm{probe}}$ is the probe magnetization, it is possible to detect all three components of the vector $\bm{\mu}_{s}$ by using differently magnetized ferromagnetic contacts. As a representative example, we consider an iron probe magnetized along the $x$ axis, which is placed on the lateral side of the $\mathrm{GaAs}$ layer (Fig. 1), and a normal-metal electrode deposited on the free surface $x=t_{\mathrm{F}}+t_{\mathrm{N}}$ of the 5-$\mu$m-thick $\mathrm{GaAs}$ film. In this case, the spin voltage $V_{s}(x,t)$ is defined by the relation $V_{s}(x,t)=\eta_{\mathrm{IE}}p_{\mathrm{Fe}}\mu_{x}^{s}(x,t)/(2e)$, where $\eta_{\mathrm{IE}}$ is the spin transmission efficiency of the $\mathrm{GaAs}|\mathrm{Fe}$ interface, $p_{\mathrm{Fe}}$ is the spin polarization of $\mathrm{Fe}$ at the Fermi level, and the spin accumulation $\mu_{x}^{s}$ beneath the probe with nanoscale dimensions is assumed uniform. Figure 10 shows the amplitude $\delta V_{s}(x)$ and phase $\phi_{s}(x)$ of the ac spin voltage calculated using the parameters $\eta_{\mathrm{IE}}\approx 0.5$ and $p_{\mathrm{Fe}}\approx 0.42$ characteristic of the Schottky tunnel barrier between Fe probe and n+-$\mathrm{GaAs}$ Lou _et al._ (2007). 
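The relation between the spin voltage and the interface spin accumulation is a simple linear conversion. As an illustration only (Python), the voltage amplitudes quoted below for Fig. 10 translate into spin accumulations of a few $\mu$eV; these derived numbers are given here purely for orientation and do not appear in the original analysis.

```python
E = 1.602176634e-19            # elementary charge (C)
eta_IE, p_Fe = 0.5, 0.42       # Fe/n+-GaAs contact parameters quoted above

def mu_x_from_Vs(V_s):
    """Invert V_s = eta_IE * p_Fe * mu_x / (2e) for the spin accumulation mu_x (J)."""
    return 2.0*E*V_s/(eta_IE*p_Fe)

# Spin-voltage amplitudes discussed below (Fig. 10)
for V in (650e-9, 80e-9):
    mu = mu_x_from_Vs(V)
    print(f"V_s = {V*1e9:.0f} nV  ->  mu_x ~ {mu/E*1e6:.1f} ueV")
```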
Figure 10: Amplitude $\delta V_{s}$ and phase $\phi_{s}$ (inset) of the ac spin voltage between the lateral Fe probe and a normal-metal contact at the $\mathrm{GaAs}$ free surface plotted as a function of the distance $x-t_{\mathrm{F}}$ from the $\mathrm{Ni}|\mathrm{GaAs}$ interface. The $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer is excited either by the longitudinal wave with frequency 9.9 GHz or by the transverse wave with frequency 10 GHz. The thickness of the $\mathrm{GaAs}$ layer equals 5 $\mu$m.

Importantly, the voltage amplitude $\delta V_{s}$ appears to be rather large near the $\mathrm{Ni}|\mathrm{GaAs}$ interface, exceeding 650 nV under the excitation by the transverse elastic wave and 80 nV in the bilayer excited by the longitudinal wave (see Fig. 10). Although $\delta V_{s}(x)$ gradually decreases with increasing distance $x-t_{\mathrm{F}}$ from the $\mathrm{Ni}|\mathrm{GaAs}$ interface, it remains measurable experimentally even at distances over 0.5 $\mu$m. The phase $\phi_{s}(x)$ of the spin voltage varies linearly inside the $\mathrm{GaAs}$ layer and changes strongly already at $x-t_{\mathrm{F}}\sim 0.5$ $\mu$m (Fig. 10). This result demonstrates that the phase difference between the spin accumulation inside $\mathrm{GaAs}$ and the spin pumping at the $\mathrm{Ni}|\mathrm{GaAs}$ interface may be large owing to the condition $\omega\tau_{\mathrm{sf}}\gg 1$. The input electric power $W$ necessary for the functioning of the proposed spin injector can be estimated from the generated acoustic power using the relation $W=\frac{1}{K^{2}}\frac{1}{2}A\rho c\omega^{2}u_{\text{max}}^{2}$, where $K^{2}$ is the electromechanical transduction efficiency of the piezoelectric transducer Bhaskar _et al._ (2020), $c$ is the velocity of the generated longitudinal or transverse acoustic wave, and $A$ denotes the dynamically strained area of the ferromagnetic film having the mass density $\rho$. Expressing $u_{\text{max}}$ via the maximal strain $\varepsilon_{\text{max}}$ in the acoustic wave, we obtain $W=\alpha\frac{1}{K^{2}}\frac{1}{2}A\rho c^{3}\varepsilon_{\text{max}}^{2}$. For a device with $A<25$ $\mu$m$^{2}$ and $K^{2}=12\%$ Bhaskar _et al._ (2020), which is driven by a transverse ($\alpha=4$) or longitudinal ($\alpha=1$) wave with $\varepsilon_{\text{max}}=10^{-4}$, the calculation yields $W<2$ mW. This value is much smaller than the lowest power consumption $W\approx 25$ mW of the spin injector driven by the microwave magnetic field Ando _et al._ (2011).

## VI Conclusions

In this work, we theoretically studied the coupled elastic and spin dynamics induced in $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers by longitudinal and transverse acoustic waves generated by the attached piezoelectric transducer (Fig. 1). Using advanced micromagnetoelastic simulations, we first modeled the elastically driven magnetization dynamics in thick $\mathrm{Ni}$ films at wave frequencies around the resonance frequency $\nu_{\mathrm{res}}$ of the coherent magnetization precession in the unstrained $\mathrm{Ni}$ film. The simulations showed that this dynamics has the form of a forced spin wave having the frequency and wavelength of the monochromatic driving wave. Remarkably, the transverse elastic wave creates a much stronger spin wave than the longitudinal one at the considered external magnetic field (Fig. 2). The backaction of the travelling spin wave on the elastic dynamics manifests itself in the generation of weak secondary elastic waves created by the magnetization precession (see Fig. 3).
These waves are characterized by oscillating strains $\varepsilon_{ij}(x,t)$ different from the strain $\varepsilon_{xx}(x,t)$ or $\varepsilon_{xz}(x,t)$ in the primary driving wave, and they were not reported in the previous work on the modeling of magnetization dynamics induced by longitudinal elastic waves in $\mathrm{Ni}$ Chen _et al._ (2017). The magnetoelastic feedback also influences the driving elastic wave, leading to a gradual reduction of its amplitude during the propagation in $\mathrm{Ni}$, which adds to the “acoustic” decay caused by the wave attenuation of electronic origin Homer _et al._ (1987). At the considered wave frequencies $\nu\approx 10$ GHz, the decay resulting from the energy transfer to the magnetic subsystem is stronger than the acoustic decay for the transverse waves but weak for longitudinal ones. Importantly, both types of elastic waves are expected to carry spin signals over significant distances of several micrometers in $\mathrm{Ni}$. We also modeled the magnetoelastic dynamics of $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayers at the excitation frequencies $\nu\sim\nu_{\mathrm{res}}$, focusing on Ni thicknesses comparable to the wavelength of the injected acoustic wave. The simulations allowed for the reflections of the elastic waves from the boundaries of the $\mathrm{Ni}$ layer and demonstrated the excitation of nonhomogeneous magnetization dynamics in it. Importantly, a steady-state magnetization precession with a frequency equal to the excitation frequency and a constant amplitude was revealed at the $\mathrm{Ni}|\mathrm{GaAs}$ interface after a short transition period of about 1 ns (Fig. 4). The simulations performed for $\mathrm{Ni}$ layers of different thicknesses showed that the amplitude of the stationary precession has a maximum at a $\mathrm{Ni}$ thickness amounting to three quarters of the wavelength of the driving elastic wave. This finding, which differs from the results of simulations carried out for $\mathrm{Fe}_{81}\mathrm{Ga}_{19}$/$\mathrm{Au}$ and $\mathrm{CoFe}_{2}\mathrm{O}_{4}$/$\mathrm{Pt}$ bilayers Azovtsev and Pertsev (2017, 2019), was explained by an analytical model giving simple criteria for the optimal geometry of an elastic bilayer that maximizes the strain amplitude at the interface. Numerical results obtained for the steady-state magnetization precession at the $\mathrm{Ni}|\mathrm{GaAs}$ interface were used to evaluate the spin-current densities pumped into $\mathrm{GaAs}$ by the dynamically strained $\mathrm{Ni}$ film (Fig. 8). The spin accumulation in the semiconductor was then calculated by solving numerically the spin diffusion equation, taking into account the spin pumping into $\mathrm{GaAs}$ and the spin backflow into $\mathrm{Ni}$. Since the spin current creates a charge current owing to the ISHE, the spin generation in $\mathrm{GaAs}$ can be detected via electrical measurements. Therefore, we also determined the distribution of the electric potential in the $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer with open-circuit electrical boundary conditions by numerically solving Laplace’s equation. This enabled us to evaluate the transverse voltage appearing between the lateral sides of the dynamically strained $\mathrm{Ni}$/$\mathrm{GaAs}$ bilayer. It was shown that the amplitude of this ac voltage is large enough for experimental detection near the $\mathrm{Ni}|\mathrm{GaAs}$ interface (Fig. 9).
Furthermore, spin accumulation manifests itself in the voltage between a ferromagnetic probe and a nonmagnetic electrode brought into contact with the semiconductor (Fig. 1). Performing calculations of this ac spin voltage, we found that it retains measurable amplitude even at the distances over 0.5 $\mu$m from the $\mathrm{Ni}|\mathrm{GaAs}$ interface (Fig. 10). Thus, our theoretical study of the $\mathrm{Ni}$/$\mathrm{GaAs}$ heterostructure demonstrated that the spin injector employing elastic waves is promising for the spin generation in semiconductors. Since the proposed device can be driven electrically via the strain-mediated magnetoelectric effect, it has much lower power consumption than the spin injector excited by a microwave magnetic field Ando _et al._ (2011). ## VII Acknowledgements The work was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics ”BASIS”. ## References * Hägele _et al._ (1998) D. Hägele, M. Oestreich, and W. W. Rühle, Appl. Phys. Lett. 73, 1580 (1998). * Kikkawa and Awschalom (1999) J. M. Kikkawa and D. D. Awschalom, Physics Today 52, 33 (1999). * Bhat and Kumar (2014) S. G. Bhat and P. S. A. Kumar, Sci. Rep. 4, 5588 (2014). * Putikka and Joynt (2004) W. O. Putikka and R. Joynt, Phys. Rev. B 70, 113201 (2004). * Kokurin _et al._ (2013) I. A. Kokurin, P. V. Petrov, and N. S. Averkiev, Semiconductors 47, 1232 (2013). * Hirohata _et al._ (2020) A. Hirohata, K. Yamada, Y. Nakatani, I.-L. Prejbeanu, B. Diény, P. Pirro, and B. Hillebrands, J. Magn. Magn. Mater. 509, 166711 (2020). * Schmidt _et al._ (2000) G. Schmidt, D. Ferrand, L. W. Molenkamp, A. T. Filip, and B. J. van Wees, Phys. Rev. B 62, R4790 (2000). * Hanbicki and Jonker (2002) A. T. Hanbicki and B. T. Jonker, Appl. Phys. Lett. 80, 1240 (2002). * Jiang _et al._ (2005) X. Jiang, R. Wang, R. M. Shelby, R. M. Macfarlane, S. R. Bank, J. S. Harris, and S. S. Parkin, Phys. Rev. Lett. 94, 056601 (2005). * Dash _et al._ (2009) S. P. Dash, S. Sharma, R. S. Patel, M. P. de Jong, and R. Jansen, Nature 462, 491 (2009). * Kamerbeek _et al._ (2014) A. M. Kamerbeek, E. K. de Vries, A. Dankert, S. P. Dash, B. J. van Wees, and T. Banerjee, Appl. Phys. Lett. 104, 212106 (2014). * Brataas _et al._ (2002) A. Brataas, Y. Tserkovnyak, G. E. W. Bauer, and B. I. Halperin, Phys. Rev. B 66, 060404 (2002). * Tserkovnyak _et al._ (2005) Y. Tserkovnyak, A. Brataas, G. E. W. Bauer, and B. I. Halperin, Rev. Mod. Phys. 77, 1375 (2005). * Heinrich _et al._ (2003) B. Heinrich, Y. Tserkovnyak, G. Woltersdorf, A. Brataas, R. Urban, and G. E. W. Bauer, Phys. Rev. Lett. 90, 187601 (2003). * Saitoh _et al._ (2006) E. Saitoh, M. Ueda, H. Miyajima, and G. Tatara, Appl. Phys. Lett. 88, 182509 (2006). * Bell _et al._ (2008) C. Bell, S. Milikisyants, M. Huber, and J. Aarts, Phys. Rev. Lett. 100, 047002 (2008). * Mosendz _et al._ (2010) O. Mosendz, J. E. Pearson, F. Y. Fradin, G. E. W. Bauer, S. D. Bader, and A. Hoffmann, Phys. Rev. Lett. 104, 046601 (2010). * Czeschka _et al._ (2011) F. D. Czeschka, L. Dreher, M. S. Brandt, M. Weiler, M. Althammer, I.-M. Imort, G. Reiss, A. Thomas, W. Schoch, W. Limmer, H. Huebl, R. Gross, and S. T. B. Goennenwein, Phys. Rev. Lett. 107, 046601 (2011). * Tashiro _et al._ (2015) T. Tashiro, S. Matsuura, A. Nomura, S. Watanabe, K. Kang, H. Sirringhaus, and K. Ando, Sci. Rep. 5, 15158 (2015). * Ando _et al._ (2011) K. Ando, S. Takahashi, J. Ieda, H. Kurebayashi, T. Trypiniotis, C. H. W. Barnes, S. Maekawa, and E. Saitoh, Nat. Mat. 10, 655 (2011). * Shikoh _et al._ (2013) E. Shikoh, K. 
Ando, K. Kubo, E. Saitoh, T. Shinjo, and M. Shiraishi, Phys. Rev. Lett. 110, 127201 (2013). * Lee _et al._ (2014) J. Lee, L. Huang, D. Hung, T. Chiang, J. C. A. Huang, J. Liang, and S. Lee, Appl. Phys. Lett. 104, 052401 (2014). * Wang _et al._ (2017) Y. Wang, R. Ramaswamy, M. Motapothula, K. Narayanapillai, D. Zhu, J. Yu, T. Venkatesan, and H. Yang, Nano Lett. 17, 7659 (2017). * Mendes _et al._ (2018) J. B. S. Mendes, A. Aparecido-Ferreira, J. Holanda, A. Azevedo, and S. M. Rezende, Appl. Phys. Lett. 112, 242407 (2018). * Weiler _et al._ (2011) M. Weiler, L. Dreher, C. Heeg, H. Huebl, R. Gross, M. S. Brandt, and S. T. B. Goennenwein, Phys. Rev. Lett. 106, 117601 (2011). * Uchida _et al._ (2011a) K. Uchida, H. Adachi, T. An, T. Ota, M. Toda, B. Hillebrands, S. Maekawa, and E. Saitoh, Nat. Mater. 10, 737 (2011a). * Weiler _et al._ (2012) M. Weiler, H. Huebl, F. S. Goerg, F. D. Czeschka, R. Gross, and S. T. B. Goennenwein, Phys. Rev. Lett. 108, 176601 (2012). * Kamra _et al._ (2015) A. Kamra, H. Keshtgar, P. Yan, and G. E. W. Bauer, Phys. Rev. B 91, 104409 (2015). * Polzikova _et al._ (2016) N. I. Polzikova, S. G. Alekseev, I. I. Pyataikin, I. M. Kotelyanskii, V. A. Luzanov, and A. P. Orlov, AIP Advances 6, 056306 (2016). * Azovtsev and Pertsev (2016) A. V. Azovtsev and N. A. Pertsev, Phys. Rev. B 94, 184401 (2016). * Azovtsev and Pertsev (2017) A. V. Azovtsev and N. A. Pertsev, Appl. Phys. Lett. 111, 222403 (2017). * Polzikova _et al._ (2018) N. I. Polzikova, S. G. Alekseev, V. A. Luzanov, and A. O. Raevskiy, Phys. Solid State 60, 2211 (2018). * Azovtsev and Pertsev (2019) A. V. Azovtsev and N. A. Pertsev, Phys. Rev. B 100, 224405 (2019). * Alekseev _et al._ (2020) S. G. Alekseev, S. E. Dizhur, N. I. Polzikova, V. A. Luzanov, A. O. Raevskiy, A. P. Orlov, V. A. Kotov, and S. A. Nikitov, Appl. Phys. Lett. 117, 072408 (2020). * Cherepov _et al._ (2014) S. Cherepov, P. Khalili Amiri, J. G. Alzate, K. Wong, M. Lewis, P. Upadhyaya, J. Nath, M. Bao, A. Bur, T. Wu, G. P. Carman, A. Khitun, and K. L. Wang, Appl. Phys. Lett. 104, 082403 (2014). * Bhaskar _et al._ (2020) U. K. Bhaskar, D. Tierno, G. Talmelli, F. Ciubotaru, C. Adelmann, and T. Devolder, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 67, 1284 (2020). * Uchida _et al._ (2011b) K.-I. Uchida, T. An, Y. Kajiwara, M. Toda, and E. Saitoh, Appl. Phys. Lett. 99, 212501 (2011b). * Thevenard _et al._ (2014) L. Thevenard, C. Gourdon, J. Y. Prieur, H. J. von Bardeleben, S. Vincent, L. Becerra, L. Largeau, and J.-Y. Duquesne, Phys. Rev. B 90, 094401 (2014). * Janušonis _et al._ (2015) J. Janušonis, C. L. Chang, P. H. M. van Loosdrecht, and R. I. Tobey, Appl. Phys. Lett. 106, 181601 (2015). * Gowtham _et al._ (2015) P. G. Gowtham, T. Moriyama, D. C. Ralph, and R. A. Buhrman, J. Appl. Phys. 118, 233910 (2015). * Casals _et al._ (2020) B. Casals, N. Statuto, M. Foerster, A. Hernández-Mínguez, R. Cichelero, P. Manshausen, A. Mandziak, L. Aballe, J. M. Hernàndez, and F. Macià, Phys. Rev. Lett. 124, 137202 (2020). * Akhiezer _et al._ (1958) A. I. Akhiezer, V. G. Bar’iakhtar, and S. V. Peletminski, J. Exptl. Theoret. Phys. (U.S.S.R.) 35, 228 (1958). * Chen _et al._ (2017) C. Chen, A. Barra, A. Mal, G. Carman, and A. Sepulveda, Appl. Phys. Lett. 110, 072401 (2017). * Azovtsev and Pertsev (2020) A. V. Azovtsev and N. A. Pertsev, Phys. Rev. Materials 4, 064418 (2020). * Kittel (1949) C. Kittel, Rev. Mod. Phys. 21, 541 (1949). * Niitsu (2020) K. Niitsu, J. Phys. D: Applied Physics 53, 39LT01 (2020). * Stearns (1986) M. 
Stearns, in _Landolt-Börnstein - Group III Condensed Matter_ , Vol. 19a. Magnetic properties of 3d, 4d, and 5d elements, alloys and compounds, edited by H. Wijn (Springer-Verlag, Berlin‐Heidelberg‐New York‐Tokyo, 1986) Chap. 1.1.2. Fe, Co, Ni, pp. 24–51. * Walowski _et al._ (2008) J. Walowski, M. D. Kaufmann, B. Lenk, C. Hamann, J. McCord, and M. Münzenberg, J. Phys. D: Applied Physics 41, 164016 (2008). * Haynes (2016) W. M. Haynes, _CRC Handbook of Chemistry and Physics, 96th Edition (Internet Version 2016)_ (CRC Press/Taylor and Francis, Boca Raton, FL, 2016). * Homer _et al._ (1987) R. Homer, G. C. Alexandrakis, and G. Dewar, J. Appl. Phys. 61, 4133 (1987). * Sadd (2005) M. H. Sadd, _Elasticity. Theory, applications and numerics_ (Elsevier Butterworth-Heinemann, MA, USA, 2005). * Dyakonov and Perel (1971) M. I. Dyakonov and V. I. Perel, Phys. Lett. A 35, 459 (1971). * Zwierzycki _et al._ (2005) M. Zwierzycki, Y. Tserkovnyak, P. J. Kelly, A. Brataas, and G. E. W. Bauer, Phys. Rev. B 71, 064420 (2005). * Nikitchenko and Pertsev (2020) A. I. Nikitchenko and N. A. Pertsev, Phys. Rev. Appl. 14, 034022 (2020). * Tserkovnyak and Brataas (2002) Y. Tserkovnyak and A. Brataas, Phys. Rev. B 66, 224403 (2002). * Kikkawa and Awschalom (1998) J. M. Kikkawa and D. D. Awschalom, Phys. Rev. Lett. 80, 4313 (1998). * Johnson and Silsbee (1985) M. Johnson and R. H. Silsbee, Phys. Rev. Lett. 55, 1790 (1985). * Lou _et al._ (2007) X. Lou, C. Adelmann, S. A. Crooker, E. S. Garlid, J. Zhang, K. S. M. Reddy, S. D. Flexner, C. J. Palmstrøm, and P. A. Crowell, Nat. Phys. 3, 197 (2007).
# A heterogeneously integrated lithium niobate-on-silicon nitride photonic platform

Mikhail Churaev Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Rui Ning Wang Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Viacheslav Snigirev Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Annina Riedhauser IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Terence Blésin Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Charles Möhl IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Miles A. Anderson Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Anat Siddharth Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Youri Popoff IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Integrated Systems Laboratory, Swiss Federal Institute of Technology Zurich (ETH Zürich), CH-8092 Zürich, Switzerland Ute Drechsler IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Daniele Caimi IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Simon Hönl IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Johann Riemensberger Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Junqiu Liu Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland Paul Seidler<EMAIL_ADDRESS>IBM Research - Europe, Zurich, Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland Tobias J. Kippenberg<EMAIL_ADDRESS>Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, Switzerland

The availability of thin-film lithium niobate on insulator (LNOI) and advances in processing have led to the emergence of fully integrated LiNbO3 electro-optic devices[1, 2, 3, 4], including low-voltage[5], high-speed modulators[6], electro-optic frequency combs[7], and microwave-optical transducers [8, 9]. Yet to date, LiNbO3 photonic integrated circuits (PICs) have mostly been fabricated using non-standard etching techniques that lack the reproducibility routinely achieved in silicon photonics. Widespread future application of thin-film LiNbO3 requires a reliable and scalable solution using standard processing and precise lithographic control. Here we demonstrate a heterogeneously integrated LiNbO3 photonic platform that overcomes the abovementioned challenges by employing wafer-scale bonding of thin-film LiNbO3 to planarized low-loss silicon nitride (Si3N4) photonic integrated circuits[10], a mature foundry-grade integrated photonic platform. The resulting devices combine the substantial Pockels effect of LiNbO3 with the scalability, high yield, and complexity of the underlying Si3N4 PICs. Importantly, the platform maintains the low propagation loss ($\mathbf{<0.1}$ dB/cm) and efficient fiber-to-chip coupling ($<$2.5 dB per facet) of the Si3N4 waveguides.
We find that ten transitions between a mode confined in the Si3N4 PIC and the hybrid LiNbO3 mode produce less than 0.8 dB additional loss, corresponding to a loss per transition not exceeding 0.1 dB. These nearly lossless adiabatic transitions thus link the low-loss passive Si3N4 photonic structures with electro-optic components. We demonstrate high-Q microresonators, optical splitters, electrically tunable photonic dimers, electro-optic frequency combs, and carrier-envelope phase detection of a femtosecond laser on the same platform, thus providing a reliable and foundry-ready solution for low-loss and complex LiNbO3 integrated photonic circuits.

Figure 1: Hybrid integrated Lithium Niobate photonics. (a) Conventional approaches to lithium niobate photonics, consisting of traditional Ti or proton exchange based waveguides, and the recently emerged integrated photonics based on etching of thin-film LNOI (which is mostly based on ridge waveguides). (b) The hybrid approach presented in this work, based on heterogeneous integration of thin film lithium niobate with Si3N4. (c) Schematic of our approach, which is based on wafer bonding of 4” (100 mm) thin film lithium niobate onto planarized ultra low loss $\mathrm{Si_{3}N_{4}}$ photonic integrated circuits. (d) Hybrid optical mode profile for a typical waveguide used in this work (cf. Methods). (e) AFM measurements of the Si3N4 wafer before bonding showing 400 pm RMS roughness over a 5 $\mu$m by 5 $\mu$m field of view. (f) Long-range profilometry scan of the wafer before bonding. (g) False-coloured scanning electron micrograph showing the hybrid structure.

Modern society has a constantly increasing demand for optical communications bandwidth, with aggregate data rates doubling every 18 months[11, 12], and optical technology is coming ever closer to the central processing units[13]. Optical modulators play a crucial role in this context, providing the means to transfer electronic signals to optical carriers. With the rise of commercial integrated photonics [14], a wide variety of modulation platforms have been demonstrated that are compatible with wafer-scale manufacturing, among which silicon and indium phosphide are the most prominent [15, 16, 17, 18]. In the last decade, alternative systems, including organic hybrids [19, 20], plasmonic devices [21], and modulators based on two-dimensional materials [22, 23, 24, 25] have also been developed. Among all the materials used, lithium niobate (LiNbO3) remains the most preferable because of its excellent physical properties and commercial availability [26]. Advances in wafer-scale transfer of LiNbO3 thin-films via the SmartCut${}^{\textbf{TM}}$ technique, combined with improvements in etching of LiNbO3, have enabled low-loss integrated electro-optics [27, 28, 29]. This has led to several key demonstrations, including ultra-high-$Q$ optical microresonators[28], efficient electro-optic frequency comb generation[7], frequency converters [30], and non-reciprocal devices [31, 32]. In addition, electro-optic modulation both at CMOS voltage levels and at high speed (up to 100 GHz) has been achieved [5, 6], offering routes toward compact integrated LiNbO3 modulators compatible with CMOS microelectronics for applications ranging from classical communication for 5G cellular networks and datacenter interconnects to quantum interfaces for microwave-optical conversion [33, 34, 35] and topological photonics employing synthetic dimensions [36, 37].
Besides the electro-optic applications, integrated LiNbO3 PICs are also of high interest for nonlinear photonics, for example, for efficient second-harmonic generation, optical squeezing, and parametric amplification [38, 39, 40]. Despite the achievements to date, widespread adoption of LiNbO3 integrated photonics is still impeded by several key issues. First, current LNOI-based devices are fabricated using specific non-conventional ion-beam etching (IBE) to achieve smooth waveguide surfaces. Insufficient etch-mask selectivity leads to the formation of shallow ridge waveguides that require more challenging process control to achieve the desired geometries. This complicates the establishment of a reliable process design kit (PDK) for integrated LiNbO3 platforms. Second, edge coupling between fibers and chips is challenging, as the ridge waveguide structures demonstrated so far show significant coupling loss of 5 to 10 dB per facet [30], unless more complicated double-etching techniques are used [41, 42]. Third, while record resonance quality factors ($Q\approx 10^{7}$, linear loss of 0.027 dB/cm) have been reported in LiNbO3 microresonators[28], this has only been demonstrated for selected optical resonances and has not been achieved broadly in other recently reported works, where losses are typically one order of magnitude higher (0.2–0.3 dB/cm; see Supplementary Table 1 for comparison). For future applications, uniformly low loss across a wafer using precise and mature lithographic processes, along with efficient coupling, are necessary to develop a foundry-level technology that includes PDKs with, e.g., splitters, arrayed-waveguide gratings, optical filters or beamforming networks. As an alternative to conventional bulk LiNbO3 and ridge-waveguide-based photonic devices, hybrid platforms combining thin-film LiNbO3 with waveguides made of Si, Si3N4, or Ta2O5 have been recently developed[43, 44, 45] (see Fig 1(a)-(c)). With proper geometry optimization, the heterogeneously integrated LNOI devices can reach electro-optic performance comparable to that of the all-LNOI platforms[46] ($\mathrm{V_{\pi}L}=2.3$ V$\cdot$cm). Heterogeneous integration using the organic adhesive benzocyclobutene (BCB) to bond LNOI to silicon and direct bonding of chiplets to silicon and silicon nitride PICs has been demonstrated, leading to modulators operating at CMOS voltages [47, 44]. Even though the low-loss operation of hybrid Si3N4-LiNbO3 waveguides was reported [48, 49], the previous works did not provide reliable studies of wafer-scale uniformity, dispersion, insertion loss, or other essential aspects of photonic integrated circuits. The approaches were aimed at specific device applications and could not demonstrate all the benefits of heterogeneous integration of LiNbO3 with a well-developed photonic platform. Moreover, the wafer-level bonding required to achieve scalable PDKs was not shown. Here, we demonstrate a high-yield, low-loss, integrated LiNbO3-Si3N4 photonic platform that solves multiple issues of LNOI integrated photonics. The approach circumvents the need for optimized IBE etching of LiNbO3 and opens up the possibility of creating a wide range of low-loss integrated electro-optic photonic circuits. This is achieved by wafer-scale heterogeneous integration [50, 48] (i.e. direct wafer bonding [51]) of an LNOI wafer onto a patterned and planarized ultra-low-loss Si3N4 substrate as depicted in Fig. 1(c).
Our approach combines the maturity of Si3N4 integrated photonics with the large Pockels effect of LiNbO3 and enables complex, hybrid PICs that incorporate passive Si3N4 and electro-optic LiNbO3 and exhibit ultra-low propagation loss (8.5 dB/m). Figure 2: Optical loss measurements of the hybrid devices. (a) Photograph of a 2” lithium niobate wafer fully bonded to a 4” photonic Damascene wafer. (b) Schematics of the chip facet with lithium niobate removed from the silicon nitride inverse tapers, and FDTD simulations of the mode transition at the interface. (c) Broadband transmission measurements of a 100 GHz FSR ring showing flat coupling of resonances from 1260 nm to 1630 nm wavelength. (d) Extracted integrated dispersion of the device. The inset shows a zoom-in of one of the modes at 184 THz with amplitude modulation sidebands at 500 MHz and the corresponding fitting curve. (e) Wafer map with the averaged linewidth indicated for the reference 21 GHz FSR rings. (f) Simulated optical mode confinement in LiNbO3 as a function of optical frequency. Insets show typical mode profiles. The grey-shadowed area represents the measurement bandwidth. (g) Loaded (red), intrinsic (green), and coupling (blue) linewidth of the resonances presented in (c). (h) Measured linewidth of 3 types of devices on the wafer accumulated over a 55 THz measurement bandwidth. Fabrication process. The process flow for our hybrid PICs starts with the fabrication of Si3N4 waveguide structures using the photonic Damascene process [52, 10]. We use a 100 mm-diameter silicon wafer with 4 $\mu$m-thick wet thermal SiO2, followed by deep-ultraviolet (DUV) stepper lithography, preform dry etching, preform reflow, low-pressure chemical vapor deposition (LPCVD) of Si3N4, chemical-mechanical polishing (CMP), and SiO2 interlayer deposition and annealing, as detailed in the Supplementary Information. The Si3N4 photonic Damascene process is free of crack formation in the highly tensile LPCVD Si3N4 film and provides high fabrication yield and ultra-low propagation loss (1 dB/m). In addition, double-inverse nanotapers [53] are incorporated for efficient edge coupling to lensed fibers. Previous devices fabricated using this process have been the workhorse for numerous system-level applications of soliton microcombs, ranging from coherent telecommunication [54] to astrophysical spectrometer calibration [55] to supercontinuum generation [56] and turnkey soliton generation [57]. One of the key advantages of the photonic Damascene process is the possibility of obtaining an extremely flat wafer surface suitable for heterogeneous integration. Specifically, we perform CMP on the SiO2 interlayer and bond the fabricated Si3N4 Damascene substrate to a commercially available LNOI wafer (NanoLN). The most critical constraints for achieving high bonding yield, the surface roughness and topography, are measured prior to bonding. With CMP, the long-range topography is reduced to a few nanometers over several hundred microns, as shown in Fig. 1(f). Moreover, atomic force microscopy (AFM) measurements over a range of a few microns reveal a root-mean-square (RMS) roughness of 400 pm (Fig. 1(e)). This roughness level is sufficiently low for direct wafer bonding. The donor and the acceptor wafer (the LNOI wafer and the Si3N4 substrate, respectively) are cleaned, and both are coated by atomic layer deposition (ALD) with a few nanometers of alumina (Al2O3). The wafers are then bonded and annealed at 250 °C to enhance the bonding strength.
See SI for more technical information on fabrication. We have successfully bonded several wafers, including whole 100-mm LNOI wafers, with a bonding yield close to 100%, as evidenced by photoacoustic spectroscopy. Subsequent back-end processing and electrode integration are described in the SI. A scanning electron microscope image (Fig. 1(g)) of a cross-section of the layer structure reveals clean bonding results. Finite-element-method simulations (see Fig. 1(d)) indicate that, for the waveguide and wafer parameters used here (cf. SI for waveguide geometry), the optical mode confinement factor for LiNbO3 at the telecommunication wavelength of 1550 nm is $\Gamma=\iint_{\mathrm{LiNbO_{3}}}|E|^{2}dS/\iint_{\mathrm{\Omega}}|E|^{2}dS=12\%$, where $\Omega$ denotes the whole cross-section area. To demonstrate the electro-optic capabilities of the heterogeneously integrated LiNbO3 photonic circuits, we deposit tungsten ($\mathrm{W}$) electrodes on top of the LiNbO3 adjacent to the waveguides with an electrode-electrode gap of 6 $\mu$m. To illustrate the versatility, lithographic precision, complexity, and yield of the hybrid platform, we design a reticle with various devices. Figure 2(e) shows the design layout of the Si3N4 photonic circuits for a 100-mm wafer; it contains nine fields with 16 chips each – in total, more than 100 chips with dimensions of 5 mm $\times$ 5 mm. The reticle includes chips with three different types of devices: (1) microresonators with a free spectral range (FSR) of either 100 GHz or 21 GHz, the former being used for electro-optic comb generation; (2) photonic molecules consisting of a pair of coupled microresonators each with a FSR of 50 GHz, as used for microwave-optical conversion schemes; and (3) waveguides with a length of several centimeters for supercontinuum generation (wafer D67b01). Figure 3: Adiabatic mode transitions and hybrid optical splitters. (a) Colored SEM image of a LiNbO3 taper fabricated to make a smooth optical mode transition. (b) Optical microscope image of a chip used to estimate the performance of interface tapers. Horizontal dotted lines mark waveguides with two interface transitions (blue), four interface transitions (red), and six interface transitions (green). The inset shows a zoom-in of a breakout region. (c) Schematic of a tapered (adiabatic) interface transition. (d) Transmission measurements of breakout waveguides described in (b) as well as a typical straight (non-adiabatic) transition for comparison (orange line). (e) Schematic of a straight (non-adiabatic) interface transition. (f) Transmission measurements of waveguides with straight transition breakouts. (g) Optical image, FDTD simulation, and schematic of a W-type 3-dB splitter. The geometry is defined by the underlying Si3N4 layer. (h) Transmission measurements of two ports of the splitter together with the transmission of a straight waveguide as reference. Optical loss characterization. Linear propagation loss is crucial in determining the performance of integrated electro-optic devices, as it limits the length of electro-optic modulators and the complexity of the associated photonic circuit. To measure the linear optical loss, evanescent coupling properties, and group velocity dispersion (GVD) of the hybrid structures, we perform broadband frequency-comb-assisted spectroscopy [58] of multiple microresonators across the entire wafer with three different external-cavity diode lasers covering the wavelength ranges of 1260–1360 nm, 1355–1505 nm, and 1500–1630 nm. 
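The measured resonance linewidths translate directly into intrinsic quality factors and an equivalent linear propagation loss. The following minimal sketch (not the analysis code used for this work) illustrates the conversion; the group index of roughly 2.1 assumed for the hybrid mode is an illustrative value, not a number quoted in the text.

```python
import numpy as np

# Minimal sketch: convert a measured intrinsic resonance linewidth into a
# quality factor and an equivalent propagation loss.  The group index n_g is
# an assumed illustrative value for the hybrid Si3N4-LiNbO3 mode.
c = 299792458.0          # speed of light, m/s
lam = 1550e-9            # wavelength, m
nu = c / lam             # optical frequency, Hz (~193.4 THz)
n_g = 2.1                # assumed group index of the hybrid mode

def q_from_linewidth(kappa_0_hz):
    """Intrinsic quality factor from the intrinsic linewidth kappa_0/2pi (Hz)."""
    return nu / kappa_0_hz

def loss_db_per_m(q_intrinsic):
    """Propagation loss alpha = 2*pi*n_g/(Q*lambda), expressed in dB/m."""
    alpha = 2.0 * np.pi * n_g / (q_intrinsic * lam)   # 1/m
    return 10.0 * np.log10(np.e) * alpha              # dB/m

for kappa in (50e6, 43e6):                            # example linewidths, Hz
    Q = q_from_linewidth(kappa)
    print(f"kappa_0/2pi = {kappa/1e6:.0f} MHz -> Q = {Q:.2e}, "
          f"loss = {loss_db_per_m(Q):.1f} dB/m")
```

With these assumptions, a linewidth of a few tens of MHz corresponds to intrinsic quality factors of a few million and propagation losses at the dB/m level, in line with the values reported below.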
The intrinsic quality factors of individual 100 GHz microring resonators reach up to $Q=3\times 10^{6}$, while the 50 GHz photonic dimers (i.e., coupled microrings) and 21 GHz single rings exhibit even higher quality factors up to $Q=4.5\times 10^{6}$. The latter corresponds to a linear propagation loss of 8.5 dB/m. We observe an absorption peak at approximately 1420 nm (207 THz), which we associate with an overtone of OH-bond vibrations in lithium niobate [59, 60]. As shown in Fig. 2(g), optical losses rise with increasing optical frequency. We associate this dependency with increased whispering-gallery (radiation) loss as the mode shifts into the LiNbO3 thin film at higher frequencies and becomes less confined (see Fig. 2(f)). For the same reason, we observe nearly uniform evanescent coupling of optical microresonators over a span of 55 THz. The layout in Fig. 2(e) is labeled with the most probable linewidth measured for 21 GHz microresonators in various regions, indicating the degree of variation across the wafer (see Supplementary Information). These results not only demonstrate high yield and wafer-scale fabrication but also include some of the highest quality factors achieved to date with integrated LiNbO3 devices. Notably, the $Q$s reported here are not isolated values as in prior work on ridge resonators [28] but are consistently high, i.e., we measure hundreds of resonances with $Q$ above $4\times 10^{6}$ (linewidth below 50 MHz), as shown in Fig. 2(h). Our hybrid platform also offers the possibility of precise dispersion engineering; due to the interplay between material dispersion and the optical mode distribution, the dispersion can be adjusted by varying the Si3N4 waveguide geometry or the LiNbO3 thickness. In this work, we designed the structure to work in the near-zero GVD regime, which is advantageous for broadband optical frequency comb generation. As shown in Fig. 2(d) for a microresonator with an FSR of 100 GHz, the measured integrated microresonator dispersion ($D_{\text{int}}$) varies by only 15 GHz over an optical bandwidth of 55 THz. Interestingly, there is no constraint preventing the design of devices that are uniformly coupled over a broad frequency range and, at the same time, have extremely small dispersion. Figure 4: Electro-optic frequency comb generation and tunable dimers using heterogeneously integrated $\mathrm{LiNbO_{3}}$ photonic circuits. (a) Experimental setup for electro-optic frequency comb generation. MWG - microwave generator, MWA - microwave amplifier, OSA - optical spectrum analyser. (b) Mode coupling schematics and integrated dispersion of the measured device. (c) Measured optical spectrum of the generated EO comb at 1552 nm central wavelength. Dashed line corresponds to numerical simulations with phase modulation amplitude $\beta$ = 0.14 $\pi$. (d) Examples of EO frequency combs generated at four other pump wavelengths. (e) Photonic dimer image and mode hybridization illustration. (f) High-Q resonance splitting of a photonic dimer without additional biasing. S - symmetric supermode, AS - antisymmetric supermode. (g) DC tuning of the photonic dimer mode hybridization, corresponding to linear tuning of around 30 MHz/V for a single mode. (h) Echelle plot of the photonic dimer transmission, showing mode hybridization over a broadband scanning range. Fiber-chip coupling and interface transitions. Efficient input coupling from an optical fiber to the photonic chip is paramount for numerous applications.
For air-cladded LiNbO3 ridge waveguides, inverse tapers lead to fiber-chip edge-coupling losses of around 10 dB per facet unless complicated multi-layer etching is used [41, 61, 62]. This is due to the significant mode mismatch between the lensed fiber mode (typically circular with about 2.5 $\mu$m diameter) and the asymmetric mode of partially etched air-cladded ridge waveguide structures. Some recent work on integrated LiNbO3 devices demonstrated the possibility of using embedded silicon edge-couplers to overcome this challenge [6]. In our case, the relative indices of refraction of the materials would lead to significant coupling loss if the LiNbO3 layer remained on top of the underlying Si3N4 inverse tapers. Hence, we remove the LiNbO3 from the coupling regions and rely on standard Damascene Si3N4 inverse tapers [53]. While this provides efficient input coupling, there remains the challenge of the transition between the regions with and without LiNbO3. To address this issue, we designed and implemented adiabatic tapers in the LiNbO3 layer, as shown in Fig. 3(c). The tapers are 100 $\mathrm{\mu}$m long with a tip width of 500 nm and a final width of 10 $\mathrm{\mu}$m. Both film removal and etching of the tapers are done in a single fabrication step using argon ion-beam etching with a photoresist etch mask (see the Supplementary Information for details on taper fabrication). None of the functional photonic components depends on this etching of the chip interface, which is employed only in the mode-transition regions and, due to the short taper length, does not require low roughness. We thus keep the LiNbO3 layer unprocessed for all the photonic components, where both roughness and precise alignment are critical to achieving low optical losses. To measure the efficiency of the adiabatic transitions and remove the ambiguity associated with the fiber-chip coupling loss from the measurement, we designed an experiment in which we introduce multiple breakouts on straight waveguides, where the optical mode experiences transitions from Si3N4 waveguides into the hybridized mode and back, as depicted in Fig. 3(b). We fabricated waveguides with 2, 4, 6, and 10 transitions (input/output tapers and 0, 1, 2, and 4 breakouts, respectively) to determine the increase in loss due to each transition. Figure 3(d) shows the results of these measurements. As a reference, we compare them with straight interfaces (schematic in Fig. 3(e)). As can be seen in Figure 3(f), each straight transition leads to approximately 1 dB of loss, whereas the tapered input/output behaves in this measurement as a virtually lossless transition. Strikingly, for the case of the tapered transitions, we observe hardly any difference between 2, 4 and 6 interfaces, as shown in Figure 3(d), and only approximately 0.8 dB of additional loss for ten interfaces. Considering the statistical uncertainty in the measurements, we deduce a transition loss of $<$0.1 dB per taper. Calibration of the transmission measurements is discussed in the Supplementary Information. The lithographic precision of the Si3N4 photonic circuit layer provides our heterogeneous integration approach with versatility and robustness, as confirmed by the implementation of a W-shaped 3-dB splitter/coupler [63] (see Fig. 3(g)) that uses the hybrid Si3N4-LiNbO3 mode but is defined solely by the underlying Si3N4 inverse tapers.
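The per-transition loss quoted above follows from comparing waveguides with different numbers of interface transitions: a simple linear fit of insertion loss versus transition count separates the per-transition loss from the common fiber-chip and waveguide contributions. The sketch below illustrates such a fit with hypothetical placeholder values, not the measured data of Fig. 3(d).

```python
import numpy as np

# Illustrative cut-back style fit: total insertion loss vs. number of
# interface transitions.  The loss values below are hypothetical
# placeholders, NOT the measured data.
n_transitions = np.array([2, 4, 6, 10])
insertion_loss_db = np.array([3.1, 3.3, 3.2, 3.9])   # hypothetical values

# Linear model: loss = loss_per_transition * n + constant offset
# (the offset absorbs fiber-chip coupling and straight-waveguide loss).
slope, offset = np.polyfit(n_transitions, insertion_loss_db, 1)
print(f"loss per transition ~ {slope:.2f} dB, common offset ~ {offset:.2f} dB")
```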
Splitters are important components for many optical devices, such as electro-optic modulators, optical networks, and lasers based on reflective semiconductor optical amplifiers. The elegance of this type of splitter lies in its simplicity of design. Due to the presence of the LiNbO3 slab and the single-mode nature of our hybrid waveguides, the optical mode is adiabatically transferred from the input arm to the output arms. We make the tapered sections 100 $\mu$m long, ensuring a small footprint for integrated components exploiting this design. Transmission measurements of the device reveal a flat response, with a power asymmetry between the two arms not exceeding 1.7 dB and an on-chip insertion loss not exceeding 1 dB in the 1500–1620 nm wavelength range (Fig. 3(h)). Electro-optic devices. To demonstrate the electro-optic performance achievable with our high-quality-factor hybrid LiNbO3 microresonators, we generate electro-optic frequency combs in the 21 GHz resonators pumped resonantly in the telecommunications C-band. We apply a high-power microwave signal with a frequency of 20.97 GHz across the integrated electrodes such that microwave-induced sidebands are resonantly enhanced (Fig. 4(a)-(b)). Even though only 12% of the optical mode is confined inside the lithium niobate, this is enough to achieve strong modulation of the optical phase. We observe around 60 sidebands within a 25 dB span for an injected RF power of 40 dBm, as depicted in Fig. 4(c). In this experiment, the phase modulation amplitude corresponds to approximately 0.14$\mathrm{\pi}$ (see Supplementary Information). The electro-optic coupling is enhanced due to the device’s high quality factor and flat dispersion. We also make use of the previously mentioned homogeneous coupling of our hybrid devices at optical wavelengths ranging from 1260 nm to 1630 nm to generate electro-optic combs at five different pump wavelengths (1290 nm, 1345 nm, 1500 nm, 1550 nm, and 1625 nm) on a single device (Fig. 4(d)). Moreover, according to the simulations, the geometry can be optimized for maximum electro-optic efficiency, with a characteristic $\mathrm{V_{\pi}\cdot L}$ product comparable to (a factor of 2 larger than) that of X-cut ridge-waveguide platforms [5]. The reason is that, in ridge waveguides, most of the electro-optic interaction occurs not in the ridge itself, but in the slab layer, where the modulating electric field is substantially stronger (see Supplementary Information). Therefore, the ridge waveguides and hybrid waveguides are conceptually similar in terms of electro-optic interactions (i.e., the role of the slab in ridge waveguides is taken over by the bonded LiNbO3 layer in our hybrid structure). Similar studies on the optimization of the electro-optic performance of heterogeneously integrated LiNbO3 devices can be found elsewhere [46]. As a further example of the electro-optic capabilities, we fabricate photonic dimers, which are known building blocks for quantum coherent transducers based on cavity electro-optics [64, 34, 65, 9, 66]. Figure 4(g) shows the mode hybridization in the system as a function of the applied DC voltage. We observe a frequency tuning of 30 MHz/V when a DC voltage is applied to one of the rings. Moreover, the precise and mature fabrication of the Si3N4 waveguides enables the creation of high-Q photonic dimers exhibiting broadband normal-mode splitting even at zero bias (cf. Figure 4(f)-(h)).
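The voltage-dependent hybridization of the dimer can be captured by a textbook two-mode coupled-oscillator picture: the supermode frequencies are the eigenvalues of a 2×2 matrix with the inter-ring coupling on the off-diagonal and the voltage-controlled detuning on the diagonal. The sketch below is a generic illustration of this picture, not the model used in the paper; the inter-ring coupling rate is an assumed value, while the 30 MHz/V tuning coefficient follows the number quoted above.

```python
import numpy as np

# Generic coupled-mode sketch of a photonic dimer (two coupled rings).
# J is an assumed inter-ring coupling rate; the electro-optic tuning
# coefficient of 30 MHz/V follows the value quoted in the text.
J = 0.5e9            # assumed inter-ring coupling rate, Hz
tuning = 30e6        # electro-optic tuning of one ring, Hz per volt

def supermode_frequencies(voltage):
    """Eigenfrequencies (relative to the common resonance), in Hz."""
    delta = tuning * voltage                 # detuning of the biased ring
    h = np.array([[+delta / 2.0, J],
                  [J, -delta / 2.0]])        # real, symmetric 2x2 matrix
    return np.sort(np.linalg.eigvalsh(h))

for v in (0.0, 10.0, 30.0):
    lo, hi = supermode_frequencies(v)
    print(f"V = {v:4.1f} V: supermode splitting = {(hi - lo)/1e6:.1f} MHz")
```

At zero bias the splitting equals twice the coupling rate, and the avoided crossing opens as the detuning is swept, reproducing the qualitative behavior of Fig. 4(g)-(h).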
The presence of avoided mode crossings for the symmetric supermode (lower frequency) is common in the photonic dimer configuration [67]. In a final experiment exploiting the $\chi^{(2)}$ nonlinearity of LiNbO3, we perform supercontinuum generation in the hybrid waveguides. We observe octave-spanning supercontinuum generation mediated by the $\chi^{(3)}$ nonlinearity, together with simultaneous second-harmonic generation due to the optical field in the LiNbO3, allowing direct measurement of the carrier-envelope offset frequency of the femtosecond pulse laser used as a pump. The details of this experiment can be found in the Supplementary Information. Summary. To conclude, we have demonstrated a hybrid Si3N4-LiNbO3 platform for photonic integrated circuits using direct wafer-scale bonding that endows the mature Si3N4 Damascene technology with the second-order nonlinearity ($\chi^{(2)}$ / Pockels effect) of LiNbO3. The heterogeneous integration preserves the precise lithographic control, low propagation loss, and efficient fiber-to-chip coupling of the underlying Si3N4 waveguides for use in a variety of important photonic building blocks. We have also presented a design for the transition from Si3N4 waveguides to hybrid Si3N4-LiNbO3 waveguides with a measured insertion loss not exceeding 0.1 dB per interface. The ability to achieve low-loss transitions is essential for the realization of complex devices, providing a bridge between passive silicon nitride photonics and electro-optic devices. To the best of our knowledge, this is the first time a heterogeneously integrated LiNbO3 photonic platform combines all the beneficial features of Si3N4 PICs at wafer scale. A comparison of the simultaneously achieved desirable features is given in Supplementary Table 1. With further geometry optimization, the electro-optic performance can reach levels comparable to that of ridge waveguide structures while keeping propagation losses independent of the quality of the LiNbO3 etching. Possible applications of our platform include photonic switching networks for neuromorphic or quantum computing, devices for quantum state transduction from microwave to optical photons, integrated electro-optic frequency comb sources, on-chip generation of second-harmonic and squeezed light, as well as high-speed electro-optic devices for optical communications or rapidly tunable, low-noise lasers. Acknowledgments: We thank the Operations Team of the Binnig and Rohrer Nanotechnology Center (BRNC), and especially Diana Davila Pineda and Ronald Grundbacher, for their help and support. Silicon nitride substrates were fabricated in the EPFL center of MicroNanoTechnology (CMi). We also thank Aleksandr Tusnin for his help with numerical simulations. Funding Information: This work was supported by funding from the European Union Horizon 2020 Research and Innovation Program under the Marie Skłodowska-Curie grant agreements No. 722923 (OMT) and No. 812818 (MICROCOMB), as well as under the FET-Proactive grant agreement No. 732894 (HOT). This work was also supported by the Swiss National Science Foundation under grant agreements No. 176563 (BRIDGE) and 186364 (Sinergia). Data Availability: The code and data used to produce the plots are available from Zenodo.

## References
* Zhu _et al._ [2021] D. Zhu, L. Shao, M. Yu, R. Cheng, B. Desiatov, C. J. Xin, Y. Hu, J. Holzgrafe, S. Ghosh, A. Shams-Ansari, E. Puma, N. Sinclair, C. Reimer, M. Zhang, and M. Lončar, Adv. Opt. Photon. 13, 242 (2021). * Wang _et al._ [2019a] C. Wang, M. Zhang, M.
Yu, R. Zhu, H. Hu, and M. Loncar, Nature Communications 10, 978 (2019a). * Desiatov _et al._ [2019] B. Desiatov, A. Shams-Ansari, M. Zhang, C. Wang, and M. Lončar, Optica 6, 380 (2019). * Bahadori _et al._ [2020] M. Bahadori, Y. Yang, A. E. Hassanien, L. L. Goddard, and S. Gong, Opt. Express 28, 29644 (2020). * Wang _et al._ [2018] C. Wang, M. , X. Chen, M. Bertrand, A. Shams-Ansari, S. Chandrasekhar, P. Winzer, and M. Lončar, Nature 562, 101 (2018). * He _et al._ [2019a] M. He, M. Xu, Y. Ren, J. Jian, Z. Ruan, Y. Xu, S. Gao, S. Sun, X. Wen, L. Zhou, L. Liu, C. Guo, H. Chen, S. Yu, L. Liu, and X. Cai, Nature Photonics 13, 359 (2019a), arXiv:1807.10362 . * Zhang _et al._ [2019a] M. Zhang, B. Buscaino, C. Wang, A. Shams-Ansari, C. Reimer, R. Zhu, J. M. Kahn, and M. Lončar, Nature 568, 373 (2019a), arXiv:1809.08636 . * Holzgrafe _et al._ [2020] J. Holzgrafe, N. Sinclair, D. Zhu, A. Shams-Ansari, M. Colangelo, Y. Hu, M. Zhang, K. K. Berggren, and M. Lončar, Optica 7, 1714 (2020). * McKenna _et al._ [2020] T. P. McKenna, J. D. Witmer, R. N. Patel, W. Jiang, R. V. Laer, P. Arrangoiz-Arriola, E. A. Wollack, J. F. Herrmann, and A. H. Safavi-Naeini, Optica 7, 1737 (2020). * Liu _et al._ [2021] J. Liu, G. Huang, R. N. Wang, J. He, A. S. Raja, T. Liu, N. J. Engelsen, and T. J. Kippenberg, Nature Communications 12, 2236 (2021). * Agrell _et al._ [2016] E. Agrell, M. Karlsson, A. R. Chraplyvy, D. J. Richardson, P. M. Krummrich, P. Winzer, K. Roberts, J. K. Fischer, S. J. Savory, B. J. Eggleton, M. Secondini, F. R. Kschischang, A. Lord, J. Prat, I. Tomkos, J. E. Bowers, S. Srinivasan, M. Brandt-Pearce, and N. Gisin, Journal of Optics (United Kingdom) 18, 10.1088/2040-8978/18/6/063002 (2016). * Kitayama _et al._ [2019] K. I. Kitayama, M. Notomi, M. Naruse, K. Inoue, S. Kawakami, and A. Uchida, APL Photonics 4, 10.1063/1.5108912 (2019). * Caulfield and Dolev [2010] H. J. Caulfield and S. Dolev, Nature Photonics 4, 261 (2010). * Thomson _et al._ [2016] D. Thomson, A. Zilkie, J. E. Bowers, T. Komljenovic, G. T. Reed, L. Vivien, D. Marris-Morini, E. Cassan, L. Virot, J.-M. Fédéli, J.-M. Hartmann, J. H. Schmid, D.-X. Xu, F. Boeuf, P. O’Brien, G. Z. Mashanovich, and M. Nedeljkovic, Journal of Optics 18, 073003 (2016). * Xu _et al._ [2005] Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson, Nature 435, 325 (2005). * Maram _et al._ [2019] R. Maram, S. Kaushal, J. Azaña, and L. R. Chen, _Photonics_, Vol. 6 (2019). * Ogiso _et al._ [2017] Y. Ogiso, J. Ozaki, Y. Ueda, N. Kashio, N. Kikuchi, E. Yamada, H. Tanobe, S. Kanazawa, H. Yamazaki, Y. Ohiso, T. Fujii, and M. Kohtoku, Journal of Lightwave Technology 35, 1450 (2017). * Witzens [2018] J. Witzens, Proceedings of the IEEE 106, 2158 (2018). * Lee _et al._ [2002] M. Lee, H. E. Katz, C. Erben, D. M. Gill, P. Gopalan, J. D. Heber, and D. J. McGee, Science 298, 1401 (2002). * Alloatti _et al._ [2014] L. Alloatti, R. Palmer, S. Diebold, K. P. Pahl, B. Chen, R. Dinu, M. Fournier, J. M. Fedeli, T. Zwick, W. Freude, C. Koos, and J. Leuthold, Light: Science and Applications 3, 5 (2014). * Haffner _et al._ [2015] C. Haffner, W. Heni, Y. Fedoryshyn, J. Niegemann, A. Melikyan, D. L. Elder, B. Baeuerle, Y. Salamin, A. Josten, U. Koch, C. Hoessbacher, F. Ducry, L. Juchli, A. Emboras, D. Hillerkuss, M. Kohl, L. R. Dalton, C. Hafner, and J. Leuthold, Nature Photonics 9, 525 (2015). * Liu _et al._ [2011] M. Liu, X. Yin, E. Ulin-Avila, B. Geng, T. Zentgraf, L. Ju, F. Wang, and X. Zhang, Nature 474, 64 (2011). * Gruhler _et al._ [2013] N. Gruhler, C. Benz, H. Jang, J.-H. Ahn, R. 
Danneau, and W. H. P. Pernice, Opt. Express 21, 31678 (2013). * Phare _et al._ [2015] C. T. Phare, Y.-H. Daniel Lee, J. Cardenas, and M. Lipson, Nature Photonics 9, 511 (2015). * Datta _et al._ [2020] I. Datta, S. H. Chae, G. R. Bhatt, M. A. Tadayon, B. Li, Y. Yu, C. Park, J. Park, L. Cao, D. N. Basov, J. Hone, and M. Lipson, Nature Photonics 14, 256 (2020). * Poberaj _et al._ [2012] G. Poberaj, H. Hu, W. Sohler, and P. Günter, Laser & Photonics Reviews 6, 488 (2012), https://onlinelibrary.wiley.com/doi/pdf/10.1002/lpor.201100035 . * Krasnokutska _et al._ [2018] I. Krasnokutska, J.-L. J. Tambasco, X. Li, and A. Peruzzo, Optics Express 26, 897 (2018). * Zhang _et al._ [2017] M. Zhang, C. Wang, R. Cheng, A. Shams-Ansari, and M. Lončar, Optica 4, 1536 (2017). * Luke _et al._ [2020] K. Luke, P. Kharel, C. Reimer, L. He, M. Loncar, and M. Zhang, Optics Express 28, 24452 (2020). * Hu _et al._ [2020a] Y. Hu, M. Yu, D. Zhu, N. Sinclair, A. Shams-Ansari, L. Shao, J. Holzgrafe, E. Puma, M. Zhang, and M. Loncar, Arxiv (2020a), arXiv:2005.09621 . * Shao _et al._ [2020] L. Shao, W. Mao, S. Maity, N. Sinclair, Y. Hu, L. Yang, and M. Lončar, Nature Electronics 3, 267 (2020). * Sohn _et al._ [2021] D. Sohn, O. E. Örsel, and G. Bahl, arXiv preprint arXiv:2104.04803 (2021). * Lambert _et al._ [2020] N. J. Lambert, A. Rueda, F. Sedlmeir, and H. G. L. Schwefel, Advanced Quantum Technologies 3, 1900077 (2020), arXiv:1906.10255 . * Javerzac-Galy _et al._ [2016] C. Javerzac-Galy, K. Plekhanov, N. R. Bernier, L. D. Toth, A. K. Feofanov, and T. J. Kippenberg, Physical Review A 94, 1 (2016), arXiv:1512.06442 . * Wang _et al._ [2019b] J. Wang, F. Sciarrino, A. Laing, and M. G. Thompson, Nature Photonics 10.1038/s41566-019-0532-1 (2019b). * Yuan _et al._ [2018] L. Yuan, Q. Lin, M. Xiao, and S. Fan, Optica 5, 1396 (2018), arXiv:1807.11468 . * Hu _et al._ [2020b] Y. Hu, C. Reimer, A. Shams-Ansari, M. Zhang, and M. Loncar, Optica 7, 1189 (2020b). * Yu _et al._ [2019] M. Yu, B. Desiatov, Y. Okawachi, A. L. Gaeta, and M. Lončar, Optics Letters 44, 1222 (2019). * Kanter _et al._ [2002] G. Kanter, P. Kumar, R. Roussev, J. Kurz, K. Parameswaran, and M. Fejer, Optics Express 10, 177 (2002). * Lu _et al._ [2021] J. Lu, A. Al Sayem, Z. Gong, J. B. Surya, C.-L. Zou, and H. X. Tang, Optica 8, 539 (2021). * He _et al._ [2019b] L. He, M. Zhang, A. Shams-Ansari, R. Zhu, C. Wang, and L. Marko, Optics Letters 44, 2314 (2019b), arXiv:1902.08969 . * Ying _et al._ [2021] P. Ying, H. Tan, J. Zhang, M. He, M. Xu, X. Liu, R. Ge, Y. Zhu, C. Liu, and X. Cai, Optics Letters 46, 1478 (2021). * Weigel _et al._ [2016] P. O. Weigel, F. Valdez, J. Zhao, H. Li, and S. Mookherjea, Scientific Reports 6, 012001 (2016). * Rao _et al._ [2016] A. Rao, A. Patil, P. Rabiei, A. Honardoost, R. DeSalvo, A. Paolella, and S. Fathpour, Opt. Lett. 41, 5700 (2016). * Jin _et al._ [2016] S. Jin, L. Xu, H. Zhang, and Y. Li, IEEE Photonics Technology Letters 28, 736 (2016). * Weigel _et al._ [2020] P. O. Weigel, F. Valdez, J. Zhao, H. Li, and S. Mookherjea, Journal of Physics: Photonics 3, 012001 (2020). * Boynton _et al._ [2020] N. Boynton, H. Cai, M. Gehl, S. Arterburn, C. Dallo, A. Pomerene, A. Starbuck, D. Hood, D. C. Trotter, T. Friedmann, C. T. DeRose, and A. Lentine, Optics Express 28, 1868 (2020). * Chang _et al._ [2017] L. Chang, M. H. P. Pfeiffer, N. Volet, M. Zervas, J. D. Peters, C. L. Manganelli, E. J. Stanton, Y. Li, T. J. Kippenberg, and J. E. Bowers, Opt. Lett. 42, 803 (2017). * Ahmed _et al._ [2019] A. N. R. Ahmed, S. Shi, M. Zablocki, P. Yao, and D. 
W. Prather, Opt. Lett. 44, 618 (2019). * Komljenovic _et al._ [2018] T. Komljenovic, D. Huang, P. Pintus, M. A. Tran, M. L. Davenport, and J. E. Bowers, Proceedings of the IEEE 106, 2246 (2018). * Plößl and Kräuter [1999] A. Plößl and G. Kräuter, Materials Science and Engineering: R: Reports 25, 1 (1999). * Pfeiffer _et al._ [2018] M. H. P. Pfeiffer, C. Herkommer, J. Liu, T. Morais, M. Zervas, M. Geiselmann, and T. J. Kippenberg, IEEE Journal of Selected Topics in Quantum Electronics 24, 1 (2018). * Liu _et al._ [2018] J. Liu, A. S. Raja, M. H. P. Pfeiffer, C. Herkommer, H. Guo, M. Zervas, M. Geiselmann, and T. J. Kippenberg, Opt. Lett. 43, 3200 (2018). * Marin-Palomo _et al._ [2017] P. Marin-Palomo, J. N. Kemal, M. Karpov, A. Kordts, J. Pfeifle, M. H. P. Pfeiffer, P. Trocha, S. Wolf, V. Brasch, M. H. Anderson, R. Rosenberger, K. Vijayan, W. Freude, T. J. Kippenberg, and C. Koos, Nature 546, 274 (2017). * Obrzud _et al._ [2019] E. Obrzud, M. Rainer, A. Harutyunyan, M. H. Anderson, J. Liu, M. Geiselmann, B. Chazelas, S. Kundermann, S. Lecomte, M. Cecconi, A. Ghedina, E. Molinari, F. Pepe, F. Wildi, F. Bouchy, T. J. Kippenberg, and T. Herr, Nature Photonics 13, 31 (2019). * Guo _et al._ [2018] H. Guo, C. Herkommer, A. Billat, D. Grassani, C. Zhang, M. H. P. Pfeiffer, W. Weng, C.-S. Brès, and T. J. Kippenberg, Nat. Photonics 12, 330 (2018). * Shen _et al._ [2020] B. Shen, L. Chang, J. Liu, H. Wang, Q.-F. Yang, C. Xiang, R. N. Wang, J. He, T. Liu, W. Xie, J. Guo, D. Kinghorn, L. Wu, Q.-X. Ji, T. J. Kippenberg, K. Vahala, and J. E. Bowers, Nature 582, 365 (2020). * Liu _et al._ [2016] J. Liu, V. Brasch, M. H. P. Pfeiffer, A. Kordts, A. N. Kamel, H. Guo, M. Geiselmann, and T. J. Kippenberg, Optics Letters 41, 3134 (2016), arXiv:1604.05149 . * Schwesyg _et al._ [2010] J. R. Schwesyg, C. R. Phillips, K. Ioakeimidi, M. C. C. Kajiyama, M. Falk, D. H. Jundt, K. Buse, and M. M. Fejer, Opt. Lett. 35, 1070 (2010). * Heinemeyer _et al._ [2006] U. Heinemeyer, M. C. Wengler, and K. Buse, Applied Physics Letters 89, 13 (2006). * Mercante _et al._ [2016] A. J. Mercante, P. Yao, S. Shi, G. Schneider, J. Murakowski, and D. W. Prather, Optics Express 24, 15590 (2016). * Hu _et al._ [2021] C. Hu, A. Pan, T. Li, X. Wang, Y. Liu, S. Tao, C. Zeng, and J. Xia, Opt. Express 29, 5397 (2021). * Wang _et al._ [2016] Y. Wang, S. Gao, K. Wang, and E. Skafidas, Opt. Lett. 41, 2053 (2016). * Wade _et al._ [2015] M. T. Wade, X. Zeng, and M. A. Popović, Optics Letters 40, 107 (2015). * Zhang _et al._ [2019b] M. Zhang, C. Wang, Y. Hu, A. Shams-Ansari, T. Ren, S. Fan, and M. Lončar, Nature Photonics 13, 36 (2019b), arXiv:1809.08638 . * Youssefi _et al._ [2021] A. Youssefi, I. Shomroni, Y. J. Joshi, N. R. Bernier, A. Lukashchuk, P. Uhrich, L. Qiu, and T. J. Kippenberg, Nature Electronics 4, 326 (2021), arXiv:2004.04705 . * Tikan _et al._ [2022] A. Tikan, A. Tusnin, J. Riemensberger, M. Churaev, X. Ji, K. N. Komagata, R. N. Wang, J. Liu, and T. J. Kippenberg, Science Advances 8, eabm6982 (2022).
# Bounds on new physics with data of the Dresden-II reactor experiment and COHERENT

Pilar Coloma (a), Ivan Esteban (b,c), M.C. Gonzalez-Garcia (d,e,f), Leire Larizgoitia (g), Francesc Monrabal (g,h), and Sergio Palomares-Ruiz (i)

(a) Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid, Calle de Nicolás Cabrera 13–15, Cantoblanco, E-28049 Madrid, Spain; (b) Center for Cosmology and AstroParticle Physics (CCAPP), Ohio State University, 191 W. Woodruff Ave., Columbus, Ohio 43210, U.S.A.; (c) Department of Physics, Ohio State University, 191 W. Woodruff Ave., Columbus, Ohio 43210, U.S.A.; (d) C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook NY 11794-3840, U.S.A.; (e) Institució Catalana de Recerca i Estudis Avançats (ICREA), Pg. Lluis Companys 23, E-08010 Barcelona, Spain; (f) Departament de Fisica Quantica i Astrofisica and Institut de Ciencies del Cosmos, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona, Spain; (g) Donostia International Physics Center (DIPC), Paseo Manuel Lardizabal, 4, Donostia-San Sebastián, E-20018, Spain; (h) Ikerbasque, Basque Foundation for Science, Plaza Euskadi 5, E-48013 Bilbao, Spain; (i) Instituto de Física Corpuscular (IFIC), CSIC & Universitat de València, Parc Científic, C/ Catedrático José Beltrán 2, E-46980 Paterna, Spain

###### Abstract

Coherent elastic neutrino-nucleus scattering was first experimentally established five years ago by the COHERENT experiment using neutrinos from the spallation neutron source at Oak Ridge National Laboratory. The first evidence of observation of coherent elastic neutrino-nucleus scattering with reactor antineutrinos has now been reported by the Dresden-II reactor experiment, using a germanium detector. In this paper, we present constraints on a variety of beyond the Standard Model scenarios using the new Dresden-II data. In particular, we explore the constraints imposed on neutrino non-standard interactions, neutrino magnetic moments, and several models with light scalar or light vector mediators. We also quantify the impact of their combination with COHERENT (CsI and Ar) data. In doing so, we highlight the synergies between spallation neutron source and nuclear reactor experiments regarding beyond the Standard Model searches, as well as the advantages of combining data obtained with different nuclear targets. We also study the possible signal from beyond the Standard Model scenarios due to elastic scattering off electrons (which would pass the selection cuts of the COHERENT CsI and the Dresden-II experiments) and find more stringent constraints in certain parts of the parameter space than those obtained considering coherent elastic neutrino-nucleus scattering.

###### Keywords: coherent neutrino-nucleus scattering, non-standard interactions, neutrino magnetic moment, light mediators

Preprint numbers: IFT-UAM/CSIC-22-10, YITP-SB-2022-03, IFIC/22-06

## 1 Introduction

Low-energy neutrinos can elastically scatter off atomic nuclei via weak neutral currents, with the initial and final states of the nucleus being indistinguishable. For low enough momentum transfers, the interaction takes place coherently with the whole nucleus, leading to an enhancement of the cross section approximately proportional to the square of its number of neutrons.
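As a rough numerical illustration of the coherence condition mentioned above (an order-of-magnitude estimate, not a result from the experiments analyzed below), one can compare the maximum momentum transfer $q\simeq 2E_{\nu}$ with the inverse nuclear size for a germanium target:

```python
import numpy as np

# Back-of-the-envelope check of the coherence condition q*R << 1 for CEvNS.
# Natural units (hbar = c = 1); hbar*c = 197.327 MeV*fm.  Illustrative only.
hbar_c = 197.327                      # MeV*fm
A_ge = 72.6                           # average mass number of natural Ge
R = 1.2 * A_ge ** (1.0 / 3.0)         # nuclear radius estimate, fm

for label, e_nu in [("reactor antineutrino", 8.0), ("SNS neutrino", 50.0)]:
    q_max = 2.0 * e_nu                # maximum momentum transfer, MeV
    print(f"{label}: E_nu = {e_nu:.0f} MeV, q_max*R = {q_max * R / hbar_c:.2f}")
```

For reactor energies $q\,R$ stays well below unity, so the scattering is essentially fully coherent, whereas at spallation-source energies the nuclear form factor already suppresses the rate appreciably.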
Although the so-called coherent elastic neutrino-nucleus scattering (CE$\nu$NS) was first theoretically described in Freedman’s seminal paper almost 50 years ago Freedman:1973yd , its detection has not been possible until very recently. This is so because the single observable for this process is a recoiling nucleus, which generates a signal in the sub-keV to few-keV energy range that is difficult to detect. An additional hindrance to CE$\nu$NS detection is the limited number of suitable neutrino sources, which must be sufficiently intense in yield and, at the same time, low enough in neutrino energy for the coherence condition to be satisfied. The detection of CE$\nu$NS was experimentally demonstrated by the COHERENT experiment COHERENT:2017ipa using the currently most intense spallation neutron source (SNS), sited at the Oak Ridge National Laboratory, U.S.A. The original observation was obtained with a CsI[Na] scintillation detector COHERENT:2017ipa ; COHERENT:2018imc , which was later followed by the observation of CE$\nu$NS at a liquid argon detector COHERENT:2020iec ; COHERENT:2020ybo . In addition to spallation sources, CE$\nu$NS is also searched for in a variety of experiments using electron antineutrinos emitted by nuclear reactors, such as TEXONO Wong:2004ru , $\nu$GeN Belov:2015ufh , CONNIE Aguilar-Arevalo:2016khx , MINER Agnolet:2016zir , Ricochet Billard:2016giu , $\nu$-cleus Strauss:2017cuu , RED-100 Akimov:2019ogx , NEON Choi:2020gkm , CONUS CONUS:2020skt and NCC-1701 at Dresden-II Colaresi:2021kus . At present, most of these reactor experiments have not yet succeeded in detecting CE$\nu$NS. The exception to this is the NCC-1701 experiment at the Dresden-II nuclear reactor. In their first published data Colaresi:2021kus , the experimental collaboration presented an event spectrum with an excess of events at low energies, compatible with expectations from a CE$\nu$NS signal in the Standard Model (SM). Recently, the collaboration has released new results with an increased exposure. The observation of CE$\nu$NS is reported with strong to very strong preference (with respect to the background-only hypothesis) in the Bayesian statistics sense, depending on the quenching factor considered Colaresi2022suggestive . A precision measurement of CE$\nu$NS provides a direct probe of both SM and beyond the Standard Model (BSM) physics. Paradigmatic examples of the former are the determination of the weak mixing angle at very low momentum transfer Canas:2018rng ; Cadeddu:2018izq ; Huang:2019ene ; Cadeddu:2019eta ; Cadeddu:2020lky ; Cadeddu:2021ijh and the study of nuclear structure Cadeddu:2017etk ; Ciuffoli:2018qem ; Papoulias:2019lfi ; Cadeddu:2021ijh ; Coloma:2020nhf . The program of BSM exploration with CE$\nu$NS is broad (see, e.g., refs.
Barranco:2005yy ; Formaggio:2011jt ; Anderson:2012pn ; Dutta:2015nlo ; Cerdeno:2016sfi ; Dent:2016wcr ; Coloma:2017egw ; Kosmas:2017zbh ; Ge:2017mcq ; Shoemaker:2017lzs ; Coloma:2017ncl ; Liao:2017uzy ; Canas:2017umu ; Dent:2017mpr ; Papoulias:2017qdn ; Farzan:2018gtr ; Billard:2018jnl ; Coloma:2019mbs ; Chaves:2021pey ; AristizabalSierra:2018eqm ; Brdar:2018qqj ; Cadeddu:2018dux ; Blanco:2019vyp ; Dutta:2019eml ; Miranda:2019wdy ; CONNIE:2019swq ; Dutta:2019nbn ; Papoulias:2019txv ; Khan:2019cvi ; Cadeddu:2019eta ; Giunti:2019xpr ; Baxter:2019mcx ; Canas:2019fjw ; Miranda:2020zji ; Flores:2020lji ; Miranda:2020tif ; Hurtado:2020vlj ; Miranda:2020syh ; Cadeddu:2020nbr ; Shoemaker:2021hvm ; delaVega:2021wpx ; Liao:2021yog ; CONUS:2021dwh ; Flores:2021kzl ; Li:2022jfl ; AristizabalSierra:2019zmy ; Abdullah:2020iiv ; Fernandez-Moroni:2021nap ; Bertuzzo:2021opb ; Bonet:2022imz for an incomplete list), being most sensitive to a variety of scenarios leading to modified neutrino interactions with nuclei – in particular at low momentum transfer – but extending also to the production of new light neutral states and sterile neutrino searches, among others. In this paper, we present the first analysis using the new data of the Dresden-II reactor experiment in the context of BSM searches. In doing so, we consider several new physics scenarios commonly studied in the literature: effective four-fermion interactions, neutrino magnetic moments, and light mediators. In order to place our results in a broader context, we also combine the Dresden-II data, obtained with a germanium detector, with the COHERENT data for neutrino scattering off CsI and Ar. We highlight and quantify the synergies between SNS and reactor experiments using CE$\nu$NS, as well as the advantages of the combination of data sets obtained with multiple nuclear targets. Furthermore, we also study a potential signal from BSM scenarios due to elastic electron scattering (ES). Although in the SM this contribution to the event rates is negligible, it could be significantly enhanced in the presence of new physics effects. The contribution from scattering off electrons could pass the selection cuts of both the COHERENT CsI and Dresden-II experiments and, as we discuss, its inclusion leads to stronger constraints in certain parts of the parameter space than those obtained using CE$\nu$NS. The paper is organized as follows. In section 2 we introduce the theoretical frameworks we consider. Section 3 discusses the computation of the expected event rates and the details regarding our fit to the experimental data, while the results of our analysis are presented in section 4. Finally, we summarize and conclude in section 5, and provide the binding energies for neutrino scattering off electrons in appendix A.

## 2 Phenomenological frameworks

| | $c_{V}$ | $c_{A}$ |
|---|---|---|
| $\nu_{e}$ | $\frac{1}{2}+2\,\sin^{2}\theta_{W}$ | $+\frac{1}{2}$ |
| $\nu_{\mu},\nu_{\tau}$ | $-\frac{1}{2}+2\,\sin^{2}\theta_{W}$ | $-\frac{1}{2}$ |

Table 1: Values of the SM effective couplings in neutrino-electron scattering. For antineutrinos the vector couplings are the same, while the axial ones change sign, $c_{A}\to-c_{A}$.

Let us start by introducing the differential cross sections for both CE$\nu$NS and ES in the SM. The SM differential coherent elastic scattering cross section, for a neutrino with energy $E_{\nu}$ scattering off a nucleus of mass $m_{A}$, is given by Freedman:1973yd ; Lindner:2016wff as
$\frac{\mathrm{d}\sigma^{\rm CE\nu NS}_{\rm SM}}{\mathrm{d}E_{R}}={\cal Q}_{W}^{2}\,\frac{G_{F}^{2}\,m_{A}}{2\,\pi}\,\left[2-\frac{m_{A}\,E_{R}}{E_{\nu}^{2}}-\frac{2\,E_{R}}{E_{\nu}}+\left(\frac{E_{R}}{E_{\nu}}\right)^{2}\right]\,\left|F(q)\right|^{2}~{},$ (1)

where $G_{F}$ is the Fermi constant, $E_{R}$ is the nuclear (kinetic) recoil energy, and the weak hypercharge of the nucleus in the SM is ${\cal Q}_{W}=\left(N\,g_{V,n}+Z\,g_{V,p}\right)$, where $N$ and $Z$ are its number of neutrons and protons, respectively. (Strictly speaking, eq. (1) applies to $J=1/2$ nuclei Lindner:2016wff ; for $J=0$ nuclei the last term is not present Freedman:1973yd , although its contribution is negligible for the energies of interest.) The weak charges of the neutron and the proton are $g_{V,n}=-1/2$ and $g_{V,p}=1/2-2\sin^{2}\theta_{W}$, with $\sin^{2}\theta_{W}=0.23868$ at zero momentum transfer Erler:2004in ; Erler:2017knj . For Ge we use $\{N,Z\}=\{40.71,\,32\}$ (weighted average of natural isotopic abundances), while for CsI we use $\{N,Z\}=\{76,\,54\}$ and for Ar, $\{N,Z\}=\{22,\,18\}$. In eq. (1) above, the nuclear form factor $F(q)$ is assumed to be equal for protons and neutrons. Some commonly used phenomenological parameterizations are the Klein-Nystrand Klein:1999qj or the Helm Helm:1956zz form factors, among others. Note that, at the energies of interest for the Dresden-II experiment, $F(q)\simeq 1$, so the choice of form factor is completely irrelevant. For COHERENT, we use two different form factors, depending on the target nucleus. For CsI, we use the theoretical calculation from ref. Klos:2013rwa (based on refs. Hoferichter:2018acd ; Hoferichter:2016nvd ), while for Ar we assume a Helm form factor with $s=0.9~{}\mathrm{fm}$, $\langle r_{n}^{2}\rangle=-0.1161~{}\mathrm{fm}^{2}$ and $\langle r_{p}^{2}\rangle=0.70706~{}\mathrm{fm}^{2}$ ParticleDataGroup:2020ssz , $R_{\mathrm{ch}}=3.4274~{}\mathrm{fm}$ Angeli:2013epw and $R_{n,\mathrm{pt}}=3.405~{}\mathrm{fm}$ Payne:2019wvy (following the same notation as in ref. Coloma:2020nhf ). The differential cross section for ES per atom within the SM can be expressed as (see, e.g., refs. Vogel:1989iv ; Tomalak:2019ibg )

$\frac{\mathrm{d}\sigma^{\rm ES}_{\rm SM}}{\mathrm{d}T_{e}}=Z_{\rm eff}^{\rm X}(T_{e})\,\frac{G_{F}^{2}\,m_{e}}{2\,\pi}\left[\left(c_{V}+c_{A}\right)^{2}+\left(c_{V}-c_{A}\right)^{2}\left(1-\frac{T_{e}}{E_{\nu}}\right)^{2}+\left(c_{A}^{2}-c_{V}^{2}\right)\frac{m_{e}\,T_{e}}{E_{\nu}^{2}}\right]~{},$ (2)

where $T_{e}$ stands for the electron (kinetic) recoil energy. The effective electron couplings $c_{V}$ and $c_{A}$ contain the contributions from the SM neutral current (NC), which are equal for all flavors, plus the charged-current (CC) contribution, which only affects $\bar{\nu}_{e}e^{-}$ and $\nu_{e}e^{-}$ scattering. For convenience, their values are given in table 1. We assume that scattering off electrons is incoherent, so the total ES cross section per atom is obtained by multiplying the ES cross section per electron by an effective charge, $Z_{\rm eff}^{\rm X}(T_{e})$, where $\textrm{X}\equiv{}^{A}_{Z}\text{X}$ indicates the target nucleus. This takes into account that, in this process, atomic binding effects have to be considered for recoil energies comparable to atomic binding energies. In doing so, we follow the procedure proposed in refs. Kopeikin:1997 ; Fayans:2000ns ; Mikaelyan:2002nv and assume $Z_{\rm eff}^{\rm X}(T_{e})$ to be a step function of the binding energies of the different atomic levels, as provided in appendix A.
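For orientation, eq. (1) can be transcribed directly into code. The sketch below evaluates the SM CE$\nu$NS cross section for natural germanium with $F(q)=1$ (a good approximation at Dresden-II energies, as noted above); it is an illustrative implementation, not the analysis code used for the fits in this paper.

```python
import numpy as np

# Minimal sketch of the SM CEvNS cross section of eq. (1) for natural Ge,
# in natural units (hbar = c = 1) with F(q) = 1.  Illustrative only.
G_F = 1.1663787e-11          # Fermi constant, MeV^-2
SIN2_THETA_W = 0.23868       # weak mixing angle at zero momentum transfer
HBARC_CM = 1.97327e-11       # hbar*c in MeV*cm, to convert MeV^-2 -> cm^2

N, Z = 40.71, 32.0           # natural germanium
M_A = 931.5 * (N + Z)        # nuclear mass, MeV (average nucleon mass approx.)

Q_W = N * (-0.5) + Z * (0.5 - 2.0 * SIN2_THETA_W)   # weak hypercharge

def dsigma_dEr_cevns(e_nu, e_r):
    """SM CEvNS differential cross section in cm^2/MeV, for F(q) = 1."""
    t_max = 2.0 * e_nu**2 / (2.0 * e_nu + M_A)       # kinematic endpoint
    if e_r > t_max:
        return 0.0
    kin = (2.0 - M_A * e_r / e_nu**2
           - 2.0 * e_r / e_nu + (e_r / e_nu) ** 2)
    return Q_W**2 * G_F**2 * M_A / (2.0 * np.pi) * kin * HBARC_CM**2

# Example: a 6 MeV reactor antineutrino and a 0.5 keV nuclear recoil
print(dsigma_dEr_cevns(6.0, 5e-4), "cm^2/MeV")
```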
Finally, both types of interactions (off nuclei and off electrons) are two-body elastic scattering processes, so the kinematics is the same. Hence, the minimum neutrino energy to produce a recoil with energy $T$ is $E_{\nu,{\rm min}}=\left(T+\sqrt{T^{2}+2\,m\,T}\right)/2$ and the maximum recoil energy for a given neutrino energy is $T_{\rm max}=2\,E_{\nu}^{2}/(2\,E_{\nu}+m)$, where $T\equiv E_{R}$ and $m\equiv m_{A}$ for nuclei, while $T\equiv T_{e}$ and $m\equiv m_{e}$ for electrons. Throughout this work we will consider several phenomenological scenarios leading to modifications of the interaction cross sections in eqs. (1) and (2), as explained in more detail below.

### 2.1 Non-standard neutrino interactions

The so-called non-standard neutrino interaction (NSI) framework consists of the addition of four-fermion effective operators to the SM Lagrangian at low energies. For example, the effective Lagrangian

$\mathcal{L}_{\text{NSI, NC}}=-\sum_{f,\alpha,\beta}2\sqrt{2}\,G_{F}\,\varepsilon_{\alpha\beta}^{f,P}\,\left(\bar{\nu}_{\alpha}\gamma_{\mu}P_{L}\nu_{\beta}\right)\,\left(\bar{f}\gamma^{\mu}Pf\right)~{},$ (3)

would lead to new NC interactions with the rest of the SM fermions. Here, $\alpha,\beta\equiv e,\mu,\tau$ while $f$ refers to SM fermions, and $P$ can be either a left-handed or a right-handed projection operator ($P_{L}$ or $P_{R}$, respectively). Such new interactions may induce lepton flavor-changing processes (if $\alpha\neq\beta$), or may lead to a modified interaction rate with respect to the SM result (if $\alpha=\beta$). In the presence of NC NSI, the effective charge of the nucleus in eq. (1) gets modified, $\mathcal{Q}_{W}^{2}\to\mathcal{Q}^{2}_{\alpha}(\boldsymbol{\varepsilon})$. For real off-diagonal NSI parameters, it reads Barranco:2005yy

$\mathcal{Q}^{2}_{\alpha}(\boldsymbol{\varepsilon})=\left[Z\big{(}g_{V,p}+2\,\varepsilon_{\alpha\alpha}^{u}+\varepsilon_{\alpha\alpha}^{d}\big{)}+N\big{(}g_{V,n}+\varepsilon_{\alpha\alpha}^{u}+2\,\varepsilon_{\alpha\alpha}^{d}\big{)}\right]^{2}+\sum_{\beta\neq\alpha}\left[Z\big{(}2\,\varepsilon_{\alpha\beta}^{u}+\varepsilon_{\alpha\beta}^{d}\big{)}+N\big{(}\varepsilon_{\alpha\beta}^{u}+2\,\varepsilon_{\alpha\beta}^{d}\big{)}\right]^{2}~{},$ (4)

where, in order to simplify notation, we have renamed $\varepsilon_{\alpha\beta}\equiv\varepsilon_{\alpha\beta}^{q,V}=\varepsilon_{\alpha\beta}^{q,L}+\varepsilon_{\alpha\beta}^{q,R}$. The weak charge may be rewritten in a more compact form as

$\mathcal{Q}^{2}_{\alpha}(\boldsymbol{\varepsilon})=\left[\mathcal{Q}_{W}+\varepsilon_{\alpha\alpha}^{\text{X}}\right]^{2}+\sum_{\beta\neq\alpha}\big{(}\varepsilon_{\alpha\beta}^{\text{X}}\big{)}^{2}~{},$ (5)

where we have defined

$\varepsilon_{\alpha\beta}^{\text{X}}\equiv N\,\varepsilon_{\alpha\beta}^{n}+Z\,\varepsilon_{\alpha\beta}^{p}\,,\qquad\varepsilon_{\alpha\beta}^{n}\equiv\varepsilon_{\alpha\beta}^{u}+2\,\varepsilon_{\alpha\beta}^{d}~{},\qquad\varepsilon_{\alpha\beta}^{p}\equiv 2\,\varepsilon_{\alpha\beta}^{u}+\varepsilon_{\alpha\beta}^{d}~{}.$ (6)

The first consequence we observe of the inclusion of NSI effects is that the weak charge may now depend on the incident neutrino flavor $\alpha$. As will be discussed below, it is relevant to note that the COHERENT experiment observes interactions of both electron and muon neutrinos.
However, to a first approximation, it measures the linear combination $\mathcal{Q}^{2}_{e}+2\,\mathcal{Q}^{2}_{\mu}$ and hence both charges are degenerate: a reduction in the value of $\mathcal{Q}^{2}_{e}$ can be compensated by an increase in $\mathcal{Q}^{2}_{\mu}$ and vice versa. At COHERENT, though, the addition of timing information partially breaks this degeneracy for CsI, as discussed in detail in ref. Coloma:2019mbs . Including reactor data, which is only sensitive to interactions with electron antineutrinos, brings in additional complementary information in this respect Dent:2017mpr . It provides an additional constraint on $\mathcal{Q}^{2}_{e}$, which is independent of $\mathcal{Q}^{2}_{\mu}$. The second relevant feature we observe from eqs. (4)-(6) is that the impact of NSI on the weak charge depends on the values of $N$ and $Z$ in a non-trivial manner. Because of this, the combination of data obtained for different nuclei offers an additional handle to reduce the size of the allowed confidence regions of this scenario (for earlier discussions see, e.g., refs. Scholberg:2005qs ; Barranco:2005yy ; Coloma:2017egw ; Baxter:2019mcx ).

### 2.2 Neutrino magnetic moment

In the presence of a neutrino magnetic moment, $\mu_{\nu}$, the cross sections for neutrino scattering off nuclei and electrons get additional contributions which do not interfere with the SM ones. The scattering off protons can be considered coherent and therefore its cross section is given, up to order ${\cal O}((E_{R}/E_{\nu})^{2},E_{R}/m_{A})$, by Vogel:1989iv

$\frac{\mathrm{d}\sigma^{\rm CE\nu NS}_{\rm\mu_{\nu}}}{\mathrm{d}E_{R}}=Z^{2}\,\left(\frac{\mu_{\nu}}{\mu_{B}}\right)^{2}\,\frac{\alpha^{2}\,\pi}{m_{e}^{2}}\left[\frac{1}{E_{R}}-\frac{1}{E_{\nu}}\right]\;|F(q)|^{2}~{},$ (7)

where the form factor $F(q)$ is assumed to be the same as in the SM, which is a reasonable approximation at the transfer momenta of interest Hoferichter:2020osn . Conversely, the scattering off electrons is incoherent and therefore the cross section in this case, up to order ${\cal O}((T_{e}/E_{\nu})^{2})$, is Vogel:1989iv

$\frac{\mathrm{d}\sigma^{\rm ES}_{\rm\mu_{\nu}}}{\mathrm{d}T_{e}}=Z_{\rm eff}^{\rm X}(T_{e})\,\left(\frac{\mu_{\nu}}{\mu_{B}}\right)^{2}\,\frac{\alpha^{2}\,\pi}{m_{e}^{2}}\left[\frac{1}{T_{e}}-\frac{1}{E_{\nu}}\right]~{}.$ (8)

Notice that, in writing eqs. (7) and (8), we have denoted the neutrino magnetic moment as $\mu_{\nu}$, without specifying the flavor of the neutrino. However, neutrino magnetic moments arise in a variety of models of new physics and, in particular, they do not need to be flavor-universal. Therefore, in what follows, we will allow different magnetic moments for the different neutrino flavors, reporting their bounds separately.

### 2.3 Light scalar mediators

The Lagrangian of the simplified model of interaction of a scalar $\phi$ with the relevant fermions we consider is

${\cal L}_{\phi}=g_{\phi}\,\phi\,\left(\sum_{q}q_{\phi}^{q}\,\bar{q}q+q_{\phi}^{e}\,\bar{e}e+q_{\phi}^{\nu}\,\bar{\nu}_{R}\nu_{L}+\text{h.c.}\right)-\frac{1}{2}\,M^{2}_{\phi}\,\phi^{2}~{},$ (9)

where $q_{\phi}^{i}$ are the charges of each fermion ($i=\{\nu,\,e,\,u,\,d,\,s,\,c,\,b,\,t\}$) under the new interaction.
This new interaction does not interfere with the SM one Rodejohann:2017vup ; Farzan:2018gtr , and the corresponding neutrino-nucleus elastic scattering cross section, up to order ${\cal O}((E_{R}/E_{\nu})^{2})$, reads Cerdeno:2016sfi ; Farzan:2018gtr

$\frac{\mathrm{d}\sigma^{\rm CE\nu NS}_{\phi}}{\mathrm{d}E_{R}}=\frac{g_{\phi}^{4}\,(q_{\phi}^{\nu})^{2}\,\mathcal{Q}_{\phi}^{2}}{4\,\pi}\,\frac{m_{A}^{2}\,E_{R}}{E_{\nu}^{2}\,(2\,m_{A}\,E_{R}+M_{\phi}^{2})^{2}}\,|F(q)|^{2}~{},$ (10)

where $F(q)$ is the form factor (which, with enough precision at the transfer momenta of interest, can be assumed to be the same as in the SM Hoferichter:2020osn ), and $\mathcal{Q}_{\phi}$ is the nuclear charge for the scalar interaction Shifman:1978zn ; DelNobile:2013sia ; Ellis:2018dmb ,

$\mathcal{Q}_{\phi}=\sum_{N,q}q^{q}_{\phi}\,\frac{m_{N}}{m_{q}}\,f^{(N)}_{T,q}+\frac{2}{27}\,\left(1-\sum_{N,q}f^{(N)}_{T,q}\right)\sum_{N,\tilde{q}}q_{\phi}^{\tilde{q}}\,\frac{m_{N}}{m_{\tilde{q}}}~{}.$ (11)

Here $N=n,p$ stands for the nucleons, the superscript $q=u,d,s$ runs over the light valence and sea quarks, $\tilde{q}=c,b,t$ runs over the heavy quarks, and the coefficients $f_{T,q}^{(N)}$ incorporate the effective low-energy couplings of the scalar to the nucleons Shifman:1978zn . For universal coupling of the scalar to quarks, $q_{\phi}^{u}=q_{\phi}^{d}=q_{\phi}^{s}=q_{\phi}^{c}=q_{\phi}^{b}=q_{\phi}^{t}\equiv q_{\phi}^{q}$, the nuclear scalar charge takes the value $\mathcal{Q}_{\phi}=q_{\phi}^{q}\,(14\,N+15.1\,Z)$ DelNobile:2013sia . (This is an approximate expression, which actually does not coincide with any of the sets of values in ref. DelNobile:2013sia ; for updated values of the coefficients, see ref. Ellis:2018dmb . We use it for the sake of comparison with the results from refs. Cerdeno:2016sfi ; CONUS:2021dwh . Current uncertainties on the low-energy coefficients and masses lead to variations of 20%-30% in eq. (11); we have numerically checked that this leads to a $\lesssim 10\%$ effect on our constraints.) Scalar exchange also gives a contribution to the cross section for neutrino-electron elastic scattering as Cerdeno:2016sfi

$\frac{\mathrm{d}\sigma^{\rm ES}_{\phi}}{\mathrm{d}T_{e}}=Z_{\rm eff}^{\rm X}(T_{e})\,\frac{g_{\phi}^{4}\,(q^{\nu}_{\phi})^{2}\,(q^{e}_{\phi})^{2}}{4\,\pi}\,\frac{m_{e}^{2}\,T_{e}}{E_{\nu}^{2}\,(2\,m_{e}\,T_{e}+M_{\phi}^{2})^{2}}~{}.$ (12)

In order to quantify the contribution of the scattering off electrons, in section 4 we study the constraints on two specific models: a first one in which the scalar couples universally to all relevant fermions (hereafter referred to as _universal_), and another one in which it only couples to leptons (dubbed the _leptonic scalar_ model). For convenience, table 2 summarizes the explicit values of the charges considered for the different SM fermions.

| Model | $q^{q}$ | $q^{e}$ | $q^{\nu_{e}}$ | $q^{\nu_{\mu}}$ |
|---|---|---|---|---|
| Universal scalar or vector | 1 | 1 | 1 | 1 |
| Leptonic ($\ell$) scalar | 0 | 1 | 1 | 1 |
| $L_{e}$ vector | 0 | 1 | 1 | 0 |
| $B-L$ vector | $\frac{1}{3}$ | $-1$ | $-1$ | $-1$ |

Table 2: Charges for the scalar and vector mediator models considered in this work.
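To make the role of the charges in table 2 concrete, the sketch below evaluates the scalar-mediator contribution of eq. (10) in the universal-coupling limit, where the nuclear scalar charge reduces to $\mathcal{Q}_{\phi}=q_{\phi}^{q}(14N+15.1Z)$. The coupling and mediator mass used in the example are arbitrary illustrative values; this is not the code behind the limits of section 4.

```python
import numpy as np

# Illustrative implementation of the scalar-mediator CEvNS contribution,
# eq. (10), in the universal-coupling limit with F(q) = 1.  Natural units
# (MeV); the example parameters below are arbitrary placeholders.
HBARC_CM = 1.97327e-11            # hbar*c in MeV*cm

def dsigma_dEr_scalar(E_nu, E_r, g_phi, M_phi, N, Z, q_quark=1.0, q_nu=1.0):
    """Scalar-mediator contribution to dsigma/dE_R in cm^2/MeV."""
    m_a = 931.5 * (N + Z)                        # nuclear mass, MeV
    Q_phi = q_quark * (14.0 * N + 15.1 * Z)      # universal-coupling charge
    prop = 2.0 * m_a * E_r + M_phi**2            # scalar propagator, MeV^2
    dsig = (g_phi**4 * q_nu**2 * Q_phi**2 / (4.0 * np.pi)
            * m_a**2 * E_r / (E_nu**2 * prop**2))
    return dsig * HBARC_CM**2

# Example: universal scalar with g_phi = 1e-5 and M_phi = 10 MeV on Ge,
# for a 6 MeV antineutrino and a 0.5 keV recoil (illustrative numbers).
print(dsigma_dEr_scalar(6.0, 5e-4, g_phi=1e-5, M_phi=10.0, N=40.71, Z=32.0))
```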
### 2.4 Light vector mediators

The Lagrangian of a simplified model of interaction of a neutral vector $Z^{\prime}$ with the fermions of the first generation is given by

${\cal L}_{Z^{\prime}}=g_{Z^{\prime}}\;Z^{\prime}_{\mu}\left(q_{Z^{\prime}}^{u}\,\bar{u}\gamma^{\mu}u+q_{Z^{\prime}}^{d}\,\bar{d}\gamma^{\mu}d+q_{Z^{\prime}}^{e}\,\bar{e}\gamma^{\mu}e+q_{Z^{\prime}}^{\nu}\,\bar{\nu}_{L}\gamma^{\mu}\nu_{L}\right)+\frac{1}{2}M^{2}_{Z^{\prime}}{Z^{\prime}}^{\mu}Z^{\prime}_{\mu}~{},$ (13)

where $q_{Z^{\prime}}^{i}$ indicates the charges of each fermion ($i=\{\nu,\,e,\,u,\,d\}$) under the new interaction. Unlike the magnetic-moment and scalar interactions, a neutral vector interaction interferes with the SM. The additional contribution to the neutrino-nucleus scattering cross section reads Cerdeno:2016sfi

$\Delta\frac{\mathrm{d}\sigma^{\rm CE\nu NS}_{Z^{\prime}}}{\mathrm{d}E_{R}}=\frac{g^{2}_{Z^{\prime}}\,m_{A}}{2\,\pi}\left[\frac{g^{2}_{Z^{\prime}}\,(q_{Z^{\prime}}^{\nu})^{2}\,\mathcal{Q}^{2}_{Z^{\prime}}}{\left(2\,m_{A}\,E_{R}+M_{Z^{\prime}}^{2}\right)^{2}}-\frac{2\,\sqrt{2}\,G_{F}\,q_{Z^{\prime}}^{\nu}\,\mathcal{Q}_{Z^{\prime}}\,\mathcal{Q}_{W}}{\left(2\,m_{A}\,E_{R}+M_{Z^{\prime}}^{2}\right)}\right]\left(1-\frac{m_{A}\,E_{R}}{2\,E^{2}_{\nu}}\right)|F(q)|^{2}~{},$ (14)

where $\mathcal{Q}_{Z^{\prime}}$ is the weak charge of the nucleus for the light vector interaction. Vector current conservation implies that only valence quarks contribute, by simply summing up their charges, so for universal couplings ($q_{Z^{\prime}}^{u}=q_{Z^{\prime}}^{d}\equiv q_{Z^{\prime}}^{q}$), $\mathcal{Q}_{Z^{\prime}}=3\,q_{Z^{\prime}}^{q}\,(Z+N)$ (see, e.g., ref. DelNobile:2013sia ). As in the case of scalar mediators, we use the same nuclear form factor as in the SM. The corresponding contribution to the cross section for ES reads Cerdeno:2016sfi

$\Delta\frac{\mathrm{d}\sigma^{\rm ES}_{Z^{\prime}}}{\mathrm{d}T_{e}}=Z_{\rm eff}^{\rm X}(T_{e})\,\frac{g^{2}_{Z^{\prime}}\,m_{e}}{2\,\pi}\left[\frac{{g^{2}_{Z^{\prime}}}\,(q_{Z^{\prime}}^{\nu})^{2}\,(q^{e}_{Z^{\prime}})^{2}}{\left(2\,m_{e}\,T_{e}+M_{Z^{\prime}}^{2}\right)^{2}}\,+\,\frac{2\,\sqrt{2}\,G_{F}\,q_{Z^{\prime}}^{\nu}\,q_{Z^{\prime}}^{e}\,c_{V}}{\left(2\,m_{e}\,T_{e}+M_{Z^{\prime}}^{2}\right)}\right]~{},$ (15)

where $c_{V}$ is the SM effective vector coupling introduced in eq. (2), which depends on the flavor of the incident neutrino as given in table 1. As in the case of light scalar mediators, in section 4 we study the constraints on three specific models: one in which the vector couples universally to all relevant fermions, another in which it only couples to the electron flavor ($L_{e}$), and the anomaly-free flavor-universal model with coupling to $B-L$ (see table 2).

## 3 Data analysis

In this section we describe the procedure we follow to perform the data analysis of the Dresden-II and COHERENT data.

### 3.1 The Dresden-II reactor experiment

We use a sample corresponding to a 96.4-day exposure of a 2.924 kg ultra-low-noise p-type point contact (PPC) germanium detector (NCC-1701) to the high flux of electron antineutrinos from the Dresden-II boiling water reactor (BWR). The new data set spans the period between January 22 and May 8, 2021, when the reactor was operated at its full nominal power of 2.96 GW. In the data release Colaresi2022suggestive , which we follow closely, the data points and errors are provided in the form of rate per day. The errors (per day) represent a combination of statistical and signal acceptance uncertainties.
With the new information obtained from the reactor operator, the improved center-to-center effective distance between the PPC crystal and the center point of the BWR core is 10.39 m. The estimate of the $\bar{\nu}_{e}$ flux from the reactor is $4.8\times 10^{13}~{}\bar{\nu}_{e}/\textrm{cm}^{2}\textrm{s}$ with a $\sim 2\%$ uncertainty Huber:2011wv ; Mueller:2011nm . We describe the reactor $\bar{\nu}_{e}$ spectrum using the typical average values of fission fractions of the four main isotopes (making up more than 99% of all reactor neutrinos), for commercial power reactor using low-enriched uranium: ${}^{235}\rm{U}$ (58%), ${}^{239}\rm{Pu}$ (29%), ${}^{238}\rm{U}$ (8%) and ${}^{241}\rm{Pu}$ (5%) Qian:2018wid . We use the combination of the tabulated spectra for $E_{\bar{\nu}_{e}}<2$ MeV from ref. Vogel:1989iv and for $2~{}\textrm{MeV}\leq E_{\bar{\nu}_{e}}\leq 8$ MeV from ref. Mueller:2011nm . We set the flux to zero beyond the maximum energy that is tabulated. The background model consists of four components Colaresi2022suggestive , although only two of them really contribute to the SM signal region. The other two allow constraining these components. The dominant source in the signal region is the elastic scattering of epithermal neutrons, which is modeled by a free exponential plus a free constant term, $R_{\rm epith}+A_{\rm epith}\,e^{-E_{\rm rec}/E_{\rm epith}}$, where $E_{\rm rec}$ is the reconstructed ionization energy. The other three components of the background consist of electron capture (EC) peaks in 71Ge, which are all described as Gaussian probability density functions (PDF) with three free parameters: the amplitude $A_{\rm shell}$, the centroid $E_{\rm shell}$, and the standard deviation $\sigma_{\rm shell}$. The $M$-shell EC peak is constrained by the $L_{1}$-shell EC peak, with parameters $\\{A_{L_{1}},\,E_{L_{1}},\,\sigma_{L_{1}}\\}$. Although the $L_{1}$-shell peak is at a nominal energy of 1.297 keV, it is also allowed to vary freely. The ratio of the amplitudes of the $L_{1}$-shell and $M$-shell has been experimentally determined to be $0.16\pm 0.03$ SuperCDMS:2015eex , so following the data release Colaresi2022suggestive , we model this ratio with a Gaussian prior of width $\sigma_{M/L_{1}}=0.03$ and centered at $A_{M}/A_{L_{1}}=0.16$, which we add to the likelihood. The centroid of the $M$-shell EC contribution is fixed at $E_{M}=0.158$ keV SuperCDMS:2015eex ; Firestone1996 and the standard deviation is set to be equal to the electronic noise $\sigma_{M}=\sigma_{n}$, which is 68.5 eV during the 96.4 days of reactor operation (Rx-ON) and 65.25 eV during the 25 days of reactor refueling outage (Rx-OFF). Finally, the last contribution comes from the $L_{2}$-shell EC, with the amplitude fixed by $A_{L_{2}}/A_{L_{1}}=0.008$, the centroid at $E_{L_{2}}=1.142$ keV and the standard deviation $\sigma_{L_{2}}=\sigma_{L_{1}}$. Explicitly, the background model, in terms of the reconstructed ionization energy, $E_{\rm rec}$, is described by a differential rate $\frac{\mathrm{d}R_{\rm bkg}(\boldsymbol{\beta})}{\mathrm{d}E_{\rm rec}}=R_{\rm epith}+A_{\rm epith}\,e^{-E_{\rm rec}/E_{\rm epith}}+\sum_{i=L_{1},L_{2},M}\frac{A_{i}}{\sqrt{2\,\pi}\,\sigma_{i}}\,e^{-\frac{\left(E_{\rm rec}-E_{i}\right)^{2}}{2\,\sigma_{i}^{2}}}~{},$ (16) in the reconstructed energy region of interest (ROI) $E_{\rm rec}=[0.2,1.5]~{}{\rm keV}$. 
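As a minimal sketch of how eq. (16) is evaluated, the function below returns the background rate as a function of the reconstructed energy, with the $L_{2}$- and $M$-shell components tied to the $L_{1}$ peak as described above. The specific parameter values passed in the example are placeholders; these parameters are fitted in the actual analysis.

```python
import numpy as np

def background_rate(E_rec, R_epith, A_epith, E_epith,
                    A_L1, E_L1, sigma_L1, beta_M_L1, sigma_n=0.0685):
    """Differential background rate of eq. (16) (energies in keV).

    The seven free parameters are the components of beta; A_L2 = 0.008*A_L1,
    E_L2 = 1.142 keV, sigma_L2 = sigma_L1, E_M = 0.158 keV and sigma_M = sigma_n
    (68.5 eV for Rx-ON) are fixed as described in the text.
    """
    def gauss(E, A, mu, s):
        return A / (np.sqrt(2.0 * np.pi) * s) * np.exp(-(E - mu)**2 / (2.0 * s**2))

    rate = R_epith + A_epith * np.exp(-E_rec / E_epith)        # epithermal neutrons
    rate += gauss(E_rec, A_L1, E_L1, sigma_L1)                  # L1-shell EC peak
    rate += gauss(E_rec, 0.008 * A_L1, 1.142, sigma_L1)         # L2-shell EC peak
    rate += gauss(E_rec, beta_M_L1 * A_L1, 0.158, sigma_n)      # M-shell EC peak
    return rate

E = np.linspace(0.2, 1.5, 131)   # energy grid across the ROI
rate = background_rate(E, R_epith=1.0, A_epith=5.0, E_epith=0.3,
                       A_L1=50.0, E_L1=1.297, sigma_L1=0.07, beta_M_L1=0.16)
```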
The number of free background parameters to be fitted in every analysis is seven, which are represented by the vector $\boldsymbol{\beta}=\left\\{R_{\rm epith},A_{\rm epith},E_{\rm epith},A_{L_{1}},E_{L_{1}},\sigma_{L_{1}},\beta_{M/L_{1}}\right\\}$. The parameter $\beta_{M/L_{1}}$, defined as $A_{M}=\beta_{M/L_{1}}\,A_{L_{1}}$, is added to the likelihood with a Gaussian prior, as described below. The CE$\nu$NS signal rate from the reactor $\bar{\nu}_{e}$ flux is given by $\frac{\mathrm{d}R_{\rm sig}^{\textrm{CE}\nu\textrm{NS}}}{\mathrm{d}E_{\rm rec}}=N_{\rm T}\,\int_{E_{\nu,{\rm min}}}^{\infty}\mathrm{d}E_{\bar{\nu}_{e}}\int_{E_{R,\rm min}}^{E_{R,\rm max}}\mathrm{d}E_{R}\,\frac{\mathrm{d}\Phi_{\bar{\nu}_{e}}}{\mathrm{d}E_{\bar{\nu}_{e}}}\,\frac{\mathrm{d}\sigma^{\rm CE\nu NS}}{\mathrm{d}E_{R}}\,{\cal R}(E_{\rm rec},E_{I}=Q\,E_{R};\sigma_{I})~{},$ (17) where $N_{T}=2.43\times 10^{25}$ is the number of germanium nuclei and $\mathrm{d}\Phi_{\bar{\nu}_{e}}/\mathrm{d}E_{\bar{\nu}_{e}}$ is the reactor electron antineutrino spectrum. Here $E_{R,\rm min}$ is the minimum nuclear recoil energy, which corresponds to the minimum average ionization energy, $E_{I,\rm min}\simeq 2.98$ eV, required to produce an electron-hole pair in germanium (at 77 K) Antman1966272 (see ref. Wei:2016xbw for a compilation of existing data at other temperatures). The ionization energy is defined as $E_{I}=Q(E_{R})\,E_{R}$, with $Q(E_{R})$ being the quenching factor, which describes the reduction in the ionization yield of a nuclear recoil when compared to an electron recoil of the same energy. We consider two models from the data release, which are not in tension with CONUS data CONUS:2020skt , and are denoted by ‘Fef’ (using iron-filtered monochromatic neutrons) and ‘YBe’ (based on photoneutron source measurements) Collar:2021fcl . The spread between these two models approximately represents the uncertainty on this parameter. Note that the cross sections in eqs. (7) and (8) diverge as the recoil energy goes to zero. Consequently, the contribution arising from the scattering due to the neutrino magnetic moment (and similarly for models with very light mediators) is larger in the lower energy bins, a contribution that can be divergently large under the assumption that asymptotically low ionization energies could trigger the detector. This unphysical behavior is cut off by the physical requirement of the average energy required to produce an electron-hole pair. Thus, in what follows, we impose $E_{I,\rm min}=3$ eV when evaluating the expected rate of events. The energy resolution function, ${\cal R}(E_{\rm rec},E_{I};\sigma_{I})$, is described as a truncated Gaussian ($E_{\rm rec}>0$), ${\cal R}(E_{\rm rec},E_{I};\sigma_{I})=\left(\frac{2}{1+\textrm{Erf}\left(\frac{E_{I}}{\sqrt{2}\,\sigma_{I}}\right)}\right)\,\frac{1}{\sqrt{2\,\pi}\,\sigma_{I}}\,e^{-\frac{\left(E_{\rm rec}-E_{I}\right)^{2}}{2\,\sigma_{I}^{2}}}~{},$ (18) with $\sigma_{I}^{2}=\sigma_{n}^{2}+E_{I}\,\eta\,F$, where the electronic noise is $\sigma_{n}=68.5~{}{\rm eV}$ (Rx-ON) and 65.25 eV (Rx-OFF), $\eta$ is the average energy of electron-hole formation and $F$ is the Fano factor. For Ge, $\eta=2.96$ eV and $F=0.11$ Colaresi2022suggestive . The prefactor with the error function is included to guarantee that the resolution function is normalized to 1 for $E_{I}>0$. Note that, although the reconstructed energy ROI is $E_{\rm rec}=[0.2,1.5]~{}{\rm keV}$, events with $E_{I}<0.2~{}{\rm keV}$ are assumed to trigger the detector and be susceptible to filtering into the ROI.
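A short numerical sketch of the truncated-Gaussian resolution function of eq. (18), using the Ge values quoted above ($\eta=2.96$ eV, $F=0.11$, $\sigma_{n}=68.5$ eV for Rx-ON); the normalization check at the end is only illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

ETA, FANO, SIGMA_N = 2.96e-3, 0.11, 68.5e-3   # keV: eta = 2.96 eV, sigma_n = 68.5 eV (Rx-ON)

def resolution(E_rec, E_I):
    """Truncated Gaussian of eq. (18): density of E_rec > 0 for true ionization energy E_I."""
    sigma = np.sqrt(SIGMA_N**2 + E_I * ETA * FANO)
    norm = 2.0 / (1.0 + erf(E_I / (np.sqrt(2.0) * sigma)))
    return norm * np.exp(-(E_rec - E_I)**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Check that the density integrates to 1 over E_rec > 0, e.g. for a deposit of
# E_I = 0.05 keV, below the ROI edge but still able to leak into [0.2, 1.5] keV.
print(quad(lambda E: resolution(E, 0.05), 0.0, 2.0)[0])   # ~ 1.0
```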
Moreover, it is important to point out that, as provided, all data points are already corrected for signal acceptance, so this must not be included in the calculation of the expected signal rate. Ideally, raw data and signal acceptance as a function of the true ionization energy, $E_{I}$, must be used. In this way, the signal acceptance must be included in eq. (17). Nevertheless, the approach used here, following the data release, has a negligible impact on signals that grow slowly at small ionization energies, as the SM or NSI cases. Yet, it could have a non-negligible effect on the event rate in models with a large neutrino magnetic moment or in the presence of light mediators (in the case of interactions with nucleons). Furthermore, the impact of using signal acceptance-corrected data also depends on the minimum ionization energy capable of triggering the detector. Note that the minimum ionization energy at which the signal acceptance is currently measured is 0.13 keV Colaresi2022suggestive . With all these ingredients, the expected event rate is given by $\frac{\mathrm{d}R\left(\boldsymbol{\theta};\boldsymbol{\beta}\right)}{\mathrm{d}E_{\rm rec}}=\frac{\mathrm{d}R_{\rm bkg}\left(\boldsymbol{\beta}\right)}{\mathrm{d}E_{\rm rec}}+\frac{\mathrm{d}R_{\rm sig}^{\textrm{CE}\nu\textrm{NS}}\left(\boldsymbol{\theta}\right)}{\mathrm{d}E_{\rm rec}}+\frac{\mathrm{d}R_{\rm sig}^{\rm ES}\left(\boldsymbol{\theta}\right)}{\mathrm{d}E_{\rm rec}}~{},$ (19) where our notation explicitly indicates that the event rates include both the signal contribution from CE$\nu$NS and from ES. In general, when considering different new physics scenarios, the signal rate depends on a set of (one or two in this work) parameters $\boldsymbol{\theta}$. We use the following $\chi^{2}$, $\chi_{\rm D-II}^{2}\left(\boldsymbol{\theta};\boldsymbol{\beta}\right)=\sum_{i=1}^{130}\frac{\left(P_{i}\left(\boldsymbol{\theta};\boldsymbol{\beta}\right)-D_{i}\right)^{2}}{\sigma_{i}^{2}}+\frac{\left(\beta_{M/L_{1}}-A_{M}/A_{L_{1}}\right)^{2}}{\sigma_{M/L_{1}}^{2}}~{},$ (20) where $P_{i}$ refers to the prediction and $D_{i}$ to the measured rates (or number of events in the exposure time) in energy bin $i$ (of width 10 eV and with the center of the bin indicated in the data release), $\sigma_{i}$ is the standard deviation of the rate (or of the measured number of events in the exposure time) in bin $i$, which combines statistical and signal acceptance uncertainties, and $\beta_{M/L_{1}}$ is the ratio between the amplitudes of the $M$-shell and $L_{1}$-shell EC contributions to the background, with central value $A_{M}/A_{L_{1}}=0.16$ and $\sigma_{M/L_{1}}=0.03$. We do not include additional nuisance parameters. Nevertheless, we have checked that even adding a 10% uncertainty on the normalization of the signal, the results are only affected at the percent level. Finally, in order to study the sensitivity to the new physics parameters, we profile the above likelihood, $\left(\chi^{2}_{\rm D-II}\right)_{\rm p}\left(\boldsymbol{\theta}\right)=\chi_{\rm D-II}^{2}\left(\boldsymbol{\theta};\widehat{\widehat{\boldsymbol{\beta}\,}}(\boldsymbol{\theta})\right)\equiv\textrm{min}_{\boldsymbol{\beta}}\left\\{\chi_{\rm D-II}^{2}\left(\boldsymbol{\theta};\boldsymbol{\beta}\right)\right\\}~{},$ (21) where the profiled values of $\boldsymbol{\beta}$ that minimize $\chi^{2}$ for each $\boldsymbol{\theta}$ are indicated by a double hat. 
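Schematically, the profiling of eq. (21) amounts to re-fitting the background parameters at every point of the new-physics parameter space. The sketch below assumes a hypothetical `prediction(theta, beta)` function returning the total rate of eq. (19) in each bin; it only illustrates the structure of eqs. (20)-(21), not our actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_dresden(theta, beta, data, sigma, prediction):
    """Eq. (20): bin-by-bin Gaussian chi^2 plus the prior on the M/L1 amplitude ratio."""
    pred = prediction(theta, beta)                 # hypothetical: background + CEvNS + ES rates
    chi2 = np.sum((pred - data)**2 / sigma**2)
    chi2 += (beta[-1] - 0.16)**2 / 0.03**2         # Gaussian prior: A_M/A_L1 = 0.16 +/- 0.03
    return chi2

def chi2_profiled(theta, beta0, data, sigma, prediction):
    """Eq. (21): minimize over the seven background parameters at fixed theta."""
    res = minimize(lambda b: chi2_dresden(theta, b, data, sigma, prediction),
                   x0=beta0, method="Nelder-Mead")
    return res.fun

# Delta chi^2 with respect to the global best fit then defines the confidence
# regions in theta discussed in section 4.
```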
Figure 1: Left panel: Spectral rate of signal events from CE$\nu$NS and $\bar{\nu}_{e}e^{-}$ scattering in the SM and from the electromagnetic scattering induced by a neutrino magnetic moment $\mu_{\nu_{e}}=10^{-10}\mu_{B}$. Right panel: Spectral rate of signal events from CE$\nu$NS and $\bar{\nu}_{e}e^{-}$ scattering in the SM and for the contribution induced by an interaction mediated by a massless scalar boson (dashed lines) and by a massless vector boson (dotted lines), assuming universal couplings (see table 2). For a vector mediator, we show the addition of both contributions, the purely vector boson contribution and the interference with the SM (see eq. (14)). In both panels we compare the spectra to data, after subtracting the best-fit background model assuming only the SM signal. All results in both panels are depicted for the Fef model of the quenching factor (for interactions with nuclei) and assuming $E_{I,\rm min}=3$ eV. In figure 1 we show the predicted event distributions for the SM and for the models of new physics we consider. Along with these spectra, we also include the Dresden-II data points, after subtracting the best-fit background model assuming only the SM signal. In the left panel of figure 1 we show the predicted event distributions for the SM and the extra contribution from a non-zero magnetic moment. We see that the $\mu_{\nu_{e}}$-induced electromagnetic scattering off protons dominates over the corresponding scattering off electrons for $E_{\rm rec}\lesssim 0.35$ keV. This is so because the coherent cross section off nuclei is enhanced by the factor $Z^{2}/Z_{\rm eff}^{\rm Ge}$, which is large enough to compensate the suppression due to the quenching factor (which makes the characteristic $1/E_{R}$ of the nucleus smaller than $1/T_{e}$ of the electron, for the same value of $E_{\rm rec}$). Also, from eqs. (10), (12), (14) and (15) we expect different spectra of events for interactions mediated by light scalars or by light vectors, as well as a different scaling with energy for scattering off electrons and off nucleus. This can be seen from the comparison among the different curves in the right panel of figure 1, where we have assumed universal couplings (see table 2). The event spectra for interactions mediated by a massless scalar are very similar to the $\mu_{\nu_{e}}$-induced ones, as they are governed by the $1/E_{R}$ or $1/T_{e}$ dependence for CE$\nu$NS or ES, respectively. In this case, the coherent cross section off nuclei is enhanced by the factor $\mathcal{Q}_{\phi}^{2}/Z_{\rm eff}^{\rm Ge}$ with respect to that of scattering off electrons. On the other hand, the event spectra for ES mediated by a massless vector is much larger than the one for scattering off nuclei. Unlike the cases of the magnetic moment or the scalar mediator, the dominant contribution at small energies for CE$\nu$NS is a factor $\sim m_{e}/m_{A}$ smaller than for ES. ### 3.2 The COHERENT experiment For the COHERENT CsI analysis, we follow the same procedure as in ref. Coloma:2019mbs . We use the nuclear form factor from ref. Klos:2013rwa , we estimate the time dependence of the background directly from anti-coincidence data, and for concreteness, we use the quenching factor from ref. Collar:2019ihs . In the present work we also include scattering off electrons. In this case, there is no quenching factor, the target mass is set to the electron mass, and we use an effective number of electrons per nucleus, $Z_{\mathrm{eff}}(T_{e})$, as provided in appendix A (see also section 2). 
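The effective number of electrons $Z_{\rm eff}^{\rm X}(T_{e})$ entering the ES rates is a step function of the electron recoil energy, reflecting which atomic shells are kinematically accessible. A minimal sketch for germanium, using the thresholds listed in appendix A (for CsI, the appendix prescribes the average of the Cs and I values):

```python
# (lower threshold in keV, Z_eff above it), for Ge, from appendix A
GE_STEPS = [(11.11, 32), (1.4146, 30), (1.248, 28), (1.217, 26), (0.1801, 22),
            (0.1249, 20), (0.1208, 18), (0.0298, 14), (0.0, 4)]

def z_eff_ge(T_e):
    """Effective number of Ge electrons available to a recoil of energy T_e (keV)."""
    for threshold, z in GE_STEPS:
        if T_e > threshold:
            return z
    return GE_STEPS[-1][1]   # T_e <= 0 edge case

print([z_eff_ge(T) for T in (0.01, 0.1, 0.5, 2.0, 20.0)])   # -> [4, 14, 22, 30, 32]
```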
The rest of the details of the analysis are the same as in ref. Coloma:2019mbs . Here, the minimum recoil energy considered can be safely set to zero, since the efficiency vanishes at low energies. For the analysis of Ar data, we follow the official data release COHERENT:2020ybo . In this case we do _not_ include scattering off electrons, as the experiment can discriminate nuclear from electron recoils COHERENT:2020iec . To obtain the CE$\nu$NS event spectrum, we start by computing the signal event spectrum in nuclear recoil energy, $E_{R}$, which is given by $\frac{\mathrm{d}N}{\mathrm{d}E_{R}}=\mathcal{N}\sum_{\alpha}\int_{E_{\nu,\mathrm{min}}}^{m_{\mu}/2}\frac{\mathrm{d}\sigma(E_{\nu},E_{R})}{\mathrm{d}E_{R}}\,\frac{\mathrm{d}\phi_{\alpha}}{\mathrm{d}E_{\nu}}\,\mathrm{d}E_{\nu}~{},$ (22) where $E_{\nu}$ is the neutrino energy and the sum extends over the three components of the neutrino flux $\\{\nu_{e},\nu_{\mu},\bar{\nu}_{\mu}\\}$. The neutrino spectra $\mathrm{d}\phi_{\alpha}/\mathrm{d}E_{\nu}$ are normalized to 1 (see, e.g., eq. (2.1) in ref. Coloma:2019mbs for the expressions), and all normalizations are absorbed into an overall constant $\mathcal{N}$ given by $\mathcal{N}=\frac{1}{4\pi\ell^{2}}\,N_{\mathrm{PoT}}\,f_{\pi/p}\,N_{\mathrm{Ar}}~{},$ (23) where $\ell=27.5~{}\mathrm{m}$ is the distance to the detector; $N_{\mathrm{PoT}}=13.77\times 10^{22}$ is the number of protons on target (PoT), corresponding to an integrated power of 6.12 GW$\cdot$hr; $f_{\pi/p}=0.09$ is the number of pions produced per PoT; and $N_{\mathrm{Ar}}=m_{\mathrm{det}}/m_{\rm Ar}$ is the number of nuclei in the detector, with $m_{\mathrm{det}}=24.4~{}\mathrm{kg}$ the detector mass and $m_{\rm Ar}$ the ${}^{40}$Ar mass. However, the experimental collaboration does not bin their data in nuclear recoil energy (keVnr), but in electron-equivalent recoil energy instead (keVee). In addition, we have to account for the detection efficiency, $\epsilon$, and the energy resolution. Introducing these effects, the expected event rate in each bin $i$ (of width $\Delta E_{\rm rec}$) is computed as: $N_{i}=\int_{E_{\rm rec,i}-\Delta E_{\rm rec}/2}^{E_{\rm rec,i}+\Delta E_{\rm rec}/2}\mathrm{d}E_{\rm rec}\,\,\epsilon(E_{\rm rec})\int_{E_{R,\mathrm{min}}}^{\infty}\mathrm{d}E_{R}\,\frac{\mathrm{d}N}{\mathrm{d}E_{R}}\,\mathcal{R}(E_{\rm rec},E_{I};\sigma_{I})~{},$ (24) where $E_{I}$ stands for the true electron-equivalent recoil energy and $E_{\rm rec}$ is the reconstructed electron-equivalent recoil energy (that is, after energy smearing effects). The function $\mathcal{R}$ accounts for the energy resolution of the detector: $\mathcal{R}(E_{\rm rec},E_{I};\sigma_{I})=\frac{1}{\sqrt{2\,\pi}\,\sigma_{I}}\,e^{\frac{-\left(E_{\rm rec}-E_{I}\right)^{2}}{2\,\sigma_{I}^{2}}}~{},$ (25) with a width $\sigma_{I}\equiv\sigma(E_{I})=0.58~{}\mathrm{keV}\,\sqrt{E_{I}/\mathrm{keV}}$, as prescribed in the data release. As the energies in the ROI are much larger than the standard deviation, this definition, unlike eq. (18), does not require the extra factor to guarantee that it is correctly normalized to 1. Also, note that in eq. (24) the detection efficiency $\epsilon$ is obtained post-triggering and is a function of the reconstructed energy (and not of the true energy). Furthermore, contrary to the indications provided in the data release, in our analysis we include this efficiency as a function of reconstructed energy _before binning the event distribution_, as otherwise we do not find good agreement with the results of the collaboration.
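The chain of eqs. (22)-(25) can be summarized as a double integral per reconstructed-energy bin, with the quenching factor relating nuclear-recoil to electron-equivalent energy and the post-trigger efficiency applied in reconstructed energy before binning. The sketch below assumes hypothetical `dN_dER` and `efficiency` functions (standing in for the spectrum of eq. (22) and the efficiency from the data release) and is only meant to illustrate the ordering of the steps.

```python
import numpy as np
from scipy.integrate import dblquad

def quench_ar(E_R):
    """Ar quenching factor, Q(E_R) = a + b*E_R with a = 0.246, b = 0.00078 / keV."""
    return 0.246 + 0.00078 * E_R

def smear(E_rec, E_I):
    """Gaussian resolution of eq. (25), sigma = 0.58 keV * sqrt(E_I / keV)."""
    s = 0.58 * np.sqrt(E_I)
    return np.exp(-(E_rec - E_I)**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)

def events_in_bin(E_lo, E_hi, dN_dER, efficiency, E_R_max=150.0):
    """Eq. (24): fold dN/dE_R with quenching, energy smearing and efficiency."""
    E_R_min = 0.079   # keV, approximate lower limit set by the minimum detectable energy (see below)
    def integrand(E_R, E_rec):
        E_I = quench_ar(E_R) * E_R                     # electron-equivalent energy
        return efficiency(E_rec) * dN_dER(E_R) * smear(E_rec, E_I)
    val, _ = dblquad(integrand, E_lo, E_hi, lambda _: E_R_min, lambda _: E_R_max)
    return val
```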
We set the minimum nuclear recoil energy to $E_{I,\mathrm{min}}=19.5$ eV, the average energy to produce a scintillation photon in Ar Creus:2013sau . As indicated above, the relation between $E_{I}$ and $E_{R}$ is given by the quenching factor, $E_{I}=Q(E_{R})\,E_{R}$, which is described as $Q(E_{R})=a+b\,E_{R}$, with $a=0.246$ and $b=0.00078~{}\mathrm{keV}^{-1}$, as given in the data release. Following the procedure above, we obtain a nominal prediction of $135.3$ signal events. This is slightly higher than the rate predicted by the collaboration, but well within their reported error bars (their nominal prediction is $128\pm 17$ events). Our spectrum also shows good agreement with the official one. To further reduce backgrounds, the analysis includes information not only on recoil energy, but also on timing and on the fraction of integrated amplitude within the first 90 ns (this last variable is called F90 by the collaboration). Once we compute the distribution in recoil energy as described above, we can obtain the full 3D PDF. We do so by rescaling the original PDF provided by the collaboration by the ratio of their projected PDF in $E_{\rm rec}$ to our $E_{\rm rec}$ event distribution, i.e., $N_{i,j,k}=\mathrm{PDF}_{i,j,k}\times\frac{N_{i}}{\sum_{j,k}\mathrm{PDF}_{i,j,k}}~{},$ (26) where $\mathrm{PDF}_{i,j,k}$ stands for the predicted PDF provided by the collaboration as a function of the recoil-energy bin $i$, the F90 bin $j$ and the time bin $k$. To do this, we use their nominal prediction in the absence of systematics. Finally, we include systematic uncertainties by adding nuisance parameters as prescribed in the data release. We have checked that the impact of the quenching factor uncertainties is negligible at the level of the uncertainties quoted in the data release; therefore, they are not included here. For each nuisance parameter, we add a pull term to either the signal or background prediction as $n_{i,j,k}=\bar{n}_{i,j,k}\,(1+\xi\,\sigma_{i,j,k})~{},$ (27) where $\bar{n}$ is the predicted signal or background event rates with no systematic errors, $\xi$ is the nuisance parameter, and $\sigma_{i,j,k}$ is obtained from the data release. We also add a signal normalization uncertainty as $s_{i,j,k}=\bar{s}_{i,j,k}\,(1+\xi_{\mathrm{norm}}\,\sigma_{\mathrm{norm}})~{},$ (28) where we set $\sigma_{\mathrm{norm}}=0.1$ according to the data release (this corresponds to the “neutrino flux” uncertainty listed in table 1 in ref. COHERENT:2020iec ). We use a Poissonian $\chi^{2}$ for statistical uncertainties, $\left(\chi^{2}_{\rm COH}\right)_{\mathrm{stat}}=\sum_{i,j,k}2\left(P_{i,j,k}-D_{i,j,k}+D_{i,j,k}\ln\frac{D_{i,j,k}}{P_{i,j,k}}\right)~{},$ (29) where $D$ stands for the data and $P$ for the prediction (including the effect of the nuisance parameters). A pull term is then added for each nuisance parameter $\xi_{r}$, as well as for the normalization of the background components, $\chi_{\rm COH}^{2}\left({\boldsymbol{\xi}}\right)=\left(\chi^{2}_{\rm COH}\right)_{\mathrm{stat}}+\sum_{r}\xi_{r}^{2}+\left(\frac{n_{\rm pr}-\bar{n}_{\rm pr}}{\sigma_{\rm pr}}\right)^{2}+\left(\frac{n_{\rm del}-\bar{n}_{\rm del}}{\sigma_{\rm del}}\right)^{2}+\left(\frac{n_{\rm ss}-\bar{n}_{\rm ss}}{\sigma_{\rm ss}}\right)^{2}~{},$ (30) where $\bar{n}$ and $\sigma$ are the central values and the uncertainties at $1\sigma$ provided in the data release for the different background components.
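The statistical model of eqs. (27)-(30) can be sketched compactly. For brevity, only one signal and one background nuisance parameter of the form of eq. (27) are shown, together with the three background normalizations, and the binning over $(E_{\rm rec},\mathrm{F90},t)$ is flattened into plain arrays; this illustrates the structure of the $\chi^{2}$, not our actual implementation.

```python
import numpy as np

def chi2_coherent(sig, bkg, data, xi_sig, xi_bkg, sigma_sig, sigma_bkg,
                  bkg_norms, bkg_priors):
    """Poissonian chi^2 of eq. (29) with pull terms as in eq. (30).

    sig, bkg, data        : arrays over the (E_rec, F90, t) bins, flattened
    xi_sig, xi_bkg        : nuisance parameters rescaling the predictions (eqs. (27)-(28))
    bkg_norms, bkg_priors : fitted normalizations and (central value, sigma) priors for
                            the prompt, delayed and steady-state background components
    """
    pred = sig * (1.0 + xi_sig * sigma_sig) + bkg * (1.0 + xi_bkg * sigma_bkg)
    mask = data > 0
    stat = 2.0 * np.sum(pred - data) + 2.0 * np.sum(data[mask] * np.log(data[mask] / pred[mask]))
    pulls = xi_sig**2 + xi_bkg**2
    pulls += sum(((n - nbar) / s)**2 for n, (nbar, s) in zip(bkg_norms, bkg_priors))
    return stat + pulls

# The profiled chi^2 used in the fits is obtained by minimizing this function
# over all nuisance parameters and background normalizations.
```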
The final $\left(\chi^{2}_{\rm COH}\right)_{\mathrm{p}}$ is obtained after minimization over all nuisance parameters and the normalization of the three background components. ## 4 Results In this section we present the results of the analysis of the Dresden-II reactor experiment data and its combination with COHERENT CsI and Ar data, for the BSM frameworks presented in section 2. ### 4.1 Bounds on non-standard neutrino interactions As described in section 2.1, in presence of NSI the Dresden-II reactor experiment is sensitive to a unique combination of the $\varepsilon$ coefficients, $\mathcal{Q}^{\rm Ge}_{e}(\boldsymbol{\varepsilon})$, defined in eq. (5), where we have introduced an explicit superindex indicating the nucleus it refers to. We show in the left panel of figure 2 the dependence of $\Delta\chi^{2}_{\rm D-II}$ on this effective NSI combination. In constructing this $\Delta\chi^{2}_{\rm D-II}$, we have profiled over the background model parameters for each value of $\mathcal{Q}^{\rm Ge}_{e}(\boldsymbol{\varepsilon})^{2}$ and we have neglected small effects of any additional systematic nuisance parameter. As mentioned above, the signal acceptance uncertainties are included in the data provided by the experiment. Figure 2: Left panel: Dependence of the profiled $\Delta\chi^{2}_{\rm D-II}$ on the combination of NSI coefficients relevant to CE$\nu$NS of $\bar{\nu}_{e}$ scattering off the Ge detector of the Dresden-II experiment. Right panel: Allowed regions at 90% CL (1 dof, two-sided, $\Delta\chi^{2}=2.71$) in the $(\varepsilon_{ee}^{u},\varepsilon_{ee}^{d})$ plane. In both panels, the results are shown for two quenching factors, denoted by Fef (in blue), using iron-filtered monochromatic neutrons, and YBe (in red), based on photoneutron source measurements Collar:2021fcl . Although not obvious, note that the allowed regions for the two quenching factors have one common side, so the Fef constraints are more stringent. From figure 2, we read the following ranges allowed at 90% confidence level (CL) (1 dof) $\left(g_{e}^{\rm Ge}\right)^{2}\equiv\left(\frac{\mathcal{Q}^{\rm Ge}_{e}(\boldsymbol{\varepsilon})}{\mathcal{Q}^{\rm Ge}_{e}(0)}\right)^{2}=0.91\pm 0.56\;\;(1.36\pm 0.97)~{},$ (31) derived with the Fef (YBe) quenching factor for germanium. Notice that $g_{e}^{\rm Ge}=1$ corresponds to the SM prediction and $g_{e}^{\rm Ge}=0$ to no signal, so this implies that the absence of CE$\nu$NS in the Dresden-II data is disfavored at 2.6$\sigma$ (2.3$\sigma$) for the analysis with the Fef (YBe) quenching factor. In the right panel of figure 2, we show the corresponding 90% CL allowed regions for the flavor-diagonal NSI coefficients $\varepsilon_{ee}^{u}$ and $\varepsilon_{ee}^{d}$ assuming vanishing non-diagonal NSI coefficients. The shape of these bands can be understood directly from the expression of the weak charge of the nucleus in eq. (4). They are defined as the two regions around the points that satisfy $\left[\mathcal{Q}_{W}+(2\,Z+N)\,\varepsilon_{ee}^{u}+(2\,N+Z)\,\varepsilon_{ee}^{d}\right]^{2}={\rm constant}~{},$ (32) which follow lines in the $(\varepsilon_{ee}^{u},\varepsilon_{ee}^{d})$ plane with the slope given by $-(2Z+N)/(2N+Z)$. As seen in the figure, the analysis employing the Fef quenching factor results in slightly stronger constraints. In most scenarios we find that the analysis using the Fef quenching factor reproduces slightly better the SM predictions and therefore leaves less room for new physics. 
An exception, as we will see, is the case of some models with light vector mediators, for which a local non-standard minimum appears in the analysis with the YBe quenching factor, resulting in slightly stronger constraints (see section 4.4). Figure 3: 90% CL allowed regions on flavor diagonal NSI with up-quarks (for zero values of all other NSI coefficients) from the analysis of COHERENT CsI and Ar data, the Dresden-II reactor data – with Fef (YBe) quenching factor in left (right) panel –, and their combination. Note that, in the two-dimensional panels, the results for the Dresden-II reactor experiment are obtained for 1 dof ($\Delta\chi^{2}=2.71$), while the rest of the regions are obtained for 2 dof ($\Delta\chi^{2}=4.61$). CE$\nu$NS at the COHERENT experiment is sensitive to interactions of electron and muon neutrinos and hence, it provides information on the corresponding effective combinations $\mathcal{Q}^{\rm CsI}_{e}(\boldsymbol{\varepsilon})$, $\mathcal{Q}^{\rm CsI}_{\mu}(\boldsymbol{\varepsilon})$, $\mathcal{Q}^{\rm Ar}_{e}(\boldsymbol{\varepsilon})$, $\mathcal{Q}^{\rm Ar}_{\mu}(\boldsymbol{\varepsilon})$. Generically, because both $\nu_{e}$ and $\nu_{\mu}$ are present in the beam, degeneracies between NSI parameters corresponding to $e$ and $\mu$ flavors appear. This is illustrated in figure 3, where we show the allowed regions obtained from our combined analysis of CE$\nu$NS at COHERENT with both CsI and Ar targets for flavor-diagonal NSI with up-quarks only (the results for NSI with only down-quarks are similar). For a given nucleus, these allowed regions are bands around the points that approximately obey the equation of an ellipse in the $(\varepsilon_{ee}^{u},\varepsilon_{\mu\mu}^{u})$ plane, $\left[\mathcal{Q}_{W}+\left(2\,Z+N\right)\,\varepsilon_{ee}^{u}\right]^{2}+2\,\left[\mathcal{Q}_{W}+\left(2\,Z+N\right)\,\varepsilon_{\mu\mu}^{u}\right]^{2}={\rm constant}~{}.$ (33) Since $\mathcal{Q}_{W}$ depends on the target nucleus, the ellipse obtained is different for different detector materials and, therefore, the degeneracy may be broken by adding information on different nuclei, provided they have a different ratio of protons to neutrons (for earlier discussions on this, see, e.g., refs. Scholberg:2005qs ; Barranco:2005yy ; Coloma:2017egw ; Baxter:2019mcx ). Most importantly, the use of timing information at COHERENT translates into a partial discrimination between the weak charges for the different flavors, thanks to the distinct composition of the prompt ($\nu_{\mu}$) and delayed ($\bar{\nu}_{\mu}$ and $\nu_{e}$) neutrino flux. This leads to a partial breaking of this degeneracy which, however, is not complete (see, e.g., ref. Coloma:2019mbs for a more detailed explanation of this effect). This is what explains the results shown in figure 3. The regions allowed by COHERENT data present a double-wedge shape due to the partial breaking of the degeneracy between those two parameters after the inclusion of the energy and, in particular, the timing information. For concreteness, the COHERENT analysis shown in this plot was performed using the quenching factor from the Chicago group Collar:2019ihs . See ref. Coloma:2019mbs for a discussion about the small variation of the results obtained with other quenching factors. Yet, a continuous wide range of values of $\varepsilon^{u}_{ee}$ remains allowed.
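The geometry of these allowed regions follows directly from eqs. (32) and (33): for a fixed nucleus, eq. (32) defines bands of slope $-(2Z+N)/(2N+Z)$ in the $(\varepsilon_{ee}^{u},\varepsilon_{ee}^{d})$ plane, while eq. (33) defines an ellipse-like band in the $(\varepsilon_{ee}^{u},\varepsilon_{\mu\mu}^{u})$ plane. The short sketch below simply evaluates these combinations; the proton and (average) neutron numbers used for Ge are illustrative inputs, and $\mathcal{Q}_{W}$ is left as an external parameter since its numerical value depends on the conventions of eq. (4).

```python
def nsi_band_lhs(eps_u, eps_d, Q_W, Z, N):
    """Left-hand side of eq. (32): squared weak charge with flavor-diagonal NSI."""
    return (Q_W + (2 * Z + N) * eps_u + (2 * N + Z) * eps_d) ** 2

def nsi_ellipse_lhs(eps_ee_u, eps_mm_u, Q_W, Z, N):
    """Left-hand side of eq. (33): combination probed by the COHERENT flux."""
    return (Q_W + (2 * Z + N) * eps_ee_u) ** 2 + 2 * (Q_W + (2 * Z + N) * eps_mm_u) ** 2

# Illustrative nuclear inputs for germanium (assumed, not fitted):
Z_GE, N_GE = 32, 40.6
# Slope of the Dresden-II bands in the (eps_ee^u, eps_ee^d) plane, as quoted in the text:
print(-(2 * Z_GE + N_GE) / (2 * N_GE + Z_GE))   # ~ -0.92
```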
For the considered flavor-diagonal NSI with up-quarks only, CE$\nu$NS at the Dresden-II reactor experiment provides information on $\varepsilon^{u}_{ee}$ only, so the allowed regions correspond to the vertical bands in the figure. Consequently, the combined analysis of COHERENT + Dresden-II results in a substantial reduction of the allowed values of $\varepsilon^{u}_{ee}$ (and indirectly, also on $\varepsilon^{u}_{\mu\mu}$) as seen in the figure. We finish by commenting that the results we show correspond to the case of diagonal NSI with up-quarks only, but as mentioned above, similar results are obtained for diagonal NSI with down-quarks only. We also notice that the inclusion of flavor off-diagonal NSI couplings results in the enlargement of the regions shown in figure 3. Nevertheless, within the current constraints from other experiments, in particular from neutrino oscillations Esteban:2018ppq ; Coloma:2019mbs , similar qualitative conclusions hold. For NSI couplings to both up and down quarks, the complementarity between COHERENT and Dresden-II data depends on the specific assumption about the ratio of diagonal couplings to both quarks. If they are varied freely, meaningful constraints cannot be obtained, as can be understood from figure 2. Therefore, current data from these experiments is not enough by itself to impose meaningful constraints on the complete parameter space of NSI with quarks, if considered in full generality. ### 4.2 Bounds on neutrino magnetic moments Figure 4: Left panel: Profiled $\Delta\chi^{2}_{\rm D-II}$ including events induced by a magnetic moment for $\bar{\nu}_{e}$. We show results including only $\mu_{\nu_{e}}$-induced scattering off electrons (dashed curves) and for scattering off electrons and nucleons (solid curves). In both cases the SM CE$\nu$NS contribution is included. Right panel: 90% CL excluded (one-sided) regions from the combination of Dresden-II and COHERENT data (in color). We also indicate (with arrows) the 90% CL (one-sided) allowed regions from Dresden-II data with the Fef (blue dotted line) and YBe (red dotted line) quenching factor, and from the combined analysis of COHERENT CsI and Ar data (black dashed line). Notice that the vertical lines for the Dresden-II reactor experiment are defined one-sided for 1 dof $(\Delta\chi^{2}=1.64$), while the rest of the regions are defined for 2 dof ($\Delta\chi^{2}=3.22$). The results of our analysis of data from the Dresden-II reactor experiment, including the contribution induced by a neutrino magnetic moment, are shown in figure 4 where, on the left panel, we plot the one-dimensional $\Delta\chi^{2}_{\rm D-II}$ as a function $\mu_{\nu_{e}}$ after profiling over all background parameters ${\boldsymbol{\beta}}$. As previously, we have neglected small effects of any additional systematic nuisance parameter, while the signal acceptance uncertainties are included in the data provided by the experiment. For comparison, in the left panel we also show $\Delta\chi^{2}_{\rm D-II}$ for the case of scattering off electrons only, including the SM CE$\nu$NS contribution (dashed lines). As discussed in section 3.1, the contribution from ES induced by a neutrino magnetic moment is subdominant to that from CE$\nu$NS. One must notice, however, that a better bound on $\mu_{\nu_{e}}$ (and similarly for models with light scalar mediators, see below) could be attainable with a dedicated analysis aimed at optimizing the sensitivity to the signal from scattering of electrons. 
The signal acceptance at the lowest energies in the Dresden-II experiment is quite small Colaresi2022suggestive , which significantly reduces statistics. Furthermore, given the much flatter shape of the event spectrum for the magnetic moment contribution than from the SM, as depicted in figure 1, extending the ROI to higher energies could enhance the signal-to-noise ratio (see, e.g., ref. Bonet:2022imz ). The fit shows no evidence of a non-zero magnetic moment and therefore, the analysis results in a bound which, at 90% CL (1 dof), reads $\left|\mu_{\nu_{e}}\right|<\left\\{\begin{array}[]{c}1.9\;(2.9)\times 10^{-10}\;\mu_{B}\quad\textrm{(one-sided~{}limit)}\\\\[6.45831pt] 2.2\;(3.3)\times 10^{-10}\;\mu_{B}\quad\textrm{(two-sided~{}limit)}\end{array}\right.~{},$ (34) for the Fef (YBe) quenching factor for germanium. This is an order of magnitude weaker than the current best limit from reactor antineutrinos Beda:2012zz ; Beda:2013mta . Notice that here we report two different limits, corresponding to two different statistical criteria used to derive the constraints. The reason is that the experiment is in fact sensitive to $|\mu_{\nu_{e}}|^{2}$, which can only take non-negative values. Therefore, accounting for the physical boundary, it is possible to report the limit on $|\mu_{\nu_{e}}|$ as a one-sided limit, which at 90% CL for 1 dof (2 dof) corresponds to $\Delta\chi^{2}=1.64\;(3.22)$. Conversely, if this restriction is not imposed, the result obtained is what is denoted as a two-sided limit, which at 90% CL for 1 dof (2 dof) corresponds to $\Delta\chi^{2}=2.71\;(4.61)$ and results in less stringent constraints. For the sake of comparison with different results in the literature, we indicate in eq. (34) the bounds obtained with both criteria. As seen in eq. (34), the difference is at the level of 10%. As mentioned in section 3.1, the $\mu_{\nu}$-induced cross sections in eqs. (7) and (8) diverge as the recoil energy goes to zero and this unphysical behavior is cut off by the physical requirement of the average energy required to produce an electron-hole pair, $E_{I,\rm min}=3$ eV. This raises the issue of the possible dependence of the bounds on the exact value of this minimum energy that could trigger the detector. The dependence on the cut energy, however, is approximately logarithmic and we have verified that increasing it by as much as one order of magnitude results in weakening the bounds in eq. (34) by less than 25%. CE$\nu$NS at the COHERENT experiment provides information on magnetic moments for both $\nu_{e}$ and $\nu_{\mu}$. In the right panel of figure 4, we show their 90% CL (2 dof, one-sided) excluded values, obtained with our combined analysis of CE$\nu$NS at COHERENT with both CsI and Ar targets. Because there is no interference between the contributions from $\mu_{\nu_{\mu}}$ and $\mu_{\nu_{e}}$, the resulting allowed region is just a square with a rounded upper-right corner. The 90% CL (1 dof, one-sided) upper bound from the Dresden-II reactor experiment on $\mu_{\nu_{e}}$, eq. (34), is indicated by vertical lines. As seen in the figure, the sensitivity of COHERENT to $\mu_{\nu_{e}}$ is ${\cal O}(10)$ weaker. The combination of the two experiments results in the 90% CL excluded regions (2 dof, one-sided) shown in the figure in color.
The corresponding combined bounds on each neutrino magnetic moment (after profiling over the other) at 90% CL (1 dof) are $\displaystyle\left|\mu_{\nu_{e}}\right|$ $\displaystyle<$ $\displaystyle\left\\{\begin{array}[]{c}1.8\;(2.8)\times 10^{-10}\;\mu_{B}\quad\textrm{(one-sided~{}limit)}\\\\[6.45831pt] 2.2\;(3.2)\times 10^{-10}\;\mu_{B}\quad\textrm{(two-sided~{}limit)}\end{array}\right.~{},$ (37) $\displaystyle\left|\mu_{\nu_{\mu}}\right|$ $\displaystyle<$ $\displaystyle\left\\{\begin{array}[]{c}2.4\;(2.4)\times 10^{-9}\;\mu_{B}\quad\;\,\textrm{(one-sided~{}limit)}\\\\[6.45831pt] 2.7\;(2.7)\times 10^{-9}\;\mu_{B}\quad\;\textrm{(two-sided~{}limit)}\end{array}\right.~{},$ (40) for the combination of data from the COHERENT and the Dresden-II reactor experiments performed with the Fef (YBe) quenching factor for germanium. ### 4.3 Bounds on light scalar mediator models Figure 5: Left panel: 90% CL (2 dof, two-sided, $\Delta\chi^{2}=4.61$) excluded regions for models with light scalar mediators coupled universally to all relevant fermions. Right panel: 90% CL (2 dof, two-sided, $\Delta\chi^{2}=4.61$) excluded regions for models with light scalar mediators coupled only to leptons. The thick blue and red dotted lines show the boundary of the region excluded by the Dresden-II reactor experiment with the Fef and YBe quenching factors, respectively. The black dashed line shows the boundary of the region excluded from the analysis of COHERENT CsI and Ar data. The filled regions are those excluded by the combined COHERENT+Dresden-II analysis. The results of the analysis of the data from the Dresden-II reactor experiment and its combination with COHERENT, including the contribution of the events induced by a new light scalar mediator, are shown in figure 5. We depict the excluded region at 90% CL (2 dof, two-sided) for the parameter space of the scalar models $\left(\left|g^{\rm mod}_{\phi}\right|,M_{\phi}\right)$, for the two models with charges listed in table 2. On the left panel, we show the results for a scalar coupled universally to all relevant fermions, while on the right panel we focus on a scalar which only interacts with leptons. The thick blue and red dotted lines in figure 5 show the boundaries of the regions excluded by the Dresden-II reactor data with the Fef and YBe quenching factors, respectively. These results are obtained after profiling over the background model parameters (i.e., for each value of $\left(\left|g^{\rm mod}_{\phi}\right|,M_{\phi}\right)$). For light mediator masses, the experiment has no sensitivity to the mediator mass and the boundary of the regions approaches a horizontal line of constant $\left|g^{\rm mod}_{\phi}\right|$. For these effectively massless scalar mediators it is possible to derive an upper bound on the coupling constant regardless of the mediator mass, which, for the considered models, reads at 90% CL (1 dof), $\displaystyle\left|g_{\phi}^{\rm univ}\right|$ $\displaystyle\leq$ $\displaystyle\left\\{\begin{array}[]{c}1.6\;(1.9)\times 10^{-6}\quad\textrm{(one-sided)}\\\\[6.45831pt] 1.7\;(2.0)\times 10^{-6}\quad\textrm{(two-sided)}\end{array}\right.~{},$ (43) $\displaystyle\left|g_{\phi}^{\ell}\right|$ $\displaystyle\leq$ $\displaystyle\left\\{\begin{array}[]{c}4.9\;(5.5)\times 10^{-6}\quad\textrm{(one-sided)}\\\\[6.45831pt] 5.5\;(6.1)\times 10^{-6}\quad\textrm{(two-sided)}\end{array}\right.~{},$ (46) for the Fef (YBe) quenching factor for germanium. As done above, we indicate both one-sided and two-sided (1 dof) bounds.
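The one-sided and two-sided criteria quoted throughout correspond to different $\Delta\chi^{2}$ thresholds at a given CL. Assuming a $\chi^{2}$-distributed test statistic, the thresholds used here (1.64/2.71 for 1 dof and 3.22/4.61 for 2 dof at 90% CL) can be reproduced as follows; this is only a numerical check of the statistical prescription described above.

```python
from scipy.stats import chi2

CL = 0.90
for dof in (1, 2):
    two_sided = chi2.ppf(CL, df=dof)              # standard two-sided threshold
    one_sided = chi2.ppf(2.0 * CL - 1.0, df=dof)  # one-sided: half the probability sits on
                                                  # the physical boundary (e.g. |mu_nu|^2 >= 0)
    print(f"{dof} dof: one-sided {one_sided:.2f}, two-sided {two_sided:.2f}")

# 1 dof: one-sided 1.64, two-sided 2.71
# 2 dof: one-sided 3.22, two-sided 4.61
```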
Conversely, for larger mediator masses, the boundary is a diagonal, characteristic of the contact-interaction limit. In that case, the event rates depend on $(g_{\phi}/M_{\phi})^{4}$, which we find to be well approximated by $\displaystyle\frac{\left|g_{\phi}^{\rm univ}\right|}{M_{\phi}/{\rm MeV}}$ $\displaystyle\gtrsim$ $\displaystyle 3.4\;(3.6)\times 10^{-7}~{},\quad{\rm for\;}M_{\phi}\gtrsim 10\;{\rm MeV}~{},$ $\displaystyle\frac{\left|g_{\phi}^{\ell}\right|}{M_{\phi}/{\rm MeV}}$ $\displaystyle\gtrsim$ $\displaystyle 1.1\;(1.1)\times 10^{-4}~{},\quad{\rm for\;}M_{\phi}\gtrsim 0.1\;{\rm MeV}~{},$ (47) for the Fef (YBe) quenching factor for germanium. In summary, we find that for sufficiently light mediators ($M_{\phi}\lesssim 10^{-2}~{}\mathrm{MeV}$), the bound on the coupling constant is about a factor ${\cal O}(3)$ stronger if the interaction is coupled to quarks than if it couples only to leptons. This is so because scalar-mediated CE$\nu$NS always dominates over the corresponding scalar-mediated incoherent scattering off electrons for the low-energy part of the spectrum, where statistics is best (see figure 1). Equivalently, in the contact-interaction limit, the bound on $\left|g^{\rm univ}_{\phi}\right|/M_{\phi}$ is more than two orders of magnitude better than on $\left|g^{\ell}_{\phi}\right|/M_{\phi}$. In figure 5, we also show the boundary of the regions excluded from the analysis of COHERENT CsI and Ar data (black dashed lines). For most of the parameter space we consider, the bounds imposed by the Dresden-II reactor experiment dominate in either model. COHERENT bounds only become competitive for the universal model for $\left|g_{\phi}^{\rm univ}\right|\gtrsim 5\times 10^{-5}$. Consequently, the results in eqs. (46) and (47) hold to good approximation for the combination of COHERENT and Dresden-II reactor experiments. ### 4.4 Bounds on light vector mediator models The results of the analysis of the data from the Dresden-II reactor experiment and its combination with COHERENT, including the contribution of the events induced by a new light vector mediator, are shown in figure 6. The figure displays the regions excluded at 90% CL (2 dof, two-sided) corresponding to the parameter space of vector mediator models $\left(\left|g^{\rm mod}_{Z^{\prime}}\right|,M_{Z^{\prime}}\right)$, for the three cases with charges listed in table 2. Figure 6: Upper left panel: 90% CL (2 dof, two-sided, $\Delta\chi^{2}=4.61$) excluded regions for models with light vector mediators coupled universally to all relevant fermions. Upper right panel: 90% CL (2 dof, two-sided, $\Delta\chi^{2}=4.61$) excluded regions for models with light vector mediators coupled only to $L_{e}$. Lower panel: 90% CL (2 dof, two-sided, $\Delta\chi^{2}=4.61$) excluded regions for models with light vector mediators coupled only to $B-L$. The thick blue and red dotted lines show the boundary of the region excluded by the Dresden-II reactor experiment with the Fef and YBe quenching factors, respectively. The black dashed line shows the boundary of region excluded from the analysis of COHERENT CsI and Ar data. The filled regions are those excluded by the combined COHERENT+Dresden-II analysis. As above, the thick blue and red thick dotted lines in the figures indicate the boundaries of the regions excluded at 90% CL by the Dresden-II reactor data with the Fef and YBe quenching factor, respectively, after profiling over the background model parameters. 
The black dashed lines indicate the boundary of the regions excluded from the analysis of COHERENT CsI and Ar data and the filled regions correspond to the results of the COHERENT + Dresden-II combination. We see that COHERENT bounds only contribute for the universal and $B-L$ models, for $\left\\{\left|g_{Z^{\prime}}^{\rm univ}\right|,\left|g_{Z^{\prime}}^{B-L}\right|\right\\}\gtrsim 5\times 10^{-4}$. Again, we find that for sufficiently light mediator masses ($M_{Z^{\prime}}\lesssim 10^{-2}~{}\mathrm{MeV}$) the region becomes independent of the mediator mass. For these effectively massless vector mediators, the corresponding 90% CL (1 dof) upper bounds on the coupling constant read $\displaystyle\left|g_{Z^{\prime}}^{\rm univ}\right|$ $\displaystyle\leq$ $\displaystyle\left\\{\begin{array}[]{c}0.87\;(1.0)\times 10^{-6}\quad\textrm{(one-sided)}\\\\[6.45831pt] 0.92\;(1.1)\times 10^{-6}\quad\textrm{(two-sided)}\end{array}\right.~{},$ $\displaystyle\left|g_{Z^{\prime}}^{L_{e}}\right|$ $\displaystyle\leq$ $\displaystyle\left\\{\begin{array}[]{c}0.90\;(1.0)\times 10^{-6}\quad\textrm{(one-sided)}\\\\[6.45831pt] 0.95\;(1.1)\times 10^{-6}\quad\textrm{(two-sided)}\end{array}\right.~{},$ (52) $\displaystyle\left|g_{Z^{\prime}}^{B-L}\right|$ $\displaystyle\leq$ $\displaystyle\left\\{\begin{array}[]{c}0.90\;(1.1)\times 10^{-6}\quad\textrm{(one-sided)}\\\\[6.45831pt] 0.95\;(1.1)\times 10^{-6}\quad\textrm{(two-sided)}\end{array}\right.~{},$ for the Fef (YBe) quenching factor for germanium. As done above, we indicate both one-sided and two-sided (1 dof) bounds. Unlike the case of scalar mediators, in the small-mass limit, the bounds for vector mediators are very similar regardless of whether quarks are charged under the new interaction. This is so because in this limit the $Z^{\prime}$ contribution to the event rates is dominated by the incoherent scattering off electrons, as seen in figure 1. We also notice that for all models, the regions obtained with the YBe quenching factor present a dip around $M_{Z^{\prime}}\sim 0.03$ MeV and $\left|g_{Z^{\prime}}\right|\sim 10^{-6}$. This arises because the inclusion of scattering off electrons mediated by such a particle provides a fit which is better than the SM one at $\sim 1.5\sigma$. Comparing the two upper panels, we also see that constraints on the universal model become more stringent than those on the $L_{e}$ model for $M_{Z^{\prime}}\gtrsim 0.2$ MeV. For these masses, CE$\nu$NS starts dominating over scattering off electrons. For sufficiently large mediator masses, the contact-interaction approximation is recovered and the event rates vary with a power of $\left|g_{Z^{\prime}}\right|/M_{Z^{\prime}}$, which ranges from a power of two, due to the interference with the SM, to a power of four, when the new vector contribution is the dominant one (the interplay between the interference and quadratic contributions is responsible for the disconnected island observed in the upper right excluded region of the $B-L$ model).
From the figure, we read that, in this regime, the boundaries of the 90% CL excluded regions for the Dresden-II reactor experiment are approximately described by $\displaystyle\frac{\left|g_{Z^{\prime}}^{\rm univ}\right|}{M_{Z^{\prime}}/{\rm MeV}}$ $\displaystyle\gtrsim$ $\displaystyle 7.5\;(9.0)\times 10^{-7}~{},\quad{\rm for\;}M_{Z^{\prime}}\gtrsim 10\;{\rm MeV}~{},$ $\displaystyle\frac{\left|g_{Z^{\prime}}^{L_{e}}\right|}{M_{Z^{\prime}}/{\rm MeV}}$ $\displaystyle\gtrsim$ $\displaystyle 1.7\;(2.9)\times 10^{-5}~{},\quad{\rm for\;}M_{Z^{\prime}}\gtrsim 0.1\;{\rm MeV}~{},$ (55) $\displaystyle\frac{\left|g_{Z^{\prime}}^{B-L}\right|}{M_{Z^{\prime}}/{\rm MeV}}$ $\displaystyle\gtrsim$ $\displaystyle\quad\;\;(6.5)\times 10^{-7}~{},\quad{\rm for\;}M_{Z^{\prime}}\gtrsim 10\;{\rm MeV}~{},$ for the Fef (YBe) quenching factor for germanium. For the $B-L$ model, the interplay between scattering off electrons and off nuclei, and between the interference and quadratic pieces, means that the region obtained with the Fef quenching factor is not well described by this approximation. Also, as mentioned above, for the universal and $B-L$ models, the combination with COHERENT slightly extends the large-mass part of the excluded region beyond these values, as seen in the figure. ## 5 Summary and conclusions The first evidence of observation of CE$\nu$NS with reactor electron antineutrinos has been recently reported by an experiment conducted at the Dresden-II reactor site, using the NCC-1701 germanium detector Colaresi2022suggestive . This adds to the previous measurements performed by the COHERENT experiment, which uses neutrinos from a spallation neutron source and has observed CE$\nu$NS using both CsI[Na] and Ar nuclei COHERENT:2017ipa ; COHERENT:2018imc ; COHERENT:2020iec ; COHERENT:2020ybo . The very low momentum transfer involved in CE$\nu$NS renders this process an excellent probe of new interactions in the neutrino sector. In this paper, we have performed a detailed analysis of the new data from the Dresden-II experiment, with the aim of deriving powerful constraints on a variety of BSM scenarios. In particular, we have derived new bounds on neutrino NSI with quarks, magnetic moments for electron and muon neutrinos, and several models with light neutral scalar or vector mediators. Our analysis includes the contributions to the event rates from CE$\nu$NS and from elastic scattering off electrons. In our analysis of the Dresden-II data, we have taken special care in profiling over the parameters required to accurately model the background, closely following the data release of the experimental collaboration Colaresi2022suggestive . We have also quantified the dependence of the results on the quenching factor by considering two models: one based on the use of iron-filtered monochromatic neutrons (Fef) and another one based on photoneutron source measurements (YBe). The impact of this uncertainty on the results is approximately indicated by the spread between the bounds obtained for these two models. As for COHERENT, our analysis includes both timing and energy information in the case of CsI, while in the case of the Ar data, we perform a $\chi^{2}$ analysis using time, energy and F90 information. A careful treatment of systematic uncertainties is performed, following the prescriptions provided by the collaboration in refs. COHERENT:2018imc ; COHERENT:2020ybo . We have also quantified the impact of the combination of the new data from the Dresden-II reactor experiment with COHERENT data.
From the phenomenological point of view, the main difference between these two experiments arises from the different flavor composition of the neutrino flux: while a nuclear reactor only emits $\bar{\nu}_{e}$, at a spallation neutron source neutrinos are primarily produced from pion decay at rest and therefore, the flux contains an equal admixture of $\nu_{\mu}$, $\bar{\nu}_{\mu}$, and $\nu_{e}$, with higher energies than the reactor $\bar{\nu}_{e}$ flux. Therefore, generically, for models with lepton-flavor dependent effects (such as NSI), COHERENT is a priori sensitive to a larger number of parameters. Nevertheless, the flavor discrimination with COHERENT comes from the timing information and is only partial. Thus, within the current experimental precision, this results in degeneracies among the $\mu$-flavor and $e$-flavor parameters. The new Dresden-II data bring in complementary information in this respect, as they provide independent constraints on the $e$-flavor parameters. As a result, the combination of both experiments allows breaking (or at least alleviating) degeneracies present in the case of NSI, as illustrated in figure 3. Moreover, the combination of data obtained for scattering off different nuclei (Ar and CsI for COHERENT, Ge for the Dresden-II experiment) also adds additional synergies since, for a given set of NSI parameters, the impact on the weak charge is different depending on the particular nucleus under consideration (see eq. (4)). In the case of neutrino magnetic moment and light mediators with masses below ${\cal O}(100\;{\rm MeV})$, we generically find that the Dresden-II experiment outperforms the bounds obtained for COHERENT (an obvious exception is the magnetic moment of muon neutrinos, which is only constrained by COHERENT, since for the Dresden-II experiment only a $\bar{\nu}_{e}$ flux is available). This is a priori expected, since the Dresden-II experiment is sensitive to much lower values of the momentum transfer, a critical feature which drives the experimental reach of these scenarios. On the other hand, for mediators with masses above ${\cal O}(100\;{\rm MeV})$, we find that COHERENT and Dresden-II lead to similar bounds for the universal and $B-L$ scenarios, while if the new interaction is coupled only to leptons, the Dresden-II bound is more restrictive. We finish by commenting on the role of ES. While the event rates from this process are insignificant in the SM, their contribution could be significantly enhanced in the presence of new physics effects. Most notably, such contribution could pass the signal selection cuts for both the Dresden-II experiment and the COHERENT CsI detector. In fact, for the analysis of COHERENT CsI, this is a novel point of our analysis with respect to previous works in the literature. Our results show that the inclusion of ES is particularly relevant in scenarios with light vector mediators. In the case of COHERENT CsI, this improves the bound on the coupling obtained from CE$\nu$NS alone for the $B-L$ and universal models by at least a factor $\sim 5$, for mediator masses below $\sim 20$ MeV. And a similar improvement is obtained on the bounds derived from the Dresden-II reactor data when taking ES into account. Conversely, in the reconstructed energy range of the Dresden-II experiment, the contribution of ES induced by a neutrino magnetic moment and by light scalar mediators is smaller than that from CE$\nu$NS. At higher energies, however, ES would also dominate over CE$\nu$NS at the Dresden-II reactor experiment. 
Therefore, extending the reconstructed energy region to higher energies could enhance the reach of the Dresden-II experiment within these BSM scenarios. ###### Acknowledgements. We warmly thank Juan Collar for all the help with the new Dresden-II data release. We also thank Nicola Cargioli for noticing a plotting bug in figure 1. This project has received funding/support from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN, and from grants RTI2018-095979, PID2019-108892RB-I00, PID2019-105614GB-C21, PID2020-113334GB-I00, PID2020-113644GB-I00, CEX2019-000918-M and CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”. PC is also supported by Grant RYC2018-024240-I funded by MCIN/AEI/ 10.13039/501100011033 and by “ESF Investing in your Investing in your future”. M.C.G-G is also supported by U.S.A.-NSF grant PHY-1915093, and by AGAUR (Generalitat de Catalunya) grant 2017-SGR-929. LL is supported by the predoctoral training program non-doctoral research personnel of the Department of Education of the Basque Government. LL and FM are also supported by the European Research Council (ERC) under Grant Agreement No. 101039048-GanESS and the Severo Ochoa Program grant CEX2018-000867-S. SPR is also partially supported by the Portuguese FCT (CERN/FIS-PAR/0004/2019). ## Appendix A Binding energies for neutrino scattering off electrons For germanium we use x-ray $Z_{\rm eff}^{\rm Ge}(T_{e})=\left\\{\begin{array}[]{lcl}32&&T_{e}>11.11\;{\rm keV}\\\ 30&&11.11\;{\rm keV}>T_{e}>1.4146\;{\rm keV}\\\ 28&&1.4146\;{\rm keV}>T_{e}>1.248\;{\rm keV}\\\ 26&&1.248\;{\rm keV}>T_{e}>1.217\;{\rm keV}\\\ 22&&1.217\;{\rm keV}>T_{e}>0.1801\;{\rm keV}\\\ 20&&0.1801\;{\rm keV}>T_{e}>0.1249\;{\rm keV}\\\ 18&&0.1249\;{\rm keV}>T_{e}>0.1208\;{\rm keV}\\\ 14&&0.1208\;{\rm keV}>T_{e}>0.0298\;{\rm keV}\\\ 4&&0.0298\;{\rm keV}>T_{e}\end{array}\right.$ (56) while for caesium and for iodine we use x-ray $\begin{array}[]{cc}Z^{\mathrm{Cs}}_{\rm eff}(T_{e})=\left\\{\begin{array}[]{lcl}55&&T_{e}>35.99\;{\rm keV}\\\ 53&&35.99\;{\rm keV}>T_{e}>5.71\;{\rm keV}\\\ 51&&5.71\;{\rm keV}>T_{e}>5.36\;{\rm keV}\\\ 49&&5.36\;{\rm keV}>T_{e}>5.01\;{\rm keV}\\\ 45&&5.01\;{\rm keV}>T_{e}>1.21\;{\rm keV}\\\ 43&&1.21\;{\rm keV}>T_{e}>1.07\;{\rm keV}\\\ 41&&1.07\;{\rm keV}>T_{e}>1\;{\rm keV}\\\ 37&&1\;{\rm keV}>T_{e}>0.74\;{\rm keV}\\\ 33&&0.74\;{\rm keV}>T_{e}>0.73\;{\rm keV}\\\ 27&&0.73\;{\rm keV}>T_{e}>0.23\;{\rm keV}\\\ 25&&0.23\;{\rm keV}>T_{e}>0.17\;{\rm keV}\\\ 23&&0.17\;{\rm keV}>T_{e}>0.16\;{\rm keV}\\\ 19&&0.16\;{\rm keV}>T_{e}\end{array}\right.&Z^{\mathrm{I}}_{\rm eff}(T_{e})=\left\\{\begin{array}[]{lcl}53&&T_{e}>33.17\;{\rm keV}\\\ 51&&33.17\;{\rm keV}>T_{e}>5.19\;{\rm keV}\\\ 49&&5.19\;{\rm keV}>T_{e}>4.86\;{\rm keV}\\\ 47&&4.86\;{\rm keV}>T_{e}>4.56\;{\rm keV}\\\ 43&&4.56\;{\rm keV}>T_{e}>1.07\;{\rm keV}\\\ 41&&1.07\;{\rm keV}>T_{e}>0.93\;{\rm keV}\\\ 39&&0.93\;{\rm keV}>T_{e}>0.88\;{\rm keV}\\\ 35&&0.88\;{\rm keV}>T_{e}>0.63\;{\rm keV}\\\ 31&&0.63\;{\rm keV}>T_{e}>0.62\;{\rm keV}\\\ 25&&0.62\;{\rm keV}>T_{e}>0.19\;{\rm keV}\\\ 23&&0.19\;{\rm keV}>T_{e}>0.124\;{\rm keV}\\\ 21&&0.124\;{\rm keV}>T_{e}>0.123\;{\rm keV}\\\ 17&&0.123\;{\rm keV}>T_{e}\end{array}\right.\end{array}$ (57) noting that $Z_{\rm eff}^{\mathrm{CsI}}(T_{e})=\frac{1}{2}\left[Z_{\rm eff}^{\mathrm{Cs}}(T_{e})+Z_{\rm eff}^{\mathrm{I}}(T_{e})\right]$. ## References * (1) D. Z. 
ISWC’22: The 21st International Semantic Web Conference, October 23–27, 2022, Hangzhou, China

# SignalKG: Towards Reasoning about the Underlying Causes of Sensor Observations

Anj Simmons, Rajesh Vasa, Antonio Giardina Applied Artificial Intelligence Institute, Deakin University, Geelong, Australia

###### Abstract

This paper demonstrates our vision for knowledge graphs that assist machines to reason about the cause of signals observed by sensors. We show how the approach allows for constructing smarter surveillance systems that reason about the most likely cause (e.g., an attacker breaking a window) of a signal rather than acting directly on the received signal without consideration for how it was produced.

###### keywords: knowledge graph, ontology, sensor, surveillance

## 1 Introduction

Standards such as the Semantic Sensor Network (SSN/SOSA) ontology [1] allow capturing the semantics of sensor observations, and emerging standards for smart buildings [2, 3] and smart cities allow capturing the semantics of the environments in which sensors operate. However, reasoning about the underlying cause of sensor observations requires not only knowledge of the sensors and their environment, but also an understanding of the signals they detect and the possible causes of these signals. For example, inferring that the sound of breaking glass may be due to a broken window requires knowledge of the fact that glass windows produce a distinct sound when broken and that this sound propagates as sound waves through the air to a sensor such as a human ear or microphone. This paper proposes a signal knowledge graph (SignalKG) to support machines to reason about the underlying cause of sensor observations. Sensors and their environment are represented using existing standards, and then linked to SignalKG. To reason about the cause of sensor observations, we automatically generate a Bayesian network based on information in the knowledge graph, and use this to infer the posterior probability of causes given the sensor data.

Figure 1: High-level overview of the SignalKG ontology

Figure 1 presents an ontology describing the high-level concepts. A category of entities (e.g., humans) perform actions (e.g., walking) that act on a type of object or place (e.g., in hallways), which in turn create a type of signal (e.g., the sound of footsteps). A sensor observes a particular type of signal, which usually reduces in strength with distance and can be distorted by surrounding objects (e.g., a wall between a sound source and the receiver attenuates the sound signal). The sensor may implement a classifier to detect the presence of the signal (e.g., an acoustic detector may make use of a binary machine learning classifier to detect if the sound of footsteps is present or not). A formal RDF/OWL specification of the SignalKG ontology is available online (https://signalkg.visualmodel.org/skg), as well as an interactive demonstration (https://signalkg.visualmodel.org) of how SignalKG can be applied and used to reason about the underlying causes of sensor observations.
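As a rough illustration of how these concepts can be instantiated as RDF triples, the sketch below builds a tiny graph with rdflib. The class and property names used here (Entity, Action, Signal, performs, createsSignal, observes) are chosen for readability and are not guaranteed to match the terms of the published SignalKG specification; the instance IRIs are hypothetical.

```python
from rdflib import Graph, Namespace, RDF

SKG = Namespace("https://signalkg.visualmodel.org/skg#")  # base IRI from the paper
EX = Namespace("http://example.org/building#")            # hypothetical instance data

g = Graph()
g.bind("skg", SKG)
g.bind("ex", EX)

# Entity -> Action -> Signal -> Sensor chain (illustrative terms, not the normative ontology)
g.add((EX.Attacker, RDF.type, SKG.Entity))
g.add((EX.BreakWindow, RDF.type, SKG.Action))
g.add((EX.Attacker, SKG.performs, EX.BreakWindow))
g.add((EX.SoundOfBreakingGlass, RDF.type, SKG.Signal))
g.add((EX.BreakWindow, SKG.createsSignal, EX.SoundOfBreakingGlass))
g.add((EX.Microphone1, SKG.observes, EX.SoundOfBreakingGlass))

print(g.serialize(format="turtle"))
```

Serialising the graph to Turtle makes it easy to inspect the entity-action-signal-sensor chain that the Bayesian network generation described in the following sections walks over.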
## 2 Related Work Past research has utilised ontologies for the purpose of threat and situation awareness using description logic [4] and rule based reasoning [5]. However, these approaches assume that threats can be classified into classes according to deterministic rules, whereas in reality threats may be probabilistic in nature and there may be multiple possible explanations for a given set of observations. To overcome this limitation, reasoning methods have been proposed that combine deterministic rules with a Bayesian network for reasoning probabilistically [6]. However, the structure of the Bayesian network and associated probabilities need to be manually specified. In contrast, we seek to express this knowledge in a reusable and extensible form. There have been attempts to extend OWL to support representing/reasoning about uncertainty [7]. However, general approaches do not directly specify concepts for reasoning about sensor signals. ## 3 Demonstration Figure 2: Knowledge graph of audio signals constructed using SignalKG ontology showing link to building rooms/assets and sensors Figure 2 demonstrates how concepts in the SignalKG ontology can be applied to construct a knowledge graph of audio signals. Attackers and employees are entities that are capable of producing an action, such as breaking a window or dropping a tray of glasses in a dining room. Actions create signals, in this case, the sound of breaking glass or the sound of dropped glass. The building asset/room type on/in which an action can occur is represented using the RealEstateCore ontology [2]. To group similar signals together, we use the Simple Knowledge Organization System (SKOS) [8] ‘broader’ property to represent a signal hierarchy, for example, the sound of breaking glass is similar to the sound of dropped glass and thus are grouped under the same category. The knowledge graph also includes information about how signals propagate, for example, that sound intensity reduces with distance (according to an inverse square law) and is attenuated by walls. To model the case in which signals are classified on the sensing device, we allow for describing different classification models, such as YAMNet, an audio classifier that recognises 521 classes of sounds (e.g., glass). Figure 3: Bayesian network before (left) and after (right) conditioning on sensor observations. Green bars indicate the probability of each node value. (Key: a=action, s=signal emitted due to action, r=received signal strength at location of sensor, d=detected signal after classification). Visualisation created with jsbayes-viz [9]. Sensor observations can be linked to the signal knowledge graph via the property (signal) that the sensor observes. Our interactive demonstration includes a simulator to generate sample sensor observations (represented using the SSN/SOSA ontology) for hypothetical scenarios that could occur. The goal is to infer what took place given only the sensor observations, knowledge of the building and sensor placement, and our understanding of possible underlying causes of observed signals (specified using SignalKG). To support reasoning probabilistically about causes of sensor observations, the knowledge graph also includes probabilities, such as the prior probability that an entity will be present, and the probability that an entity (if present) will perform an action. As our goal is to infer a cause given observations, it lends itself to Bayesian reasoning. 
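To make this probabilistic reasoning concrete, the sketch below wires up a three-node chain (attacker present, window broken, glass sound detected) and estimates the posterior by likelihood-weighted sampling, the same inference strategy used in the demonstration. The conditional probabilities, attenuation constants, and sensitivity values are made-up illustrative numbers, not the ones encoded in the actual SignalKG demonstration.

```python
import random

def p_detect(distance_m, n_walls, source_intensity=1.0, wall_attenuation=0.5,
             classifier_sensitivity=0.9):
    """Probability that a sensor detects an emitted signal, combining an
    inverse-square falloff with per-wall attenuation (constants are illustrative)."""
    received = source_intensity / max(distance_m, 1.0) ** 2
    received *= wall_attenuation ** n_walls
    return classifier_sensitivity * min(received, 1.0)

# Hypothetical probabilities for a single cause-effect chain.
P_ATTACKER = 0.5                                # prior that an attacker is present
P_BREAK = {True: 0.6, False: 0.01}              # P(window broken | attacker present?)
P_SOUND = {True: p_detect(2.0, 0), False: 0.02} # P(glass sound detected | broken?); 0.02 is a false-alarm rate

def posterior_attacker(n_samples=20_000, observed_sound=True):
    """Estimate P(attacker | microphone observation) with likelihood weighting."""
    w = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        attacker = random.random() < P_ATTACKER
        broken = random.random() < P_BREAK[attacker]
        # The evidence node is fixed, so each sample is weighted by its likelihood.
        like = P_SOUND[broken] if observed_sound else 1.0 - P_SOUND[broken]
        w[attacker] += like
    return w[True] / (w[True] + w[False])

print(f"P(attacker | glass sound detected) ~ {posterior_attacker():.2f}")
```

With an informative likelihood ratio between the two hypotheses, conditioning on the detected sound pushes the posterior for the attacker well above the 50% prior, mirroring the behaviour reported for the demonstration below.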
Rather than manually specifying a Bayesian network for a particular scenario, we automatically generate one based on the information in the knowledge graph. Encoding all the information needed to reason about signals in the knowledge graph itself helps facilitate reuse and extension. Nodes in the Bayesian network are generated for each entity, action at a location, signal emitted/received, and sensor. While our simple example results in a 1:1 mapping (shown in Figure 3), in more complex scenarios there may be vastly more nodes due to all the possible permutations of entity, action, location, signal and sensor. If multiple types of signals are present (audio, vision, social, etc.) then these all appear as part of the same generated Bayesian network, e.g., an attacker breaking a window will create both audio (sound of breaking glass) and vision (suspicious behaviour in a video feed) based signals. Prior probabilities for entities and actions need to be specified in the knowledge graph. The probability that a signal emitted by an action will be detected by a sensor is calculated based on the distance from the location of the action to the location of the sensor, how the signal intensity reduces with distance (e.g., the knowledge graph may specify an inverse square law for sound signals), any barriers between the source and the sensor that may attenuate/block the signal (e.g., the knowledge graph may specify sound is attenuated by walls), and the sensitivity of the classifier used by the sensor to detect presence of a signal. Once we have generated a Bayesian network, we can condition it on sensor observations to infer the posterior probability of the underlying cause. For the demonstration, we estimate the posterior probability via likelihood weighting, as implemented by [9], drawing 20,000 samples (the number of samples to draw is a trade-off between accuracy and computation time). In the example, prior to conditioning on observations, there is a 50% chance of an attacker being present. After conditioning on the observation that the microphone has detected the sound of glass, the posterior probability of an attacker increases to 97%.

## 4 Next Steps

Even for the simple example of detecting building intrusions, the space of possible causes is large (e.g., an attacker could impersonate an employee then ask someone to let them in, or suspicious sounds could be due to a movie playing in the background). Furthermore, signals are not independent (as assumed by our preliminary prototype), but rather occur in sequences (e.g., the sound of footsteps, followed by a weapon detected in video footage, followed by a scream) that could help more reliably distinguish between possible causes. Also, more realistic models of signal propagation are needed, which may require continuous probability distributions rather than a conditional probability table over a discrete set of values as in our example Bayesian network. To support efficient reasoning about these more complex scenarios, we plan to explore generation of probabilistic programs in place of the discrete Bayesian networks used in this paper. A practical barrier to uptake of our approach is the need to specify prior probabilities for each action that an entity can perform. Ordinary behaviour could potentially be learned from data. However, modelling prior probabilities of actions intruders perform is more difficult, as attacks are rare events (limited data to learn from) and an adversary will adjust their actions to avoid detection.
In future work, we plan to include the goals of the intruder as part of the knowledge graph, then use a game theoretic approach to determine probable actions they will take rather than manually specifying prior probabilities for each action. ###### Acknowledgements. This research was funded by National Intelligence Postdoctoral Grant NIPG-2021-006. ## References * Haller et al. [2018] A. Haller, K. Janowicz, S. J. Cox, M. Lefrançois, K. Taylor, D. Le Phuoc, J. Lieberman, R. García-Castro, R. Atkinson, C. Stadler, The modular SSN ontology: A joint W3C and OGC standard specifying the semantics of sensors, observations, sampling, and actuation, Semantic Web 10 (2018) 9–32. doi:10.3233/SW-180320. * Hammar et al. [2019] K. Hammar, E. O. Wallin, P. Karlberg, D. Hälleberg, The RealEstateCore Ontology, in: The Semantic Web – ISWC 2019, 2019, pp. 130–145. doi:10.1007/978-3-030-30796-7_9. * Rasmussen et al. [2021] M. H. Rasmussen, M. Lefrançois, G. F. Schneider, P. Pauwels, BOT: the building topology ontology of the W3C linked building data group, Semantic Web 12 (2021) 143–161. doi:10.3233/SW-200385. * Roy and Guyard [2012] J. Roy, A. B. Guyard, Supporting threat analysis through description logic reasoning, in: 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, IEEE, 2012, pp. 308–315. doi:10.1109/CogSIMA.2012.6188401. * Pai et al. [2017] F. P. Pai, L. J. Yang, Y. C. Chung, Multi-layer ontology based information fusion for situation awareness, Applied Intelligence 46 (2017) 285–307. doi:10.1007/s10489-016-0834-7. * Yao et al. [2022] H. Yao, C. Han, F. Xu, Reasoning Methods of Unmanned Underwater Vehicle Situation Awareness Based on Ontology and Bayesian Network, Complexity 2022 (2022) 7143974. doi:10.1155/2022/7143974. * Carvalho et al. [2017] R. N. Carvalho, K. B. Laskey, P. C. Costa, PR-OWL – a language for defining probabilistic ontologies, International Journal of Approximate Reasoning 91 (2017) 56–79. doi:10.1016/j.ijar.2017.08.011. * Miles and Bechhofer [2009] A. Miles, S. Bechhofer, SKOS simple knowledge organization system reference, W3C recommendation (2009). URL: https://www.w3.org/TR/skos-reference/. * Vang [2016] J. Vang, jsbayes-viz, 2016. URL: https://github.com/vangj/jsbayes-viz/.
# Enhancing Intrinsic Features for Debiasing via Investigating Class-Discerning Common Attributes in Bias-Contrastive Pair

Jeonghoon Park*1, Chaeyeon Chung*1, Juyoung Lee2, Jaegul Choo1 1Korea Advanced Institute of Science and Technology, South Korea, 2Kakao Brain, South Korea. 1{jeonghoon_park, cy_chung<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

In the image classification task, deep neural networks frequently rely on bias attributes that are spuriously correlated with a target class in the presence of dataset bias, resulting in degraded performance when applied to data without bias attributes. The task of debiasing aims to compel classifiers to learn intrinsic attributes that inherently define a target class rather than focusing on bias attributes. While recent approaches mainly focus on emphasizing the learning of data samples without bias attributes (i.e., bias-conflicting samples) compared to samples with bias attributes (i.e., bias-aligned samples), they fall short of directly guiding models where to focus for learning intrinsic features. To address this limitation, this paper proposes a method that provides the model with explicit spatial guidance that indicates the region of intrinsic features. We first identify the intrinsic features by investigating the class-discerning common features between a bias-aligned (BA) sample and a bias-conflicting (BC) sample (i.e., bias-contrastive pair). Next, we enhance the intrinsic features in the BA sample that are relatively under-exploited for prediction compared to the BC sample. To construct the bias-contrastive pair without using bias information, we introduce a bias-negative score that distinguishes BC samples from BA samples employing a biased model. The experiments demonstrate that our method achieves state-of-the-art performance on synthetic and real-world datasets with various levels of bias severity.

* indicates equal contribution.

## 1 Introduction

Deep neural networks in image classification [21, 5, 20, 26] are known to be vulnerable to the dataset bias [23], which refers to a spurious correlation between the target classes and the peripheral attributes. Basically, image classification aims to learn intrinsic attributes — the visual features that inherently define a target class — that generally appear across the samples in the class. However, when the dataset bias exists in the training data, the models tend to use the frequently appearing peripheral attribute (_i.e_., bias attribute) to predict the class unintentionally. For instance, if airplanes in the training images are mostly in the sky, a model can heavily rely on the sky to predict an image as an airplane class due to its high correlation with the airplane class. This indicates that the model is biased towards the bias attribute (_e.g_., sky) rather than focusing on intrinsic features (_e.g_., the shape of wings or the body) when making decisions. As a result, even though the biased model achieves high accuracy on the samples including bias attributes (_e.g_., airplanes in the sky), termed as bias-aligned (BA) samples, it may fail to accurately predict samples devoid of such bias attributes (_e.g_., airplanes on the runway), referred to as bias-conflicting (BC) samples. In this regard, debiasing aims to encourage the model to focus on intrinsic attributes rather than bias attributes when dataset bias exists.
One straightforward approach is utilizing prior knowledge regarding bias (_e.g_., labels for bias attribute) to inform the model which attributes to focus on or not to focus on [9, 25, 2, 22]. However, acquiring such bias information is often infeasible in real-world scenarios. Therefore, recent studies [15, 14, 13, 12, 8] have proposed debiasing methods that do not require bias information. They identify and emphasize BC samples during the training using an additional biased classifier that mainly learns the bias attributes. However, such a training strategy fails to directly indicate where the model should focus to learn the intrinsic features. To address this issue, we present a debiasing approach that explicitly informs the model of the region of the intrinsic features during the training while not using bias labels. While the intrinsic features in the unbiased dataset can simply be identified in generally appearing features in the training samples, generally appearing features in the biased dataset inevitably include bias features. Therefore, we identify the intrinsic features in the biased dataset by investigating the common features between a BA and a BC sample (i.e., a bias-contrastive pair). Here, the common features also need to be class-discerning since the common features might include irrelevant environmental features. For example, in the above scenario, the common feature between an airplane in the sky (BA sample) and an airplane on the runway (BC sample) might include the features of wings, the body, and trees. In this case, the intrinsic features are the shape of the wings and the body that can distinguish the airplane class from the others. Specifically, we introduce an intrinsic feature enhancement (IE) weight that identifies the spatial regions of intrinsic features commonly appearing in a bias-contrastive pair. We leverage an auxiliary sample in addition to the original input to construct the bias-contrastive pair. Since the majority of the original input from training samples are BA samples, we mainly adopt the BC samples as the auxiliary sample. To achieve this without bias information, we present a bias-negative (BN) score that identifies BC samples by employing a classification loss of a biased model. Our IE weight investigates common features in the bias-contrastive pair and identifies the class-discerning features among the common features. Within the identified intrinsic features, we enhance the features that are relatively under-exploited in the BA samples compared to the BC samples. In this way, we can explicitly provide our model with spatial guidance for intrinsic attributes while not using bias labels. We verify the effectiveness of our method on both synthetic and real-world datasets with various levels of bias severity. Furthermore, the in-depth analysis demonstrates that our method successfully guides the model to make predictions based on the intrinsic features. ## 2 Related work Debiasing with bias information. Previous approaches [9, 22, 18, 25, 4, 2] utilize bias labels or predefined bias types to encourage the model to learn intrinsic attributes for debiasing. Kim _et al_. [9], Tartaglione _et al_. [22], and Sagawa _et al_. [18] employ bias labels to encourage the model not to learn specific bias features. Wang _et al_. [25] and Bahng _et al_. [2] predefine the bias type (e.g., color, texture, etc.) and utilize such prior knowledge to supervise models to be robust against such predefined bias type. 
However, obtaining bias information requires additional cost, which is often infeasible in the real world. Debiasing without bias information. Recent studies [3, 7, 15, 12, 13, 14, 8, 1, 28, 16, 11] propose debiasing strategies that do not require bias information. Nam _et al_. [15] present an approach that encourages the model to concentrate on BC samples during the training process considering that the bias attributes are easier to learn than intrinsic attributes. Instead of using bias information, they additionally train a biased model that mainly learns bias attributes and regard the samples that are not easily trained by the biased model as BC samples. Lee _et al_. [13] reveal that BC samples serve as noisy samples when training the additional biased model and propose a method to eliminate such BC samples using multiple biased models. Liu _et al_. [14] regard the samples misclassified by the model trained with empirical risk minimization as BC samples and emphasize them during training of a debiased model. Also, MaskTune [1] expects the model to learn intrinsic features by fine-tuning the model with the data whose already-explored area is masked out using Grad-CAM [19]. Another stream of approaches [10, 12, 8] synthesize samples having similar characteristics with BC samples and employ them to train a debiased model. Kim _et al_. [10] synthesize images without bias attributes leveraging an image- to-image translation model [17]. Lee _et al_. [12] and Hwang _et al_. [8] augment BC samples in the feature space by employing the disentangled representations and mixup [27], respectively. A recent pair-wise debiasing method $\mathcal{X}^{2}$-model [28] encourages the model to retain intra-class compactness using samples generated via feature-level interpolation between BC and BA samples. However, such approaches lack explicit supervision about which features to focus on to learn intrinsic features. To address this issue, we present a debiasing method that provides spatial guidance to encourage a model to learn intrinsic features during the training while not using bias labels. We design our model architecture using bias-contrastive pairs referring to the previous studies [8, 28]. Figure 1: Overview of our method. We provide explicit spatial guidance $g(\mathbf{z})$ for a debiased model $f_{d}$, which is described with $f_{d}^{\text{emb}}$ and $f_{d}^{\text{cls}}$, to learn intrinsic features. To achieve this, we leverage a bias-contrastive pair, $\mathbf{x}$ and $\mathbf{x}^{\text{BN}}$ from the same target class $y$. $g(\mathbf{z})$ highlights intrinsic features that are relatively under-exploited in $\mathbf{z}$ compared to $\mathbf{z}^{\text{BN}}$, calculated by common feature score $c$ and relative-exploitation score $r$. Here, we mainly adopt BC samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$ to construct $\mathcal{D}^{\text{BN}}$, where we sample $\mathbf{x}^{\text{BN}}$. $\mathcal{D}^{\text{BN}}$ is updated every iteration using the BN score $S$, which is also updated every iteration. At the inference, we only use $f_{d}$ in the gray-colored area. ## 3 Methodology ### 3.1 Overview As shown in Fig. 1, our framework consists of a biased model $f_{b}$ that focuses on bias attributes and a debiased model $f_{d}$ that learns debiased representations. We use BiasEnsemble (BE) [13] as a backbone, where $f_{b}$ is trained with bias-amplified dataset $\mathcal{D}^{\text{A}}$ which mainly consists of BA samples, while $f_{d}$ concentrates on the samples that $f_{b}$ fails to learn. 
Our method provides $f_{d}$ with spatial guidance for intrinsic features using a bias-contrastive pair: an input $\mathbf{x}$ and an auxiliary input $\mathbf{x}^{\text{BN}}$. We denote the auxiliary input $\mathbf{x}^{\text{BN}}$ as a bias-negative (BN) sample because we primarily adopt samples devoid of bias attributes. We sample an image $\mathbf{x}$ from the original training data $\mathcal{D}$, and $\mathbf{x}^{\text{BN}}$ from a BN dataset $\mathcal{D}^{\text{BN}}$ which mainly consists of BC samples. $\mathcal{D}^{\text{BN}}$ is updated every iteration to mainly include BC samples using the BN score $S$ that employs $f_{b}$ to identify BC samples. The BN score is also updated every iteration. Given the intermediate features $\mathbf{z}$ and $\mathbf{z}^{\text{BN}}$, we first extract the common features between the bias-contrastive pair ($c(\mathbf{z})$ in Fig. 1). Also, we identify the class-discerning features that are relatively under-exploited in $\mathbf{z}$ compared to $\mathbf{z}^{\text{BN}}$ ($r(\mathbf{z})$ in Fig. 1). Next, we calculate the IE weight that indicates relatively under-exploited intrinsic features in $\mathbf{z}$ based on $c(\mathbf{z})$ and $r(\mathbf{z})$ ($\text{IE}(\mathbf{z})$ in Fig. 1). Finally, we obtain the guidance $g(\mathbf{z})$ that emphasizes the region of intrinsic feature in $\mathbf{z}$ during the training. At the inference, we utilize $f_{d}$ without $\mathbf{x}^{\text{BN}}$, as in a gray-colored area of Fig. 1. ### 3.2 Constructing bias-negative dataset We construct a BN dataset $\mathcal{D}^{\text{BN}}$, where we sample $\mathbf{x}^{\text{BN}}$ during the training. As the majority of the training dataset is BA samples, we aim to mainly adopt BC samples as $\mathbf{x}^{\text{BN}}$ to construct bias-contrastive pairs. To achieve this, we first construct $\mathcal{D}^{\text{BN}}_{\text{cand}}$, a candidate dataset for $\mathcal{D}^{\text{BN}}$, that contains roughly identified BC samples. During the training, we dynamically update $\mathcal{D}^{\text{BN}}$ every iteration to mainly adopt BC samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$ using our newly proposed BN score. Constructing candidate dataset $\mathcal{D}^{\text{BN}}_{\text{cand}}$. To roughly identify BC samples in $\mathcal{D}$, we filter out easily learned BA samples from $\mathcal{D}$ using multiple biased models, following BE [13]. Since the bias features are easier to learn than the intrinsic features [15], each biased model is trained only for a few iterations so that BC samples can be distinguished from the easily learned BA samples. Inspired by JTT [14], we regard the samples that are incorrectly predicted by the majority of the biased models as BC samples. Finally, we construct $\mathcal{D}^{\text{BN}}_{\text{cand}}$ with the roughly identified BC samples. Adopting BC samples with BN score. We introduce a BN score to update $\mathcal{D}^{\text{BN}}$ to primarily exploit BC samples as $\mathbf{x}^{\text{BN}}$ from $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during training $f_{d}$. Considering the unavailability of bias labels, the BN score employs $f_{b}$ to further exclude BA samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$. As training proceeds, $f_{b}$ is overfitted to the bias attributes in $\mathcal{D}^{\text{A}}$ and outputs a high probability on the ground-truth label for the samples that have similar bias features with samples in $\mathcal{D}^{\text{A}}$. 
This indicates that samples whose $f_{b}$ loss decreases as training proceeds are likely to have bias attributes learned from $\mathcal{D}^{\text{A}}$. Such samples disturb the extraction of intrinsic features when selected as $\mathbf{x}^{\text{BN}}$. To validate this, we investigate the samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ whose $f_{b}$ loss at the later stage of training (50K-th iteration) decreases compared to the early stage of training (1K-th iteration). The result shows that 95.63% of them are BA samples. We use the BFFHQ dataset [10] with a bias severity of 1% for the analysis. Further details of the dataset are described in Sec. 4.1. In this regard, we design a BN score to exclude the samples with decreasing $f_{b}$ loss from the $\mathcal{D}^{\text{BN}}_{\text{cand}}$ to construct $\mathcal{D}^{\text{BN}}$ by tracking the $f_{b}$ loss during training $f_{d}$. First, the $f_{b}$ loss of $\mathbf{x}$ at the $t$-th iteration is calculated as follows: $l_{t}(\mathbf{x})=\alpha_{l}\cdot\mathcal{L}_{\text{CE}}(f_{b}(\mathbf{x}),y)+(1-\alpha_{l})\cdot l_{t-1}(\mathbf{x}),$ (1) where $\mathcal{L}_{\text{CE}}(f_{b}(\mathbf{x}),y)$ indicates the cross- entropy (CE) loss of $\mathbf{x}$ on its ground-truth label $y$ and $\alpha_{l}$ is a hyperparameter for the exponential moving average (EMA). We employ EMA to enable a stable tracking of the classification losses. Note that $l_{t}$ is updated only for the samples in a mini-batch at the $t$-th iteration. The BN score tracks $l_{t}(\mathbf{x})$ compared to the loss recorded at the early stage of training. The BN score at the $t$-th iteration is formulated as follows: $s_{t}(\mathbf{x})=\alpha_{s}\cdot(l_{t}(\mathbf{x})-l_{\text{ref}}(\mathbf{x}))+(1-\alpha_{s})\cdot s_{t-1}(\mathbf{x}),$ (2) where $\alpha_{s}$ is a hyperparameter for the EMA and $l_{\text{ref}}(\mathbf{x})$ denotes the reference loss of $\mathbf{x}$ that is first recorded after a few iterations of training. We exploit EMA to stabilize the tracking. Note that we update $s_{t}$ only for the samples in a mini-batch at the $t$-th iteration. The negative value of $s_{t}(\mathbf{x})$ indicates that the loss of $\mathbf{x}$ decreased compared to the early stage of training, which means that the sample is likely to contain bias attributes. Updating $\mathcal{D}^{\text{BN}}$ with BN score. At every iteration, we update $\mathcal{D}^{\text{BN}}$ to exclude the newly detected BA samples whose BN score $s_{t}(\mathbf{x})$ is smaller than zero as follows: $\mathcal{D}^{\text{BN}}_{t}=\\{\mathbf{x}\mid s_{t}(\mathbf{x})>0,\mathbf{x}\sim\mathcal{D}^{\text{BN}}_{\text{cand}}\\},$ (3) where $\mathcal{D}^{\text{BN}}_{t}$ indicates $\mathcal{D}^{\text{BN}}$ at the $t$-th iteration. We employ $\mathbf{x}^{\text{BN}}\sim\mathcal{D}^{\text{BN}}_{t}$ as an auxiliary input at the $t$-th iteration. In this way, we can construct a bias-contrastive pair that encourages the intrinsic attributes to be extracted as their common features. We abbreviate $\mathcal{D}^{\text{BN}}_{t}$ as $\mathcal{D}^{\text{BN}}$ for brevity in the rest of the paper. ### 3.3 Intrinsic feature enhancement To emphasize the intrinsic features in $f_{d}$, we introduce the intrinsic feature enhancement (IE) weight that imposes a high value on the intrinsic features. 
The IE weight identifies the region of intrinsic features from bias- contrastive pairs by investigating 1) their common features with common feature score $c$ and 2) class-discerning features that are relatively under- exploited in the input with relative-exploitation score $r$. For the explanation, we split $f_{d}$ into two parts. $f_{d}^{\text{emb}}:\mathbb{R}^{H\times W\times 3}\to\mathbb{R}^{h\times w\times c}$ maps an input to the intermediate feature, and $f_{d}^{\text{cls}}:\mathbb{R}^{h\times w\times c}\to\mathbb{R}^{C}$ is composed of the average pooling and the linear classifier and outputs the classification logits, where $f_{d}(\mathbf{x})=f_{d}^{\text{cls}}\left(f_{d}^{\text{emb}}(\mathbf{x})\right)$. First, given the input $\mathbf{x}$, common feature score $c$ identifies the features that are similar to the features in $\mathbf{x}^{\text{BN}}$ that has the same class label as $\mathbf{x}$ while not having bias attributes. Specifically, we extract the intermediate features $\mathbf{z}=f_{d}^{\text{emb}}(\mathbf{x})$ and $\mathbf{z}^{\text{BN}}=f_{d}^{\text{emb}}(\mathbf{x}^{\text{BN}})$, respectively. Next, we obtain the common feature score of $\mathbf{z}$ (_i.e_., $c(\mathbf{z})\in\mathbb{R}^{h\times w}$). Given the $n$-th feature of $\mathbf{z}$ (_i.e_., $\mathbf{z}_{n}\in\mathbb{R}^{c}$), let $i^{*}$-th feature of $\mathbf{z}^{\text{BN}}$ (_i.e_., $\mathbf{z}^{\text{BN}}_{i^{*}}\in\mathbb{R}^{c}$) be the most similar feature to $\mathbf{z}_{n}$, where $i^{*}=\underset{i}{\arg\max}\left(\mathbf{z}^{\text{BN}}_{i}\cdot\mathbf{z}_{n}\right)$. Then, the $n$-th element of $c(\mathbf{z})$ denotes the similarity score between $\mathbf{z}_{n}$ and $\mathbf{z}^{\text{BN}}_{i^{*}}$, which is formulated as follows: $c(\mathbf{z})_{n}=\frac{\mathbf{z}_{i^{*}}^{\text{BN}}\cdot\mathbf{z}_{n}}{\max_{i,j}(\mathbf{z}_{i}^{\text{BN}}\cdot\mathbf{z}_{j})},$ (4) where $\cdot$ indicates a dot product operation. We adopt the dot product for the similarity metric to consider both the scale and the direction of the features. The max normalization is employed to limit the score to less than one. We consider the features with a high common feature score $c$ in $\mathbf{z}$ as features that have a high likelihood of being intrinsic features. Next, the relative-exploitation score $r$ identifies class-discerning features that are relatively under-exploited in $\mathbf{x}$ compared to $\mathbf{x}^{\text{BN}}$. Since most of the $\mathbf{x}^{\text{BN}}$ does not contain bias attributes, we identify class-discerning intrinsic features by investigating the features that are mainly used to predict $\mathbf{x}^{\text{BN}}$ as its target label. At the same time, we identify the features that are under-exploited in the $\mathbf{x}$ compared to the $\mathbf{x}^{\text{BN}}$. To achieve this, we use a visual explanation map of Grad-CAM [19] that imposes a higher value on the features that have more contribution to predicting a specific label. We calculate the explanation map $\text{E}(\mathbf{z})$ and $\text{E}(\mathbf{z}^{\text{BN}})$ with respect to their ground-truth labels. We apply max normalization to the explanation maps to compare the relative importance of the features in prediction. We compare the $n$-th value of $\text{E}(\mathbf{z})$ (_i.e_., $\text{E}(\mathbf{z})_{n}$) with the $i^{*}$-th value of $\text{E}(\mathbf{z}^{\text{BN}})$, where $i^{*}$ is the index of the feature in $\mathbf{z}^{\text{BN}}$ that is the most similar with $\mathbf{z}_{n}$. 
Accordingly, the $n$-th element of $r(\mathbf{z})\in\mathbb{R}^{h\times w}$ is calculated as:

$r(\mathbf{z})_{n}=\left(\frac{2\text{E}(\mathbf{z}^{\text{BN}})_{i^{*}}}{\text{E}(\mathbf{z}^{\text{BN}})_{i^{*}}+\text{E}(\mathbf{z})_{n}}\right)^{\tau},$ (5)

where $\tau$ is the amplification factor. The score becomes larger than one when $\mathbf{z}_{n}$ is relatively under-exploited compared to $\mathbf{z}^{\text{BN}}_{i^{*}}$ for prediction. When $\mathbf{z}^{\text{BN}}_{i^{*}}$ is not used for discerning the class, $\text{E}(\mathbf{z}^{\text{BN}})_{i^{*}}$ becomes close to zero and the score converges to zero. Finally, the $n$-th element of $\text{IE}(\mathbf{z})$ is defined as:

$\text{IE}(\mathbf{z})_{n}=\max(c(\mathbf{z})_{n}\odot r(\mathbf{z})_{n},1),$ (6)

where $\odot$ indicates the element-wise multiplication. The IE weight has a large value on the features of $\mathbf{x}$ that commonly appear in $\mathbf{x}^{\text{BN}}$ but have not been exploited enough for the prediction of $\mathbf{x}$. We clip the values to be larger than one to enhance only the relatively under-exploited features in $\mathbf{z}$ while preserving the other features. Using the IE weight, we obtain the guidance $g(\mathbf{z})$ that emphasizes the intrinsic features in $\mathbf{z}$ as follows:

$g(\mathbf{z})=\mathbf{z}\odot\text{IE}(\mathbf{z}).$ (7)

We broadcast $\text{IE}(\mathbf{z})$ to match its shape with $\mathbf{z}$ before multiplication. During the training, this spatial guidance informs the model of where to focus to learn intrinsic features from training samples.

Algorithm 1 Debiasing with the intrinsic feature guidance
Input: pretrained biased models, training dataset $\mathcal{D}$, biased model $f_{b}$, debiased model $f_{d}$, reference iteration $T_{1}$ for BN score, starting iteration $T_{2}(\geq T_{1})$ to apply intrinsic feature guidance
Output: trained debiased model $f_{d}$
1: Construct $\mathcal{D}^{\text{A}},\mathcal{D}^{\text{BN}}_{\text{cand}}$ from $\mathcal{D}$ using pretrained biased models
2: for every iteration $t$ do
3:  Sample $\mathbf{x}\sim\mathcal{D}$
4:  if $t\geq T_{1}$ and $l_{\text{ref}}(\mathbf{x})$ is not initialized then
5:   $l_{\text{ref}}(\mathbf{x})\leftarrow l(\mathbf{x})$
6:  end if
7:  if $\mathbf{x}\in\mathcal{D}^{\text{A}}$ then
8:   Train $f_{b}(\mathbf{x})$ with $\mathcal{L}_{\text{CE}}$
9:  end if
10:  if $t<T_{2}$ then $\triangleright$ Train w/o guidance
11:   Train $f_{d}(\mathbf{x})$ with $\mathcal{L}_{\text{main}}$
12:  else if $t\geq T_{2}$ then $\triangleright$ Train w/ guidance
13:   Update $\mathcal{D}^{\text{BN}}$
14:   Sample $\mathbf{x}^{\text{BN}}\sim\mathcal{D}^{\text{BN}}$
15:   Train $f_{d}(\mathbf{x},\mathbf{x}^{\text{BN}})$ with $\mathcal{L}_{\text{total}}$
16:  end if
17: end for

### 3.4 Training with intrinsic feature guidance

We basically train $f_{d}$ with the CE loss as follows:

$\mathcal{L}_{\text{main}}=w(\mathbf{x})\mathcal{L}_{\text{CE}}(f_{d}(\mathbf{x}),y),$ (8)

where $w(\mathbf{x})$ is the sample reweighting value of $\mathbf{x}$ [15]. $w(\mathbf{x})$ emphasizes the samples that $f_{b}$ fails to learn, which are mostly BC samples. The detailed description of $w(\mathbf{x})$ is included in the Supplementary. In addition, we guide the model to focus on the region of intrinsic features through a guidance loss and a BN loss. We observe that the BN score has a higher value on the BC samples compared to the BA samples as training $f_{b}$ proceeds (see Sec. 4.3).
In this respect, we employ the BN score of $\mathbf{x}^{\text{BN}}$ (_i.e_., $s(\mathbf{x}^{\text{BN}})$) to upweight the loss when BC samples are adopted as $\mathbf{x}^{\text{BN}}$. Here, we clip the value of the loss weight $s(\mathbf{x}^{\text{BN}})$ to be larger than zero.

Guidance loss. To guide the model to exploit the intrinsic features from $\mathbf{x}$, we minimize the L1 distance between $g(\mathbf{z})$ and $\mathbf{z}$ as follows:

$\mathcal{L}_{\text{guide\\_sim}}=s(\mathbf{x}^{\text{BN}})\lVert\text{GAP}(\mathbf{z})-\text{GAP}\left({g(\mathbf{z})}\right)\rVert_{1},$ (9)

where GAP represents the global average pooling. $s(\mathbf{x}^{\text{BN}})$ is multiplied as a loss weight to impose a high weight on the loss when BC samples are selected as $\mathbf{x}^{\text{BN}}$. Also, we apply the CE loss to the guidance $g(\mathbf{z})$ to encourage it to include the intrinsic features that contribute to the correct prediction as follows:

$\mathcal{L}_{\text{guide\\_cls}}=w(\mathbf{x})\mathcal{L}_{\text{CE}}\left(f_{d}^{\text{cls}}(g(\mathbf{z})),y\right),$ (10)

where $w(\mathbf{x})$ is the reweighting value as in $\mathcal{L}_{\text{main}}$. Finally, our guidance loss $\mathcal{L}_{\text{guide}}$ is calculated as follows:

$\mathcal{L}_{\text{guide}}=\lambda_{\text{sim}}\mathcal{L}_{\text{guide\\_sim}}+\mathcal{L}_{\text{guide\\_cls}},$ (11)

where $\lambda_{\text{sim}}$ is a hyperparameter to control the relative significance between the losses. We set $\lambda_{\text{sim}}$ to 0.1.

BN loss. We also employ the CE loss on $\mathbf{x}^{\text{BN}}$ to encourage the model to learn class-discerning features. This enables the IE weight to find intrinsic features among the common features. The BN loss is defined as:

$\mathcal{L}_{\text{BN}}=s(\mathbf{x}^{\text{BN}})\mathcal{L}_{\text{CE}}(f_{d}(\mathbf{x}^{\text{BN}}),y).$ (12)

Here, we also exploit $s(\mathbf{x}^{\text{BN}})$ to impose a high weight on the loss when $\mathbf{x}^{\text{BN}}$ is a BC sample.

Overall objective function. In summary, the overall objective function is defined as follows:

$\mathcal{L}_{\text{total}}=\lambda_{\text{main}}\mathcal{L}_{\text{main}}+\mathcal{L}_{\text{guide}}+\mathcal{L}_{\text{BN}},$ (13)

where $\lambda_{\text{main}}$ is a constant value that linearly increases from zero to one during training $f_{d}$ with the guidance. This prevents the model from focusing on bias features in $\mathbf{x}$ in the early phase. The overall process of our method is provided in Algorithm 1. Here, we set $T_{1}$ and $T_{2}$ as 1K and 10K, respectively. Note that all the hyperparameters are identically applied across different datasets and bias severities. We provide further details of the training and the implementation in the Supplementary.
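The two quantities at the heart of the method, the BN score of Section 3.2 and the IE weight of Section 3.3, reduce to a few tensor operations. The sketch below is a simplified PyTorch rendering of Eqs. (1)-(2) and (4)-(7), assuming flattened spatial features of shape (N, c) and precomputed Grad-CAM maps; the EMA coefficients and the amplification factor tau are illustrative placeholders, since their values are deferred to the Supplementary.

```python
import torch

def update_bn_score(idx, ce_loss_fb, state, alpha_l=0.7, alpha_s=0.7):
    """EMA-tracked f_b loss l_t (Eq. 1) and BN score s_t (Eq. 2) for a mini-batch.
    `state` holds per-sample tensors `l`, `l_ref`, `s`; `idx` indexes the batch
    samples within the dataset. The alpha values here are placeholders."""
    l_t = alpha_l * ce_loss_fb + (1 - alpha_l) * state["l"][idx]
    s_t = alpha_s * (l_t - state["l_ref"][idx]) + (1 - alpha_s) * state["s"][idx]
    state["l"][idx], state["s"][idx] = l_t, s_t
    return s_t  # candidates with s_t <= 0 are excluded from D^BN (Eq. 3)

def ie_guidance(z, z_bn, e_z, e_z_bn, tau=2.0, eps=1e-8):
    """IE weight (Eqs. 4-6) and guidance g(z) (Eq. 7).
    z, z_bn:     (N, c) and (M, c) flattened spatial features of x and x^BN.
    e_z, e_z_bn: (N,) and (M,) Grad-CAM maps w.r.t. the ground-truth label."""
    sim = z @ z_bn.t()                    # dot products z_n . z^BN_i
    row_max, i_star = sim.max(dim=1)      # most similar BN feature for each z_n
    c = row_max / (sim.max() + eps)       # common-feature score (Eq. 4)
    e_z = e_z / (e_z.max() + eps)         # max-normalise both explanation maps
    e_z_bn = e_z_bn / (e_z_bn.max() + eps)
    r = (2 * e_z_bn[i_star] / (e_z_bn[i_star] + e_z + eps)) ** tau   # Eq. 5
    ie = torch.clamp(c * r, min=1.0)      # enhance only under-exploited features (Eq. 6)
    return ie, z * ie[:, None]            # guidance g(z), broadcast over channels (Eq. 7)
```

At training time, g(z) enters the guidance loss through the pooled features of Eq. (9) and the classifier head of Eq. (10), while the BN score both filters D^BN and reweights the losses of Eqs. (9) and (12).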
| Method | Waterbirds 0.5% | Waterbirds 1% | Waterbirds 2% | Waterbirds 5% | BFFHQ 0.5% | BFFHQ 1% | BFFHQ 2% | BFFHQ 5% | BAR 1% | BAR 5% |
|---|---|---|---|---|---|---|---|---|---|---|
| Vanilla [5] | 57.41 | 58.07 | 61.04 | 64.13 | 55.64 | 60.96 | 69.00 | 82.88 | 70.55 | 82.53 |
| HEX [25] | 57.88 | 58.28 | 61.02 | 64.32 | 56.96 | 62.32 | 70.72 | 83.40 | 70.48 | 81.20 |
| LNL [9] | 58.49 | 59.68 | 62.27 | 66.07 | 56.88 | 62.64 | 69.80 | 83.08 | - | - |
| EnD [22] | 58.47 | 57.81 | 61.26 | 64.11 | 55.96 | 60.88 | 69.72 | 82.88 | - | - |
| ReBias [2] | 55.44 | 55.93 | 58.53 | 62.14 | 55.76 | 60.68 | 69.60 | 82.64 | 73.04 | 83.90 |
| LfF [15] | 60.66 | 61.78 | 58.92 | 61.43 | 65.19 | 69.24 | 73.08 | 79.80 | 70.16 | 82.95 |
| DisEnt [12] | 59.59 | 60.05 | 59.76 | 64.01 | 62.08 | 66.00 | 69.92 | 80.68 | 70.33 | 83.13 |
| LfF+BE [13] | 61.22 | 62.58 | 63.00 | 63.48 | 67.36 | 75.08 | 80.32 | 85.48 | 73.36 | 83.87 |
| DisEnt+BE [13] | 51.65 | 54.10 | 53.43 | 54.21 | 67.56 | 73.48 | 79.48 | 84.84 | 73.29 | 84.96 |
| Ours | **63.64** | **65.22** | **65.23** | **66.33** | **71.68** | **77.56** | **83.08** | **87.60** | **75.14** | **85.03** |

Table 1: Comparison to the baselines. We measure the classification accuracy on test sets with different bias severities (the column headers give the ratio of bias-conflicting samples). The best accuracy values are in bold. The hyphen mark ‘-’ means it is not applicable. Results with standard deviations are provided in the Supplementary.

| Dataset | $\mathcal{D}^{\text{BN}}_{\text{cand}}$ $-$ $\mathcal{D}^{\text{BN}}$ (BA) | $\mathcal{D}^{\text{BN}}_{\text{cand}}$ $-$ $\mathcal{D}^{\text{BN}}$ (BC) | $\mathcal{D}^{\text{BN}}$/$\mathcal{D}$ (%) (BA) | $\mathcal{D}^{\text{BN}}$/$\mathcal{D}$ (%) (BC) |
|---|---|---|---|---|
| Waterbirds | 26.50 $\pm$5.32 | 0.75 $\pm$0.83 | 2.75 $\pm$0.31 | 79.69 $\pm$3.72 |
| BFFHQ | 199.80 $\pm$40.14 | 8.00 $\pm$2.76 | 0.46 $\pm$0.09 | 50.00 $\pm$1.04 |
| BAR | 30.60 $\pm$3.83 | 3.20 $\pm$1.60 | 3.58 $\pm$0.14 | 47.14 $\pm$5.71 |

Table 2: Effectiveness of BN score on excluding BA samples. $\mathcal{D}^{\text{BN}}_{\text{cand}}$ $-$ $\mathcal{D}^{\text{BN}}$ presents the number of excluded samples when constructing $\mathcal{D}^{\text{BN}}$ from $\mathcal{D}^{\text{BN}}_{\text{cand}}$. $\mathcal{D}^{\text{BN}}$/$\mathcal{D}$ indicates the ratio of samples in $\mathcal{D}^{\text{BN}}$ to the samples in $\mathcal{D}$.

## 4 Experiments

### 4.1 Experimental settings

Dataset. We utilize Waterbirds [18], biased FFHQ (BFFHQ) [10], and BAR [15] for the experiments. Each dataset contains different types of target class and bias attributes: Waterbirds - {bird type, background}, BFFHQ - {age, gender}, and BAR - {action, background}. The former and the latter in the bracket indicate the target class and the bias attribute, respectively. To be specific, the Waterbirds dataset has two bird classes: waterbirds and landbirds. Most of the waterbirds are in the water background, and most of the landbirds are in the land background. In the training dataset of BFFHQ, most young people are female while most old people are male. The word ‘young’ indicates an age ranging between 10 and 29, and ‘old’ indicates an age ranging between 40 and 59. Lastly, the BAR dataset consists of six classes of action (e.g., fishing), where the background (e.g., water surface) is highly correlated with each class. Following the previous studies [15, 12, 13], we validate our model’s effectiveness under different levels of bias severity, _i.e_., the ratio of BC samples to the total training samples: 0.5%, 1%, 2%, and 5%. In the test sets, the spurious correlations found in the training set do not exist. More details are provided in the Supplementary.

Evaluation. We report the best accuracy of the test set averaged over five independent trials with different random seeds.
The Waterbirds dataset has an extremely skewed test set composed of 4,600 landbirds and 1,194 waterbirds. This can give a misleading picture of debiasing performance, since a model may achieve high classification accuracy simply by predicting most images as landbirds. We therefore measure the classification accuracy for each class and report the average over classes, which reflects the effectiveness of each method regardless of class frequencies. Also, for BFFHQ, we report the best accuracy on BC samples in the test set, following previous works [12, 13]. For the analyses, we utilize the datasets with 1% bias severity.

### 4.2 Comparison to previous works

We compare the classification accuracy on the test sets between the baselines and our method in Table 1. As baselines, we employ a vanilla model trained with the CE loss, methods using explicit bias labels (_i.e_., LNL [9], EnD [22]), methods presuming the type of the bias (_i.e_., HEX [25], ReBias [2]), and methods assuming the bias information is unknown (_i.e_., LfF [15], DisEnt [12], LfF+BE [13], DisEnt+BE [13]). Our approach achieves state-of-the-art performance in comparison to the previous methods, including those utilizing explicit bias labels. The results show that our method improves performance robustly across various levels of bias severity, even under extreme bias severity (_e.g_., 0.5%). This indicates that providing explicit spatial guidance for intrinsic features effectively encourages the model to learn debiased representations, leading to performance improvement.

Figure 2: Visualization of BN scores of the samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during the training. The red lines and the blue lines indicate the BN scores of BA and BC samples, respectively.

Figure 3: Visualization of the spatial guidance using (a) Waterbirds and (b) BAR dataset. Given bias-contrastive pairs, $\mathbf{x}$ and $\mathbf{x}^{\text{BN}}$, $\text{E}(\mathbf{z})$ indicates the regions originally focused on by $f_{d}$ and $\text{IE}(\mathbf{z})$ shows the regions highlighted by our IE weight.

Figure 4: Comparison of the regions focused on by a debiased model trained with and without our method. We compare Grad-CAM results on the test set of (a) Waterbirds and (b) BAR.

### 4.3 Analysis of BN score

We analyze our BN score, which identifies and emphasizes BC samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during training $f_{d}$. In this section, we assess the effectiveness of the BN score in excluding BA samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$. We also evaluate its efficacy as a loss weight by investigating the BN scores of the samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$.

Effectiveness of BN score on excluding BA samples. Table 2 presents how the BN score effectively filters out BA samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$ while preserving BC samples when constructing $\mathcal{D}^{\text{BN}}$. The first two columns present the numbers of BA and BC samples excluded from $\mathcal{D}^{\text{BN}}_{\text{cand}}$ to construct $\mathcal{D}^{\text{BN}}$, respectively. The last two columns present the ratio of the number of BA and BC samples in $\mathcal{D}^{\text{BN}}$ to that in $\mathcal{D}$, respectively. For the analysis, we use $\mathcal{D}^{\text{BN}}$ at the 50K-th iteration and report the mean over five independent trials. Here, we expect $\mathcal{D}^{\text{BN}}$ to contain as many BC samples and as few BA samples as possible.
As shown in the first two columns of Table 2, our BN score excludes a large number of BA samples while minimizing the loss of BC samples. As a result, $\mathcal{D}^{\text{BN}}$ preserves around 50-80% of the BC samples while containing only a minimal number of BA samples compared to $\mathcal{D}$, as presented in the last two columns.

Efficacy of BN score as loss weight. We utilize the BN score to upweight the training loss when BC samples are chosen as $\mathbf{x}^{\text{BN}}$. To verify its effectiveness as a loss weight, we compare the BN scores of BC samples and BA samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during the training for the Waterbirds, BFFHQ, and BAR datasets. In Fig. 2, we present the average BN scores of BA (red line) and BC samples (blue line) in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ at every 500 iterations. Since BN scores are recorded only after the 1K-th iteration, the BN scores until the 1K-th iteration are reported as zero. The BN scores of BC samples in the BFFHQ dataset mostly range from 0.4 to 0.5 while the scores of BA samples are close to 0. This indicates that the BN score as a loss weight in BFFHQ imposes a much larger value on the BC samples and approximately zero values on the BA samples. Also, the BN scores of BC samples in the Waterbirds and BAR datasets become twice as large as the scores of BA samples. The results show that the BN score effectively emphasizes BC samples compared to the BA samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during the training. Further analysis of the BN score is included in the Supplementary.

### 4.4 Analysis of intrinsic feature guidance

We conduct a qualitative analysis of the regions emphasized by our intrinsic feature guidance during the training and of the features learned by $f_{d}$ after training. We use the Waterbirds and BAR datasets with 1% bias severity for the analysis.

Visualization of the guidance during training. In Fig. 3, we visualize the features emphasized by our IE weight $\text{IE}(\mathbf{z})$. For comparison, we also visualize $\text{E}(\mathbf{z})$, the features focused on by the model before applying $\text{IE}(\mathbf{z})$ for guidance. We select a BA sample as $\mathbf{x}$ and a BC sample as $\mathbf{x}^{\text{BN}}$ from the training data for the analysis. For the Waterbirds dataset in Fig. 3(a), $\text{IE}(\mathbf{z})$ highlights the wings or the beak of the bird, whereas $\text{E}(\mathbf{z})$ highlights the forest or the water (_i.e_., bias attributes). Also, in Fig. 3(b), $\text{E}(\mathbf{z})$ focuses more on the bias attributes, such as rocks or the water, than on the intrinsic attributes. In contrast, $\text{IE}(\mathbf{z})$ emphasizes the action of the human, which is under-exploited compared to the bias features in $\text{E}(\mathbf{z})$. The results demonstrate that our guidance successfully identifies and enhances under-exploited intrinsic features during the training.

Effect of intrinsic feature guidance on debiasing. We qualitatively evaluate the effectiveness of the intrinsic feature guidance by investigating the visual explanation maps of the test samples. We compare the Grad-CAM [19] results of the model trained with and without our method in Fig. 4. The Grad-CAM results highlight the features that the model employs to predict the input as its ground-truth label. In Fig. 4(a), while the model trained without guidance focuses on the forest or the sea, ours focuses on the tail or the curved shape of the bird's body. Additionally, Fig.
4(b) shows that ours focuses on the motion of the human rather than on the backgrounds concentrated on by the model trained without our guidance. The results verify that our method successfully encourages the model to learn intrinsic features from the training dataset, improving the robustness of the model against dataset bias.

### 4.5 Ablation study

As shown in Table 3, we perform ablation studies to verify the effectiveness of the individual components of our method. The results of our full method are reported in the last row.

Importance of $\mathbf{x}^{\text{BN}}$ selection and BN score as loss weight. We demonstrate the efficacy of adopting BC samples as $\mathbf{x}^{\text{BN}}$. We train the model by sampling $\mathbf{x}^{\text{BN}}$ from three different datasets: $\mathcal{D}$, $\mathcal{D}^{\text{BN}}_{\text{cand}}$, and $\mathcal{D}^{\text{BN}}$. In the first row of Table 3, we randomly sample $\mathbf{x}^{\text{BN}}$ from the training dataset $\mathcal{D}$ without using the BN score. In the second row, we train the model with $\mathbf{x}^{\text{BN}}$ sampled from $\mathcal{D}^{\text{BN}}_{\text{cand}}$, where BA samples are roughly filtered out using the early-stopped biased models. The model in the third row is trained with $\mathbf{x}^{\text{BN}}$ from $\mathcal{D}^{\text{BN}}$, which mainly includes BC samples selected using our BN score. The results show a gradual improvement in debiasing performance as more BC samples are selected as $\mathbf{x}^{\text{BN}}$. This is because auxiliary samples without bias attributes prevent the common features from including the bias features, composing a bias-contrastive pair with the input. Therefore, our guidance effectively enhances the intrinsic features during the training. Finally, the last row in Table 3 shows that employing the BN score $s(\mathbf{x}^{\text{BN}})$ to reweight the losses further enhances performance by emphasizing the usage of BC samples as $\mathbf{x}^{\text{BN}}$.

Training objectives. We examine the impact of each training objective, $\mathcal{L}_{\text{guide}}$ and $\mathcal{L}_{\text{BN}}$, in our method. We report the performance of the model trained without $\mathcal{L}_{\text{guide}}$ (fourth row) and without $\mathcal{L}_{\text{BN}}$ (fifth row) in Table 3. The model trained without $\mathcal{L}_{\text{guide}}$ exhibits degraded performance, facing difficulties in identifying where to focus to learn intrinsic features. Similarly, training the model without $\mathcal{L}_{\text{BN}}$ also results in a performance decrease. The results verify that $\mathcal{L}_{\text{BN}}$ successfully supports the IE weight in identifying intrinsic features among the common features by learning class-discerning features from $\mathbf{x}^{\text{BN}}$. The model that incorporates both $\mathcal{L}_{\text{guide}}$ and $\mathcal{L}_{\text{BN}}$ demonstrates the best performance (last row of Table 3).
$\mathcal{L}_{\text{guide}}$ | $\mathcal{L}_{\text{BN}}$ | $\mathbf{x}^{\text{BN}}$ sampled from | $s(\mathbf{x}^{\text{BN}})$ as loss weight | Waterbirds | BFFHQ | BAR
---|---|---|---|---|---|---
✓ | ✓ | $\mathcal{D}$ | ✗ | 62.79 $\pm$1.21 | 71.04 $\pm$2.55 | 73.36 $\pm$1.40
✓ | ✓ | $\mathcal{D}^{\text{BN}}_{\text{cand}}$ | ✗ | 64.65 $\pm$1.23 | 75.64 $\pm$1.87 | 74.27 $\pm$0.66
✓ | ✓ | $\mathcal{D}^{\text{BN}}$ | ✗ | 65.10 $\pm$0.87 | 77.08 $\pm$2.05 | 74.62 $\pm$1.07
✗ | ✓ | $\mathcal{D}^{\text{BN}}$ | ✓ | 63.81 $\pm$1.24 | 76.92 $\pm$1.03 | 74.03 $\pm$1.13
✓ | ✗ | $\mathcal{D}^{\text{BN}}$ | ✓ | 62.10 $\pm$3.35 | 74.84 $\pm$2.00 | 74.87 $\pm$1.51
✓ | ✓ | $\mathcal{D}^{\text{BN}}$ | ✓ | 65.22 $\pm$0.95 | 77.56 $\pm$1.24 | 75.14 $\pm$0.82

Table 3: Ablation study on the proposed training objectives, the dataset that $\mathbf{x}^{\text{BN}}$ is sampled from, and the BN score of $\mathbf{x}^{\text{BN}}$ as a loss weight. The check mark (✓) denotes the inclusion of the corresponding component, while the cross mark (✗) indicates its exclusion in the experiment.

## 5 Conclusion

In this paper, we propose a debiasing method that explicitly provides the model with spatial guidance for intrinsic features. Leveraging an auxiliary sample, we first identify intrinsic features by investigating the class-discerning features commonly appearing in a bias-contrastive pair. Our IE weight enhances the intrinsic features in the input that a debiased model has not yet focused on. To construct the bias-contrastive pair without bias labels, we introduce a bias-negative (BN) score that tracks the classification loss of a biased model to distinguish BC samples from BA samples during the training. The effectiveness of our method is demonstrated through experiments on synthetic and real-world datasets with varying levels of bias severity. We believe this work sheds light on the significance of providing explicit guidance on the intrinsic attributes for debiasing.

## Acknowledgments

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A2B5B02001913 & No. 2022R1A5A7083908), and partly by Kakao Brain Corporation.

## References

* Asgari et al. [2022] Saeid Asgari, Aliasghar Khani, Fereshte Khani, Ali Gholami, Linh Tran, Ali Mahdavi Amiri, and Ghassan Hamarneh. Masktune: Mitigating spurious correlations by forcing to explore. In _Proc.
the Advances in Neural Information Processing Systems (NeurIPS)_ , pages 23284–23296, 2022. * Bahng et al. [2020] Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learning de-biased representations with biased representations. In _Proc. the International Conference on Machine Learning (ICML)_ , 2020. * Darlow et al. [2020] Luke Darlow, Stanislaw Jastrzebski, and Amos Storkey. Latent adversarial debiasing: Mitigating collider bias in deep neural networks. _arXiv preprint arXiv:2011.11486_ , 2020. * Geirhos et al. [2019] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In _Proc. the International Conference on Learning Representations (ICLR)_ , 2019. * He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , 2016. * Hu et al. [2018] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , 2018. * Huang et al. [2020] Zeyi Huang, Haohan Wang, Eric P. Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In _Proc. of the European Conference on Computer Vision (ECCV)_ , 2020. * Hwang et al. [2022] Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, and Byoung-Tak Zhang. Selecmix: Debiased learning by contradicting-pair sampling. In _Proc. the Advances in Neural Information Processing Systems (NeurIPS)_ , 2022. * Kim et al. [2019] Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. Learning not to learn: Training deep neural networks with biased data. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , 2019. * Kim et al. [2021] Eungyeup Kim, Jihyeon Lee, and Jaegul Choo. Biaswap: Removing dataset bias with bias-tailored swapping augmentation. In _Proc. of the IEEE international conference on computer vision (ICCV)_ , pages 14992–15001, 2021. * Kwon et al. [2022] Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, and Jaegul Choo. DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation. _Eurographics Conference on Visualization (EuroVis)-Short Papers. The Eurographics Association_ , 2022. * Lee et al. [2021] Jungsoo Lee, Eungyeup Kim, Juyoung Lee, Jihyeon Lee, and Jaegul Choo. Learning debiased representation via disentangled feature augmentation. In _Proc. the Advances in Neural Information Processing Systems (NeurIPS)_ , 2021. * Lee et al. [2023] Jungsoo Lee, Jeonghoon Park, Daeyoung Kim, Juyoung Lee, Edward Choi, and Jaegul Choo. Revisiting the importance of amplifying bias for debiasing. In _Proc. the AAAI Conference on Artificial Intelligence (AAAI)_ , pages 14974–14981, 2023. * Liu et al. [2021] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In _Proc. the International Conference on Machine Learning (ICML)_ , pages 6781–6792, 2021. * Nam et al. [2020] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: Training debiased classifier from biased classifier. In _Proc. 
the Advances in Neural Information Processing Systems (NeurIPS)_ , 2020. * Park et al. [2023] Geon Yeong Park, Sangmin Lee, Sang Wan Lee, and Jong Chul Ye. Training debiased subnetworks with contrastive weight pruning. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , pages 7929–7938, 2023. * Park et al. [2020] Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, and Richard Zhang. Swapping autoencoder for deep image manipulation. In _Proc. the Advances in Neural Information Processing Systems (NeurIPS)_ , 2020. * Sagawa et al. [2020] Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. _Proc. the International Conference on Learning Representations (ICLR)_ , 2020. * Selvaraju et al. [2017] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In _Proc. of the IEEE international conference on computer vision (ICCV)_ , 2017. * Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _Proc. the International Conference on Learning Representations (ICLR)_ , 2015. * Szegedy et al. [2015] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , pages 1–9, 2015. * Tartaglione et al. [2021] Enzo Tartaglione, Carlo Alberto Barbano, and Marco Grangetto. End: Entangling and disentangling deep representations for bias correction. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , pages 13508–13517, 2021. * Torralba and Efros [2011] Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , pages 1521–1528, 2011. * Wah et al. [2011] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011\. * Wang et al. [2019] Haohan Wang, Zexue He, Zachary L. Lipton, and Eric P. Xing. Learning robust representations by projecting superficial statistics out. In _Proc. the International Conference on Learning Representations (ICLR)_ , 2019. * Zagoruyko and Komodakis [2016] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. _Proc. of the British Machine Vision Conference (BMVC)_ , 2016. * Zhang et al. [2018] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In _Proc. the International Conference on Learning Representations (ICLR)_ , 2018. * Zhang et al. [2023] Yi-Kai Zhang, Qi-Wei Wang, De-Chuan Zhan, and Han-Jia Ye. Learning debiased representations via conditional attribute interpolation. In _Proc. of the IEEE conference on computer vision and pattern recognition (CVPR)_ , pages 7599–7608, 2023. * Zhang and Sabuncu [2018] Zhilu Zhang and Mert R Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. _arXiv preprint arXiv:1805.07836_ , 2018. * Zhou et al. [2017] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. 
Places: A 10 million image database for scene recognition. _The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)_, 40(6):1452–1464, 2017.

Supplementary Material

This supplementary material offers further analysis of our approach, additional experimental results, the details of the datasets and implementation, limitations, and future work. Appendix A and Appendix B provide the analysis of the bias-negative (BN) score as a loss weight and of samples with a negative BN score, respectively. Appendix C analyzes the effect of BC samples in $\mathcal{D}^{\text{BN}}$ on debiasing performance. Appendix D compares recent sample selection methods with ours. Moreover, Appendix E and Appendix F present additional qualitative results regarding the guidance and additional quantitative results, respectively. Appendix G and Appendix H provide the details of the datasets and the implementation. Lastly, Appendix I discusses the limitations and future work.

## Appendix A Additional analysis of the BN score as a loss weight

Figure 5: The distributions of $f_{b}$'s classification loss of samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$. The red and blue lines denote the losses of BA and BC samples, respectively. The dotted and solid lines indicate the losses at the early and later stages of the training, respectively. Best viewed in color.

As described in Sec. 3.4 of the main paper, we utilize the BN score of $\mathbf{x}^{\text{BN}}$ (_i.e_., $s(\mathbf{x}^{\text{BN}})$) to reweight the guidance loss $\mathcal{L}_{\text{guide\_sim}}$ and the BN loss $\mathcal{L}_{\text{BN}}$. The BN score as a loss weight is designed to upweight the losses when bias-conflicting (BC) samples are selected as $\mathbf{x}^{\text{BN}}$, which further encourages our IE weight to enhance the intrinsic features. As verification, Fig. 2 of the main paper shows that the BN score takes much larger values on BC samples than on bias-aligned (BA) samples during the training. Since $s(\mathbf{x}^{\text{BN}})$ is large when the current $f_{b}$ loss of $\mathbf{x}^{\text{BN}}$ exceeds its value at the early stage of training, the results imply that the $f_{b}$ loss of BC samples increases substantially as training proceeds, unlike that of BA samples. To further verify this, we present $f_{b}$'s classification loss of the samples in $\mathcal{D}^{\text{BN}}_{\text{cand}}$ during the training in Fig. 5. The BFFHQ dataset [10] with a bias severity of 1% is used for the analysis. In Fig. 5, the dotted lines denote the distribution of $f_{b}$'s classification loss at the early stage of training (1K-th iteration), and the solid lines indicate that at the later stage of training (50K-th iteration). The results show that the $f_{b}$ loss of BC samples (blue lines) increases substantially at the later stage of training compared to the early stage, unlike BA samples (red lines). This demonstrates that the BN score as a loss weight can effectively upweight the training losses when BC samples are chosen as $\mathbf{x}^{\text{BN}}$.

## Appendix B Samples having negative BN score

Figure 6: Examples of samples that have negative BN scores at the later stage of training.

Figure 7: Additional visualization results of the spatial guidance using (a) Waterbirds and (b) BAR dataset. Given bias-contrastive pairs, $\mathbf{x}$ and $\mathbf{x}^{\text{BN}}$, $\text{E}(\mathbf{z})$ indicates the regions originally focused on by $f_{d}$ and $\text{IE}(\mathbf{z})$ shows the regions highlighted by our IE weight.
As mentioned in Sec. 3.2 of the main paper, we further filter out the samples with negative BN scores from $\mathcal{D}_{\text{cand}}^{\text{BN}}$ so that mainly BC samples are exploited as $\mathbf{x}^{\text{BN}}$. Here, we expect the samples with negative BN scores to be mostly BA samples. To investigate the samples with negative BN scores, we select the samples that were erroneously incorporated into $\mathcal{D}_{\text{cand}}^{\text{BN}}$ initially but excluded at the later stage of training (_i.e_., the 50K-th iteration) because they exhibited negative BN scores. This process is repeated five times, and we visualize the samples chosen more than three times in Fig. 6. We use the BFFHQ dataset with a 1% bias severity for the experiment. We observe that the samples with negative BN scores in $\mathcal{D}_{\text{cand}}^{\text{BN}}$ are mostly BA samples. As shown in the figure, while the samples clearly contain bias attributes (_i.e_., features representing female or male), they mostly exhibit extreme shade, blur, saturation, or unusual makeup, giving them a non-typical appearance. Although the bias attributes are known to be easy to learn, this non-typical appearance prevents $f_{b}$ from detecting the bias attributes in the early stage of training. In Sec. 4.5 of the main paper, we verify that employing such BA samples as $\mathbf{x}^{\text{BN}}$ largely degrades the debiasing performance by allowing the bias attributes to be included in the common features between $\mathbf{x}$ and $\mathbf{x}^{\text{BN}}$. Our BN score effectively alleviates this issue by filtering out such BA samples from $\mathcal{D}_{\text{cand}}^{\text{BN}}$.

Figure 8: Additional comparison of the regions focused on by a debiased model trained with and without our method. We compare Grad-CAM results on the test set of (a) Waterbirds and (b) BAR.

## Appendix C Importance of BN sample selection

We analyze the effect of the BC sample ratio in $\mathcal{D}^{\text{BN}}$ on debiasing performance. We measure the accuracy on the BFFHQ dataset with a 1% bias severity while varying the number of BA and BC samples in $\mathcal{D}^{\text{BN}}$. Table 4 shows that higher accuracy is achieved with more BC samples and a lower ratio of BA to BC samples in $\mathcal{D}^{\text{BN}}$. Overall, our method consistently shows a performance gain, except for the last column ({${\text{\#BC in }\mathcal{D}^{\text{BN}}}/{\text{\#BC in }\mathcal{D}}$, ${\text{\#BA in }\mathcal{D}^{\text{BN}}}/{\text{\#BC in }\mathcal{D}^{\text{BN}}}$} = {1.0, 10.0}). It is crucial not to select too many BA samples as $\mathbf{x}^{\text{BN}}$.

${\text{\#BC in }\mathcal{D}^{\text{BN}}}/{\text{\#BC in }\mathcal{D}}$ | 0.1 | 0.5 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
---|---|---|---|---|---|---|---
${\text{\#BA in }\mathcal{D}^{\text{BN}}}/{\text{\#BC in }\mathcal{D}^{\text{BN}}}$ | 0.0 | 0.0 | 0.0 | 0.1 | 1.0 | 2.0 | 10.0
Accuracy | 75.84 | 78.12 | 81.40 | 80.24 | 77.48 | 75.48 | 70.90

Table 4: Importance of $\mathbf{x}^{\text{BN}}$ selection.

## Appendix D Comparison to recent sample selection methods

Our BN score is designed to further filter out BA samples from $\mathcal{D}^{\text{BN}}_{\text{cand}}$, improving debiasing performance (Sec. 4.5 of the main paper). We compare the BC sample selection of recent methods with ours using BFFHQ with a 1% bias severity. Let $\mathcal{S}$ be the set of samples identified as BC samples from the training data $\mathcal{D}$.
$\{\text{\#BC in }\mathcal{S}/\text{\#BC in }\mathcal{D},\text{\#BA in }\mathcal{S}/\text{\#BC in }\mathcal{S}\}$ is {75.63, 10.18} for BE [13], {27.29, 4.32} for DCWP [16], and {50.0, 0.89} for ours. Our method has the lowest ratio of BA to BC samples in $\mathcal{S}$ while preserving half of the total BC samples.

## Appendix E Additional qualitative results

### E.1 Visualization of the guidance during training

In addition to Fig. 3 of the main paper, we provide supplementary qualitative results in Fig. 7 that present the features that the current model $f_{d}$ focuses on (_i.e_., $\text{E}(\mathbf{z})$) and the features emphasized by the guidance (_i.e_., $\text{IE}(\mathbf{z})$) during the training. We use the Waterbirds and BAR datasets with a bias severity of 1% for the analysis. We train $f_{d}$ for 10K iterations and obtain the visual explanation map $\text{E}(\mathbf{z})$ for the ground-truth label using Grad-CAM [19]. Min-max normalization is applied to the values of $\text{E}(\mathbf{z})$ and $\text{IE}(\mathbf{z})$ for visualization. We intentionally select a BA sample as $\mathbf{x}$ and a BC sample as $\mathbf{x}^{\text{BN}}$ to compose a bias-contrastive pair. As shown in Fig. 7, our IE weight (_i.e_., $\text{IE}(\mathbf{z})$) appropriately emphasizes the regions of the intrinsic features, while $\text{E}(\mathbf{z})$ shows that the current model $f_{d}$ relies on the bias attributes for prediction. For example, in the Waterbirds dataset, $\text{IE}(\mathbf{z})$ properly enhances the intrinsic features of a bird, such as the wings, the body, or the neck, while $\text{E}(\mathbf{z})$ highlights background features such as the land, the forest, or the water. Also, in the BAR dataset, $\text{IE}(\mathbf{z})$ emphasizes the arm throwing the javelin and the motion of a person vaulting, climbing, or diving, while the current $f_{d}$ mainly focuses on biased features such as the playing field, the sky, the mountain, or the water. These results verify the validity of our IE weight $\text{IE}(\mathbf{z})$ as guidance for emphasizing the intrinsic features in $\mathbf{x}$ that are not yet fully exploited.

### E.2 Effect of intrinsic feature guidance on debiasing

We present an additional qualitative analysis regarding the effectiveness of the intrinsic feature guidance to supplement Fig. 4 in the main paper. Fig. 8 illustrates the Grad-CAM [19] results of the model trained with and without our method. Here, the model trained without our method is the same as LfF+BE [13]. We train the models on the Waterbirds and BAR datasets with a bias severity of 1% and apply Grad-CAM to the test samples for visualization. The highlighted regions indicate the features that the model mainly employs for prediction. Fig. 8(a) shows that our approach properly focuses on the intrinsic features of the bird (_e.g_., wings, a beak, or feet), while the model trained without our guidance mostly concentrates on the bias features (_e.g_., the water or trees). For the BAR dataset in Fig. 8(b), our model principally exploits the action of a person (_e.g_., fishing, vaulting, or throwing) or the racing car for prediction, while the model without our guidance focuses on the backgrounds (_e.g_., the playing field or the water). The results demonstrate the effectiveness of our method in guiding the model to learn intrinsic features.
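For reference, a minimal PyTorch sketch of the Grad-CAM computation and the min-max normalization used for these visualizations is given below. It follows the standard Grad-CAM formulation of Selvaraju et al. [19]; the function and argument names (e.g., `feature_layer`) are illustrative assumptions rather than the code used in our experiments.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, target_class):
    """Grad-CAM map for `target_class`, min-max normalized to [0, 1].

    `image` is a single preprocessed input of shape (1, 3, H, W);
    `feature_layer` is the convolutional block whose activations are explained.
    """
    feats = {}

    def hook(_module, _inputs, output):
        feats["z"] = output        # feature maps, shape (1, C, h, w)
        output.retain_grad()       # keep their gradient after backward()

    handle = feature_layer.register_forward_hook(hook)
    logits = model(image)
    handle.remove()

    model.zero_grad()
    logits[0, target_class].backward()

    z = feats["z"]
    weights = z.grad.mean(dim=(2, 3), keepdim=True)          # GAP of the gradients
    cam = F.relu((weights * z).sum(dim=1, keepdim=True))      # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # min-max normalization
    return cam[0, 0].detach()
```

With a ResNet18 backbone such as the one used here, `feature_layer` would typically be the last convolutional block (e.g., `model.layer4` in torchvision).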
Method | Waterbirds 0.5% | Waterbirds 1.0% | Waterbirds 2.0% | Waterbirds 5.0%
---|---|---|---|---
Vanilla [5] | 57.41 $\pm$0.74 | 58.07 $\pm$1.00 | 61.04 $\pm$0.55 | 64.13 $\pm$0.14
HEX [25] | 57.88 $\pm$0.83 | 58.28 $\pm$0.67 | 61.02 $\pm$0.48 | 64.32 $\pm$0.62
LNL [9] | 58.49 $\pm$0.81 | 59.68 $\pm$0.78 | 62.27 $\pm$0.91 | 66.07 $\pm$1.15
EnD [22] | 58.47 $\pm$0.97 | 57.81 $\pm$1.04 | 61.26 $\pm$0.54 | 64.11 $\pm$0.52
ReBias [2] | 55.44 $\pm$0.24 | 55.93 $\pm$0.66 | 58.53 $\pm$0.52 | 62.14 $\pm$1.03
LfF [15] | 60.66 $\pm$0.77 | 61.78 $\pm$1.53 | 58.92 $\pm$2.93 | 61.43 $\pm$1.92
DisEnt [12] | 59.59 $\pm$1.67 | 60.05 $\pm$0.82 | 59.76 $\pm$1.26 | 64.01 $\pm$1.36
LfF+BE [13] | 61.22 $\pm$2.54 | 62.58 $\pm$1.12 | 63.00 $\pm$1.18 | 63.48 $\pm$0.56
DisEnt+BE [13] | 51.65 $\pm$1.45 | 54.10 $\pm$1.04 | 53.43 $\pm$1.42 | 54.21 $\pm$1.36
Ours | **63.64 $\pm$1.63** | **65.22 $\pm$0.95** | **65.23 $\pm$1.06** | **66.33 $\pm$1.42**

Table 5: Comparison to the baselines. We measure the classification accuracy on test sets of the Waterbirds dataset with different bias severities and report the mean and standard deviation over five trials. The best accuracy values are in bold.

Method | BFFHQ 0.5% | BFFHQ 1.0% | BFFHQ 2.0% | BFFHQ 5.0% | BAR 1.0% | BAR 5.0%
---|---|---|---|---|---|---
Vanilla [5] | 55.64 $\pm$0.44 | 60.96 $\pm$1.00 | 69.00 $\pm$0.50 | 82.88 $\pm$0.49 | 70.55 $\pm$0.87 | 82.53 $\pm$1.08
HEX [25] | 56.96 $\pm$0.62 | 62.32 $\pm$1.21 | 70.72 $\pm$0.89 | 83.40 $\pm$0.34 | 70.48 $\pm$1.74 | 81.20 $\pm$0.68
LNL [9] | 56.88 $\pm$1.13 | 62.64 $\pm$0.99 | 69.80 $\pm$1.03 | 83.08 $\pm$0.93 | - | -
EnD [22] | 55.96 $\pm$0.91 | 60.88 $\pm$1.17 | 69.72 $\pm$1.14 | 82.88 $\pm$0.74 | - | -
ReBias [2] | 55.76 $\pm$1.50 | 60.68 $\pm$1.24 | 69.60 $\pm$1.33 | 82.64 $\pm$0.64 | 73.04 $\pm$1.04 | 83.90 $\pm$0.82
LfF [15] | 65.19 $\pm$3.23 | 69.24 $\pm$2.07 | 73.08 $\pm$2.70 | 79.80 $\pm$1.09 | 70.16 $\pm$0.77 | 82.95 $\pm$0.27
DisEnt [12] | 62.08 $\pm$3.89 | 66.00 $\pm$1.33 | 69.92 $\pm$2.72 | 80.68 $\pm$0.25 | 70.33 $\pm$0.19 | 83.13 $\pm$0.46
LfF+BE [13] | 67.36 $\pm$3.10 | 75.08 $\pm$2.29 | 80.32 $\pm$2.07 | 85.48 $\pm$2.88 | 73.36 $\pm$0.97 | 83.87 $\pm$0.82
DisEnt+BE [13] | 67.56 $\pm$2.11 | 73.48 $\pm$2.12 | 79.48 $\pm$1.80 | 84.84 $\pm$2.11 | 73.29 $\pm$0.41 | 84.96 $\pm$0.69
DCWP [16] | 64.08 $\pm$1.08 | 67.44 $\pm$2.87 | 75.24 $\pm$1.73 | 85.00 $\pm$0.94 | 69.63 $\pm$0.85 | 81.89 $\pm$0.68
Ours | **71.68 $\pm$1.74** | **77.56 $\pm$1.24** | **83.08 $\pm$1.69** | **87.60 $\pm$1.68** | **75.14 $\pm$0.82** | **85.03 $\pm$0.64**

Table 6: Comparison to the baselines. We measure the classification accuracy on test sets of the BFFHQ and BAR datasets with different bias severities and report the mean and standard deviation over five trials. The best accuracy values are in bold. The hyphen mark (-) means the method is not applicable.

## Appendix F Additional quantitative results

### F.1 Quantitative results with standard deviations

In Table 1 of our main paper, we report quantitative comparison results with classification accuracies on the test set averaged across five independent experiments with different random seeds. We additionally provide the standard deviations of the classification accuracies in Table 5 and Table 6, which show the results on the synthetic dataset (_i.e_., Waterbirds) and the real-world datasets (_i.e_., BFFHQ and BAR), respectively. Since the BAR dataset lacks explicit bias labels, approaches such as LNL and EnD that necessitate explicit bias labels are not applicable to it.
The baseline results for the BFFHQ and BAR datasets are taken from those reported in BE [13], except for DCWP [16].

### F.2 Comparison to recent baselines

Our primary contribution lies in providing the model with explicit spatial guidance for intrinsic features by examining features that commonly appear in bias-contrastive pairs. Intrinsic features appear consistently across samples within a class; however, to the best of our knowledge, this property has not been exploited to provide intrinsic feature guidance in prior studies. While recent debiasing approaches aim to encourage the model to learn intrinsic features, they fail to directly indicate where the model should focus to learn those features. For instance, MaskTune [1] expects the model to learn intrinsic features by fine-tuning it on data whose already-explored areas are masked out using Grad-CAM. However, simply exploring the unmasked area does not inform the model where exactly the intrinsic features are located; the model may instead focus on non-intrinsic features during the fine-tuning. We experiment on the real-world datasets with a 1% bias severity: the accuracies on {BFFHQ, BAR} are {58.00, 69.42} for MaskTune and {77.56, 75.14} for ours. Ours achieves better debiasing performance by providing explicit spatial guidance for intrinsic features based on the common features in bias-contrastive pairs. A recent pair-wise debiasing method, the $\mathcal{X}^{2}$-model [28], encourages the model to retain intra-class compactness using samples generated via feature-level interpolation between BC and BA samples. However, the $\mathcal{X}^{2}$-model does not inform the model where the intrinsic features are located in the interpolated features, and simply pulling samples closer to the interpolated samples does not ensure that the model focuses on the intrinsic features. In contrast, our method directly encourages the model to focus on the regions of the intrinsic features. We also conduct a quantitative comparison to the recently proposed debiasing approach DCWP [16] in Table 6. We use the real-world datasets, BFFHQ and BAR, with various bias severities. For a fair comparison, we utilize ResNet18, the same architecture as ours, and the ImageNet pretrained weights are employed only for the BAR dataset. The results demonstrate the superiority of our method over DCWP: unlike DCWP, ours provides the model with explicit guidance for intrinsic features for debiasing.

### F.3 Worst accuracy between the accuracy of BA and BC samples in Waterbirds

BS | Vanilla [5] | HEX [25] | LNL [9] | EnD [22] | ReBias [2] | LfF [15] | DisEnt [12] | LfF+BE [13] | DisEnt+BE [13] | Ours
---|---|---|---|---|---|---|---|---|---|---
0.5 | 24.08 $\pm$1.56 | 28.20 $\pm$3.07 | 26.08 $\pm$1.64 | 28.29 $\pm$3.53 | 27.00 $\pm$1.10 | 56.22 $\pm$6.07 | 38.07 $\pm$11.01 | 55.15 $\pm$2.78 | 36.60 $\pm$10.88 | 59.12 $\pm$3.67
1.0 | 24.78 $\pm$2.45 | 26.32 $\pm$2.90 | 29.72 $\pm$3.45 | 25.69 $\pm$2.41 | 27.95 $\pm$1.56 | 59.07 $\pm$3.40 | 47.02 $\pm$7.26 | 55.53 $\pm$1.60 | 28.35 $\pm$4.17 | 63.05 $\pm$1.97
2.0 | 34.39 $\pm$2.24 | 32.12 $\pm$2.89 | 33.92 $\pm$1.94 | 32.94 $\pm$1.48 | 32.16 $\pm$0.76 | 53.07 $\pm$6.74 | 44.93 $\pm$8.54 | 52.91 $\pm$2.62 | 31.08 $\pm$6.01 | 61.71 $\pm$4.94
5.0 | 38.34 $\pm$1.05 | 39.08 $\pm$0.92 | 43.22 $\pm$1.94 | 40.91 $\pm$1.11 | 39.72 $\pm$1.11 | 58.05 $\pm$2.37 | 52.96 $\pm$6.33 | 48.48 $\pm$3.72 | 37.92 $\pm$6.47 | 58.60 $\pm$3.32

Table 7: The worst accuracy between the accuracy of BA and BC samples in the Waterbirds dataset. BS denotes the bias severity (%).
To further analyze our model's performance on the Waterbirds dataset, we measure the accuracy of BA and BC samples separately, averaging the class-wise accuracy values within each group, and report the worse of the two in Table 7. The results show that ours achieves the highest worst accuracy among all baselines.

## Appendix G Detailed description of datasets

Figure 9: Visualization of the datasets used in the experiments. A group of three columns represents each class for (a) Waterbirds-{Landbird, Waterbird} and (b) BFFHQ-{Young, Old}, and each column of (c) BAR-{Climbing, Diving, Fishing, Vaulting, Racing, Throwing} represents a distinct class. The samples above the dashed line are bias-aligned samples and those below are bias-conflicting samples.

We utilize the Waterbirds [18], BFFHQ [10], and BAR [15] datasets. First, the Waterbirds dataset is composed of two classes of bird images and has a background bias. In the training set, the waterbirds mostly appear against a water background and the landbirds against a land background. The numbers of BA and BC samples are balanced in the test set. Following Sagawa et al. [18], we utilize two datasets, the Caltech-UCSD Birds-200-2011 (CUB) dataset [24] and the Places dataset [30] (CC BY license), to construct the Waterbirds dataset. The bird images are segmented from the CUB dataset, and the segmented birds are combined with the background images from the Places dataset. We employ the code released by Sagawa et al. [18] (https://github.com/kohpangwei/group_DRO) for constructing the dataset. As mentioned in the repository, a few landbirds (Eastern Towhees, Western Meadowlarks, and Western Wood Pewees) in the original dataset are incorrectly labeled as waterbirds; we correct their labels to landbirds for the experiments. The BFFHQ was initially presented by Kim et al. [10] and constructed by modifying the FFHQ dataset (BY-NC-SA 4.0 license). In BFFHQ, the bias attribute is gender and the intrinsic attribute is age. Specifically, most of the young people are female, and most of the old people are male in the training dataset. Lastly, the BAR dataset was introduced by Nam et al. [15]. The dataset contains six action classes (_i.e_., Climbing, Diving, Fishing, Vaulting, Racing, Throwing) and each class is biased toward a certain background (_i.e_., RockWall, Underwater, WaterSurface, Sky, APavedTrack, PlayingField). In the test set, such correlations do not exist. For the experiments, we use the BFFHQ and BAR datasets released by Lee et al. [13] (https://github.com/kakaoenterprise/BiasEnsemble). Examples of the BA and BC samples in each dataset are shown in Fig. 9.

## Appendix H Implementation details

Following the previous studies [15, 12, 13], we utilize ResNet18 [5] for the biased model $f_{b}$ and the debiased model $f_{d}$. Here, $f_{d}^{\text{emb}}$ indicates the subnetwork before the average pooling layer, and $f_{d}^{\text{cls}}$ consists of an average pooling layer and a linear classifier that outputs logits, so that $f_{d}(\mathbf{x})=f_{d}^{\text{cls}}\left(f_{d}^{\text{emb}}(\mathbf{x})\right)$. Before training, $f_{b}$ and $f_{d}$ are initialized with ImageNet pretrained weights for the BAR dataset, while we randomly initialize the models for the other datasets. This is because the size of the BAR dataset is extremely small compared to the others [13]. During training $f_{d}$, we employ the sample reweighting value $w(\mathbf{x})$, termed the relative difficulty score [15], as mentioned in Sec.
3.4 of the main paper. $w(\mathbf{x})$ is calculated as follows:

$w(\mathbf{x})=\frac{\mathcal{L}_{\text{CE}}(f_{b}(\mathbf{x}),y)}{\mathcal{L}_{\text{CE}}(f_{b}(\mathbf{x}),y)+\mathcal{L}_{\text{CE}}(f_{d}(\mathbf{x}),y)}.$ (14)

This score assigns a high weight to BC samples and a low weight to BA samples, encouraging $f_{d}$ to mainly learn intrinsic features by emphasizing BC samples with $w(\mathbf{x})$. The models are trained for 50K iterations with a batch size of 64. Horizontal flips and random crops of size 224 are used for data augmentation during the training. All the models are trained with the Adam optimizer. The learning rate is set to 1e-4 for the Waterbirds and BFFHQ datasets, and to 1e-5 for the BAR dataset. The hyperparameters $\alpha_{l}$, $\alpha_{s}$, and $\tau$ are set to $0.1$, $0.9$, and $2$, respectively, for all the datasets. We apply class-wise max normalization to our BN score to account for the different ranges of the scores across classes and to stabilize training. During the training, we aim to select as $\mathbf{x}^{\text{BN}}$ an auxiliary sample from $\mathcal{D}^{\text{BN}}$ that has no bias attribute but has the same class label as $\mathbf{x}$. If no sample in $\mathcal{D}^{\text{BN}}$ has the same label as $\mathbf{x}$, we select a sample with the same label from $\mathcal{D}^{\text{BN}}_{\text{cand}}$. If neither $\mathcal{D}^{\text{BN}}$ nor $\mathcal{D}^{\text{BN}}_{\text{cand}}$ contains a sample with the same label as $\mathbf{x}$, we sample $\mathbf{x}^{\text{BN}}$ with the same label from $\mathcal{D}$. As described in Sec. 3.1 of the main paper, we utilize pretrained biased models to construct $\mathcal{D}^{\text{A}}$, following the previous work [13]. We utilize ResNet18 [5] for the pretrained biased models, and all of them are randomly initialized before training. These models are trained for 1K iterations with the generalized cross entropy (GCE) loss [29]. Within each model, the samples with a high ground-truth probability (_i.e_., $\geq$ 0.99) are considered BA samples. Based on majority voting, we collect the samples that are considered BA samples by a majority of the models and construct the bias-amplified dataset $\mathcal{D}^{\text{A}}$. We use five pretrained biased models, following the study of Lee _et al_. [13], who demonstrate that adopting the additional biased models requires only a negligible amount of additional computational cost and memory. Note that the same biased models are utilized when constructing $\mathcal{D}_{\text{cand}}^{\text{BN}}$.
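As an illustration, the relative difficulty score of Eq. (14) can be computed per sample as in the following PyTorch-style sketch. The small epsilon in the denominator is our own numerical-stability assumption and is not part of the original definition.

```python
import torch.nn.functional as F

def relative_difficulty(f_b, f_d, x, y, eps=1e-8):
    """Eq. (14): w(x) = CE_b / (CE_b + CE_d), computed per sample."""
    ce_b = F.cross_entropy(f_b(x), y, reduction="none")  # loss of the biased model f_b
    ce_d = F.cross_entropy(f_d(x), y, reduction="none")  # loss of the debiased model f_d
    return ce_b / (ce_b + ce_d + eps)  # close to 1 for BC samples, close to 0 for BA samples
```

In practice the two losses would typically be detached from the computation graph so that $w(\mathbf{x})$ acts as a constant per-sample weight on the main loss.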
Despite the limitations above, we believe that our work highlights the importance of enhancing intrinsic attributes for debiasing.
1. Physics Department, Columbia University, New York, NY 10027, USA
2. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA 02138, USA
3. Department of Physics, Washington University, St. Louis, MO 63130, USA
4. Physics Department, California Polytechnic State University, San Luis Obispo, CA 94307, USA
5. Department of Astronomy and Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA
6. Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027, USA
7. Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA
8. Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA
9. School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA
10. Department of Physics, California State University - East Bay, Hayward, CA 94542, USA
11. DESY, Platanenallee 6, 15738 Zeuthen, Germany
12. Physics Department, McGill University, Montreal, QC H3A 2T8, Canada
13. Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064, USA
14. Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112, USA
15. Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA
16. Department of Physics and Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242, USA
17. School of Physics, National University of Ireland Galway, University Road, Galway, Ireland
18. Department of Physics, Engineering Physics, and Astronomy, Queen's University, Kingston, ON K7L 3N6, Canada
19. Institute of Physics and Astronomy, University of Potsdam, 14476 Potsdam-Golm, Germany
20. School of Physics, University College Dublin, Belfield, Dublin 4, Ireland
21. Department of Physical Sciences, Munster Technological University, Bishopstown, Cork, T12 P928, Ireland
22. Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA
23. Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA
24. Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Av. Complutense, 40, E-28040 Madrid, Spain
25. Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
26. Universidad de La Laguna, Dept. Astrofísica, E-38206 La Laguna, Tenerife, Spain

# The throughput calibration of the VERITAS telescopes

C. B. Adams 11 W. Benbow 22 A. Brill 11 J. H. Buckley 33 J. L. Christiansen 44 A. Falcone 55 Q. Feng 66 J. P. Finley 77 G. M Foote 88 L. Fortson 99 A. Furniss 1010 C. Giuri 1111 D. Hanna 1212 T. Hassan 11112424 O. Hervet 1313 J. Holder 88 B. Hona 1414 T. B. Humensky 11 W. Jin 1515 P. Kaaret 1616 T. K Kleiner 1111 S. Kumar 1212 M. J. Lang 1717 M. Lundy 1212 G. Maier 11 and 22211 and 222 P. Moriarty 1717 R. Mukherjee 66 M. Nievas Rosillo 11 and 25 and 26 and 11111 and 25 and 26 and 111 S. O'Brien 1212 N. Park 1818 S. Patel 1616 K. Pfrang 1111 M. Pohl 19 and 1119 and 11 R. R. Prado 1111 E. Pueschel 1111 J. Quinn 2020 K. Ragan 1212 P. T. Reynolds 2121 D. Ribeiro 11 E. Roache 22 J. L. Ryan 2222 M. Santander 1515 A.
Weinstein 2323 D. A. Williams 1313 T. J Williamson 88<EMAIL_ADDRESS><EMAIL_ADDRESS>

(Received <date> / Accepted <date>)

###### Abstract

Context. The response of imaging atmospheric Cherenkov telescopes to incident $\gamma$-ray-initiated showers in the atmosphere changes as the telescopes age due to exposure to light and weather.

Aims. This work discusses the implementation of signal calibration methods for the Very Energetic Radiation Imaging Telescope Array System (VERITAS) to account for changes in the optical throughput and detector performance over time.

Methods. The total throughput of a Cherenkov telescope is the product of camera-dependent factors, such as the photomultiplier tube gains and their quantum efficiencies, and the mirror reflectivity and Winston cone response to incoming radiation. This document summarizes different methods to determine how the camera gains and mirror reflectivity have evolved over time and how we can calibrate this changing throughput in reconstruction pipelines for imaging atmospheric Cherenkov telescopes. The implementation is validated against seven years of observations with the VERITAS telescopes of the Crab Nebula, which is a reference object in very-high-energy astronomy.

Results. Regular optical throughput monitoring and the corresponding signal calibrations are found to be critical for the reconstruction of extensive air shower images. The proposed implementation is applied as a correction to the signals of the photomultiplier tubes in the telescope simulation to produce fine-tuned instrument response functions. This method is shown to be effective for calibrating the acquired $\gamma$-ray data and for recovering the correct energy of the events and photon fluxes. At the same time, it keeps the computational effort of generating Monte Carlo simulations for instrument response functions affordably low.

###### Key Words.: Throughput calibration, Instrument Response Functions, Cherenkov telescopes

## 1 Introduction

When energetic $\gamma$-rays or charged particles (typically protons, atomic nuclei, or electrons) enter the atmosphere, they generate a cascade of secondary particles (also known as an extensive air shower or EAS) through pair production, bremsstrahlung emission, and, for the case of hadronic showers, fragmentation and decay of unstable mesons ($\pi^{0}$, $\pi^{\pm}$, and $K^{\pm}$). The resulting ultra-relativistic particles, traveling faster than the local speed of light in the atmosphere, produce coherent polarization of the dielectric medium. This in turn produces beamed Cherenkov radiation in the forward direction, forming a light pool at ground level of approximately $150\,\mathrm{m}$ radius. Imaging atmospheric Cherenkov telescopes (IACTs) sample this Cherenkov light pool where the optical assembly of each telescope collects and focuses the light onto a corresponding camera, comprising an array of photo-sensitive detectors. As the shower progresses from the top of the atmosphere to the ground, Cherenkov light is produced with a duration of up to 100 nanoseconds. From the ground, camera pixels register the flash of light to form an image of the shower. The shape and time-gradient of that image depend on the properties of the primary particle, on its arrival direction, and on the distance of the air shower to the telescopes.
Images due to pure electromagnetic showers tend to be compact, well-defined, and of an approximately elliptical shape. Hadronic showers, on the other hand, tend to generate more penetrating secondary particles (e.g., muons) at higher transverse momenta, resulting in images with more clumpy shapes. Particle shower images, particularly of $\gamma$-ray origin, can be described in terms of a first and second moment analysis whose parameters are used to derive the properties of the primary particle (Hillas 1985). The observed spectrum of Cherenkov light that is generated in extensive air showers extends from about $200\,\mathrm{nm}$ to more than $700\,\mathrm{nm}$. The generated Cherenkov light follows a $1/\lambda^{2}$ distribution; absorption and scattering of the light in the atmosphere affects mostly the ultraviolet (UV) part of the spectrum. As a result, the observed flux at ground level from $\gamma$-ray showers is significantly reduced below a wavelength of about $280\,\mathrm{nm}$. Even so, the bulk of the observed emission happens at short wavelengths, decreasing in intensity as the wavelength increases. This, in addition to the presence of strong airglow lines and increasing night sky background (NSB) starting from $550\,\mathrm{nm}$, explains why IACTs are designed to be mostly sensitive to blue and near-UV radiation. The number of Cherenkov photons emitted in the shower is approximately proportional to the energy of the primary particle (Hillas 1985; de Naurois & Mazin 2015). This is particularly true for air showers that are generated by primary electrons and $\gamma$-rays. IACTs operate in harsh environments with variable weather conditions, wide temperature ranges, and occasional snow, rain, or dust storms. Nevertheless, these telescopes lack the protective buildings of other optical instruments, such as astronomical domes. The optical components of IACTs, which are designed to collect and focus the Cherenkov light, suffer from degradation processes due to the aforementioned weather conditions. In addition, the cameras of IACTs are usually made of photomultiplier tube (PMT) pixels operating at high voltages and they degrade as the total accumulated charge increases. During standard operations, PMTs are exposed to NSB light, typically inducing pixel currents of $5-10\,\mathrm{\mu A}$, and up to $15-20\,\mathrm{\mu A}$ during exceptionally bright moonlight conditions. These factors combined can induce aging that impacts both the optical and electronic response to incoming light, including Cherenkov light from EAS. Monitoring and correcting for the varying instrument performance therefore becomes a key element in the calibration and analysis of Cherenkov data. Hillas & Patterson (1990) originally proposed to monitor and calibrate IACTs using images generated from local high energy muons generated by hadronic showers. First implemented for the calibration of the Whipple telescopes (Fleury 1991; Rovero et al. 1996), various IACTs have used muon images to study the changing responses of the telescopes (Vacanti et al. 1994; Aharonian et al. 2004; Meyer 2005; Gaug 2006; Chalme-Calvet et al. 2014). The Cherenkov emission generated by these particles is emitted in a cone with a roughly constant opening angle, appearing as a ring when observed from the ground at small incident angles. The radius of the ring depends on the properties of the incident particle (e.g., energy) and is not affected by instrument throughput. 
It can be compared with the number of photo-electrons seen by the telescopes to provide a measure of the efficiency of the telescope detecting optical light. In practice, the analysis of muon images is challenging for many reasons. It involves a detailed geometrical reconstruction, particularly when part of the muon ring image falls outside the camera’s field of view (FoV), when the shower trajectory is tilted or displaced with respect to the center of the FoV or if groups of pixels are malfunctioning. In addition, as Cherenkov photons from muons are produced close to the telescopes, atmospheric absorption is less severe and the Cherenkov photon spectrum is shifted to shorter wavelengths compared with photons generated by $\gamma$-ray showers (Chalme-Calvet et al. 2014). Therefore, the calibration method using the muon images requires additional corrections or specific Monte Carlo (MC) simulations in order to provide an accurate estimation of the throughput for $\gamma$-ray showers (Gaug et al. 2019). Due to this added complexity, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) currently uses muons mostly as a supporting technique to monitor the calibration workflow (see section 2.3.1). This work discusses a different method to measure the total throughput. It is based on the characterization of mirror reflectivity and the continuous monitoring of camera gains. The relative variations of these parameters are used to correct the simulated $\gamma$-ray event pulse charges to provide throughput-corrected response functions. We discuss how this method was successfully implemented in VERITAS, providing a reliable way to determine the total optical and camera throughput, and to reconstruct shower energies and source fluxes. This document is structured as follows. Section 2 describes the VERITAS telescopes, giving a comprehensive overview of the main components that are used for both the standard operation of VERITAS and the calibration of the instrument. It also details the analysis techniques and workflows that are used to produce the standard science products of the telescope. The measurement of the total optical throughput and its evolution over time is presented in section 3. In section 4 we show the implementation of throughput corrections, which are applied to the instrument MC simulations to account for instrument aging. The impact of the varying throughput on the performance of the instrument is evaluated in section 4.3, using metrics such as the energy threshold of the analysis, the differential sensitivity and the uncertainties in the reconstruction of fluxes. The validation of the method using real data collected over seven years is shown in section 5. Finally, we present a brief discussion of systematic uncertainties, limitations and possible improvements of the method in section 6. ## 2 The VERITAS telescopes VERITAS is a ground-based very high energy (VHE) instrument operating at the basecamp of the Fred Lawrence Whipple Observatory in southern Arizona, USA ($31^{\circ}40^{\prime}N$, $110^{\circ}57^{\prime}W$, $1268\,\mathrm{m}$ elevation). It consists of four $12\,\mathrm{m}$ IACTs which use a shower imaging and moment analysis technique (Weekes 1996) to detect $\gamma$-ray photons with energies above $85\,\mathrm{GeV}$. VERITAS operates in stereoscopic mode, with the array only triggering when a Cherenkov shower is detected by at least two telescopes. 
This allows a more accurate reconstruction of the shower properties (in particular, the incident direction and energy of the primary particle) while reducing the number of accidental triggers of a single telescope by NSB fluctuations or local muons. It also makes the discrimination of hadronic showers more efficient, which in turn boosts the sensitivity. An example of a stereoscopic reconstruction of a real EAS by VERITAS is shown in Figure 1.

The first VERITAS telescope (T1) started operations in February 2005 (Holder et al. 2006). Three more telescopes (T2, T3 and T4) were added in the following years. The full four-telescope array was commissioned by 2007 (Krennrich et al. 2007). The observatory was subsequently upgraded to an optimized diamond-shaped array layout by moving T1 to its current position in 2009 (Perkins et al. 2009), and the camera and electronics were replaced with improved hardware and higher quantum efficiency (QE) PMTs in 2011-2012 (Kieda 2013). With its current, post-upgrade array configuration, VERITAS has been able to characterize $\gamma$-rays with energies from $\sim 85\,\mathrm{GeV}$ to $>30\,\mathrm{TeV}$, and can detect a point-like source with $\sim 1\%$ of the Crab Nebula flux in $25\,\mathrm{h}$.

Figure 1: Extensive air shower reconstructed by the VERITAS telescopes. The shower images for each camera have been integrated over time and cleaned using a two-level filter. Dead and disabled channels are shown in dark gray and brown, respectively. The signal registered by each PMT is color-coded, with red colors representing a higher signal than blue tones. An approximate geometrical reconstruction of the shower core location is illustrated with red dashed lines and a red star. The upper right plot shows the trace of one PMT (#255) of T1 for reference. There, the signal registered for each sample of $2\,\mathrm{ns}$ is plotted in red and the integration window (six samples) is shown in shaded blue.

### 2.1 Telescope design

The VERITAS telescopes follow a Davies-Cotton design (Davies & Cotton 1957). The primary mirror of each telescope, hereafter also referred to as the “dish,” consists of 345 identical hexagonal facets. The resulting optical focal ratio of the system is $f/1$, for a focal length of $12\,\mathrm{m}$. Each facet is designed as a one-piece glass element with $\approx 61\,\mathrm{cm}$ sides (Roache et al. 2008). They are commercially ground and slumped to a radius of curvature of $23.97\pm 0.01\,\mathrm{m}$ on average, and a spot size of $<10\,\mathrm{mm}$. The coating is made of aluminum, deposited at a rate of $3-8\,\mathrm{nm}/s$ under vacuum conditions with a purity better than $99.999\,\%$. Its total thickness is $180\,\mathrm{nm}$, the top $80\,\mathrm{nm}$ of which are oxidized during an anodizing process to improve durability and ensure a peak reflectivity of $92\,\%$ at about $320\,\mathrm{nm}$ (Roache et al. 2008).

Cherenkov light collected by the dish is focused onto the camera. In order to detect the brief Cherenkov flashes, the cameras of the current configuration are composed of 499 high-quantum-efficiency (Otte et al. 2011; Kieda 2011) Hamamatsu R10560 PMTs located at the focal plane. Each PMT has a FoV of $0.15^{\circ}$ and is equipped with a Winston cone, a nonimaging device designed to minimize light collection gaps between the PMT photocathodes and limit the acceptance angle (Winston 1974, 1976), thereby reducing the background light intensity.
Each Winston cone is made of plastic with an evaporated aluminum coating and a protective overlayer. This provides a reflectivity greater than $85\%$ above $260\,\mathrm{nm}$ (Nagai et al. 2007). After a careful optical alignment of the system (McCann et al. 2010), the on-axis point spread function (PSF) of the telescopes is about $0.10-0.12^{\circ}$ in diameter, which allows $80\%$ of the light to be concentrated within one pixel in the camera. The optical PSF of the telescopes is frequently monitored, with observed variations of less than $0.02^{\circ}$ over long periods of time and for most elevations. Up to $0.6^{\circ}$ off-axis, the degradation of the optical PSF with respect to on-axis observations is small, $\lesssim 0.02^{\circ}$, but it quickly increases at larger angles, reaching $\sim 0.2^{\circ}$ at $1.2^{\circ}$ offset. The total FoV of the telescopes is $3.5^{\circ}$.

### 2.2 Readout and trigger system

PMT signals are digitized using $500\,\mathrm{Msample/s}$ flash analog-to-digital converters (FADCs) with 8-bit dynamic range, capable of storing the waveforms in $64\,\mathrm{\mu s}$ memory buffers. VERITAS employs a three-level trigger. A first-level trigger (L1) or constant fraction discriminator (CFD) requires the PMT pulse height to be above a given threshold (typically 5-6 photo-electrons). A pattern trigger or telescope trigger (L2) requires L1 trigger signals to occur in three adjacent PMTs within the timing coincidence window. This pattern trigger is based on $400\,\mathrm{MHz}$ Xilinx Virtex-5 FPGAs for pixel neighbor coincidence, allowing triggers to occur before samples are read out. The time-aligning accuracy of this system is $\sim 0.2\,\mathrm{ns}$ and allows for a pixel-to-pixel coincidence window of $\sim 5\,\mathrm{ns}$. Finally, an array trigger (L3) requires L2 trigger signals from at least two telescopes to occur within a $50\,\mathrm{ns}$ coincidence window, once corrected for the propagation time of the shower front across the array and the varying distance from each telescope to the central control building. A more detailed description of the VERITAS trigger system can be found in Zitzer (2013).

Coincidence window widths and CFD thresholds are optimized to trigger on low-energy gamma-ray showers while avoiding random coincident triggers from NSB. The optimization process consists of scans of the trigger thresholds when the instrument is exposed to NSB. The aim is to keep the L3 rates at a few hundred Hz, and to balance a low trigger energy threshold with avoiding data losses from dead-time. The typical dead-time for VERITAS, in its current configuration, is roughly $15\%$ for a data acquisition rate of about $300\,\mathrm{Hz}$.

### 2.3 Data analysis

VERITAS maintains two data analysis packages: Eventdisplay (Maier & Holder 2017) and VEGAS (Cogan 2008). They allow independent reconstruction of the data, limiting the impact of systematic uncertainties due to the analysis software implementation on the scientific results. Each package performs a calibration of the signal collected for each shower by the four telescopes, a second-order moment analysis, and parametrization of the shower images. The resulting parameters are used to classify the showers as $\gamma$-ray-like showers or hadronic events. Using the stereo shower reconstruction, the arrival direction and the energy of the primary particle are estimated.
Finally, from a comparison with reconstructed MC shower simulations, the effective collection area is evaluated and the events in excess of the estimated background are converted into high-level analysis products such as fluxes, light curves, and energy spectra. The results of the throughput analysis shown in this work were obtained using the Eventdisplay package; however, they were also validated using VEGAS.

#### 2.3.1 Calibration

The calibration process comprises the determination of the electronics baseline (pedestal) and its variation, as well as the measurement of the response of each individual PMT to incident light, that is, the absolute and relative gain differences between PMTs. FADC inputs are AC coupled and a DC voltage offset is added to the signal inputs (pedestal) so that positive and negative fluctuations around the mean value (pedestal variation) due to NSB variations can be measured. Artificially triggered pedestal events are injected during observation runs with a frequency of $1\,\mathrm{Hz}$ and used for the estimation of pedestal level and variation. The sky brightness and therefore the background light level might change during observations; pedestal levels and variations are consequently updated every three minutes.

PMT gains are monitored and calibrated by using a flasher light source. The VERITAS FADCs have two gain channels for a wide dynamic range of the readout chain (Holder 2005). The calibration software reconstructs the relative gain of the pixels, timing differences (e.g., due to differences in the cable length for each channel), and relative calibration of the high voltage settings and the high- and low-gain readout channels by using uniformly illuminated camera events generated with flasher light sources (Hanna et al. 2010). Each flasher unit consists of blue ($370\,\mathrm{nm}$) light-emitting diodes (LEDs), driver electronics, and a front face made of an opal diffuser which spreads the light from the LEDs and distributes it almost homogeneously across the entire PMT camera. The flasher pulses span eight brightness levels (upgraded to fifteen in 2016), covering the dynamic range of VERITAS for both high- and low-gain readout. Absolute gain calibration is determined on a periodic basis following the procedure described in section 3.1. Relative gain differences between PMTs are monitored daily and corrected during the data analysis. The inter-calibration between the two gain channels is performed on a monthly basis by recording calibration runs with a particularly long readout window. This is needed to avoid truncation and provide samples at the end of the trace that allow us to estimate the low-gain pedestal. For each run, half the camera is operated at a reduced gain to force it to stay in high-gain mode, while the other half operates in low-gain.

Finally, the results of the analysis of Cherenkov light from muons measured by the VERITAS telescopes (Tyler 2013) are used to monitor the calibration, in conjunction with the procedures described below (single photo-electron runs, optical PSF, and mirror reflectivity measurements). This approach mitigates several limitations of a calibration based on muons only, especially the differences in the wavelength spectrum between Cherenkov light from muons and from air shower events.

#### 2.3.2 Signal extraction and image analysis

Signals are extracted from the FADC traces using a two-pass integration method.
The first pass uses a long integration window to determine the pulse arrival time and to derive the optimal position of the time integration window, using a linear fit of the pulse arrival times along the image axis (Holder et al. 2006). A second pass performs the trace integration, using a short window of typically six samples ($12\,\mathrm{ns}$). An example of a PMT trace for a real event and the optimized integration window is shown in Figure 1.

A two-level filter is then used to extract the signal pixels and clean the image. First, the brightest parts of the shower image (core pixels) are identified by requiring signals of at least five times the pedestal root mean square (RMS). If a Gaussian distribution of pixel signals is assumed for images containing only noise, this would correspond to a surviving rate of less than one in $3\times 10^{6}$. This is enough to remove most clusters of pixels containing only noise that might exist in the full image. Then, to avoid cutting out the edges of the shower images, adjacent (boundary) pixels with at least 2.5 times the pedestal RMS are added to the core pixels. The resulting shower image, which resembles an ellipse in the case of $\gamma$-rays, is parameterized using a second moment analysis, based on the method originally developed by Hillas (1985). Images from showers with large inclinations or large offsets, or from high-energy showers, may not be fully contained in the camera. These “leaking events” cause a loss of signal, degrade the energy resolution, and reduce the overall performance. Using a log-likelihood fitting algorithm, some of these showers are recovered (Maier & Holder 2017), particularly improving the performance of the array at high energies.

#### 2.3.3 Direction, shower core, and energy reconstruction

The direction of the primary particle and the impact parameter of the shower core on the ground are derived using stereoscopic techniques (Hofmann et al. 1999; Maier & Holder 2017). The energy of each $\gamma$-ray event is estimated by comparison with lookup tables built from simulations. The lookup tables encode the dependence of the energy on the distance of the air shower core to each telescope, the integrated signal of the shower image (size), the noise level, the azimuth and zenith angles, and the array configuration.

#### 2.3.4 Gamma-hadron separation

Cosmic rays represent the largest fraction of the data for all $\gamma$-ray observations (a notable exception being flaring events such as GRBs, for which the rate of $\gamma$-rays may exceed the rate of background events for the first few seconds after analysis cuts have been applied; Acciari et al. 2019). In order to separate signal from background cosmic-ray events, it is common practice to compare the shape of observed shower images with simulated $\gamma$-ray showers. To break the dependency of length and width of the image ellipse on the energy of each event, Eventdisplay uses the median reduced scaled parameters (MSCP) of width (MSCW) and length (MSCL) (Krawczynski et al. 2006):

$\mathrm{MSCP}=\frac{1}{N_{tel}}\sum_{i=1}^{N_{tel}}{\frac{p_{i}-{\bar{p}}_{MC}(\theta,\phi,\alpha,\mathrm{NSB},\mathrm{ATM},\mathrm{size},r)}{\sigma_{90}}},$ (1)

where $p_{i}$ is the measured parameter (in our case width or length) for telescope $i$, and $\bar{p}_{MC}$ and $\sigma_{90}$ are the expected value and the width of the distribution containing $90\%$ of the events, which are read from lookup tables that are filled using simulated gamma rays.
Finally, $\theta$ is the zenith angle, $\phi$ the azimuth angle, $\alpha$ the offset angle of the telescope pointing direction with respect to the nominal source position, NSB the night sky background, ATM the atmospheric profile, size the integrated charge of the shower image, and $r$ the distance of the shower core to telescope $i$. VEGAS, in contrast, uses nonreduced versions of these parameters, centered at 1 instead of 0.

The separation of $\gamma$-rays from background cosmic-ray events is done using boosted decision trees (BDTs) (Krause et al. 2017). BDTs are implemented as part of the Toolkit for Multivariate Data Analysis (TMVA) package (Speckmayer et al. 2010) and trained on a set of simulated $\gamma$-ray and observed hadronic showers using reconstructed shower parameters: MSCW, MSCL, shower core position, height of the maximum of the Cherenkov emission, and quality of the energy reconstruction. With different science goals in mind, three sets of cuts are usually defined in VERITAS, each requiring different BDT scores and different minimum image brightness. “Soft cuts” provide a low energy threshold but a larger acceptance of background events. They are ideal for sources with very steep energy spectra which emit large fluxes from tens of GeV to a few hundreds of GeV. “Moderate cuts” are the most widely used and are adequate for most analyses, as they provide a good balance between energy threshold and sensitivity. “Hard cuts”, particularly combined with higher multiplicity (e.g., three telescopes), provide the best background rejection. Though the energy threshold is high ($\gtrsim 300\,\mathrm{GeV}$), this set of cuts is used when the source of interest is weak but with emission extending well into the TeV energy range. Unless otherwise stated, “moderate cuts” have been used throughout this work.

#### 2.3.5 Spatial and spectral reconstruction

Event counts are converted into energy spectra taking into account the effective areas of the telescopes and the dead-time corrected exposure time (Acciari et al. 2008). The background at each point in the sky is estimated using either the reflected region method or the ring-background model approach (Berge et al. 2007).

### 2.4 Monte Carlo simulation

Instrument response functions (IRFs), required both for the separation of $\gamma$-rays from background events and for the reconstruction of energy spectra and light curves, are generated using large sets of simulations of $\gamma$-ray showers. The propagation of extensive air showers in the atmosphere is carried out using the CORSIKA package (Heck et al. 1998), taking into account all relevant particle interactions. The generation, propagation, scattering, and absorption of Cherenkov light are simulated using measurements of the molecular and aerosol content of the atmosphere from a radiosonde (station number #72274) which operates from Tucson, Arizona, approximately $60\,\mathrm{km}$ away from the VERITAS site. From the radiosonde data, two sets of simulations are generated, the first with an average winter atmospheric profile and the second with an average summer profile. The response of the instrument is simulated using the GrOptics (https://github.com/groptics/GrOptics) and CARE (https://github.com/nepomukotte/CARE) packages for the optical and camera simulations. The optical simulations take into account the properties of the primary mirrors including facet alignment, reflectivity, diffuse scattering, and all relevant shadowing elements of the telescopes.
The photo-detector and electronic simulation code CARE is used to model the digitization, trigger, and readout processes including noise components. The MC model is validated through extensive comparisons with calibration measurements. The generation of MC simulations for instrument response functions is an important computational effort, given the large parameter space that needs to be covered. The MC instrument model for the array configuration after the upgrade of the cameras was generated to accurately reproduce the array performance for the period October 2012 - June 2013.

## 3 Throughput measurements

The collection efficiency of Cherenkov light at ground level by the telescopes and its conversion to measurable signals, that is, the telescope throughput, strongly depends on the properties of the cameras and dishes, which change over time. At the camera, PMT gains and the QE affect the conversion of photons into charge pulses in each pixel. Similarly, the combined effect of the reflectivity of the mirror facets and the collection efficiency of the Winston cones affects the total optical light collected at each PMT photocathode. Furthermore, some of these parameters vary on different time scales and some are wavelength dependent, turning the characterization of the different parts of the instrument into an important and very challenging step in the analysis of data produced by IACTs. This section describes how these components are studied in VERITAS.

### 3.1 Camera gains and gain factors

The current detector model of VERITAS assumes an average value of the absolute PMT gain $G_{i,MC}$ for each telescope $i$, estimated after the upgrade of the PMT cameras in 2011-2012:

* $G_{1,MC}=5.38\,\mathrm{dc/pe}$
* $G_{2,MC}=5.29\,\mathrm{dc/pe}$
* $G_{3,MC}=5.29\,\mathrm{dc/pe}$
* $G_{4,MC}=5.73\,\mathrm{dc/pe}$

where dc stands for digital counts and pe for photo-electrons. In practice, the individual absolute gains are slightly different for every PMT and they vary over time due to temperature fluctuations. Supply voltages are kept stable at the subvolt scale and are therefore unlikely to have a large impact on the absolute gains. At the beginning of each observing campaign, usually in October, absolute and relative gains are measured and adjusted by changing the HV settings of the telescopes. The process is usually repeated once or twice during the season. This brings the gains closer to the nominal values shown before and results in absolute gain swings of, at most, $10\,\%$ over months. The resulting PMT charge distribution is also optimized to have $\lesssim 5\%$ width. Part of the aim of this work is to reduce the impact on the IRFs of the remaining gain variations left after the flat-fielding process.

Gains are measured in VERITAS in two independent ways, each with its own set of assumptions. The first approach directly measures the absolute gain by detecting the signal from single photo-electrons. This is achieved by placing a cover in front of the camera to reduce the optical transmission. The second method evaluates the absolute gains of the PMTs statistically, using “LED flashers” to uniformly illuminate the camera. Flasher runs are collected daily to perform a relative gain correction directly in the analysis chain.

#### 3.1.1 Single photo-electron (SPE) runs

Weak, pulsating sources of light can be used to study the response of PMTs to a single photo-electron.
The measured charge from a single photo-electron is, on average, proportional to the number of electrons produced by the PMT; the proportionality constant between the number of photo-electrons and the measured charge is the gain of the system, $G$ (see e.g., Kieda 2011; Hanna et al. 2010; MacLeod 2007). In VERITAS, single photo-electron light levels are achieved by attenuating the light of the flasher with a custom-made camera cover with a small hole aligned with the center of each PMT (Hanna 2008). Each of these holes allows only about $1.6\%$ of the light to pass. With such a device, not only are the required low-illumination conditions achieved, but the NSB is suppressed by the same amount. A histogram of the accumulated charge shows a series of peaks. The first peak describes the pedestal, the second the mean value of the SPE charge in digital counts. The separation depends on the absolute gain set by the HV settings used during the data acquisition, making it possible to derive the gain-voltage relationship and measure the absolute gain. Since the absolute gain can locally be approximated as having a linear dependence on voltage, VERITAS takes SPE runs at both nominal and a slightly increased voltage ($110\%$) so that the two peaks are better separated. The shape of the single-electron peak follows a Polya distribution (López Paredes et al. 2018), which is a special case of a negative-binomial distribution. The comparison to photo-statistic gains, described in the next section, requires a correction factor for each pixel, which quantifies the deviation from Poisson statistics. SPE runs are taken in VERITAS approximately once a month, as they require the manual installation of the custom cover on each of the telescope cameras, temporarily interrupting the data taking and leading to increased observer fatigue and safety risks.

#### 3.1.2 Photo-statistic gains from nightly flasher runs

The second method to measure absolute gains in VERITAS uses nightly flasher runs. We refer to these gains as photo-statistic gains. During a flasher run, the mean charge of a pulse on a PMT, $\mu$, is statistically proportional to the mean number of photo-electrons at the first dynode, $N_{pe}$:

$\mu=G\times N_{pe}.$ (2)

The proportionality constant is the absolute gain $G$. It can be determined by assuming that $N_{pe}$ follows Poisson statistics, which implies that the variance of the number of photo-electrons equals its mean, hence $\sigma_{pe}\simeq\sqrt{N_{pe}}$. After the dynodes and preamplifier have amplified the signal, the variance becomes

$\sigma^{2}\simeq G^{2}\times N_{pe}=G\times\mu.$ (3)

Fitting a linear function to the charge variance $\sigma^{2}$ as a function of $\mu$ provides the gain as the slope and the variance of the pedestal noise as the intercept.

Figure 2: Time-dependent changes of photo-statistic gains normalized to the nominal absolute gains used in the baseline MC model for the current detector model. This detector model was constructed in 2012, and reproduces the characteristics of VERITAS after the PMT camera upgrade. Black points show the average gain factors for each instrument epoch. Gray points show the individual unfiltered gain factor values per telescope and daily flasher run. Blue, orange, green and red points show the result of a median filter of the individual gain factors. Curves of the same colors represent a spline interpolation of the filtered values.
Even though the spline interpolation can locally diverge when no data are available, this occurs only during summer breaks, when no data are taken with VERITAS, and it is therefore not a concern for this study. Error bars represent statistical uncertainties.

#### 3.1.3 Comparison of absolute gain measurement methods

The two methods of measuring the gain agree within $5-6\%$, which is assumed to be the systematic uncertainty for the absolute gain estimation. Photo-statistic gains can be determined using runs that are taken once a night, therefore allowing better monitoring. We consequently opted to use them for our throughput calibration purposes, as opposed to SPE gains.

So far we have only discussed the evolution of the gains, but the QE of the PMTs may also change as the detector ages. The possible aging of VERITAS PMTs was covered in Gazda et al. (2016). In that work, a set of 20 PMTs were removed from the telescopes and compared to 20 spare PMTs that were never operated and therefore are expected to exhibit their original properties. The absolute quantum efficiency was measured to be consistent for both samples at the wavelength range in which VERITAS detects Cherenkov radiation from $\gamma$-ray showers. More recently, during the summer break of 2020, another aging study was performed using the same sets of PMTs. This time, the test consisted of the determination of the HV-dependence of PMT gains, which is related to their QE. Again the properties of both sets of PMTs were found to be compatible within uncertainties.

#### 3.1.4 Gain factors

Once the average absolute camera gains ($G_{i}$) are measured for each telescope $i$, they can be compared with the gains assumed in the reference MC simulations ($G_{i,MC}$) to obtain the gain factors ($g_{i}=G_{i}/G_{i,MC}$) needed to correct the simulated signals to account for changes in the camera gains. Figure 2 shows the gain factors as derived from photo-statistic gains.

### 3.2 Optical throughput and mirror reflectivity

VERITAS is located in southern Arizona, where at least three months during summer are under the direct influence of the North American monsoon, in addition to varying weather conditions throughout the year. The mirror facets of the telescopes are consequently affected by mechanical and chemical degradation of their reflecting surfaces (e.g., due to fine dust grains which scratch the reflective coating, pollutants chemically reacting with the mirror materials, or temperature oscillations inducing deformations on the underlying mirror substrate). Furthermore, periodic mirror cleaning, re-coating, and Winston cone cleaning contribute to partially recovering the reflectivity of the VERITAS telescopes, but they also add an additional variability component to the optical performance of the telescopes.

The degradation of the mirrors changes their reflective properties. The total reflectivity is the combination of diffuse and specular reflectivity. The former is the tendency of a surface to reflect the incident radiation in random directions. Degraded mirror surfaces can cause high diffuse reflectivity, scattering photons and making the mirrors unable to focus and properly form an image. For imaging instruments, diffuse reflectivity is undesirable as it increases background photon noise at the focal plane and degrades the optical PSF. Specular reflectivity refers to the ability to form an image of a given object at the focal plane.
It is very sensitive to mirror aging since it depends on the material of the substrate, how accurately the mirror is figured, how smooth its surface is, and the type of coating. Mirror coating degradation causes a wavelength-dependent effect on the optical throughput of the instrument (Garoli et al. 2020), with reflectivity at short wavelengths more severely affected than that for longer wavelengths. This is relevant for IACTs because the bulk of the Cherenkov emission concentrates in the near-UV and blue part of the visible spectrum.

The optical properties of the Winston cones are not expected to vary dramatically over time as they are protected from the elements and the camera is fully closed while the telescopes are stowed (i.e., during daylight and during bad weather conditions). Nonetheless, these components were examined over time and no evidence of degradation in their light collection efficiency was detected. Therefore, we concentrate on the reflectivity of the telescope dishes and treat the Winston cone efficiency as a source of systematic uncertainty.

#### 3.2.1 Laboratory measurements of specular reflectivity

Beginning with the VERITAS inauguration in 2007 and continuing until 2015, fifteen facets from different parts of the dish (top, middle, bottom) were regularly removed to monitor the changes of the wavelength-dependent reflectivity over time. This was done in the laboratory using a broad-spectrum light source, an adjustable filter wheel, and a photometer (Oriel 71610). Measurements were compared to a calibration mirror made of pure aluminum, periodically re-coated to ensure consistency in the measurements (Roache et al. 2007). Even though it was intended to measure specular reflectivity, this method had the disadvantage of not being able to fully discriminate between diffuse and specular reflectivity, as it relied on a photometer and not on a real imaging detector. It also did not evaluate the impact of PSF changes on the performance of the instrument in collecting Cherenkov light. Finally, the evolution of the reflectivity of a few mirrors was measured, rather than that for the entire dish. Laboratory measurements of specular reflectivity were dropped in 2016 as the achieved accuracy was limited by the complexity of determining the ratio of specular to diffuse reflectivity.

#### 3.2.2 Whole dish reflectivity (WDR)

The WDR method was initially developed by MAGIC (Mirzoyan et al. 2007) and, after a few preliminary tests in 2010 and 2011, it was adopted by the VERITAS observatory starting from 2014 (Archambault et al. 2013, 2021). The WDR technique uses a wide-field CCD camera attached to the dish, near its center, which is pointed directly toward a target at the focal plane in front of the PMT camera. In order to reproduce the response of the mirrors to Cherenkov light combined with the QE of the PMT, which peaks at $\sim 340\,\mathrm{nm}$, the CCD (SBIG ST402ME) is equipped with a blue filter from the same manufacturer (Diffraction Limited©). This effectively shifts the response of the CCD camera to shorter wavelengths and provides a better match to the spectral response of the PMT camera to extensive air showers. The target is made of Spectralon, a fluoropolymer which exhibits Lambertian behavior (i.e., high diffuse reflectivity). This ensures that the apparent brightness is the same regardless of the observing angle.
The reflectivity of the Spectralon plates used in VERITAS has been checked over the years and no significant change could be detected, setting a conservative limit of $\lesssim 5\%$ to their evolution, well within the total systematic uncertainty of the reflectivity determination. With this set-up, the CCD camera can be used to measure simultaneously the brightness of a reference star in the FoV and its reflection generated by the dish on the Spectralon. Flat and dark frames are used to correct for effects such as vignetting or pixel-to-pixel response, and to subtract the electronic dark signal in the image. The comparison of the brightness of the reference star and its reflected spot yields an estimation of the total reflectivity of the dish. The advantage of this method resides in its simplicity and robustness, using just a standard CCD with a color filter and a target plate over the focal plane. Being based on differential photometry methods, it is insensitive to changes in the QE or the gain of the CCD itself. The main disadvantage of the WDR method is that the employed CCD is blind to UV radiation and therefore the spectrally averaged reflectivity does not fully match the reflectivity for Cherenkov light.

#### 3.2.3 Optical throughput and optical throughput factors

In a manner similar to the case of the gains, this study focuses on the relative changes of the reflectivity with respect to what is simulated for each telescope $i$ in the detector model. Of the two possible methods to estimate the reflectivity of VERITAS mirrors, we adopted the WDR method described in section 3.2.2. VERITAS simulations take into account the complete spectral response of each telescope as a function of wavelength (see Appendix A). On the other hand, the WDR method only provides an averaged reflectivity for each telescope $i$, measured in the blue part of the visible spectrum. The calibration of the optical throughput requires estimating the same wavelength-averaged reflectivity for the reflector model of the current instrument configuration, $R_{i,MC}$, and assuming that the differences due to the spectral mismatch between the Cherenkov spectrum and the assumed blue filter are small. The spectral average is determined assuming a standard Bessell B filter (Bessell 1990), similar to the one used with our CCD. The transmission of this filter is multiplied by the reference reflectivity curves from the reflector model. The resulting curve is integrated to obtain the average reflectivity $R_{i,MC}$ for each telescope:

* $R_{1,MC}=0.828$
* $R_{2,MC}=0.841$
* $R_{3,MC}=0.852$
* $R_{4,MC}=0.846$

In order to estimate the collection efficiency error due to the use of a B-filter and a CCD to characterize the response of the telescope to Cherenkov light, we computed the wavelength-averaged reflectivities, this time convolving each mirror reflectivity curve with a typical Cherenkov spectrum from a $\gamma$-ray induced shower. The resulting average reflectivities are between $0.79$ and $0.82$ for all telescopes. The differences are therefore well within the assumed systematic uncertainty on the mirror reflectivity ($\lesssim 10\,\%$).

Figure 3: Time-dependent changes of the reflectivity factors obtained from WDR measurements. Black points show the average reflectivity factors for each instrument epoch. Blue, orange, green, and red points show the individual reflectivity factor measurements for each telescope. Curves of the same colors represent a spline interpolation, which removes outliers.
The first point (early 2012) is extracted from the simulations and serves as the reference for the calculation of the reflectivity factors; it therefore has a reflectivity factor of 1 by construction. We note that a reflectivity factor slightly greater than 1 simply means that the reflectivity at that time was slightly higher than when the simulations were produced.

Based on the reflectivity measurements, the optical throughput factors, or reflectivity factors, are defined as $r_{i}=R_{i}/R_{i,MC}$. Their evolution over time is shown in Figure 3. These reflectivity factors are needed to correct our MC simulations so that they reflect the changes in the mirror reflectivity. The first point is fixed at unity as it represents the reflectivity values actually used in the simulations. Because the whole-dish reflectivity only started to be measured routinely in 2014, the second and third points of each panel of Figure 3 had to be interpolated. We artificially assigned them a comparatively larger uncertainty of $\pm 5\%$ in absolute reflectivity ($\pm 0.05$ in the plot). As can be seen from the figure, the reflectivity quickly degraded by $20-30\%$ from 2013 to 2015, coinciding with a period of reduced cadence of re-coating degraded mirror facets. Since 2015, the telescope reflectivity continues to decrease, but at a much slower pace. Improvements in the reflectivity are also present in our data for specific times, coinciding with cleaning and re-coating of the most damaged mirror facets. This process usually involves the exchange of $\sim 100$ mirror facets, and its effects are smoothed in the spline fitting case.

### 3.3 Total throughput

The total throughput of each telescope, normalized to the reference values in the simulations, can be computed as the product of the gain factor $g_{i}$ (Figure 2) and the corresponding reflectivity factor $r_{i}$ (Figure 3). The resulting values and their evolution for each telescope are shown in Figure 4 and Table 1. We refer to this total throughput factor as $t_{i}$, or simply the throughput factor. To ensure consistency, we computed the average throughput factors for each time bin with three different strategies: i) using the product of spline interpolations for gain factors and reflectivity factors and then computing the mean for each season, ii) using the closest reflectivity and gain measurement pairs to compute the total throughput and then estimating seasonal averages, and iii) using the average values of gain factors and reflectivity factors per instrument epoch and calculating their product. They all agree at the $\sim 5\%$ level. The application of these throughput factors to the Cherenkov signals of the reference simulations allows us to produce IRFs that account for the throughput degradation of the VERITAS telescopes. The actual implementation and assessment of this calibration is discussed in the next section.

Figure 4: Time-dependent change of the throughput factors calculated from photo-statistic gain factors and WDR reflectivity factors. Error bars show only statistical uncertainties.

### 3.4 IRF period definition

VERITAS has undergone two major instrument upgrades. The first consisted of the relocation of T1 to its current position, significantly improving the trigger efficiency. The second upgrade targeted the camera, with improvements to the electronics and the PMT QE.
This naturally generated three major instrument configurations, each with its own set of MC simulations, which provide an accurate model of the telescope performance. The current instrument configuration spans almost eight years of operations, during which the instrument has evolved significantly due to aging and our maintenance efforts. Consequently, we tune our IRFs and simulations in finer time bins so that each time bin or IRF period has throughput drops of at most $10\%$, comparable to the claimed systematic uncertainties on dish reflectivity and gains. Similarly, we require the throughput values at the limits of each bin to be covered by the statistical errors for that bin. Finally, the duration of each epoch is at most a single year, and approximately aligned with the observing cycle that begins with the end of the monsoon in September or October. With these criteria it is clear that for the later years, during which the evolution of the throughput was less dramatic, one IRF period per year is enough to provide the desired granularity and precision. For the first couple of years, however, the total throughput was rapidly evolving and it required finer time bins to fulfill our criteria. The resulting IRF periods are summarized in Table 1.

Table 1: IRF periods for the current VERITAS instrument configuration.

season | MJD range | data runs | $t_{1}$ | $t_{2}$ | $t_{3}$ | $t_{4}$
---|---|---|---|---|---|---
2012-2013a | 56124-56367 | 63373-67410 | $0.986\pm 0.056$ | $1.039\pm 0.059$ | $0.972\pm 0.057$ | $0.914\pm 0.056$
2012-2013b | 56367-56588 | 67411-70170 | $0.941\pm 0.055$ | $1.023\pm 0.056$ | $0.885\pm 0.054$ | $0.823\pm 0.052$
2013-2014a | 56588-56778 | 70171-73235 | $0.887\pm 0.021$ | $0.958\pm 0.030$ | $0.818\pm 0.022$ | $0.803\pm 0.018$
2013-2014b | 56778-56968 | 73236-75021 | $0.809\pm 0.042$ | $0.878\pm 0.050$ | $0.782\pm 0.029$ | $0.775\pm 0.036$
2014-2015 | 56968-57242 | 75022-78239 | $0.713\pm 0.027$ | $0.821\pm 0.026$ | $0.730\pm 0.019$ | $0.715\pm 0.019$
2015-2016 | 57242-57600 | 78240-82587 | $0.708\pm 0.022$ | $0.772\pm 0.018$ | $0.690\pm 0.018$ | $0.731\pm 0.018$
2016-2017 | 57600-57966 | 82588-86848 | $0.653\pm 0.016$ | $0.774\pm 0.023$ | $0.647\pm 0.015$ | $0.688\pm 0.024$
2017-2018 | 57966-58331 | 86849-90608 | $0.677\pm 0.023$ | $0.748\pm 0.016$ | $0.683\pm 0.025$ | $0.695\pm 0.027$
2018-2019 | 58331-58696 | 90609-93829 | $0.608\pm 0.021$ | $0.678\pm 0.019$ | $0.619\pm 0.021$ | $0.635\pm 0.024$
2019-2020 | 58696-59061 | 93830-96048 | $0.608\pm 0.017$ | $0.698\pm 0.034$ | $0.732\pm 0.042$ | $0.640\pm 0.015$

Note: Definition of IRF periods for the current VERITAS instrument configuration with the covered dates, the corresponding data runs, and the values of the total throughput corrections that need to be applied to the simulated event charges to obtain the calibrated MC and IRFs. Only statistical uncertainties were considered in this study.

## 4 Implementation of throughput calibration

This section discusses various strategies to implement the throughput calibration in IACTs, a description of the method adopted in the VERITAS software packages, and the tests we performed to ensure the consistency of the results.

### 4.1 Implementation methods

MC simulations should accurately reproduce the response of VERITAS at any given time. We discussed in the previous section the impact of the time-dependent changes on the PMT gains and optical elements of the telescope.
Reproducing all the details with the required accuracy while using a single MC production and a static set of IRFs is not an option if the throughput is changing over time. Here, we concentrate on how we can implement the throughput calibration in our analysis framework to ensure a consistent detector response over the entire operating life of the telescopes.

There are a number of strategies one could adopt to implement this calibration in the analysis, with varying degrees of accuracy that correlate with the computational cost and the size of the parameter space to be covered. In the case of VERITAS, we need to cover the response of the instrument as a function of zenith angle of observation, azimuth angle, atmospheric profiles, telescope multiplicity, camera HV settings, wobble pointing offset, and NSB intensity. Full MC productions that reproduce the changes in the throughput of the instrument over time and corrections to an existing MC production are both valid possibilities. Provided that the systematic errors on the calibration measurements are small, the former is potentially the most accurate and correctly reproduces the changes at the trigger level caused by throughput variations. However, it is computationally very expensive. It also requires producing a new complete MC model of the entire array and simulating $\gamma$-ray showers with the required statistics. Consequently, we chose to explore corrections to the already existing MC simulations that were produced for the baseline instrument configuration.

Calibration corrections could be applied on a run-wise basis (the typical duration of an observing run being shorter than one hour) or on longer periods of time, from just one day to an entire year or even longer, during which the instrument response is measured to be approximately stable. Several factors made the observing seasons a natural choice for VERITAS to define the time bins for the corrections: the monsoon shutdown in summer, which defines a natural observing season from about mid-September to mid-June; the fact that camera voltages (and hence gains) are adjusted at the beginning of each of these seasons; and the slow cadence of WDR measurements. Exceptionally, for the first two years, during which the throughput evolved rapidly, each season was divided into two shorter time bins. We refer to these time bins in which we apply the throughput corrections as IRF periods.

Throughput corrections can be introduced at different steps in the analysis of simulated showers and the production of the IRFs. A simple option, yet not fully accurate, is to calibrate simulated event images after they have been cleaned and the PMT traces have been integrated. The size of the event is modified using the changes in the total throughput of the instrument. Noise level changes can be either ignored or approximately corrected. Testing this option in VERITAS resulted in a reasonable reconstruction of event energies and $\gamma$-ray fluxes. However, the corrected simulated image shapes did not match the shape of the observed shower images. A natural improvement over this, and the solution adopted in this study, consists of correcting the signals of the simulated events before the PMT traces are integrated and the Cherenkov images are cleaned. Even though changes at the trigger level are still not properly modeled, noise levels are reproduced more realistically, as fewer pixels containing only NSB are able to pass the image cleaning thresholds.
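Schematically, the adopted correction amounts to rescaling every simulated FADC trace by the total throughput factor of the corresponding telescope and IRF period, and then repeating the trace integration and two-level image cleaning on the rescaled traces. The following is only a minimal sketch of that idea; the function and variable names are illustrative and it is not the actual Eventdisplay or VEGAS implementation.

```python
import numpy as np

def scale_simulated_traces(traces, pedestals, t_i):
    """Scale simulated FADC traces of one telescope by its throughput factor.

    traces    : array (n_pixels, n_samples) of simulated FADC samples [dc]
    pedestals : array (n_pixels,) of electronic baselines [dc]
    t_i       : total throughput factor g_i * r_i for this telescope and period

    Only the signal above the baseline (Cherenkov light plus NSB fluctuations)
    is scaled; the electronic baseline itself is left untouched.
    """
    return pedestals[:, None] + t_i * (traces - pedestals[:, None])

def clean_image(charges, ped_rms, neighbors, core_thr=5.0, boundary_thr=2.5):
    """Two-level cleaning as in Sect. 2.3.2: core pixels above 5 pedestal RMS,
    plus boundary pixels above 2.5 pedestal RMS adjacent to a core pixel.

    charges   : integrated, pedestal-subtracted pixel charges [dc]
    ped_rms   : per-pixel pedestal RMS for the same integration window [dc]
    neighbors : list of neighbor-pixel index lists (camera geometry)
    """
    core = charges > core_thr * ped_rms
    boundary = charges > boundary_thr * ped_rms
    keep = core.copy()
    for pix in np.where(boundary & ~core)[0]:
        if any(core[n] for n in neighbors[pix]):
            keep[pix] = True
    return keep
```

In this picture, lowering `t_i` shrinks both the Cherenkov signal and the NSB fluctuations, so that marginal pixels naturally fall below the cleaning thresholds, mimicking the behavior of the degraded instrument.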
Our tests resulted in reconstructed simulated showers which are directly comparable to observed data, making it easier to assess the accuracy of the corrections by comparing the distributions of the different image parameters. The main disadvantage is that the cleaning and Hillas parameterization of the simulated showers had to be repeated once per IRF period, with the corresponding throughput factor $t_{i}$ applied at the signal integration stage, significantly increasing the computing time, storage requirements, and analysis complexity needed to cover the entire parameter space. Producing IRFs for a single wobble offset with 10 time bins as described in this work, including two atmospheric profiles and all required zenith angles (8) and noise levels (11), took several weeks of testing and computation time and required about $480\,\mathrm{GB}$ of storage for the final products.

### 4.2 Robustness of shower parameter reconstruction

This work is based on the assumption that we can correct the simulated PMT traces with just two global factors, one due to the camera gain $g_{i}$ and the other due to the dish reflectivity $r_{i}$. In principle, this should not distort the shape of the shower images; instead we expect it to alter only the size of the cleaned events. To check if this is the case, we extracted the same simulated events corresponding to two different IRF periods (2012-2013a and 2019-2020) and calculated the ratio of values for some of their key geometrical parameters (size, width, length, and time gradient along the major axis) between the two periods. Figure 5 shows histograms of the obtained ratios together with the mean and the standard deviation of each distribution. It can be seen that only the size of the events is significantly shifted ($\mu\simeq 0.63$, $\sigma\simeq 0.34$) when we scale the simulated pixel traces. As this is a statistical comparison between individual simulated showers obtained with independently generated IRFs, the large width of the distributions for some of these ratios is not a concern for this study.

Figure 5: Ratio between image parameters derived from MC simulations for the season 2019-2020 versus season 2012-2013a (telescope T1 only, winter simulations, noise level of $100\,\mathrm{MHz}$, and no size cuts). The vertical axis of each panel shows the number of events within a given ratio bin.

### 4.3 Impact of throughput changes on VERITAS performance

The evolution of the total throughput can affect the reconstructed fluxes in two different ways. The relation between the size of an event and its energy changes, as discussed in section 4.3.1. Additionally, some events are no longer reconstructed, as their integrated charges are not large enough to pass the hardware trigger, analysis, and reconstruction cuts. This has a significant impact at the lowest energies, as shown by the effective areas of the telescopes, discussed in section 4.3.2.

#### 4.3.1 Effect on reconstructed energies

Figure 6: Relative changes in energy reconstruction induced by the evolution of the throughput of the instrument. The ratio of throughput-calibrated energies with respect to the uncorrected energies is presented in two formats: as 2D lookup tables, with the ratio as a function of impact distance and size of the simulated events (bottom panels); and as curves of the distance-averaged lookup tables, showing the ratio as a function only of size (top panels).

One of the main effects of the application of correction factors to the measured digital traces is the improvement of the energy reconstruction.
Lowered gain and reflectivity naturally result in smaller images and consequently lower estimated energy if the IRFs do not account for throughput variations. The energy reconstruction method applied uses lookup tables, similar to the ones described in section 2.3.4. They represent the energy as a function of the image size, the distance to the telescope measured in a plane perpendicular to the arrival direction of the air shower, the NSB level, the wobble offset of the observation, and the atmospheric profile. These tables are filled using simulated $\gamma$-ray events by calculating the median and the standard deviation of the distribution of true energies ($\mathrm{E_{MC}}$) for each bin in the grid.

Figure 6 shows the ratio between the reconstructed energies of the events whose traces have been corrected to account for the drop in telescope throughput and the uncorrected case. That ratio is represented, following the format of the lookup tables, as a function of impact distance and size of the simulated event for a fixed set of values for the other parameters. For a given noise level and zenith angle, the size of the event scales almost linearly with its energy. The exception would be the highest energies. In that regime, the accumulated charge may reach a maximum value due to FADC saturation, affecting the linear relation between size and energy previously described. Since the events migrate to lower sizes for a fixed energy as the telescope throughput drops, one can easily see the effect of its evolution with time once we calibrate the throughput in the analysis: for a given image size and distance of the event, the reconstructed energy is higher in recent years than it was right after the upgrade of VERITAS (years 2012-2013).

#### 4.3.2 Effect on flux reconstruction and effective areas

Figure 7: Evolution of the effective area for different instrument epochs, for a zenith angle of observation of $20^{\circ}$, noise level of $100\,\mathrm{MHz}$, moderate cuts, and a minimum of two images per event.

The impact of the total throughput on the effective area is shown in Figure 7. In that figure, the effective area is computed as a function of true energy in each period for a fixed position in the parameter space (zenith angle of $20^{\circ}$, noise level of $100\,\mathrm{MHz}$, moderate cuts, a minimum of two images per event, and $0.5^{\circ}$ wobble offset). There are insufficient background data to optimize the cuts individually for every period. The reason is that we require data taken under very good weather conditions and on fields without strong $\gamma$-ray sources, and the remaining data sample must then be divided into zenith angle bins. Consequently, we used the same set of cuts optimized using data from the entire post-upgrade array configuration. This artificially enhances the differences seen at low energies in Figure 7, since fewer events have the size required to pass the analysis cuts in the latest periods. Reducing the size cut over time to mitigate this effect is not an option as it would result in the inclusion of worse-quality events in the analysis. Above $\sim 300\,\mathrm{GeV}$, away from the energy threshold of the analysis, almost no events are lost despite the decreased optical throughput of the telescope.
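Effective areas of this kind are typically estimated from the simulated event sample as the area over which the showers were thrown, scaled by the fraction of events surviving trigger, reconstruction, and analysis cuts in each true-energy bin. The snippet below is a minimal sketch of that standard estimate under these assumptions; the names and the thrown radius are illustrative and not taken from the VERITAS simulation chain.

```python
import numpy as np

def effective_area(e_true_all, e_true_passed, r_max, e_bins):
    """Standard Monte Carlo estimate of the effective collection area.

    e_true_all    : true energies of all simulated gamma rays, thrown over a
                    circle of radius r_max [m] around the array
    e_true_passed : true energies of the subset surviving trigger,
                    reconstruction, and gamma-hadron cuts
    e_bins        : true-energy bin edges [TeV]
    Returns the effective area [m^2] per true-energy bin.
    """
    a_thrown = np.pi * r_max ** 2
    n_all, _ = np.histogram(e_true_all, bins=e_bins)
    n_passed, _ = np.histogram(e_true_passed, bins=e_bins)
    # bins with no thrown events are returned as zero effective area
    return np.divide(a_thrown * n_passed, n_all,
                     out=np.zeros(len(e_bins) - 1), where=n_all > 0)
```

In this picture, the throughput-dependent differences in Figure 7 enter through `e_true_passed`: as the throughput drops, fewer low-energy events pass the fixed size and BDT cuts, reducing the effective area near threshold while leaving the high-energy plateau essentially unchanged.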
#### 4.3.3 Effect on the energy threshold of the analysis

Due to the statistical nature of the reconstruction methods used for IACTs, the energy threshold of the analysis is defined as the energy at which the effective collection area is $10\%$ of the maximum effective area reached with the telescope at any energy. This definition ensures that a significant fraction of the events have passed the cuts and that the individual shower images and traces are comparatively less affected by noise artifacts. It does not imply that lower energy particles are undetectable, but the fraction of reconstructed showers quickly drops as the energy decreases below this value. The detected emission at energies lower than the energy threshold may still be significant for sources with steep spectra. Figure 8 shows the evolution of the energy threshold of the analysis with VERITAS. Uncertainties of the energy thresholds shown in the figure are calculated accounting for two independent contributions: i) the energy bin width of the effective area histogram, and ii) the uncertainty on the effective area at the center of the bin.

Figure 8: Energy threshold of the analysis estimated from simulated events and using moderate cuts, a zenith angle of $20^{\circ}$, and a noise level of $100\,\mathrm{MHz}$. It is defined as the energy for which the effective area becomes $10\%$ of the maximum effective area.

#### 4.3.4 Effect on the differential sensitivity

Figure 9: Differential sensitivity with $50\,\mathrm{h}$ of VERITAS observations, calculated using low zenith angle Crab Nebula data from the different periods, with winter atmospheric profile conditions. The same cuts as in Figure 7 are used. The sensitivity is defined in terms of the flux of the Crab Nebula (in Crab units, CU), taking it as a reference astrophysical object at VHE energies.

We can define the sensitivity of a $\gamma$-ray instrument as the minimum flux that such an instrument can detect for a given amount of exposure. Simulating a realistic sample of background data is challenging, so it is common practice to use Crab Nebula data as a reference and to use its flux as the unit system for sensitivity studies. In the following, the differential sensitivity is computed as the minimum photon flux required to achieve a 5$\,\sigma$ detection in $50\,\mathrm{h}$ of observation time (using the statistics described in Li & Ma (1983)) in each energy bin, assuming five equal-width bins per decade on the logarithmic energy axis. Additionally, at least ten signal events are required per energy bin. The differential sensitivity is presented in Figure 9, based on good-quality Crab Nebula data, which were collected under dark sky conditions and at low zenith angle to minimize the atmosphere's influence. We show the results for BDT-based moderate cuts with a minimum telescope multiplicity of two, which provides a balance between optimizing sensitivity and maintaining a low energy threshold. Results employing softer cuts and harder cuts were obtained and compared between seasons, yielding similar conclusions. It can be concluded that throughput degradation affects mostly the lowest energies, where some dim shower events are lost or mistakenly classified as hadronic showers. At high energies, the performance is similar between the different periods, unaffected by the changes in the throughput.
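For completeness, the significance criterion used above is Eq. 17 of Li & Ma (1983). The sketch below shows that formula together with a simple scan for the smallest flux scaling (in Crab units) that satisfies the 5$\,\sigma$ and minimum-excess requirements in a single energy bin. It is a hedged, illustrative implementation, not the code used to produce Figure 9; the helper names and the scan grid are assumptions.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance, Eq. 17 of Li & Ma (1983).

    n_on  : counts in the ON (source) region
    n_off : counts in the OFF (background) region(s)
    alpha : ratio of ON to OFF exposures
    """
    n_on = np.asarray(n_on, dtype=float)
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

def minimum_detectable_fraction(n_excess, n_off, alpha,
                                target_sigma=5.0, min_excess=10):
    """Smallest scaling of the observed gamma-ray excess in one energy bin
    that yields at least target_sigma and at least min_excess signal events,
    keeping the background fixed (a rough differential-sensitivity estimate)."""
    scales = np.logspace(-4, 3, 4000)
    n_on = scales * n_excess + alpha * n_off          # scaled signal + background
    sigma = li_ma_significance(n_on, n_off, alpha)
    ok = (sigma >= target_sigma) & (scales * n_excess >= min_excess)
    return scales[ok][0] if ok.any() else np.nan
```

Scaling the excess while holding the background fixed reproduces the behavior described in the text: near threshold, where the excess is small and the background relatively large, the required flux fraction rises quickly, whereas well above threshold it is driven mainly by the ten-event requirement.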
## 5 Validation of the throughput calibration on real data

### 5.1 Comparison between real data and MC simulations

The validity of the estimated total throughput correction can be tested by comparing image and stereo parameter distributions obtained with data and throughput-calibrated simulated events for every IRF period. If MC showers are not calibrated with the correct throughput, the shower shapes will be distorted and the estimated shower parameter distributions will not match those derived from data. In this work, we compared MSCW and MSCL distributions of simulated and observed data for six reconstructed energy bins: $\log_{10}E_{rec}\in$ $[-1.0,-0.7]$, $[-0.7,-0.3]$, $[-0.3,0.0]$, $[0.0,0.3]$, $[0.3,0.7]$, $[0.7,1.0]$ (energy measured in TeV), and for all the considered IRF periods. The results, with the last two energy bins combined to increase statistics, can be seen in Figure 10. More detailed comparisons are shown in Appendix B. In all cases, data and simulated events have similar distributions for both parameters, almost symmetric, with similar widths and centered at zero (as expected for $\gamma$-ray events). No significant evolution was observed in these parameters with time. The only exception is the highest energy bin, which has a wider distribution of MSCW values for real shower events. This is caused by the highest energy events, at $\sim 10\,\mathrm{TeV}$, being subject to difficulties in the calibration of the low-gain readout channel, saturation effects, and leakage of many shower images outside of the camera. Moreover, the scientific impact of this discrepancy is limited because statistical uncertainties almost always dominate at $\sim 10\,\mathrm{TeV}$ due to the small photon fluxes emitted at these energies by astrophysical sources.

Figure 10: Average distributions of MSCW values, in different energy bins, for the data (blue points) and simulation (red curve) in the entire post-upgrade period. They were both generated after throughput calibration. Data events were obtained from a sample of Crab Nebula observations. Due to the small total number of events, the last two energy bins are combined into a single bin with energies $\mathrm{\log_{10}E_{rec}}\mathrm{(TeV)}\in[0.3,1.0]$.

### 5.2 Inter-telescope calibration

Each telescope that detects a $\gamma$-ray-induced shower provides an independent estimation of the primary particle energy. This enables us to test how well the throughput of each telescope is calibrated against that of the other telescopes. There are different ways of performing such a test (Hofmann 2003; Lebohec & Holder 2003; Mitchell et al. 2016). One possibility involves selecting, for each pair of telescopes, events with a similar distance to each telescope $R$ (in our case, within $\pm 20\%$). Such events should have approximately the same reconstructed energy if the optical and camera calibration of the telescopes is appropriate. Alternatively, one can compare the energy reconstructed by a given telescope, $\mathrm{E}$, against the average energy reconstructed by the array, $\mathrm{E_{mean}}$. Figure 11 illustrates the first approach (telescope pairs compared against each other) for 2019-2020.

Figure 11: Inter-telescope calibration: reconstructed energies for $\gamma$-ray-like events from Crab Nebula observations, with a similar (difference less than $20\%$) distance from each telescope to the reconstructed shower core. The density of events in the parameter space is color-coded, with yellow values representing more events and blue values representing fewer events.
A 1:1 relationship for the event energies is plotted in black, corresponding to perfect inter-telescope calibration. For compactness, we only show the results for 2019-2020. We repeated these measurements for all IRF periods. For each period, we calculated the mean $\log_{10}\mathrm{(E_{i})}-\log_{10}\mathrm{(E_{mean})}$ and its standard deviation, shown in Figure 12. The reconstructed energy of real $\gamma$-ray like events deviate, on average for each telescope and season, no more than $\sim 10\%$ with respect to the mean energy reconstructed by the entire array. If the throughput calibration had not been implemented successfully, we would see larger differences in the energy measured by the individual telescopes. Telescopes with faster throughput degradation would provide smaller reconstructed energy estimates. Figure 12: Mean of the $\log_{10}\mathrm{(E_{rec}[T_{i}]/E_{mean}})$ values for selected $\gamma$-ray-like events and its standard deviation, for each telescope and season, where $\mathrm{E_{rec}[T_{i}]}$ is the reconstructed energy of telescope $i$ and $\mathrm{E_{mean}}$ the average reconstructed energy from the entire array, using the same weights for all telescopes. For a given epoch, the energies reconstructed by the different telescopes do not deviate on average more than $\sim 10\%$ with respect to the mean energy reconstructed with the entire array. ### 5.3 Reconstruction of the Crab Nebula flux The reconstructed flux and spectral shape for any measurement depends on the flux calibration of the instrument. The Crab Nebula is the closest object to a standard reference object for VHE observations as it is one of the brightest sources known that has a stable emission. Therefore, we used observations of this source for the last tests to validate both the throughput measurements and the actual implementation of the throughput calibration. This study was performed using the same Crab Nebula dataset and the same cuts as used in Section 4.3.4. Two figures summarizing the results of this study are obtained: Figure 13: Reconstructed flux of the Crab Nebula above $200\,\mathrm{GeV}$, using 1-day bins and IRFs that correctly match the instrument throughput for each period. Results are obtained applying moderate cuts to runs with mean elevation $>50^{\circ}$. Left panel: Each color represents an IRF period. The blue dot-dashed curve represents the reference Crab Nebula spectrum of Meagher (2015) integrated above $200\,\mathrm{GeV}$, horizontal solid black lines represent the season-average fluxes, while dashed horizontal curves show the standard deviation of the fluxes for each season. Right panel: Shown in gray is the distribution of fluxes for all seasons combined, with a fit to a Gaussian shape as a solid black line. The dashed black curve shows the equivalent result when throughput changes are not taken into account in the IRFs. The vertical blue dashed line shows the reference Crab Nebula flux from Meagher (2015). Figure 14: Flux dispersion ($\sigma/\mu$) as a function of threshold energy for light curves similar to the one of Figure 13, using moderate cuts. Taking into account the throughput evolution brings down the statistical uncertainties from $\sim 15\%$ baseline to $\sim 10\%$. * • Light curves with daily binned fluxes above $200\,\mathrm{GeV}$ for BDT-based moderate cuts (see Figure 13). Datasets corresponding to the different IRF sets are plotted with different colors. 
Flux points versus time are shown in the left panel and a histogram of the flux points is presented in the right panel. As can be seen from the figure, every season shows an average flux compatible within $\pm 1\,\mathrm{\sigma}$ with the flux of the Crab Nebula reported in Meagher (2015). In addition, the spread of the flux points (standard deviation $\sigma$ over the mean $\mu$ of the distribution) shown in the right panels remains on the level of $\lesssim 10\%$ for the selected cuts. This value is significantly lower than the one obtained if no throughput calibration is applied. In that case, the spread would be $\gtrsim 15\%$. The relationship between the energy threshold of the light curve and flux dispersion is shown in Figure 14. * • Energy spectra for the different periods, using the same cuts (Figure 15), showing simultaneously for each period the energy spectrum obtained using the uncorrected IRFs (blue open circles), the energy spectral points for the corrected IRFs (filled red circles) and, for comparison purposes, the published spectrum from the Fermi-LAT 3FHL catalog Ajello et al. (2017) and the Crab Nebula spectrum published by the VERITAS collaboration in Meagher (2015) (orange curved line), which is based mostly on data from seasons that were minimally affected by the throughput degradation. Figure 15: Collection of spectral energy distributions obtained with moderate cuts for each IRF period. Purple open diamonds represent measurements of the Crab Nebula spectrum from the 3FHL (Ajello et al. 2017), the orange curve shows the Crab Nebula spectrum published by VERITAS in Meagher (2015), open blue points show the fluxes that would be obtained if throughput is not calibrated during the reconstruction of $\gamma$-ray showers and instead the MC model from 2012 is employed. Finally, red solid points show the spectral points that are obtained once the pulse signals are calibrated to generate correct IRFs for each period. ## 6 Systematic uncertainties The estimation of systematic uncertainties affecting the reconstructed $\gamma$-ray energies and fluxes covered in this section is based on calibration measurements, MC simulations, and reasonable assumptions. Overall, we could identify the following components of the systematic uncertainty budget impacting the throughput calibration and the determination of precise IRFs: * • Accuracy and stability (time and temperature dependence) of the photo-electron response, including the precision of the pixel-to-pixel relative gain correction. Absolute pixel gains are corrected through high-voltage adjustments (“flat-fielding”) every year at the beginning of the VERITAS observing season, in September or October. This contributes to reduce the photo-electron response drift of the PMTs over long periods of time. Relative gains likely have a small contribution to the total systematic uncertainty if the appropriate Flasher runs are used in the analysis. * • QE of the PMTs and collection efficiency at the first dynode. After several years of operations, a comparison of used and unused PMTs shows no significant evolution of these parameters over time. * • Mirror reflectivity and its wavelength dependence after correction for degradation. * • The optical PSF, determined by mirror alignment accuracy and checked once a month. Mirror facets are re-aligned after detection of significant deviation from the expected position. 
Independently of the throughput calibration, the overall systematic uncertainty budget of the VERITAS instrument is affected by: * • The effect of broken pixels and electronic channels, which are not modeled in the MC simulations. The fraction of unavailable pixels per camera is typically less than 5%. * • The Winston cone efficiency, not covered in our study but included in the simulations, with an estimated contribution to the optical throughput uncertainty of $\sim 4\%$. * • The treatment of shadowing elements on the telescopes in the MC simulations, including the camera housing and support structure. Smaller elements like cross bars are not included in the simulation model. The impact of these simplifications introduces a systematic uncertainty of less than 1%. * • The systematic limit of the VERITAS pointing monitor, which is less than 20 arc seconds, much smaller than the optical PSF of the telescopes. Its contribution to the systematic uncertainty on the photon flux is therefore less than $3\%$. * • The ability to calibrate the linearity of the readout and the transition from high to low gain using flasher pulses at eight (fifteen starting from 2016) different levels of brightness. The pulse shape of the low-gain channel is roughly twice as broad as that for high gain and exhibits a complex broadening for extremely bright signals from nearby showers with energies of tens of TeV. This complex behavior is not accurately reproduced in the current MC simulations, leading to an increase of the systematic uncertainties for the highest energies. The throughput changes discussed in this paper additionally lead to a change in the pulse brightness required to activate the low-gain chain. Our correction method, which cannot account for this hardware effect, leads to a mismatch in the activation point between the simulations and the real data. * • The analysis and reconstruction pipelines, Eventdisplay and VEGAS. While the code development is independent, the analysis and reconstruction techniques employed in both pipelines are similar and they use the same MC simulations to produce the IRFs. Based on a long-term comparison of scientific results produced with both tools, we estimate a $10\%$ systematic uncertainty due to the analysis pipeline used.
Table 2: Summary of systematic uncertainty estimations.
Type | Flux
---|---
Photo-electron response and gain | 5%
Signal pulse shape | 5-10%
Low-gain modeling | 5-10%
Efficiency of photo detection | $<5$%
Unavailable pixels | 3-5%
Shadowing | 1%
Mirror reflectivity | 10%
Winston cone efficiency | 4%
Point spread function | 5-10%
Atmospheric profiles, absorption, scattering | 10-15%
Analysis | 10%
Total | $\sim 25\%$
In general, the total systematic uncertainties are dominated by uncertainties on telescope throughput, covered by this work, and the approximations in the modeling of the atmosphere above the observatory. As for the latter, the development of extensive air showers is determined by the vertical density, temperature, and humidity profiles. The intensity of Cherenkov light is influenced by the amount of absorption and scattering on atmospheric molecules or aerosols. While VERITAS does not operate during the warmest summer months, and two seasonal atmospheric profiles are used to correct for the large atmospheric changes, the systematic error due to inaccurate atmospheric models is approximately $10-15\,\%$.
This includes day-to-day variability of the atmosphere, which is monitored by a combination of a weather station, a commercial ceilometer, and three infrared pyrometers. To limit its impact on the total systematic errors, observing periods of inferior quality (e.g., due to clouds) are flagged and removed from the analysis for most publications. The total systematic uncertainty is estimated by summing quadratically the individual components listed in Table 2, ignoring possible correlations. As a caveat, some of the mentioned sources of systematic uncertainty are energy dependent and introduce an error on the reconstructed spectral slope, resulting in larger uncertainties for the steepest spectra. The total systematic uncertainty on the absolute flux level is estimated to be $\sim 25\%$ for a $\gamma$-ray source with a spectral index of $\approx 2.5$. Starting from $\sim 10\mathrm{\,TeV}$, where the calibration of the low- and high-gain channels and saturation effects begin to be important, an additional $5-10\%$ would have to be added in quadrature. Similarly, we estimate the systematic uncertainty on the spectral index to be $\pm 0.2$ for sources with Crab Nebula-like spectra. For sources with steeper spectra, the corresponding photon flux is dominated by the emission at the lowest energies, where many of the systematic uncertainty components that we mentioned become most relevant. In addition, the impact of the energy scale errors on the absolute flux becomes significantly larger and their estimation requires a case-by-case study, beyond the scope of this document. Additional sources of systematic uncertainties not directly linked to the telescope performance have not been included in this work. Nonetheless, some might have relevant implications for long-lived ground-based astronomical installations, including VERITAS. A good example of this is the long-term evolution of NSB (Massey & Foltz 2000) due to increased human activity over time and changes in street lamp technology: from sodium lamps to LED-based illumination, likely brighter at short wavelengths (Sánchez de Miguel et al. 2017), where IACTs are most sensitive. Such long-term variations are likely more evident in the direction of densely populated areas, for example, Tucson (to the north) and Nogales (to the southwest), and might have a significant impact during observations at low elevations. Since VERITAS simulations are produced for different NSB levels and IRFs are interpolated to match the NSB level of each data run, this effect is corrected to first order in the standard analysis. ## 7 Conclusions After almost 15 years of operations, the performance of VERITAS has changed due to a combination of hardware upgrades (relocation of T1, camera and camera electronics upgrades), aging of the different components, and maintenance duties, such as recoating of the most degraded mirror facets. With this work, we aim to document the calibration efforts that have been carried out by the VERITAS collaboration during this time. As a first step, we described in Section 3 the different approaches that are used to monitor the behavior of the instrument, measuring the gains of each pixel, the relative evolution of the average camera gain over time, and the changes of reflectivity and optical throughput due to telescope aging. The described methodology, now well defined, will continue to be used for the upcoming years of VERITAS operations and we are confident it will serve for other experiments as well.
In Section 4, we detailed the implementation of the proposed throughput calibration method in our software pipelines. It is based on correction factors for the optical throughput or reflectivity $r_{i}$ and the average gain of the camera $g_{i}$, which we combined into a total throughput factor $t_{i}$. We showed how the application of the throughput factors to the measured signals of the simulated events could be used to produce throughput- calibrated IRFs for VERITAS. These response functions can be used to analyze real showers and derive the corrected particle shower energy and source fluxes. Finally, using the time-dependent response functions obtained, we evaluated the impact of the throughput changes on the performance of VERITAS using different metrics such as the sensitivity and the energy threshold. In order to validate this calibration method, we performed extensive tests, detailed in Section 5, to ensure that all VERITAS telescopes are properly calibrated against each other and that the geometrical parameters of simulated and real shower events are comparable. In addition, we checked the stability of the flux reconstruction using a multiyear sample of Crab Nebula observations, often assumed to be a reference object in the very-high-energy range. Finally, in Section 6 we discussed the main systematic uncertainties on the energy scale, fluxes, and spectral indices that affect the analysis and reconstruction of data from the VERITAS telescopes. As a final note, this work is based on direct measurements of the gains and reflectivity of the dish as an alternative to the study of local muons or the measurement of the cosmic electron spectrum. Both have been proposed and are currently being actively discussed (Parsons et al. 2016; Gaug et al. 2019) as possible calibrators for the upcoming Cherenkov Telescope Array (CTA). CTA will be comprised of arrays several times bigger than that of VERITAS, and have much tighter requirements on systematic uncertainties. It is clear that having independent methods to calibrate the response of the instrument to incident light can only help to reduce their individual limitations and systematic uncertainties. ###### Acknowledgements. This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy’s Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. M.N. acknowledges the Young Investigators Program of the Helmholtz Association for support during the period of the project. Reproduced with permission from Astronomy & Astrophysics, ©ESO. ## References * Acciari et al. (2019) Acciari, V. A., Ansoldi, S., Antonelli, L. A., et al. 2019, Nature, 575, 455 * Acciari et al. (2008) Acciari, V. A., Beilicke, M., Blaylock, G., et al. 2008, ApJ, 679, 1427 * Aharonian et al. (2004) Aharonian, F., Akhperjanian, A. G., Aye, K. M., et al. 2004, Astroparticle Physics, 22, 109 * Ajello et al. 
(2017) Ajello, M., Atwood, W. B., Baldini, L., et al. 2017, ApJS, 232, 18 * Archambault et al. (2021) Archambault, S., Chernitsky, G., Griffin, S., & Hanna, D. 2021, Astroparticle Physics, 128, 102556 * Archambault et al. (2013) Archambault, S., Hanna, D., & Griffin, S. 2013, in 33rd International Cosmic Ray Conference, 0379 * Berge et al. (2007) Berge, D., Funk, S., & Hinton, J. 2007, A&A, 466, 1219 * Bessell (1990) Bessell, M. S. 1990, PASP, 102, 1181 * Chalme-Calvet et al. (2014) Chalme-Calvet, R., de Naurois, M., & Tavernet, J. P. 2014, in AtmoHEAD workshop, 2013 * Cogan (2008) Cogan, P. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1385–1388 * Davies & Cotton (1957) Davies, J. M. & Cotton, E. S. 1957, Solar Energy, 1, 16 * de Naurois & Mazin (2015) de Naurois, M. & Mazin, D. 2015, Comptes Rendus Physique, 16, 610 * Fleury (1991) Fleury, P. 1991, in International Cosmic Ray Conference, Vol. 2, International Cosmic Ray Conference, 595–598 * Garoli et al. (2020) Garoli, D., Rodriguez De Marcos, L. V., Larruquert, J. I., et al. 2020, Applied Sciences, 10 * Gaug (2006) Gaug, M. 2006, PhD thesis, Autonomous University of Barcelona * Gaug et al. (2019) Gaug, M., Fegan, S., Mitchell, A. M. W., et al. 2019, ApJS, 243, 11 * Gazda et al. (2016) Gazda, E., Nguyen, T., Otte, N., & Richards, G. 2016, Journal of Instrumentation, 11, P11015 * Hanna (2008) Hanna, D. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1417–1420 * Hanna et al. (2010) Hanna, D., McCann, A., McCutcheon, M., & Nikkinen, L. 2010, Nuclear Instruments and Methods in Physics Research A, 612, 278 * Heck et al. (1998) Heck, D., Knapp, J., Capdevielle, J. N., Schatz, G., & Thouw, T. 1998, CORSIKA: a Monte Carlo code to simulate extensive air showers. (Forschungszentrum Karlsruhe GmbH)) * Hillas (1985) Hillas, A. M. 1985, in International Cosmic Ray Conference, Vol. 3, 19th International Cosmic Ray Conference (ICRC19), Volume 3, 445 * Hillas & Patterson (1990) Hillas, A. M. & Patterson, J. R. 1990, Journal of Physics G: Nuclear and Particle Physics, 16, 1271 * Hofmann (2003) Hofmann, W. 2003, Astroparticle Physics, 20, 1 * Hofmann et al. (1999) Hofmann, W., Jung, I., Konopelko, A., et al. 1999, Astroparticle Physics, 12, 135 * Holder (2005) Holder, J. 2005, in International Cosmic Ray Conference, Vol. 5, 29th International Cosmic Ray Conference (ICRC29), Volume 5, 379 * Holder et al. (2006) Holder, J., Atkins, R. W., Badran, H. M., et al. 2006, Astroparticle Physics, 25, 391 * Kieda (2011) Kieda, D. 2011, in International Cosmic Ray Conference, Vol. 9, International Cosmic Ray Conference, 14 * Kieda (2013) Kieda, D. B. 2013, in International Cosmic Ray Conference, Vol. 33, International Cosmic Ray Conference, 1124 * Krause et al. (2017) Krause, M., Pueschel, E., & Maier, G. 2017, Astroparticle Physics, 89, 1 * Krawczynski et al. (2006) Krawczynski, H., Carter-Lewis, D. A., Duke, C., et al. 2006, Astroparticle Physics, 25, 380 * Krennrich et al. (2007) Krennrich, F., Blaylock, G., Bradbury, S. M., et al. 2007, in Journal of Physics Conference Series, Vol. 60, Journal of Physics Conference Series, 34–39 * Lebohec & Holder (2003) Lebohec, S. & Holder, J. 2003, Astroparticle Physics, 19, 221 * Li & Ma (1983) Li, T. P. & Ma, Y. Q. 1983, ApJ, 272, 317 * López Paredes et al. (2018) López Paredes, B., Araújo, H. M., Froborg, F., et al. 2018, Astroparticle Physics, 102, 56 * MacLeod (2007) MacLeod, A. 
2007, Master’s thesis, McGill University Libraries * Maier & Holder (2017) Maier, G. & Holder, J. 2017, in International Cosmic Ray Conference, Vol. 301, 35th International Cosmic Ray Conference (ICRC2017), 747 * Massey & Foltz (2000) Massey, P. & Foltz, C. B. 2000, PASP, 112, 566 * McCann et al. (2010) McCann, A., Hanna, D., Kildea, J., & McCutcheon, M. 2010, Astroparticle Physics, 32, 325 * Meagher (2015) Meagher, K. 2015, in International Cosmic Ray Conference, Vol. 34, 34th International Cosmic Ray Conference (ICRC2015), 792 * Meyer (2005) Meyer, M. 2005, in American Institute of Physics Conference Series, Vol. 745, High Energy Gamma-Ray Astronomy, ed. F. A. Aharonian, H. J. Völk, & D. Horns, 774–778 * Mirzoyan et al. (2007) Mirzoyan, R., Garczarczyk, M., Hose, J., & Paneque, D. 2007, Astroparticle Physics, 27, 509 * Mitchell et al. (2016) Mitchell, A. M. W., Parsons, R. D., Hofmann, W., & Bernlöhr, K. 2016, Astroparticle Physics, 75, 1 * Nagai et al. (2007) Nagai, T., McKay, R., Sleege, G., & Petry, D. 2007, 30th International Cosmic Ray Conference, Merida, Mexico, July 2007, arXiv:0709.4517 * Otte et al. (2011) Otte, A. N., Gebremedhin, L., Kaplan, K., & Long, D. 2011, in 32nd International Cosmic Ray Conference * Parsons et al. (2016) Parsons, R. D., Hinton, J. A., & Schoorlemmer, H. 2016, Astroparticle Physics, 84, 23 * Perkins et al. (2009) Perkins, J. S., Maier, G., & The VERITAS Collaboration. 2009, 2009 Fermi Symposium, eConf Proceedings C091122, arXiv:0912.3841 * Roache et al. (2007) Roache, E., Irvin, R., Perkins, J., et al. 2007, in 30th International Cosmic Ray Conference (ICRC2017), 1397–1400, 30th International Cosmic Ray Conference, ICRC 2007 ; Conference date: 03-07-2007 Through 11-07-2007 * Roache et al. (2008) Roache, E., Irvin, R., Perkins, J. S., et al. 2008, in International Cosmic Ray Conference, Vol. 3, International Cosmic Ray Conference, 1397–1400 * Rovero et al. (1996) Rovero, A. C., Buckley, J. H., Fleury, P., et al. 1996, Astroparticle Physics, 5, 27 * Sánchez de Miguel et al. (2017) Sánchez de Miguel, A., Aubé, M., Zamorano, J., et al. 2017, MNRAS, 467, 2966 * Speckmayer et al. (2010) Speckmayer, P., Höcker, A., Stelzer, J., & Voss, H. 2010, Journal of Physics: Conference Series, 219, 032057 * Tyler (2013) Tyler, J. 2013, in International Cosmic Ray Conference, Vol. 33, International Cosmic Ray Conference, 3096 * Vacanti et al. (1994) Vacanti, G., Fleury, P., Jiang, Y., et al. 1994, Astroparticle Physics, 2, 1 * Weekes (1996) Weekes, T. C. 1996, Space Sci. Rev., 75, 1 * Winston (1974) Winston, R. 1974, Solar Energy, 16, 89 * Winston (1976) Winston, R. 1976, Light collectors in cylindrical geometry, United States, https://www.osti.gov/biblio/7129508 * Zitzer (2013) Zitzer, B. 2013, in International Cosmic Ray Conference, Vol. 33, International Cosmic Ray Conference, 3076 ## Appendix A Reference mirror reflectivities used in the simulations The reflectivities of the mirror facets used in the simulation of the VERITAS telescopes are based on measurements made in 2011, after T1 was relocated to its current position in the post-upgrade array layout. Together with the average camera gains of section 3.1, they are part of the current detector model of VERITAS, defined in April 2012 and used as a reference throughout this document. The reflectivities were measured from $260\,\mathrm{nm}$ to $700\,\mathrm{nm}$ for several mirrors facets that were located on the top, middle, and bottom of the dish of each telescope. 
The measured values were then averaged to get a mean reflectivity representative of the entire dish. Figure 16: Mean telescope reflectivity used to produce the baseline MC simulations in the current instrument configuration of VERITAS, and the reference IRFs discussed throughout this work. ## Appendix B Detailed MC-Data comparison During the validation of the proposed method to calibrate the throughput, we performed extensive tests to check the agreement between data and the modified MC simulations. One of the most sensitive tests in VERITAS to detect possible discrepancies between the measured Cherenkov signals and the corresponding simulated showers is to check the agreement in the MSCW and MSCL parameters. Both are shown, for each IRF period, in Figures 17 and 18. Figure 17: Comparison between the MSCW values of simulated $\gamma$-ray showers (red curves) and the ones for real $\gamma$-like events (blue dots). Each row corresponds to one IRF period and each column to an energy bin (in log-scale, with the energy measured in TeV). Figure 18: Comparison between the MSCL values of simulated $\gamma$-ray showers (red curves) and the ones for real $\gamma$-like events (blue dots). Each row corresponds to one IRF period and each column to an energy bin (in log-scale, with the energy measured in TeV). ## Appendix C Stability of spectral parameters As discussed in Section 2.3, the Crab Nebula is often used as a source to benchmark the performance of $\gamma$-ray instruments. It is not only one of the brightest sources in the sky, but also visible from both hemispheres. The Crab Nebula has a hard spectrum which reaches energies of a few tens of TeV with photon fluxes that are detectable by VERITAS. Finally, it is thought to be stable on year timescales both in spectral shape and emitted luminosity, at least in the very-high-energy band. Using eight one-year observation campaigns of the Crab Nebula for the entire post-upgrade period of VERITAS, we could evaluate the performance of the proposed throughput calibration method. Figure 19 shows as a time series the run-wise values of the normalization flux and the spectral index of the source, evaluated at $1\,\mathrm{TeV}$. These values were estimated by analysing each Crab Nebula run that survived quality cuts and had more than $4\,\sigma$ excess. Then, the resulting run-wise spectra were fitted locally with a power law over the small energy range of $0.4-6\,\mathrm{TeV}$. A histogram representation of the values of Figure 19 can be seen in Figure 20. Figure 19: Run-wise measurement of the normalization flux at $1\,\mathrm{TeV}$ and the spectral index of the Crab Nebula between 0.6 and 4 TeV, including all IRF periods starting from the upgrade of VERITAS. Only runs with a detection significance of at least $4\,\sigma$ are included. Figure 20: Histogram of run-wise measurements of normalization and spectral index of Crab Nebula between 0.6 and 4 TeV including all IRF periods starting from the upgrade of VERITAS. Only runs with a detection significance of at least $4\,\sigma$ are included.
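The run-wise spectral fits described above amount to fitting a two-parameter power law, $F(E)=N_{0}(E/1\,\mathrm{TeV})^{-\Gamma}$, to each run's flux points. A minimal sketch of such a fit is given below; the flux points are invented placeholders rather than VERITAS data, and the simple weighted log-log fit stands in for a full forward-folding spectral analysis.

```python
import numpy as np

def fit_power_law(energies_tev, fluxes, flux_errors):
    """Fit F(E) = N0 * (E / 1 TeV)**(-gamma) in log-log space.
    Returns (N0, gamma); weights follow from the flux errors."""
    x = np.log10(energies_tev)
    y = np.log10(fluxes)
    w = fluxes / flux_errors          # sigma_log10(F) ~ sigma_F / (F ln 10)
    slope, intercept = np.polyfit(x, y, deg=1, w=w)
    return 10 ** intercept, -slope    # normalization at 1 TeV, spectral index

# Illustrative run-wise spectral points (fluxes in cm^-2 s^-1 TeV^-1); values are made up.
e = np.array([0.5, 0.8, 1.3, 2.1, 3.4])
f = 3.3e-11 * e ** -2.5 * np.random.default_rng(0).normal(1.0, 0.08, e.size)
n0, gamma = fit_power_law(e, f, 0.1 * f)
print(f"N0(1 TeV) = {n0:.2e}, index = {gamma:.2f}")
```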
# Explore More Guidance: A Task-aware Instruction Network for Sign Language Translation Enhanced with Data Augmentation Yong Cao1*, Wei Li2*, Xianzhi Li1*, Min Chen1#, Guangyong Chen3, Long Hu1, Zhengdao Li4, and Hwang Kai4 (* Equal contribution. # Corresponding author: Min Chen.) 1Huazhong University of Science and Technology 2Nanchang University 3Zhejiang University, Zhejiang Lab 4The Chinese University of Hong Kong, Shenzhen {yongcao_epic, weili_epic, xzli<EMAIL_ADDRESS><EMAIL_ADDRESS> <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Sign language recognition and translation first uses a recognition module to generate glosses from sign language videos and then employs a translation module to translate glosses into spoken sentences. Most existing works focus on the recognition step, while paying less attention to sign language translation. In this work, we propose a task-aware instruction network, namely TIN-SLT, for sign language translation, by introducing an instruction module and a learning-based feature fusion strategy into a Transformer network. In this way, the pre-trained model’s language ability can be well explored and utilized to further boost the translation performance. Moreover, by exploring the representation space of sign language glosses and target spoken language, we propose a multi-level data augmentation scheme to adjust the data distribution of the training set. We conduct extensive experiments on two challenging benchmark datasets, PHOENIX-2014-T and ASLG-PC12, on which our method outperforms former best solutions by 1.65 and 1.42 in terms of BLEU-4. Our code is published at https://github.com/yongcaoplus/TIN-SLT. ## 1 Introduction Figure 1: Comparing the sign language translation performance on two challenging datasets, i.e., PHOENIX-2014-T (blue) and ASLG-PC12 (gray), in terms of BLEU-1 and BLEU-4 metrics. Clearly, our approach achieves the highest scores on both datasets compared with others. The experiments section contains more results and analysis. Sign language recognition and translation aims to transform sign language videos into spoken languages, which builds a bridge for communication between deaf and hearing people. Considering the unique grammar of sign languages, current effective recognition and translation systems involve two steps: a tokenization module to generate glosses from sign language videos, and a translation module to translate the recognized glosses into spoken natural languages. Previous works (Li et al., 2020; Sincan and Keles, 2020; Sharma and Kumar, 2021; Kumar et al., 2020; Camgoz et al., 2020) have proposed various solutions to address the first step, but paid less attention to the translation system. Hence, this paper aims to solve the problem of sign language translation (SLT) with the goal of translating multiple recognized independent glosses into a complete sentence. To do so, most existing works (Ko et al., 2019; Stoll et al., 2018) directly apply advanced techniques, e.g., the Seq2Seq model (Sutskever et al., 2014) or the Transformer (Vaswani et al., 2017), from neural machine translation to SLT. However, different from the standard translation task in neural machine translation, SLT poses several unique challenges. First, it is hard to collect and annotate a large amount of sign language data.
It is still an open question how to explore more guidance and external information for the SLT task by incorporating pre-trained language models built on massive unlabeled corpora. Second, since sign languages developed independently from spoken languages and have quite different linguistic features, the discrepancy between the representation spaces of glosses and spoken sentences is significant, which increases the translation difficulty. To address the above issues, we propose a novel task-aware instruction network, called TIN-SLT, for sign language translation, further enhanced with a multi-level data augmentation scheme. Our TIN-SLT is capable of encoding a pre-trained language model's ability into the translation model and also of decreasing the discrepancy between the representation spaces of glosses and texts. To begin with, we leverage the hidden features extracted from the pre-trained model as extra information to guide the sign language translation. Besides, we apply an instruction module to transform general token features into task-aware features. In this way, we can fully utilize the language skills originating from the external world, thus reducing the demand for sign language training data. Next, to better inject the information from the pre-trained model into the SLT model, we design a learning-based feature fusion strategy, which has been analyzed and validated to be effective compared with existing commonly used fusion strategies. Finally, considering the large difference between the sign language glosses and texts in terms of the representation space, we propose a multi-level data augmentation scheme to enrich the coverage and variety of existing datasets. In summary, our contributions are threefold: (i) a novel TIN-SLT network to explore more guidance from pre-trained models, (ii) a learning-based feature fusion strategy, and (iii) a multi-level data augmentation scheme. Extensive experiments on challenging benchmark datasets validate the superiority of our TIN-SLT over state-of-the-art approaches; see Figure 1 for example results. ## 2 Related Works Methods for sign language recognition. The SLR task mainly focuses on the extraction of extended spatial and temporal multi-cue features (Zhou et al., 2020; Koller et al., 2017). Most existing works (Yin et al., 2016; Qiu et al., 2017; Wei et al., 2019; Cui et al., 2019) study the strong representation of sign language videos, such as multi-semantic (Cui et al., 2019) and multi-modality (Koller et al., 2019) analysis. Although extracting representative features from sign language videos has been extensively explored, how to effectively conduct the subsequent translation by considering the unique linguistic features of sign language is often ignored in these SLR works. Methods for sign language translation. Early approaches for SLT rely on the seq2seq model and attention mechanisms (Arvanitis et al., 2019), while facing limitations with long-term dependencies. Later, motivated by the ability of the Transformer (Vaswani et al., 2017), many researchers utilize it to effectively improve SLT performance. For example, the work in Camgoz et al. (2020) tried to use the Transformer for both recognition and translation, and promoted the joint optimization of sign language recognition and translation. The subsequent work (Yin and Read, 2020) proposed the STMC-Transformer network, which first uses STMC networks (Zhou et al., 2020) to achieve better results for SLR, and then exploits the Transformer for translation to obtain better SLT performance.
Figure 2: Comparing the sample distribution between the input sign glosses (yellow dots) and the output translated texts (red dots) on two datasets. General neural machine translation. Broadly speaking, sign language translation belongs to the field of neural machine translation, with the goal of carrying out automated text translation. Earlier approaches deployed recurrent network (Bahdanau et al., 2014), convolutional network (Gehring et al., 2017), or Transformer (Vaswani et al., 2017) as encoder-decoder module. Among them, Transformer has achieved state-of-the-art results, but the translation performance still needs to be improved due to the limited training corpus. In addition, there are some explorations in bringing the pre-trained models into neural machine translation (Imamura and Sumita, 2019; Shavarani and Sarkar, 2021; Zhu et al., 2020). ## 3 Challenges The goal of this work is to translate the recognized multiple independent glosses (network input) into a complete spoken sentence (expected output). Compared with general neural machine translation tasks, SLT faces two main challenges: Figure 3: Network architecture of TIN-SLT. As shown in the bottom row, we first employ STMC model (Zhou et al., 2020) to recognize sign language videos to independent glosses. Next, we design a multi-level data augmentation scheme to enrich existing data pool for better feature embedding from glosses. Then, we design a task-aware instruction network with a novel instruction module to translate glosses into a complete spoken sentence. Limited annotated corpus: Compared with natural languages, the data resources of sign languages are scarce (Bragg et al., 2019). As a result, the SLT models trained on limited data often suffer from the overfitting problem with poor generalization (Moryossef et al., 2021; Yin et al., 2021). Discrepancy between glosses (input) and texts (output): Figure 2 shows the representation space of sign glosses (yellow dots) and translated texts (red dots) using Word2Vec (Mikolov et al., 2013) on two different datasets. We can observe that the representation space of sign glosses is clearly smaller than that of the target spoken language, thus increasing the difficulty of network learning. ## 4 Our Approach To address the above challenges, we propose TIN-SLT by effectively introducing the pre-trained model into SLT task and further designing a multi-level data augmentation scheme. Figure 3 depicts the detailed network architecture. In the following subsections, we will firstly introduce the network architecture of TIN-SLT, followed by our solutions to address the above two challenges. ### 4.1 Network Architecture of TIN-SLT Given a sign language video $\mathcal{V}=\\{V_{1},\dots,V_{T}\\}$ with $T$ frames, like existing approaches, we also adopt a two-step pipeline by first (i) recognizing $\mathcal{V}$ into a sequence $\mathcal{G}=\\{g_{1},\dots,g_{L}\\}$ with $L$ independent glosses and then (ii) translating $\mathcal{G}$ into a complete spoken sentence $\mathcal{S}=\\{w_{1},\dots,w_{M}\\}$ with $M$ words, but we pay more attention to solve step (ii). Hence, for step (i), as shown in the bottom-left part of Figure 3, we empirically use the spatial-temporal multi-cue (STMC) network (Zhou et al., 2020), which consists of a spatial multi-cue module and a temporal multi-cue module. For more technical details of STMC, please refer to (Zhou et al., 2020). Below, we shall mainly elaborate on the details of addressing step (ii). 
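As a rough illustration of the representation-space comparison behind Figure 2, one can embed gloss and text tokens with Word2Vec and compare how widely each vocabulary spreads. The sketch below uses invented toy sentences (the real comparison would use the full PH14/ASLG corpora) and assumes gensim 4.x for the `vector_size` argument; the spread measure is our own simple proxy, not the authors' exact procedure.

```python
import numpy as np
from gensim.models import Word2Vec

# Invented stand-in sentences: glosses (upper case) vs. spoken-language texts.
gloss_sents = [["TOMORROW", "RAIN", "NORTH"], ["TODAY", "SUN", "SOUTH"]]
text_sents = [["tomorrow", "it", "will", "rain", "in", "the", "north"],
              ["today", "the", "sun", "shines", "in", "the", "south"]]

def embedding_spread(sentences):
    model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1)
    vecs = np.stack([model.wv[w] for w in model.wv.index_to_key])
    # Total variance of the embedded vocabulary: larger => wider representation space.
    return vecs.var(axis=0).sum(), len(model.wv)

print("gloss spread / vocab size:", embedding_spread(gloss_sents))
print("text  spread / vocab size:", embedding_spread(text_sents))
```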
After obtaining the sequence $\mathcal{G}$ of sign glosses, considering that the representation space of glosses is much smaller than that of texts (see Figure 2), we design a multi-level data augmentation scheme to expand the gloss representation space; see the top-left part of Figure 3 for an illustration, and we shall present its details in Section 4.3. Next, as shown in the bottom-middle part of Figure 3, the key to our design is a task-aware instruction network, where we adopt the Transformer as the network backbone, consisting of several encoder and decoder layers, whose objective is to learn the conditional probability $p(\mathcal{S}|\mathcal{G})$. Since SLT is an extremely low-data-resource task, as we have discussed in Section 3, we focus on exploring more task-aware guidance by learning external world knowledge, which is dynamically incorporated into the Transformer backbone via our designed task-aware instruction module. We shall present its details in Section 4.2. Lastly, the outputs of the last decoder layer are passed through a non-linear point-wise feed-forward layer, and we obtain the predicted sentence $\mathcal{S}$ via a linear transform and a softmax layer. ### 4.2 Task-aware Instruction Module As is shown in Figure 3, our task-aware instruction network is composed of a series of encoder and decoder layers. To handle the limited training data, we propose to leverage the learned external knowledge from natural language datasets to guide the learning of sign languages. More specifically, we design a task-aware instruction module to dynamically inject external knowledge from pre-trained models into our encoder and decoder. Below, we shall present the details. Encoder. Given the recognized glosses, let $H_{I}$ denote the instruction features encoded by the pre-trained model (PTM), and let $H_{E}$ and $H_{E}^{\prime}$ denote the input and output of the encoder, which is randomly initialized. As shown in Figure 4, $H_{I}$ and $H_{E}$ are fed into the task-aware instruction module for feature fusion. Then, the output of the instruction module is fed into a residual connection (Add&Norm) and a feed-forward network (FFN). The light yellow box of Figure 4 shows the detailed design of the task-aware instruction module. Specifically, we feed $H_{E}$ into a self-attention module to learn the contextual relationship between the features of glosses, while $H_{I}$ is fed into a PTM-attention module, which has the same architecture as self-attention. Different from existing work which employs a PTM in a general neural network (Zhu et al., 2020), we insert an adaptive layer to fine-tune the PTM-attention output for the SLT task, transforming general gloss features into task-aware features. $h_{i}=\sigma(Attn_{I}(h_{t},H_{I},H_{I}))$ (1) where $\sigma(\cdot)$ denotes the adaptive layer (we set it as fully connected layers here), and $h_{t}$ denotes the gloss features at time step $t$. Then, the outputs of the two modules are combined via an $\alpha$ strategy. The whole process is formulated as follows: $\hat{h}_{t}=(1-\alpha)Attn_{E}(h_{t},H_{E},H_{E})+\alpha h_{i}$ (2) where $Attn_{E}$ and $Attn_{I}$ are two attention layers with different parameters, which follow (Vaswani et al., 2017). The way of setting an optimal $\alpha$ will be introduced later. Decoder. Let $S_{D}$ and $S^{\prime}_{D}$ denote the input and output of the decoder, $s_{t}$ denote the hidden state at time step $t$, and $s_{0}$ denote the beginning token of a sentence, i.e., $<bos>$.
The hidden states are passed to a masked self-attention module, ensuring that each token may only use its predecessors, as follows: $\tilde{s}_{t}=Attn_{D}(s_{t},s_{1:t},s_{1:t})$ (3) Representations $H_{E}^{\prime}$ and $H_{I}$, extracted from the encoder and the PTM, are fed into the decoder-attention and PTM-attention modules, respectively, as shown in the right part of Figure 4. Similar to the encoder, we formulate the decoding output as: $\hat{s}_{t}=(1-\alpha)Attn_{D}(\tilde{s}_{t},H_{E}^{\prime},H_{E}^{\prime})+\alpha h_{i}$ (4) where $Attn_{D}$ represents the decoder attention, and $\hat{s}_{t}$ is the output of the decoder instruction module. Learning-based feature fusion. As shown in Eq. (2), representations extracted from both PTM-attention and self-attention are fused via a parameter $\alpha$. How to set a reasonable and optimal $\alpha$ directly affects the learning performance, which is a problem worthy of exploration. Instead of manually setting a constant $\alpha$, we propose a learning-based strategy to encourage the network to learn the optimal $\alpha$ by itself for better feature fusion. Specifically, the learning-based strategy means that we adopt the back-propagation learning algorithm to update $\alpha$ during the network training process: $\alpha_{t+1}=\Gamma(\alpha_{t},g_{t})$ (5) where $g_{t}$ indicates the gradient and $\Gamma(\cdot)$ represents the optimization algorithm. Though the idea of self-learning is straightforward, we shall show in the experiment section that it is quite effective compared with many other strategies. Figure 4: Details of the Encoder layer, Decoder layer, and Instruction Module. ### 4.3 Multi-level Data Augmentation To decrease the discrepancy between glosses (input) and texts (output), we propose a multi-level data augmentation scheme. Our key idea is that, besides existing gloss-text pairs, we use upsampling as our data augmentation algorithm and generate text-text pairs as extended samples to introduce text information into glosses, thus enlarging the feature distribution space of glosses. Actually, there is a trade-off between augmentation and overfitting, which means the upsampling ratio $\Phi_{upsamp}$ should be determined by the degree of gloss-text difference. We here propose four factors $\phi=[\phi_{v},\phi_{r},\phi_{s},\phi_{d}]$ to calculate the difference at the token, sentence, and dataset levels, and set the weighted sum of $\phi$ as $\Phi_{upsamp}$. Token level. The Vocabulary Difference Ratio (VDR, $\phi_{v}$) is used to measure the difference between the gloss vocabulary space and that of the text, as calculated by Eq. (6). $\phi_{v}=1-\frac{|W_{\mathcal{G}}|}{|W_{\mathcal{G}}\cup W_{\mathcal{S}}|}$ (6) where $W_{\mathcal{G}}$ and $W_{\mathcal{S}}$ represent the gloss and text vocabularies, and $|\cdot|$ denotes the size of a set. We present the Rare Vocabulary Ratio (RVR, $\phi_{r}$) to calculate the ratio of rare words: $\phi_{r}=1-\frac{\sum_{\mathcal{G}\in W_{\mathcal{G}}}\#(Counter(\mathcal{G})<\tau_{r})}{|W_{\mathcal{G}}\cup W_{\mathcal{S}}|}$ (7) where $\#(\cdot)$ equals 1 if the condition is true and 0 otherwise, $Counter(\mathcal{G})$ computes the gloss vocabulary frequency, and $\tau_{r}$ is an empirical frequency threshold, which is set to 2. Sentence level.
We propose the Sentence Cover Ratio (SCR, $\phi_{s}$) to compute the gloss-text pair similarity and coverage ratio, calculated as: $r_{i}=\frac{|\mathcal{G}_{i}\cap\mathcal{S}_{i}|}{|\mathcal{S}_{i}|},\quad\phi_{s}=1-\frac{1}{N}\sum_{i,r_{i}>\tau_{c}}r_{i}$ (8) where $r_{i}$ denotes the covered ratio of gloss-text pair $\mathcal{G}_{i}$ and $\mathcal{S}_{i}$, while $\tau_{c}$ is an empirical threshold (set to $\tau_{c}=0.5$). We label gloss-text pairs which satisfy $r_{i}>\tau_{c}$ as candidates $\mathcal{C}$. Dataset level. We use the Dataset Length-difference Ratio (DLR, $\phi_{d}$) to measure the difference in sentence length between glosses and texts, calculated as: $\phi_{d}=1-\frac{\sum_{i}|\mathcal{G}_{i}|}{\sum_{i}|\mathcal{S}_{i}|}$ (9) Then we can obtain the upsampling ratio by: $\Phi_{upsamp}=\theta\cdot\phi$ (10) where the weight vector $\theta$ is empirically set as $[0.1,0.1,0.6,0.2]$, corresponding to the weights of $[\phi_{v},\phi_{r},\phi_{s},\phi_{d}]$, as we suppose that the sentence level matters the most and that the token level carries the same weight as the dataset level. Lastly, we obtain the upsampling ratio and apply the upsampling strategy to all candidates $\mathcal{C}$ to enrich the dataset. ## 5 Experiments
 | Dev Set | Test Set
---|---|---
Model | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | METEOR | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | METEOR
 | PHOENIX-2014-T Dataset Evaluation
Raw Data (Yin and Read 2020) | 13.01 | 6.23 | 3.03 | 1.71 | 24.23 | 13.69 | 11.88 | 5.05 | 2.41 | 1.36 | 22.81 | 12.12
Seq2seq (Camgoz et al. 2018) | 44.40 | 31.93 | 24.61 | 20.16 | 46.02 | - | 44.13 | 31.47 | 23.89 | 19.26 | 45.45 | -
Transformer (Camgoz et al. 2020) | 50.69 | 38.16 | 30.53 | 25.35 | - | - | 48.90 | 36.88 | 29.45 | 24.54 | - | -
Transformer (Yin and Read 2020) | 49.05 | 36.20 | 28.53 | 23.52 | 47.36 | 46.09 | 47.69 | 35.52 | 28.17 | 23.32 | 46.58 | 44.85
Transformer Ens. (Yin and Read 2020) | 48.85 | 36.62 | 29.23 | 24.38 | 49.01 | 46.96 | 48.40 | 36.90 | 29.70 | 24.90 | 48.51 | 46.24
DataAug (Moryossef et al. 2021b) | - | - | - | - | - | - | - | - | - | 23.35 | - | -
TIN-SLT (Ours) | 52.35 | 39.03 | 30.83 | 25.38 | 48.82 | 48.40 | 52.77 | 40.08 | 32.09 | 26.55 | 49.43 | 49.36
 | ASLG-PC12 Dataset Evaluation
Raw data (Yin and Read 2020) | 54.60 | 39.67 | 28.92 | 21.16 | 76.11 | 61.25 | 54.19 | 39.26 | 28.44 | 20.63 | 75.59 | 61.65
Preprocessed data (Yin and Read 2020) | 69.25 | 56.83 | 46.94 | 38.74 | 83.80 | 78.75 | 68.82 | 56.36 | 46.53 | 38.37 | 83.28 | 79.06
Seq2seq (Arvanitis et al. 2019) | - | - | - | - | - | - | 86.70 | 79.50 | 73.20 | 65.90 | - | -
Transformer (Yin and Read 2020) | 92.98 | 89.09 | 83.55 | 85.63 | 82.41 | 95.93 | 92.98 | 89.09 | 85.63 | 82.41 | 95.87 | 96.46
Transformer Ens. (Yin and Read 2020) | 92.67 | 88.72 | 85.22 | 81.93 | 96.18 | 95.95 | 92.88 | 89.22 | 85.95 | 82.87 | 96.22 | 96.60
TIN-SLT (Ours) | 92.75 | 88.91 | 85.51 | 82.33 | 95.17 | 95.21 | 93.35 | 90.03 | 87.07 | 84.29 | 95.39 | 95.92
Table 1: Comparing the translation performance of TIN-SLT against state-of-the-art techniques on PHOENIX-2014-T and ASLG-PC12 datasets. Clearly, our TIN-SLT achieves the best performance on most metrics. ### 5.1 Implementation Details Datasets. We conduct our experiments on two popular benchmark datasets of different languages and scales, including the PHOENIX-2014-T dataset (Camgoz et al., 2018) and the ASLG-PC12 dataset (Othman and Jemni, 2012). Specifically, PHOENIX-2014-T, i.e., PH14, is an open-source German sign language dataset, recorded from broadcast news about the weather.
This dataset contains parallel sign language videos from 9 different signers, gloss annotations with a vocabulary of 1066 different signs, and their translations with a vocabulary of 2887 different words. ASLG-PC12, i.e., ASLG, is a parallel corpus of English written texts and American Sign Language (ASL) glosses, which is constructed based on rule-based approach. It contains more than one hundred million pairs of sentences between English sentences and ASL glosses. Evaluation metrics. To fairly evaluate the effectiveness of our TIN-SLT, we follow (Yin and Read, 2020) to use the commonly-used BLEU-$N$ ($N$-grams ranges from 1 to 4) (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Banerjee and Lavie, 2005) as the evaluation metrics. Experimental setup. The experiments are conducted on Ubuntu 18.04 system with two NVIDIA V100 GPUs. Our Transformers are built using 2048 hidden units and 8 heads in each layer. Besides, we adopt Adam (Kingma and Ba, 2014) as optimization algorithm with $\beta_{1}=0.9$, $\beta_{2}=0.998$ and use inverse sqrt learning rate scheduler with a weight decay of $10^{-3}$. Please refer to Appendix for more hyper-parameter settings. ### 5.2 Comparison with Others To compare our TIN-SLT against state-of-the-art approaches on sign language translation task, we conducted two groups of experiments, Gloss2Text (G2T) and Sign2Gloss2Text (S2G2T). Evaluation on G2T. G2T is a text-to-text translation task, whose objective is to translate ground-truth sign glosses to spoken language sentences. In specific, for PH14 dataset, we should output German spoken language sentences; while for ASLG dataset, we should output English sentences. Table 1 summarizes the comparison results. Clearly, our TIN-SLT achieves the highest values on most evaluation metrics with a significant margin. Particularly, the superiority of our method on PH14 dataset is more obvious, where almost all the evaluation values are the highest. Thanks to our multi-level data augmentation scheme, the integrity of translated sentences has been improved, which is reflected in the significant improvement of BLEU-$N$ metric. In addition, the strong guidance from external knowledge also encourages our network to generate translated sentences in correct grammar, consistent tense and appropriate word order. For the lower ROUGE-L metric, we think that although the instruction module obviously help improve the accuracy and fluency of translation results, it leads to a slight decrease of continuous texts’ recall rate in this task. (a) Comparing various $\alpha$ strategies on PH14 dataset (b) Comparing various $\alpha$ strategies on ASLG dataset (c) The learned value of $\alpha$ on PH14 dataset (d) The learned value of $\alpha$ on ASLG dataset (e) Effect of beam size (f) Effect of layer number (g) Effect of learning rate (h) Effect of dropout rate Figure 5: Various analysis results. (a) & (b) present the results by using different feature fusion strategies on two datasets, respectively. (c) & (d) show our learned value of $\alpha$ during the training process on the two datasets, respectively. (e)-(h) explore how beam size, layer number, learning rate, and dropout rate affect the model performance. Evaluation on S2G2T. S2G2T is an extended task beyond G2T, which aims to recognize sign language videos to sign glosses, and then translate the recognized glosses to spoken sentences. 
Hence, unlike the task of G2T, in this comparison we focus on the evaluation of the whole two-step pipeline, that is, obtaining spoken language sentences from sign language videos. Considering that only PH14 contains sign language videos, we conduct experiments on this dataset for the S2G2T task, and the results are reported in Table 2. Note that, for the recognition step, we employ the STMC model to realize vision-based sequence learning (Zhou et al., 2020). From the comparison we can see that our TIN-SLT still outperforms existing approaches on most evaluation metrics.
 | Test Set
---|---
Model | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | METEOR
G2T | 44.13 | 31.47 | 23.89 | 19.26 | 45.45 | -
S2G-G2T | 41.54 | 29.52 | 22.24 | 17.79 | 43.45 | -
S2G2T | 43.29 | 30.39 | 22.82 | 18.13 | 43.80 | -
Sign2 | 46.61 | 33.73 | 26.19 | 21.32 | - | -
Bahdanau | 47.53 | 33.82 | 26.07 | 21.54 | 45.50 | 44.87
Luong | 47.08 | 33.93 | 26.31 | 21.75 | 45.66 | 44.84
Transformer Ens. | 50.63 | 38.36 | 30.58 | 25.40 | 48.78 | 47.60
TIN-SLT (Ours) | 51.06 | 38.85 | 31.23 | 26.13 | 48.56 | 47.83
Table 2: Comparing the S2G2T performance by using our TIN-SLT and state-of-the-art techniques on the PHOENIX-2014-T dataset. The results of G2T, S2G-G2T and S2G2T are from (Camgoz et al., 2018). The results of Sign2 are from (Camgoz et al., 2020). The results of Bahdanau, Luong, and Transformer Ens. are from (Yin and Read, 2020). Clearly, our TIN-SLT achieves the highest values on most metrics. ### 5.3 Analysis and Discussions Here, we conducted a series of detailed experiments to analyze our method and give some insights behind our network design. Effect of learning-based feature fusion. In this work, we propose a learning-based strategy to set $\alpha$ dynamically. Here, we conducted experiments comparing this strategy with four other strategies, including (1) cosine annealing (Loshchilov and Hutter, 2016), (2) cosine increment, (3) cosine decrement, and (4) constant value. The update of $\alpha$ by the three cosine strategies is calculated as in Eq. (11) with different settings of the epoch cycle coefficient $T_{c}$: $\alpha_{t+1}=\alpha_{min}+\frac{1}{2}(\alpha_{max}-\alpha_{min})(1-\cos(\frac{T_{t}}{T_{c}}\pi+\gamma))$ (11) where $\alpha$ is the fusion ratio, $T_{t}$ is the current epoch step, and $\gamma$ is the time-shift constant. We set $T_{c}$ as (25, 100, 100) and $\gamma$ as (0, 0, $\pi$) for cosine annealing, cosine decrement, and cosine increment, respectively. The minimum value $\alpha_{min}$ and maximum value $\alpha_{max}$ of $\alpha$ are set to be 0 and 1. Figures 5(a)-5(b) show the experimental results on the two datasets. We can observe that the learning-based strategy (red line) achieves the best result on ASLG and a result comparable to the constant setting ($\alpha$$=$$0.8$) on PH14, while still outperforming the three cosine strategies. Moreover, we also visualize the learned value of $\alpha$ during the training process, as shown in Figures 5(c)-5(d), to find out the contribution ratio of the BERT model to the final performance. We can see that the value of $\alpha$ gradually decreases on PH14, meaning that the model depends more on the BERT pre-trained knowledge at the beginning of the training process and gradually shifts toward the employed training corpus. The observation is the opposite on ASLG, since it is a much larger dataset than PH14 and our model relies more on BERT to further boost the performance near the end of training.
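A minimal PyTorch-style sketch of the $\alpha$ fusion in Eqs. (2), (4), and (5) is given below. The layer sizes, module names, and the sigmoid parameterization that keeps $\alpha$ in $[0,1]$ are our own illustrative choices rather than the authors' exact implementation; $\alpha$ is simply a learnable parameter updated by back-propagation together with the rest of the network.

```python
import torch
import torch.nn as nn

class InstructionFusion(nn.Module):
    """Fuses self-attention over gloss features with PTM-attention over BERT features."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ptm_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.adaptive = nn.Linear(d_model, d_model)        # sigma(.) in Eq. (1)
        self.raw_alpha = nn.Parameter(torch.tensor(0.0))   # learnable fusion weight

    def forward(self, h, h_ptm):
        # h: encoder states (B, L, d); h_ptm: instruction features from the PTM (B, L', d)
        self_out, _ = self.self_attn(h, h, h)              # Attn_E(h_t, H_E, H_E)
        ptm_out, _ = self.ptm_attn(h, h_ptm, h_ptm)        # Attn_I(h_t, H_I, H_I)
        h_i = self.adaptive(ptm_out)                       # Eq. (1)
        alpha = torch.sigmoid(self.raw_alpha)              # keep alpha in [0, 1] (our choice)
        return (1 - alpha) * self_out + alpha * h_i        # Eq. (2)

fusion = InstructionFusion()
h = torch.randn(2, 7, 512)        # toy gloss features
h_ptm = torch.randn(2, 7, 512)    # toy BERT features
out = fusion(h, h_ptm)            # alpha is trained jointly with the rest of the model
print(out.shape, float(torch.sigmoid(fusion.raw_alpha)))
```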
 | Test Set
---|---
Model | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | METEOR
 | PHOENIX-2014-T Dataset Evaluation
Baseline | 47.69 | 35.52 | 28.17 | 23.32 | 46.58 | 44.85
w/ DataAug | 50.77 | 37.85 | 29.88 | 24.57 | 47.39 | 46.95
w/ Encoder | 51.05 | 37.94 | 29.91 | 24.63 | 47.59 | 47.13
w/ Decoder | 50.99 | 38.47 | 30.48 | 25.08 | 48.78 | 48.20
Full pipeline | 52.77 | 40.08 | 32.09 | 26.55 | 49.43 | 49.36
 | ASLG-PC12 Dataset Evaluation
Baseline | 92.98 | 89.09 | 85.63 | 82.41 | 95.87 | 96.46
w/ DataAug | 92.60 | 89.15 | 85.80 | 83.05 | 95.08 | 95.33
w/ Encoder | 92.77 | 89.22 | 86.23 | 83.40 | 95.22 | 96.87
w/ Decoder | 93.15 | 89.80 | 86.49 | 83.89 | 95.34 | 95.67
Full pipeline | 93.35 | 90.03 | 87.07 | 84.29 | 95.39 | 95.92
Table 3: Ablation analysis of our major network components on the G2T task. Analysis on major network components. In our TIN-SLT, there are two major components: the multi-level data augmentation scheme and the instruction module. To validate the effectiveness of each component, we conduct an ablation analysis on the G2T task with the following cases. * • Baseline: We use a two-layer Transformer (Yin and Read, 2020) without data augmentation and without the instruction module as the baseline. * • w/ DataAug: Based on the baseline, we add our data augmentation scheme back. * • w/ Encoder: Based on w/ DataAug, we fuse the instruction module only into the encoder. * • w/ Decoder: Based on w/ DataAug, we fuse the instruction module only into the decoder. In contrast, in our full pipeline, the instruction module is inserted into both the encoder and the decoder. Table 3 shows the evaluation results on both PH14 and ASLG. By comparing the results from Baseline and w/ DataAug, we can see that our data augmentation improves the translation performance, especially for the PH14 dataset. A reasonable interpretation is that the translation task on the PH14 dataset is more difficult than on ASLG, thus our data augmentation contributes more. On the other hand, w/ Encoder, w/ Decoder and Full pipeline explore the best location to introduce PTM information into the model. Results in Table 3 show that our full model achieves the best performance. Particularly, by comparing the results from w/ Encoder and w/ Decoder against the results from SOTA methods (Tables 1 & 3), we can observe that as long as we employ the pre-trained model, no matter where it is inserted into the network, the performance is always better than existing methods.
Model | Size(MB) | Dataset | Gloss(%) | Text(%) | BLEU-4
---|---|---|---|---|---
 | PHOENIX-2014-T Dataset Evaluation
Multilingual | 641.10 | PH14 | 59.96 | 74.62 | 25.48
Distilbert | 257.30 | PH14 | 44.50 | 71.15 | 24.73
Gbert | 421.80 | PH14 | 44.50 | 71.15 | 25.13
Dbmdz | 421.80 | PH14 | 73.72 | 88.13 | 26.55
 | ASLG-PC12 Dataset Evaluation
Base-Tiny | 16.90 | ASLG | 76.77 | 96.35 | 82.44
Electra | 51.70 | ASLG | 76.77 | 96.35 | 82.60
Distilbert | 255.60 | ASLG | 76.77 | 96.35 | 83.06
Base-uncased | 420.10 | ASLG | 76.77 | 96.35 | 84.29
Table 4: Comparing different pre-trained models in terms of BLEU-4. Links to the pre-trained models are listed in the Appendix. Effect of different pre-trained models. We here explored the translation performance by using different pre-trained models; see Table 4. We analyzed the model size and the vocabulary coverage of each pre-trained model with respect to the glosses and texts of our dataset.
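One plausible way to compute the Gloss(%) and Text(%) columns of Table 4 is the fraction of the dataset vocabulary that also appears in the pre-trained model's vocabulary. The sketch below illustrates this reading with toy tokens and the public bert-base-uncased tokenizer; it is our interpretation of the metric, not necessarily the authors' exact procedure.

```python
from transformers import AutoTokenizer

def vocab_coverage(dataset_tokens, ptm_name="bert-base-uncased"):
    """Percentage of distinct dataset tokens that exist in the PTM's wordpiece vocabulary."""
    tokenizer = AutoTokenizer.from_pretrained(ptm_name)
    ptm_vocab = set(tokenizer.get_vocab())          # token -> id mapping; keep the keys
    data_vocab = {t.lower() for t in dataset_tokens}
    covered = sum(1 for t in data_vocab if t in ptm_vocab)
    return 100.0 * covered / max(len(data_vocab), 1)

# Toy token lists standing in for the gloss and text vocabularies of a dataset.
glosses = ["DESC-UP", "YOU", "CONSIDER", "CHOOSE", "OUTCOME"]
texts = ["it", "is", "up", "to", "you", "consider", "choose", "outcome"]
print(f"gloss coverage: {vocab_coverage(glosses):.1f}%")
print(f"text coverage:  {vocab_coverage(texts):.1f}%")
```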
We can see that introducing a pre-trained model with larger vocabulary coverage of the target dataset yields better performance, since such a model can inject more knowledge learned from an external unlabeled corpus into the translation task. For ASLG, although the vocabulary coverage is the same across models, the bigger model performs better since it learns contextual representations better.

Analysis on hyper-parameters. To search for the best settings of our hyper-parameters, we employed Neural Network Intelligence (NNI) (Microsoft, 2018), a lightweight but powerful toolkit. As shown in Figures 5(e)-5(h), we explored how beam size, layer number, learning rate and dropout rate affect the model performance on the PH14 dataset. First, beam search allows the model to explore more candidate outputs, but large beam widths do not always result in better performance, as shown in Figure 5(e); the optimal beam size on PH14 is 10. Second, the number of layers determines the model size and capacity, and a larger model would overfit on a small dataset. In Figure 5(f), we find the optimal number of layers on PH14 to be 3. Lastly, as shown in Figures 5(g) & 5(h), we adopt an early-stopping strategy to avoid overfitting and find that the best learning rate and dropout rate are 0.0003 and 0.45, respectively.

Case study. Table 5 presents representative translation results on ASLG by reporting the translated spoken sentences. Overall, the translation quality is good; even translated sentences with low BLEU-4 scores still convey the same information. We can also observe that our translated sentences are essentially the same as the ground truth, though they sometimes use different expressions, e.g., “decision making” vs. “decision made”. The translation results on PH14 are reported in the Appendix.

| Type | Content | BLEU-4 |
|---|---|---|
| GT Gloss | X-IT BE DESC-UP TO X-YOU TO CONSIDER AND CHOOSE OUTCOME X-YOU WANT TO SEE . | 100.00 |
| GT Text | it is up to you to consider and choose the outcome you want to see . | |
| Pred Text | it is up to you to consider and choose the outcome you want to see . | |
| GT Gloss | X-I WANT IRELAND TO REMAIN AT HEART DECISION MAKE IN EUROPE . | 57.58 |
| GT Text | i want ireland to remain at the heart of decision making in europe . | |
| Pred Text | i want ireland to remain at the heart of the decision made in europe . | |
| GT Gloss | X-I WILL DESC-NEVER FORGET WHAT X-I EXPERIENCE . SHOULD BE ABOUT . | 13.44 |
| GT Text | that is what this european day of memorial should be about . i will never forget what i experienced . | |
| Pred Text | i will never forget what i experienced . | |

Table 5: Qualitative evaluation of translation performance at different BLEU-4 scores on the ASLG dataset.

## 6 Conclusion

In this paper, we proposed a task-aware instruction network for sign language translation. To address the problem of limited data for SLT, we introduced a pre-trained model into the Transformer and designed an instruction module to adapt it to the SLT task. In addition, due to the discrepancy between the representation spaces of sign glosses and spoken sentences, we proposed a multi-level data augmentation scheme. Extensive experiments validate our superior performance compared with state-of-the-art approaches. While there is a clear improvement on most evaluation metrics, the complexity of our model is also increased, leading to a longer training period. In the future, we would like to explore the possibility of designing a lightweight model to achieve real-time efficiency.
## Acknowledgements We thank anonymous reviewers for the valuable comments. This work is supported by the China National Natural Science Foundation (No. 62176101 & No. 62106094) and Zhejiang Lab’s International Talent Fund for Young Professionals. ## References * Arvanitis et al. (2019) Nikolaos Arvanitis, Constantinos Constantinopoulos, and Dimitrios Kosmopoulos. 2019\. Translation of sign language glosses to text using sequence-to-sequence attention models. In _2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)_, pages 296–302. IEEE. * Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_. * Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In _Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization_ , pages 65–72. * Bragg et al. (2019) Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, et al. 2019. Sign language recognition, generation, and translation: An interdisciplinary perspective. In _The 21st international ACM SIGACCESS conference on computers and accessibility_ , pages 16–31. * Camgoz et al. (2018) Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 7784–7793. * Camgoz et al. (2020) Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020. Sign language transformers: Joint end-to-end sign language recognition and translation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 10023–10033. * Cui et al. (2019) Runpeng Cui, Hu Liu, and Changshui Zhang. 2019. A deep neural framework for continuous sign language recognition by iterative training. _IEEE Transactions on Multimedia_ , 21(7):1880–1891. * Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017\. Convolutional sequence to sequence learning. In _International Conference on Machine Learning_ , pages 1243–1252. PMLR. * Huggingface-community (2018) Huggingface-community. 2018. https://huggingface.co/models. Accessed: 2018-11-17. * Imamura and Sumita (2019) Kenji Imamura and Eiichiro Sumita. 2019. Recycling a pre-trained bert encoder for neural machine translation. In _Proceedings of the 3rd Workshop on Neural Generation and Translation_ , pages 23–31. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_. * Ko et al. (2019) Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, and Choongsang Cho. 2019. Neural sign language translation based on human keypoint estimation. _Applied Sciences_ , 9(13):2683. * Koller et al. (2019) Oscar Koller, Necati Cihan Camgoz, Hermann Ney, and Richard Bowden. 2019. Weakly supervised learning with multi-stream cnn-lstm-hmms to discover sequential parallelism in sign language videos. _IEEE transactions on pattern analysis and machine intelligence_ , 42(9):2306–2320. * Koller et al. (2017) Oscar Koller, Sepehr Zargaran, and Hermann Ney. 2017. 
Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent cnn-hmms. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. * Kumar et al. (2020) E Kiran Kumar, PVV Kishore, M Teja Kiran Kumar, and D Anil Kumar. 2020. 3d sign language recognition with joint distance and angular coded color topographical descriptor on a 2–stream cnn. _Neurocomputing_ , 372:40–54. * Li et al. (2020) Dongxu Li, Cristian Rodriguez, Xin Yu, and Hongdong Li. 2020. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. In _Proceedings of the IEEE/CVF winter conference on applications of computer vision_ , pages 1459–1469. * Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In _Text summarization branches out_ , pages 74–81. * Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_. * Microsoft (2018) Microsoft. 2018. https://github.com/microsoft/nni. Accessed: 2018-09-20. * Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In _Advances in neural information processing systems_ , pages 3111–3119. * Moryossef et al. (2021) A. Moryossef, K. Yin, G. Neubig, and Y. Goldberg. 2021. Data augmentation for sign language gloss translation. * Othman and Jemni (2012) Achraf Othman and Mohamed Jemni. 2012. English-asl gloss parallel corpus 2012: Aslg-pc12. In _5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon LREC_. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_ , pages 311–318. * Qiu et al. (2017) Zhaofan Qiu, Ting Yao, and Tao Mei. 2017. Learning spatio-temporal representation with pseudo-3d residual networks. In _proceedings of the IEEE International Conference on Computer Vision_ , pages 5533–5541. * Sharma and Kumar (2021) Shikhar Sharma and Krishan Kumar. 2021. Asl-3dcnn: American sign language recognition technique using 3-d convolutional neural networks. _Multimedia Tools and Applications_ , pages 1–13. * Shavarani and Sarkar (2021) Hassan S Shavarani and Anoop Sarkar. 2021. Better neural machine translation by extracting linguistic information from bert. _arXiv preprint arXiv:2104.02831_. * Sincan and Keles (2020) Ozge Mercanoglu Sincan and Hacer Yalim Keles. 2020. Autsl: A large scale multi-modal turkish sign language dataset and baseline methods. _IEEE Access_ , 8:181340–181355. * Stoll et al. (2018) Stephanie Stoll, Necati Cihan Camgöz, Simon Hadfield, and Richard Bowden. 2018\. Sign language production using neural machine translation and generative adversarial networks. In _Proceedings of the 29th British Machine Vision Conference (BMVC 2018)_. University of Surrey. * Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In _Advances in neural information processing systems_ , pages 3104–3112. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In _Advances in neural information processing systems_ , pages 5998–6008. * Wei et al. (2019) Chengcheng Wei, Wengang Zhou, Junfu Pu, and Houqiang Li. 2019. Deep grammatical multi-classifier for continuous sign language recognition. In _2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM)_ , pages 435–442. IEEE. * Yin et al. (2016) Fang Yin, Xiujuan Chai, and Xilin Chen. 2016. Iterative reference driven metric learning for signer independent isolated sign language recognition. In _European Conference on Computer Vision_ , pages 434–450. Springer. * Yin et al. (2021) K. Yin, A. Moryossef, J. Hochgesang, Y. Goldberg, and M. Alikhani. 2021. Including signed languages in natural language processing. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. * Yin and Read (2020) Kayo Yin and Jesse Read. 2020. Better sign language translation with stmc-transformer. _arXiv preprint arXiv:2004.00588_. * Zhou et al. (2020) Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. 2020. Spatial-temporal multi-cue network for continuous sign language recognition. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 13009–13016. * Zhu et al. (2020) Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. 2020. Incorporating bert into neural machine translation. _arXiv preprint arXiv:2002.06823_. ## Appendix A Appendix ### A.1 Dataset Description In this section, we will introduce two public benchmark datasets used in sign language translation tasks, namely PHOENIX-2014-T and ASLG-PC12. We conducted statistical analysis on the datasets and the results are shown in Table 6. It is obvious that PHOENIX-2014-T is a small-scale dataset, while ASLG-PC12 is a large-scale dataset. Dataset | Gloss | Translation ---|---|--- Train | Dev | Test | Train | Dev | Test PH14 | Samples | 7096 | 519 | 642 | 7096 | 519 | 642 Vocabs | 1066 | 393 | 411 | 2887 | 951 | 1001 Words | 67781 | 3745 | 4257 | 99081 | 6820 | 7816 ASLG | Samples | 82709 | 4000 | 1000 | 82709 | 4000 | 1000 Vocabs | 15782 | 4323 | 2150 | 21600 | 5634 | 2609 Words | 862046 | 41030 | 10503 | 975942 | 46637 | 11953 | | | | | | | Table 6: The descriptive statistics of PHOENIX-2014-T and ASLG-PC12 datasets. Samples row means the sample size of the dataset, Vocabs row represents the total vocabularies contained in the dataset, and Words row means the total words of the dataset. ### A.2 PHOENIX-2014-T Qulitative Result BE-SLT performance of G2T task on PHOENIX-2014-T is shown in Table 7, from which we can observe that sign language translation results are of good quality with different BLEU-4 scores and the predicted sentences can convey effective information even for low BLEU-4 scores. Type | Content | BLEU-4 ---|---|--- Gloss | BERG ORKAN MOEGLICH | 100.00 GT Text | auf den bergen sind orkanartige | böen möglich . Pred Text | auf den bergen sind orkanartige | böen möglich . Gloss | HEUTE NACHT ZWISCHEN NEUNZEHN ZWISCHEN | 57.58 | FUENFZEHN SUEDOST MAXIMAL ZWOELF GT Text | heute nacht werte zwischen neunzehn und fünfzehn | grad im südosten bis zwölf grad . Pred Text | heute nacht neunzehn bis fünfzehn grad im | südosten bis zwölf grad . Gloss | RUSSLAND IX TROCKEN HEISS SCHEINEN FUENF | 13.44 | DREISSIG BIS VIERZIG GRAD GT Text | ganz anders die trockene hitze über russland | mit fünfunddreißig bis vierzig grad . 
Pred Text | aber bei uns wird es auch noch ein bisschen | heißer da sind es fünf bis vierzig grad . | | Table 7: PHOENIX-2014-T: Qualitatively evaluation of translation performance in different BLEU-4 scores. ### A.3 Experiment Parameter In order to help reproduce BE-SLT and its translation performance, as shown in Table 8 and 9, we list the hyper-parameters of the best results on two benchmark datasets. For G2T task on PHOENIX-2014-T, we list the best hyper- parameter settings for the experiments which apply data augmentation scheme, or fuse BERT-attention module into encoder, decoder, and both respectively (namely,w/DataAug, w/Encoder, w/Decoder, w/All). W/All obtains the highest BLEU-4 using the initial learning rate of 0.00025, dropout rate of 0.45, beam search with width 5, and the max epoch size of 120. For G2T task on ASLG-PC12, we also list the hyper-parameter settings for the four experiments that achieve significant results, listed in Table 9. For more experiment details, please refer to our code which will be published upon the publication of this work. | PHOENIX-2014-T ---|--- Parameter | w/DataAug | w/Encoder | w/Decoder | w/All Embedding size | 512 | 512 | 512 | 512 Hidden size | 2048 | 2048 | 2048 | 2048 Head number | 8 | 8 | 8 | 8 Encoder BERT gate | 1 | 1 | 0 | 1 Decoder BERT gate | 1 | 0 | 1 | 1 Optimizer | Adam | Adam | Adam | Adam Learning rate | 0.00025 | 0.00025 | 0.00025 | 0.0003 LR schedule | inverse sqrt | inverse sqrt | inverse sqrt | inverse sqrt Weight decay | $10^{-3}$ | $10^{-3}$ | $10^{-3}$ | $10^{-3}$ Drop out | 0.45 | 0.45 | 0.45 | 0.45 Label smoothing | 0.3 | 0.3 | 0.3 | 0.3 BERT ratio | - | 0.6 | 0.6 | 0.65 Max epoch | 120 | 120 | 120 | 120 BERT model | bert-base-german-dbmdz-uncased | | | | Table 8: The hyper-parameters of the best results on PHOENIX-2014-T for the G2T task. | ASLG-PC12 ---|--- Parameter | w/DataAug | w/Encoder | w/Decoder | w/All Embedding size | 512 | 512 | 512 | 512 Hidden size | 2048 | 2048 | 2048 | 2048 Head number | 8 | 8 | 8 | 8 Encoder BERT gate | 1 | 1 | 0 | 1 Decoder BERT gate | 1 | 0 | 1 | 1 Optimizer | Adam | Adam | Adam | Adam Learning rate | 0.00025 | 0.00025 | 0.00025 | 0.00045 LR schedule | inverse sqrt | inverse sqrt | inverse sqrt | inverse sqrt Weight decay | $10^{-3}$ | $10^{-3}$ | $10^{-3}$ | $10^{-3}$ Drop out | 0.45 | 0.45 | 0.45 | 0.4 Label smoothing | 0.3 | 0.3 | 0.3 | 0.1 BERT ratio | - | 0.6 | 0.6 | 0.6 Max epoch | 70 | 70 | 70 | 70 BERT model | bert-base-uncased | | | | Table 9: The hyper-parameters of the best results on ASLG-PC12 for the G2T task. ### A.4 Alpha Strategy Settings Here we introduce the $\alpha$ value setting details corresponding to cosine strategy and constant strategy adopted in this work as shown in Formula 2 and Formula 4. The cosine annealing and cosine decrement strategies are calculated according to Formula 11. To simplify the calculation, the cosine increment strategy is calculated according to Formula 12. In order to be more intuitive, we plotted the curve of $\alpha$ value during the training process, as shown in Figure 6. $\alpha_{t+1}=1-\alpha_{min}-\frac{1}{2}(\alpha_{max}-\alpha_{min})(1-cos(\frac{T_{t}}{T_{c}}\pi))$ (12) (a) Cosine annealing strategy (b) Constant strategy (c) Cosine increment strategy (d) Cosine decrement strategy Figure 6: The $\alpha$ value during the training process in four setting strategies, namely cosine annealing, cosine increment, cosine decrement and constant. 
### A.5 Pre-trained Models Download All BERT pre-trainied models adopted in Table 4 are published by (Huggingface- community, 2018). In order to help reproduce our work and use our code easily, we summarize the download links of the pre-trained models as follows. PHOENIX-2014-T Dataset * • Multilingual: _bert-base-multilingual-uncased_ https://huggingface.co/bert-base-multilingual-uncased * • Distilbert: _distilbert-base-german-cased_ https://huggingface.co/distilbert-base-german-cased * • Gbert: _gbert-base_ https://huggingface.co/deepset/gbert-base * • Dbmdz: _bert-base-german-dbmdz-uncased_ https://huggingface.co/bert-base-german-dbmdz-uncased ASLG-PC12 Dataset * • Base-Tiny: _bert-tiny_ https://huggingface.co/prajjwal1/bert-tiny * • Electra: _electra-small-discriminator_ https://huggingface.co/google/electra-small-discriminator * • Distilbert: _distilbert-base-uncased_ https://huggingface.co/distilbert-base-uncased * • Base-uncased: _bert-base-uncased_ https://huggingface.co/bert-base-uncased
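For convenience, the snippet below shows one way to fetch two of these checkpoints programmatically. It assumes the Hugging Face `transformers` package is installed and is only a usage sketch, not part of our training pipeline.

```python
from transformers import AutoModel, AutoTokenizer

# Checkpoint identifiers listed above (one example per dataset).
checkpoints = {
    "PHOENIX-2014-T / Dbmdz": "bert-base-german-dbmdz-uncased",
    "ASLG-PC12 / Base-uncased": "bert-base-uncased",
}

for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)  # downloads and caches the vocabulary
    model = AutoModel.from_pretrained(ckpt)          # downloads and caches the weights
    print(name, "->", model.config.hidden_size, "hidden units")
```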
# Detailed Gender Wage Gap Decompositions: Controlling for Worker Unobserved Heterogeneity Using Network Theory111Fogel: Opportunity Insights, <EMAIL_ADDRESS>. Modenesi: University of Michigan, <EMAIL_ADDRESS>. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1256260. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This research is also supported by the Alfred P. Sloan Foundation through the CenHRS project at the University of Michigan. This work is done in partnership with the Brazilian Institute of Applied Economic Research (IPEA). We thank John Bound, Abigail Jacobs, Matthew Shapiro, Mel Stephens, and Sebastian Sotelo for advice and guidance throughout this project. We also thank Charlie Brown, Zach Brown, Raj Chetty, Ying Fan, John Friedman, Florian Gunsilius, Nathan Hendren, Dhiren Patki, Rafael Pereira, Matthew Staiger, Dyanne Vaught, and Jean-Gabriel Young for helpful comments and discussions. We also received helpful feedback from seminar participants at the University of Michigan, Labo(u)r Day, the Urban Economics Association, Networks 2021, Yale University, Duke University, the Federal Reserve Bank of Boston, Opportunity Insights, and JAM.

Jamie Fogel and Bernardo Modenesi

###### Abstract

Recent advances in the economics literature on decomposition methods have allowed for the identification and estimation of detailed wage gap decompositions. In this context, building reliable counterfactuals requires tighter controls to ensure that similar workers are correctly identified: important unobserved variables such as skills must be accounted for, and only workers with similar observable characteristics should be compared. This paper contributes to the wage decomposition literature in two main ways: (i) developing an economically principled, network-based approach to control for unobserved worker skill heterogeneity in the presence of potential discrimination; and (ii) extending existing generic decomposition tools to accommodate a potential lack of overlapping covariate support between the groups being compared, which is likely to be the norm in more detailed decompositions. We illustrate the methodology by decomposing the gender wage gap in Brazil.

## 1 Introduction

Significant attention has been paid to the gap in wages between men and women. Researchers are interested in understanding how much of the gap is due to men and women performing different work using different skills, and how much is due to men and women being paid differently for similar work. A number of methods exist for trying to answer this question. These methods decompose gender wage gaps into a portion explained by differences in characteristics between men and women, and a portion explained by differences in the return to characteristics, or “discrimination”. However, all of these methods rely on three assumptions. First, they assume that unobserved determinants of earnings are independent of gender. To the extent that there exist unobserved worker characteristics that are important for determining wages and are correlated with gender, researchers will obtain biased estimates of the return to observable characteristics.
As a result, decompositions of gender wage gaps into a component explained by covariates and a component explained by the return to covariates will be incorrect. Second, they assume a functional form in order to estimate the function that maps observable characteristics into wages and thus serves as the foundation for counterfactuals that ask what men would earn if they had the same characteristics except their gender were switched to female, and vice versa. Third, they assume that the covariates for male workers and female workers share a common support. While this is likely to hold when the number of covariates is small, as more covariates are added (possibly to satisfy the independence assumption) the common support assumption becomes more likely to be violated.222As more covariates are added it becomes harder to find another worker who shares the same values of all covariates. In this paper, we (i) propose a new method for identifying unobserved determinants of workers’ earnings from the information revealed by detailed data on worker–job matching patterns, (ii) non-parametrically estimate counterfactual wage functions for male and female workers, (iii) allow for a relaxation of the common support assumption, and (iv) apply our methods by decomposing the gender wage gap in Brazil using improved counterfactuals based on (i), (ii) and (iii). We find that the Brazilian gender wage gap is almost entirely explained by male and female workers who possess similar skills and perform similar tasks being paid different wages, not women possessing skills or tasks that pay relatively lower wages. To understand the problem created by unobserved determinants of productivity, suppose that there are three types of worker characteristics that are relevant for determining wages: gender, other characteristics observable to researchers, and characteristics that are observable to labor market participants, but not to researchers. A naive wage decomposition would simply compare male wages to female wages and attribute all differences to the effect of gender. A more common approach would condition on observable characteristics like age, experience, occupation, education, and union membership and would attribute all differences in wages, conditional on these characteristics, solely to being a woman as opposed to being a man. However, this would miss the fact that even workers with identical observable covariates may perform distinct labor. As Goldin (2014) shows, male lawyers significantly outearn female lawyers largely because males are more likely to work long, inflexible hours, which leads to high wages. Therefore, if we simply compared the wages of male lawyers to the wages of female lawyers, we might mistakenly conclude that male and female lawyers receive differential pay for the same work, when in fact male and female lawyers perform different types of legal work. In other words, male and female lawyers differ in terms of covariates that are observed by labor market participants but not by researchers. The key to our approach is identifying information about worker characteristics observable to labor market participants, but not to researchers, directly from the behavior of labor market participants. 
If we can identify groups of workers and groups of jobs who are similar _from the perspective of labor market participants_ , then we can be confident that any gender wage differentials within these groups are due to differential returns to labor market activities by gender, rather than differences in the work done by male and female workers. We employ a revealed preference approach that relies on workers’ and jobs’ choices, rather than observable variables or expert judgments, to classify workers and jobs into groups. Our key insight is that linked employer-employee data contain a previously underutilized source of information: millions of worker–job matches, each of which reflects workers’ and jobs’ perceptions of the workers’ skills and the jobs’ tasks. Intuitively, if two workers are employed by the same job, they probably have similar skills, and if two jobs employ the same worker those jobs probably require workers to perform similar tasks. However, since discrimination may lead men and women with similar skills to sort into different jobs, our method includes a correction for gender-based sorting into jobs that normalizes workers’ job match probabilities by the match probabilities for their gender. We formalize this intuition and apply it to large-scale data using a Roy (1951) model in which workers supply labor to jobs according to comparative advantage. Workers belong to a discrete set of latent _worker types_ defined by having the same “skills” and jobs belong to a discrete set of latent _markets_ defined by requiring employees to perform the same “tasks.”333“Skills” and “tasks” should be interpreted broadly as any worker and job characteristics that determine which workers match with which jobs. Workers match with jobs according to comparative advantage, which is determined by complementarities between skills and tasks at the worker type–market level. Workers who have similar vectors of match probabilities over markets are therefore revealed to have similar skills and belong to the same worker type, and jobs that have similar vectors of match probabilities over worker types are revealed to have similar tasks and belong to the same market. Our model extends the model in Fogel and Modenesi (2023) to allow firms to have labor market power, thereby rationalizing pay heterogeneity among workers with the same skills in jobs requiring the same tasks and microfounding the correction for gender-based sorting. Once we have clustered workers with similar skills into worker types and jobs requiring similar tasks into markets, we turn to estimating counterfactual wage functions. Traditional decomposition methods estimate counterfactual female earnings by fitting wage regressions using observations for male workers only, but generating predicted values by multiplying average female covariate values by the male regression coefficients. This approach suffers from three main issues: (i) it requires the researcher to impose a restrictive regression functional form; (ii) it does not necessarily allow for heterogeneous returns to covariates in predictions; and (iii) it does not have embedded tools to handle when workers do not share similar covariate support. Taken together, these issues can potentially bias the counterfactual estimation exercise, which is the foundation of gender wage gap decompositions. In order to circumvent these issues, we make use of a flexible matching estimator for counterfactual earnings. 
We implement a matching estimator in which we match male and female workers who belong to the same worker type and are employed by jobs in the same market. In doing so, we implicitly assume that worker types and markets fully account for all factors, other than gender, that affect workers’ wages, although we also estimate specifications in which we include other observable characteristics in addition to worker types and markets. Within these matched groups, we use the male workers’ mean wages as counterfactuals for what the female workers would have earned if they were male, and vice versa. We compare our matching estimator to a standard estimator and find similar results, although in some specifications the matching estimator is clearly preferable. However, there may be some worker type–market cells with no male workers or no female workers so we introduce a correction to account for this lack of common support. We address the issue of a lack of common covariate support between male and female workers by decomposing the gender wage gap into four components: (i) differences due to different covariate distributions between groups, i.e. the composition factor, for observations that share the same support; (ii) differences related to differential returns to covariates between groups over a common support of the covariates, i.e. the structural factor, often associated with labor market discrimination; (iii) a part due to observations from male workers being out of the female workers’ support of the covariates; and (iv) the last portion related to observations of female workers being out of the male workers’ support of the covariates. This decomposition allows us to perform counterfactuals similar to existing methods for the part of the distribution of the covariates for which male and female workers have common support, yet it still allows us to quantify how much of the gender wage gap occurs outside the region of common support and would therefore be ignored by standard decomposition methods. We estimate our model and conduct empirical analyses using Brazilian administrative records from the Annual Social Information Survey (RAIS) that is managed by the Brazilian labor ministry. The RAIS data contain detailed information about every formal sector employment contract, including worker demographic information, occupation, sector, and earnings. Critically, these data represent a network of worker–job matches in which workers are connected to every job they have ever held, allowing us to identify job histories of workers, their coworkers, their coworkers’ coworkers, and so on. We restrict our analysis to the Rio de Janeiro metropolitan area both for computational reasons and because restricting to a single metropolitan area enables us to focus on skills and tasks dimensions of worker and job heterogeneity rather than geographic heterogeneity. In our data, the average male worker earns a wage 16.7% higher than the average female worker. Our primary result is that almost the entire gender wage gap is attributable to male and female workers who possess similar skills and perform similar tasks being paid differently, or what is often referred to as “discrimination.” This is true at the aggregate level, and remains true when we perform wage decompositions within each worker type–market cell, indicating that this is a widespread phenomenon, not one driven by large wage differentials in small subsets of the labor market. 
We find that wage decompositions based on standard observable variables suffer from omitted variable bias, emphasizing the need for detailed worker and job characteristics in the form of worker types and markets. We find that wage decompositions based on linear regressions yield similar findings to those based on matching when a lack of common support is not an issue, however when male and female workers’ characteristics do not share a common support the matching estimator with corrections for a lack of common support outperforms alternatives. Literature: The literature of decomposition methods in economics can be classified into two main branches. The first decomposes average differences in a variable of interest $Y$ — often wages — between two groups of workers. The most widespread method in this class was developed by Oaxaca (1973) and Blinder (1973). The second branch decomposes functionals of the variable of interest $Y$ – e.g. its distribution or quantile function. Given that functionals of a variable often provide more information than its average, the second group of decompositions is referred to as “detailed decompositions” (Fortin et al. 2011). A seminal paper in this group is DiNardo et al. (1996)444Barsky et al. (2002) develop a methodology similar to DiNardo et al. (1996), focusing on issues that arise from lack of common covariate support between the groups in the decomposition. Modenesi (2022) discusses their approach in light of alternatives to handle the lack of common support. and their methodology and inference was further generalized and improved later by Chernozhukov et al. (2013)555Firpo et al. (2018) later in this literature uses influence functions to propose a detailed decomposition that is invariant to the order of the decomposition.. We follow the first branch of the literature in focusing on average differences, largely because our rich set of controls introduces a curse of dimensionality that renders detailed decompositions infeasible. Our method for handling a lack of common covariate support follows Ñopo (2008) and Garcia et al. (2009)666Garcia et al. (2009) and Morello and Anjolim (2021) both study the evolution of the Brazilian gender gap. Garcia et al. (2009) uses the same approach we use to handle the problem of lack of overlapping supports, and Morello and Anjolim (2021) have a similar matching methodology to decompose the gender gap. In addition to using similar methods for the decomposition, we add the skills and tasks controls derived from the labor market network, and we derive a distribution of gender gaps for different clusters of similar workers performing similar tasks.. In concurrent work we extend Ñopo (2008) to generic “detailed decompositions” (Modenesi 2022). Our model of labor market power builds on Card et al. (2015), Card et al. (2018) and Gerard et al. (2018) but allows for significantly more granular worker and job heterogeneity. The way we model multidimensional worker–job heterogeneity relates to papers that use a skills-tasks framework in the worker-job matching literature (Autor et al. 2003; Acemoglu and Autor 2011; Autor 2013; Lindenlaub 2017; Tan 2018; Kantenga 2018). Our method for clustering workers and jobs fits into the relatively recent literature in labor economics that extracts latent information from the network structural of the labor market (Sorkin 2018; Nimczik 2018; Jarosch et al. 2019) and directly extends Fogel and Modenesi (2023) by allowing for labor market power. 
Methodologically, we draw from the community detection branch of network theory (Larremore et al. 2014; Peixoto 2018; 2019)777More precisely, we employ a variant of the SBM which makes use of network edge weights (Peixoto 2018), which are key for us to model the presence of potential discrimination in the labor market.. Our paper connects to this literature by formalizing a theoretical link between monopsonistic labor market models and the stochastic block model, providing microfoundations and economic interpretability of network theory unsupervised learning tools in order to solve economic problems. By controlling for skills and tasks, our papers share common ground with Goldin (2014) and Hurst et al. (2021). Goldin (2014) indicates that the potential residual discrimination in the gender wage gap is due to the nature of the tasks in some occupations, by using a linear regression approach dummies for occupation interacted with the gender dummy. We add to her approach by proposing an economic model for discrimination, which provides us with both worker and job heterogeneity controls, in addition to performing the gender gap decomposition while taking into account potential violations of conventional decomposition assumptions. Hurst et al. (2021) on the other hand are assessing the black-white wage gap over time as function of changes in the taste vs statistical discrimination factors, as well as the result of workers sorting after these changes. Roadmap: The paper proceeds as follows. Section 2 introduces a simple framework for decomposition methods. Section 3 presents our model of worker–job matching and derives from it our algorithm for clustering workers into worker types and jobs into markets. Section 4 provides greater detail on the wage gap decomposition methods we employ. Section 5 describes our data. Section 6 presents results. Finally, Section 7 concludes. ## 2 A framework for decomposition methods We introduce a simple framework for decomposition methods to guide the analysis in this paper. Define the actual wage of worker $i$ employed by job $j$ as $Y_{ij}$, and let $G_{i}$ be a dummy denoting whether worker $i$ is male. The difference between the average wage for male workers and the average wage for female workers, which we call the “overall wage gap,” can be expressed as: $\Delta:=E[Y_{ij}|G_{i}=1]-E[Y_{ij}|G_{i}=0]$ (1) The overall wage gap above can be decomposed into two factors: differences in productivity between male and female workers, usually referred to as the composition factor; and differences in pay between equally productive male and female workers, known as the structural factor. We use the potential outcomes framework in order to formally decompose the overall wage gap into these two factors. Denote by $Y_{0ij}$ the potential wage of worker $i$ employed by job $j$ when the worker is female, and $Y_{1ij}$ the potential wage of worker $i$ employed by job $j$ when the worker is male. Let $x$ be the vector of all variables that determine workers’ productivity. We assume that the worker’s gender may affect their pay, but does not directly affect their productivity. We represent the potential outcomes as functions of $x$ as follows: $Y_{gij}:=Y_{g}(x_{ij}),g\in\\{0,1\\}$. Notice that $x$ has both $i$ and $j$ subscripts, as the marginal product of worker $i$ at their current job $j$ depends on both the worker’s skills and the job’s tasks. 
The fact that there is a different earnings function for men and women reflects the possibility that male and female workers with identical productivities may be paid differently. Furthermore, it is possible to use the dummy for gender to represent observed wages as a function of potential outcomes using the switching regression model $Y_{ij}:=G_{i}Y_{1}(x_{ij})+(1-G_{i})Y_{0}(x_{ij})$. At this point we are able to decompose the overall wage gap, $\Delta$, into the composition and structural components mentioned above by adding and subtracting the quantity888Analogously, the overall decomposition can be performed by adding and subtracting the male counterfactual quantity $E[Y_{0}(x_{ij})|G_{i}=1]$ in $\Delta$. The main results in this paper use the female counterfactual approach. $E[Y_{1}(x_{ij})|G_{i}=0]:=\int Y_{1}(x_{ij})dF_{G=0}(x)$ from the overall wage gap $\Delta$, where $F_{G=0}(x)$ is the productivity distribution for female workers. Intuitively, $E[Y_{1}(x_{ij})|G_{i}=0]$ is the mean earnings for a counterfactual set of workers possessing the female productivity distribution, but who are paid like men999Alternatively, this counterfactual term can be interpreted as the mean earnings of male workers whose productivity distribution was adjusted to match the female productivity distribution.

$\Delta:=\underset{\Delta_{X}:=\text{Composition}}{\underbrace{E[Y_{ij}|G_{i}=1]-E[Y_{1}(x_{ij})|G_{i}=0]}}+\underset{\Delta_{0}:=\text{Structural}}{\underbrace{E[Y_{1}(x_{ij})|G_{i}=0]-E[Y_{ij}|G_{i}=0]}}$ (2)

The composition portion can be rewritten as $E[Y_{1}(x_{ij})|G_{i}=1]-E[Y_{1}(x_{ij})|G_{i}=0]$101010We use the representation of the observed $Y$ in terms of potential outcomes to write $E[Y_{ij}|G_{i}=1]=E[G_{i}Y_{1}(x_{ij})+(1-G_{i})Y_{0}(x_{ij})|G_{i}=1]=E[Y_{1}(x_{ij})|G_{i}=1]$ and substitute it in $\Delta_{X}$.. It represents the difference between what male workers actually earn and what male workers would have earned in a counterfactual scenario in which their productivity distribution was equivalent to the female productivity distribution. This quantity captures the portion of the overall wage gap attributable to differences in the composition, or distribution of productivity, between male and female workers. The structural portion is equivalent to $E[Y_{1}(x_{ij})-Y_{0}(x_{ij})|G_{i}=0]$111111Analogously to the previous term, using the map from the potential outcomes to the observed $Y$, we can write $E[Y_{ij}|G_{i}=0]=E[G_{i}Y_{1}(x_{ij})+(1-G_{i})Y_{0}(x_{ij})|G_{i}=0]=E[Y_{0}(x_{ij})|G_{i}=0]$ and substitute it in $\Delta_{0}$.. This is the difference between female earnings in a counterfactual state in which women were paid like equally productive men and actual average female earnings. This portion of the overall wage gap is due to structural differences in how the two genders are paid, holding productivity constant, which is why this term is often associated with a form of discrimination. What we define as the structural component might reasonably be thought of as discrimination, where labor market discrimination is defined as equally productive workers, performing similar tasks, being paid differently based on observables that do not influence productivity. Other forms of discrimination may exist, including mistreatment or harassment, differential pre-job human capital accumulation opportunities, or discriminatory hiring practices, but we do not consider those in this paper.
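To fix ideas, the sketch below computes $\Delta$, $\Delta_{X}$ and $\Delta_{0}$ from equation (2) on a toy data set, taking the counterfactual $E[Y_{1}(x_{ij})|G_{i}=0]$ as the male mean wage within each productivity cell $x$ (anticipating the matching estimator described in Section 4). The column names and numbers are purely illustrative.

```python
import pandas as pd

# Toy data: one row per worker; x is a discrete productivity cell
# (in our application, a worker type-market pair) and G = 1 for men.
df = pd.DataFrame({
    "G":    [1, 1, 1, 1, 0, 0, 0, 0],
    "x":    ["a", "a", "b", "b", "a", "a", "b", "b"],
    "wage": [10.0, 12.0, 20.0, 22.0, 9.0, 10.0, 15.0, 16.0],
})

# Overall gap: Delta = E[Y | G=1] - E[Y | G=0].
delta = df.loc[df.G == 1, "wage"].mean() - df.loc[df.G == 0, "wage"].mean()

# Counterfactual E[Y_1(x) | G=0]: women priced at the male mean wage of their cell.
male_cell_mean = df[df.G == 1].groupby("x")["wage"].mean()
women = df[df.G == 0]
cf_women = women["x"].map(male_cell_mean).mean()

delta_X = df.loc[df.G == 1, "wage"].mean() - cf_women   # composition
delta_0 = cf_women - women["wage"].mean()                # structural

print(f"Delta = {delta:.2f}, composition = {delta_X:.2f}, structural = {delta_0:.2f}")
```

In this toy example men and women have identical cell compositions, so the entire gap is structural; with unequal compositions, part of the gap would load onto $\Delta_{X}$ instead.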
In our set up, individual discrimination occurs when the wage for worker $i$ at job $j$ is different if the individual’s gender changes, ceteris paribus, i.e. $Y_{1}(x_{ij})-Y_{0}(x_{ij})\neq 0$. The problem is that, in order to measure this quantity, we run into the fundamental problem of causal inference: it is impossible to observe the potential wages in both states for the same individual. Therefore we must make assumptions in order to construct counterfactual values, i.e. the value of $Y_{1}$ for a female worker, or the value of $Y_{0}$ for a male worker. In this paper, we break the assumptions needed for the counterfactual estimation into two parts and we show how our approach contributes to deal with limitations in each of them. The first assumption is that workers with the same values of $x$ are equally productive and would be paid equal wages if gender played no role in wage determination, conditional on productivity. This is equivalent to assuming that $x$ contains all factors that affect productivity and are correlated with gender. This “conditional independence/ignorability” assumption, is the basis of all decomposition methods in economics (Fortin et al. 2011), as it is a requirement for consistency of its estimates for the gap decomposition portions. However, not all factors that theoretically should be included in $x$ are observable. A problem would arise if certain factors that contribute to worker $i$’s productivity in job $j$ are both unobserved by the econometrician and correlated with gender. If such factors exist, our counterfactuals would be invalid. Specifically, wage differentials due to unobserved differences in skills and tasks between male and female workers would be attributed to the effect of gender itself. For example, if women tend to have better social skills but we do not observe social skills, then we would interpret women outearning men in social skill-intensive jobs as discrimination against men, when in fact it is simply the result of differences in unobserved skills. Therefore, it is critical to come as close as possible to identifying groups of male and female workers who have exactly the same skills and perform exactly the same tasks. If we do so, then any gender wage differentials within this group are attributable to the effect of gender per se. In Section 3 we address this issue by identifying latent worker and job characteristics relevant to productivity and wage determination using the network of worker–job matches. The second set of assumptions required to build the counterfactual $Y_{1}(x)$ for females in $\Delta$ are related to the choice of an estimation strategy for the function $Y_{1}(\cdot)$121212Another approach decomposes the wage distributions, as opposed to actual wages, which would be equivalent to switching $Y$ for its distribution $F_{Y}$, but still needing the estimation of the counterfactual $F_{Y_{1}}(y|x)$ for females (e.g. DiNardo et al. 1996 and Chernozhukov et al. 2013). We choose not to employ these decompositions in this paper as our setup does not satisfy basic conditions for decomposing distributions, such as having a low-dimensional vector of observable characteristics $x$ – given curse of dimensionality – and having the overlapping supports assumptions satisfied.. A common estimation strategy requires fitting a linear wage regression for males and using its estimated coefficients to predict wages, but inputting female workers’ covariates (Oaxaca 1973 and Blinder 1973). 
This approach is highly tractable, however the assumption of a linear functional form is to some extent arbitrary, and using the same regression coefficients to predict counterfactual earnings for distinct female workers (i.e. allowing no heterogeneous returns to observable characteristics) could lead to biased estimates of counterfactual earnings. An alternative approach relies on matching males to each female worker based on similar observable characteristics, and uses the wages of matched male workers in order to inform each female’s counterfactual wage. This less-parametric approach has the advantage of not imposing any functional form assumption for $Y_{1}(\cdot)$, however it requires us to observe a sufficiently rich set of observable variables that male and female workers with the same observables may be assumed to have similar productivity. Moreover, matching methods are unreliable when we are unable to find a female worker with the same observables as a male worker, or vice versa. In Section 3 we describe a new method to enhance the set of observable characteristics available to the researcher, reducing the scope for unobserved determinants of productivity to cause biased estimates. In Section 4 we compare and contrast different methods to decompose the gender wage gap given a set of observable characteristics, circumventing issues present in counterfactual earnings estimation. ## 3 Revealing latent worker and job heterogeneity using network theory In this section we present an economic model of monopsonistic wage setting, which rationalizes a wage gap between two groups of workers who have different demographic characteristics, but have the same skills and perform the same tasks. Intuitively, otherwise identical male and female workers may supply labor to individual jobs with different elasticities, and jobs respond by offering them wages with different markdowns. If one group of workers supplies labor to jobs more inelastically, then they will be paid less, holding productivity constant. Moreover, the model microfounds our network-based clustering algorithm, which identifies groups of male and female workers with similar skills who perform similar tasks, and therefore can serve as good counterfactuals for each other. The model builds on the model of the labor market developed in Fogel and Modenesi (2023), with two important differences: (i) in this paper workers have idiosyncratic preferences over individual jobs, not just markets, causing jobs to face upward-sloping labor supply curves, and (ii) firms may offer different wages to men and women, even if they have identical skills and perform identical tasks. The model defines a probability distribution that governs how workers match with jobs, forming the network of worker-job matches observed in linked employer-employee data. We use this probability distribution to assign similar workers to worker types and similar jobs to markets, using a Bayesian method based on generative network theory models, which we present after the economic model. ### 3.1 Economic model We propose a model with two primary components: heterogeneous workers who supply labor and firms that produce goods by employing labor to perform tasks. Workers supply their skills to jobs, which are bundles of tasks embedded within firms. Jobs’ tasks are combined by the firms’ production functions to produce output. 
We assume that firms face an exogenously-determined demand for their goods131313For an alternative version of the model with endogenous product demand, see Fogel and Modenesi (2023).. Our model of the labor market has the following components: * • Each worker is endowed with a “worker type,” and all workers of the same type have the same skills. * • A job is a bundle of tasks within a firm. As we discuss in Section 5, we define a job in our data as an occupation–establishment pair. * • Each job belongs to a “market,” and all jobs in the same market are composed of the same bundle of tasks. * • There are $I$ worker types, indexed by $\iota$, and $\Gamma$ markets, indexed by $\gamma$. * • The key parameter governing worker-job match propensity is an $I\times\Gamma$ productivity matrix, $\Psi$, where the ($\iota,\gamma$) cell, $\psi_{\iota\gamma}$ denotes the number of efficiency units of labor a type $\iota$ worker can supply to a job in market $\gamma$.141414We can think of $\psi_{\iota\gamma}$ as $\psi_{\iota\gamma}=f(X_{\iota},Y_{\gamma})$, where $X_{\iota}$ is an arbitrarily high dimensional vector of skills for type $\iota$ workers, $Y_{\gamma}$ is an arbitrarily high dimensional vector of tasks for jobs in market $\gamma$, and $f()$ is a function mapping skills and tasks into productivity. This framework is consistent with Acemoglu and Autor (2011)’s skill and task-based model, and is equivalent to Lindenlaub (2017) and Tan (2018). A key difference is that Lindenlaub and Tan observe $X$ and $Y$ directly and assume a functional form for $f()$, whereas we assume that $X$, $Y$, and $f()$ exist but are latent. We do not identify $X$, $Y$, and $f()$ directly because in our framework $\psi_{\iota\gamma}$ is a sufficient statistic for all of them. Time is discrete, with time periods indexed by $t\in\\{1,\dots,T\\}$ and workers make idiosyncratic moves between jobs over time. Neither workers, households, nor firms make dynamic decisions, meaning that the model may be considered one period at a time. We do not consider capital as an input to production. #### 3.1.1 Firm’s problem Each firm, indexed by $f$, has a production function $Y_{f}(\cdot)$ which aggregates tasks from different labor markets, indexed by $\gamma$. Firm $f$ faces exogenously-determined demand for its output, $\bar{Y}_{f}$. The firm’s only cost is labor. As we discuss in the next subsection, firms face upward- sloping labor supply curves and therefore have wage-setting power. Firms demand labor in each market, $\gamma\in\\{1,\dots,\Gamma\\}$ and offer a different wage per efficiency unit of labor for each market. Firms also may offer different wages to workers in different demographic groups $g\in\\{A,B\\}$ (e.g. male and female workers), although type $A$ and type $B$ workers belonging to the same worker type $\iota$ are equally productive in all jobs. We define a job $j$ as a firm $f$ – market $\gamma$ pair. We define the wage per efficiency unit of labor for demographic group $g$ workers employed in job $j$ $w_{j}^{g}$. Define $L_{j}^{g}$ as the quantity of efficiency units of labor supplied by demographic group $g$ workers to job $j$. 
The firm’s problem is to choose the quantity of labor inputs in each job for each demographic group in order to minimize costs subject to the constraint that production is greater than or equal to the firm’s exogenous product demand, $\bar{Y}_{f}$: $\displaystyle\min_{\\{{w}_{j}^{A},{w}_{j}^{B}\\}_{j=1}^{\Gamma}}\sum_{j=1}^{\Gamma}w^{A}_{j}L^{A}_{j}+w^{B}_{j}L^{B}_{j}\quad\text{s.t.}\quad Y_{f}\left(L_{1},\ldots,L_{\Gamma}\right)\geq\bar{Y}_{f}$ where $L_{j}=L^{A}_{j}+L^{B}_{j}$ is the total amount of efficiency units of labor employed by job $j$ and $Y_{f}$ is a concave and differentiable production function. Taking the first order condition with respect to $w_{j}^{g}$ allows us to solve for the wage paid by job $j$ to workers in demographic group $g$ as a markdown relative to the marginal revenue product of labor: $\displaystyle w_{j}^{g}=$ $\displaystyle\underset{\text{ Markdown }}{\underbrace{\frac{e_{j}^{g}}{1+e_{j}^{g}}}}\qquad\times\underset{\text{Marg. revenue product of labor}}{\underbrace{\mu_{f}\frac{\partial Y_{f}}{\partial L_{j}}}}$ (3) where $\mu_{f}$ is the shadow revenue associated with one more unit of output and $e_{j}^{g}:=\frac{\partial L_{j}^{g}}{\partial w_{j}^{g}}\frac{w_{j}^{g}}{L_{j}^{g}}$ is the labor supply elasticity of workers from group $g$ to job $j$. Equation (3) shows that the wage paid to demographic group $g$ workers employed in job $j$ (equivalently, employed in market $\gamma$ by firm $f$) is the product of a markdown and the marginal revenue product of labor in job $j$. The markdown depends on the demographic group $g$’s elasticity of labor supply to job $j$. As labor supply becomes more elastic, the markdown converges to 1 and the wage converges to the marginal product of labor. Conversely, as labor supply becomes less elastic, the wage declines further below the marginal product of labor. This equation rationalizes different demographic groups being paid different wages for the same labor: if one demographic group supplies labor more inelastically, they will be paid less.151515We are referring to the elasticity of labor supply _to a specific job $j$_, which may differ from a group’s labor supply elasticity to the overall labor market. For example, it could be the case that men supply labor more inelastically at the extensive margin, but women have stronger idiosyncratic preferences for specific jobs, making them less likely to change jobs in response to a wage differential. In this case, women would supply labor less elastically to a specific job $j$ and thus receive lower wages. The firm employs workers in both demographic groups despite paying them different wages because in order to attract the marginal worker from the lower-paid demographic group, it must raise wages for all inframarginal workers in that group. At some point the marginal cost (inclusive of the required raises for inframarginal workers) of hiring workers from the lower-paid demographic group exceeds the marginal cost of hiring workers from the higher-paid demographic group, and the firm will switch to hiring the higher-paid workers. #### 3.1.2 Worker’s problem A worker belonging to worker type $\iota$ and demographic group $g\in\\{A,B\\}$, has a two step decision. First, she chooses a market $\gamma$ in which to look for a job, and second she chooses a firm $f$ (and by extension a job $j$). The worker’s type defines their skills. Type $\iota$ workers can supply $\psi_{\iota\gamma}$ efficiency units of labor to any job in market $\gamma$. 
$\psi_{\iota\gamma}$ is a reduced form representation of the skill level of a type $\iota$ worker in the various tasks required by a job in market $\gamma$. Units of human capital are perfectly substitutable, meaning that if type 1 workers are twice as productive as type 2 workers in a particular market $\gamma$ (i.e. $\psi_{1\gamma}=2\psi_{2\gamma}$), firms would be indifferent between hiring one type 1 worker and two type 2 workers at a given wage per efficiency unit of labor, $w_{j}$. Therefore, the law of one price holds within each demographic group for each job, and a type $\iota$ worker belonging to demographic group $g$ employed in a job in market $\gamma$ is paid $\psi_{\iota\gamma}w_{j}^{g}$. Because workers’ time is indivisible, each worker may supply labor to only one job in each period and we do not consider the hours margin. Workers choose job $j$, equivalent to $\gamma f$, in order to maximize utility, which is the sum of log earnings $\log(\psi_{\iota\gamma}w_{j}^{g})$ and an idiosyncratic preference for job $j$, $\varepsilon_{ij}^{g}$: $\displaystyle j^{*}=$ $\displaystyle\arg\max_{j}\quad\log(\psi_{\iota\gamma}w_{j}^{g})+\varepsilon_{ij}^{g}.$ We assume that $\varepsilon_{ij}^{g}$ follows a nested logit distribution with parameter $\nu_{\gamma}^{g}$, with the $\gamma$ subscript indicating that nests are defined by $\gamma$: $\displaystyle\varepsilon_{ij}^{g}\sim NestedLogit(\nu_{\gamma}^{g})$ It follows from this assumption about the distribution of $\varepsilon_{ij}^{g}$ that the probability that worker $i$ belonging to worker type $\iota$ and demographic group $g$ matches with job $j$ in market $\gamma$ is161616Details for the derivation of the choice probability in the Appendix A.: $\displaystyle P(j=j^{*}|j\in\gamma,i\in\iota,g)$ $\displaystyle=\underset{\underset{\text{{1st step}: market choice}}{\underbrace{\scriptstyle P(\gamma=\gamma^{*}|i\in\iota,j\in\gamma,g)}}}{\underbrace{\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}}}\underset{\underset{\text{{2nd step}: job choice}}{\underbrace{\scriptstyle P(j=j^{*}|i\in\iota,j\in\gamma,\gamma=\gamma^{*},g)}}}{\underbrace{\frac{(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}{\sum_{j\in\gamma}(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}}}$ (4) where $I^{g}_{\iota\gamma}:=\sum_{j\in\gamma}(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}$, also referred to as the inclusive value, is the expected utility a type $\iota$ worker faces when choosing market $\gamma$. Intuitively, the nested logit assumption decomposes the job choice probability into a first stage in which the worker chooses a market and then a second stage in which the worker chooses a job conditional on their choice of a market. ### 3.2 Identifying worker types and markets #### 3.2.1 Deriving the likelihood Now that we have derived the probability of worker $i$ matching with job $j$ from the primitives of our model, the next step is using this probability as the basis for a maximum likelihood procedure that assigns workers to worker types and jobs to markets based on the observed set of worker–job matches. This procedure builds on Fogel and Modenesi (2023), by allowing workers in the same worker type but different demographic groups to have different vectors of match probabilities over jobs. 
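Before decomposing equation (4), a small numerical illustration may help fix ideas. The sketch below evaluates the two-step choice probabilities in equation (4) for a single worker type and demographic group in a toy economy with two markets; all parameter values are invented for illustration.

```python
import numpy as np

# Toy parameters for one worker type iota and one demographic group g.
nu    = {"gamma1": 0.5, "gamma2": 0.8}              # nesting parameters nu_gamma^g
psi   = {"gamma1": 2.0, "gamma2": 1.0}              # productivities psi_{iota gamma}
wages = {"gamma1": np.array([1.0, 1.2]),            # wages w_j^g of the jobs in each market
         "gamma2": np.array([0.9, 1.1, 1.3])}

# Inclusive values I_{iota gamma}^g = sum_{j in gamma} (psi * w_j)^(1/nu).
I = {m: np.sum((psi[m] * wages[m]) ** (1.0 / nu[m])) for m in wages}

# First step: market choice, exp(I)^nu normalized across markets.
num = {m: np.exp(I[m]) ** nu[m] for m in wages}
P_market = {m: num[m] / sum(num.values()) for m in wages}

# Second step: job choice within the chosen market.
P_job = {m: (psi[m] * wages[m]) ** (1.0 / nu[m]) / I[m] for m in wages}

for m in wages:
    print(m, "P(market) =", round(float(P_market[m]), 3),
          "P(job) =", np.round(P_market[m] * P_job[m], 3))
```

The first factor captures sorting across markets by comparative advantage, while the second captures which jobs within the chosen market pay enough to attract the worker.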
We decompose the choice probability in equation (4) into a component that depends only on variation at the $\iota,\gamma,g$ level and a component that depends on wages at individual jobs: $\displaystyle P(j=j^{*}|j\in\gamma,i\in\iota,g)$ $\displaystyle=\underset{\underset{\iota-\gamma-g\text{ component}\quad}{\underbrace{=:\Omega_{\iota\gamma}^{g}}}}{\underbrace{\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}-1}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\psi_{\iota\gamma}^{\frac{1}{\nu_{\gamma}^{g}}}}}\underset{\underset{\quad j-g\text{ component}}{\underbrace{=:d_{j}^{g}}}}{\underbrace{\vphantom{\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}-1}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\psi_{\iota\gamma}^{\frac{1}{\nu_{\gamma}^{g}}}}(w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}}.$ (5) The first term reflects workers choosing markets according to comparative advantage, while the second captures the fact that some jobs in market $\gamma$ require more workers than others (due to exogenous product demand differences), and since jobs face upward-sloping labor supply curves, they must pay higher wages to attract greater numbers of workers. Isolating the group-level ($\iota,\gamma,g$) variation from the idiosyncratic job-level variation allows us to cluster workers into worker types and jobs into markets on the basis of having the same group-level match probabilities, as we discuss below. The choice probabilities we have discussed thus far refer to a single job search for worker $i$. In reality, we may observe workers searching for jobs multiple times, and each of these searches is informative about the latent worker skills and job tasks that define worker types $\iota$ and markets $\gamma$. We incorporate repeated searches by assuming that workers periodically receive exogenous separation shocks which arrive following a Poisson process. Upon receiving a separation shock, the worker draws a new $\varepsilon_{ij}^{g}$ shock and repeats the job choice process described above. Assuming that $Poisson$-distributed exogenous separations happen at a rate $d_{i}^{g}$ for the individual worker $i$, then the expected number of times she will match with job $j$ throughout our sample period is given by $\displaystyle d_{i}^{g}\cdot P(j=j^{*}|j\in\gamma,i\in\iota,g)=\Omega_{\iota\gamma}^{g}d_{i}^{g}d_{j}^{g}.$ (6) Equation 6 forms the basis of our algorithm for clustering workers into worker types and jobs into markets, but before proceeding we must define some notation. Let $N_{W}$ and $N_{J}$ denote the number of workers and jobs, respectively, in our data. Define $A_{ij}$ as the number of times that worker $i$ is observed to match with job $j$. Further, define $\bm{A}$ as the matrix with typical element $A_{ij}$. $\bm{A}$ is a $N_{W}\times N_{J}$ matrix and represents the full set of worker–job matches observed in our data. As discussed previously, each individual worker belongs to a latent worker type denoted by $\iota$ and each job belongs to a latent market denoted by $\gamma$. The list of all latent worker type and market assignments is stored in the $(N_{W}+N_{J})\times 1$ vector denoted by $\bm{b}$, known as the node membership vector. We define $\bm{g}$ as the $N_{W}\times 1$ vector containing each worker’s demographic group affiliation. 
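As a concrete illustration of the objects just defined, the sketch below assembles the match matrix $\bm{A}$ and the group vector $\bm{g}$ from a small, hypothetical table of worker–job match records; the column names and toy data are ours, not the variables in RAIS.

```python
import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix

# Hypothetical match records: one row per observed worker-job match
matches = pd.DataFrame({
    "worker_id": [0, 0, 1, 2, 2, 2],
    "job_id":    [0, 1, 1, 0, 2, 2],
})
workers = pd.DataFrame({"worker_id": [0, 1, 2], "female": [1, 0, 1]})

n_workers = workers["worker_id"].nunique()
n_jobs = matches["job_id"].nunique()

# A_ij = number of times worker i matches with job j (an N_W x N_J sparse matrix)
counts = matches.groupby(["worker_id", "job_id"]).size().reset_index(name="n")
A = coo_matrix((counts["n"], (counts["worker_id"], counts["job_id"])),
               shape=(n_workers, n_jobs)).tocsr()

# g: each worker's demographic group; row/column sums give the degree distributions
g = workers.sort_values("worker_id")["female"].to_numpy()
worker_degrees = np.asarray(A.sum(axis=1)).ravel()   # matches per worker
job_degrees = np.asarray(A.sum(axis=0)).ravel()      # matches per job
```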
The matrix of worker–job matches $\bm{A}$ and workers’ demographic groups $\bm{g}$ are the data we use to cluster workers and jobs, while the node membership vector $\bm{b}$ is the latent object identified by the maximum likelihood procedure we discuss below. Following equation (6), the expected number of matches between a worker–job pair, $A_{ij}$, can be written as

$\displaystyle E[A_{ij}|\bm{b},g]=\Omega_{\iota\gamma}^{g}d_{i}^{g}d_{j}^{g}.$ (7)

(It is worth mentioning that (i) the information $i\in\iota,j\in\gamma$ is contained in $\bm{b}$, and (ii) $A_{ij}$ is the number of matches between worker $i$ and job $j$, which makes the event $j=j^{*}|i$ equivalent to the event $A_{ij}=1$. These two facts allow us to use more succinct notation that directly links theoretical objects in our model to data: $P(j=j^{*}|j\in\gamma,i\in\iota,g)=P(A_{ij}=1|\bm{b},g)$, whose distributional form we know. This connects the notation of the economic model to that of the network model, but it does not yet define the likelihood of interest, $P(\bm{A},\bm{g}|\bm{b})$, in which $A_{ij}$ can take values other than $1$.) We prove in Appendix C that our assumption of Poisson-distributed exogenous separation shocks implies that $A_{ij}$ follows a Poisson distribution:

$\displaystyle A_{ij}|\bm{b},g\sim Poisson(\Omega_{\iota\gamma}^{g}d_{i}^{g}d_{j}^{g})$ (8)

Finally, we incorporate equation (8) above to fully characterize the likelihood of our data as a function of the unknown parameters, by factoring the joint probability of matches and demographic groups:

$\displaystyle P(A_{ij},g|\bm{b})=\underset{Poisson(\Omega_{\iota\gamma}^{g}d_{i}^{g}d_{j}^{g})}{\underbrace{P(A_{ij}|\bm{b},g)}}\underset{\alpha_{\iota\gamma}^{g}}{\underbrace{P(g|\bm{b})}},$ (9)

where $\alpha_{\iota\gamma}^{g}\equiv P(g|\bm{b})$ is the fraction of type $\iota$ workers employed in market $\gamma$ jobs who belong to demographic group $g$. Equation (9) corresponds to a commonly-used method from network theory known as the bipartite degree-corrected stochastic block model with edge weights (SBM). The SBM clusters nodes in a network (workers and jobs) into groups (worker types and markets) based on patterns of connections between nodes. (Larremore et al. (2014) lay out the advantages of using bipartite models rather than one-sided network projections to fit SBMs; Karrer and Newman (2011) present the methodology for degree correction, which substantially improves the ability of the SBM to fit large-scale real-world networks; and Peixoto (2018) deals with weighted SBM inference, which is how we accommodate discrimination influencing matches within the SBM.) The main parameter of interest is the set of assignments of workers to worker types and jobs to markets contained in $\bm{b}$, while all of the other parameters are nuisance parameters that can be straightforwardly determined after $\bm{b}$ is defined (Karrer and Newman 2011). The next step is to maximize the likelihood defined in equation (9), which we address in the next subsection.

#### 3.2.2 A Bayesian approach to recovering worker types and markets

In order to make the estimation of worker types and markets feasible, and to use a principled method for choosing the number of clusters, we employ Bayesian methods from the network literature (Peixoto 2017).
We can rewrite equation (9) as

$\displaystyle P(\bm{b}|A_{ij},g)\quad\propto$ $\displaystyle\qquad P(A_{ij},g|\bm{b})P(\bm{b})$ $\displaystyle=$ $\displaystyle\quad\underset{Poisson(\Omega_{\iota\gamma}^{g}d_{i}^{g}d_{j}^{g})}{\underbrace{P(A_{ij}|\bm{b},g)}}\underset{\alpha_{\iota\gamma}^{g}}{\underbrace{P(g|\bm{b})}}\underset{\text{Prior}}{\underbrace{P(\bm{b})}}$ (10)

Maximizing this posterior distribution means assigning individual workers to worker types $\iota$ and jobs to markets $\gamma$. The basic intuition follows Fogel and Modenesi (2023), where it is described in greater detail: workers belong to the same worker type if they have approximately the same vector of match probabilities over jobs, while jobs belong to the same market if they have approximately the same vector of match probabilities over workers. The key difference in this paper is that workers in the same worker type $\iota$ may belong to different demographic groups $g$, and each worker type–demographic group pair may face its own wage and therefore have its own match probability. Equation (10) allows for this by letting the match probabilities $P(A_{ij},g|\bm{b})$ depend on the workers’ demographic group $g$ in addition to the worker types and markets stored in $\bm{b}$. If worker types are defined by having common vectors of match probabilities over jobs, but match probabilities are allowed to vary by demographic group within a worker type, how do we know that type $\iota$ workers in group $A$ belong to the same worker type as type $\iota$ workers in group $B$? The answer is embedded in equation (10). The $\alpha_{\iota\gamma}^{g}$ term in equation (10) adjusts workers’ match probabilities so that they are relative to their own gender. Suppose women are significantly underrepresented in construction jobs and overrepresented in nursing jobs, and vice versa for men. Once we incorporate this adjustment, we assign workers to a construction-intensive worker type if they are disproportionately likely to match with construction jobs relative to other workers of their gender. Once we adjust the raw match probabilities to account for this selection, we obtain identical adjusted match probability vectors for this group of men and this group of women, causing us to assign them to the same worker type, $\iota$. Equation (10) assumes that we know the number of worker types and markets _a priori_; however, this is rarely the case in real world applications. Therefore we must choose the number of worker types and markets, $I$ and $\Gamma$ respectively. We do so using the principle of minimum description length (MDL), an information theoretic approach that is commonly used in the network theory literature. MDL chooses the number of worker types and markets to minimize the total amount of information necessary to describe the data, where the total includes both the complexity of the model conditional on the parameters _and_ the complexity of the parameter space itself. MDL will penalize a model that fits the data very well but overfits by using a large number of parameters (corresponding to a large number of worker types and markets), and therefore requires a large amount of information to encode it. MDL effectively adds a penalty term to our objective function, such that our algorithm finds a parsimonious model. See Fogel and Modenesi (2023) for greater detail. Equation (10) defines a combinatorial optimization problem.
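As a concrete illustration of the objective that the search procedure described next has to evaluate, the following sketch computes the Poisson part of the likelihood in equation (9) for one candidate assignment $\bm{b}$. It is a simplified stand-in for the actual machinery (estimation in this paper is carried out with graph-tool, as explained below), and all function and variable names are ours.

```python
import numpy as np
from scipy.special import gammaln

def block_poisson_loglik(A, worker_type, job_market, g, Omega, d_i, d_j):
    """Poisson part of the likelihood in equation (9):
    A_ij | b, g  ~  Poisson(Omega[iota_i, gamma_j, g_i] * d_i * d_j).

    A           : (N_W, N_J) array of match counts
    worker_type : (N_W,) int array of iota assignments (part of b)
    job_market  : (N_J,) int array of gamma assignments (part of b)
    g           : (N_W,) int array of demographic groups
    Omega       : (n_types, n_markets, n_groups) array of block-level rates
    d_i, d_j    : worker- and job-level degree-correction terms
    """
    mu = Omega[worker_type[:, None], job_market[None, :], g[:, None]] \
         * d_i[:, None] * d_j[None, :]
    return np.sum(A * np.log(mu) - mu - gammaln(A + 1))
```

The full posterior in equation (10) multiplies this term by $P(g|\bm{b})=\alpha_{\iota\gamma}^{g}$ and by the prior $P(\bm{b})$, and in practice the block rates and degree terms are estimated jointly with $\bm{b}$ rather than supplied; the sketch only evaluates the objective at given values.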
If we had infinite computing resources, we would test all possible assignments of workers to worker types and jobs to markets and choose the one that maximizes the posterior in equation (10); however, this is not computationally feasible for large networks like ours. Therefore, we use a Markov chain Monte Carlo (MCMC) approach in which we modify the assignment of each worker to a worker type and of each job to a market in a random fashion and accept or reject each modification with a probability given as a function of the change in the likelihood. We repeat the procedure for multiple different starting values to reduce the chance of settling on a local maximum. We implement the procedure using a Python package called graph-tool (https://graph-tool.skewed.de/; see Peixoto (2014) for details). Now that we have dealt with the issue of important worker and job characteristics being unobserved, we turn our attention to estimating counterfactuals for wage gap decompositions.

## 4 Wage gap decomposition

This section lays out the estimation strategies we use to decompose the Brazilian gender wage gap, while circumventing some of the issues associated with conventional decomposition methods. We decompose the gender wage gap into the quantities listed in equation (2): the composition component $E[Y_{1}(x_{ij})|G_{i}=1]-E[Y_{1}(x_{ij})|G_{i}=0]$ and the structural component $E[Y_{1}(x_{ij})-Y_{0}(x_{ij})|G_{i}=0]$. The quantity $E[Y_{g}(x_{ij})|G_{i}=g]=E[Y_{ij}|G_{i}=g]$, $g\in\{0,1\}$, can be consistently and straightforwardly estimated since it is directly observable. The challenge is estimating the counterfactual wage function $E[Y_{1}(x_{ij})|G_{i}=0]$, given that the potential outcome $Y_{1}(x_{ij})$ is not observed for female workers. Estimating $E[Y_{1}(x_{ij})|G_{i}=0]$ requires us to use data on male workers to estimate a relationship between observable characteristics $x_{ij}$ and male earnings $Y_{1}$ and then extrapolate this relationship to female workers. In this paper, we consider two approaches to estimating counterfactual wage functions. The first is the commonly-used Oaxaca-Blinder decomposition, which we henceforth refer to as OB (Oaxaca 1973; Blinder 1973). For the OB decomposition, we estimate two linear regressions, one for the set of male workers and another for the set of female workers, to estimate the functions $Y_{1}(\cdot)$ and $Y_{0}(\cdot)$, respectively, as denoted in equation (11). Values for $E[Y_{g}(x_{ij})|G_{i}=g]$ are obtained by averaging the fitted values of the respective linear regressions. Estimates for the counterfactual $E[Y_{1}(x_{ij})|G_{i}=0]$ are obtained by using the coefficients from the linear regression fitted for males, $\hat{\beta}_{G=1}$, and multiplying them by the average female covariates, $\bar{x}_{G=0}$, as defined in equation (11). This is equivalent to producing fitted values from the males’ regression while plugging in females’ covariates.

OB regressions:

$\displaystyle Y_{g}(x_{ij})=x_{ij}^{T}\beta_{G=g}+\epsilon_{gij},\qquad g\in\{0,1\}$ (11)

OB counterfactual estimate:

$\displaystyle\widehat{E[Y_{1}(x_{ij})|G_{i}=0]}:=\bar{x}_{G=0}^{T}\hat{\beta}_{G=1},\quad\bar{x}_{G=0}:=\sum_{i|G_{i}=0}\frac{x_{ij}}{n}$

As discussed in Section 2, the OB decomposition has several important limitations. Although highly tractable, OB imposes potentially restrictive assumptions on $Y_{1}(x_{ij})$. First, it assumes that its expectation is linear in $x_{ij}$.
Although linear regressions allow for flexible transformations of the covariates, the functional form is still a somewhat arbitrary researcher choice. Second, by using a linear regression to estimate the potential outcome function, $Y_{1}(\cdot)$, as in equation (11), it uses the same functional form to compute counterfactuals for all male workers. In other words, it imposes the same average returns to covariates for all workers, which would bias the counterfactual estimation if returns to worker characteristics are heterogeneous. The third limitation of OB is related to the overlapping supports assumption, also referred to as the common supports assumption. This assumption imposes that the support of $x$ for one of the genders has to fully overlap with the support of $x$ for the other gender, and it is imposed by almost all decomposition methods in economics (Fortin et al. 2011). The overlapping supports assumption is imposed to ensure that the counterfactual function $Y_{1}(x)$ estimated using male data, $x_{G_{i}=1}$, is only used to predict counterfactual earnings for females whose values of $x$ lie within the male support of $x$. When this condition is not satisfied in the data, observations that are outside of the common support are typically trimmed or given virtually zero weight in the estimation process, potentially eliminating significant numbers of workers from the analysis and making the analysis representative of only a subset of the population (Modenesi 2022). This is particularly salient when $x$ lies in a high-dimensional space, as is the case in our application with high-dimensional worker types and markets. Our preferred decomposition strategy relies on matching male and female workers with similar observable characteristics and using matched workers of different genders as counterfactuals for each other. This approach was initially proposed by Ñopo (2008) and was further extended by Modenesi (2022). Not only does this approach avoid the strong functional form assumptions made by OB, it also includes a framework for handling a lack of common support. In this paper, we choose to use the original estimation strategy laid out by Ñopo (2008), given its tractability, especially for a high-dimensional set of covariates like ours, and we refer to it as the matching decomposition henceforth. The matching decomposition has two main components: (i) matching observations and (ii) relaxing the overlapping supports assumption. First, counterfactual female earnings $Y_{1}(x_{ij})|G_{i}=0$ — what female workers would have earned if their gender were changed to male but nothing else about them changed — are obtained by exact matching each female to one or more male workers with similar observable characteristics and then taking a sample average over the matched males. (In this paper we coarsen a few variables, such as years of education and age, and use the coarsened versions of these variables to perform the exact matching. This serves the purpose of matching more individuals, giving the method more statistical power, since workers who differ by, e.g., one year of age, ceteris paribus, are roughly the same in terms of productivity.) This method for building counterfactuals is non-parametric: it assumes no functional form for $Y_{1}(\cdot)$, it performs no extrapolation outside the support of $x$, and it avoids using data from all workers to build counterfactuals for a specific worker.
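The following pandas sketch illustrates the matching step just described: each female worker is exactly matched to the male workers in the same cell (in our application, cells could be coarsened covariates or the worker type–market pairs introduced in Section 3), and the cell average of male wages serves as her counterfactual $Y_{1}(x_{ij})$. Column names are illustrative assumptions, not the variable names in our data.

```python
import pandas as pd

def matched_counterfactuals(df, cell_cols, wage_col="log_wage", male_col="male"):
    """Exact matching within cells defined by `cell_cols`.

    Returns (males, females): the male sample with a `matched` flag, and the
    female sample with the counterfactual male wage `y1_counterfactual`
    (NaN when the female's cell contains no male worker, i.e. she is unmatched).
    """
    males = df[df[male_col] == 1].copy()
    females = df[df[male_col] == 0].copy()

    # Counterfactual Y_1(x): average male wage within the female worker's cell
    cell_means = (males.groupby(cell_cols)[wage_col].mean()
                  .rename("y1_counterfactual").reset_index())
    females = females.merge(cell_means, on=cell_cols, how="left")
    females["matched"] = females["y1_counterfactual"].notna()

    # A male worker is matched if at least one female worker shares his cell
    female_cells = set(map(tuple, females[cell_cols].itertuples(index=False)))
    males["matched"] = [tuple(r) in female_cells
                        for r in males[cell_cols].itertuples(index=False)]
    return males, females
```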
The matching decomposition handles the lack of common support issue by allowing unmatched workers, i.e. those outside of the common support of $x$, to contribute to the overall observed gap. In the matching decomposition, we add two terms, $\Delta_{M}$ and $\Delta_{F}$, to the expression for the overall wage gap $\Delta$ in equation (1), which capture the contributions of unmatched male and female workers, respectively. The resulting expression is

$\displaystyle\Delta=$ $\displaystyle E[Y_{ij}|G_{i}=1]-E[Y_{ij}|G_{i}=0]=:\Delta_{X}+\Delta_{0}+\Delta_{M}+\Delta_{F},$ (12)

where

$\displaystyle\Delta_{X}:=E\left[Y_{ij}|Matched,G_{i}=1\right]-E\left[Y_{1}(x_{ij})|Matched,G_{i}=0\right]$
$\displaystyle\Delta_{0}:=E\left[Y_{1}(x_{ij})|Matched,G_{i}=0\right]-E\left[Y_{ij}|Matched,G_{i}=0\right]$
$\displaystyle\Delta_{M}:=\left\{E\left[Y_{ij}|Unmatched,G_{i}=1\right]-E\left[Y_{ij}|Matched,G_{i}=1\right]\right\}P\left(Unmatched|G_{i}=1\right)$
$\displaystyle\Delta_{F}:=\left\{E\left[Y_{ij}|Matched,G_{i}=0\right]-E\left[Y_{ij}|Unmatched,G_{i}=0\right]\right\}P\left(Unmatched|G_{i}=0\right)$

Notice that if all observations are matched the $\Delta_{M}$ and $\Delta_{F}$ terms vanish and this method collapses back to the original decomposition we have in equation (2). The terms $\Delta_{X}$ and $\Delta_{0}$ still have the same interpretation as discussed in Section 2 — composition and structural, respectively — but now only similar workers of one gender are used to build counterfactuals for the other gender, using an agnostic functional form for the counterfactual function. The extra terms $\Delta_{M}$ and $\Delta_{F}$ measure the contribution of unmatched male and female workers to the overall observed gender gap. Each of them measures the difference between matched and unmatched workers of a given gender, weighted by the proportion of unmatched workers within that gender (precise definitions of each of the terms in the NP decomposition can be found in Appendix B). For example, if unmatched male workers have an average log wage that is 0.2 higher than the average log wage for matched male workers and 10% of male workers are unmatched, then $\Delta_{M}=0.2\times 0.1=0.02$. To understand how the matching decomposition handles a lack of common support, consider male workers employed as professional football players. These workers will not be matched to female workers and therefore would be omitted from the analysis if we simply restricted it to the region of common support. However, these workers do contribute meaningfully to the overall gender wage gap because they earn significantly more than the average female worker. The matching decomposition would handle this by including these workers in the $\Delta_{M}$ term. Intuitively, it would say that some of the gender wage gap can be decomposed within the region of common support, while some of it is explained by male workers outside the region of common support earning more than male workers within the region of common support, and similarly for female workers. Our preferred specifications in this paper use the matching decomposition in conjunction with the latent skills and tasks clusters revealed by our network methodology developed in Section 3.
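Continuing the hypothetical example above, the sketch below computes the overall gap and the four components in equation (12) from the matched male and female samples. It simply mirrors the definitions of $\Delta_{X}$, $\Delta_{0}$, $\Delta_{M}$, and $\Delta_{F}$ and is not the exact code used to produce Table 1.

```python
import numpy as np

def nopo_components(males, females, wage_col="log_wage"):
    """Overall gap and the four components of equation (12), as sample analogues."""
    m, f = males, females
    gap = m[wage_col].mean() - f[wage_col].mean()

    # Composition: matched males vs. counterfactual male wage at matched females' covariates
    delta_X = m.loc[m["matched"], wage_col].mean() - f.loc[f["matched"], "y1_counterfactual"].mean()
    # Structural: counterfactual vs. observed wage for matched females
    delta_0 = f.loc[f["matched"], "y1_counterfactual"].mean() - f.loc[f["matched"], wage_col].mean()

    # Contributions of workers outside the common support (zero if everyone is matched)
    um, uf = ~m["matched"], ~f["matched"]
    delta_M = ((m.loc[um, wage_col].mean() - m.loc[m["matched"], wage_col].mean()) * um.mean()
               if um.any() else 0.0)
    delta_F = ((f.loc[f["matched"], wage_col].mean() - f.loc[uf, wage_col].mean()) * uf.mean()
               if uf.any() else 0.0)

    out = {"gap": gap, "delta_X": delta_X, "delta_0": delta_0,
           "delta_M": delta_M, "delta_F": delta_F}
    assert np.isclose(gap, delta_X + delta_0 + delta_M + delta_F)  # identity in equation (12)
    return out
```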
Since we define labor market gender discrimination as workers with similar skills performing similar tasks with similar productivity but being paid differently based on gender, our worker type–market clusters serve as natural cells within which workers are considered equivalent in terms of productivity. With the matching decomposition we are able to ensure that only similar workers are used when estimating counterfactual earnings, mitigating counterfactual biases, and also to avoid dropping unmatched workers from the estimation procedure, as mentioned above. Although the original matching decomposition is not considered a “detailed decomposition” by the decomposition literature in economics, in combination with our network clusters it allows us to compute an economically principled distribution of the gender gap (and its components) across a vast number of cells of workers in the labor market, mapping how discrimination is spread over different parts of the market.

## 5 Data

### 5.1 Administrative Brazilian data

We use the Brazilian linked employer-employee data set RAIS. The data contain detailed information on all employment contracts in the Brazilian formal sector, going back to the 1980s. The sample we work with includes all workers between the ages of 25 and 55 employed in the formal sector in the Rio de Janeiro metro area at least once between 2009 and 2018. These workers are defined as matching with unemployment (or the informal sector) in years in which we do not observe them. We exclude the public sector, because institutional barriers make flows between the Brazilian public and private sectors rare, and we also exclude the military. Finally, we exclude the small number of jobs that do not pay workers on a monthly basis. Our wage variable is the real hourly log wage in December, defined as total December earnings divided by hours worked. We deflate wages using the national inflation index. We exclude workers who were not employed for the entire month of December because we do not have accurate hours worked information for such workers. We define a job as an occupation-establishment pair. This implicitly assumes that all workers employed in the same occupation at the same establishment are performing approximately the same tasks. Our data contain 4,578,210 unique workers, 289,836 unique jobs, and 7,940,483 unique worker–job matches. The average worker matches with 1.73 jobs and the average job matches with 27.4 workers. 42% of workers match with more than one job during our sample. Figure 1 presents histograms of the number of matches per worker and per job. In network theory parlance, these are known as degree distributions.

Figure 1: Distributions of Number of Matches Per Worker and Job. (a) Workers (b) Jobs

Our network-based classification algorithm identifies 187 worker types ($\iota$) and 341 markets ($\gamma$). Figure 2 presents histograms of the number of workers per worker type and the number of jobs per market. The average worker belongs to a worker type with 20,896 workers and the median worker belongs to a worker type with 14,211 workers. The average job belongs to a market with 1,156 jobs and the median job belongs to a market with 1,127 jobs.

Figure 2: Worker Type ($\iota$) and Market ($\gamma$) Size Distributions. (a) Number of Workers Per Worker Type ($\iota$) (b) Number of Jobs Per Market ($\gamma$)

## 6 Results

### 6.1 Aggregate wage gap decomposition

Table 1 presents the results of performing gender wage decompositions using each of our two methods: OB and matching.
For each method, we have three specifications. The first, presented in columns (1) and (4), estimates counterfactual earnings distributions using a standard set of observable characteristics: experience, education, race, industry and union status. The second, presented in columns (2) and (5), estimates counterfactual earnings distributions using the worker types and markets identified by the SBM. The third specification, presented in columns (3) and (6), uses both standard observable characteristics and worker types and markets. The first row presents the overall wage gap: the average male worker earns 16.7 percent more than the average female worker in our sample. The second row presents the wage gap that would exist if male and female workers with the same productivity were paid equivalently but the observed differences between the distributions of male and female productivity — as proxied by observable characteristics and/or worker types and markets — remained: the composition component. The third row presents the wage gap that would exist if male and female workers had identical productivity distributions, but the observed earnings differences conditional on productivity remained: the structural component. The fourth and fifth rows present the wage gap explained by male and female workers outside the region of common support, respectively. For the OB method the composition and structural components add up to the overall wage gap; for the matching method the overall wage gap equals the sum of the composition and structural components and the components due to a lack of common support. The qualitative stories told by both the OB method and the matching method are similar. When we define counterfactual earnings using observable characteristics (columns 1 and 4), we find that if male and female workers with the same productivity were paid similarly, then female workers would significantly outearn male workers; this remaining gap is the composition component: 12.7% using the OB method and 8.8% using the matching method. By contrast, female workers would be paid significantly less if they possessed the males’ productivity distribution; this remaining gap is the structural component: 29.4% less using the OB method and 25.6% less using the matching method. When we define counterfactuals using worker types and markets instead of observable characteristics (columns 2 and 5), we find that the wage gap would nearly disappear if male and female workers with the same productivity were paid similarly. By contrast, the wage gap that would exist if male and female workers had the same productivity distribution — 17.9% according to OB and 17.8% according to matching — is almost equal to the overall wage gap of 16.7%. In other words, when we compute counterfactuals using worker types and markets we find that differential pay for similar productivity explains roughly the entire gender wage gap. This tells us that the results of gender wage gap decompositions are highly sensitive to the way in which we define counterfactuals. If, as we argue, worker types and markets do a better job of capturing the latent productivity of worker–job matches than do standard observable characteristics, then these results imply that gender wage gaps are almost entirely due to similarly productive male and female workers being paid differently, not male and female workers having different productivity distributions.
Columns (3) and (6) of Table 1 use both observable characteristics and worker types and markets to form counterfactuals for the gender wage gap decompositions. The OB method finds that female workers have covariates that imply they would outearn male workers if equally productive workers were paid equivalently, similar to the findings when we included only observable characteristics, not worker types and markets, in column (1). By contrast, the matching method finds that male workers’ covariates imply 3.4% higher earnings than female workers’ covariates and that male workers are paid 18.5% more than similarly productive female workers. Why do we observe a discrepancy between the OB and matching methods once we include both observable characteristics and worker types and markets? The answer lies in the final two rows of Table 1, which present the fraction of male and female workers, respectively, for whom we are able to find a counterfactual. Once we try to match workers on such a large set of variables, many workers cannot be matched, and a significant part of the gender wage gap occurs among such workers. The matching method allows us to take this into account, while the OB method simply makes a linear extrapolation. However, a linear extrapolation outside the region of common support is likely to lead to incorrect inferences. Furthermore, the fact that the matching estimator yields similar results when we use worker types and markets as it does when we use worker types, markets, and other observable characteristics, but not when we use other observables alone, implies that worker types and markets capture significant determinants of productivity, and that omitting them leads to incorrect inferences. This highlights the importance of using a sufficiently rich set of worker characteristics when estimating counterfactuals, and our method for identifying previously unobserved heterogeneity enhances our ability to do so. All of the results presented in this section correspond to the aggregate gender wage gap. In the next section, we consider heterogeneity in wage gaps within different subsets of the labor market.

Table 1: Gap decomposition using Oaxaca-Blinder vs Matching

| | Oaxaca-Blinder | | | Matching | | |
|---|---|---|---|---|---|---|
| | Observables (1) | $\iota\times\gamma$ (2) | Full model (3) | Observables (4) | $\iota\times\gamma$ (5) | Full model (6) |
| Gap | 0.167 | 0.167 | 0.167 | 0.167 | 0.167 | 0.167 |
| Composition | -0.127 | -0.011 | -0.084 | -0.088 | -0.006 | 0.034 |
| Structural | 0.294 | 0.179 | 0.250 | 0.256 | 0.178 | 0.185 |
| Males unmatched | - | - | - | 0.000 | -0.005 | -0.076 |
| Females unmatched | - | - | - | 0.000 | 0.000 | 0.024 |
| % of males matched | - | - | - | 1.00 | 0.98 | 0.57 |
| % of females matched | - | - | - | 1.00 | 0.99 | 0.74 |

### 6.2 Wage gaps within worker type–market cells

An appealing feature of our worker types and markets is that they allow us to further decompose gender wage gaps and to identify heterogeneity in gender wage gaps across the labor market. We do so by computing overall wage gaps, $\Delta$, and then decomposing them following the matching decomposition, within each worker type–market cell. For each worker type–market cell we decompose the overall wage gap (Row 1 of Table 1) into its four components: composition, structural, males unmatched, and females unmatched (Rows 2–5 of Table 1). Figure 3 presents kernel density plots of the resulting distributions of overall wage gaps and their four components. Several clear patterns emerge.
First, the overall wage gaps $\Delta$ are almost universally positive, meaning that male workers outearning their female counterparts is a widespread phenomenon. Specifically, 91% of workers are in clusters where males outearn females. Second, the distribution of the structural component, $\Delta_{0}$, is similar to the distribution of the overall wage gap. This suggests that the result from the aggregate decomposition in Section 6.1 — that almost the entire overall gender wage gap is explained by the structural component — holds within worker type–market cells as well. The fact that the structural component roughly coincides with the overall wage gap implies that the other three components — composition, males outside the common support, and females outside the common support — must contribute relatively little to the overall gender wage gap, which is confirmed by the fact that the distributions for these three components are centered close to zero and have low variances. We present the same results quantitatively in Table 2. Together, these results tell us that while there is significant variability in gender wage gaps across different worker type–market pairs, the overall qualitative pattern of male workers outearning their female counterparts, and almost all of this gap being explained by differential returns to the same skills rather than different skills, is true in the disaggregated results as well as the aggregated results.

Figure 3: Distribution of Components of Overall Wage Gap, Disaggregated

Table 2: Summary Statistics of Components of Overall Wage Gap, Disaggregated

| | mean | sd | min | max | count |
|---|---|---|---|---|---|
| $\Delta$ | 0.215 | 0.240 | -1.183 | 6.228 | 4791014 |
| $\Delta_{0}$ | 0.196 | 0.172 | -2.506 | 9.384 | 4791014 |
| $\Delta_{M}$ | 0.016 | 0.134 | -3.577 | 3.448 | 4783255 |
| $\Delta_{F}$ | -0.011 | 0.116 | -2.632 | 4.684 | 4724863 |
| $\Delta_{X}$ | 0.013 | 0.153 | -1.150 | 2.418 | 4791014 |
| Frac. Male Workers Matched | 0.766 | 0.238 | 0.004 | 1.000 | 4791014 |
| Frac. Female Workers Matched | 0.875 | 0.199 | 0.008 | 1.000 | 4791014 |
| Frac. Workers that Are Male | 0.617 | 0.162 | 0.037 | 0.999 | 4791014 |

## 7 Conclusion

In this paper we reconsider the wage gap decomposition literature and make three key contributions. First, we propose a new method for identifying unobserved determinants of workers’ earnings from the information revealed by detailed data on worker–job matching patterns. The method builds on Fogel and Modenesi (2023) and provides a blueprint for incorporating observable variables into the clustering algorithm, while also relaxing the assumption of perfect competition in labor markets. Second, we non-parametrically estimate counterfactual wage functions for male and female workers and use them to decompose gender wage gaps into a composition component, in which male and female workers earn different wages because they possess different skills and perform different tasks, and a structural component, in which male and female workers who possess similar skills and perform similar tasks nonetheless earn different wages. Third, we address the issue of male workers’ observable characteristics falling outside the support of female workers’ observable characteristics, and vice versa, by augmenting the wage decomposition with components attributable to male and female workers, respectively, outside the region of common support.
We apply these methods to Brazilian administrative data and find that almost the entire gender wage gap is attributable to male and female workers who possess similar skills and perform similar tasks being paid differently. This is true at the aggregate level, and it remains true when we perform wage decompositions within each worker type–market cell, indicating that this is a widespread phenomenon, not one driven by large wage differentials in small subsets of the labor market. We find that wage decompositions based on standard observable variables suffer from omitted variable bias, emphasizing the need for detailed worker and job characteristics in the form of worker types and markets. We find that wage decompositions based on linear regressions yield findings similar to those based on matching when a lack of common support is not an issue; however, when male and female workers’ characteristics do not share a common support, the matching estimator with corrections for a lack of common support outperforms the alternatives. While this paper focuses on gender wage gaps, the methods are applicable to other wage gaps, such as racial wage gaps. Moreover, our strategy for using worker–job matching patterns to control for previously unobserved, but potentially confounding, covariates may be applied in a wide variety of contexts.

## References

* Acemoglu and Autor (2011) Acemoglu, Daron and David Autor, “Skills, tasks and technologies: Implications for employment and earnings,” 2011, 4, 1043–1171.
* Autor (2013) Autor, David H., “The ‘task approach’ to labor markets: an overview,” 2013.
* Autor et al. (2003) Autor, David H., Frank Levy, and Richard J. Murnane, “The Skill Content of Recent Technological Change: An Empirical Exploration,” The Quarterly Journal of Economics, 2003, 118 (4), 1279–1333.
* Barsky et al. (2002) Barsky, Robert, John Bound, Kerwin Kofi Charles, and Joseph P. Lupton, “Accounting for the Black-White Wealth Gap: A Nonparametric Approach,” Journal of the American Statistical Association, 2002, 97 (459), 663–673.
* Blinder (1973) Blinder, Alan S., “Wage Discrimination: Reduced Form and Structural Estimates,” The Journal of Human Resources, 1973, 8 (4), 436–455.
* Card et al. (2015) Card, David, Ana Rute Cardoso, and Patrick Kline, “Bargaining, Sorting, and the Gender Wage Gap: Quantifying the Impact of Firms on the Relative Pay of Women,” The Quarterly Journal of Economics, October 2015, 131 (2), 633–686.
* Card et al. (2018) Card, David, Ana Rute Cardoso, Joerg Heining, and Patrick Kline, “Firms and Labor Market Inequality: Evidence and Some Theory,” Journal of Labor Economics, 2018, 36 (S1), S13–S70.
* Chernozhukov et al. (2013) Chernozhukov, Victor, Iván Fernández-Val, and Blaise Melly, “Inference on Counterfactual Distributions,” Econometrica, 2013, 81 (6), 2205–2268.
* DiNardo et al. (1996) DiNardo, John, Nicole M. Fortin, and Thomas Lemieux, “Labor Market Institutions and the Distribution of Wages, 1973-1992: A Semiparametric Approach,” Econometrica, 1996, 64 (5), 1001–1044.
* Firpo et al. (2018) Firpo, Sergio P., Nicole M. Fortin, and Thomas Lemieux, “Decomposing Wage Distributions Using Recentered Influence Function Regressions,” Econometrics, May 2018, 6 (2), 1–40.
* Fogel and Modenesi (2023) Fogel, Jamie and Bernardo Modenesi, “What is a Labor Market? Classifying Workers and Jobs Using Network Theory,” 2023.
* Fortin et al.
(2011) Fortin, Nicole, Thomas Lemieux, and Sergio Firpo, “Chapter 1 - Decomposition Methods in Economics,” in Orley Ashenfelter and David Card, eds., Orley Ashenfelter and David Card, eds., Vol. 4 of Handbook of Labor Economics, Elsevier, 2011, pp. 1–102. * Garcia et al. (2009) Garcia, Luana Marquez, Hugo Nopo, and Paola Salardi, “Gender and Racial Wage Gaps in Brazil 1996-2006: Evidence Using a Matching Comparisons Approach,” Research Department Publications 4626, Inter-American Development Bank, Research Department May 2009. * Gerard et al. (2018) Gerard, François, Lorenzo Lagos, Edson Severnini, and David Card, “Assortative Matching or Exclusionary Hiring? The Impact of Firm Policies on Racial Wage Differences in Brazil,” Working Paper 25176, National Bureau of Economic Research October 2018. * Goldin (2014) Goldin, Claudia, “A Grand Gender Convergence: Its Last Chapter,” American Economic Review, April 2014, 104 (4), 1091–1119. * Hurst et al. (2021) Hurst, Erik, Yona Rubinstein, and Kazuatsu Shimizu, “Task-Based Discrimination,” Working Paper 29022, National Bureau of Economic Research July 2021. * Jarosch et al. (2019) Jarosch, Gregor, Jan Sebastian Nimczik, and Isaac Sorkin, “Granular search, market structure, and wages,” Technical Report, National Bureau of Economic Research 2019. * Kantenga (2018) Kantenga, Kory, “The effect of job-polarizing skill demands on the US wage structure,” 2018. * Karrer and Newman (2011) Karrer, Brian and Mark EJ Newman, “Stochastic blockmodels and community structure in networks,” Physical review E, 2011, 83 (1), 016107. * Larremore et al. (2014) Larremore, Daniel B, Aaron Clauset, and Abigail Z Jacobs, “Efficiently inferring community structure in bipartite networks,” Physical Review E, 2014, 90 (1), 012805. * Lindenlaub (2017) Lindenlaub, Ilse, “Sorting multidimensional types: Theory and application,” The Review of Economic Studies, 2017, 84 (2), 718–789. * McFadden (1978) McFadden, Daniel, “Modeling the choice of residential location,” Transportation Research Record, 1978, (673). * Modenesi (2022) Modenesi, Bernardo, “Advancing Distribution Decomposition Methods Beyond Common Supports: Applications to Racial Wealth Disparities,” 2022. * Morello and Anjolim (2021) Morello, Thiago and Jacqueline Anjolim, “Gender wage discrimination in Brazil from 1996 to 2015: A matching analysis,” EconomiA, 2021. * Nimczik (2018) Nimczik, Jan Sebastian, “Job Mobility Networks and Endogenous Labor Markets,” 2018. * Ñopo (2008) Ñopo, Hugo, “Matching as a Tool to Decompose Wage Gaps,” The Review of Economics and Statistics, 2008, 90 (2), 290–299. * Oaxaca (1973) Oaxaca, Ronald, “Male-Female Wage Differentials in Urban Labor Markets,” International Economic Review, 1973, 14 (3), 693–709. * Peixoto (2014) Peixoto, Tiago P, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models,” Physical Review E, 2014, 89 (1), 012804. * Peixoto (2017) , “Nonparametric Bayesian inference of the microcanonical stochastic block model,” Physical Review E, 2017, 95 (1), 012317\. * Peixoto (2018) Peixoto, Tiago P., “Nonparametric weighted stochastic block models,” Phys. Rev. E, Jan 2018, 97, 012306. * Peixoto (2019) Peixoto, Tiago P, “Bayesian stochastic blockmodeling,” Advances in network clustering and blockmodeling, 2019, pp. 289–332. * Roy (1951) Roy, Andrew Donald, “Some thoughts on the distribution of earnings,” Oxford economic papers, 1951, 3 (2), 135–146. 
* Sorkin (2018) Sorkin, Isaac, “Ranking firms using revealed preference,” The quarterly journal of economics, 2018, 133 (3), 1331–1393. * Tan (2018) Tan, Joanne, “Multidimensional heterogeneity and matching in a frictional labor market - An application to polarization,” 2018. * Train (2010) Train, Kenneth E, Discrete choice methods with simulation, Cambridge university press, 2010. ## Appendix A Nested Logit Choice Probability According to Train (2010), and originally developed by McFadden (1978), maximizing the utility choosing $j$, which is nested within a group $\gamma$ $j^{*}:=\arg\max_{j}\quad W_{\gamma}+Y_{j}+\varepsilon_{j}$ (13) with $\varepsilon_{j}\sim NestedLogit(\nu_{\gamma})$ results in the following choice probability: $\displaystyle P(j=j^{*})$ $\displaystyle=P(\text{Choose }\gamma)P(j=j^{*}|\gamma)$ $\displaystyle=\frac{\exp(W_{\gamma}+\nu_{\gamma}I_{\gamma})}{\sum_{\gamma}\exp(W_{\gamma}+\nu_{\gamma}I_{\gamma})}\frac{\exp(Y_{j})^{\frac{1}{\nu_{\gamma}}}}{\sum_{j\in\gamma}\exp(Y_{j})^{\frac{1}{\nu_{\gamma}}}}$ where $I_{\gamma}=\log\left(\sum_{j\in\gamma}\exp(Y_{j})^{\frac{1}{\nu_{\gamma}}}\right)$. Our problem is similar, with workers choosing job $j$ within a market $\gamma$ in order to maximize the sum of log earnings $\log(\psi_{\iota\gamma}w_{j}^{g})$ and an idiosyncratic preference for job $j$, $\varepsilon_{ij}^{g}$: $\displaystyle j^{*}=$ $\displaystyle\arg\max_{j}\quad\log(\psi_{\iota\gamma}w_{j}^{g})+\varepsilon_{ij}^{g}.$ (14) We also assume that $\varepsilon_{ij}^{g}\sim NestedLogit(\nu_{\gamma}^{g})$. One of the differences from our setup to what is covered by Train (2010) is that we add extra worker indexes $\iota$ for her/his skills and $g$ for her gender and we condition our probabilities on knowing $\iota$ and $g$. Notice that when comparing equations 13 and 14, $W_{\gamma}=0$ and $Y_{j}=\log(\psi_{\iota\gamma}w_{j}^{g})$, which results in the following choice probabilities: $\displaystyle P(j=j^{*}|j\in\gamma,i\in\iota,g)$ $\displaystyle=P(\gamma=\gamma^{*}|i\in\iota,j\in\gamma,g)P(j=j^{*}|i\in\iota,j\in\gamma,\gamma=\gamma^{*},g)$ $\displaystyle=\frac{\exp(\nu_{\gamma}^{g}I_{\iota\gamma}^{g})}{\sum_{\gamma}\exp(\nu_{\gamma}^{g}I_{\iota\gamma}^{g})}\frac{\exp(\log(\psi_{\iota\gamma}w_{j}^{g}))^{\frac{1}{\nu_{\gamma}^{g}}}}{\sum_{j\in\gamma}\exp(\log(\psi_{\iota\gamma}w_{j}^{g}))^{\frac{1}{\nu_{\gamma}^{g}}}}\quad\text{\tiny(plugging objects in)}$ $\displaystyle=\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\frac{(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}{\sum_{j\in\gamma}(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}\quad\text{\tiny(similar to equation \ref{eq_worker_choice})}$ $\displaystyle=\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\frac{(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}{\exp(I_{\iota\gamma}^{g})}\quad\text{\tiny(by definition of $I_{\iota\gamma}^{g}$)}$ $\displaystyle=\underset{\underset{\iota-\gamma-g\text{ component}\quad}{\underbrace{=:\Omega_{\iota\gamma}^{g}}}}{\underbrace{\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}-1}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\psi_{\iota\gamma}^{\frac{1}{\nu_{\gamma}^{g}}}}}\underset{\underset{\quad j-g\text{ 
component}}{\underbrace{=:d_{j}^{g}}}}{\underbrace{\vphantom{\frac{\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}-1}}{\sum_{\gamma}\exp(I_{\iota\gamma}^{g})^{\nu_{\gamma}^{g}}}\psi_{\iota\gamma}^{\frac{1}{\nu_{\gamma}^{g}}}}(w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}}}\quad\text{\tiny(similar to equation \ref{eq_worker_choice_separation})}$ where $I_{\iota\gamma}^{g}=\log\left[\sum_{j\in\gamma}\exp(\log(\psi_{\iota\gamma}w_{j}^{g}))^{\frac{1}{\nu_{\gamma}^{g}}}\right]=\log\left[\sum_{j\in\gamma}(\psi_{\iota\gamma}w_{j}^{g})^{\frac{1}{\nu_{\gamma}^{g}}}\right]$. ## Appendix B Terms in the NP decomposition The terms in the NP decomposition from equation 12 can be more formally defined as follows: $\displaystyle\Delta_{M}:=\left[\int_{\bar{S}_{F}}Y_{1}(x)\frac{dF_{M}(x)}{\mu_{M}(\bar{S}_{F})}-\int_{S_{F}}Y_{1}(x)\frac{dF_{M}(x)}{\mu_{M}(S_{F})}\right]\mu_{M}(\bar{S}_{F})$ (15) $\displaystyle\Delta_{X}:=\int_{S_{M}\cap S_{F}}Y_{1}(x)\left[\frac{dF_{M}(x)}{\mu_{M}(S_{F})}-\frac{dF_{F}(x)}{\mu_{F}(S_{M})}\right]$ $\displaystyle\Delta_{0}:=\int_{S_{M}\cap S_{F}}\left[Y_{1}(x)-Y_{0}(x)\right]\frac{dF_{F}(x)}{\mu_{F}(S_{M})}$ $\displaystyle\Delta_{F}:=\left[\int_{S_{M}}Y_{0}(x)\frac{dF_{F}(x)}{\mu_{F}(S_{M})}-\int_{\bar{S}_{M}}Y_{0}(x)\frac{dF_{F}(x)}{\mu_{F}(\bar{S}_{M})}\right]\mu_{F}(\bar{S}_{M})$ where: $F_{M}(x)$ and $F_{F}(x)$ denote the distributions of $x$ for both males and females, respectively; $\mu_{M}$ and $\mu_{F}$ measure the proportions of males and females over regions of the supports of $x$; and the support of $x$ for a gender $g$, $supp_{(}X_{g})$, is partitioned as $supp_{(}X_{g}):=S_{g}\cup\bar{S}_{g}$, with $S_{g}\cap\bar{S}_{g}=\emptyset$, for $g\in\\{F,M\\}$. ## Appendix C Proof that $A_{ij}$ follows a Poisson distribution If an individual worker $i$ only searched for a job once, then the probability of worker $i$ matching with job $j$ would be equal to $\mathbb{P}_{ij}=\mathcal{P}_{\iota\gamma}d_{j}$ and $A_{ij}$ would follow a Bernoulli distribution: $A_{ij}\sim Bernoulli(\mathcal{P}_{\iota\gamma}d_{j}).$ However, since worker $i$ searches for jobs $c_{i}\equiv\sum_{t=1}^{T}c_{it}$ times, $A_{ij}$ is actually the sum of $c_{i}$ Bernoulli random variables, and is therefore a Binomial random variable. Conditional on knowing $c_{i}$, $A_{ij}|c_{i}\sim Binomial(c_{i},\mathcal{P}_{\iota\gamma}d_{j}).$ However, we still need to take into account the fact that $c_{i}$ is a Poisson-distributed random variable with arrival rate $d_{i}$. Consequently, the unconditional distribution of $A_{ij}$ is Poisson as well: $A_{ij}\sim Poisson(d_{i}d_{j}\mathcal{P}_{\iota\gamma}).$ We prove this fact by multiplying the conditional density of $A_{ij}|c_{i}$ by the marginal density of $c_{i}$ to get the joint density of $A_{ij}$ and $c_{i}$, and then integrating out $c_{i}$. 
$\displaystyle P(A_{ij},c_{i})=\underset{Bin(c_{i},d_{j}P_{\iota\gamma})}{\underbrace{P(A_{ij}|c_{i})}}\quad\times\quad\underset{Poisson(d_{i})}{\underbrace{P(c_{i})}}$

Deriving the joint distribution:

$\displaystyle P(A_{ij},c_{i})=$ $\displaystyle\binom{c_{i}}{A_{ij}}(d_{j}P_{\iota\gamma})^{A_{ij}}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}\times\frac{d_{i}^{c_{i}}\exp(-d_{i})}{c_{i}!}$

We want to find the marginal distribution of $A_{ij}$:

$\displaystyle P(A_{ij})$ $\displaystyle=\sum_{c_{i}=0}^{\infty}P(A_{ij},c_{i})$ $\displaystyle=\sum_{c_{i}=0}^{\infty}\binom{c_{i}}{A_{ij}}(d_{j}P_{\iota\gamma})^{A_{ij}}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}\times\frac{d_{i}^{c_{i}}\exp(-d_{i})}{c_{i}!}$ $\displaystyle=\sum_{c_{i}=0}^{\infty}\frac{c_{i}!}{A_{ij}!(c_{i}-A_{ij})!}(d_{j}P_{\iota\gamma})^{A_{ij}}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}\times\frac{d_{i}^{c_{i}}\exp(-d_{i})}{c_{i}!}$ $\displaystyle=\frac{(d_{j}P_{\iota\gamma})^{A_{ij}}\exp(-d_{i})}{A_{ij}!}\sum_{c_{i}=0}^{\infty}\frac{1}{(c_{i}-A_{ij})!}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}d_{i}^{c_{i}}$

If the summation term satisfies

$\sum_{c_{i}=0}^{\infty}\frac{1}{(c_{i}-A_{ij})!}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}d_{i}^{c_{i}}=d_{i}^{A_{ij}}\exp(d_{i}(1-d_{j}P_{\iota\gamma}))$ (16)

then $P(A_{ij})=\frac{(d_{i}d_{j}P_{\iota\gamma})^{A_{ij}}\exp(-d_{i}d_{j}P_{\iota\gamma})}{A_{ij}!}$, i.e. $A_{ij}$ would be Poisson distributed: $A_{ij}\sim Poisson(d_{i}d_{j}P_{\iota\gamma})$. Proving (16) is equivalent to proving the following equality:

$\displaystyle 1=$ $\displaystyle\frac{1}{d_{i}^{A_{ij}}\exp(d_{i}(1-d_{j}P_{\iota\gamma}))}\sum_{c_{i}=0}^{\infty}\frac{1}{(c_{i}-A_{ij})!}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}d_{i}^{c_{i}}$

Proof:

$\displaystyle d_{i}^{-A_{ij}}\exp(-d_{i}(1-d_{j}P_{\iota\gamma}))\sum_{c_{i}=0}^{\infty}\frac{1}{(c_{i}-A_{ij})!}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}d_{i}^{c_{i}}$ $\displaystyle=\sum_{c_{i}=0}^{\infty}\frac{\exp(-d_{i}(1-d_{j}P_{\iota\gamma}))}{(c_{i}-A_{ij})!}(1-d_{j}P_{\iota\gamma})^{c_{i}-A_{ij}}d_{i}^{c_{i}-A_{ij}}$ $\displaystyle=\sum_{c_{i}=0}^{\infty}\frac{\exp(-d_{i}(1-d_{j}P_{\iota\gamma}))}{(c_{i}-A_{ij})!}(d_{i}(1-d_{j}P_{\iota\gamma}))^{c_{i}-A_{ij}}$

We define $\lambda=d_{i}(1-d_{j}P_{\iota\gamma})$ for simplicity and apply the change of variables $z=c_{i}-A_{ij}$ (noting that in our problem $c_{i}\geq A_{ij}$, i.e. $z\geq 0$):

$\displaystyle=\sum_{z=0}^{\infty}\frac{\exp(-\lambda)}{z!}\lambda^{z}=1,$

since the summand is the p.m.f. of a Poisson random variable, i.e. $z\sim Poisson(\lambda)$. $\square$ Therefore, we have $A_{ij}\sim Poisson(d_{i}d_{j}P_{\iota\gamma})\qed$

## Appendix D Soft assignment of workers and jobs to worker types and markets

In Section 3, at the maximum of our posterior in equation (10), each worker is assigned to exactly one skill cluster, a process of hard assignment. However, it is possible that, given the pattern of worker matches, a particular worker could be revealed to possess certain skills $\iota_{1}$ in most of her matches, and skills $\iota_{2}$ in a few others. Creating a single worker skill group to accommodate her hybrid skills might not improve model fit if there are only a few workers who exhibit similar matches. Instead, allowing her to have mixed skills $\iota_{1}$ and $\iota_{2}$, i.e. soft assignment, with weights according to her matching history, provides more nuanced information to the researcher. We propose using the Bayesian setup to recover these weights.
It turns out that the posterior $P(\bm{b}|\bm{A},\bm{g})$ ultimately carries the measure of workers’ skill profiles needed to control for workers’ unobserved skills in the wage gap estimation. Given a total of $I$ clusters of workers competing for the same jobs in the labor market network, i.e. workers with similar skills, the posterior distribution provides the probability that each worker belongs to a given skill cluster, conditional on the workers’ demographic groups $\bm{g}$ and the entire network $\bm{A}$. More formally, for worker $i$, her skill profile is defined as:

$\displaystyle\vec{P}_{i}:=\left[P(i\in\iota_{1}|\bm{A},\bm{g})\qquad P(i\in\iota_{2}|\bm{A},\bm{g})\qquad\cdots\qquad P(i\in\iota_{I}|\bm{A},\bm{g})\right]^{T}$ (17)
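One simple way to approximate the vector in equation (17) in practice is from the MCMC output itself: the posterior marginal probability that worker $i$ belongs to cluster $\iota$ can be estimated by the share of posterior draws of $\bm{b}$ that place her there. The sketch below assumes the sampler returns a list of draws of the assignment vector; this is only one possible implementation, not necessarily the procedure used in the paper.

```python
import numpy as np

def skill_profile(b_samples, worker_index, n_types):
    """Soft assignment as in equation (17): the share of posterior draws of b
    that place worker `worker_index` in each skill cluster iota."""
    draws = np.asarray([b[worker_index] for b in b_samples])
    return np.bincount(draws, minlength=n_types) / len(draws)
```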
# Regret Analysis in Threshold Policy Design

Federico Crippa, Department of Economics, Northwestern University, <EMAIL_ADDRESS>

(April 17, 2024)

Acknowledgments: I am grateful to Ivan Canay, Joel Horowitz and Charles Manski for their guidance in this project. I am also thankful to Anastasiia Evdokimova, Amilcar Velez and all the participants of the Econometric Reading Group at Northwestern University for their comments and suggestions.

###### Abstract

Threshold policies are targeting mechanisms that assign treatments based on whether an observable characteristic exceeds a certain threshold. They are widespread across multiple domains, such as welfare programs, taxation, and clinical medicine. This paper addresses the problem of designing threshold policies using experimental data, when the goal is to maximize population welfare. First, I characterize the regret (a measure of policy optimality) of the Empirical Welfare Maximizer (EWM) policy, which is popular in the literature. Next, I introduce the Smoothed Welfare Maximizer (SWM) policy, which improves the EWM’s regret convergence rate under an additional smoothness condition. The two policies are compared by studying how differently their regrets depend on the population distribution and by investigating their finite-sample performance through Monte Carlo simulations. In many contexts, the welfare guaranteed by the novel SWM policy is larger than with the EWM. An empirical illustration demonstrates how the treatment recommendations of the two policies may differ notably in practice.

Keywords: Threshold policies, heterogeneous treatment effects, statistical decision theory, randomized experiments.

JEL classification codes: C14, C44

## 1 Introduction

Treatments are rarely universally assigned. When their effects are heterogeneous across individuals, policymakers aim to target those who would benefit the most from specific interventions. Scholarships, for example, are awarded to students with high academic performance or financial need; tax credits are provided to companies engaged in research and development activities; medical treatments are prescribed to sick patients. Despite the potential complexity and multidimensionality of heterogeneous treatment effects, treatment eligibility criteria are often kept quite simple. This paper studies one of the most common of these simple assignment mechanisms: threshold policies, where the decision to assign the treatment is based on whether a scalar observable characteristic, referred to as the index, exceeds a specified threshold. Threshold policies are ubiquitous, ranging across multiple domains. In welfare policies, they regulate qualification for public health insurance programs through age (Card et al., 2008; Shigeoka, 2014) and for anti-poverty programs through income (Crost et al., 2014). In taxation, they determine marginal rates through income brackets (Taylor, 2003). In clinical medicine, the referral for liver transplantation depends on whether a composite of laboratory values obtained from blood tests is beyond a certain threshold (Kamath and Kim, 2007). Even criminal offenses are defined through threshold policies: sanctions for Driving Under the Influence are based on whether the Blood Alcohol Content exceeds specific values. The regression discontinuity design has been developed to study the treatment effect at the point of discontinuity of threshold policies: its popularity in econometrics and applied economics is a further indication of how widespread threshold policies are.
The regression discontinuity design, however, focuses on ex-post evaluation. In this paper, my perspective is different: I consider the ex-ante problem faced by a policymaker wanting to implement a threshold policy and interested in maximizing the average social welfare, targeting individuals who would benefit from the treatment. Experimental data are available: how should they be used to implement the threshold policy in the population? Answering this question requires defining a criterion by which policies are evaluated. Since the performance of a policy depends on the unknown data distribution, the policy maker searches for a policy that behaves uniformly well across a specified family of data distributions (the state space). The regret of a policy is the (possibly) random difference between the maximum achievable welfare and the welfare it generates in the population. Policies can be evaluated considering their maximum expected regret (Manski, 2004; Hirano and Porter, 2009; Kitagawa and Tetenov, 2018; Athey and Wager, 2021; Mbakop and Tabord-Meehan, 2021; Manski, 2023), or other worst-case statistics of the regret distribution (Manski and Tetenov, 2023; Kitagawa et al., 2022). Once the criterion has been established, optimal policy learning aims to pinpoint the policy that minimizes it. Rather than directly tackling the functional minimization problem, following the literature, I consider candidate threshold policy functions and characterize some properties of their regret. The first contribution of this paper is to show how to derive the asymptotic distribution of the regret for a given threshold policy. The underlying intuition is simple: threshold policies use sample data to choose the threshold, which is hence a random variable with a certain asymptotic behavior. A Taylor expansion establishes a map between the regret of a policy and its threshold, allowing one to characterize the asymptotic distribution of the regret through the asymptotic behavior of the threshold. This shifts the problem to characterizing the asymptotics of the threshold estimator, which simplifies the analysis, as threshold estimators can be studied with common econometric tools. I start by considering the Empirical Welfare Maximizer (EWM) policy studied by Kitagawa and Tetenov (2018). They derive uniform bounds for the expected regret of the policy for various policy classes, where the policy class impacts the findings only in terms of its VC dimensionality. My approach is more specific, considering only threshold policies, but also more informative: leveraging the knowledge of the policy class, I characterize the entire asymptotic distribution of the regret. As mentioned above, this requires the derivation of the asymptotic distribution of the threshold for the EWM policy, which is non-standard: it exhibits the “cube root asymptotics” behavior studied in Kim and Pollard (1990). The convergence rate is $n^{1/3}$, and the asymptotic distribution is of the Chernoff (1964) type. The non-standard behavior and the unusual convergence rate are due to the discontinuity in the objective function and are reflected in the asymptotic distribution of the regret, and in its $n^{2/3}$ convergence rate. My second contribution is hence to propose a novel threshold policy, the Smoothed Welfare Maximizer (SWM) policy. This policy replaces the indicator function in the EWM policy’s objective function with a smooth kernel.
Under certain regularity assumptions, the threshold estimator for the SWM policy is asymptotically normal and its regret achieves an $n^{4/5}$ convergence rate. This implies that the regret’s convergence rate with the SWM is faster than with the EWM policy. Building on these asymptotic results, I extend the comparison of the regrets with the EWM and the SWM policies beyond their convergence rates. My findings make it possible to compare the asymptotic distributions and to investigate how differently they depend on the data distribution; the theoretical results inform and guide the Monte Carlo simulations, which confirm that the asymptotic results approximate the actual finite sample behaviors. Notably, the simulations confirm that the SWM policy may guarantee lower expected regret in finite samples. To demonstrate the practical differences between the two policies, I present an empirical illustration considering a job-training treatment. In that context, the SWM threshold policy would recommend treating 79% of unemployed workers, as opposed to 83% with the EWM policy. This difference of 4 percentage points is economically non-negligible.
### 1.1 Related Literature
This paper relates to the statistical decision theory literature studying the problem of policy assignment with covariates (Manski, 2004; Stoye, 2012; Kitagawa and Tetenov, 2018; Athey and Wager, 2021; Mbakop and Tabord-Meehan, 2021; Sun et al., 2021; Sun, 2021; Viviano and Bradic, 2023). My setting is mainly related to Kitagawa and Tetenov (2018) and Athey and Wager (2021), with some notable differences. Kitagawa and Tetenov (2018) study the EWM policy for policy classes with finite VC dimension. They derive finite sample bounds for regret without relying on smoothness assumptions. Athey and Wager (2021) consider a doubly robust version of the EWM and allow for observational data. Under smoothness assumptions analogous to mine, they derive asymptotic bounds for the expected regret for policy classes with finite VC dimension. Conversely, the results in this paper apply exclusively to threshold policies, relying on a combination of the assumptions in Kitagawa and Tetenov (2018) and Athey and Wager (2021). The narrower focus allows for more comprehensive results: the entire asymptotic distribution of the regret is derived, rather than only bounds for the expected regret. Another critical distinction lies in the different nature of the uniform convergence rates: my state space is a subset of theirs, which is why the rates I derive for the EWM and the SWM policies are faster than the $\sqrt{n}$ rate reported as optimal by Kitagawa and Tetenov (2018) and Athey and Wager (2021). Their $\sqrt{n}$ rate is, in fact, uniformly valid for a family of data distributions that, at least for threshold policies, includes extreme cases (e.g. a discontinuous conditional ATE), which determine the slower rate of convergence. Their uniform results may be viewed as a benchmark: when more structure is added to the problem and some distributions in the family are excluded, they can be improved. Optimal policy learning finds its empirical counterpart in the literature dealing with targeting, especially common in development economics. Recent studies rely on experimental evidence to decide whom to treat in a data-driven way (Hussam et al., 2022; Aiken et al., 2022), even if Haushofer et al. (2022) pointed out the need for a more formalized approach to the targeting decision problem.
Providing the policymaker with appropriate tools for using the data in the decision process is probably necessary to guarantee a broader adoption of data-driven targeting strategies. Focusing on threshold policies, this paper explicitly formulates the decision problem, introduces implementable policies (the EWM and the SWM policy), and compares their asymptotic properties. Turning to the threshold estimators, I already mentioned that the EWM policy exhibits the cube root of $n$ asymptotics studied by Kim and Pollard (1990), distinctive of several estimators in different contexts. Noteworthy examples are the maximum score estimator in choice models (Manski, 1975), the split point estimator in decision trees (Banerjee and McKeague, 2007), and the risk minimizer in classification problems (Mohammadi et al., 2005), among others. Specific to my analysis is the emergence of the cube root asymptotics within a causal inference problem relying on the potential outcomes model, which is then mirrored in the regret’s asymptotic distribution. Addressing the cube root problem by smoothing the indicator in the objective function aligns closely with the strategy proposed by Horowitz (1992) for studying the asymptotic behavior of the maximum score estimator. The objective functions are nonetheless different, and in my context, I derive the asymptotic distribution for both the unsmoothed (EWM) and the smoothed (SWM) policies. This is convenient, as it allows me to compare not only the convergence rates but also the entire asymptotic distributions of the estimators and their regrets, and to study the asymptotic approximations in Monte Carlo simulations. The rest of the paper is structured as follows. Section 2 introduces the problem and outlines my analytical approach. Section 3 derives formal results for the asymptotic behavior of the EWM and SWM policies and their regrets. In Section 4, I investigate the finite sample performance of the EWM and SWM policies through Monte Carlo simulations, while in Section 5 I consider the analysis of experimental data from the National Job Training Partnership Act (JTPA) Study to compare the practical implications of the policies. Section 6 concludes.
## 2 Overview of the Problem
I consider the problem of a policymaker who wants to implement a binary treatment in a population of interest. Individuals are characterized by a vector of observable characteristics $\textbf{X}\in\mathbb{R}^{d}$, on which the policymaker bases the treatment assignment choice. A policy is hence a map $\pi(\textbf{x}):\mathbb{R}^{d}\rightarrow\\{0,1\\}$, from observable characteristics to the binary treatment status. The policymaker is utilitarian: the goal is to maximize the average welfare in the population. Indicating by $Y_{1}$ and $Y_{0}$ the potential outcomes with and without the treatment, the population average welfare generated by a policy $\pi$ can be written as $\displaystyle W(\pi)=\mathbb{E}[Y_{1}\pi(X)+Y_{0}(1-\pi(X))].$ (1) When treatment effects are heterogeneous, the same treatment can have opposite average effects across individuals with different X. For this reason, the policy assignment may vary with X: the policymaker wants to target only those who benefit from being treated, to maximize the average welfare. The policy learning literature has considered several classes $\Pi$ of policy functions, such as, for example, linear eligibility indexes, decision trees, or monotone rules, discussed in Kitagawa and Tetenov (2018), Athey and Wager (2021), and Mbakop and Tabord-Meehan (2021).
This paper focuses on threshold policies, a specific class of policy functions that can be represented as $\displaystyle\pi(\textbf{X})=\pi(X,t)=\mathbf{1}\\{X>t\\}.$ (2) Treatment is assigned whenever the scalar observable index $X\in\mathbb{R}$ exceeds a threshold $t$, a parameter that must be chosen. These threshold policies are widespread: they regulate, among others, organ transplants (Kamath and Kim, 2007), taxation (Taylor, 2003), and access to social welfare programs (Card et al., 2008; Crost et al., 2014). This paper does not aim to justify or advocate for the use of threshold policies, and the restriction to this policy class is taken as exogenous. I will focus on the case in which the index $X$ is chosen before the experiment. Population welfare depends only on the threshold $t$, and can be written as $\displaystyle W(\pi)=W(t)=\mathbb{E}[Y_{1}\mathbf{1}\\{X>t\\}+Y_{0}\mathbf{1}\\{X\leq t\\}].$ (3) Choosing the policy is equivalent to choosing the threshold. If the joint distribution of $Y_{1}$, $Y_{0}$, and $X$ were known, the policy maker would implement the policy with threshold $t^{*}$ defined as: $\displaystyle t^{*}\in\mathop{\rm arg~{}max}\limits_{t}\mathbb{E}[Y_{1}\mathbf{1}\\{X>t\\}+Y_{0}\mathbf{1}\\{X\leq t\\}]$ (4) which would guarantee the maximum achievable welfare $W(t^{*})$. The problem described in equation (4) is infeasible since the joint distribution of $Y_{1}$, $Y_{0}$, and $X$ is unknown. The policy maker observes an experimental sample $Z=\\{Z_{i}\\}_{i=1}^{n}=\\{Y_{i},D_{i},X_{i}\\}$, where $Y$ is the outcome of interest, $D$ the randomly assigned treatment status, and $X$ the policy index. Experimental data, which allow identification of the conditional average treatment effect, are used to learn the threshold policy $\hat{t}_{n}=\hat{t}_{n}(Z)$, a function of the observed sample. Statistical decision theory deals with the problem of choosing the map $\hat{t}_{n}$. First, it is necessary to specify the decision problem the policymaker faces. For any threshold policy $\hat{t}_{n}$, define the regret $\mathcal{R}(\hat{t}_{n})$: $\displaystyle\mathcal{R}(\hat{t}_{n})=W(t^{*})-W(\hat{t}_{n}),$ (5) a measure of welfare loss indicating the suboptimality of policy $\hat{t}_{n}$. The regret depends on the unknown data distribution: the policymaker specifies a state space, and searches for a policy that does well uniformly for all the data distributions in the state space. Following Manski (2004), statistical decision theory has mainly focused on the problem of minimizing the maximum expected regret, looking for a policy $\hat{t}_{n}$ that does uniformly well on average across repeated samples. Directly solving the constrained minimization problem of the functional $\sup\mathbb{E}[\mathcal{R}(\hat{t}_{n})]$ is impractical: the literature instead focuses on considering a specific policy map and studying its properties, for example showing its rate optimality, through finite sample valid (Kitagawa and Tetenov, 2018) or asymptotic (Athey and Wager, 2021) arguments. Following this approach, I characterize and compare some properties of the regret of two different threshold policies, the Empirical Welfare Maximizer (EWM) policy, commonly studied in the literature, and the novel Smoothed Welfare Maximizer (SWM) policy. Kitagawa and Tetenov (2018) derive bounds for the expected regret of the EWM policy for a wide range of policy function classes. In their results, the policy class $\Pi$ affects the bounds only through its VC dimension, and the knowledge of $\Pi$ is not further exploited.
Conversely, I leverage the additional structure from the knowledge of the policy class and characterize the entire asymptotic distribution of the regret for the EWM and the SWM threshold policies, comparing how their regrets depend on the data distribution. My results could hence be of interest also when decision problems not involving the expected regret are considered, as in manski2023statistical and kitagawa2022treatment: I characterize the asymptotic behavior of regret quantiles, and the asymptotic distributions can be used to simulate expectations of their non-linear functions. To derive my results, I take advantage of the link between a threshold policy function $\hat{t}_{n}$ and its regret $\mathcal{R}(\hat{t}_{n})$. Let $\\{r_{n}\\}$ be a sequence such that $r_{n}\rightarrow\infty$ for $n\rightarrow\infty$, and suppose that $r_{n}(\hat{t}_{n}-t^{*})$ converges to a non degenerate limiting distribution, i.e $(\hat{t}_{n}-t^{*})=O_{p}(r_{n}^{-1})$. Assume function $W(t)$ to be twice continuously differentiable, and consider its second-order Taylor expansion around $t^{*}$: $\displaystyle W(\hat{t}_{n})=W(t^{*})+\underbrace{W^{\prime}(t^{*})}_{=0}\left(\hat{t}_{n}-t^{*}\right)+\frac{1}{2}W^{\prime\prime}(\tilde{t})\left(\hat{t}_{n}-t^{*}\right)^{2}$ where $|\tilde{t}-t^{*}|\leq|\hat{t}_{n}-t^{*}|$, and $W^{\prime}(t^{*})=0$ by optimality of $t^{*}$. The previous equation can be written as $\displaystyle r_{n}^{2}\mathcal{R}(\hat{t}_{n})=\frac{1}{2}W^{\prime\prime}(\tilde{t})\left[r_{n}\left(\hat{t}_{n}-t^{*}\right)\right]^{2},$ (6) establishing a relationship between the convergence rates of $\hat{t}_{n}$ and $\mathcal{R}(\hat{t}_{n})$, and between their asymptotic distributions. Equation (6) therefore shows how the rate of convergence and the asymptotic distribution of regret $\mathcal{R}(\hat{t}_{n})$ can be studied through the rate of convergence and the asymptotic distribution of policy $\hat{t}_{n}$. In the next section, I consider the EWM policy $\hat{t}^{e}_{n}$ and the SWM policy $\hat{t}^{s}_{n}$: through their asymptotic behaviors, I characterize the asymptotic distributions of their regrets $\mathcal{R}(\hat{t}^{e}_{n})$ and $\mathcal{R}(\hat{t}^{s}_{n})$. ## 3 Formal Results Let $Y_{0}$ and $Y_{1}$ be scalar potential outcomes, $D$ the binary treatment assignment in the experiment, $\textbf{X}\in\mathbb{R}^{d}$ a vector of $d$ observable characteristics, and $X$ the observable index. $\\{Y_{0},Y_{1},D,\textbf{X}\\}$ are random variables distributed according to the distribution $P$. They satisfy the following assumptions, which guarantee the identification of the optimal threshold: ###### Assumption 1. (Identification) 1. 1.1 (No interference) Observed outcome $Y$ is related with potential outcomes by the expression $Y=DY_{1}+(1-D)Y_{0}$. 2. 1.2 (Unconfoundedness) Distribution $P$ satisfies $D\perp\\!\\!\\!\perp(Y_{0},Y_{1})|\textbf{X}$. 3. 1.3 (Overlap) Propensity score $p(\textbf{x})=\mathbb{E}[D|\textbf{X}=\textbf{x}]$ is assumed to be known and such that $p(\textbf{x})\in(\eta,1-\eta)$, for some $\eta\in(0,0.5)$. 4. 1.4 (Joint distribution) Potential outcomes $(Y_{0},Y_{1})$ and index $X$ are continuous random variables with joint probability density function $\varphi(y_{0},y_{1},x)$, and marginal densities $\varphi_{0}$, $\varphi_{1}$, and $f_{x}$ respectively. Expectations $\mathbb{E}[Y_{0}|x]$ and $\mathbb{E}[Y_{1}|x]$, for $x$ in the support of $X$, exist. Assumptions 1.1, 1.2, and 1.3 are standard assumptions in many causal models. 
Assumption 1.1 requires the outcome of each unit to depend only on its own treatment status, excluding spillover effects. Assumption 1.2 requires random assignment of the treatment, conditionally on X. Assumption 1.3 requires that, for any value of X, there is a positive probability of observing both treated and untreated units. The probability of being assigned to the treatment may vary with X, allowing for stratified experiments. Assumption 1.4 specifies the focus on a continuous outcome and index. While it would be possible to accommodate discrete $Y_{0}$ and $Y_{1}$, maintaining the continuity of $X$ remains essential. The arguments developed in this paper, in fact, are not valid for a discrete index: my focus is on studying optimal threshold policies in contexts where the probability of observing any particular value in the support of the index $X$ is zero, and the threshold must be chosen from a continuum of possibilities. Under Assumption 1, the optimal policy $t^{*}$ defined in (4) can be written as $\displaystyle t^{*}\in$ $\displaystyle\mathop{\rm arg~{}max}\limits_{t}\mathbb{E}_{P}[Y_{1}\mathbf{1}\\{X>t\\}+Y_{0}\mathbf{1}\\{X\leq t\\}]$ (7) $\displaystyle=$ $\displaystyle\mathop{\rm arg~{}max}\limits_{t}\mathbb{E}_{P}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\mathbf{1}\\{X>t\\}\right]$ (8) and is hence identified. This standard result specifies under which conditions an experiment allows identification of $t^{*}$.
### 3.1 Empirical Welfare Maximizer Policy
The policymaker observes an i.i.d. random sample $Z=\\{Y_{i},D_{i},X_{i}\\}$ of size $n$ from $P$, and considers the Empirical Welfare Maximizer policy $\hat{t}^{e}_{n}$, the sample analog of $t^{*}$ in equation (4): $\displaystyle\hat{t}^{e}_{n}=\mathop{\rm arg~{}max}\limits_{t}\frac{1}{n}\sum_{i=1}^{n}\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}\mathbf{1}\\{X_{i}>t\\}+\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\mathbf{1}\\{X_{i}\leq t\\}\right).$ (9) Computationally, the problem requires evaluating the function inside the argmax at $n+1$ points; the solution is the convex set of points in $\mathbb{R}$ that attain the maximum of these values. Policy $\hat{t}^{e}_{n}$ can be seen as an extremum estimator, the maximizer of a function that is not continuous in $t$.
#### 3.1.1 Consistency of $\hat{t}^{e}_{n}$
First, I will prove that $\hat{t}^{e}_{n}$ consistently estimates the optimal threshold $t^{*}$, implying that $\mathcal{R}(\hat{t}^{e}_{n})\rightarrow^{p}0$. To prove this result, I need the following assumptions on the data distribution.
###### Assumption 2. (Consistency) 1. 2.1 (Maximizer $t^{*}$) The maximizer $t^{*}\in\mathcal{T}$ of $\mathbb{E}[(Y_{1}-Y_{0})\mathbf{1}\\{X>t\\}]$ exists and is unique. It is an interior point of the compact parameter space $\mathcal{T}\subseteq\mathbb{R}$. 2. 2.2 (Square integrability) The conditional expectations $\mathbb{E}[Y_{0}^{2}|X]$ and $\mathbb{E}[Y_{1}^{2}|X]$ exist. 3. 2.3 (Smoothness) In a neighborhood of $t^{*}$, the density $f_{x}(x)$ is positive, and the function $\mathbb{E}[(Y_{1}-Y_{0})\mathbf{1}\\{X>t\\}]$ is at least $s$-times continuously differentiable in $t$.
By requiring the existence of the optimal threshold in the interior of the parameter space, Assumption 2.1 assumes heterogeneity in the sign of the conditional average treatment effect $\mathbb{E}[Y_{1}-Y_{0}|X]$. It is because of this heterogeneity that the policymaker implements the threshold policy, targeting groups that would benefit from being treated.
The assumption neither excludes the multiplicity of local maxima, as long as the global one is unique, nor excludes unbounded support for $X$, but it requires the parameter space to be compact. A sufficient condition for Assumption 2.1, easy to interpret and plausible in many applications, is that the conditional average treatment effect takes both negative and positive values, and crosses zero exactly once. Assumption 2.2 requires that the conditional potential outcomes have finite second moments and is satisfied when $Y$ is assumed to be bounded (as in Kitagawa and Tetenov, 2018). Assumption 2.3 will be used with increasing values of $s$ to prove different results. To prove consistency, it needs to hold for $s=0$, requiring the continuity of the objective function $W(t)$ in a neighborhood of $t^{*}$. The derivative of $\mathbb{E}[(Y_{1}-Y_{0})\mathbf{1}\\{X>t\\}]$ with respect to $t$ is equal to $-f_{x}(t)\tau(t)$, where $\tau(x)=\mathbb{E}[Y_{1}-Y_{0}|X=x]$ is the conditional average treatment effect. Assumption 2.3 with $s\geq 1$ hence requires smoothness of $f_{x}(x)$ and $\tau(x)$ in a neighborhood of $t^{*}$. The following theorem proves the consistency of $\hat{t}^{e}_{n}$ for $t^{*}$.
###### Theorem 1. Consider the EWM policy $\hat{t}^{e}_{n}$ defined in equation (9) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1 and 2 with $s=0$, $\displaystyle\hat{t}^{e}_{n}\rightarrow^{a.s.}t^{*}$ i.e. $\hat{t}^{e}_{n}$ is a consistent estimator for $t^{*}$.
Note that, since the objective function defining $\hat{t}^{e}_{n}$ is not continuous in $t$, the proof of Theorem 1 does not resort to the standard arguments for consistency of extremum estimators. Instead, it relies on the approach used to prove consistency for estimators exhibiting the “cube root asymptotics” behavior, discussed in the next section.
#### 3.1.2 Asymptotic Distribution for $\hat{t}^{e}_{n}$
The fact that the objective function is not continuous in $t$ directly affects the convergence rate and the asymptotic distribution. The EWM policy $\hat{t}^{e}_{n}$ exhibits the “cube root asymptotics” behavior studied by Kim and Pollard (1990), shared, among others, by the maximum score estimator (Manski, 1975) and the split point estimator in decision trees (Banerjee and McKeague, 2007). The limiting distribution is not Gaussian, and its derivation requires an additional regularity condition on the tails of the probability density functions of $Y_{1}$ and $Y_{0}$:
###### Assumption 3. (Tail condition) Let $\varphi_{1}$ and $\varphi_{0}$ be the probability density functions of $Y_{1}$ and $Y_{0}$. Assume that, as $|y|\rightarrow\infty$, $\varphi_{1}(y)=o(|y|^{-(4+\delta)})$ and $\varphi_{0}(y)=o(|y|^{-(4+\delta)})$, for some $\delta>0$.
The following theorem gives the asymptotic distribution of $\hat{t}^{e}_{n}$.
###### Theorem 2. Consider the EWM policy $\hat{t}^{e}_{n}$ defined in equation (9) and the optimal policy $t^{*}$ defined in equation (4).
Under Assumptions 1, 2 with $s=2$ and 3, as $n\rightarrow\infty$, $\displaystyle n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)\rightarrow^{d}(2\sqrt{K}/H)^{2/3}\mathop{\rm arg~{}max}\limits_{r}\left(B(r)-r^{2}\right)$ (10) where $B(r)$ is the two-sided standard Brownian motion process, and $K$ and $H$ are $\displaystyle K=$ $\displaystyle f_{x}(t^{*})\left(\frac{1}{p(\textbf{X})}\mathbb{E}[Y_{1}^{2}|X=t^{*}]+\frac{1}{1-p(\textbf{X})}\mathbb{E}[Y_{0}^{2}|X=t^{*}]\right)$ $\displaystyle H=$ $\displaystyle f_{x}(t^{*})\left(\frac{\partial\mathbb{E}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right).$
The limiting distribution of $n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)$ is of the Chernoff type (Chernoff, 1964). Chernoff’s distribution is the probability distribution of the random variable $\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}$, where $B(r)$ is the two-sided standard Brownian motion process. The process $B(r)-r^{2}$ can be simulated, and the distribution of $\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}$ studied numerically; Groeneboom and Wellner (2001) report values for selected quantiles. It is worth noting how the variance of $\hat{t}^{e}_{n}$ depends on the data distribution. $K$ and $H$ are functions of the density of $X$, the variance of the conditional average treatment effect, and the derivative of the CATE at $t^{*}$. The optimal threshold is estimated with more precision when more data around the optimal threshold are available (larger density), when the treatment effect changes more rapidly (larger derivative of the CATE), and when the outcomes have less variability (smaller variance of the CATE). The exponents on $K$ and $H$, determined by the cube root behavior, will be crucial for comparing $\hat{t}^{e}_{n}$ and $\hat{t}^{s}_{n}$. Results in Theorem 2 can be used to derive asymptotically valid confidence intervals for $\hat{t}^{e}_{n}$, as discussed in Appendix A. More interestingly, they can be combined with equation (6) to characterize the asymptotic distribution of the regret $\mathcal{R}(\hat{t}^{e}_{n})$, as derived in the following corollary.
###### Corollary 2.1. The asymptotic distribution of regret $\mathcal{R}(\hat{t}^{e}_{n})$ is: $\displaystyle n^{\frac{2}{3}}\mathcal{R}(\hat{t}^{e}_{n})\rightarrow^{d}\left(\frac{2K^{2}}{H}\right)^{\frac{1}{3}}\left(\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}\right)^{2}.$ The expected value of the asymptotic distribution is $K^{\frac{2}{3}}H^{-\frac{1}{3}}C^{e}$, where $C^{e}=\sqrt[3]{2}\mathbb{E}\left[\left(\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}\right)^{2}\right]$ is a constant not dependent on $P$.
For the regret of the EWM policy, Corollary 2.1 establishes an $n^{\frac{2}{3}}$ rate, faster than the $\sqrt{n}$ rate found to be optimal for the EWM expected regret (Kitagawa and Tetenov, 2018). It is essential to highlight how the two results are different: the rate derived by Kitagawa and Tetenov (2018) for the expected regret is valid uniformly over a family of distributions that may violate the assumptions in Theorem 2, for example including distributions where the CATE is discontinuous at the threshold. On the other hand, Corollary 2.1 is derived for distributions satisfying Assumptions 1, 2 and 3. These imply that $K$ is bounded and $H$ is bounded away from zero: the state space can be further characterized by setting an upper bound $\overline{K}<\infty$ for $K$ and a lower bound $\underline{H}>0$ for $H$, implying a maximum for the expectation of the asymptotic regret equal to $\overline{K}^{\frac{2}{3}}\underline{H}^{-\frac{1}{3}}C^{e}$.
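Before turning to the smoothed policy, the sketch below illustrates how the EWM threshold in equation (9) can be computed in practice: the empirical welfare is a step function of $t$, so it suffices to evaluate it at $n+1$ candidate thresholds, as noted after equation (9). This is only a minimal illustration with simulated data and a known constant propensity score; the function names, the data generating process, and all numerical values are my own and hypothetical, not taken from the paper.

```python
import numpy as np

def ewm_threshold(y, d, x, propensity):
    """EWM threshold: maximize the sample analogue of the welfare objective in
    equation (9). The objective is a step function of t, so it is enough to
    evaluate it at n + 1 candidate thresholds."""
    w_treated = d * y / propensity                  # contributes when X_i > t
    w_untreated = (1 - d) * y / (1 - propensity)    # contributes when X_i <= t
    candidates = np.concatenate(([x.min() - 1.0], np.sort(x)))  # n + 1 candidates
    welfare = np.array([np.mean(w_treated * (x > t) + w_untreated * (x <= t))
                        for t in candidates])
    return candidates[np.argmax(welfare)]

# Hypothetical experiment in which the CATE equals the index, so t* = 0.
rng = np.random.default_rng(0)
n, p = 2_000, 0.5
x = rng.normal(size=n)
d = rng.binomial(1, p, size=n)
y1 = x + rng.normal(size=n)
y0 = rng.normal(size=n)
y = d * y1 + (1 - d) * y0
print(ewm_threshold(y, d, x, propensity=p))  # typically close to the optimum 0
```

The candidate grid reflects the fact that the empirical welfare only changes value when $t$ crosses an observed index value, so the argmax can be found exactly by enumeration.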
### 3.2 Smoothed Welfare Maximizer Policy
Corollary 2.1 shows how the cube root of $n$ convergence rate of the EWM policy directly impacts the convergence rate of its regret. In this section, I propose an alternative threshold policy, the Smoothed Welfare Maximizer policy, whose threshold estimator achieves a faster rate of convergence and which hence guarantees a faster rate of convergence for its regret. My approach exploits some additional smoothness assumptions on the distribution $P$: Corollary 2.1 holds when $f_{x}(x)$ and $\tau(x)$ are assumed to be at least once differentiable; if they are at least twice differentiable, the SWM policy guarantees an $n^{\frac{4}{5}}$ convergence rate for the regret. Note that asking the density of the index and the conditional average treatment effect to be twice differentiable seems plausible for many applications. In the context of policy learning, it is, for example, assumed by Athey and Wager (2021) to derive their results (the assumption is implied by the high-level conditions in their paper, as the authors discuss in their footnote 15). My approach involves smoothing the objective function in (9), in the same spirit as the smoothed maximum score estimator proposed by Horowitz (1992) to deal with inference for the maximum score estimator (Manski, 1975). The Smoothed Welfare Maximizer (SWM) policy $\hat{t}^{s}_{n}$ is defined as: $\displaystyle\hat{t}^{s}_{n}=\mathop{\rm arg~{}max}\limits_{t}\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k\left(\frac{X_{i}-t}{\sigma_{n}}\right)\right]$ (11) where $\sigma_{n}$ is a sequence of positive real numbers such that $\lim_{n\rightarrow\infty}\sigma_{n}=0$, and the function $k(\cdot)$ satisfies:
###### Assumption 4. (Kernel function) Kernel function $k(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ is continuous, bounded, and with limits $\lim_{x\rightarrow-\infty}k(x)=0$ and $\lim_{x\rightarrow\infty}k(x)=1$.
In practice, the indicator function found in $\hat{t}^{e}_{n}$ is here substituted by a smooth function $k(\cdot)$ with the same limiting behavior, which guarantees the differentiability of the expression inside the argmax. The bandwidth $\sigma_{n}$, decreasing with the sample size, ensures that when $n\to\infty$ the policy converges to the optimal one, as proved in the next section.
#### 3.2.1 Consistency of $\hat{t}^{s}_{n}$
I start by showing consistency of $\hat{t}^{s}_{n}$ for $t^{*}$, which implies $\mathcal{R}(\hat{t}^{s}_{n})\rightarrow^{p}0$.
###### Theorem 3. Consider the SWM policy $\hat{t}^{s}_{n}$ defined in equation (11) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1, 2 with $s=0$ and 4, as $n\rightarrow\infty$, $\displaystyle\hat{t}^{s}_{n}\rightarrow^{a.s.}t^{*}$ i.e. $\hat{t}^{s}_{n}$ is a consistent estimator for $t^{*}$.
Theorems 3 and 1 are analogous: they rely on the same assumptions on the data (Assumptions 1, 2) to prove the consistency of $\hat{t}^{e}_{n}$ and $\hat{t}^{s}_{n}$. Where the two policies differ is in their asymptotic distributions: the smoothness of the objective function for $\hat{t}^{s}_{n}$ guarantees asymptotic normality, but it also introduces a bias, since the bandwidth $\sigma_{n}$ equals zero only in the limit, and this bias appears in the limiting distribution.
#### 3.2.2 Asymptotic Distribution for $\hat{t}^{s}_{n}$
Deriving the asymptotic behavior of $\hat{t}^{s}_{n}$ requires an additional assumption on the rate of the bandwidth $\sigma_{n}$ and on the kernel function $k$.
Since both are chosen by the policy maker, the assumption is not a restriction on unobservables but a condition on properly picking $\sigma_{n}$ and $k$.
###### Assumption 5. (Bandwidth and kernel) 1. 5.1 (Rate of $\sigma_{n}$) $\frac{\log n}{n\sigma_{n}^{4}}\rightarrow 0$ as $n\rightarrow\infty$. 2. 5.2 (Kernel function) Kernel function $k(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ satisfies Assumption 4 and the following: * • $k(\cdot)$ is continuous, bounded, and with limits $\lim_{x\rightarrow-\infty}k(x)=0$ and $\lim_{x\rightarrow\infty}k(x)=1$. * • $k(\cdot)$ is twice differentiable, with uniformly bounded derivatives $k^{\prime}$ and $k^{\prime\prime}$. * • $\int k^{\prime}(x)^{4}dx$, $\int k^{\prime\prime}(x)^{2}dx$, and $\int|x^{2}k^{\prime\prime}(x)|dx$ are finite. * • For some integer $h\geq 2$ and each integer $i\in[1,h]$, $\int x^{i}k^{\prime}(x)dx=0$ for $i<h$ and $\int x^{h}k^{\prime}(x)dx=d\neq 0$, with $d$ finite. * • For any integer $i\in[0,h]$, any $\eta>0$, and any sequence $\sigma_{n}\rightarrow 0$, $\lim_{n\rightarrow\infty}\sigma_{n}^{i-h}\int_{|\sigma_{n}x|>\eta}|x^{i}k^{\prime}(x)|dx=0$, and $\lim_{n\rightarrow\infty}\sigma_{n}^{-1}\int_{|\sigma_{n}x|>\eta}|k^{\prime\prime}(x)|dx=0$. * • $\int xk^{\prime\prime}(x)dx=-1$, $\lim_{n\rightarrow\infty}\int_{|\sigma_{n}x|>\eta}|xk^{\prime\prime}(x)|dx=0$.
An example of a function $k$ satisfying Assumption 5.2 with $h=2$ is the cumulative distribution function of the standard normal distribution. I can now derive the asymptotic distribution of $\hat{t}^{s}_{n}$.
###### Theorem 4. Consider the SWM policy $\hat{t}^{s}_{n}$ defined in equation (11) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1, 2 with $s=h+1$ for some $h\geq 2$, and 5, as $n\rightarrow\infty$: 1. 1. if $n\sigma_{n}^{2h+1}\rightarrow\infty$, $\sigma_{n}^{-h}(\hat{t}^{s}_{n}-t^{*})\rightarrow^{p}H^{-1}A;$ 2. 2. if $n\sigma_{n}^{2h+1}\rightarrow\lambda<\infty$, $(n\sigma_{n})^{1/2}(\hat{t}^{s}_{n}-t^{*})\rightarrow^{d}\mathcal{N}(\lambda^{1/2}H^{-1}A,H^{-2}\alpha_{2}K);$ where $A$, $\alpha_{1}$, and $\alpha_{2}$ are: $\displaystyle A=$ $\displaystyle-\frac{1}{h!}\alpha_{1}\int_{y}\left(Y_{1}-Y_{0}\right)\varphi^{h}_{x}(y,t^{*})dy$ (12) $\displaystyle\alpha_{1}=$ $\displaystyle\int_{\zeta}\zeta^{h}k^{\prime}\left(\zeta\right)d\zeta$ (13) $\displaystyle\alpha_{2}=$ $\displaystyle\int_{\zeta}k^{\prime}\left(\zeta\right)^{2}d\zeta.$ (14)
The asymptotic distribution of $(n\sigma_{n})^{1/2}(\hat{t}^{s}_{n}-t^{*})$ is normal, centered at the asymptotic bias $\lambda^{1/2}H^{-1}A$ introduced by the smoothing function, which exploits local information by giving non-zero weights to treated units in the untreated region and vice versa. The bias and the variance of the distribution depend on the population distribution through $K$ and $H$, as for the EWM policy, and also through $A$, a new term that determines the bias. In the definition of $A$, $\varphi^{h}_{x}$ is the $h$-th derivative with respect to $x$ of $\varphi(y_{0},y_{1},x)$, the joint density of $Y_{0}$, $Y_{1}$, and $X$: the integral in the expression for $A$ is the $h$-th derivative of $f_{x}(x)\tau(x)$ evaluated at $x=t^{*}$, whose existence is guaranteed by Assumption 2.3 with $s=h+1$. $\alpha_{1}$ and $\alpha_{2}$ depend only on the kernel function $k$, and are hence known. The difference between Theorems 4 and 2 is the degree of smoothness required by Assumption 2.3.
In Theorem 2, the objective function $W(t)$ is required to be twice differentiable, while Theorem 4 requires $h+1$ continuous derivatives, where $h$ directly impacts the rate of convergence of the policy: to achieve a rate faster than the EWM policy, $\hat{t}^{s}_{n}$ requires $W(t)$ to be three times differentiable. As for the EWM policy, the results in Theorem 4 can be used to derive asymptotically valid confidence intervals for $\hat{t}^{s}_{n}$ (see Appendix A), and, combined with equation (6), to characterize the asymptotic distribution of the regret $\mathcal{R}(\hat{t}^{s}_{n})$, as derived in the next corollary.
###### Corollary 4.1. The asymptotic distribution of regret $\mathcal{R}(\hat{t}^{s}_{n})$ is: $\displaystyle n\sigma_{n}\mathcal{R}(\hat{t}^{s}_{n})\rightarrow^{d}\frac{1}{2}\frac{\alpha_{2}K}{H}\chi^{2}\left(1,\frac{\lambda A^{2}}{\alpha_{2}K}\right)$ where $\chi^{2}\left(1,\frac{\lambda A^{2}}{\alpha_{2}K}\right)$ is a noncentral chi-squared distribution with 1 degree of freedom and noncentrality parameter $\frac{\lambda A^{2}}{\alpha_{2}K}$. The expected value of the asymptotic distribution is: $\displaystyle\frac{1}{2}\frac{\alpha_{2}K}{H}\left(1+\frac{\lambda A^{2}}{\alpha_{2}K}\right)=\frac{\alpha_{2}}{2}\frac{K}{H}+\frac{1}{2}\frac{\lambda A^{2}}{H}.$ (15)
Let $\sigma_{n}=(\lambda/n)^{1/(2h+1)}$ with $\lambda\in(0,\infty)$. The expectation of the asymptotic regret is minimized by setting $\lambda=\lambda^{*}=\frac{\alpha_{2}K}{2hA^{2}}$: in this case, the expectation of the asymptotic distribution scaled by $n^{\frac{2h}{2h+1}}$ is $A^{\frac{2}{2h+1}}K^{\frac{2h}{2h+1}}H^{-1}C^{s}$, where $C^{s}=\frac{2h+1}{2}\left(\frac{\alpha_{2}}{2h}\right)^{\frac{2h}{2h+1}}$ is a constant not dependent on $P$. With the optimal bandwidth $\sigma_{n}=O_{p}(n^{-\frac{1}{2h+1}})$, the regret converges at an $n^{\frac{2h}{2h+1}}$ rate. For $h\geq 2$, this implies that the regret converges faster with the SWM than with the EWM policy: the extra smoothness assumption has been exploited to achieve a better rate for the asymptotic regret. The corollary is valid for distributions satisfying Assumptions 1, 2 and 3, which imply a bounded $A$: the state space can be characterized by setting $\overline{A}<\infty$ as the upper bound for $|A|$, implying a maximum for the expectation of the asymptotic regret equal to $\overline{A}^{\frac{2}{2h+1}}\overline{K}^{\frac{2h}{2h+1}}\underline{H}^{-1}C^{s}$.
### 3.3 Comparison of Regrets $\mathcal{R}(\hat{t}^{e}_{n})$ and $\mathcal{R}(\hat{t}^{s}_{n})$
Corollaries 2.1 and 4.1 showed that the regret with the SWM policy has a faster convergence rate than with the EWM policy. Assuming the accuracy of the asymptotic approximations, the comparison can be extended beyond the rates, to investigate how differently the asymptotic distributions depend on $P$ through $H$, $K$, and $A$. Since it is more intuitive to compare expectations than entire distributions, I will focus on this comparison, with the understanding that similar reasoning also applies to alternative statistics of the asymptotic distributions, such as medians or other quantiles. The findings in the corollaries suggest that, for any sample size, one can select distributions $P$ under which either of the two asymptotic expected regrets is the smaller one. Hence, both the EWM and the SWM can be the better policy. If the asymptotic behavior is reflected in finite samples, this implies that in a given application where $P$ is unknown it is impossible to determine which policy would guarantee a smaller expected regret.
In statistical decision theory, this ambiguity is recognized: focusing on the worst case scenario, the policy maker should choose the policy that guarantees the smaller regret when the expectation of the asymptotic regret is maximized. In this case, for any sample size, this amounts to comparing $n^{-\frac{2}{3}}\overline{K}^{\frac{2}{3}}\underline{H}^{-\frac{1}{3}}C^{e}$ and $n^{-\frac{2h}{2h+1}}\overline{A}^{\frac{2}{2h+1}}\overline{K}^{\frac{2h}{2h+1}}\underline{H}^{-1}C^{s}$, and choosing the policy accordingly. Which policy is uniformly better hence depends on the application, and specifically on the values of the parameters $\overline{K}$, $\underline{H}$, and $\overline{A}$, which the policy maker must set in advance. It is interesting to investigate for which distributions $P$ one policy is expected to do relatively better than the other, in a pointwise sense. It is clear that the SWM policy does relatively better with increasing sample size due to its faster convergence rate, for any $P$. Conversely, for any given sample size, when $h=2$, the ratio of the expectations of the asymptotic regrets is proportional to $\frac{H^{\frac{2}{3}}}{A^{\frac{2}{5}}K^{\frac{2}{15}}}\frac{C^{e}}{C^{s}}$: the SWM policy does relatively better for larger $H$ and smaller $K$ and $A$. The negative impact of $A$ is straightforward, since it only affects the bias for the SWM policy, without influencing the distribution of the EWM. The role of $H$ and $K$ is less intuitive, since they appear in both distributions but with different exponents. It is worth noting that the SWM policy holds a relative advantage when the derivative of the CATE is higher and its variance is lower, scenarios where the benefits of selecting a threshold closer to the optimal one are larger. Further comparisons of the regrets could explore the interplay between the sample size and the distribution $P$, focusing on sequences $P_{n}$ that change with the sample size. Appendix B delves into this investigation. It is important to acknowledge that the relevance of these discussions of asymptotic results relies on their ability to approximate behaviors in finite samples, as, after all, the policymaker only has access to a finite sample of experimental data. Guided by the theoretical results just derived, the Monte Carlo simulations in the next section make it possible to analyze the finite sample regrets associated with the EWM and the SWM policies.
## 4 Monte Carlo Simulations
I examine the finite sample properties of the EWM and SWM policies using Monte Carlo simulations. The aim of this section is twofold: first, I will provide examples of data generating processes which lead to different rankings for the two policies in terms of asymptotic regret. Then, I will verify how the asymptotic results approximate the finite sample distributions, and compare the finite sample regrets of the two policies. As the data generating process, consider the following distribution $P$ of $(Y_{0},Y_{1},D,X)$: $\displaystyle X$ $\displaystyle\sim\mathcal{N}(0,1)$ $\displaystyle\epsilon_{1}$ $\displaystyle\sim\mathcal{N}(0,\gamma)$ $\displaystyle Y_{1}$ $\displaystyle=X^{3}+\beta_{2}X^{2}+\beta_{1}X+\epsilon_{1}$ $\displaystyle Y_{0}$ $\displaystyle\sim\mathcal{N}(0,\gamma)$ $\displaystyle D$ $\displaystyle\sim\text{Bern}(p).$ Under $P$, the potential outcome $Y_{0}$ does not depend on the index $X$, and the treatment is randomly assigned with constant probability $p$.
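For concreteness, a single draw from this distribution $P$ can be generated as in the sketch below. The function name and the parameter values are placeholders for the illustration, and the interpretation of $\gamma$ as the standard deviation of $\epsilon_{1}$ and $Y_{0}$ is my own reading of the $\mathcal{N}(0,\gamma)$ notation, consistent with the expression for $K$ reported below.

```python
import numpy as np

def draw_sample(n, beta1, beta2, gamma, p, rng):
    """One i.i.d. sample (Y, D, X) of size n from the Monte Carlo distribution P.
    gamma is used here as the standard deviation of eps_1 and of Y_0."""
    x = rng.normal(size=n)
    eps1 = rng.normal(scale=gamma, size=n)
    y1 = x**3 + beta2 * x**2 + beta1 * x + eps1
    y0 = rng.normal(scale=gamma, size=n)
    d = rng.binomial(1, p, size=n)
    y = d * y1 + (1 - d) * y0          # observed outcome (Assumption 1.1)
    return y, d, x

rng = np.random.default_rng(1)
y, d, x = draw_sample(n=1_000, beta1=1.0, beta2=-0.5, gamma=1.0, p=0.5, rng=rng)
```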
Parameter values are chosen such that $\mathbb{E}[Y_{1}|X=x]$, and hence $\mathbb{E}[Y_{1}-Y_{0}|X=x]$, are increasing functions of $x$, and the optimal threshold $t^{*}$ is 0. It can be verified that such a $P$ implies the following: $\displaystyle K=$ $\displaystyle\phi(0)\left(\frac{\gamma^{2}}{p}+\frac{\gamma^{2}}{1-p}\right)$ $\displaystyle H=$ $\displaystyle\phi(0)\beta_{1}$ $\displaystyle A=$ $\displaystyle-\phi(0)\beta_{2}$ $\displaystyle W(t)=$ $\displaystyle\beta_{2}\left(1-\Phi(t)+t\phi(t)\right)+\beta_{1}\phi(t)+\left(t^{2}\phi(t)+2\phi(t)\right)$ where $\phi(t)$ and $\Phi(t)$ are the pdf and the cdf of the standard normal distribution, respectively. I consider two models characterized by different parameter values:

Model | $\gamma$ | $\beta_{1}$ | $\beta_{2}$ | $p$
---|---|---|---|---
1 | 1 | 1 | -0.5 | 0.5
2 | 3 | 0.5 | -1 | 0.5

Table 1 reports values for $K$, $H$, and $A$, and the asymptotic expected regrets for both policies for samples of size $n\in\\{500,1000,2000,3000\\}$. For the SWM policy, asymptotic regrets are computed using the infeasible optimal bandwidth $\sigma_{n}^{*}$. Compared to Model 1, Model 2 entails larger $K$ and $A$, and smaller $H$. Results from the previous section hence imply that the regret with the EWM policy is relatively lower in Model 2. Consider the case with $n=500$. In Model 1, the asymptotic expected regret is higher with the EWM policy, whereas in Model 2, it is higher with the SWM policy: this confirms that the ranking of asymptotic expected regrets depends on the unknown data distribution $P$. Because of the faster rate, though, as $n$ increases the SWM policy exhibits relatively better performance. Regardless of the specific distribution $P$, there exists a certain sample size beyond which the asymptotic expected regret with the SWM policy becomes smaller. In Model 2, the two asymptotic expected regrets are almost equal at $n=1,000$, and the inversion of the ranking occurs by $n=2,000$.

Table 1: Values of $K$, $H$, and $A$, and asymptotic expected regrets using both the EWM and SWM policies across different models. The regret for the SWM policy is computed using the optimal bandwidth $\sigma_{n}^{*}$. To facilitate the reading, asymptotic expected regrets have been scaled by a factor of 10,000.

Model | n | EWM | SWM | K | H | A
---|---|---|---|---|---|---
1 | $500$ | $96.190$ | $39.714$ | $1.596$ | $0.399$ | $0.199$
 | $1,000$ | $60.596$ | $22.809$ | $1.596$ | $0.399$ | $0.199$
 | $2,000$ | $38.173$ | $13.101$ | $1.596$ | $0.399$ | $0.199$
 | $3,000$ | $29.131$ | $9.471$ | $1.596$ | $0.399$ | $0.199$
2 | $500$ | $400.166$ | $439.442$ | $9.575$ | $0.199$ | $0.399$
 | $1,000$ | $252.089$ | $252.393$ | $9.575$ | $0.199$ | $0.399$
 | $2,000$ | $158.806$ | $144.962$ | $9.575$ | $0.199$ | $0.399$
 | $3,000$ | $121.192$ | $104.805$ | $9.575$ | $0.199$ | $0.399$

To investigate the finite sample distributions of the regret, I draw samples of size $n$ from $P$ 5,000 times for each model. Each sample is used to estimate the thresholds $\hat{t}^{e}_{n}$ and $\hat{t}^{s}_{n}$. Estimating $\hat{t}^{s}_{n}$ requires specifying a bandwidth $\sigma_{n}$, for which I adopt the following method: I use the estimated policy $\hat{t}^{e}_{n}$ to compute $\hat{A}_{n}$ and $\hat{K}_{n}$, which are then used to compute the optimal $\hat{\lambda}_{n}^{*}$ and the optimal bandwidth $\hat{\sigma}_{n}^{*}$. The SWM policy $\hat{t}^{s}_{n}$ is hence estimated with this bandwidth $\hat{\sigma}_{n}^{*}$, which consistently estimates the optimal one if $\hat{A}_{n}$ and $\hat{K}_{n}$ consistently estimate $A$ and $K$.
I also consider $\hat{t}^{s}_{n}$ with the infeasible optimal $\sigma_{n}^{*}$ computed from the data generating process. Estimates of $\hat{t}^{e}_{n}$ and $\hat{t}^{s}_{n}$ are used to compute the regrets $\mathcal{R}(\hat{t}^{e}_{n})$ and $\mathcal{R}(\hat{t}^{s}_{n})$. I thus obtain the finite sample distributions of the regret, which can be compared with the asymptotic distributions derived in Corollaries 2.1 and 4.1. Table 2 presents the means of these finite sample and asymptotic distributions, also depicted in Figures 1 and 2. Corresponding tables and figures for the median regret are provided in Appendix C. The last column of each table reports the ratio between the finite sample mean regrets for the EWM and SWM policies, facilitating the comparison: a ratio larger than one indicates that the SWM policy outperforms the EWM policy. These ratios increase with the sample size, reflecting the faster asymptotic convergence rate of the SWM policy. Similar to the asymptotic results, in finite samples the SWM policy does relatively better as the sample size increases.

Table 2: Finite sample and asymptotic expected regrets with the EWM and SWM policies across different models. Finite sample (empirical) values are computed through 5,000 Monte Carlo simulations. The last column displays the ratio between finite sample expected regrets with the EWM and SWM policies.

Model | n | EWM empirical | EWM asymptotic | SWM empirical ($\sigma_{n}^{*}$) | SWM empirical ($\hat{\sigma}_{n}^{*}$) | SWM asymptotic | Ratio
---|---|---|---|---|---|---|---
1 | $500$ | $89.635$ | $96.190$ | $31.492$ | $77.747$ | $39.714$ | $1.153$
 | $1,000$ | $58.799$ | $60.596$ | $18.248$ | $44.240$ | $22.809$ | $1.329$
 | $2,000$ | $37.297$ | $38.173$ | $10.887$ | $29.176$ | $13.101$ | $1.278$
 | $3,000$ | $28.615$ | $29.131$ | $7.872$ | $16.189$ | $9.471$ | $1.768$
2 | $500$ | $276.867$ | $400.166$ | $215.876$ | $291.654$ | $439.442$ | $0.949$
 | $1,000$ | $201.182$ | $252.089$ | $151.181$ | $209.069$ | $252.393$ | $0.962$
 | $2,000$ | $149.572$ | $158.806$ | $107.733$ | $151.069$ | $144.962$ | $0.990$
 | $3,000$ | $124.168$ | $121.192$ | $90.251$ | $125.565$ | $104.805$ | $0.989$

Figure 1: The figure illustrates asymptotic and finite sample expected regrets for the EWM and SWM policies in Model 1, corresponding to the values reported in Table 2.

Figure 2: The figure illustrates asymptotic and finite sample expected regrets for the EWM and SWM policies in Model 2, corresponding to the values reported in Table 2.

Simulations enable a comparison between the finite sample regrets and their asymptotic counterparts. Across all models and sample sizes, the asymptotic approximation for the feasible SWM policy (with the estimated bandwidth $\hat{\sigma}_{n}^{*}$) is relatively less accurate. Simulations suggest that this is partly attributable to the need for estimating an additional tuning parameter, the bandwidth $\sigma_{n}$. When the SWM policy is estimated using the infeasible optimal bandwidth $\sigma^{*}_{n}$, in fact, the asymptotic approximation is more accurate and the regret is smaller. In Model 1, as illustrated in Figure 1, both the finite sample and the asymptotic expected regrets are lower for the SWM policy. Conversely, in Model 2 (illustrated in Figure 2), the finite sample expected regret is lower with the EWM policy. This confirms the impossibility of ranking the policies in a pointwise sense: different distributions $P$ result in different rankings for the finite sample expected regrets.
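To make the simulation design concrete, the sketch below runs a small number of replications of the exercise just described, reusing the hypothetical `draw_sample` and `ewm_threshold` helpers sketched earlier. Two simplifications are mine and not part of the paper's simulation code: the SWM bandwidth is set to an ad hoc $n^{-1/5}$ rate rather than the plug-in $\hat{\sigma}_{n}^{*}$, and the smoothed objective is maximized over a grid. The kernel is the standard normal CDF, which satisfies Assumption 5.2 with $h=2$, and the closed-form $W(t)$ above is used to evaluate the regrets.

```python
import numpy as np
from scipy.stats import norm

def swm_threshold(y, d, x, p, bandwidth, grid_size=400):
    """SWM threshold: equation (11) with the standard normal CDF as kernel,
    maximized over a grid of candidate thresholds."""
    score = d * y / p - (1 - d) * y / (1 - p)
    grid = np.linspace(x.min(), x.max(), grid_size)
    smoothed = [np.mean(score * norm.cdf((x - t) / bandwidth)) for t in grid]
    return grid[int(np.argmax(smoothed))]

def welfare(t, beta1, beta2):
    """Closed-form population welfare W(t) for the data generating process above."""
    return (beta2 * (1 - norm.cdf(t) + t * norm.pdf(t))
            + beta1 * norm.pdf(t) + (t**2 + 2) * norm.pdf(t))

beta1, beta2, gamma, p, n, t_star = 1.0, -0.5, 1.0, 0.5, 1_000, 0.0  # Model 1 values
rng = np.random.default_rng(2)
regret_e, regret_s = [], []
for _ in range(200):                      # 5,000 replications in the paper; fewer here
    y, d, x = draw_sample(n, beta1, beta2, gamma, p, rng)
    t_e = ewm_threshold(y, d, x, propensity=p)
    t_s = swm_threshold(y, d, x, p, bandwidth=n ** (-1 / 5))  # ad hoc rate, not sigma*
    w_star = welfare(t_star, beta1, beta2)
    regret_e.append(w_star - welfare(t_e, beta1, beta2))
    regret_s.append(w_star - welfare(t_s, beta1, beta2))
print(np.mean(regret_e), np.mean(regret_s))  # finite sample expected regrets
```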
It is important to note that the ranking of the EWM and the SWM policies indicated by the asymptotic results may differ from the actual finite sample comparison. Consider, for example, Model 2 with $n=2,000$: despite the asymptotic analysis suggesting a smaller expected regret with the SWM policy, the EWM actually guarantees a smaller regret. In this scenario, even if $P$ were known in advance, choosing according to the asymptotic approximation would not have been optimal. As $n$ increases, the approximation improves, and the rankings based on asymptotic analysis and finite sample comparisons coincide. Monte Carlo simulations have confirmed that the asymptotic results can approximate some finite sample behavior of the regrets, highlighting some caveats to consider when applying conclusions from asymptotic analysis to finite sample regrets with the EWM and SWM policies. However, they have not yet provided insight into the practical significance of the differences between the two policies, whether these differences are relevant or negligible in real-world scenarios. An empirical illustration is useful to answer these questions, illustrating the different implications that the policies may have. ## 5 Empirical Illustration I consider the same empirical setting as kitagawa2018should: experimental data from the National Job Training Partnership Act (JTPA) Study. bloom1997benefits describes the experiment in detail. The study randomized whether applicants would be eligible to receive a mix of training, job-search assistance, and other services provided by the JTPA for a period of 18 months. Background information on the applicants was collected before treatment assignment, alongside administrative and survey data on the applicants’ earnings over the subsequent 30 months. I consider the same sample of 9,223 observations as in kitagawa2018should. The outcome variable $Y$ represents the total individual earnings during the 30 months following program assignment, and the treatment variable $D$ is a binary indicator denoting whether the individual was assigned to the program (intention-to-treat). The threshold policy is implemented by considering the individual’s earnings in the year preceding the assignment as the index $X$. Treatment is exclusively assigned to workers with prior earnings below the threshold, based on the expectation that program services yield a more substantial positive effect for individuals who previously experienced lower earnings. Experimental data are employed to determine the threshold beyond which the treatment, on average, harms the recipients. The bandwidth to estimate the SWM policy is chosen as in the Monte Carlo simulation, using the EWM policy $\hat{t}^{e}_{n}$ to compute $\hat{A}_{n}$, $\hat{K}_{n}$, the optimal $\hat{\lambda}_{n}^{*}$, and then the optimal bandwidth $\hat{\sigma}_{n}^{*}$. Table 3 reports the threshold estimates, including the confidence intervals constructed as discussed in Appendix A. The threshold with the SWM policy is 700 dollars lower than with the EWM (5,924 vs 6,614 dollars), a drop of $10.5\%$. The lower threshold implies that the treatment would target fewer workers: if the EWM policy were implemented, $82.9\%$ of the workers in the sample would receive the program services, compared to the $78.9\%$ with the SWM policy, resulting in a $4$ percentage point decrease. Table 3: Summary of the Empirical Welfare Maximizer (EWM) and Smoothed Welfare Maximizer (SWM) policies. 
| EWM | SWM
---|---|---
Optimal threshold | 6614 | 5924
Confidence interval | (4880.6, 8347.4) | (5832.7, 6411.3)
Bootstrapped confidence interval | (4748.5, 8060) |
Expected asymptotic $\mathcal{R}$ | 41.299 | 5.296
Median asymptotic $\mathcal{R}$ | 19.47 | 2.597
% of workers treated | 82.912 | 78.911

Results in Corollaries 2.1 and 4.1 make it possible to estimate the expected and the median regret with the two policies: the asymptotic approximation suggests that the average regret drops from $42$ to $5.5$ dollars per worker when moving from the EWM to the SWM policy. In this context, the SWM would guarantee an average gain of $37.6$ dollars per worker over the 30-month period under study, equating to $909$ dollars on average for the workers who would change their treatment assignment under the two policies. The numbers in the table should be considered with care, and clearly, the intention of this empirical illustration was not to advocate for a specific new job-training policy. Rather, the application aimed to assess whether the EWM and SWM policies may have implications with economically relevant differences. The results suggest this is the case: together with the theoretical and simulation findings, this implies that the choice between the EWM and the SWM policy should be thoughtfully considered, as it may determine a relevant improvement in population welfare.
## 6 Conclusion
In this paper, I addressed the problem of using experimental data to estimate optimal threshold policies when the policymaker seeks to minimize the regret associated with implementing the policy in the population. I first examined the Empirical Welfare Maximizer threshold policy, deriving its asymptotic distribution and showing how it links to the asymptotic distribution of its regret. I then introduced the Smoothed Welfare Maximizer policy, replacing the indicator function in the EWM policy with a smooth kernel function. Under the assumptions commonly made in the policy learning literature, the convergence rate for the worst-case regret of the SWM is faster than with the EWM policy. A comparative analysis of the asymptotic distributions of the two policies was conducted, to investigate how differently their regrets depend on the data distribution $P$. Monte Carlo simulations corroborated the asymptotic finding that the SWM policy may perform better than the commonly studied EWM policy also in finite samples. An empirical illustration showed that the implications of the two policies can differ markedly in a real-world application. Three sets of problems remain open for future research, to extend the results of this paper in diverse directions. While this study compared the EWM policy with its smoothed counterpart, the SWM, the literature in statistical decision theory has also examined alternative policy functions. Consider, for example, the Augmented Inverse Propensity Weighted policy proposed by Athey and Wager (2021): what is the asymptotic distribution of the AIPW policy’s regret in the context of threshold policies? How differently does it depend on $P$? Is it possible, analogously to what I did in this paper for the EWM policy, to modify the AIPW policy by smoothing the indicator function in its definition? Second, it would be interesting to extend the smoothing approach from the EWM policy to other policy classes. Threshold policies are convenient as they depend on only one parameter. Still, besides more convoluted derivations, the same intuition of smoothing the indicator function also seems valid for the linear index or the multiple indices policy.
The questions to explore include adapting the theory developed in this paper to these policy classes and whether this approach could be generalized even to all cases where the EWM policy is applied. Lastly, the framework developed in this paper for using experimental data to estimate optimal policies could inform experimental design. While the existing literature mainly focuses on optimal design for estimating the average treatment effect, it could be valuable to consider scenarios when estimating the threshold policy is the goal: how should the experimental design be adapted? How the allocation of units to treatment and control groups would change? The results presented in this paper, elucidating the connection between the distribution $P$ and the regret of the policy, provide a natural foundation for exploring experimental designs optimal for threshold policy estimation. ## References * Abrevaya and Huang (2005) Abrevaya, J. and J. Huang (2005). On the bootstrap of the maximum score estimator. Econometrica 73(4), 1175–1204. * Aiken et al. (2022) Aiken, E., S. Bellue, D. Karlan, C. Udry, and J. E. Blumenstock (2022). Machine learning and phone data can improve targeting of humanitarian aid. Nature 603(7903), 864–870. * Amemiya (1985) Amemiya, T. (1985). Advanced econometrics. Harvard university press. * Athey and Wager (2021) Athey, S. and S. Wager (2021). Policy learning with observational data. Econometrica 89(1), 133–161. * Banerjee and McKeague (2007) Banerjee, M. and I. W. McKeague (2007). Confidence sets for split points in decision trees. The Annals of Statistics 35(2), 543–574. * Banerjee and Wellner (2001) Banerjee, M. and J. A. Wellner (2001). Likelihood ratio tests for monotone functions. Annals of Statistics, 1699–1731. * Bloom et al. (1997) Bloom, H. S., L. L. Orr, S. H. Bell, G. Cave, F. Doolittle, W. Lin, and J. M. Bos (1997). The benefits and costs of jtpa title ii-a programs: Key findings from the national job training partnership act study. Journal of human resources, 549–576. * Card et al. (2008) Card, D., C. Dobkin, and N. Maestas (2008). The impact of nearly universal insurance coverage on health care utilization: evidence from medicare. American Economic Review 98(5), 2242–2258. * Cattaneo et al. (2020) Cattaneo, M. D., M. Jansson, and K. Nagasawa (2020). Bootstrap-based inference for cube root asymptotics. Econometrica 88(5), 2203–2219. * Chernoff (1964) Chernoff, H. (1964). Estimation of the mode. Annals of the Institute of Statistical Mathematics 16(1), 31–41. * Crost et al. (2014) Crost, B., J. Felter, and P. Johnston (2014). Aid under fire: Development projects and civil conflict. American Economic Review 104(6), 1833–1856. * Groeneboom and Wellner (2001) Groeneboom, P. and J. A. Wellner (2001). Computing chernoff’s distribution. Journal of Computational and Graphical Statistics 10(2), 388–400. * Haushofer et al. (2022) Haushofer, J., P. Niehaus, C. Paramo, E. Miguel, and M. W. Walker (2022). Targeting impact versus deprivation. Technical report, National Bureau of Economic Research. * Hirano and Porter (2009) Hirano, K. and J. R. Porter (2009). Asymptotics for statistical treatment rules. Econometrica 77(5), 1683–1701. * Horowitz (1992) Horowitz, J. L. (1992). A smoothed maximum score estimator for the binary response model. Econometrica: journal of the Econometric Society, 505–531. * Hussam et al. (2022) Hussam, R., N. Rigol, and B. N. Roth (2022). Targeting high ability entrepreneurs using community information: Mechanism design in the field. 
American Economic Review 112(3), 861–98. * Kamath and Kim (2007) Kamath, P. S. and W. R. Kim (2007). The model for end-stage liver disease (meld). Hepatology 45(3), 797–805. * Kim and Pollard (1990) Kim, J. and D. Pollard (1990). Cube root asymptotics. The Annals of Statistics, 191–219. * Kitagawa et al. (2022) Kitagawa, T., S. Lee, and C. Qiu (2022). Treatment choice with nonlinear regret. arXiv preprint arXiv:2205.08586. * Kitagawa and Tetenov (2018) Kitagawa, T. and A. Tetenov (2018). Who should be treated? empirical welfare maximization methods for treatment choice. Econometrica 86(2), 591–616. * Leboeuf et al. (2020) Leboeuf, J.-S., F. LeBlanc, and M. Marchand (2020). Decision trees as partitioning machines to characterize their generalization properties. Advances in Neural Information Processing Systems 33, 18135–18145. * Léger and MacGibbon (2006) Léger, C. and B. MacGibbon (2006). On the bootstrap in cube root asymptotics. Canadian Journal of Statistics 34(1), 29–44. * Manski (1975) Manski, C. F. (1975). Maximum score estimation of the stochastic utility model of choice. Journal of econometrics 3(3), 205–228. * Manski (2004) Manski, C. F. (2004). Statistical treatment rules for heterogeneous populations. Econometrica 72(4), 1221–1246. * Manski (2021) Manski, C. F. (2021). Econometrics for decision making: Building foundations sketched by haavelmo and wald. Econometrica 89(6), 2827–2853. * Manski (2023) Manski, C. F. (2023). Probabilistic prediction for binary treatment choice: with focus on personalized medicine. Journal of Econometrics 234(2), 647–663. * Manski and Tetenov (2023) Manski, C. F. and A. Tetenov (2023). Statistical decision theory respecting stochastic dominance. The Japanese Economic Review 74(4), 447–469. * Mbakop and Tabord-Meehan (2021) Mbakop, E. and M. Tabord-Meehan (2021). Model selection for treatment choice: Penalized welfare maximization. Econometrica 89(2), 825–848. * Mohammadi et al. (2005) Mohammadi, L., S. Van De Geer, and J. Shawe-Taylor (2005). Asymptotics in empirical risk minimization. Journal of Machine Learning Research 6(12). * Rai (2018) Rai, Y. (2018). Statistical inference for treatment assignment policies. Unpublished Manuscript. * Shigeoka (2014) Shigeoka, H. (2014). The effect of patient cost sharing on utilization, health, and risk protection. American Economic Review 104(7), 2152–2184. * Stoye (2012) Stoye, J. (2012). Minimax regret treatment choice with covariates or with limited validity of experiments. Journal of Econometrics 166(1), 138–156. * Sun et al. (2021) Sun, H., E. Munro, G. Kalashnov, S. Du, and S. Wager (2021). Treatment allocation under uncertain costs. arXiv preprint arXiv:2103.11066. * Sun (2021) Sun, L. (2021). Empirical welfare maximization with constraints. arXiv preprint arXiv:2103.15298. * Taylor (2003) Taylor, J. (2003). Corporation income tax brackets and rates, 1909-2002. Statistics of Income. SOI Bulletin 23(2), 284–291. * Viviano and Bradic (2023) Viviano, D. and J. Bradic (2023). Fair policy targeting. Journal of the American Statistical Association, 1–14. ## Appendix A Confidence Intervals for Threshold Policies Results derived in Section 3 can be used to construct confidence intervals that asymptotically cover the optimal threshold policy with a given probability, and to conduct hypotheses tests. 
It is important to remark that, in a decision problem setting, hypothesis testing does not have a clearly motivated justification; indeed, statistical decision theory is the alternative approach to decisions under uncertainty, as pointed out in Manski (2021). Rather than advocating for confidence intervals and hypothesis tests for threshold policies, this appendix provides a procedure while remaining agnostic about why one may be interested in it. For the EWM policy, Rai (2018) proposes confidence intervals that are uniformly valid for several policy classes. They rely on test inversion of a bootstrap procedure, which compares the welfare generated by all the policies in the class. My procedure is much simpler for EWM threshold policies, and I directly construct confidence intervals from the asymptotic distribution derived in Theorem 2. An analogous approach, built on the results in Theorem 4, is then used to construct confidence intervals for the SWM policy. ### A.1 Empirical Welfare Maximizer Policy Consider the asymptotic distribution for the EWM threshold policy derived in Theorem 2: $\displaystyle n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)\rightarrow^{d}(2\sqrt{K}/H)^{2/3}\mathop{\rm arg~{}max}\limits_{r}\left(B(r)-r^{2}\right).$ (16) If $H$ and $K$ were known, confidence intervals for the optimal policy $t^{*}$ with asymptotic coverage $1-\alpha$ could be constructed as $(\hat{t}^{e}_{n}-w^{e}_{n},\hat{t}^{e}_{n}+w^{e}_{n})$, where $\displaystyle w^{e}_{n}=n^{-1/3}(2\sqrt{K}/H)^{2/3}c_{\alpha/2}$ (17) and $c_{\alpha/2}$ is the critical value, the upper $\alpha/2$ quantile of the distribution of $\mathop{\rm arg~{}max}\limits_{r}\left(B(r)-r^{2}\right)$. In practice, $H$ and $K$ are unknown and should be estimated. They are defined as: $\displaystyle K=$ $\displaystyle f_{x}(t^{*})\left(\frac{1}{p(\textbf{X})}\mathbb{E}[Y_{1}^{2}|X=t^{*}]+\frac{1}{1-p(\textbf{X})}\mathbb{E}[Y_{0}^{2}|X=t^{*}]\right)$ $\displaystyle H=$ $\displaystyle f_{x}(t^{*})\left(\frac{\partial\mathbb{E}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right).$ and can be estimated by a plug-in method: consider a kernel density estimator $\hat{f}_{x}(x)$ for $f_{x}(x)$, and local linear estimators $\hat{\kappa}_{j}(x)$ and $\hat{\nu}_{j}^{\prime}(x)$ for $\kappa_{j}(x)=\mathbb{E}\left[Y_{j}^{2}|X=x,D=j\right]$ and $\nu_{j}^{\prime}(x)=\frac{\partial\nu_{j}(x)}{\partial x}=\frac{\partial\mathbb{E}\left[Y_{j}|X=x,D=j\right]}{\partial x}$. Define estimators $\hat{K}_{n}$ and $\hat{H}_{n}$ by: $\displaystyle\hat{K}_{n}=\hat{f}_{x}(\hat{t}^{e}_{n})\left(\frac{1}{p(\textbf{X})}\hat{\kappa}_{1}(\hat{t}^{e}_{n})+\frac{1}{1-p(\textbf{X})}\hat{\kappa}_{0}(\hat{t}^{e}_{n})\right)$ (18) and $\displaystyle\hat{H}_{n}=\hat{f}_{x}(\hat{t}^{e}_{n})(\hat{\nu}_{1}^{\prime}(\hat{t}^{e}_{n})-\hat{\nu}_{0}^{\prime}(\hat{t}^{e}_{n})).$ (19) Under the additional assumption that the second derivatives of $f_{x}$, $\nu_{1}$ and $\nu_{0}$ are continuous and bounded in a neighborhood of $t^{*}$, and with a proper choice of bandwidth sequences, $\hat{K}_{n}$ and $\hat{H}_{n}$ are consistent estimators for $K$ and $H$. Feasible confidence intervals with asymptotic coverage $1-\alpha$ can hence be constructed as $(\hat{t}^{e}_{n}-\hat{w}^{e}_{n},\hat{t}^{e}_{n}+\hat{w}^{e}_{n})$, where $\displaystyle\hat{w}^{e}_{n}=n^{-1/3}(2\sqrt{\hat{K}_{n}}/\hat{H}_{n})^{2/3}c_{\alpha/2}.$ (20) 
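The plug-in construction in equations (18)–(20) is straightforward to implement. The following sketch is only illustrative and rests on several assumptions not made in the paper: a completely randomized design with a known, constant propensity score, a Gaussian kernel for both the density estimate and the local linear weights, a user-supplied bandwidth, and the upper $0.975$ quantile of Chernoff's distribution taken as approximately $0.998$ (cf. Groeneboom and Wellner, 2001). All function names are hypothetical.

```python
import numpy as np

def ipw_scores(y, d, x, p):
    """Horvitz-Thompson scores D*Y/p - (1-D)*Y/(1-p); p is a known scalar propensity."""
    return d * y / p - (1 - d) * y / (1 - p)

def ewm_threshold(y, d, x, p):
    """EWM threshold: maximize the empirical welfare (1/n) sum_i g_i 1{x_i > t}."""
    g = ipw_scores(y, d, x, p)
    candidates = np.unique(x)
    welfare = np.array([np.mean(g * (x > t)) for t in candidates])
    return candidates[int(np.argmax(welfare))]

def kde_at(x, t, bw):
    """Gaussian kernel density estimate of f_x evaluated at t."""
    u = (x - t) / bw
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)) / bw

def local_linear(y, x, t, bw):
    """Local linear fit of y on x around t; returns (level, slope) at t."""
    u = (x - t) / bw
    w = np.exp(-0.5 * u ** 2)                     # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - t])
    Xw = X * w[:, None]
    level, slope = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return level, slope

def ewm_plugin_ci(y, d, x, p, bw, c_chernoff=0.998):
    """Feasible interval (20): t_hat +/- n^{-1/3} (2 sqrt(K_hat)/H_hat)^{2/3} c_{alpha/2}."""
    n = len(y)
    t_hat = ewm_threshold(y, d, x, p)
    f_hat = kde_at(x, t_hat, bw)
    # kappa_j(t_hat) = E[Y_j^2 | X = t_hat, D = j]: local linear level of y^2 in each arm
    k1, _ = local_linear(y[d == 1] ** 2, x[d == 1], t_hat, bw)
    k0, _ = local_linear(y[d == 0] ** 2, x[d == 0], t_hat, bw)
    # nu_j'(t_hat): local linear slope of y in each arm
    _, s1 = local_linear(y[d == 1], x[d == 1], t_hat, bw)
    _, s0 = local_linear(y[d == 0], x[d == 0], t_hat, bw)
    K_hat = f_hat * (k1 / p + k0 / (1 - p))       # equation (18)
    H_hat = f_hat * (s1 - s0)                     # equation (19)
    w_hat = n ** (-1 / 3) * (2 * np.sqrt(K_hat) / abs(H_hat)) ** (2 / 3) * c_chernoff
    return t_hat, (t_hat - w_hat, t_hat + w_hat)
```

In such a sketch the bandwidth drives the quality of $\hat{K}_{n}$ and $\hat{H}_{n}$; any of the usual data-driven bandwidth selectors could replace the fixed `bw` argument, and a different Chernoff critical value would be needed for coverage levels other than 95%.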
#### A.1.1 Bootstrap To avoid relying on tabulated values for $c_{\alpha/2}$ and on estimation of $K$, an alternative approach to inference for the EWM policy is the bootstrap. The nonparametric bootstrap is not valid for $\hat{t}^{e}_{n}$ and, more generally, for “cube root asymptotics” estimators (Abrevaya and Huang, 2005; Léger and MacGibbon, 2006). Nonetheless, Cattaneo et al. (2020) provide a consistent bootstrap procedure for estimators of this type. Consistency is achieved by altering the shape of the criterion function defining the estimator whose distribution must be approximated. The standard nonparametric bootstrap fails to reproduce the non-random part $Q_{0}(r)=-\frac{1}{2}Hr^{2}$ defined in the proof of Theorem 2, and hence the procedure in Cattaneo et al. (2020) estimates this part directly. Let $\\{Z_{i}^{b}\\}$ be a random sample from the empirical distribution $P_{n}$, and define the estimator $\hat{t}^{b}_{n}$ as: $\displaystyle\hat{t}^{b}_{n}=\mathop{\rm arg~{}max}\limits_{t}\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}^{b}Y_{i}^{b}}{p(\textbf{X}_{i}^{b})}-\frac{(1-D_{i}^{b})Y_{i}^{b}}{(1-p(\textbf{X}_{i}^{b}))}\right)\mathbf{1}\\{X_{i}^{b}>t\\}\right]-$ (21) $\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)\mathbf{1}\\{X_{i}>t\\}\right]-\frac{1}{2}(t-\hat{t}^{e}_{n})^{2}\hat{H}_{n}.$ (22) The bootstrap procedure proposed by Cattaneo et al. (2020) is the following: 1. Compute $\hat{t}^{e}_{n}$ as described in equation (9). 2. Using $\hat{t}^{e}_{n}$, compute $\hat{H}_{n}$ as described in equation (19). 3. Using $\hat{t}^{e}_{n}$, $\hat{H}_{n}$, and the bootstrap sample $\\{Z_{i}^{b}\\}$, compute $\hat{t}^{b}_{n}$ as described in equation (21). 4. Iterate step 3 to obtain the distribution of $n^{1/3}\left(\hat{t}^{b}_{n}-\hat{t}^{e}_{n}\right)$, and use it as an estimate for the distribution of $n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)$. To be valid, the procedure needs an additional assumption. ###### Assumption 6. (Bounded 4th moment) Potential outcome distributions are such that $\frac{1}{n^{2/3}}\mathbb{E}[Y_{1}^{4}|X=t^{*}]=o(1)$ and $\frac{1}{n^{2/3}}\mathbb{E}[Y_{0}^{4}|X=t^{*}]=o(1)$. Assumption 6 guarantees that the envelope $G_{R}$ is such that $PG_{R}^{4}=o(R^{-1})$. Theorem 5 proves that the distribution of $n^{1/3}\left(\hat{t}^{b}_{n}-\hat{t}^{e}_{n}\right)$ consistently estimates the distribution of $n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)$, and validates the bootstrap procedure. ###### Theorem 5. Consider estimators $\hat{t}^{e}_{n}$ defined in equation (9) and $\hat{t}^{b}_{n}$ defined in equation (21) and the estimand $t^{*}$ defined in equation (4). Under Assumptions 1, 2 with $s=2$, 3, and 6, as $\hat{H}_{n}\rightarrow^{p}H$ and $n\rightarrow\infty$, $\displaystyle n^{1/3}\left(\hat{t}^{b}_{n}-\hat{t}^{e}_{n}\right)\rightarrow^{d}(2\sqrt{K}/H)^{2/3}\mathop{\rm arg~{}max}\limits_{r}\left(B(r)-r^{2}\right)$ (23) where the limiting distribution is the same as in Theorem 2. The distribution of $n^{1/3}\left(\hat{t}^{b}_{n}-\hat{t}^{e}_{n}\right)$ can hence be used to construct asymptotically valid confidence intervals and hypothesis tests for $t^{*}$. 
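A sketch of steps 1–4 is given below, reusing the hypothetical `ipw_scores` and `ewm_threshold` helpers and the $\hat{H}_{n}$ estimate from the previous sketch. The number of bootstrap replications, the seed, and the use of the observed index values as the candidate-threshold grid are illustrative choices, not prescriptions of the paper.

```python
import numpy as np

def bootstrap_criterion(t, yb, db, xb, y, d, x, p, t_hat, H_hat):
    """Recentered, quadratically penalized criterion of equation (21) at threshold t."""
    gb = ipw_scores(yb, db, xb, p)
    g = ipw_scores(y, d, x, p)
    return (np.mean(gb * (xb > t)) - np.mean(g * (x > t))
            - 0.5 * (t - t_hat) ** 2 * H_hat)

def cube_root_bootstrap_ci(y, d, x, p, H_hat, alpha=0.05, B=999, seed=0):
    """Steps 1-4: resample Z_i, maximize (21), and use the quantiles of
    n^{1/3} (t_b - t_hat) to approximate those of n^{1/3} (t_hat - t*)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    t_hat = ewm_threshold(y, d, x, p)            # step 1 (H_hat from step 2 is an input)
    grid = np.unique(x)
    draws = np.empty(B)
    for b in range(B):                           # steps 3 and 4
        idx = rng.integers(0, n, size=n)         # nonparametric resample of {Z_i}
        yb, db, xb = y[idx], d[idx], x[idx]
        crit = [bootstrap_criterion(t, yb, db, xb, y, d, x, p, t_hat, H_hat)
                for t in grid]
        t_b = grid[int(np.argmax(crit))]
        draws[b] = n ** (1 / 3) * (t_b - t_hat)
    q_lo, q_hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    # invert n^{1/3}(t_hat - t*) ~ draws to obtain an interval for t*
    return t_hat - q_hi * n ** (-1 / 3), t_hat - q_lo * n ** (-1 / 3)
```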
### A.2 Smoothed Welfare Maximizer Policy Consider the asymptotic distribution for the SWM threshold policy derived in Theorem 4, for $n\sigma_{n}^{2h+1}\rightarrow\lambda<\infty$: $\displaystyle(n\sigma_{n})^{1/2}(\hat{t}^{s}_{n}-t^{*})\rightarrow^{d}\mathcal{N}(\lambda^{1/2}H^{-1}A,H^{-2}\alpha_{2}K).$ (24) $\lambda$, $\sigma_{n}$, and $\alpha_{2}$ are known. If also $K$, $H$, and $A$ were known, confidence intervals for the optimal policy $t^{*}$ with asymptotic coverage $1-\alpha$ could be constructed as $(\hat{t}^{s}_{n}-b_{n}-w^{s}_{n},\hat{t}^{s}_{n}-b_{n}+w^{s}_{n})$, where $\displaystyle b_{n}=(n\sigma_{n})^{-1/2}\lambda^{1/2}\frac{A}{H}$ (25) $\displaystyle w^{s}_{n}=(n\sigma_{n})^{-1/2}(\sqrt{\alpha_{2}K}/H)c_{\alpha/2}$ (26) and $c_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution. In practice, $K$, $H$, and $A$ are unknown and should be estimated. As usual with inference involving bandwidths and kernels, two approaches are available: estimate and remove the asymptotic bias, or undersmooth. For the first approach, consider the estimators in equation (18) for $\hat{K}_{n}$ and in equation (19) for $\hat{H}_{n}$, substituting $\hat{t}^{e}_{n}$ with $\hat{t}^{s}_{n}$. For $A$, recall that $\displaystyle A=$ $\displaystyle-\frac{1}{h!}\alpha_{1}\int_{y}\left(Y_{1}-Y_{0}\right)\varphi^{h}_{x}(y,t^{*})dy$ (27) $\displaystyle=$ $\displaystyle-\frac{1}{h!}\alpha_{1}\left[2f^{\prime}_{x}(t^{*})[\nu^{\prime}_{1}(t^{*})-\nu^{\prime}_{0}(t^{*})]+f_{x}(t^{*})[\nu^{\prime\prime}_{1}(t^{*})-\nu^{\prime\prime}_{0}(t^{*})]\right]$ (28) where $f_{x}(x)$ is the probability density function of $X$ and $\nu_{j}(x)=\mathbb{E}[Y_{j}|X=x,D=j]$. Consider kernel density estimators $\hat{f}_{x}(x)$ and $\hat{f}^{\prime}_{x}(x)$ for $f_{x}(x)$ and $f^{\prime}_{x}(x)$, and local linear estimators $\hat{\nu}_{j}^{\prime}(x)$ and $\hat{\nu}_{j}^{\prime\prime}(x)$ for $\nu_{j}^{\prime}(x)=\frac{\partial\nu_{j}(x)}{\partial x}$ and $\nu_{j}^{\prime\prime}(x)=\frac{\partial^{2}\nu_{j}(x)}{\partial^{2}x}$. Define the estimator $\hat{A}_{n}$ by: $\displaystyle\hat{A}_{n}=-\frac{1}{h!}\alpha_{1}\left[2\hat{f}^{\prime}_{x}(\hat{t}^{s}_{n})[\hat{\nu}^{\prime}_{1}(\hat{t}^{s}_{n})-\hat{\nu}^{\prime}_{0}(\hat{t}^{s}_{n})]+\hat{f}_{x}(\hat{t}^{s}_{n})[\hat{\nu}^{\prime\prime}_{1}(\hat{t}^{s}_{n})-\hat{\nu}^{\prime\prime}_{0}(\hat{t}^{s}_{n})]\right]$ (29) which consistently estimates $A$ under the additional assumption that the third derivatives of $f_{x}$, $\nu_{1}$ and $\nu_{0}$ are continuous and bounded in a neighborhood of $t^{*}$, and with a proper choice of bandwidth sequences. Confidence intervals with asymptotic coverage $1-\alpha$ can hence be constructed as $(\hat{t}^{s}_{n}-\hat{b}_{n}-\hat{w}^{s}_{n},\hat{t}^{s}_{n}-\hat{b}_{n}+\hat{w}^{s}_{n})$, where $\displaystyle\hat{b}_{n}=(n\sigma_{n})^{-1/2}\lambda^{1/2}\frac{\hat{A}_{n}}{\hat{H}_{n}}$ (30) $\displaystyle\hat{w}^{s}_{n}=(n\sigma_{n})^{-1/2}(\sqrt{\alpha_{2}\hat{K}_{n}}/\hat{H}_{n})c_{\alpha/2}$ (31) The second approach relies on undersmoothing, and chooses a suboptimally small $\sigma_{n}$ to eliminate the asymptotic bias, with no need to estimate $A$. Instead of a bandwidth sequence $\sigma_{n}=O_{p}(n^{-\frac{1}{2h+1}})$, it considers a sequence $\sigma_{n}=o_{p}(n^{-\frac{1}{2h+1}})$ such that $n\sigma_{n}^{2h+1}\rightarrow\lambda=0$, which ensures $b_{n}\to 0$. Confidence intervals with asymptotic coverage $1-\alpha$ can hence be constructed as $(\hat{t}^{s}_{n}-\hat{w}^{s}_{n},\hat{t}^{s}_{n}+\hat{w}^{s}_{n})$, with $\hat{w}^{s}_{n}$ defined as above. 
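Once $\hat{K}_{n}$, $\hat{H}_{n}$, $\hat{A}_{n}$ and the kernel constants have been computed, both constructions can be wrapped in a single routine. The sketch below only implements equations (30)–(31) and the undersmoothed variant; the function name and argument layout are hypothetical, and SciPy is assumed to be available for the normal quantile.

```python
import numpy as np
from scipy.stats import norm

def swm_ci(t_hat_s, n, sigma_n, K_hat, H_hat, A_hat, alpha_2, lam,
           alpha=0.05, undersmooth=False):
    """Confidence interval for t* based on the SWM policy.

    Bias-corrected version (equations 30-31): center at t_hat_s - b_hat with
    half-width (n sigma_n)^{-1/2} sqrt(alpha_2 K_hat)/|H_hat| * z_{alpha/2}.
    Undersmoothed version: sigma_n chosen so that n sigma_n^{2h+1} -> 0, in
    which case the bias term is dropped and A_hat is not needed."""
    z = norm.ppf(1 - alpha / 2)                                # upper alpha/2 normal quantile
    rate = (n * sigma_n) ** (-0.5)
    w_hat = rate * np.sqrt(alpha_2 * K_hat) / abs(H_hat)       # equation (31)
    b_hat = 0.0 if undersmooth else rate * np.sqrt(lam) * A_hat / H_hat   # equation (30)
    center = t_hat_s - b_hat
    return center - z * w_hat, center + z * w_hat
```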
## Appendix B Local Asymptotics The interplay between the population distribution $P$ and the sample size can be studied in a local asymptotic framework, by considering a sequence of population distributions $\\{P_{n}\\}$ that varies with $n$. I focus on the following two sequences. ###### Definition 1. (Sequence $\\{P^{1}_{n}\\}$) The sequence of distributions $\\{P^{1}_{n}\\}$ is such that $\frac{\sqrt{K_{n}}}{H_{n}}=n^{\gamma}$, with $\gamma\in[0,\frac{h}{2h+1})$, and $\frac{A_{n}}{H_{n}}=1$. ###### Definition 2. (Sequence $\\{P^{2}_{n}\\}$) The sequence of distributions $\\{P^{2}_{n}\\}$ is such that $\frac{\sqrt{K_{n}}}{H_{n}}=1$, and $\frac{A_{n}}{H_{n}}=n^{\gamma}$, with $\gamma\in[0,\frac{h}{2h+1})$. Sequence $\\{P^{1}_{n}\\}$ mimics a scenario where $\sqrt{K}$ is large compared to $H$. This occurs when the variance of the conditional treatment effect, $\operatorname{\mathrm{Var}}(Y_{1}-Y_{0}|X=t^{*})$, is large compared to the derivative of the conditional ATE, $\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}$, or compared to the density of the index, $f_{x}(t^{*})$. In these situations, the population welfare remains relatively stable for thresholds in a neighborhood of the optimal one. The limit case with $\gamma=\frac{1}{2}$ coincides with the local asymptotic framework proposed by Hirano and Porter (2009) and studied by Athey and Wager (2021). Sequence $\\{P^{2}_{n}\\}$, instead, mimics a situation where $A$ is large compared to $H$. When $h=2$, this happens when the second derivative of the conditional ATE, $\frac{\partial^{2}\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial^{2}X}$, is large compared to the first derivative, $\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}$. Since the asymptotic bias of $\hat{t}^{s}_{n}$ is $\lambda\frac{A_{n}}{H_{n}}$, sequence $\\{P^{2}_{n}\\}$ represents situations where this bias is large relative to the sample size. Let $r_{n}^{e}$ and $r_{n}^{s}$ denote the rates of convergence of $\mathcal{R}(\hat{t}^{e}_{n})$ and $\mathcal{R}(\hat{t}^{s}_{n})$, i.e. let $r_{n}^{e}$ and $r_{n}^{s}$ be sequences such that $\mathcal{R}(\hat{t}^{e}_{n})=O_{p}(r_{n}^{e})$ and $\mathcal{R}(\hat{t}^{s}_{n})=O_{p}(r_{n}^{s})$. The following theorem establishes a relationship between $r_{n}^{e}$ and $r_{n}^{s}$ under $\\{P^{1}_{n}\\}$ and $\\{P^{2}_{n}\\}$. ###### Theorem 6. Let Assumptions 1, 2 with $s=h+1$ for some $h\geq 2$, 3, and 5 hold. Consider sequences $\\{P^{1}_{n}\\}$ and $\\{P^{2}_{n}\\}$ of data generating processes. The limit of the ratio $\frac{r_{n}^{e}}{r_{n}^{s}}$ depends on $\gamma$ as follows: * • $\frac{r_{n}^{e}}{r_{n}^{s}}\to\infty$ if $\gamma\in(\bar{\gamma},\frac{h}{2h+1})$ * • $\frac{r_{n}^{e}}{r_{n}^{s}}=O_{p}(1)$ if $\gamma=\bar{\gamma}$ * • $\frac{r_{n}^{e}}{r_{n}^{s}}\to 0$ if $\gamma\in[0,\bar{\gamma})$. where $\bar{\gamma}=\frac{h-1}{2h+1}$ under $\\{P^{1}_{n}\\}$, and $\bar{\gamma}=\frac{1}{3}\frac{h-1}{2h+1}$ under $\\{P^{2}_{n}\\}$. Theorem 6 shows how, under sequences $\\{P^{1}_{n}\\}$ and $\\{P^{2}_{n}\\}$, the EWM policy guarantees a regret convergence rate faster than the SWM policy whenever the parameter $\gamma$ exceeds a certain value $\bar{\gamma}$. For sequence $\\{P^{1}_{n}\\}$, the result depends on the fact that the term $\frac{\sqrt{K_{n}}}{H_{n}}$ enters the asymptotic distributions of $\hat{t}^{e}_{n}$ and $\hat{t}^{s}_{n}$ with different powers ($\frac{2}{3}$ and 1, respectively): if $\frac{\sqrt{K_{n}}}{H_{n}}$ is large enough relative to the sample size, the exponent smaller than one makes $r^{e}_{n}$ faster (and hence, in the limit, $\hat{t}^{e}_{n}$ preferable). For sequence $\\{P^{2}_{n}\\}$, the result is due to the asymptotic bias of the SWM policy, proportional to $\frac{A_{n}}{H_{n}}$. 
Since the asymptotic distribution of the regret of the EWM policy remains constant under $\\{P^{2}_{n}\\}$, when the bias of $\hat{t}^{s}_{n}$ is large enough relative to the sample size, $\hat{t}^{e}_{n}$ becomes preferable. The value of $\bar{\gamma}$ is increasing in $h$, the order of the kernel $k$: a smoother objective function $\mathbb{E}[(Y_{1}-Y_{0})\mathbf{1}\\{X>t\\}]$ amplifies the benefit of using the SWM policy, expanding the region of values of $\gamma$ where the SWM policy has a faster rate than the EWM policy. For instance, with $h=2$, $\bar{\gamma}=\frac{1}{5}$ under $\\{P^{1}_{n}\\}$ and $\bar{\gamma}=\frac{1}{15}$ under $\\{P^{2}_{n}\\}$. Considering the sequence $\\{P^{1}_{n}\\}$ with $\gamma=\frac{1}{2}$, Athey and Wager (2021) show that their AIPW policy achieves the uniformly fastest asymptotic convergence rate. The results in Theorem 6 are different: rather than focusing on a single specific sequence, their goal is to shed light on how the asymptotic behavior of the EWM and the SWM policies depends on $P$ and the sample size. ## Appendix C Monte Carlo Simulation I report the analogues of Table 2 and Figures 1 and 2 for the median regret. The comments made for the mean regret also extend to the median regret. Table 4: Finite sample and asymptotic median regret with the EWM and SWM policies across different models. Finite sample (empirical) values are computed through 5,000 Monte Carlo simulations. The last column reports the ratio between finite sample median regrets with the EWM and SWM policies.

| Model | n | EWM empirical | EWM asymptotic | SWM empirical ($\sigma_{n}^{*}$) | SWM empirical ($\hat{\sigma}_{n}^{*}$) | SWM asymptotic | Ratio |
|---|---|---|---|---|---|---|---|
| 1 | 500 | 39.809 | 45.347 | 15.113 | 24.169 | 18.459 | 1.647 |
| 1 | 1,000 | 27.547 | 28.567 | 8.692 | 14.145 | 10.602 | 1.948 |
| 1 | 2,000 | 18.816 | 17.996 | 5.073 | 8.507 | 6.089 | 2.212 |
| 1 | 3,000 | 14.043 | 13.733 | 3.471 | 5.491 | 4.402 | 2.558 |
| 2 | 500 | 147.029 | 188.651 | 116.784 | 159.876 | 204.255 | 0.920 |
| 2 | 1,000 | 110.372 | 118.843 | 82.473 | 118.695 | 117.314 | 0.930 |
| 2 | 2,000 | 87.325 | 74.866 | 55.731 | 79.436 | 67.379 | 1.099 |
| 2 | 3,000 | 72.122 | 57.134 | 46.007 | 68.656 | 48.714 | 1.050 |

Figure 3: Figure reports asymptotic and finite sample median regrets for the EWM and SWM policies, illustrating the values reported in Table 4. Figure 4: Figure reports asymptotic and finite sample median regrets for the EWM and SWM policies, illustrating the values reported in Table 4. 
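For readers who want to reproduce the flavor of these comparisons, the sketch below runs a small Monte Carlo on a deliberately simple hypothetical design (uniform index, linear conditional ATE, normal-CDF smoothing at the $h=2$ bandwidth rate). It is not the data generating process behind Models 1 and 2 of Table 4, and the grid, bandwidth constant, and number of replications are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def swm_threshold(y, d, x, p, sigma_n, grid):
    """SWM threshold: replace 1{x > t} with the smooth function k((x - t)/sigma_n)."""
    g = d * y / p - (1 - d) * y / (1 - p)
    crit = [np.mean(g * norm.cdf((x - t) / sigma_n)) for t in grid]
    return grid[int(np.argmax(crit))]

def simulate_once(rng, n, p=0.5):
    # Hypothetical DGP: X ~ U(-1, 1), E[Y1 - Y0 | X] = X, so t* = 0 and
    # W(t) = E[(Y1 - Y0) 1{X > t}] = (1 - t^2) / 4.
    x = rng.uniform(-1.0, 1.0, n)
    d = rng.binomial(1, p, n)
    y0 = rng.normal(0.0, 1.0, n)
    y1 = x + rng.normal(0.0, 1.0, n)
    y = d * y1 + (1 - d) * y0
    welfare = lambda t: (1.0 - t ** 2) / 4.0
    grid = np.linspace(-0.9, 0.9, 361)
    g = d * y / p - (1 - d) * y / (1 - p)
    t_ewm = grid[int(np.argmax([np.mean(g * (x > t)) for t in grid]))]
    t_swm = swm_threshold(y, d, x, p, sigma_n=n ** (-1 / 5), grid=grid)
    # regrets W(t*) - W(t_hat) under the known DGP
    return welfare(0.0) - welfare(t_ewm), welfare(0.0) - welfare(t_swm)

def median_regrets(n, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    out = np.array([simulate_once(rng, n) for _ in range(reps)])
    return np.median(out, axis=0)          # (median EWM regret, median SWM regret)
```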
## Appendix D Proofs ### Theorem 1 ###### Theorem 1. Consider the EWM policy $\hat{t}^{e}_{n}$ defined in equation (9) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1 and 2 with $s=0$, $\displaystyle\hat{t}^{e}_{n}\rightarrow^{a.s.}t^{*}$ i.e. $\hat{t}^{e}_{n}$ is a consistent estimator for $t^{*}$. ###### Proof. Estimator $\hat{t}^{e}_{n}$ in (9) can be written as $\displaystyle\hat{t}^{e}_{n}=$ $\displaystyle\mathop{\rm arg~{}max}\limits_{t}\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)(\mathbf{1}\\{X_{i}>t\\}-\mathbf{1}\\{X_{i}>t^{*}\\})\right]$ (32) $\displaystyle=$ $\displaystyle\mathop{\rm arg~{}max}\limits_{t}\frac{1}{n}\sum_{i=1}^{n}m\left(Z_{i},t\right)$ (33) where $\displaystyle m\left(Z,t\right)=\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\left(\mathbf{1}\\{X>t\\}-\mathbf{1}\\{X>t^{*}\\}\right)$ (34) and $\\{Z_{i}\\}$ is a sample of $n$ observations from distribution $P$. Define $P_{n}m\left(\cdot,t\right)=\frac{1}{n}\sum_{i=1}^{n}m\left(Z_{i},t\right)$ and $Pm\left(Z,t\right)=\mathbb{E}_{P}[m\left(Z,t\right)]$. With this notation, equations (4) and (9) can be written as $\displaystyle t^{*}=\mathop{\rm arg~{}max}\limits_{t}Pm\left(Z,t\right)$ (35) $\displaystyle\hat{t}^{e}_{n}=\mathop{\rm arg~{}max}\limits_{t}P_{n}m\left(\cdot,t\right).$ (36) The threshold policies I am considering can be seen as tree partitions of depth 1. Tree partitions of finite depth are a VC class (Leboeuf et al., 2020), and hence $m\left(\cdot,t\right)$ is a manageable class of functions. Consider the envelope function $F=2\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|$. Note that $\mathbb{E}\left[\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|^{2}\right]=\frac{1}{p(\textbf{X})}\mathbb{E}[Y_{1}^{2}]+\frac{1}{1-p(\textbf{X})}\mathbb{E}[Y_{0}^{2}]$: Assumption 2.2 guarantees the existence of $\mathbb{E}\left[\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|^{2}\right]$. It follows from Corollary 3.2 in Kim and Pollard (1990) that $\displaystyle\sup_{t}\left|P_{n}m(\cdot,t)-Pm(Z,t)\right|\rightarrow^{a.s.}0.$ (37) Under Assumptions 2.1 and 2.3, $Pm(Z,t)$ is continuous in $t$, and $t^{*}$ is the unique maximizer. Hence, $\displaystyle\sup_{t}\left|P_{n}m(\cdot,t)-Pm(Z,t)\right|+Pm(Z,t^{*})\geq$ (38) $\displaystyle\left|P_{n}m(\cdot,\hat{t}^{e}_{n})-Pm(Z,\hat{t}^{e}_{n})\right|+Pm(Z,t^{*})\geq$ (39) $\displaystyle\left|P_{n}m(\cdot,\hat{t}^{e}_{n})-Pm(Z,\hat{t}^{e}_{n})\right|+Pm(Z,\hat{t}^{e}_{n})\geq$ (40) $\displaystyle P_{n}m(\cdot,\hat{t}^{e}_{n})\geq P_{n}m(\cdot,t^{*})\rightarrow Pm(Z,t^{*})$ (41) where the first inequality follows from the definition of the supremum, the second is due to the fact that $t^{*}$ is the maximizer of $Pm(Z,t)$, the third comes from the triangle inequality, the fourth from $\hat{t}^{e}_{n}$ being the maximizer of $P_{n}m(\cdot,t)$, and the last limit comes from the LLN. This proves that $P_{n}m(\cdot,\hat{t}^{e}_{n})\rightarrow^{a.s.}Pm(Z,t^{*})$, and hence $Pm(Z,\hat{t}^{e}_{n})\rightarrow^{a.s.}Pm(Z,t^{*})$. Since $t^{*}$ is the unique maximizer of $Pm(Z,t)$ and $Pm(Z,t)$ is continuous, $\hat{t}^{e}_{n}\rightarrow^{a.s.}t^{*}$. This means that $\hat{t}^{e}_{n}$ is a consistent estimator for $t^{*}$. ∎ ### Theorem 2 ###### Theorem 2. Consider the EWM policy $\hat{t}^{e}_{n}$ defined in equation (9) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1, 2 with $s=2$ and 3, as $n\rightarrow\infty$, $\displaystyle n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)\rightarrow^{d}(2\sqrt{K}/H)^{2/3}\mathop{\rm arg~{}max}\limits_{r}\left(B(r)-r^{2}\right)$ (42) where $B(r)$ is the two-sided Brownian motion process, and $K$ and $H$ are $\displaystyle K=$ $\displaystyle f_{x}(t^{*})\left(\frac{1}{p(\textbf{X})}\mathbb{E}[Y_{1}^{2}|X=t^{*}]+\frac{1}{1-p(\textbf{X})}\mathbb{E}[Y_{0}^{2}|X=t^{*}]\right)$ $\displaystyle H=$ $\displaystyle f_{x}(t^{*})\left(\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right).$ ###### Proof. The proof shows that the conditions for the main theorem in Kim and Pollard (1990) hold; hence, their result is valid for $\hat{t}^{e}_{n}$. For completeness, I report the theorem. ###### Theorem Kim and Pollard. Consider estimators defined by maximization of processes $\displaystyle P_{n}g(\cdot,\theta)=\frac{1}{n}\sum_{i=1}^{n}g\left(\xi_{i},\theta\right)$ (43) where $\left\\{\xi_{i}\right\\}_{i}$ is a sequence of i.i.d. 
observations from a distribution $P$ and $\\{g(\cdot,\theta):\theta\in\Theta\\}$ is a class of functions indexed by a subset $\Theta$ in $\mathbb{R}^{k}$. The envelope $G_{R}(\cdot)$ is defined as the supremum of $g(\cdot,\theta)$ over the class $\displaystyle\mathcal{G}_{R}=\left\\{|g(\cdot,\theta)|:\left|\theta-\theta_{0}\right|\leq R\right\\},\quad R>0.$ (44) Let $\left\\{\theta_{n}\right\\}$ be a sequence of estimators for which 1. $P_{n}g\left(\cdot,\theta_{n}\right)\geq\sup_{\theta\in\Theta}P_{n}g(\cdot,\theta)-o_{P}\left(n^{-2/3}\right)$. 2. $\theta_{n}$ converges in probability to the unique $\theta_{0}$ that maximizes $Pg(\cdot,\theta)$. 3. $\theta_{0}$ is an interior point of $\Theta$. Let the functions be standardized so that $g\left(\cdot,\theta_{0}\right)\equiv 0$. If the classes $\mathcal{G}_{R}$ for $R$ near 0 are uniformly manageable for the envelopes $G_{R}$ and satisfy: 4. $Pg(\cdot,\theta)$ is twice differentiable with second derivative matrix $-H$ at $\theta_{0}$. 5. $K(s,r)=\lim_{\alpha\rightarrow\infty}\alpha Pg\left(\cdot,\theta_{0}+r/\alpha\right)g\left(\cdot,\theta_{0}+s/\alpha\right)$ exists for each $s,r$ in $\mathbb{R}^{k}$ and $\lim_{\alpha\rightarrow\infty}\alpha Pg\left(\cdot,\theta_{0}+r/\alpha\right)^{2}\left\\{\left|g\left(\cdot,\theta_{0}+r/\alpha\right)\right|>\varepsilon\alpha\right\\}=0$ for each $\varepsilon>0$ and $r$ in $\mathbb{R}^{k}$. 6. $PG_{R}^{2}=O(R)$ as $R\rightarrow 0$ and for each $\varepsilon>0$ there is a constant C such that $PG_{R}^{2}\mathbf{1}\\{G_{R}>C\\}\leq\varepsilon R$ for $R$ near 0. 7. $P\left|g\left(\cdot,\theta_{1}\right)-g\left(\cdot,\theta_{2}\right)\right|=O\left(\left|\theta_{1}-\theta_{2}\right|\right)$ near $\theta_{0}$. Then, the process $n^{2/3}P_{n}g\left(\cdot,\theta_{0}+rn^{-1/3}\right)$ converges in distribution to a Gaussian process $Q(r)$ with continuous sample paths, expected value $-\frac{1}{2}r^{\prime}Hr$ and covariance kernel $K$. If $H$ is positive definite and if $Q$ has nondegenerate increments, then $n^{1/3}\left(\theta_{n}-\theta_{0}\right)$ converges in distribution to the (almost surely unique) random vector that maximizes $Q$. I apply Theorem Kim and Pollard by taking $\xi_{i}=Z_{i},\theta=t,\theta_{n}=\hat{t}^{e}_{n},\theta_{0}=t^{*}$, $g(\cdot,\theta)=m(\cdot,t)$, where $m(\cdot,t)$ is standardized: $\displaystyle m\left(Z_{i},t\right)=\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)\left(\mathbf{1}\\{X_{i}>t\\}-\mathbf{1}\\{X_{i}>t^{*}\\}\right).$ (45) First, I will verify that conditions 1-7 apply to my setting: 1. $P_{n}m(\cdot,t)$ takes only finitely many ($n+1$) values; hence, condition 1 is satisfied with equality. 2. In Theorem 1, I proved that $\hat{t}^{e}_{n}$ is a consistent estimator for $t^{*}$. 3. $t^{*}$ is an interior point of $\mathcal{T}$ by Assumption 2.1. I need to prove that the classes $\mathcal{G}_{R}$ for $R$ near 0 are uniformly manageable for the envelopes $G_{R}$. 
The envelope function $G_{R}(\cdot)$ is defined as $\displaystyle G_{R}(z)$ $\displaystyle=\sup\left\\{m(z,t):\left|t-t^{*}\right|\leq R\right\\}$ (46) $\displaystyle=\sup_{\left|t-t^{*}\right|\leq R}\left[\left(\frac{dy}{p(\textbf{x})}-\frac{(1-d)y}{(1-p(\textbf{x}))}\right)\left(\mathbf{1}\\{x>t\\}-\mathbf{1}\\{x>t^{*}\\}\right)\right]$ (47) $\displaystyle=\left|\frac{dy}{p(\textbf{x})}-\frac{(1-d)y}{(1-p(\textbf{x}))}\right|\mathbf{1}\\{\left|x-t^{*}\right|<R\\}$ (48) and I have: $\displaystyle PG_{R}^{2}$ $\displaystyle=\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\\{\left|X-t^{*}\right|<R\\}\right]$ (49) $\displaystyle=\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Bigg{|}X\in(t^{*}-R,t^{*}+R)\right]\Pr\left(X\in(t^{*}-R,t^{*}+R)\right)$ (50) $\displaystyle=2Rf_{x}(t^{*})\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Bigg{|}X=t^{*}\right]+o(R)=O(R)$ (51) where the last line follows from Assumptions 2.2 and 2.3. The envelope function is uniformly square-integrable for $R$ near 0, and therefore, the classes $\mathcal{G}_{R}$ are uniformly manageable. 4. Define $h(t)=Pm(Z,t)$ and consider derivatives: $\displaystyle h(t)=$ $\displaystyle\mathbb{E}_{P}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)(\mathbf{1}\\{X>t\\}-\mathbf{1}\\{X>t^{*}\\})\right]=$ (52) $\displaystyle\mathbb{E}_{P}\left[\left(Y_{1}-Y_{0}\right)(\mathbf{1}\\{X>t\\}-\mathbf{1}\\{X>t^{*}\\})\right]$ (53) $\displaystyle h^{\prime}(t)=$ $\displaystyle- f_{x}(t)\mathbb{E}_{P}\left[Y_{1}-Y_{0}\Big{|}X=t\right]$ (54) $\displaystyle h^{\prime\prime}(t)=$ $\displaystyle-f^{\prime}_{x}(t)\mathbb{E}_{P}\left[Y_{1}-Y_{0}\Big{|}X=t\right]-f_{x}(t)\left(\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t\right]}{\partial X}\right).$ (55) Assumption 2.3 with $s=2$ guarantees the existence of $h^{\prime}$ and $h^{\prime\prime}$. Since $\mathbb{E}_{P}\left[Y_{1}-Y_{0}\Big{|}X=t^{*}\right]=0$, $H$ is given by $\displaystyle H=-h^{\prime\prime}(t^{*})=f_{x}(t^{*})\left(\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right).$ (56) 5. This condition is divided into two parts. First, I prove the existence of $K(s,r)=\lim_{\alpha\rightarrow\infty}\alpha Pm\left(\cdot,t^{*}+r/\alpha\right)m\left(\cdot,t^{*}+s/\alpha\right)$ for each $s,r$ in $\mathbb{R}$. Covariance $K$ is: $\displaystyle Pm$ $\displaystyle\left(\cdot,t^{*}+r/\alpha\right)m\left(\cdot,t^{*}+s/\alpha\right)=\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\right.$ (57) $\displaystyle(\mathbf{1}\\{X>t^{*}+r/\alpha\\}-\mathbf{1}\\{X>t^{*}\\})(\mathbf{1}\\{X>t^{*}+s/\alpha\\}-\mathbf{1}\\{X>t^{*}\\})\Big{]}.$ (58) If $rs<0$, covariance and $K(s,r)$ are 0. 
If $rs>0$, and suppose $r>0$: $\displaystyle Pm\left(\cdot,t^{*}+r/\alpha\right)m\left(\cdot,t^{*}+s/\alpha\right)=$ (59) $\displaystyle\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Big{|}X\in(t^{*},t^{*}+\min\\{r,s\\}/\alpha)\right]$ (60) and hence $\displaystyle K(s,r)$ $\displaystyle=\lim_{\alpha\rightarrow\infty}\alpha\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Big{|}X\in(t^{*},t^{*}+\min\\{r,s\\}/\alpha)\right]$ (61) $\displaystyle=\min\\{r,s\\}f_{x}(t^{*})\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Big{|}X=t^{*}\right].$ (62) where the equality is due to continuity of $f_{x}$ (Assumption 2.3 with $s=2$). Boundedness of the quantity follows from Assumptions 2.2 and 2.3. Now, I will prove that $\lim_{\alpha\rightarrow\infty}\alpha Pm\left(\cdot,t^{*}+r/\alpha\right)^{2}\mathbf{1}\left\\{\left|m\left(\cdot,t^{*}+r/\alpha\right)\right|>\varepsilon\alpha\right\\}=0$ for each $\varepsilon>0$ and $r$ in $\mathbb{R}$. I have: $\displaystyle\alpha Pm\left(\cdot,t^{*}+r/\alpha\right)^{2}\mathbf{1}\left\\{\left|m\left(\cdot,t^{*}+r/\alpha\right)\right|>\varepsilon\alpha\right\\}=$ (63) $\displaystyle\alpha\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\right.$ (64) $\displaystyle(\mathbf{1}\\{X>t^{*}+r/\alpha\\}-\mathbf{1}\\{X>t^{*}\\})^{2}\mathbf{1}\left\\{\left|m\left(\cdot,t^{*}+r/\alpha\right)\right|>\varepsilon\alpha\right\\}\Big{]}=$ (65) $\displaystyle\alpha\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\right.$ (66) $\displaystyle(\mathbf{1}\\{X>t^{*}+r/\alpha\\}-\mathbf{1}\\{X>t^{*}\\})^{2}\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>\varepsilon\alpha\right\\}\Big{]}\leq$ (67) $\displaystyle\alpha\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>\varepsilon\alpha\right\\}\right].$ (68) Some algebra gives $\displaystyle\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>\epsilon\right\\}\right]=$ (69) $\displaystyle\mathbb{E}\left[\left(\frac{DY_{1}^{2}}{p(\textbf{X})^{2}}+\frac{(1-D)Y_{0}^{2}}{(1-p(\textbf{X}))^{2}}\right)\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>\epsilon\right\\}\right]=$ (70) $\displaystyle\mathbb{E}\left[\frac{Y_{1}^{2}}{p(\textbf{X})}\mathbf{1}\left\\{\left|\frac{Y_{1}}{p(\textbf{X})}\right|>\epsilon\right\\}\right]+\mathbb{E}\left[\frac{Y_{0}^{2}}{(1-p(\textbf{X}))}\mathbf{1}\left\\{\left|\frac{Y_{0}}{(1-p(\textbf{X}))}\right|>\epsilon\right\\}\right]=$ (71) $\displaystyle\frac{1}{p(\textbf{X})}\mathbb{E}\left[Y_{1}^{2}\mathbf{1}\left\\{\left|Y_{1}\right|>\epsilon_{1}\right\\}\right]+\frac{1}{(1-p(\textbf{X}))}\mathbb{E}\left[Y_{0}^{2}\mathbf{1}\left\\{\left|Y_{0}\right|>\epsilon_{0}\right\\}\right]$ (72) and hence the condition is satisfied if $\displaystyle\lim_{\alpha\rightarrow\infty}\alpha\mathbb{E}\left[Y_{1}^{2}\mathbf{1}\left\\{\left|Y_{1}\right|>\epsilon\alpha\right\\}\right]=0$ (73) $\displaystyle\lim_{\alpha\rightarrow\infty}\alpha\mathbb{E}\left[Y_{0}^{2}\mathbf{1}\left\\{\left|Y_{0}\right|>\epsilon\alpha\right\\}\right]=0.$ (74) Consider the limit for $Y_{1}$: 
$\displaystyle\lim_{\alpha\rightarrow\infty}\alpha\mathbb{E}\left[Y_{1}^{2}\mathbf{1}\left\\{\left|Y_{1}\right|>\epsilon\alpha\right\\}\right]=\lim_{\alpha\rightarrow\infty}\frac{\int_{\epsilon\alpha}Y_{1}^{2}\varphi_{1}(y_{1})dy_{1}}{\alpha^{-1}}=\lim_{\alpha\rightarrow\infty}\frac{\epsilon^{3}\alpha^{2}\varphi_{1}(\epsilon\alpha)}{\alpha^{-2}}=$ (75) $\displaystyle\lim_{\alpha\rightarrow\infty}\epsilon^{3}\alpha^{4}\varphi_{1}(\epsilon\alpha)=\lim_{\alpha\rightarrow\infty}\epsilon^{3}\alpha^{4}|\epsilon\alpha|^{-(4+\delta)}o(1)=0$ (76) where the second equality follows from L'Hôpital's rule and the second to last equality follows from Assumption 3. 6. I showed that $PG_{R}^{2}=O(R)$ as $R\rightarrow 0$ in equation (49). I need to prove that for each $\varepsilon>0$ there is a constant C such that $PG_{R}^{2}\mathbf{1}\\{G_{R}>C\\}\leq\varepsilon R$ for $R$ near 0. For any $\varepsilon>0$ and $C>0$: $\displaystyle PG_{R}^{2}$ $\displaystyle\mathbf{1}\\{G_{R}>C\\}=\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\\{\left|X-t^{*}\right|<R\\}\mathbf{1}\\{G_{R}>C\\}\right]\leq$ (77) $\displaystyle\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\\{\left|X-t^{*}\right|<R\\}\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>C\right\\}\right]\leq$ (78) $\displaystyle\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\mathbf{1}\left\\{\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|>C\right\\}\right]\rightarrow 0$ (79) where the last limit is taken for $C\rightarrow\infty$, and follows from Assumption 3. 7. I need to show that $P\left|m\left(\cdot,t_{1}\right)-m\left(\cdot,t_{2}\right)\right|=O\left(\left|t_{1}-t_{2}\right|\right)$ near $t^{*}$. Consider $t_{2}>t_{1}$: $\displaystyle P\left|m\left(\cdot,t_{1}\right)-m\left(\cdot,t_{2}\right)\right|=\mathbb{E}\left[\left|\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)(\mathbf{1}\\{X>t_{1}\\}-\mathbf{1}\\{X>t_{2}\\})\right|\right]\leq$ (80) $\displaystyle M_{x}\mathbb{E}\left[\left|\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\right|\Big{|}X\in(t_{1},t_{2})\right]\leq M_{x}M_{y}|t_{2}-t_{1}|$ (81) where $M_{x}=\max_{x\in(t_{1},t_{2})}f_{x}(x)$ and $M_{y}=\max_{x\in(t_{1},t_{2})}\mathbb{E}\left[\left|\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\right|\Big{|}X\right]$. $M_{x}<\infty$ and $M_{y}<\infty$ because of Assumption 2.3. Conditions 1-7 of Theorem Kim and Pollard (Kim and Pollard, 1990) are hence satisfied. It follows that, for $n\rightarrow\infty$, $\displaystyle n^{1/3}\left(\hat{t}^{e}_{n}-t^{*}\right)\rightarrow^{d}\mathop{\rm arg~{}max}\limits_{r}Q(r)$ (82) where $Q(r)=Q_{1}(r)+Q_{0}(r)$, and $Q_{1}$ is a non-degenerate zero-mean Gaussian process with covariance $K$, while $Q_{0}(r)$ is non-random and $Q_{0}(r)=-\frac{1}{2}r^{2}H$. The limiting distribution $\mathop{\rm arg~{}max}\limits_{r}Q(r)$ is of the Chernoff (1964) type. 
It can be shown (Banerjee and Wellner, 2001) that $\displaystyle\mathop{\rm arg~{}max}\limits_{r}Q(r)=^{d}(2\sqrt{K}/H)^{2/3}\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}$ (83) where $B(r)$ is the two-sided Brownian motion process, $K$ is: $\displaystyle K=$ $\displaystyle f_{x}(t^{*})\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}\Big{|}X=t^{*}\right]$ (84) $\displaystyle=$ $\displaystyle f_{x}(t^{*})\left(\frac{1}{p(\textbf{X})}\mathbb{E}[Y_{1}^{2}|X=t^{*}]+\frac{1}{1-p(\textbf{X})}\mathbb{E}[Y_{0}^{2}|X=t^{*}]\right)$ (85) and $H$ is: $\displaystyle H=f_{x}(t^{*})\left(\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right).$ (86) This completes the proof of the theorem. ∎ ### Corollary 2.1 ###### Corollary 2.1. The asymptotic distribution of regret $\mathcal{R}(\hat{t}^{e}_{n})$ is: $\displaystyle n^{\frac{2}{3}}\mathcal{R}(\hat{t}^{e}_{n})\rightarrow^{d}\left(\frac{2K^{2}}{H}\right)^{\frac{1}{3}}\left(\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}\right)^{2}.$ The expected value of the asymptotic distribution is $K^{\frac{2}{3}}H^{-\frac{1}{3}}C^{e}$, where $C^{e}=\sqrt[3]{2}\mathbb{E}\left[\left(\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}\right)^{2}\right]$ is a constant not dependent on $P$. ###### Proof. The result in equation (6) for $\hat{t}^{e}_{n}$ implies $\displaystyle n^{\frac{2}{3}}\mathcal{R}(\hat{t}^{e}_{n})=\frac{1}{2}W^{\prime\prime}(\tilde{t})\left(n^{\frac{1}{3}}\left(\hat{t}^{e}_{n}-t^{*}\right)\right)^{2},$ where $|\tilde{t}-t^{*}|\leq|\hat{t}_{n}-t^{*}|$. By the continuous mapping theorem $\displaystyle W^{\prime\prime}(\tilde{t})\rightarrow^{p}W^{\prime\prime}(t^{*})=H$ and hence by Slutsky's theorem $\displaystyle n^{\frac{2}{3}}\mathcal{R}(\hat{t}^{e}_{n})\rightarrow^{d}\left(\frac{2K^{2}}{H}\right)^{\frac{1}{3}}\left(\mathop{\rm arg~{}max}\limits_{r}B(r)-r^{2}\right)^{2}.$ ∎ ### Theorem 3 ###### Theorem 3. Consider the SWM policy $\hat{t}^{s}_{n}$ defined in equation (11) and the optimal policy $t^{*}$ defined in equation (4). Under Assumptions 1, 2 with $s=0$ and 4, $\displaystyle\hat{t}^{s}_{n}\rightarrow^{a.s.}t^{*}$ i.e. $\hat{t}^{s}_{n}$ is a consistent estimator for $t^{*}$. ###### Proof. To prove the result, I show that the conditions for Theorem 4.1.1 in Amemiya (1985) hold, and hence $\hat{t}^{s}_{n}$ is consistent for $t^{*}$. First, define function $m^{s}(Z,t)$: $\displaystyle m^{s}\left(Z,t\right)=\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\left(k\left(\frac{X-t}{\sigma_{n}}\right)-k\left(\frac{X-t^{*}}{\sigma_{n}}\right)\right)$ and recall the definitions of $m(Z,t)$, $P_{n}$, and $P$ introduced in the proof of Theorem 1: $\displaystyle m\left(Z,t\right)$ $\displaystyle=\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\left(\mathbf{1}\\{X>t\\}-\mathbf{1}\\{X>t^{*}\\}\right)$ (87) $\displaystyle P_{n}m\left(\cdot,t\right)$ $\displaystyle=\frac{1}{n}\sum_{i=1}^{n}m\left(Z_{i},t\right)$ (88) $\displaystyle Pm\left(Z,t\right)$ $\displaystyle=\mathbb{E}_{P}[m\left(Z,t\right)].$ (89) In this notation, $\hat{t}^{s}_{n}=\mathop{\rm arg~{}max}\limits_{t}P_{n}m^{s}(.,t)$ and $t^{*}=\mathop{\rm arg~{}max}\limits_{t}Pm(Z,t)$. I can now show that conditions $A$, $B$, and $C$ for Theorem 4.1.1 in Amemiya (1985) hold: * A) Parameter space $\mathcal{T}$ is compact by Assumption 2.1. * B) Function $P_{n}m^{s}\left(Z_{i},t\right)$ is continuous in $t$ for all $Z$ and is a measurable function of $Z$ for all $t\in\mathcal{T}$, as $k(\cdot)$ is continuous by Assumption 4. 
* C1) I need to prove that $P_{n}m^{s}\left(Z_{i},t\right)$ converges a.s. to $Pm(Z,t)$ uniformly in $t\in\mathcal{T}$ as $n\rightarrow\infty$, i.e. $\sup_{t}\left|P_{n}m^{s}(\cdot,t)-Pm(Z,t)\right|\rightarrow^{a.s.}0$. Note that: $\displaystyle\sup_{t}\left|P_{n}m^{s}(\cdot,t)-Pm(Z,t)\right|\leq$ (90) $\displaystyle\sup_{t}\left|P_{n}m^{s}(\cdot,t)-Pm^{s}(Z,t)\right|+\sup_{t}\left|Pm^{s}(\cdot,t)-Pm(Z,t)\right|.$ (91) I need to show that the two addends on the right-hand side converge to zero. To show uniform convergence of $P_{n}m^{s}(\cdot,t)$ to $Pm^{s}(Z,t)$, I consider sufficient conditions provided by Theorem 4.2.1 in Amemiya (1985). $m^{s}(Z,t)$ is continuous in $t\in\mathcal{T}$ with $\mathcal{T}$ compact, and measurable in $Z$. I only need to show that $\mathbb{E}[\sup_{t\in\mathcal{T}}|m^{s}(Z,t)|]<\infty$. By Assumption 4, $k(\cdot)$ is a bounded function, i.e. there exists an $M$ such that $|k(x)|<M$ for all $x$. Hence $\mathbb{E}[\sup_{t\in\mathcal{T}}|m^{s}(Z,t)|]\leq M\mathbb{E}\left[\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|\right]$, and $\mathbb{E}\left[\left|\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right|\right]<\infty$ by Assumption 2.2. To show uniform convergence of $Pm^{s}(\cdot,t)$ to $Pm(Z,t)$, note that $t\in\mathcal{T}$, where $\mathcal{T}$ is bounded, and hence the result holds for $\sigma_{n}\rightarrow 0$. * C2) By Assumption 2.1, $t^{*}$ is the unique global maximum of $Pm(Z,t)$. Assumptions $A$, $B$, $C$ of Theorem 4.1.1 in Amemiya (1985) are satisfied, and hence $\hat{t}^{s}_{n}\rightarrow^{a.s.}t^{*}$. ∎ ### Lemmas The proof of Theorem 4 requires some intermediate lemmas, stated and proved below. The arguments follow the ideas in Horowitz (1992), but are adapted to my context. I report the entire proof for completeness, even when it overlaps with the original in Horowitz (1992). To make the notation simpler, define: $\displaystyle\hat{S}_{n}(t,\sigma_{n})=\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k\left(\frac{X_{i}-t}{\sigma_{n}}\right)\right]$ and note that $\hat{t}^{s}_{n}=\mathop{\rm arg~{}max}\limits_{t}\hat{S}_{n}(t,\sigma_{n})$. Then define: $\displaystyle\hat{S}_{n}^{1}(t,\sigma_{n})=\frac{\partial\hat{S}_{n}(t,\sigma_{n})}{\partial t}=-\frac{1}{\sigma_{n}}\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t}{\sigma_{n}}\right)\right]$ $\displaystyle\hat{S}_{n}^{2}(t,\sigma_{n})=\frac{\partial\hat{S}_{n}(t,\sigma_{n})}{\partial^{2}t}=\frac{1}{\sigma_{n}^{2}}\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime\prime}\left(\frac{X_{i}-t}{\sigma_{n}}\right)\right].$ Indicate with $\varphi_{y,x}(y,x)$ the joint distribution of $Y_{1}$, $Y_{0}$, and $X$, and with $\varphi_{y|x}(y|x)$ the conditional distribution, where $\varphi_{y,x}(y,x)=\varphi_{y|x}(y|x)f_{x}(x)$. #### Lemma 1 ###### Lemma 1. Let Assumptions 1, 2 with $s=h+1$ for some $h\geq 2$, and 5 hold. Then $\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=A$ (92) $\displaystyle\lim_{n\rightarrow\infty}\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=\alpha_{2}K.$ (93) ###### Proof. 
First, I will prove that $\lim_{n\rightarrow\infty}\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=A$: $\displaystyle\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]$ $\displaystyle=-\frac{\sigma_{n}^{-h}}{\sigma_{n}}\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}\right)\right]=$ (94) $\displaystyle=-\frac{\sigma_{n}^{-h}}{\sigma_{n}}\mathbb{E}\left[\left(Y_{1}-Y_{0}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}\right)\right]=$ (95) $\displaystyle=-\sigma_{n}^{-h}\int_{x}\int_{y}\left(Y_{1}-Y_{0}\right)\frac{1}{\sigma_{n}}k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}\right)\varphi_{y,x}(y,x)dydx=$ (96) $\displaystyle=-\sigma_{n}^{-h}\int_{\zeta}\int_{y}\left(Y_{1}-Y_{0}\right)k^{\prime}\left(\zeta\right)\varphi_{y,x}(y,t^{*}+\zeta\sigma_{n})dyd\zeta$ (97) where in the last line I made the substitution $\zeta=\frac{X_{i}-t^{*}}{\sigma_{n}}$. Consider the Taylor expansion of $\varphi$ around $\varphi(y,t^{*})$: $\displaystyle\varphi(y,t^{*}+\zeta\sigma_{n})$ $\displaystyle=\varphi(y,t^{*})+\zeta\sigma_{n}\varphi^{1}_{2}(y,t^{*})+\frac{1}{2}(\zeta\sigma_{n})^{2}\varphi^{2}_{2}(y,t^{*})+\dots$ (98) $\displaystyle=\varphi(y,t^{*})+\left(\sum_{i=1}^{h-1}\frac{1}{i!}\varphi^{i}_{2}(y,x=t^{*})\zeta^{i}\sigma_{n}^{i}\right)+\frac{1}{h!}\varphi^{h}_{2}(y,\tilde{t})\zeta^{h}\sigma_{n}^{h}$ (99) with $|\tilde{t}-t^{*}|\leq|t^{*}+\zeta\sigma_{n}-t^{*}|$. Existence of $\varphi^{m}_{2}(y,x)$, the $m$-derivatives of $\varphi(y,x)$ with respect to its second argument, is guaranteed by Assumption 2.3 with $s=h+1$. Write $\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]$ as $I_{n1}+I_{n2}+I_{n3}$, where: $\displaystyle I_{n1}=$ $\displaystyle-\sigma_{n}^{-h}\int_{\zeta}\int_{y}\left(Y_{1}-Y_{0}\right)k^{\prime}\left(\zeta\right)\varphi(y,t^{*})dyd\zeta$ (100) $\displaystyle=$ $\displaystyle-\sigma_{n}^{-h}\int_{\zeta}k^{\prime}\left(\zeta\right)d\zeta\underbrace{\int_{y}\left(Y_{1}-Y_{0}\right)dy}_{=\mathbb{E}[Y_{1}-Y_{0}|X=t^{*}]=0}=0$ (101) $\displaystyle I_{n2}=$ $\displaystyle-\sigma_{n}^{-h}\int_{\zeta}\int_{y}\left(Y_{1}-Y_{0}\right)k^{\prime}\left(\zeta\right)\left(\sum_{i=1}^{h-1}\frac{1}{i!}\varphi^{i}_{2}(y,x=t^{*})\zeta^{i}\sigma_{n}^{i}\right)dyd\zeta$ (102) $\displaystyle=$ $\displaystyle\sigma_{n}^{-h}\int_{y}\left(Y_{1}-Y_{0}\right)\left(\sum_{i=1}^{h-1}\frac{1}{i!}\varphi^{i}_{2}(y,x=t^{*})\sigma_{n}^{i}\underbrace{\int_{\zeta}k^{\prime}\left(\zeta\right)\zeta^{i}d\zeta}_{=0}\right)dy=0.$ (103) Result on $I_{n1}$ follows from definition of $t^{*}$, while Assumption 5.2 guarantees result on $I_{n2}$. Finally, consider $I_{n3}$: $\displaystyle I_{n3}=$ $\displaystyle-\sigma_{n}^{-h}\int_{\zeta}\int_{y}\left(Y_{1}-Y_{0}\right)k^{\prime}\left(\zeta\right)\frac{1}{h!}\varphi^{h}_{2}(y,\tilde{t})\zeta^{h}\sigma_{n}^{h}dyd\zeta$ (104) $\displaystyle=$ $\displaystyle-\frac{1}{h!}\int_{\zeta}k^{\prime}\left(\zeta\right)\zeta^{h}d\zeta\int_{y}\left(Y_{1}-Y_{0}\right)\varphi^{h}_{2}(y,\tilde{t})dy$ (105) and conclude that: $\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=-\frac{1}{h!}\int_{\zeta}\zeta^{h}k^{\prime}\left(\zeta\right)d\zeta\int_{y}\left(Y_{1}-Y_{0}\right)\varphi^{h}_{2}(y,t^{*})dy=A.$ (106) Now I will prove that $\lim_{n\rightarrow\infty}\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=\alpha_{2}K$. 
Note that: $\displaystyle\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=$ $\displaystyle\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{-1/2}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}\right)\right]\right]=$ (107) $\displaystyle=$ $\displaystyle\sigma_{n}\operatorname{\mathrm{Var}}\left[\frac{1}{\sigma_{n}}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}\right)\right]=$ (108) $\displaystyle=$ $\displaystyle\sigma_{n}\mathbb{E}\left[\frac{1}{\sigma_{n}^{2}}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}\right)^{2}\right]-$ (109) $\displaystyle\sigma_{n}\mathbb{E}\left[\frac{1}{\sigma_{n}}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}\right)\right]^{2}.$ (110) Second term in the last expression goes to 0 as $n\rightarrow\infty$. For the first term, observe that: $\displaystyle\sigma_{n}\mathbb{E}\left[\frac{1}{\sigma_{n}}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}\right)^{2}\right]=$ (111) $\displaystyle\int_{x}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}\right)^{2}\frac{1}{\sigma_{n}}\varphi_{y,x}(y,x)dydx=$ (112) $\displaystyle\int_{\zeta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}k^{\prime}\left(\zeta\right)^{2}\varphi_{y,x}(y,t^{*}+\zeta\sigma_{n})dyd\zeta$ (113) where in the last line I made the substitution $\zeta=\frac{X_{i}-t^{*}}{\sigma_{n}}$. Conclude that $\displaystyle\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=$ $\displaystyle\int_{\zeta}k^{\prime}\left(\zeta\right)^{2}d\zeta f_{x}(t^{*})\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)^{2}|X=t^{*}\right]$ (114) $\displaystyle=$ $\displaystyle\alpha_{2}K.$ (115) Note that $\alpha_{2}K$ is bounded by Assumptions 2.2, 2.3, and 5.2. ∎ #### Lemma 2 ###### Lemma 2. Let Assumptions 1, 2 with $s=h+1$ for some $h\geq 2$, and 5 hold. If $n\sigma_{n}^{2h+1}\rightarrow\infty$, $\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})$ converges in probability to A. If $n\sigma_{n}^{2h+1}$ has a finite limit $\lambda$, $(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})$ converges in distribution to $\mathcal{N}(\lambda^{1/2}A,\alpha_{2}K)$. ###### Proof. Note that $\operatorname{\mathrm{Var}}\left[(\sigma_{n})^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=\underbrace{(n\sigma_{n}^{2h+1})^{-1}}_{\rightarrow 0}\underbrace{\operatorname{\mathrm{Var}}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]}_{\rightarrow\alpha_{2}K}.$ So the first result follows from lemma 1 and Chebyshev’s inequality. 
For the second result, first note that under the stated assumptions and from lemma 1, $\mathbb{E}\left[(n\sigma_{n})^{1/2}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]=\underbrace{(n\sigma_{n}^{2h+1})^{1/2}}_{\rightarrow\lambda^{1/2}}\underbrace{\mathbb{E}\left[\sigma_{n}^{-h}\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]}_{A}$ and so the result follows if I show that $U_{n}=(n\sigma_{n})^{1/2}\left(\hat{S}_{n}^{1}(t^{*},\sigma_{n})-\mathbb{E}\left[\hat{S}_{n}^{1}(t^{*},\sigma_{n})\right]\right)\rightarrow^{d}\mathcal{N}(0,\alpha_{2}K).$ Note that $\displaystyle U_{n}=$ $\displaystyle(n\sigma_{n})^{1/2}\frac{1}{n}\sum_{i=1}^{n}\left[\underbrace{\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t}{\sigma_{n}}\right)\frac{1}{\sigma_{n}}}_{=B}-\mathbb{E}[B]\right]=$ (116) $\displaystyle=$ $\displaystyle\sum_{i=1}^{n}\left(\frac{\sigma_{n}}{n}\right)^{1/2}(B-\mathbb{E}[B])$ (117) and hence $U_{n}$ has characteristic function $\psi(\tau)^{n}$, where $\psi(\tau)=\mathbb{E}\left[\exp\left(i\tau\left(\frac{\sigma_{n}}{n}\right)^{1/2}(B-\mathbb{E}[B])\right)\right]$ and $\displaystyle\psi^{{}^{\prime}}(\tau)=\mathbb{E}\left[i\left(\frac{\sigma_{n}}{n}\right)^{1/2}(B-\mathbb{E}[B])\exp\left(i\tau\left(\frac{\sigma_{n}}{n}\right)^{1/2}(B-\mathbb{E}[B])\right)\right]$ (118) $\displaystyle\psi^{{}^{\prime\prime}}(\tau)=\mathbb{E}\left[-\frac{\sigma_{n}}{n}(B-\mathbb{E}[B])\exp\left(i\tau\left(\frac{\sigma_{n}}{n}\right)^{1/2}(B-\mathbb{E}[B])\right)\right].$ (119) Note that $\psi^{{}^{\prime}}(0)=0$ and $\psi^{{}^{\prime\prime}}(0)=-\frac{\sigma_{n}}{n}\operatorname{\mathrm{Var}}(B)=-\frac{1}{n}(\alpha_{2}K+o(1))$, since lemma 1 proved that $\lim_{n\rightarrow\infty}\sigma_{n}\operatorname{\mathrm{Var}}(B)=\alpha_{2}K$. A Taylor series expansion of $\psi(\tau)$ about $\tau=0$ yields: $\psi(\tau)=\underbrace{\psi(0)}_{=1}+\underbrace{\psi^{{}^{\prime}}(0)}_{=0}\tau+\frac{1}{2}\underbrace{\psi^{{}^{\prime\prime}}(0)}_{=-\frac{\alpha_{2}K}{n}}\tau^{2}+o\left(\frac{\tau^{2}}{n}\right)=1-\frac{1}{2n}\alpha_{2}K\tau^{2}+o\left(\frac{\tau^{2}}{n}\right)$ and hence the characteristic function of $U_{n}$ has limit: $\displaystyle\lim_{n\rightarrow\infty}\left[1-\frac{1}{2n}\alpha_{2}K\tau^{2}+o\left(\frac{\tau^{2}}{n}\right)\right]^{n}=\exp\left(-\alpha_{2}K\frac{\tau^{2}}{2}\right).$ (120) Since $\exp\left(-\alpha_{2}K\frac{\tau^{2}}{2}\right)$ is the characteristic function of $\mathcal{N}(0,\alpha_{2}K)$, the second result of the lemma holds. ∎ #### Lemma 3 ###### Lemma 3. Let Assumptions 1, 2 with $s=h+1$ for some $h\geq 2$, and 5 hold. Let $\eta>0$ be such that $\varphi_{y,x}(y,x)$ has second derivative uniformly bounded for almost every $X$ if $|X-t^{*}|<\eta$. For $\theta\in\mathbb{R}$, define $\hat{S}^{\theta}_{n}(\theta)$ by $\displaystyle\hat{S}^{\theta}_{n}(\theta)=-(n\sigma_{n}^{2})^{-1}\sum_{i=1}^{n}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}+\theta\right)\right].$ (121) Define the sets $\Theta_{n}(n=1,2,\dots)$ by $\Theta_{n}=\\{\theta:\theta\in\mathbb{R},\sigma_{n}|\theta|\leq\frac{\eta}{2}\\}$. 
Then $\displaystyle\operatorname{\mathrm{plim}}_{n\rightarrow\infty}\sup_{\theta\in\Theta_{n}}|\hat{S}^{\theta}_{n}(\theta)-\mathbb{E}[\hat{S}^{\theta}_{n}(\theta)]|=0.$ (122) In addition, there are finite numbers $\alpha_{1}$ and $\alpha_{2}$ such that for all $\theta\in\Theta_{n}$ $|\mathbb{E}[\hat{S}^{\theta}_{n}(\theta)]-H\theta|\leq o(1)+\alpha_{1}\sigma_{n}|\theta|+\alpha_{2}\sigma_{n}\theta^{2}$ uniformly over $\theta\in\Theta_{n}$. ###### Proof. To prove the first result, first define $\displaystyle-g_{i}(\theta)=$ $\displaystyle\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}+\theta\right)-$ (123) $\displaystyle\mathbb{E}\left[\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}+\theta\right)\right].$ (124) It is necessary to prove that for any $\varepsilon>0$ $\lim_{n\rightarrow\infty}\operatorname{Pr}\left[\sup_{\theta\in\Theta_{n}}\left(n\sigma_{n}^{2}\right)^{-1}\left|\sum_{n=1}^{N}g_{i}(\theta)\right|>\varepsilon\right]=0.$ Given any $\delta>0$, divide each set $\Theta_{n}$ into nonoverlapping subsets $\Theta_{nj}(j=1,2,\ldots)$ such that the distance between any two points in the same subset does not exceed $\delta\sigma_{n}^{2}$ and the number $\Gamma_{n}$ of subsets does not exceed $C\sigma_{n}^{-3(q-1)}$ for some $C>0$. Let $\left\\{\theta_{Ni}\right\\}$ be a set of vectors such that $\theta_{nj}\in\Theta_{nj}$. Then $\displaystyle\operatorname{Pr}\left[\sup_{\theta\in\Theta_{n}}\left(n\sigma_{n}^{2}\right)^{-1}\right.$ $\displaystyle\left.\left|\sum_{n=1}^{n}g_{i}(\theta)\right|>\varepsilon\right]=$ (125) $\displaystyle=$ $\displaystyle\operatorname{Pr}\left[\bigcup_{j=1}^{\Gamma_{n}}\sup_{\theta\in\Theta_{nj}}\left(n\sigma_{n}^{2}\right)^{-1}\left|\sum_{i=1}^{n}g_{i}(\theta)\right|>\varepsilon\right]$ (126) $\displaystyle\leqslant$ $\displaystyle\sum_{j=1}^{\Gamma_{n}}\operatorname{Pr}\left[\sup_{\theta\in\Theta_{nj}}\left(n\sigma_{n}^{2}\right)^{-1}\left|\sum_{i=1}^{n}g_{i}(\theta)\right|>\varepsilon\right]$ (127) $\displaystyle\leqslant$ $\displaystyle\underbrace{\sum_{j=1}^{\Gamma_{n}}\operatorname{Pr}\left[\left(n\sigma_{n}^{2}\right)^{-1}\left|\sum_{i=1}^{n}g_{i}\left(\theta_{nj}\right)\right|>\varepsilon/2\right]}_{=B_{1}}$ (128) $\displaystyle+\underbrace{\sum_{j=1}^{\Gamma_{n}}\operatorname{Pr}\left[\left(n\sigma_{n}^{2}\right)^{-1}\sum_{i=1}^{n}\sup_{\theta\in\Theta_{nj}}\left|g_{i}(\theta)-g_{i}\left(\theta_{nj}\right)\right|>\varepsilon/2\right]}_{=B_{2}},$ (129) where the last two lines follow from the triangle inequality. By Hoeffding’s inequality, there are finite numbers $c_{1}>0$ and $c_{2}>0$ such that $\operatorname{Pr}\left[\left(n\sigma_{n}^{2}\right)^{-1}\left|\sum_{i=1}^{n}g_{i}\left(\theta_{nj}\right)\right|>\varepsilon/2\right]\leqslant c_{1}\exp\left(-c_{2}n\sigma_{n}^{4}\right).$ Therefore, $B_{1}$ is bounded by $Cc_{1}\sigma_{n}^{-3(q-1)}\exp\left(-c_{2}n\sigma_{n}^{4}\right)$, which converges to 0 as $n\rightarrow\infty$ by Assumption 5.1. 
In addition, by Assumption 4 there is a finite $c_{3}>0$ such that if $\theta\in\Theta_{nj}$, $\displaystyle\left|-\left(\frac{D_{i}Y_{i}}{p(\textbf{X}_{i})}-\frac{(1-D_{i})Y_{i}}{(1-p(\textbf{X}_{i}))}\right)\left[k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}+\theta\right)-k^{\prime}\left(\frac{X_{i}-t^{*}}{\sigma_{n}}+\theta_{nj}\right)\right]\right|$ (130) $\displaystyle\leqslant c_{3}\left|\theta-\theta_{nj}\right|\leqslant c_{3}\delta\sigma_{n}^{2}.$ (131) So $\left(n\sigma_{n}^{2}\right)^{-1}\sum_{i=1}^{n}\sup_{\theta\in\Theta_{nj}}\left|g_{i}(\theta)-g_{i}\left(\theta_{nj}\right)\right|\leqslant 2c_{3}\delta.$ Choose $\delta<\varepsilon/4c_{3}$. Then $B_{2}$ is 0. This establishes $\operatorname{\mathrm{plim}}_{n\rightarrow\infty}\sup_{\theta\in\Theta_{n}}|\hat{S}^{\theta}_{n}(\theta)-\mathbb{E}[\hat{S}^{\theta}_{n}(\theta)]|=0$. To prove the second result, start noting that $\displaystyle\mathbb{E}[\hat{S}^{\theta}_{n}(\theta)]=-\sigma_{n}^{-2}\mathbb{E}\left[\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\right]=I_{n1}+I_{n2}$ (132) where $\displaystyle I_{n1}=-\sigma_{n}^{-2}\int_{|X-t^{*}|\leq\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\varphi(y,x)dydx$ (133) $\displaystyle I_{n2}=-\sigma_{n}^{-2}\int_{|X-t^{*}|>\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\varphi(y,x)dydx.$ (134) First, consider $I_{n2}$ and observe that $\displaystyle I_{n2}=-\sigma_{n}^{-2}\int_{|X-t^{*}|>\eta}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\underbrace{\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\varphi(y|x)dy}_{=\mathbb{E}[Y_{1}-Y_{0}|X]}f_{x}(x)dx$ (135) and since $\mathbb{E}[Y_{1}-Y_{0}|X]$ is bounded by Assumption 2.2, $|I_{n2}|\leq\left|C\sigma_{n}^{-2}\int_{|X-t^{*}|>\eta}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)f_{x}(x)dx\right|.$ Define $\zeta=\frac{X-t^{*}}{\sigma_{n}}+\theta$. Since $\sigma_{n}|\theta|\leq\frac{\eta}{2}$, when $|X-t^{*}|>\eta$ $\displaystyle|\zeta|=$ $\displaystyle\left|\frac{X-t^{*}}{\sigma_{n}}+\theta\right|=\pm\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\geq\pm\left(\frac{X-t^{*}}{\sigma_{n}}\right)-|\theta|=\left|\frac{X-t^{*}}{\sigma_{n}}\right|-|\theta|$ (136) $\displaystyle\geq$ $\displaystyle\left|\frac{X-t^{*}}{\sigma_{n}}\right|-\frac{\eta}{2\sigma_{n}}>\frac{\eta}{\sigma_{n}}-\frac{\eta}{2\sigma_{n}}=\frac{\eta}{2\sigma_{n}}.$ (137) and so the event $|X-t^{*}|>\eta$ implies $|\zeta|>\frac{\eta}{2\sigma_{n}}$. 
Then $\displaystyle|I_{n2}|\leq$ $\displaystyle\left|C\sigma_{n}^{-2}\int_{|X-t^{*}|>\eta}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)f_{x}(x)dx\right|$ (138) $\displaystyle=$ $\displaystyle\left|C\sigma_{n}^{-1}\int_{|X-t^{*}|>\eta}k^{\prime}\left(\zeta\right)f_{x}(t^{*}-\theta\sigma_{n})d\zeta\right|$ (139) $\displaystyle\leq$ $\displaystyle\left|C\underbrace{f_{x}(t^{*}-\theta\sigma_{n})}_{\rightarrow f_{x}(t^{*})}\underbrace{\sigma_{n}^{-1}\int_{|\zeta|>\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta}_{\rightarrow 0}\right|.$ (140) The fact that $f_{x}(t^{*}-\theta\sigma_{n})\rightarrow f_{x}(t^{*})$ bounded by Assumption 2.3 with $s=h+1$ and $\sigma_{n}^{-1}\int_{|\zeta|>\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta\rightarrow 0$ by Assumption 5.2 implies $\operatorname{\mathrm{plim}}_{n\rightarrow\infty}\sup_{\theta\in\Theta_{n}}|I_{n2}|=0.$ Recall that $I_{n1}$ is defined as $I_{n1}=-\sigma_{n}^{-2}\int_{|X-t^{*}|\leq\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\varphi(y,x)dydx.$ Consider a Taylor expansion of $\varphi(y,x)$ about $x=t^{*}$: $\varphi(y,x)=\varphi(y,t^{*})+\varphi^{\prime}(y,t^{*})(x-t^{*})+\frac{1}{2}\varphi^{\prime\prime}(y,\tilde{t})(x-t^{*})^{2}$ with $|\tilde{t}-t^{*}|\leq|x-t^{*}|$. Write $I_{n1}$ as $J_{n1}+J_{n2}+J_{n3}$ where $\displaystyle J_{n1}=$ $\displaystyle-\sigma_{n}^{-2}\int_{|X-t^{*}|\leq\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\varphi(y,t^{*})dydx$ (141) $\displaystyle=$ $\displaystyle-\sigma_{n}^{-2}\underbrace{\mathbb{E}[Y_{1}-Y_{0}|X=t^{*}]}_{=0}\int_{|X-t^{*}|\leq\eta}k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)f_{x}(t^{*})dx=0$ (142) $\displaystyle J_{n2}=$ $\displaystyle-\sigma_{n}^{-2}\int_{|X-t^{*}|\leq\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\varphi^{\prime}(y,t^{*})(x-t^{*})dydx$ (143) $\displaystyle J_{n3}=$ $\displaystyle-\sigma_{n}^{-2}\int_{|X-t^{*}|\leq\eta}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\frac{X-t^{*}}{\sigma_{n}}+\theta\right)\frac{1}{2}\varphi^{\prime\prime}(y,\tilde{t})(x-t^{*})^{2}dydx.$ (144) Consider $J_{n2}$ and the substitution $\zeta=\frac{X-t^{*}}{\sigma_{n}}+\theta$: $\displaystyle J_{n2}=$ $\displaystyle-\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\zeta\right)(\zeta-\theta)\varphi^{\prime}(y,t^{*})dyd\zeta$ (145) $\displaystyle=$ $\displaystyle-\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\zeta\right)\zeta\varphi^{\prime}(y,t^{*})dyd\zeta+$ (146) $\displaystyle\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)k^{\prime}\left(\zeta\right)\theta\varphi^{\prime}(y,t^{*})dyd\zeta$ (147) $\displaystyle=$ $\displaystyle-\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta\underbrace{\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\varphi^{\prime}(y,t^{*})dy}_{=H}+$ (148) 
$\displaystyle\theta\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta\underbrace{\int_{y}\left(\frac{DY}{p(\textbf{X})}-\frac{(1-D)Y}{(1-p(\textbf{X}))}\right)\varphi^{\prime}(y,t^{*})dy}_{=H}$ (149) where $H=f_{x}(t^{*})\left(\frac{\partial\mathbb{E}_{P}\left[Y_{1}-Y_{0}|X=t^{*}\right]}{\partial X}\right)$ is bounded by Assumption 2.3. Since $\int\zeta k^{\prime}\left(\zeta\right)d\zeta=0$ and $\sigma_{n}|\theta|\leq\frac{\eta}{2}$: $\displaystyle\left|\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta\right|=\left|\int_{|\zeta-\theta|>\eta/\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta\right|\leq\left|\int_{|\zeta|>\eta/2\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta\right|.$ (150) By Assumption 5.2, $\left|\int_{|\zeta|>\eta/2\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta\right|$ converges to 0 uniformly over $\theta\in\Theta_{n}$. It means that $\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}\zeta k^{\prime}\left(\zeta\right)d\zeta$ converges uniformly to 0. Consider $\theta H\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta$, and note that, since $\int k^{\prime}\left(\zeta\right)d\zeta=1$, $\displaystyle\left|\theta H-\theta H\int_{|\zeta-\theta|\leq\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta\right|=\left|\theta H\int_{|\zeta-\theta|>\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta\right|\leq$ (151) $\displaystyle|\sigma_{n}\theta H|\sigma_{n}^{-1}\int_{|\zeta-\theta|>\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta\leq\frac{\eta}{2}\sigma_{n}^{-1}\int_{|\zeta-\theta|>\eta/\sigma_{n}}k^{\prime}\left(\zeta\right)d\zeta.$ (152) The last term is bounded uniformly over $n$ and $\theta\in\Theta_{n}$ and converges to 0 by Assumption 5.2. It means that $\displaystyle\lim_{n\rightarrow\infty}\left|\sup_{\theta\in\Theta_{n}}J_{n2}-\theta H\right|=0.$ (153) Finally, consider $J_{n3}$: $\displaystyle|J_{n3}|=$
# Weakly Supervised Co-training with Swapping Assignments for Semantic Segmentation Xinyu Yang1 Hossein Rahmani1 Sue Black2 Bryan M. Williams1 1Lancaster University 2St John’s College of the University of Oxford <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Class activation maps (CAMs) are commonly employed in weakly supervised semantic segmentation (WSSS) to produce pseudo-labels. Due to incomplete or excessive class activation, existing studies often resort to offline CAM refinement, introducing additional stages or proposing offline modules. This can cause optimization difficulties for single-stage methods and limit generalizability. In this study, we aim to reduce the observed CAM inconsistency and error to mitigate reliance on refinement processes. We propose an end-to-end WSSS model incorporating guided CAMs, wherein our segmentation model is trained while concurrently optimizing CAMs online. Our method, Co-training with Swapping Assignments (CoSA), leverages a dual-stream framework, where one sub-network learns from the swapped assignments generated by the other. We introduce three techniques: i) soft perplexity-based regularization to penalize uncertain regions; ii) a threshold-searching approach to dynamically revise the confidence threshold; and iii) contrastive separation to address the coexistence problem. CoSA demonstrates exceptional performance, achieving mIoU of 76.2% and 51.0% on VOC and COCO validation datasets, respectively, surpassing existing baselines by a substantial margin. Notably, CoSA is the first single-stage approach to outperform all existing multi-stage methods including those with additional supervision. ## 1 Introduction The objective of weakly supervised semantic segmentation (WSSS) is to train a segmentation model without relying on pixel-level labels but utilizing only weak and cost-effective annotations, such as image-level classification labels [3, 26, 50], object points [4, 45], and bounding boxes [16, 28, 57, 36]. In particular, image-level classification labels have commonly been employed as weak labels due to their minimal or negligible annotation effort involved [2, 60]. With the absence of precise localization information, image-level WSSS often necessitates the use of the coarse localization offered by class activation maps (CAMs) [72]. CAMs pertain to the intermediate outputs derived from a classification network. They can visually illustrate the activation regions corresponding to each individual class. Thus, they are often used to generate pseudo masks for training. However, CAMs suffer from i) Inconsistent Activation: CAMs demonstrate variability and lack robustness in accommodating geometric transformations of input images [60], resulting in inconsistent activation regions for the same input. ii) Inaccurate Activation: activation region accuracy is often compromised, resulting in incomplete or excessive class activation, only covering the discriminative object regions [1]. Despite enhanced localization mechanisms in the variants GradCAM [55] and GradCAM++ [7], they still struggle to generate satisfactory pseudo-labels for WSSS [60]. Thus, many WSSS works are dedicated to studying CAM refinement or post- processing [1, 15, 31]. In general, WSSS methods [2, 65, 18, 50] comprise three stages: CAM generation, refinement, and segmentation training with pseudo-labels. This multi-stage framework is known to be time-consuming and complex as several models must be trained at different stages. 
In contrast, single-stage models [3, 70, 52], which include a unified network of all stages, are more efficient. They are trained to optimize the segmentation and classification tasks at the same time; however, their CAMs are not explicitly trained. As a result, they need refinement to produce high-quality pseudo-labels, often leveraging hand-crafted modules, such as CRF in [70], PAMR in [3], PAR in [52, 53]. As the refinement modules are predefined and offline, they decouple the CAMs from the primary optimization. When the refined CAMs are employed as learning objectives for segmentation, the optimization of the segmentation branch may deviate from that of the classification branch. Hence, it is difficult for a single-stage model to optimize its segmentation task while yielding satisfactory CAM pseudo-labels. This optimization difficulty underlies the inferior performance in single-stage approaches compared to multi-stage ones. Further, such hand-crafted refinement modules require heuristic tuning and empirical changes, thereby limiting their adaptability to novel datasets. Despite the potential benefits of post-refinement in addressing the aforementioned two issues associated with CAMs, which have been extensively discussed in WSSS studies, there has been limited exploration in explicit online optimization for CAMs. The absence of fully optimized CAMs is an important factor in the existing indispensability of this refinement. In this paper, we take a different approach, proposing a model that optimizes CAMs in an end-to-end fashion, resulting in reliable, consistent and accurate CAMs for WSSS without the necessity for subsequent refinements in two respects. First, we note that even though CAM is differentiable, it is not robust to variation. As the intermediate output of a classification model, CAMs are not fully optimized for segmentation purpose since the primary objective is to minimize classification error. This implies that within an optimized network, numerous weight combinations exist that can yield accurate classification outcomes, while generating CAMs of varying qualities. To investigate this, we conduct oracle experiments, training a classification model while simultaneously guiding the CAMs with segmentation ground truth. A noticeable enhancement in quality is observed in guided compared to vanilla CAMs, without compromising classification accuracy. Second, we demonstrate the feasibility of substituting the oracle with segmentation pseudo-labels (SPL) in the context of weak supervision. Consequently, we harness the potential of SPL for WSSS by co-training both CAMs and segmentation through mutual learning. We explore an effective way to substitute the CAM refinement process, i.e. guiding CAMs in an end-to-end fashion. Our method optimizes the CAMs and segmentation prediction simultaneously thanks to the differentiability of CAMs. To achieve this, we adopt a dual-stream framework that includes an online network (ON) and an assignment network (AN), inspired by self- supervised frameworks [5, 22]. The AN is responsible for producing CAM pseudo- labels (CPL) and segmentation pseudo-labels (SPL) to train the ON. Since CPL and SPL are swapped for supervising segmentation and CAMs, respectively, our method is named Co-training with Swapping Assignments (CoSA). Our end-to-end framework enables us to leverage the quantified reliability of pseudo-labels for training online, as opposed to relying on offline hard pseudo-labels as the existing methodologies [2, 65, 15, 50]. 
Thus, our model can incorporate soft regularization driven by uncertainty, where the CPL perplexity is continually assessed throughout training. This regularization is designed to adaptively weight the segmentation loss, considering the varying reliability levels of different regions. By incorporating the soft CPL, CoSA enables dynamic learning at different time-steps rather than the performance being limited by the predefined CPL. The threshold is a key parameter for generating the CPL [60, 50, 53]. It not only requires tuning but also necessitates dynamic adjustment to align with the model’s learning state at various time-steps. CoSA integrates threshold searching to dynamically adapt its learning state, as opposed to the commonly used hard threshold scheme [18, 13, 52]. With the proposed dynamic thresholding, we eliminate the laborious task of manual parameter tuning and enhance performance. We further address a common issue in WSSS with CAMs, known as the coexistence problem, whereby certain class activations display extensive false positive regions that inaccurately merge the objects with their surroundings. In response, we introduce a technique to leverage low-level CAMs enriched with object-specific details to contrastively separate the foreground regions from the background. The proposed CoSA greatly surpasses existing WSSS methods. Our approach achieves 76.2% and 75.1% mIoU on the VOC val and test splits, and 51.0% on the COCO val split, which are +5.1%, +3.9%, and +8.7% ahead of the previous single-stage SOTA [53]. The contributions of this paper are as follows: 1) We are the first to propose SPL as a substitute for guiding CAMs in WSSS. We present compelling evidence showcasing its potential to produce more reliable, consistent and accurate CAMs. 2) We introduce a dual-stream framework with swapped assignments in WSSS, which co-optimizes the CAMs and segmentation predictions in an end-to- end fashion. 3) We develop a reliability-based adaptive weighting in accommodating the learning dynamics. 4) We incorporate threshold searching to automatically adjust the threshold, ensuring alignment with the learning state at different training time-steps. 5) We address the CAM coexistence issue and propose a contrastive separation approach to regularize CAMs. This greatly alleviates the coexistence problem, significantly enhancing the results of affected classes. 6) We demonstrate CoSA’s SOTA results on key challenging WSSS benchmarks, significantly surpassing existing methods. Our source code and model weights will be available. ## 2 Related Work Multi-Stage WSSS. The majority of image-level WSSS work is multi-stage, typically comprising three stages: classification training (CAM generation), CAM refinement, and segmentation training. Some approaches employ heuristic strategies to address incomplete activation regions. For example, adversarial erasing [71, 58, 30, 68], feature map optimization [32, 14, 13, 12], self- supervised learning [60, 11, 56], and contrastive learning [27, 64, 73, 15] are employed. Some methods focus on post-refining the CAMs by propagating object regions from the seeds to their semantically similar pixels. AffinityNet [2], for instance, learns pixel-level affinity to enhance CPL. This has motivated other work [1, 20, 9, 38] that utilize additional networks to generate more accurate CPL. 
Other work is dedicated to studying optimization given the coarse pseudo-labels: [40] explores uncertainty of noisy labels, [43] adaptively corrects CPL during early learning, and [50] enhances boundary prediction through co-training. Since image-level labels alone do not yield satisfactory results, several methods incorporate additional modalities, such as saliency maps [37, 38, 73, 18] and CLIP models [67, 42, 63]. More recently, vision transformers [17] have emerged as prominent models for various vision tasks. Several WSSS studies benefit from vision transformers: [21] enhances CAMs by incorporating the attention map from ViT; [65] introduces class-specific attention for discriminative object localization; [42] and [67] leverage multi-modal transformers to enhance performance.

Single-Stage WSSS. In contrast, single-stage methods are much more efficient. They contain a shared backbone with heads for classification and segmentation [3, 70, 52, 53]. The pipeline involves generating and refining the CAMs, leveraging an offline module, such as PAMR [3], PAR [52], or CRF [70]. Subsequently, the refined CPL are used for segmentation. Single-stage methods exhibit faster speed and a lower memory footprint but are challenging to optimize due to the obfuscation introduced by offline refinement. As a result, they often yield inferior performance compared to multi-stage methods. More recently, with the success of ViT, single-stage WSSS has been greatly advanced. AFA [52] proposes learning reliable affinity from attention to refine the CAMs. Similarly, ToCo [53] mitigates the problem of over-smoothing in vision transformers by contrastively learning from patch tokens and class tokens. The existing works depend heavily on offline refinement of CAMs. In this study, we further explore the potential of single-stage approaches and showcase the redundancy of offline refinement. We propose an effective alternative for generating consistent and accurate CAMs in WSSS.

## 3 Method

### 3.1 Guiding Class Activation Maps

Class activation maps are determined by the feature map $F$ and the weights $W_{\text{fc}}$ of the last FC layer [72]. Let us consider a $C$-class classification problem: $\small\mathcal{L}_{\text{cls}}(Z,Y)\\!=\\!\frac{-1}{C}\sum_{c=1}^{C}\\!\Big{[}Y^{c}\log\sigma_{Z}^{c}+(1-Y^{c})\log\left(1-\sigma_{Z}^{c}\right)\Big{]},$ (1) where $\sigma_{Z}^{c}\triangleq\sigma({Z^{c}})$ represents the Sigmoid activation, $Y\triangleq Y_{\text{gt}}$ denotes the one-hot multi-class label, and $Z\triangleq GW_{\text{fc}}^{\top}\\!\in\\!\mathbb{R}^{C}$ represents the prediction logits, derived from the final FC layer, where $G\\!=\\!\texttt{Pooling}(F)\\!\in\\!\mathbb{R}^{D}$ is a spatially pooled feature from $F\\!\in\\!\mathbb{R}^{HW\times D}$. During training, Eq. 1 is optimized with respect to the learnable parameters $\theta$ in the backbone. When gradients flow backwards from $G$ to $F$, only a fraction of the elements in $F$ get optimized, implying that a perturbation in $F$ does not guarantee a corresponding response in $G$ due to the spatial pooling, resulting in non-determinism in the feature map $F$. This indeterminate nature can lead to stochasticity of the generated CAMs.

Figure 1: Oracle Experiments on VOC. CAMs are guided by the ground truth (GT), proposed segmentation pseudo-labels (SPL), no guidance (NO) and random noise (NS). (a): classification performance; (b): CAM quality; (c): CAM visualization. All experiments employ a 2k-iteration warm-up before guidance is introduced.
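To make the mechanics above concrete, the following PyTorch-style sketch computes the multi-label classification loss of Eq. (1) together with the CAMs obtained by projecting every spatial position of $F$ onto $W_{\text{fc}}$. The tensor layout, the max-normalisation, and the function name are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def classification_loss_and_cams(feat, fc_weight, labels):
    """Multi-label classification loss (cf. Eq. 1) plus class activation maps.

    feat:      (B, HW, D) spatial features F from the backbone (HW assumed square).
    fc_weight: (C, D) weights W_fc of the final FC layer.
    labels:    (B, C) multi-hot image-level labels Y.
    """
    B, HW, D = feat.shape
    # G = Pooling(F): global max pooling over spatial positions, as used in the paper.
    pooled = feat.max(dim=1).values                      # (B, D)
    logits = pooled @ fc_weight.t()                      # Z = G W_fc^T, shape (B, C)
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)

    # CAMs: per-position class scores, rectified and max-normalised per class.
    cams = torch.einsum('bnd,cd->bcn', feat, fc_weight)  # (B, C, HW)
    cams = torch.relu(cams)
    cams = cams / (cams.amax(dim=-1, keepdim=True) + 1e-5)
    side = int(HW ** 0.5)
    return cls_loss, cams.view(B, -1, side, side)
```

Note that with max pooling only the selected positions of $F$ receive gradients through the classification loss, which is the source of the non-determinism discussed above.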
To demonstrate this, we conduct oracle experiments in which we supervise the output CAMs from a classifier directly with the ground truth segmentation (GT). This enables all elements in $F$ to be optimized. For comparison, we conduct experiments wherein the CAMs are (i) not guided (NO), (ii) guided with masks of random noise (NS). Results, shown in Fig. 1, demonstrate that different guidance for $M$ does not affect classification even for the NS group, as all experiment groups achieved over 97% classification precision. However, drastic differences can be observed w.r.t. the quality of the CAMs. The GT group results in a notable quality improvement over the NO group, as shown in Fig. 1(b)(c). In contrast, the NS group sabotages the CAMs. This suggests the stochasticity of CAMs and explains their variability and lack robustness, something also observed in [2, 60, 13]. Figure 2: Co-training with Swapping Assignments (CoSA). We propose an end-to- end dual-stream weakly-supervised segmentation framework, capable of co- optimizing the segmentation prediction and CAMs by leveraging the swapped assignments, namely CAM pseudo-labels (CPL) and segmentation pseudo-labels (SPL). Our framework comprises two networks: an assignment network (AN) and an online network (ON), where the AN is responsible for generating pseudo-labels for training the ON. While the AN has identical architecture to the ON, it is updated through exponential moving average (EMA) of the ON. The diagram on the right provides an illustration of the architecture. Given weak-augmented images as input, the AN produces CPL to supervise segmentation in the ON ($\mathcal{L}_{\text{c2s}}$). During training, the CPL is softened by reliability-based adaptive weighting (RAW), formed based on CAM perplexity estimation and dynamic thresholding. The AN also generates SPL which is utilized to supervise the CAMs ($\mathcal{L}_{\text{s2c}}$). Further, the CAMs are regularized to contrastively separate the foreground from the background regions ($\mathcal{L}_{\text{csc}}$). Note that the ON is also trained for classification using the image-level class labels ($\mathcal{L}_{\text{cls}}$). Since relying on GT segmentation is not feasible in WSSS, we propose an alternative for guiding CAMs, employing mask predictions as segmentation pseudo-labels (SPL). As shown in Fig. 1, an SPL-guided classifier yields CAMs that significantly outperform vanilla CAMs (NO group), performing close to the oracle trained with the GT. Motivated by this, we introduce a co-training mechanism in which CAMs and mask predictions are optimized mutually without any additional CAM refinement. ### 3.2 Co-training with Swapping Assignments Overall Framework. As shown in Fig. 2, CoSA contains two networks: an online network (ON) and an assignment network (AN). ON, parameterized by $\Theta$, comprises three parts: a backbone encoder, FC layers, and a segmentation head. AN has the same architecture as ON but uses different weights, denoted $\Theta^{\prime}$. ON is trained with the pseudo assignments generated by AN, while AN is updated by the exponential moving average of ON: $\Theta^{\prime}\leftarrow m\Theta^{\prime}+(1-m)\Theta,$ where $m\in[0,1]$ denotes a momentum coefficient. Consequently, the weights of AN represent a delayed and more stable version of the weights of ON, which helps to yield a consistent and stabilized learning target [22]. An image and class label pair $(x,Y_{\text{gt}})$ is randomly sampled from a WSSS dataset $\mathcal{D}$. 
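As a small illustration of the momentum update $\Theta^{\prime}\leftarrow m\Theta^{\prime}+(1-m)\Theta$ introduced above (the training flow with the two augmented views continues below), one possible sketch is given here; the function name and the handling of buffers are assumptions, not part of the paper.

```python
import torch

@torch.no_grad()
def ema_update(online_net, assignment_net, m=0.9994):
    """Momentum (EMA) update of the assignment network (AN) from the online network (ON)."""
    for p_on, p_an in zip(online_net.parameters(), assignment_net.parameters()):
        p_an.mul_(m).add_(p_on.detach(), alpha=1.0 - m)
    # Buffers such as BatchNorm statistics are simply copied across.
    for b_on, b_an in zip(online_net.buffers(), assignment_net.buffers()):
        b_an.copy_(b_on)
```

In practice the AN is initialised as a copy of the ON with gradients disabled, and `ema_update` is called once per optimisation step; the default momentum follows the value reported in the implementation details.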
CoSA utilizes two augmented views $\mathcal{T}_{s}(x)$ and $\mathcal{T}_{w}(x)$ as input for ON and AN, respectively, representing strong and weak image transformations. During training, AN produces CAMs $\mathcal{M}^{\prime}$ and segmentation predictions $\mathcal{S}^{\prime}$. The CAM pseudo-labels (CPL) and segmentation pseudo-labels (SPL) are generated from $\mathcal{M}^{\prime}$ and $\mathcal{S}^{\prime}$ after filtering with respect to $Y_{\text{gt}}$. CPL and SPL are subsequently used as learning targets for supervising the segmentation predictions $\mathcal{S}$ and CAMs $\mathcal{M}$ from ON, respectively.

Swapping Assignments. Our objective is to co-optimize $\mathcal{S}$ and $\mathcal{M}$. A naive approach could enforce the learning objectives $\mathcal{S}\triangleq\mathcal{S}^{\prime}$ and $\mathcal{M}\triangleq\mathcal{M}^{\prime}$ as a knowledge distillation process [25], where AN and ON play the roles of teacher and student. However, this assumes the availability of a pretrained teacher, which is not possible in WSSS settings. Instead, we set up a swapped self-distillation with the objective: $\small\mathcal{L}_{\text{swap}}=\mathcal{L}_{\text{c2s}}(\mathcal{S},\mathcal{M}^{\prime})+\mathcal{L}_{\text{s2c}}(\mathcal{M},\mathcal{S}^{\prime})~{},\vspace{-5pt}$ (2) where $\mathcal{L}_{\text{c2s}}$ optimizes the segmentation performance given the CPL, and $\mathcal{L}_{\text{s2c}}$ considers the CAM quality with respect to the SPL. Building on self-distillation [6, 48], we present this swapped self-distillation framework tailored to facilitate information exchange between the CAMs and segmentation.

Figure 3: CPL Analysis on val splits of VOC (a, c, e) and COCO (b, d, f). (a) and (b): heatmap of CPL accuracy vs. confidence ranges (x-axis) for different time-steps (y-axis). (c) and (d): correlation between perplexity and accuracy of CPL for different time-steps. (e) and (f): distribution of CAMs’ confidence categorized by the proposed dynamic threshold. (best viewed under zoom)

### 3.3 Segmentation Optimization.

CAM2Seg Learning. As the CAMs in CoSA are inherently guided, extra refinement [29, 2, 52] is not required, and they can be directly employed as learning targets. Nonetheless, CAMs primarily concentrate on the activated regions of the foreground while disregarding the background. As per the established convention [60, 15, 53], a threshold value $\xi$ is employed for splitting the foreground and the background. Formally, the CAM pseudo-label (CPL) is given by: $\small\hat{\mathcal{Y}}^{\text{CPL}}_{x,y}=\left\\{\begin{aligned} &\mathtt{argmax}(\mathcal{M}_{x,y}^{\prime})+1,&\text{if $\nu\geq\xi$,}\\\ &0,&\text{if $\nu<\xi$,}\\\ \end{aligned}\right.,\vspace{-5pt}$ (3) where $\nu\triangleq\mathtt{max}(\mathcal{M}_{x,y}^{\prime})$ denotes the maximum activation and $0$ denotes the background index. Then, the CAM2Seg learning objective $\mathcal{L}_{\text{c2s}}$ is the cross entropy between $\hat{\mathcal{Y}}^{\text{CPL}}$ and $\mathcal{S}$, as with the general supervised segmentation loss [10].

Reliability based Adaptive Weighting. Segmentation performance depends heavily on the pseudo-labels. Despite the high-quality CPL generated by our guided CAMs, their reliability must be assessed, particularly in the initial training phases. Existing methods use post-refinement to increase reliability [70, 3]. As CoSA can generate online pseudo-labels, we propose to leverage confidence information to adaptively weight the CAM2Seg loss during training.
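Eq. (3) amounts to a per-pixel argmax over the (label-filtered) CAMs followed by a confidence threshold; a minimal sketch is shown below, with the re-weighting of these labels developed next. The function name and tensor layout are illustrative assumptions.

```python
import torch

def cam_to_cpl(cams, threshold):
    """Hard CAM pseudo-labels (CPL), cf. Eq. (3).

    cams:      (B, C, H, W) CAMs from the assignment network, with absent
               classes already zeroed out using the image-level labels.
    threshold: scalar xi separating foreground from background.
    Returns the (B, H, W) label map (0 = background, c+1 = class c) and the
    per-pixel maximum activation nu.
    """
    conf, best_class = cams.max(dim=1)   # nu and the strongest class per pixel
    cpl = best_class + 1                 # shift classes so 0 can denote background
    cpl[conf < threshold] = 0            # low-confidence pixels become background
    return cpl, conf
```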
Specifically, we propose to assess the perplexity scores for each pixel in $\hat{\mathcal{Y}}^{\text{CPL}}$ and leverage these scores to re-weight $\mathcal{L}_{\text{c2s}}$ for penalizing unreliable regions. However, estimating per-pixel perplexity is non-trivial. Through empirical analysis, we observe a noteworthy association between the confidence values of CAMs and their accuracy at each time-step. This correlation suggests that regions with extremely low or high confidence exhibit higher accuracy throughout training, as shown in Fig. 3(a)(b). To quantitatively model perplexity, we make two assumptions: i) the reliability of pseudo-labels is positively correlated with their accuracy, and ii) the perplexity score is negatively correlated with the reliability. Then, the per-pixel perplexity of $\hat{\mathcal{Y}}^{\text{CPL}}_{x,y}$ is defined as: $\small\mathcal{P}_{x,y}=\left\\{\begin{aligned} &\left[-\log\left(\lambda_{\alpha}(\nu-\xi)/(1-\xi)\right)\right]^{\lambda_{\beta}}&\text{if $\nu\geq\xi$,}\\\ &\left[-\log\left(\lambda_{\alpha}(\xi-\nu)/\xi\right)\right]^{\lambda_{\beta}}&\text{if $\nu<\xi$,}\\\ \end{aligned}\right.$ (4) where the term within the logarithm denotes the normalized distance to $\xi$ in $[0,1]$. The logarithm ensures $\mathcal{P}_{x,y}\\!\rightarrow\\!+\infty$ as the distance $\\!\rightarrow\\!0$, and $\mathcal{P}_{x,y}\\!\rightarrow\\!0$ as the distance $\rightarrow\\!1$. $\lambda_{\alpha}\in\mathbb{R}^{+}$ controls the perplexity score’s minimum value and $\lambda_{\beta}\in\mathbb{R}^{+}$ determines the sharpness or smoothness of the distribution. A higher $\mathcal{P}_{x,y}$ indicates that the confidence of $\hat{\mathcal{Y}}^{\text{CPL}}_{x,y}$ lies closer to the threshold $\xi$. This observation is substantiated by Fig. 3 (a)(b), where confidence values near $\xi\\!=\\!0.5$ exhibit lower reliability. Furthermore, the correlation between perplexity and accuracy remains significant across various training time-steps and datasets, as depicted in Fig. 3(c)(d). Since we hypothesize a negative reliability-perplexity correlation, the reliability score can be defined as the reciprocal of perplexity. To accommodate reliability variation across different inputs, we use the normalized reliability as the per-pixel weights for $\mathcal{L}_{\text{c2s}}$. Thus, the Reliability based Adaptive Weighting (RAW) is defined as: $\mathcal{W}_{x,y}^{\text{raw}}=|\mathcal{R}|\,\mathcal{P}_{x,y}^{-1}/\sum_{i,j\in\mathcal{R}}\mathcal{P}_{i,j}^{-1}$, where $|\mathcal{R}|$ represents the total number of pixels in a batch. Then, the re-weighted CAM2Seg loss for each position $(x,y)$ can be defined as $\small\mathcal{L}_{\text{c2s}}(x,y)\\!\\!=\\!\\!-\mathcal{W}_{x,y}^{\text{raw}}\\!\sum_{c=0}^{C}\\!\Bigg{[}\\!\mathbbm{1}\\!\big{[}\hat{\mathcal{Y}}^{\text{CPL}}_{x,y}\\!\\!=\\!\\!c\big{]}\\!\\!\log\\!\\!\Bigg{(}\\!\frac{\exp{\mathcal{S}^{c}_{x,y}}}{\sum_{k=0}^{C}\exp\mathcal{S}^{k}_{x,y}}\\!\\!\Bigg{)}\\!\\!\Bigg{]}.$ (5)

Dynamic Threshold. Existing WSSS work [52, 53] prescribes a fixed threshold to separate foreground and background, which neglects the inherent variability due to prediction confidence fluctuation during training. Clearly, applying a fixed threshold in Fig. 3(a)(b) is sub-optimal. To alleviate this, we introduce dynamic thresholding. As shown in Fig. 3(e)(f), the confidence distribution reveals discernible clusters. We assume the foreground and background pixels follow a bimodal Gaussian mixture distribution.
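Before turning to the threshold itself, a short sketch of the perplexity-based re-weighting of Eqs. (4)–(5) is given below. The clamping constants and the exact normalisation (mean weight of one over the batch, following the normalized-reciprocal reading above) are illustrative assumptions; $(\lambda_{\alpha},\lambda_{\beta})$ default to the values reported in the implementation details.

```python
import torch
import torch.nn.functional as F

def raw_weighted_cam2seg_loss(seg_logits, cpl, conf, xi,
                              lam_alpha=0.8, lam_beta=1.0):
    """Reliability-based adaptive weighting of the CAM2Seg loss, cf. Eqs. (4)-(5).

    seg_logits: (B, C+1, H, W) segmentation logits from the online network.
    cpl:        (B, H, W) hard CAM pseudo-labels (0 = background).
    conf:       (B, H, W) max CAM activation nu used to build the CPL.
    xi:         foreground/background threshold of Eq. (3).
    """
    # Normalised distance of the confidence to the threshold, in (0, 1].
    dist = torch.where(conf >= xi, (conf - xi) / (1.0 - xi), (xi - conf) / xi)
    dist = dist.clamp(min=1e-6, max=1.0)
    perplexity = (-torch.log(lam_alpha * dist)).clamp(min=1e-6) ** lam_beta  # Eq. (4)

    # Normalised reciprocal of perplexity: near-threshold (unreliable) pixels get low weight.
    reliability = 1.0 / perplexity
    weights = reliability * reliability.numel() / reliability.sum()

    # Per-pixel cross entropy re-weighted as in Eq. (5).
    ce = F.cross_entropy(seg_logits, cpl.long(), reduction='none')           # (B, H, W)
    return (weights.detach() * ce).mean()
```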
Then, the optimal dynamic threshold $\xi^{\star}$ is determined by maximizing the Gaussian Mixture likelihood: $\displaystyle\xi^{\star}=\underset{\xi}{\mathtt{argmax}}$ $\displaystyle\prod_{x\in\\{\mathcal{M}^{\prime}\geq\xi\\}}\tilde{\pi}_{fg}\mathcal{N}\left(x\mid\tilde{\mu}_{fg},\tilde{\Sigma}_{fg}\right)$ (6) $\displaystyle+$ $\displaystyle\prod_{x\in\\{\mathcal{M}^{\prime}<\xi\\}}\tilde{\pi}_{bg}\mathcal{N}\left(x\mid\tilde{\mu}_{bg},\tilde{\Sigma}_{bg}\right)~{},$ where $\mathcal{N}(x\mid\mu,\Sigma)$ denotes the Gaussian function and $\pi$, $\mu$, $\Sigma$ represent the weight, mean and covariance. To avoid mini-batch bias, we maintain a queue to fit the GMM, with the current $\mathcal{M}^{\prime}$ batch enqueued and the oldest dequeued. This practice facilitates the establishment of a gradually evolving threshold, contributing to learning stabilization.

### 3.4 CAM Optimization.

Seg2CAM Learning. To generate the SPL, segmentation predictions $\mathcal{S}^{\prime}$ are filtered by the weak labels $Y_{\text{gt}}$ and transformed into probabilities: $\small\mathcal{S}^{\prime~{}c}_{x,y}=\left\\{\begin{aligned} &-\infty,\\!\\!\\!&\text{if $Y_{\text{gt}}^{c}=0$,}\\\ &\mathcal{S}^{\prime~{}c}_{x,y},\\!\\!\\!&\text{if $Y_{\text{gt}}^{c}\neq 0$,}\\\ \end{aligned}\right.\;\;\;\;\hat{\mathcal{Y}}^{\text{SPL}}_{x,y}=\mathtt{Softmax}(\frac{\mathcal{S}^{\prime}_{x,y}}{\tau})~{},\vspace{-5pt}$ (7) where $\tau$ represents the softmax temperature used to sharpen $\hat{\mathcal{Y}}^{\text{SPL}}$. Then, we arrive at the Seg2CAM learning objective: $\displaystyle\mathcal{L}_{\text{s2c}}=-\frac{1}{C|\mathcal{R}|}$ $\displaystyle\sum_{c=1}^{C}\sum_{x,y\in\mathcal{R}}\bigg{[}\hat{\mathcal{Y}}^{\text{SPL}}_{x,y}[c]\log(\sigma(\mathcal{M}^{c}_{x,y}))$ (8) $\displaystyle+$ $\displaystyle(1-\hat{\mathcal{Y}}^{\text{SPL}}_{x,y}[c])\log(1-\sigma(\mathcal{M}^{c}_{x,y}))\bigg{]}~{},$ where $\mathcal{R}$ represents all the positions in the SPL.

Coexistence Problem in CAMs. Certain class activations often exhibit large false positive regions, where objects are incorrectly merged with their surroundings, as shown in Fig. 8 (1st row). For instance, the classes ’bird’ and ’tree branches’, ’train’ and ’railways’, etc., frequently appear together in VOC. We refer to this issue as the coexistence problem. We hypothesize that the coexistence problem can be attributed to three factors: i) Objects that coexist in images, such as ’tree branches’, are not annotated w.r.t. the weak labels, which makes it challenging for a model to semantically distinguish coexistence. ii) Training datasets lack sufficient samples for such classes. iii) High-level feature maps, though rich in abstract representations and semantic information, lack essential low-level features such as edges, textures, and colors [24]. Thus, CAMs generated from the last layer are poor in low-level information for segmenting objects. Conversely, segmenting objects with high-level semantics is hindered due to factors i) and ii).

Figure 4: $\mathcal{M}$ and $\mathcal{M}^{\dagger}$ Comparisons. (a): mIoU vs. time-steps for $\mathcal{M}$ and $\mathcal{M}^{\dagger}$ on VOC val. (b): same as (a) but filtered by perplexity. (c): cases of the coexistence issue in $\mathcal{M}$ but not in $\mathcal{M}^{\dagger}$.

Contrastive Separation in CAMs. We posit that the effective usage of low-level information can alleviate the coexistence problem.
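Returning to the dynamic threshold of Eq. (6): one simple way to realise the rolling GMM fit described above is sketched below, using scikit-learn's `GaussianMixture` and taking the threshold where the two components' posteriors cross. The class name, default queue length, grid search, and posterior-crossing rule are illustrative assumptions; the paper only specifies a two-component GMM fitted over a queue of recent CAM confidences.

```python
import numpy as np
from collections import deque
from sklearn.mixture import GaussianMixture

class DynamicThreshold:
    """Rolling estimate of the foreground/background threshold xi (cf. Eq. 6)."""

    def __init__(self, queue_len=100, init_xi=0.5):
        self.queue = deque(maxlen=queue_len)   # oldest batch is dropped automatically
        self.xi = init_xi

    def update(self, conf):
        """conf: numpy array of per-pixel max CAM activations for the current batch."""
        self.queue.append(conf.reshape(-1))
        samples = np.concatenate(list(self.queue))[:, None]

        gmm = GaussianMixture(n_components=2, covariance_type='full',
                              random_state=0).fit(samples)

        # Candidate thresholds between the two component means; pick the point where
        # the posterior responsibility of either component reaches 0.5, i.e. where
        # the weighted component densities intersect.
        lo, hi = np.sort(gmm.means_.ravel())
        grid = np.linspace(lo, hi, 200)[:, None]
        resp = gmm.predict_proba(grid)
        idx = int(np.argmin(np.abs(resp[:, 0] - 0.5)))
        self.xi = float(grid[idx, 0])
        return self.xi
```

In a training loop, `update` would be called once per batch with the detached confidences of $\mathcal{M}^{\prime}$ (optionally subsampled for speed), and the returned threshold is then used in Eq. (3) and Eq. (4).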
Since shallower-layer feature maps contain more low-level information, we propose to extract CAMs from earlier in the backbone, denoted $\mathcal{M}^{\dagger}$, and present a comparison with $\mathcal{M}$ in Fig. 4, showing that substituting $\mathcal{M}$ with $\mathcal{M}^{\dagger}$ is not feasible due to the lower mIoU upper bound of $\mathcal{M}^{\dagger}$. If we consider the confident regions in $\mathcal{M}$ and $\mathcal{M}^{\dagger}$, i.e. filter by a low-pass perplexity $\mathcal{P}$, $\\{\mathcal{M}^{\dagger}_{x,y}\\!\mid\\!\mathcal{P}_{x,y}\\!\leq\\!\epsilon\\}$ are better than $\\{\mathcal{M}_{x,y}\\!\mid\\!\mathcal{P}_{x,y}\\!\leq\\!\epsilon\\}$, as shown in Fig. 4(b), where $\epsilon$ represents a low-pass coefficient. Fig. 4(c) illustrates the presence of coexistence issues in $\mathcal{M}$ but their absence in $\mathcal{M}^{\dagger}$. These findings suggest that $\mathcal{M}^{\dagger}$ is worse than $\mathcal{M}$ in general, but better for those regions with low perplexity. In CoSA, we propose to regularize $\mathcal{M}$ by $\mathcal{M}^{\dagger\prime}$ (from AN). We define positive $\mathcal{R}^{+}_{i,j}$ and negative $\mathcal{R}^{-}_{i,j}$ regions as $\displaystyle\mathcal{R}^{+}_{i,j}$ $\displaystyle=\Big{\\{}(x,y)\mid\mathcal{P}_{x,y}\leq\epsilon,~{}\hat{y}^{\text{CPL}}_{x,y}=\hat{y}^{\text{CPL}}_{i,j},(x,y)\neq(i,j)\Big{\\}}$ (9) $\displaystyle\mathcal{R}^{-}_{i,j}$ $\displaystyle=\Big{\\{}(x,y)\mid\mathcal{P}_{x,y}\leq\epsilon,~{}\hat{y}^{\text{CPL}}_{x,y}\neq\hat{y}^{\text{CPL}}_{i,j}\Big{\\}}~{},$ where $(i,j)\\!\in\\!\Omega$, $\Omega\\!=\\!\\{(x,y)\\!\mid\\!\mathcal{P}_{x,y}\\!\leq\\!\epsilon\\}$ is the low-perplexity region in $\mathcal{M}^{\dagger\prime}$, and $\hat{y}^{\text{CPL}}$ represents the CPL of $\mathcal{M}^{\dagger\prime}$. Then, we arrive at the contrastive separation loss for $\mathcal{M}$: $\small\mathcal{L}_{\text{csc}}=-\frac{1}{\lvert\Omega\rvert}\sum_{i,j\in\Omega}\frac{1}{\lvert\mathcal{R}^{+}_{i,j}\rvert}\sum_{x,y\in\mathcal{R}^{+}_{i,j}}\log\frac{L_{x,y}^{i,j}}{L_{x,y}^{i,j}+K_{n,m}^{i,j}}~{},$ (10) where $L_{x,y}^{i,j}\\!=\\!\exp(l_{d}(\mathcal{M}_{i,j},\mathcal{M}_{x,y})/\tau)$, $l_{d}(a,b)$ measures the distance between $a$ and $b$, $\tau$ denotes the InfoNCE loss [47] temperature, and $K_{n,m}^{i,j}\\!=\\!\sum_{n,m\in\mathcal{R}^{-}_{i,j}}L_{n,m}^{i,j}$.

Overall Objectives. The objectives encompass the aforementioned losses and a further $\mathcal{L}_{\text{c2s}}^{\mathcal{M}^{\dagger}}$ to stabilize training and accelerate convergence, resulting in the CoSA objective: $\small\mathcal{L}_{\text{CoSA}}\\!\\!=\\!\mathcal{L}_{\text{cls}}\\!+\mathcal{L}_{\text{cls}}^{\mathcal{M}^{\dagger}}\\!\\!+\\!\lambda_{\text{c2s}}\big{(}\mathcal{L}_{\text{c2s}}\\!+\\!\mathcal{L}_{\text{c2s}}^{\mathcal{M}^{\dagger}}\big{)}\\!+\\!\lambda_{\text{s2c}}\mathcal{L}_{\text{s2c}}+\\!\lambda_{\text{csc}}\mathcal{L}_{\text{csc}}.$ (11)

## 4 Experiments

### 4.1 Experiment Details and Results

Datasets. We evaluate on two benchmarks: PASCAL VOC 2012 [19] and MS-COCO 2014 [41]. VOC encompasses 20 categories with train, val, and test splits of 1464, 1449, and 1456 images. Following WSSS practice [2, 3, 65], SBD [23] is used to augment the train split to 10,582 images. COCO contains 80 categories with train and val splits of approx. 82K and 40K images. Our model is trained and evaluated using _only_ the image-level classification labels (not available for the VOC test split and so not used in its evaluation), employing mIoU as the evaluation metric.

Implementation Details.
Following [53], we use ImageNet pretrained ViT-base (ViT-B) [17] as the encoder. For classification, we use global max pooling (GMP) [51] and the CAM approach [72]. For the segmentation decoder, we use LargeFOV [10], as with [53]. ON is trained with AdamW [46]. The learning rate is set to 6E-5 in tandem with polynomial decay. AN is updated with a momentum of $0.9994$. For preprocessing, the images are cropped to $448^{2}$, then weak/strong augmentations are applied (see Supp. Materials). The perplexity constants $(\lambda_{\alpha},\lambda_{\beta})$ are set to $(0.8,1)$, the GMM-fitting queue length is $100$, and the softmax temperature $\tau$ is $0.01$. The low-perplexity threshold $\epsilon$ is set to $1$ and the loss weight factors $(\lambda_{\text{c2s}},\lambda_{\text{s2c}},\lambda_{\text{csc}})$ to $(0.1,0.05,0.1)$.

Method | Backbone | train | val
---|---|---|---
RRM [70] AAAI’2020 | WR38 | – | 65.4
1Stage [3] CVPR’2020 | WR38 | 66.9 | 65.3
AFA [52] CVPR’2022 | MiT-B1 | 68.7 | 66.5
MCT [65] CVPR’2022 | MCT | 69.1 | –
ViT-PCM [51] ECCV’2022 | ViT-B | 71.4 | 69.3
Xu _et al_. [66] CVPR’2023 | ViT-B | 66.3 | –
ACR-ViT [31] CVPR’2023 | ViT-B | 70.9 | –
CLIP-ES [42] CVPR’2023 | ViT-B | 75.0 | –
ToCo [53] CVPR’2023 | ViT-B | 73.6 | 72.3
CoSA | ViT-B | 78.5 | 76.4
CoSA∙ | ViT-B | 78.9 | 77.2

Table 1: Comparisons of CPL. CAM pseudo-label evaluation on the VOC dataset. Backbone denotes the encoder used for generating the CAMs. $\bullet$ represents the ensemble of $\mathcal{M}^{\prime}$ and $\mathcal{M}^{\dagger\prime}$ in CoSA.

Methods | Sup. | Net. | VOC val | VOC test | COCO val
---|---|---|---|---|---
Supervised Upperbounds. | | | | |
Deeplab [10] TPAMI’2017 | $\mathcal{F}$ | R101 | 77.6 | 79.7 | –
WideRes38 [61] PR’2019 | $\mathcal{F}$ | WR38 | 80.8 | 82.5 | –
ViT-Base [17] ICLR’2021 | $\mathcal{F}$ | ViT-B | 80.5 | 81.0 | –
UperNet-Swin [44] ICCV’2021 | $\mathcal{F}$ | SWIN | 83.4 | 83.7 | –
Multi-stage Methods. | | | | |
L2G [26] CVPR’2022 | $\mathcal{I}+\mathcal{S}$ | R101 | 72.1 | 71.7 | 44.2
Du _et al_. [18] CVPR’2022 | $\mathcal{I}+\mathcal{S}$ | R101 | 72.6 | 73.6 | –
CLIP-ES [42] CVPR’2023 | $\mathcal{I}+\mathcal{L}$ | R101 | 73.8 | 73.9 | 45.4
ESOL [39] NeurIPS’2022 | $\mathcal{I}$ | R101 | 69.9 | 69.3 | 42.6
BECO [50] CVPR’2023 | $\mathcal{I}$ | R101 | 72.1 | 71.8 | 45.1
Mat-Label [59] ICCV’2023 | $\mathcal{I}$ | R101 | 73.0 | 72.7 | 45.6
CoSA-MS | $\mathcal{I}$ | R101 | 76.5 | 75.3[1] | 50.9
Xu _et al_. [66] CVPR’2023 | $\mathcal{I}+\mathcal{L}$ | WR38 | 72.2 | 72.2 | 45.9
W-OoD [35] CVPR’2022 | $\mathcal{I}$ | WR38 | 70.7 | 70.1 | –
MCT [65] CVPR’2022 | $\mathcal{I}$ | WR38 | 71.9 | 71.6 | 42.0
ex-ViT [69] PR’2023 | $\mathcal{I}$ | WR38 | 71.2 | 71.1 | 42.9
ACR-ViT [31] CVPR’2023 | $\mathcal{I}$ | WR38 | 72.4 | 72.4 | –
MCT+OCR [15] CVPR’2023 | $\mathcal{I}$ | WR38 | 72.7 | 72.0 | 42.0
CoSA-MS | $\mathcal{I}$ | WR38 | 76.6 | 74.9[2] | 50.1
ReCAM [14] CVPR’2022 | $\mathcal{I}$ | SWIN | 70.4 | 71.7 | 47.9
LPCAM [12] CVPR’2023 | $\mathcal{I}$ | SWIN | 73.1 | 73.4 | 48.3
CoSA-MS | $\mathcal{I}$ | SWIN | 81.4 | 78.4[3] | 53.7
Single-stage (End-to-end) Methods. | | | | |
1Stage [3] CVPR’2020 | $\mathcal{I}$ | WR38 | 62.7 | 64.3 | –
RRM [70] AAAI’2020 | $\mathcal{I}$ | WR38 | 62.6 | 62.9 | –
AFA [52] CVPR’2022 | $\mathcal{I}$ | MiT-B1 | 66.0 | 66.3 | 38.9
RRM [70]† AAAI’2020 | $\mathcal{I}$ | ViT-B | 63.1 | 62.4 | –
ViT-PCM [51] ECCV’2022 | $\mathcal{I}$ | ViT-B | 69.3 | – | 45.0
ToCo [53] CVPR’2023 | $\mathcal{I}$ | ViT-B | 71.1 | 72.2 | 42.3
CoSA | $\mathcal{I}$ | ViT-B | 76.2 | 75.1[4] | 51.0
CoSA∗ | $\mathcal{I}$ | ViT-B | 76.4 | 75.2[5] | 51.1

Table 2: Weakly Supervised Semantic Segmentation Results. Sup.: supervision type. Net.: segmentation backbone. $\mathcal{F}$: Fully supervised, $\mathcal{I}$: Image-level labels, $\mathcal{S}$: Saliency maps, $\mathcal{L}$: language models. $*$ represents CRF [10] postprocessing results.

CAM Quality Comparison. Tab. 1 shows CoSA CPL results on VOC compared with existing WSSS methods, using our $\hat{\mathcal{Y}}^{\text{CPL}}$ ($\xi\\!\\!=\\!\\!0.5$). Our method yields 78.5% and 76.4% mIoU on train and val. Notably, an ensemble of $\mathcal{M}^{\prime}$ and $\mathcal{M}^{\dagger\prime}$ improves performance to 78.9% and 77.2%, suggesting the activation of $\mathcal{M}^{\prime}$ is orthogonal to that of $\mathcal{M}^{\dagger\prime}$.

Semantic Segmentation Comparison. We compare our method with existing SOTA WSSS methods on VOC and COCO for semantic segmentation in Tab. 2. CoSA achieves 76.2% and 75.1% on VOC12 val and test, respectively, surpassing the highest-performing single-stage model (ToCo) by 5.1% and 2.9%, as well as all multi-stage methods, including those with additional supervision. In the COCO evaluation, CoSA consistently outperforms other approaches, demonstrating a significant increase of 8.7% over the top-performing single-stage method. Further, there is also a 2.7% improvement over the leading multi-stage method [12]. While our primary goal is to provide an end-to-end WSSS solution, we also offer a multi-stage version of CoSA, denoted as CoSA-MS in Tab. 2, where various standalone segmentation networks are trained using our CPL. Our CoSA-MS models also attain SOTA performance in multi-stage scenarios.

Qualitative Comparison. Fig. 5 presents CAM and segmentation visualizations, comparing with recent methods: MCT, BECO, and ToCo. As shown, our method can generate improved CAMs and produce well-aligned segmentation, exhibiting superior results in challenging segmentation problems with intra-class variation and occlusions. In addition, CoSA performs well w.r.t. the coexistence cases (Fig. 5 R1, R2), while existing methods struggle. Moreover, CoSA reveals limitations in the GT segmentation (Fig. 5 R4).

Figure 5: Qualitative Comparison. The results are reported on the val splits of VOC (R1–R3) and COCO (R4–R6). The official codebases and provided weights for MCT [65], BECO [50], and ToCo [53] are used for this comparison. (best viewed under zoom; see Supp. Materials for more high-res comparisons).

### 4.2 Ablation Studies

CoSA Module Analysis. We begin by employing CAMs directly as the supervision signal for segmentation, akin to [70], albeit without refinement, and gradually apply CoSA modules to this baseline. As shown in Tab. 3(a), the mIoUs progressively improve with the addition of our components. Further, we examine the efficacy of each CoSA component. As shown in Tab. 3(b), the elimination of each component results in deteriorated performance, most notably for CSC.
(a) mIoU (inc.)
Base. GC SA RAW CSC DT | VOC | COCO
---|---|---
✓ | 55.96 | 37.32
✓ ✓ | 63.09 (+7.13) | 42.55 (+5.23)
✓ ✓ ✓ | 64.41 (+8.45) | 43.92 (+6.60)
✓ ✓ ✓ ✓ | 68.22 (+12.26) | 45.39 (+8.07)
✓ ✓ ✓ ✓ | 71.66 (+15.70) | 47.10 (+9.78)
✓ ✓ ✓ ✓ ✓ | 75.54 (+19.58) | 49.67 (+12.35)
✓ ✓ ✓ ✓ ✓ ✓ | 76.19 (+20.23) | 51.00 (+13.68)

(b) mIoU (dec.)
CoSA GC SA RAW CSC DT | VOC | COCO
---|---|---
✓ | 76.19 | 51.00
✓ ✗ | 75.54 (-0.65) | 49.67 (-1.33)
✓ ✗ | 69.89 (-6.30) | 45.95 (-5.05)
✓ ✗ | 72.45 (-3.74) | 47.83 (-3.17)
✓ ✗ | 72.10 (-4.09) | 49.04 (-1.96)
✓ ✗ | 74.12 (-2.07) | 49.67 (-1.33)

Table 3: Ablation Study on Contribution of Each Component. (a): gradually add proposed components to the baseline. (b): systematically exclude components from CoSA. GC: Guided CAMs, SA: Swapping Assignments, RAW: Reliability based Adaptive Weighting, CSC: Contrastive Separation in CAMs, and DT: Dynamic Threshold. mIoU is reported on the PASCAL VOC12 and COCO val splits.

(a)
Source | Detach | train | val
---|---|---|---
GT | None | 83.99 | 80.16
NO | – | 72.28 | 71.38
SPL | $F$ | 74.19 | 73.36
SPL | $W_{\text{fc}}$ | 78.05 | 76.15
SPL | None | 78.54 | 76.37

(b)
Method | C-mIoU | mIoU
---|---|---
FPR [8] | 53.09 | 53.34
MCT [65] | 58.46 | 61.24
ToCo [53] | 63.62 | 72.33
w/o CSC | 62.61 | 67.82
w/ CSC | 82.34 | 76.37

Table 4: Ablation Study of GC & CSC. (a): CPL performance comparison on VOC for CAMs. Detach: stop gradient in GC for the feature map $F$ or the FC weights $W_{\text{fc}}$. Source: guidance sources. (b): CPL performance comparison on VOC. FPR, MCT and ToCo results are based on provided code and weights. C-mIoU: mIoU for classes with coexistence issues.

Figure 6: Ablative Study of SA. The performance of SPL (left) and CPL (right) w.r.t. iterations on the VOC val set is shown for CoSA with or without SA.

Figure 7: Ablation Study of RAW. (left) boxplot of mIoU, perplexity and MAE to (1,0) for individual CPLs on VOC val. (right) perplexity reduction over time.

Figure 8: Coexistence Problems & Effect of CSC. The class activations for bird, train, airplane, and boat are presented from left to right. (best viewed under zoom)

Impact of Guided CAMs. We evaluate the impact of including guided CAMs w.r.t. CAM quality, comparing a baseline using vanilla CAMs [72] as CPL, following [70, 52], with the proposed guided CAMs. As shown in Tab. 4(a), our guided CAMs notably enhance CPL quality by 6.26% and 4.99% on the train and val splits. Further, we conduct experiments to ascertain which of the two CAM components, the feature map $F$ and the classification weights $W_{\text{fc}}$, exerts the greater impact on guiding CAMs. As shown, disconnecting $F$ from the gradient chain results in 74.19% and 73.36%, while detaching $W_{\text{fc}}$ decreases the results slightly to 78.05% and 76.37%. This suggests that guiding CAMs primarily optimizes the feature maps, verifying our initial hypothesis in Sec. 3.1 that the inherently non-deterministic feature map contributes to the stochasticity of CAMs.

Impact of Swapping Assignments (SA). Tab. 3(b) suggests that eliminating SA results in significant mIoU decreases, highlighting the importance of this training strategy. Further examination of the ON and AN w.r.t. SPL and CPL indicates that, in later training stages, the AN consistently outperforms the ON for both SPL and CPL, as shown in Fig. 6, due to the AN performing a form of model ensembling similar to Polyak-Ruppert averaging [49, 54]. We observe a noticeable disparity of mIoUs between the two ONs (solid orange line vs. solid blue line in Fig. 6), which may be attributed to the superior quality of CPL and SPL from the AN facilitating a more robust ON for CoSA.
The momentum framework, originally introduced to mitigate noise and fluctuations of the online learning target [22, 6], is used for information exchange across CAMs and segmentation in CoSA. To the best of our knowledge, we are the first to apply and demonstrate the efficacy of this type of training scheme in WSSS.

Impact of RAW. Tab. 3(b) shows a notable mIoU reduction without RAW. We conduct further studies to investigate its effect on perplexity reduction. The boxplot in Fig. 7 suggests that RAW leads to higher mIoU and lower perplexity. Fig. 7(right) illustrates a faster decrease in perplexity when RAW is used, affirming its impact on perplexity reduction.

Impact of CSC. Our CSC is introduced to address the coexistence issue. We establish C-mIoU to measure the CAM quality for those coexistence-affected classes. As shown in Tab. 4(b), applying CSC yields a boost in C-mIoU and mIoU, which surpass the existing methods. Some visual examples demonstrating these enhancements are given in Fig. 8.

Impact of Dynamic Threshold. We evaluate CoSA using several predetermined thresholds, comparing them with the dynamic threshold on the VOC val split (see Supp. Materials for results). The performance is sensitive to the threshold, but dynamic thresholding achieves a 0.65% performance increase over the best manual finetuning while saving 80% of the hyper-parameter searching time.

## 5 Conclusion

This paper presents an end-to-end WSSS method: Co-training with Swapping Assignments (CoSA), which eliminates the need for CAM refinement and enables concurrent CAM and segmentation optimization. Our empirical study reveals the non-deterministic behavior of CAMs and shows that proper guidance can mitigate such stochasticity, leading to substantial quality enhancement. We propose explicit CAM optimization leveraging segmentation pseudo-labels in our approach, where a dual-stream model comprising an online network for predicting CAMs and segmentation masks, and an ancillary assignment network providing swapped assignments (SPL and CPL) for training, is introduced. We further propose three techniques within this framework: RAW, designed to mitigate the issue of unreliable pseudo-labels; contrastive separation, aimed at resolving coexistence problems; and a dynamic threshold search algorithm. Incorporating these techniques, CoSA outperforms all SOTA methods on both VOC and COCO WSSS benchmarks while achieving an exceptional speed-accuracy trade-off.

## References

* [1] Jiwoon Ahn, Sunghyun Cho, and Suha Kwak. Weakly supervised learning of instance segmentation with inter-pixel relations. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 2209–2218, 2019.
* [2] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4981–4990, 2018.
* [3] Nikita Araslanov and Stefan Roth. Single-stage semantic segmentation from image labels. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4253–4262, 2020.
* [4] Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei. What’s the point: Semantic segmentation with point supervision. In European Conference on Computer Vision (ECCV), pages 549–565. Springer, 2016.
* [5] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Neural Information Processing Systems (NeurIPS), 33:9912–9924, 2020.
* [6] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 9650–9660, 2021. * [7] Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In IEEE Winter Conference on Applications of Computer Vision (WACV), pages 839–847. IEEE, 2018. * [8] Liyi Chen, Chenyang Lei, Ruihuang Li, Shuai Li, Zhaoxiang Zhang, and Lei Zhang. Fpr: False positive rectification for weakly supervised semantic segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 1108–1118, 2023. * [9] Liyi Chen, Weiwei Wu, Chenchen Fu, Xiao Han, and Yuntao Zhang. Weakly supervised semantic segmentation with boundary exploration. In European Conference on Computer Vision (ECCV), pages 347–362. Springer, 2020. * [10] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 40(4):834–848, 2017. * [11] Qi Chen, Lingxiao Yang, Jian-Huang Lai, and Xiaohua Xie. Self-supervised image-specific prototype exploration for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4288–4298, 2022. * [12] Zhaozheng Chen and Qianru Sun. Extracting class activation maps from non-discriminative features as well. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 3135–3144, 2023. * [13] Zhang Chen, Zhiqiang Tian, Jihua Zhu, Ce Li, and Shaoyi Du. C-cam: Causal cam for weakly supervised semantic segmentation on medical image. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 11676–11685, 2022. * [14] Zhaozheng Chen, Tan Wang, Xiongwei Wu, Xian-Sheng Hua, Hanwang Zhang, and Qianru Sun. Class re-activation maps for weakly-supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 969–978, 2022. * [15] Zesen Cheng, Pengchong Qiao, Kehan Li, Siheng Li, Pengxu Wei, Xiangyang Ji, Li Yuan, Chang Liu, and Jie Chen. Out-of-candidate rectification for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 23673–23684, 2023. * [16] Jifeng Dai, Kaiming He, and Jian Sun. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 1635–1643, 2015. * [17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. * [18] Ye Du, Zehua Fu, Qingjie Liu, and Yunhong Wang. Weakly supervised semantic segmentation by pixel-to-prototype contrast. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4320–4329, 2022. * [19] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision (IJCV), 88:303–338, 2010. * [20] Junsong Fan, Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, and Jun Xiao. 
Cian: Cross-image affinity net for weakly supervised semantic segmentation. In AAAI Conference on Artificial Intelligence (AAAI), volume 34, pages 10762–10769, 2020. * [21] Wei Gao, Fang Wan, Xingjia Pan, Zhiliang Peng, Qi Tian, Zhenjun Han, Bolei Zhou, and Qixiang Ye. Ts-cam: Token semantic coupled attention map for weakly supervised object localization. In IEEE International Conference on Computer Vision (ICCV), pages 2886–2895, 2021. * [22] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Neural Information Processing Systems (NeurIPS), 33:21271–21284, 2020. * [23] Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In IEEE International Conference on Computer Vision (ICCV), pages 991–998. IEEE, 2011. * [24] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentation and fine-grained localization. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 447–456, 2015. * [25] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. * [26] Peng-Tao Jiang, Yuqi Yang, Qibin Hou, and Yunchao Wei. L2g: A simple local-to-global knowledge transfer framework for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 16886–16896, 2022. * [27] Tsung-Wei Ke, Jyh-Jing Hwang, and Stella Yu. Universal weakly supervised segmentation by pixel-to-segment contrastive learning. In International Conference on Learning Representations (ICLR), 2020. * [28] Anna Khoreva, Rodrigo Benenson, Jan Hosang, Matthias Hein, and Bernt Schiele. Simple does it: Weakly supervised instance and semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 876–885, 2017. * [29] Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. Neural Information Processing Systems (NeurIPS), 24, 2011. * [30] Hyeokjun Kweon, Sung-Hoon Yoon, Hyeonseong Kim, Daehee Park, and Kuk-Jin Yoon. Unlocking the potential of ordinary classifier: Class-specific adversarial erasing framework for weakly supervised semantic segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 6994–7003, 2021. * [31] Hyeokjun Kweon, Sung-Hoon Yoon, and Kuk-Jin Yoon. Weakly supervised semantic segmentation via adversarial learning of classifier and reconstructor. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 11329–11339, 2023. * [32] Jungbeom Lee, Eunji Kim, Sungmin Lee, Jangho Lee, and Sungroh Yoon. Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 5267–5276, 2019. * [33] Jungbeom Lee, Eunji Kim, Jisoo Mok, and Sungroh Yoon. Anti-adversarially manipulated attributions for weakly supervised semantic segmentation and object localization. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. * [34] Jungbeom Lee, Eunji Kim, and Sungroh Yoon. Anti-adversarially manipulated attributions for weakly and semi-supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4071–4080, 2021. 
* [35] Jungbeom Lee, Seong Joon Oh, Sangdoo Yun, Junsuk Choe, Eunji Kim, and Sungroh Yoon. Weakly supervised semantic segmentation using out-of-distribution data. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 16897–16906, 2022. * [36] Jungbeom Lee, Jihun Yi, Chaehun Shin, and Sungroh Yoon. Bbam: Bounding box attribution map for weakly supervised semantic and instance segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 2643–2652, 2021. * [37] Seungho Lee, Minhyun Lee, Jongwuk Lee, and Hyunjung Shim. Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 5495–5505, 2021. * [38] Jing Li, Junsong Fan, and Zhaoxiang Zhang. Towards noiseless object contours for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 16856–16865, 2022. * [39] Jinlong Li, Zequn Jie, Xu Wang, Lin Ma, et al. Expansion and shrinkage of localization for weakly-supervised semantic segmentation. In Neural Information Processing Systems (NeurIPS), 2022. * [40] Yi Li, Yiqun Duan, Zhanghui Kuang, Yimin Chen, Wayne Zhang, and Xiaomeng Li. Uncertainty estimation via response scaling for pseudo-mask noise mitigation in weakly-supervised semantic segmentation. In AAAI Conference on Artificial Intelligence (AAAI), volume 36, pages 1447–1455, 2022. * [41] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), pages 740–755. Springer, 2014. * [42] Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, and Xiaofei He. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 15305–15314, 2023. * [43] Sheng Liu, Kangning Liu, Weicheng Zhu, Yiqiu Shen, and Carlos Fernandez-Granda. Adaptive early-learning correction for segmentation from noisy annotations. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 2606–2616, 2022. * [44] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE International Conference on Computer Vision (ICCV), pages 10012–10022, 2021. * [45] Zhengzhe Liu, Xiaojuan Qi, and Chi-Wing Fu. One thing one click: A self-training approach for weakly supervised 3d semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 1726–1736, 2021. * [46] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. * [47] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. * [48] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. * [49] Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. SIAM journal on control and optimization, 30(4):838–855, 1992. * [50] Shenghai Rong, Bohai Tu, Zilei Wang, and Junjie Li. Boundary-enhanced co-training for weakly supervised semantic segmentation. 
In IEEE Computer Vision and Pattern Recognition (CVPR), pages 19574–19584, 2023. * [51] Simone Rossetti, Damiano Zappia, Marta Sanzari, Marco Schaerf, and Fiora Pirri. Max pooling with vision transformers reconciles class and shape in weakly supervised semantic segmentation. In European Conference on Computer Vision (ECCV), pages 446–463. Springer, 2022. * [52] Lixiang Ru, Yibing Zhan, Baosheng Yu, and Bo Du. Learning affinity from attention: end-to-end weakly-supervised semantic segmentation with transformers. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 16846–16855, 2022. * [53] Lixiang Ru, Heliang Zheng, Yibing Zhan, and Bo Du. Token contrast for weakly-supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 3093–3102, 2023. * [54] David Ruppert. Efficient estimations from a slowly convergent robbins-monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988. * [55] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV), pages 618–626, 2017. * [56] Wataru Shimoda and Keiji Yanai. Self-supervised difference detection for weakly-supervised semantic segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 5208–5217, 2019. * [57] Chunfeng Song, Yan Huang, Wanli Ouyang, and Liang Wang. Box-driven class-wise region masking and filling rate guided loss for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 3136–3145, 2019. * [58] Kunyang Sun, Haoqing Shi, Zhengming Zhang, and Yongming Huang. Ecs-net: Improving weakly supervised semantic segmentation by using connections between class activation maps. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 7283–7292, 2021. * [59] Changwei Wang, Rongtao Xu, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang. Treating pseudo-labels generation as image matting for weakly supervised semantic segmentation. In IEEE International Conference on Computer Vision (ICCV), pages 755–765, 2023. * [60] Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 12275–12284, 2020. * [61] Zifeng Wu, Chunhua Shen, and Anton Van Den Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. Pattern Recognition, 90:119–133, 2019. * [62] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In European Conference on Computer Vision (ECCV), pages 418–434, 2018. * [63] Jinheng Xie, Xianxu Hou, Kai Ye, and Linlin Shen. Clims: Cross language image matching for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4483–4492, 2022. * [64] Jinheng Xie, Jianfeng Xiang, Junliang Chen, Xianxu Hou, Xiaodong Zhao, and Linlin Shen. C2am: Contrastive learning of class-agnostic activation map for weakly supervised object localization and semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 989–998, 2022. * [65] Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, and Dan Xu. Multi-class token transformer for weakly supervised semantic segmentation. 
In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4310–4319, 2022. * [66] Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, and Dan Xu. Learning multi-modal class-specific tokens for weakly supervised dense object localization. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 19596–19605, 2023. * [67] Rongtao Xu, Changwei Wang, Jiaxi Sun, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang. Self correspondence distillation for end-to-end weakly-supervised semantic segmentation. In AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2023. * [68] Sung-Hoon Yoon, Hyeokjun Kweon, Jegyeong Cho, Shinjeong Kim, and Kuk-Jin Yoon. Adversarial erasing framework via triplet with gated pyramid pooling layer for weakly supervised semantic segmentation. In European Conference on Computer Vision (ECCV), pages 326–344. Springer, 2022. * [69] Lu Yu, Wei Xiang, Juan Fang, Yi-Ping Phoebe Chen, and Lianhua Chi. ex-vit: A novel explainable vision transformer for weakly supervised semantic segmentation. Pattern Recognition, page 109666, 2023. * [70] Bingfeng Zhang, Jimin Xiao, Yunchao Wei, Mingjie Sun, and Kaizhu Huang. Reliability does matter: An end-to-end weakly supervised semantic segmentation approach. In AAAI Conference on Artificial Intelligence (AAAI), volume 34, pages 12765–12772, 2020. * [71] Xiaolin Zhang, Yunchao Wei, Jiashi Feng, Yi Yang, and Thomas S Huang. Adversarial complementary learning for weakly supervised object localization. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 1325–1334, 2018. * [72] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 2921–2929, 2016. * [73] Tianfei Zhou, Meijie Zhang, Fang Zhao, and Jianwu Li. Regional semantic contrast and aggregation for weakly supervised semantic segmentation. In IEEE Computer Vision and Pattern Recognition (CVPR), pages 4299–4309, 2022.

## Appendix A Additional Results

### A.1 Hyper-parameter Finetuning

Here, we examine the impact of varying CoSA's hyper-parameters around the values determined by our finetuning. Each hyper-parameter is finetuned with the remaining parameters fixed at their determined optimal values.

Loss Weights. We demonstrate the finetuning of the Seg2CAM and CAM2Seg loss weights in Tab. 5(a) and (b). A significant mIoU decrease is observed as $\lambda_{\text{c2s}}$ decreases, since this reduces the influence of the segmentation branch, as expected. The mIoU reaches its peak when $\lambda_{\text{s2c}}\\!=\\!0.05$ and $\lambda_{\text{c2s}}\\!=\\!0.1$.

Low-perplexity Filter. We finetune the coefficient of the low-pass perplexity filter $\epsilon$, described in eq. (9) of the main paper. The corresponding findings are illustrated in Tab. 5(c). Optimum performance is obtained when $\epsilon$ is set to $1$; either decreasing or increasing this value impairs the performance of our model.

EMA Momentum. Here, the momentum used for updating the assignment network is finetuned. Results presented in Tab. 5(d) indicate that the optimal performance is achieved when $m=0.9994$. Additionally, we find that setting $m=1$ freezes the assignment network, which breaks the training of the online network and leads to framework collapse.

Fixed Threshold vs. Dynamic Threshold. In this study, we evaluate CoSA with predetermined fixed thresholds. The results are presented in Fig. 9. As shown, the performance peaks when this threshold is set to $0.45$, with an mIoU of $75.54\%$.
However, our dynamic threshold outperforms the best manual finetuning by $0.65\%$. Despite the additional $10\%$ computation overhead it incurs, our threshold searching algorithm obviates time-consuming finetuning efforts, resulting in a nearly 80% reduction in hyper-parameter searching time in this case, and a reduction of $(1-1.1n^{-1})\times 100\%$ in general when $n$ candidate thresholds are considered. In addition, the adoption of dynamic thresholding can enhance the generalizability to novel datasets.

(a) Seg2CAM weight $\lambda_{\text{s2c}}$

$\lambda_{\text{s2c}}$ | mIoU
---|---
$0.2$ | 73.79
$0.1$ | 74.67
$0.05$ | 76.19
$0.025$ | 75.25
$0.0125$ | 74.33

(b) CAM2Seg weight $\lambda_{\text{c2s}}$

$\lambda_{\text{c2s}}$ | mIoU
---|---
$0.4$ | 74.67
$0.2$ | 75.56
$0.1$ | 76.19
$0.05$ | 73.95
$0.025$ | 61.55

(c) Perplexity filter $\epsilon$

$\epsilon$ | mIoU
---|---
$\infty$ | 73.66
$2$ | 75.52
$1$ | 76.19
$0.5$ | 74.30
$0.1$ | 70.63

(d) Momentum $m$

$m$ | mIoU
---|---
$0.9990$ | 73.79
$0.9992$ | 75.40
$0.9994$ | 76.19
$0.9996$ | 75.58
$0.9999$ | 71.42
$1.0000$ | 15.99

Table 5: Hyper-parameters Finetuning Results. Parameter searching for (a) loss weight for Seg2CAM $\lambda_{\text{s2c}}$; (b) loss weight for CAM2Seg $\lambda_{\text{c2s}}$; (c) low-pass perplexity filter coefficient $\epsilon$; (d) EMA momentum $m$ for updating the assignment network. mIoU represents the semantic segmentation result on the PASCAL VOC val split.

Figure 9: Threshold finetuning. (left) The dynamic threshold determined during training. (right) mIoU comparison of fixed thresholds vs. the proposed dynamic threshold on VOC val.

### A.2 Per-class Segmentation Comparisons

We show the per-class semantic segmentation results on the VOC val and test splits as well as the COCO val split.

Comparisons on VOC. Tab. 11 illustrates the CoSA per-class mIoU results compared with recent works: AdvCAM [33], MCT [65], ToCo [53], Xu _et al_. [66], BECO [50]. For a fair comparison, we include CoSA with CRF [10] postprocessing, denoted as CoSA∗, as used by the other SOTA models. Notably, CoSA dominates in 10 out of 21 classes. In particular, categories like boat ($5.9\%\\!\uparrow$), chair ($8.2\%\\!\uparrow$), and sofa ($17.2\%\\!\uparrow$) demonstrate a substantial lead over the SOTA models. On the VOC test split (depicted in Tab. 12), we still observe its superiority over other SOTA methods, where CoSA dominates in 15 out of 21 classes.

Comparisons on COCO. We compare CoSA with recent WSSS works in terms of individual class performance on the COCO val set. As illustrated in Tab. 13, CoSA outperforms its counterparts in 56 out of 81 classes. Particularly, classes such as truck ($10.6\%\\!\uparrow$), tie ($14.3\%\\!\uparrow$), kite ($12.4\%\\!\uparrow$), baseball glove ($20.3\%\\!\uparrow$), knife ($14.5\%\\!\uparrow$), carrot ($13.0\%\\!\uparrow$), donut ($10.0\%\\!\uparrow$), couch ($13.9\%\\!\uparrow$), oven ($13.0\%\\!\uparrow$), and toothbrush ($10.0\%\\!\uparrow$) exhibit remarkable leading performance.

### A.3 Further Qualitative Comparisons

More visualizations of our CoSA results are given in Fig. 10 for VOC and in Fig. 11 and Fig. 12 for COCO. When compared to other SOTA models, CoSA exhibits i) better foreground-background separation (evidenced in R2–R3 in Fig. 10 and R1–R10 in Fig. 11); ii) greater robustness to inter-class variation and occlusion (affirmed in R4–R7 in Fig. 10 and R1–R4 in Fig. 12); and iii) fewer coexistence problems (demonstrated in R9–R11 in Fig. 10 and R8–R10 in Fig. 12).
Last but not least, our CoSA can reveal certain limitations in the manual GT segmentation, as depicted in R8 in Fig. 10 and R5–R7 in Fig. 12. We also show our CoSA results on the VOC test set in Fig. 13 and some failure cases in Fig. 14.

## Appendix B Further Analysis

Impact of CRF. The conditional random field (CRF) refines a segmentation by utilizing low-level information obtained from local interactions of pixels and edges [10]. Traditionally, a manually designed CRF postprocessing step has been widely adopted for refining segmentation [15, 50] or CAMs [51, 65, 70] in WSSS. As our aim is to develop a fully end-to-end WSSS solution, incorporating CRF postprocessing contradicts this principle. Through our experiments, we demonstrate that CoSA, unlike other single-stage methods, does not heavily depend on CRF. Our results indicate that incorporating CRF yields marginal improvements of $0.2\%$, $0.1\%$, and $0.1\%$ for VOC val, VOC test, and COCO val, respectively, as presented in Tab. 2 of the main paper. Tab. 6(a) suggests that, in comparison to other SOTA models, our CoSA exhibits a weaker dependency on CRF postprocessing. Conversely, eliminating the CRF step leads to a noteworthy enhancement of 165% in inference speed, as demonstrated in Tab. 6(b).

(a)

Method | w/o CRF | w/ CRF
---|---|---
AFA [52] | 63.8 | 66.0 (+2.2)
VIT-PCM [51] | 67.7 | 71.4 (+3.7)
ToCo [53] | 69.2 | 71.1 (+0.9)
CoSA | 76.2 | 76.4 (+0.2)

(b)

CoSA | Speed
---|---
w/o CRF | 4.11 imgs/s
w/ CRF | 1.83 imgs/s

Table 6: CRF Impact. (a): Comparisons of CRF impact on SOTA single-stage WSSS methods on VOC val. (b): Inference speed with and without CRF. Speed tested using a single 3090 GPU.

Method | CAMs Generation | CAMs Refinement | Seg. Training | Total | mIoU
---|---|---|---|---|---
MCT [65] | 2.2hrs (21M) | 11.1hrs (106M) | 15.5hrs (124M) | 28.8hrs (251M) | 71.6
BECO [50] | 0.9hrs (23M) | 6.5hrs (24M) | 22.2hrs (91M) | 29.6hrs (138M) | 71.8
ToCo [53] | CAMs and Seg. Co-optimization: 9.9hrs (98M) | | | 9.9hrs (98M) | 72.2
CoSA | CAMs and Seg. Co-optimization: 8.7hrs (92M) | | | 8.7hrs (92M) | 75.1

Table 7: Training Speed and Parameters Comparisons. We report the detailed training time, parameters and final mIoU on the VOC test split for MCT, BECO, ToCo and our CoSA. All methods are tested using the same machine with a single 3090 GPU. The official MCT, BECO and ToCo code repositories are utilized in this study.

Transformation | Description | Parameter Setting
---|---|---
RandomRescale | Rescale the image by $r$ times, with $r$ randomly sampled from $r\sim U(r_{min},r_{max})$. | $r_{min}=0.5,r_{max}=2$
RandomFlip | Randomly flip an image horizontally with probability $p$. | $p=0.5$
RandomCrop | Randomly crop an image to height $h$ and width $w$. | $w=448,h=448$
GaussianBlur | Randomly blur an image with probability $p$. | $p=0.5$

Table 8: Weak data augmentation $\mathcal{T}_{w}$ for the input of the assignment network.

Transformation | Description | Parameter Setting
---|---|---
RandomRescale | Rescale the image by $r$ times, with $r$ randomly sampled from $r\sim U(r_{min},r_{max})$. | $r_{min}=0.5,r_{max}=2$
RandomFlip | Randomly flip an image horizontally with probability $p$. | $p=0.5$
RandomCrop | Randomly crop an image to height $h$ and width $w$. | $w=448,h=448$
GaussianBlur | Randomly blur an image with probability $p$. | $p=0.5$
OneOf | Select one of the transformations in a transformation set $T$. | $T=$ TransAppearance

Table 9: Strong data augmentation $\mathcal{T}_{s}$ for the input image of the online network.
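For readers who want to reproduce the augmentation pipelines, the following is a minimal torchvision-based sketch of $\mathcal{T}_{w}$ and $\mathcal{T}_{s}$ as described in Tab. 8 and Tab. 9. It is our own illustrative approximation rather than the released CoSA code, and the appearance operations shown are only a representative subset of the TransAppearance set detailed in Tab. 10 below.

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

class RandomRescale:
    """Rescale a PIL image by a factor r ~ U(r_min, r_max), as in Tab. 8/9."""
    def __init__(self, r_min=0.5, r_max=2.0):
        self.r_min, self.r_max = r_min, r_max

    def __call__(self, img):
        r = random.uniform(self.r_min, self.r_max)
        w, h = img.size
        return TF.resize(img, [max(1, int(h * r)), max(1, int(w * r))])

# Weak augmentation T_w (input of the assignment network), following Tab. 8.
weak_aug = T.Compose([
    RandomRescale(0.5, 2.0),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomCrop((448, 448), pad_if_needed=True),
    T.RandomApply([T.GaussianBlur(kernel_size=5)], p=0.5),
])

# Strong augmentation T_s (input of the online network), following Tab. 9:
# the weak pipeline plus one appearance transform drawn from TransAppearance
# (Tab. 10); only a few representative options are sketched here.
appearance_ops = T.RandomChoice([
    T.Lambda(lambda img: img),                         # Identity
    T.RandomAutocontrast(p=1.0),                       # Autocontrast
    T.RandomEqualize(p=1.0),                           # Equalize
    T.ColorJitter(brightness=(0.05, 0.95), contrast=(0.05, 0.95)),
])
strong_aug = T.Compose([weak_aug, appearance_ops])
```

In CoSA, the strongly and weakly augmented views of the same image are then fed to the online and assignment networks, respectively (cf. Algorithm 1).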
Transformation | Description | Parameter Setting
---|---|---
Identity | Returns the original image. |
Autocontrast | Maximizes the image contrast by setting the darkest (lightest) pixel to black (white). |
Equalize | Equalizes the image histogram. |
RandSolarize | Inverts all pixels above a threshold value $T$. | $T\in U(0,1)$
RandColor | Adjusts the color balance. $C=0$ returns a black&white image, $C=1$ returns the original image. | $C\in U(0.05,0.95)$
RandContrast | Adjusts the contrast. $C=0$ returns a solid grey image, $C=1$ returns the original image. | $C\in U(0.05,0.95)$
RandBrightness | Adjusts the brightness. $C=0$ returns a black image, $C=1$ returns the original image. | $C\in U(0.05,0.95)$
RandSharpness | Adjusts the sharpness. $C=0$ returns a blurred image, $C=1$ returns the original image. | $C\in U(0.05,0.95)$
RandPolarize | Reduces each pixel to $C$ bits. | $C\in U(4,8)$

Table 10: Appearance transformations, called TransAppearance, used in strong data augmentation.

Efficiency Study. Unlike multi-stage approaches, CoSA can be trained end-to-end and is extremely efficient in training. When training a semantic segmentation model with weak labels on the VOC dataset, our method requires a mere 8.7 hours of training time and a total of 92M parameters. In contrast, MCT [65] would necessitate approximately 231% more time (20.1hrs $\uparrow$) and 173% more parameters (159M $\uparrow$) for the same task, and BECO [50] would require around 240% more time (20.9hrs $\uparrow$) and 50% more parameters (46M $\uparrow$). When compared to the single-stage method, CoSA also demonstrates its advantage in the speed-accuracy trade-off. Further details regarding the efficiency study can be found in Tab. 7.

Algorithm 1 CoSA Training Pseudo Code
1:Require: $\mathcal{D}$ $\triangleright$ image-level classification dataset
2:Require: $\mathcal{F}_{\Theta},~{}\mathcal{F}_{\Theta^{\prime}}$ $\triangleright$ online network parameterized by $\Theta$ and assignment network by $\Theta^{\prime}$
3:$\mathcal{F}_{\Theta}$ $\leftarrow$ Init, $\mathcal{F}_{\Theta^{\prime}}$ $\leftarrow$ Init $\triangleright$ initialize networks with a pretrained backbone
4:do
5: $x$, $Y_{\text{gt}}$ $\leftarrow$ Sample($\mathcal{D}$) $\triangleright$ sample a mini-batch of image and weak-label pairs
6: $x_{s},x_{w}$ $\leftarrow$ $\mathcal{T}_{s}(x),\mathcal{T}_{w}(x)$ $\triangleright$ apply strong and weak augmentations
7: $\\{x_{w}^{s}\\}$ $\leftarrow$ $\texttt{multiscale}(x_{w})$ $\triangleright$ generate a set of $x_{w}$ at different scales
8: $\\{\mathcal{M}^{\prime},~{}\mathcal{M}^{\dagger\prime},~{}\mathcal{S}^{\prime}\\}$ $\leftarrow$ $\mathcal{F}_{\Theta^{\prime}}(\\{x_{w}^{s}\\})$ $\triangleright$ forward the set of $x_{w}$ through the assignment network
9: $\mathcal{M}^{\prime},~{}\mathcal{M}^{\dagger\prime},~{}\mathcal{S}^{\prime}$ $\leftarrow$ Maxpool$(\\{\mathcal{M}^{\prime}\\})$, Maxpool$(\\{\mathcal{M}^{\dagger\prime}\\})$, Avgpool$(\\{\mathcal{S}^{\prime}\\})$ $\triangleright$ ensemble multiscale outputs
10: $\mathcal{M}^{\prime},~{}\mathcal{M}^{\dagger\prime},~{}\mathcal{S}^{\prime}$ $\leftarrow$ Filter$(\mathcal{M}^{\prime},~{}\mathcal{M}^{\dagger\prime},~{}\mathcal{S}^{\prime})$ $\triangleright$ filter CAMs and segmentation prediction with $Y_{\text{gt}}$
11: $Z,~{}Z^{\dagger},~{}\mathcal{M},~{}\mathcal{M}^{\dagger},~{}\mathcal{S}$ $\leftarrow$ $\mathcal{F}_{\Theta}(x_{s})$ $\triangleright$ forward $x_{s}$ through the online network
12: $\mathcal{L}_{\text{cls}}+\mathcal{L}_{\text{cls}}^{\mathcal{M}^{\dagger}}$ $\leftarrow$
$\mathcal{L}_{\text{cls}}(Z,Y_{\text{gt}})+\mathcal{L}_{\text{cls}}(Z^{\dagger},Y_{\text{gt}})$ $\triangleright$ get classification losses for $\mathcal{M}$ and $\mathcal{M}^{\dagger}$ by eq. (1) 13: $\xi^{\star}$ $\leftarrow$ solve eq. (6) with $\mathcal{M}^{\prime}$ $\triangleright$ get dynamic threshold 14: $\hat{\mathcal{Y}}^{\text{CPL}}$ $\leftarrow$ eq. (2) with $\mathcal{M}^{\prime},~{}\xi^{\star}$ $\triangleright$ obtain CPL 15: ${\mathcal{P}}$ $\leftarrow$ eq. (4) with $\mathcal{M}^{\prime},~{}\xi^{\star}$ $\triangleright$ estimate perplexity score 16: ${\mathcal{L}_{\text{c2s}}}$ $\leftarrow$ eq. (5) with $\hat{\mathcal{Y}}^{\text{CPL}},\mathcal{S},~{}{\mathcal{P}}$ $\triangleright$ get CAM2seg loss 17: ${\mathcal{L}_{\text{c2s}}^{\mathcal{M}^{\dagger}}}$ $\leftarrow$ follow 14 – 17 but with $\mathcal{M}^{\dagger\prime}$ $\triangleright$ get another CAM2seg loss 18: $\hat{\mathcal{Y}}^{\text{SPL}}$ $\leftarrow$ eq. (7) with $\mathcal{S}^{\prime}$ $\triangleright$ obtain SPL 19: ${\mathcal{L}_{\text{s2c}}}$ $\leftarrow$ eq. (8) with $\hat{\mathcal{Y}}^{\text{SPL}},~{}\mathcal{M}$ $\triangleright$ get Seg2CAM loss 20: $\mathcal{R}^{+},~{}\mathcal{R}^{-}$ $\leftarrow$ eq. (9) with $\mathcal{P},~{}\hat{\mathcal{Y}}^{\text{CPL}}$ $\triangleright$ define positive and negative correlation matrix 21: ${\mathcal{L}_{\text{csc}}}$ $\leftarrow$ eq. (10) with $\mathcal{M},~{}\mathcal{R}^{+},~{}\mathcal{R}^{-}$ $\triangleright$ get contrastive seperation loss 22: $\mathcal{L}_{\text{CoSA}}$ $\leftarrow$ $\\!\mathcal{L}_{\text{cls}}\\!+\mathcal{L}_{\text{cls}}^{\mathcal{M}^{\dagger}}\\!\\!+\\!\lambda_{\text{c2s}}\big{(}\mathcal{L}_{\text{c2s}}\\!+\\!\mathcal{L}_{\text{c2s}}^{\mathcal{M}^{\dagger}}\big{)}\\!+\\!\lambda_{\text{s2c}}\mathcal{L}_{\text{s2c}}+\\!\lambda_{\text{csc}}\mathcal{L}_{\text{csc}}.$ $\triangleright$ weighted sum as the overall training objective 23: $\Delta\Theta$ $\leftarrow$ $-\nabla_{\mathcal{L}_{\text{CoSA}}}\Theta$ $\triangleright$ backpropagate the overall loss 24: $\Theta$ $\leftarrow$ $\Theta+\Delta\Theta$ $\triangleright$ undate online network with gradient 25: $\Theta^{\prime}$ $\leftarrow$ $m\Theta^{\prime}+(1-m)\Theta$$\triangleright$ undate assignment network via EMA 26:until $\mathcal{L}_{\text{CoSA}}$ converge 27:end Method | bkg | plane | bike | bird | boat | bottle | bus | car | cat | chair | cow ---|---|---|---|---|---|---|---|---|---|---|--- AdvCAM [34] CVPR21 | 90.0 | 79.8 | 34.1 | 82.6 | 63.3 | 70.5 | 89.4 | 76.0 | 87.3 | 31.4 | 81.3 MCT [65] CVPR22 | 91.9 | 78.3 | 39.5 | 89.9 | 55.9 | 76.7 | 81.8 | 79.0 | 90.7 | 32.6 | 87.1 ToCo [53] CVPR23 | 91.1 | 80.6 | 48.7 | 68.6 | 45.4 | 79.6 | 87.4 | 83.3 | 89.9 | 35.8 | 84.7 Xu _et al_. [66] CVPR23 | 92.4 | 84.7 | 42.2 | 85.5 | 64.1 | 77.4 | 86.6 | 82.2 | 88.7 | 32.7 | 83.8 BECO [50] CVPR23 | 91.1 | 81.8 | 33.6 | 87.0 | 63.2 | 76.1 | 92.3 | 87.9 | 90.9 | 39.0 | 90.2 CoSA* (Ours) | 93.1 | 85.5 | 48.5 | 88.7 | 70.0 | 77.6 | 90.4 | 86.4 | 90.3 | 47.2 | 88.7 Method | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mIoU AdvCAM [34] CVPR21 | 33.1 | 82.5 | 80.8 | 74.0 | 72.9 | 50.3 | 82.3 | 42.2 | 74.1 | 52.9 | 68.1 MCT [65] CVPR22 | 57.2 | 87.0 | 84.6 | 77.4 | 79.2 | 55.1 | 89.2 | 47.2 | 70.4 | 58.8 | 71.9 ToCo [53] CVPR23 | 60.5 | 83.7 | 83.7 | 76.8 | 83.0 | 56.6 | 87.9 | 43.5 | 60.5 | 63.1 | 71.1 Xu _et al_. 
[66] CVPR23 | 59.0 | 82.4 | 80.9 | 76.1 | 81.4 | 48.0 | 88.2 | 46.4 | 70.2 | 62.5 | 72.2 BECO [50] CVPR23 | 41.6 | 85.9 | 86.3 | 81.8 | 76.7 | 56.7 | 89.5 | 54.7 | 64.3 | 60.6 | 72.9 CoSA* (Ours) | 54.1 | 87.3 | 87.1 | 79.6 | 85.6 | 53.2 | 89.9 | 71.9 | 65.1 | 63.4 | 76.4 Table 11: Per-class Segmentation on VOC val Split. Comparison of per-class segmentation results on VOC val. CoSA is compared with AdvCAM, MCTformer, ToCo, Xu et al. and BECO. Best results are in bold. Method | bkg | plane | bike | bird | boat | bottle | bus | car | cat | chair | cow ---|---|---|---|---|---|---|---|---|---|---|--- AdvCAM [34] CVPR21 | 90.1 | 81.2 | 33.6 | 80.4 | 52.4 | 66.6 | 87.1 | 80.5 | 87.2 | 28.9 | 80.1 MCT [65] CVPR22 | 90.9 | 76.0 | 37.2 | 79.1 | 54.1 | 69.0 | 78.1 | 78.0 | 86.1 | 30.3 | 79.5 ToCo [53] CVPR23 | 91.5 | 88.4 | 49.5 | 69.0 | 41.6 | 72.5 | 87.0 | 80.7 | 88.6 | 32.2 | 85.0 CoSA* (Ours) | 93.3 | 88.1 | 47.0 | 84.2 | 60.2 | 75.0 | 87.7 | 81.7 | 92.0 | 34.5 | 87.8 Method | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mIoU AdvCAM [34] CVPR21 | 38.5 | 84.0 | 83.0 | 79.5 | 71.9 | 47.5 | 80.8 | 59.1 | 65.4 | 49.7 | 68.0 MCT [65] CVPR22 | 58.3 | 81.7 | 81.1 | 77.0 | 76.4 | 49.2 | 80.0 | 55.1 | 65.4 | 54.5 | 68.4 ToCo [53] CVPR23 | 68.4 | 81.4 | 85.6 | 83.2 | 83.4 | 68.2 | 88.9 | 55.0 | 49.3 | 65.0 | 72.2 CoSA* (Ours) | 59.6 | 86.2 | 86.3 | 84.9 | 82.8 | 68.2 | 87.4 | 63.9 | 67.7 | 61.6 | 75.2 Table 12: Per-class Segmnetation on VOC test Split. Comparison of per-class segmentation results on VOC test. Results from AdvCAM, MCT, and ToCo are used for this comparison. Best results are in bold. Class | MCT[65] (CVPR22) | Xu _et al_. [66] (CVPR23) | ToCo[53] (CVPR23) | CoSA (Ours) | Class | MCT[65] (CVPR22) | Xu _et al_. [66] (CVPR23) | ToCo[53] (CVPR23) | CoSA (Ours) ---|---|---|---|---|---|---|---|---|--- background | 82.4 | 85.3 | 68.5 | 84.0 | wine glass | 27.0 | 33.8 | 20.6 | 42.1 person | 62.6 | 72.9 | 28.1 | 70.3 | cup | 29.0 | 35.8 | 26.0 | 33.1 bicycle | 47.4 | 49.8 | 39.7 | 52.4 | fork | 23.4 | 20.0 | 7.6 | 24.2 car | 47.2 | 43.8 | 38.9 | 54.3 | knife | 12.0 | 12.6 | 18.4 | 32.9 motorcycle | 63.7 | 66.2 | 55.1 | 71.9 | spoon | 6.6 | 6.7 | 3.0 | 9.0 airplane | 64.7 | 69.2 | 62.1 | 74.0 | bowl | 22.4 | 23.7 | 19.8 | 22.8 bus | 64.5 | 69.1 | 39.0 | 77.2 | banana | 63.2 | 64.4 | 71.5 | 69.3 train | 64.5 | 63.7 | 48.7 | 60.0 | apple | 44.4 | 50.8 | 55.5 | 61.3 truck | 44.8 | 43.4 | 37.3 | 55.4 | sandwich | 39.7 | 47.0 | 41.2 | 48.3 boat | 42.3 | 42.3 | 49.1 | 52.1 | orange | 63.0 | 64.6 | 70.6 | 69.2 traffic light | 49.9 | 49.3 | 47.3 | 55.1 | broccoli | 51.2 | 50.6 | 56.7 | 52.8 fire hydrant | 73.2 | 74.9 | 69.6 | 78.8 | carrot | 40.0 | 38.6 | 46.4 | 59.4 stop sign | 76.6 | 77.3 | 70.1 | 82.2 | hot dog | 53.0 | 54.0 | 60.1 | 59.9 parking meter | 64.4 | 67.0 | 67.9 | 71.5 | pizza | 62.2 | 64.1 | 54.9 | 56.5 bench | 32.8 | 34.1 | 43.9 | 50.2 | donut | 55.7 | 59.7 | 61.1 | 71.1 bird | 62.6 | 63.1 | 58.6 | 65.4 | cake | 47.9 | 50.6 | 42.5 | 57.0 cat | 78.2 | 76.2 | 74.0 | 79.8 | chair | 22.8 | 24.5 | 24.1 | 33.8 dog | 68.2 | 70.6 | 64.0 | 72.8 | couch | 35.0 | 40.0 | 44.2 | 58.1 horse | 65.8 | 67.1 | 66.1 | 71.4 | potted plant | 13.5 | 13.0 | 27.4 | 23.5 sheep | 70.1 | 70.8 | 67.9 | 74.3 | bed | 48.6 | 53.7 | 54.0 | 61.5 cow | 68.3 | 71.2 | 69.0 | 74.0 | dining table | 12.9 | 19.2 | 25.6 | 29.2 elephant | 81.6 | 82.2 | 79.7 | 81.9 | toilet | 63.1 | 66.6 | 62.0 | 69.7 bear | 80.1 | 79.6 | 76.8 | 85.3 | tv | 47.9 | 50.8 | 49.1 | 53.2 zebra | 83.0 | 82.8 | 77.5 | 76.3 | 
laptop | 49.5 | 55.4 | 55.7 | 63.9 giraffe | 76.9 | 76.7 | 66.1 | 68.5 | mouse | 13.4 | 14.4 | 8.6 | 16.4 backpack | 14.6 | 17.5 | 20.3 | 28.6 | remote | 41.9 | 47.1 | 56.6 | 49.1 umbrella | 61.7 | 66.9 | 70.9 | 73.4 | keyboard | 49.8 | 57.2 | 41.8 | 49.6 handbag | 4.5 | 5.8 | 8.1 | 11.9 | cellphone | 54.1 | 54.9 | 58.5 | 66.2 tie | 25.2 | 31.4 | 33.4 | 47.7 | microwave | 38.0 | 46.1 | 55.5 | 53.2 suitcase | 46.8 | 51.4 | 55.3 | 63.8 | oven | 29.9 | 35.3 | 36.2 | 49.2 frisbee | 43.8 | 54.1 | 39.6 | 63.1 | toaster | 0.0 | 2.0 | 0.0 | 0.0 skis | 12.8 | 13.0 | 4.0 | 22.5 | sink | 28.0 | 36.1 | 19.0 | 41.9 snowboard | 31.4 | 30.3 | 15.5 | 40.5 | refrigerator | 40.1 | 52.7 | 51.9 | 62.0 sports ball | 9.2 | 36.1 | 11.0 | 33.1 | book | 32.2 | 34.8 | 31.5 | 37.8 kite | 26.3 | 47.5 | 40.7 | 59.9 | clock | 43.2 | 51.5 | 32.9 | 55.2 baseball bat | 0.9 | 7.0 | 1.8 | 3.8 | vase | 22.6 | 25.8 | 33.3 | 33.8 baseball glove | 0.7 | 10.4 | 17.6 | 37.9 | scissors | 32.9 | 30.7 | 49.8 | 54.7 skateboard | 7.8 | 15.2 | 13.3 | 12.5 | teddy bear | 61.9 | 61.4 | 67.5 | 69.3 surfboard | 46.5 | 51.5 | 21.5 | 16.5 | hair drier | 0.0 | 1.3 | 10.0 | 0.3 tennis racket | 1.4 | 26.4 | 6.8 | 7.2 | toothbrush | 12.2 | 19.0 | 29.3 | 39.3 bottle | 31.1 | 37.1 | 25.7 | 35.1 | mIoU | 42.0 | 45.9 | 42.4 | 51.1 Table 13: Per-class Segmentation on COCO val Split. Comparison of per-class segmentation results on the COCO 2014 val set. CoSA is compared with MCTformer, Xu et al. and ToCo. Best results are in bold. ## Appendix C Further Implementation Details CoSA Implementation Details. For image preprocessing, weak transformation $\mathcal{T}_{w}$ and strong transformation $\mathcal{T}_{s}$ are employed in CoSA for the input of assignment network and online network, respectively. $\mathcal{T}_{w}$ and $\mathcal{T}_{s}$ details are given in Tab. 10 and Tab. 10. Following [53], we use the multi-scale inference in assignment network to produce CPL and SPL. For VOC training, CoSA is warmed up with 6K iterations, where $\lambda_{\text{c2s}}$, $\lambda_{\text{c2s}}$, and $\lambda_{\text{csc}}$ are set to $0$. In practice, we train CoSA for 20K iterations on 2 GPUs, with 2 images per GPU, or for 40K iterations on 1 GPU for some ablation experiments. For COCO training, CoSA is warmed up with 10K iterations and is trained on 2 GPUs, handling 4 images per GPU across 40K iterations. CoSA-MS Implementation Details. Tab. 2 in the main paper presents the segmentation results of the multi-stage version of our approach, known as CoSA-MS. In those experiments, we leverage the CAM pseudo-labels generated by our CoSA to directly train standalone segmentation networks. It is important to note that we do not use PSA [2], which is widely used in [65, 15], nor IRN [1], extensively used in [50, 12, 59, 31], for CPL post-refinement. For our R101 segmentation network, we use a ResNet101 version of DeepLabV3+ model, same as BECO [50]. As for the CoSA-MS with WR38 network, we utilize a encoder- decoder framework, where encoder is WideResNet38 [61] and decoder is LargeFoV [10], following the final step described in MCT [65]. Regarding the SWIN implementation, we use the SWIN-Base encoder [44] in conjunction with UperNet decoder [62], following the description in [14, 12]. Training Pseudo Code. we present the pseudo code for training CoSA in Algorithm 1. Figure 10: Qualitative Comparisons on VOC Dataset. 
CoSA exhibits 1) better foreground-background separation (R1–R3); 2) greater robustness to inter-class variation and occlusion (R4–R7); 3) cases that reveal limitations in the ground truth annotations (R8); and 4) fewer coexistence problems (R9–R11). Different colors represent different categories: black denotes background and white denotes ignore areas; the remaining colors denote chair, plant, cat, person, bottle, sofa, dog, cow, bird, and boat. The activated classes in the demonstration from top to bottom are: chair, cat, bottle, person, dog, person, cow, person, bird, boat, boat.

Figure 11: Qualitative Comparisons on COCO Dataset. CoSA demonstrates superior quality in terms of foreground-background separation (R1–R10). Categories involved – R1: person, tie; R2: person, umbrella; R3: person, skis; R4: person, tie; R5: person, train, umbrella; R6: person, hot dog; R7: person, hot dog; R8: dog, frisbee; R9: bottle, toilet; R10: person, teddy bear. Categories in bold denote the activated classes in CAMs.

Figure 12: More Qualitative Comparisons on COCO Dataset. CoSA shows 1) greater robustness to inter-class variation and occlusion (R1–R4); 2) cases that reveal limitations in the ground truth annotations (R5–R7); and 3) fewer coexistence problems (R8–R10). Categories involved – R1: person, donuts; R2: person, surfboard; R3: person, car, motorcycle, bus; R4: toilet; R5: person, kite; R6: person, kite; R7: person, cell phone; R8: clock; R9: clock; R10: clock. Categories in bold denote the activated classes in CAMs.

Figure 13: Visualization on VOC test. Different colors represent different categories: black denotes background; the remaining colors denote car, person, boat, plant, dog, cow, dining-table, bird, sofa, sheep, horse, airplane, and cat.

Figure 14: Illustrations of CoSA Failure Cases. Different colors represent different categories: black denotes background and white denotes ignore areas; the remaining colors denote plant, person, sofa, dog, cat, chair, motorbike, and bicycle. The activated classes in the demonstration from left to right and from top to bottom are: plant, sofa, dog, plant, person, cat, person, bicycle.
# DynaConF: Dynamic Forecasting of Non-Stationary Time-Series

Siqi Liu (Borealis AI) and Andreas Lehrmann (Borealis AI)

###### Abstract

Deep learning models have shown impressive results in a variety of time series forecasting tasks, in which modeling the conditional distribution of the future given the past is the central problem. However, when this conditional distribution is non-stationary, it poses challenges for these models to learn consistently and to predict accurately. In this work, we propose a new method to model non-stationary conditional distributions over time by clearly decoupling stationary conditional distribution modeling from non-stationary dynamics modeling. Our method is based on a Bayesian dynamic model that can adapt to conditional distribution changes and a deep conditional distribution model that can handle large multivariate time series using a factorized output space. Our experimental results on synthetic and popular public datasets show that our model can adapt to non-stationary time series better than state-of-the-art deep learning solutions.

## 1 Introduction

Time series forecasting is a cornerstone of modern machine learning and has applications in a broad range of domains, such as operational processes [1], energy [2], and transportation [3]. In recent years, models based on deep neural networks have shown particularly impressive results [4, 5, 3, 1] and demonstrated the effectiveness of deep feature and latent state representations. Despite this exciting progress, current time series forecasting methods often make the implicit assumption that training and test data follow the same distribution. In real-world applications this assumption is often violated, which is known as _non-stationarity_ and poses serious practical challenges to a model’s robustness and predictive power. The statistics literature contains several related concepts of (non-)stationarity for time series, with weak and strong stationarity being the most widely used ones [6, 7]. Common to these variants of (non-)stationarity is that they are defined in terms of a stochastic process’ joint or marginal distribution. For example, given a discrete time series $\\{y_{t}\in\mathbb{R}\\}_{t\in\mathbb{Z}}$ and any subset of time points $\\{t_{1},t_{2},\ldots,t_{k}\\}$, we call the time series _strongly stationary_ if $\forall\tau\in\mathbb{Z}:\mathrm{p}(y_{t_{1}},y_{t_{2}},\ldots,y_{t_{k}})=\mathrm{p}(y_{t_{1}+\tau},y_{t_{2}+\tau},\ldots,y_{t_{k}+\tau})$. While non-stationarity in a stochastic process’ joint or marginal distribution is important and has been widely studied [7, 8, 9, 10], we argue that temporal _forecasting_ relies more heavily on the properties of a time-series’ _conditional_ distribution $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t})$, where $y_{t-B:t-1}=(y_{t-B},y_{t-B+1},\ldots,y_{t-1})$, $B\in\mathbb{Z}_{>0}$ can be arbitrarily large, and $\bm{x}_{t}\in\mathbb{R}^{Q}$ contains auxiliary information. Most forecasting methods, from traditional approaches (e.g., Autoregressive Integrated Moving Average (ARIMA; [11]), Generalized Autoregressive Conditional Heteroskedasticity (GARCH; [12, 13]), and state-space models (SSMs; [14])) to more recent models (e.g., recurrent neural networks (RNNs; [15, 1]), temporal convolutional networks (TCNs; [16]), Transformers [17, 4], and Neural Processes (NPs; [18, 19])), rely on this conditional distribution for predictions, but many of them implicitly assume its stationarity by using time-invariant model parameters.
A model with a stationary conditional distribution can still handle non-stationarity in the joint or marginal distribution, such as seasonality and trend, by conditioning on extra features in $\bm{x}_{t}$, such as day of the week, but the conditional distribution itself may also change due to (1) unobserved causes and (2) new causes. For example, the daily number of posts from each user on a social media platform is unlikely to be robustly predictable from historical data, even with input features like day of the week, because (1) user activity is often affected by events that are not reflected in the observable data (e.g., illness); and (2) events that have not been seen before, such as a new functionality being added to the platform, may occur and change the functional pattern of the input-output relation between the conditioning and target variables in an unpredictable way. How to deal with these changes in the model’s _conditional_ distribution, which are based on a dynamic cause-effect structure, is the main focus of this work. Autoregressive (AR) models, TCNs, and Transformers model $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t};\bm{\psi})$ with time-invariant parameters $\bm{\psi}$ and therefore assume stationarity in $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t})$. In contrast, SSMs, RNNs, and NPs model $\mathrm{p}(y_{t}|y_{1:t-1},\bm{x}_{1:t};\bm{\psi})$, which has a growing number of conditioning variables (note the time range $1\\!:\\!t$) and therefore can potentially model different conditional distributions $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t})$ at different time points $t$. However, these models need to achieve two goals using the same time-invariant structure: (1) modeling $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t})$; and (2) modeling its changes over time. Because they do not incorporate explicit inductive biases for changes in $\mathrm{p}(y_{t}|y_{t-B:t-1},\bm{x}_{t-B:t})$, they either cannot learn different (seemingly inconsistent) relations between the conditioning and target variables (if the model capacity is limited) or tend to memorize the training data and are not able to generalize to new changes at test time. In this work we take a different approach to dealing with non-stationary conditional distributions in time series. The core of our model, called DynaConF, is a clean decoupling of the time-variant (non-stationary) part and the time-invariant (stationary) part of the distribution. The time-invariant part models a stationary conditional distribution, given some control variables, while the time-variant part focuses on modeling the changes in this conditional distribution over time through those control variables. Using this separation, we build a flexible time-invariant conditional model and make efficient inferences about how the model changes over time. At test time, our model takes both the uncertainty of the conditional distribution and non- stationarity into account when making predictions and can adapt to new changes in the conditional distribution over time in an online manner. ## 2 Related Work Time-series forecasting models have a rich history [7, 11], with many recent advances due to deep neural networks [20]. Here we discuss relations of related works to non-stationarity and our work. Non-Stationary Marginal Distribution. There are three common ways of dealing with non-stationarity in the marginal distribution, such as seasonality and trend: (1) Data transformation. 
In ARIMA, taking the difference of the time series over time can remove trend and seasonality. More advanced approaches based on exponential smoothing [21] or seasonal-trend decomposition [22] have also been combined with deep neural networks and achieve good performance [23, 24]. More recently, Kim et al. [10] propose to use reversible normalization/denormalization on the input/output of the time series model to account for (marginal) distribution shifts over time. (2) Inductive bias. As an alternative to data transformations, the underlying ideas of exponential smoothing and decomposition can also be incorporated into deep learning models as inductive biases to deal with seasonality/trend end-to-end [25, 26, 27, 2]. Similarly, for models based on Gaussian processes, the inductive biases can be added as specific (e.g. periodic/linear) kernel choices [28]. (3) Conditioning information. Adding features such as relative time (e.g, day of the week) and absolute time to the model input as conditional information is commonly used in deep probabilistic forecasting models [1, 3, 5, 29, 4]. In our work, we focus on proper handling of changes in the _conditional_ distribution. To deal with marginal distribution shifts, we simply add conditional information as in approach (3), although we could potentially utilize approach (1) and (2) as well. Non-Stationary Conditional Distribution. State-space models [30, 31, 32, 5, 33] and recurrent neural networks [1, 3, 4, 29] are among the most popular choices to model temporal dependencies in time series. When these models are applied to, and therefore conditioned on, the entire history of the time series, they can theoretically “memorize” different conditional distributions at different time points. However, for these models to generalize and adapt to new changes in the future, it is critical to have appropriate inductive biases built into the model. A common approach is to allow the state space model to switch between a discrete set of dynamics, which can be learned from training data [34, 35]. However, the model cannot adapt to continuous changes or generalize to new changes that have not been observed in the training data. In contrast, our model has explicit inductive biases to account for both continuous and discontinuous changes, and to adapt to new changes. Observation Model. The expressivity and flexibility of the observation model is a topic that is especially relevant in case of multivariate time series. Different observational models have been employed in time series models, including low-rank covariance structures [3], auto-encoders [30, 36], normalizing flows [31, 4], determinantal point processes [37], denoising diffusion models [29], and probabilistic circuits [38]. In this work, to deal with multivariate time series, we take a simple approach by assuming conditional independence in the output dimensions for scalability and consider more expressive observation models as orthogonal to our work. Online Approach. Continual learning [39, 40, 41, 42, 43, 44] also addresses (conditional) distribution changes in an online manner, but usually in a multi-task supervised learning setting and not in time series. In this work, our focus is on conditional distribution changes in time series, and the conditional distribution can change either continuously or discontinuously over time. 
## 3 Method We study the problem of modeling and forecasting time series with changes in the conditional distribution $\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ over time $t$, where $\bm{y}_{t}\in\mathbb{R}^{P}$ is the target time series, and $\bm{x}_{t}\in\mathbb{R}^{Q}$ is the input containing contextual information, under the following assumption: ###### Assumption 1 $\bm{y}_{t}$ only depends on a bounded history of $\bm{y}$ and $\bm{x}$ for all $t$. That is, there exists $B\in\mathbb{Z}_{>0}$ such that for all $t$, $\mathrm{p}(\bm{y}_{t}|\bm{y}_{<t},\bm{x}_{\leq t})=\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$. Although we assume $\bm{y}_{t}$ can only depend on the history up to $B$ time steps, its conditional distribution can change over time based on information beyond $B$ steps. Notice that this is not particularly restrictive in practice, since we usually have a finite amount of training data while $B$ can be arbitrarily large, although usually not needed. Specifically, given the historical data $\\{(\bm{x}_{t},\bm{y}_{t})\\}_{t=1}^{T}$, the task is to fit a model to the data and use it to forecast $\\{\bm{y}_{t}\\}_{t=T+1}^{T+H},\\{\bm{y}_{t}\\}_{t=T+H+1}^{T+H+H},\ldots,$ continually with a horizon and step size $H$. For notational simplicity, we assume that $\bm{x}_{t}$ only contains information known in advance at time $t$, such as day of the week. ### 3.1 Decoupled Model We assume the distribution $\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ to be parameterized by $\bm{\theta}_{t}\in\mathbb{R}^{M}$: $\bm{\theta}_{t}=f(\bm{y}_{t-B:t-1},\bm{x}_{t-B:t};\bm{\phi}_{t},\bm{\psi}).$ (1) $f:\mathbb{R}^{B\times P}\times\mathbb{R}^{(B+1)\times Q}\to\mathbb{R}^{M}$ models the conditional distribution, with its own static parameters, denoted collectively as $\bm{\psi}$, and is modulated by a dynamic control variable $\bm{\phi}_{t}\in\mathbb{R}^{F}$, which can change over time according to a dynamic process we define later. A property we try to guarantee in our model is that if $\bm{\phi}_{t}$ stays the same at different time points, then the conditional distribution $\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ stays the same as well. Figure 1 shows an overview of our model. Our key idea is to separate the time-variant part of the model from the time- invariant part, instead of allowing all components to vary across time. This simplifies the probabilistic inference and improves the generalization capabilities of the learned model by allowing the time-invariant part to learn from time-invariant input-output relations with time-variant modulations. ### 3.2 Conditional Distribution at One Time Point First we describe how we model the conditional distribution $\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ at each time point $t$ without accounting for non-stationary effects (Figure 1, bottom). We use a neural network $g$ to summarize the historical and contextual information into a vector $\bm{h}_{t}\in\mathbb{R}^{D}$ as $\bm{h}_{t}=g(\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$. For example, $g$ could be a multi-layer perceptron (MLP) or a recurrent neural network (RNN). The parameters of $g$ are time-invariant and included in $\bm{\psi}$ (Eq. 1). We note that a key distinction between our model’s use of an RNN and a typical deep time series model using an RNN is that the latter keeps unrolling the RNN over time to model the dynamics of the time series. 
In contrast, we unroll the RNN for $B+1$ steps to summarize $(\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ in the exact same way at each time point $t$, i.e., we apply it in a _time-invariant_ manner. We construct the distribution of $\bm{y}_{t}$ such that each dimension $i$ of $\bm{y}_{t}$, denoted as $y_{t,i}$, is conditionally independent from the others given $\bm{h}_{t}$, as this helps the learning and inference algorithms to scale better with the dimensionality of $\bm{y}_{t}$. First we transform $\bm{h}_{t}$ into $P$ vectors of dimension $E$, $\bm{z}_{t,i}\in\mathbb{R}^{E}$, $E<D$, as $\bm{z}_{t,i}=\tanh(\bm{W}_{z,i}\bm{h}_{t}+\bm{b}_{z,i}),\quad\forall i=1,\ldots,P$ (2) where $\bm{W}_{z,i}\in\mathbb{R}^{E\times D}$ and $\bm{b}_{z,i}\in\mathbb{R}^{E}$, so $\bm{z}_{t,i}$ corresponds to $y_{t,i}$. Then, from $\bm{z}_{t,i}$, we obtain the distribution parameters $\bm{\theta}_{t,i}$ of $y_{t,i}$. Specifically, we assume a normal distribution with a diagonal covariance for $\bm{y}_{t}$, so $y_{t,i}\sim\mathcal{N}(\mu_{t,i},\sigma^{2}_{t,i})$ and $\bm{\theta}_{t,i}:=(\mu_{t,i},\sigma^{2}_{t,i})$, with $\mu_{t,i}=\bm{w}_{\mu,i}^{T}\bm{z}_{t,i}+b_{\mu,i},\quad\sigma_{t,i}=s(\bm{w}_{\sigma,i}^{T}\bm{z}_{t,i}+b_{\sigma,i}),$ (3) where $\bm{w}_{\mu,i}\in\mathbb{R}^{E}$, $\bm{w}_{\sigma,i}\in\mathbb{R}^{E}$, and $s:\mathbb{R}\to\mathbb{R}_{>0}$ is the soft-plus function. We use $\bm{W}_{\mu}\in\mathbb{R}^{P\times E}$ and $\bm{b}_{\mu}\in\mathbb{R}^{P}$ to denote the result of stacking $\bm{w}_{\mu,i}^{T}$ and $b_{\mu,i}$ along $i$, and similarly $\bm{W}_{\sigma},\bm{b}_{\sigma}$, $\bm{z}_{t}$, $\bm{\mu}_{t}$, $\bm{\sigma}_{t}$. As we will see in Section 3.5, the use of a Normal distribution enables more efficient inference. ### 3.3 Conditional Distributions Across Time Points We have explained how we model the conditional distribution $\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$ at each time $t$. To model changes in the conditional distribution over time, we first specify which parameters to include in the control variable $\bm{\phi}_{t}\in\mathbb{R}^{F}$, which changes over time and modulates the conditional distribution (Figure 1, top). We choose $\bm{\phi}_{t}:=\operatorname{vec}(\bm{W}_{\mu})$, i.e., $\bm{\phi}_{t}$ is the vectorization of $\bm{W}_{\mu}$. Recall that $\bm{W}_{\mu}$ transforms $\bm{z}_{t}$ into the mean $\bm{\mu}_{t}$ of $\bm{y}_{t}$, where $\bm{z}_{t}$ is essentially a summary of the information in the conditioning variables $(\bm{y}_{t-B:t-1},\bm{x}_{t-B:t})$. By allowing $\bm{W}_{\mu}$ to be different at each time point $t$, the conditional mean of $\bm{y}_{t}$, $\mathrm{E}[\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t}]$, can change as well. We could allow $\bm{W}_{\sigma}$ to change over time to model a time-variant conditional variance as well, but focusing on $\bm{W}_{\mu}$ reduces the dimensionality of $\bm{\phi}_{t}$ and enables more efficient inference utilizing Rao-Blackwellization (see Section 3.5). We propose to decompose $\bm{\phi}_{t}$ into a dynamic stochastic process $\bm{\chi}_{t}\in\mathbb{R}^{F}$ and a static vector $\bm{b}_{\phi}\in\mathbb{R}^{F}$ as $\bm{\phi}_{t}=\bm{\chi}_{t}+\bm{b}_{\phi}.$ (4) The intuition is that $\bm{b}_{\phi}$ captures the global information of $\bm{\phi}_{t}$ and acts as a baseline, while $\bm{\chi}_{t}$ captures the time-variant changes of $\bm{\phi}_{t}$ relative to $\bm{b}_{\phi}$. 
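To make Eqs. (2)–(3) and the role of the control variable concrete, the following is a minimal PyTorch-style sketch of the per-step conditional model; the class and variable names are ours, the encoder $g$ is assumed to be a small MLP over the flattened window, and $\bm{\phi}_{t}=\operatorname{vec}(\bm{W}_{\mu})$ is passed in externally.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StepConditionalModel(nn.Module):
    """Models p(y_t | y_{t-B:t-1}, x_{t-B:t}; phi_t) as in Eqs. (2)-(3), with the
    mean weights W_mu supplied externally as the control variable phi_t
    (Section 3.3). All names are illustrative, not taken from the paper's code."""

    def __init__(self, B, P, Q, D=64, E=8):
        super().__init__()
        in_dim = B * P + (B + 1) * Q
        # g: summarizes the window (y_{t-B:t-1}, x_{t-B:t}) into h_t in R^D
        self.encoder = nn.Sequential(nn.Linear(in_dim, D), nn.ReLU(), nn.Linear(D, D))
        self.W_z = nn.Parameter(0.01 * torch.randn(P, E, D))   # per-dimension W_{z,i}
        self.b_z = nn.Parameter(torch.zeros(P, E))
        self.w_sigma = nn.Parameter(0.01 * torch.randn(P, E))  # w_{sigma,i}
        self.b_sigma = nn.Parameter(torch.zeros(P))
        self.b_mu = nn.Parameter(torch.zeros(P))
        self.P, self.E = P, E

    def forward(self, y_hist, x_ctx, phi_t):
        # y_hist: (batch, B, P); x_ctx: (batch, B+1, Q); phi_t: (P*E,) = vec(W_mu)
        h = self.encoder(torch.cat([y_hist.flatten(1), x_ctx.flatten(1)], dim=-1))
        z = torch.tanh(torch.einsum('ped,bd->bpe', self.W_z, h) + self.b_z)   # Eq. (2)
        W_mu = phi_t.view(self.P, self.E)
        mu = (z * W_mu).sum(-1) + self.b_mu                                   # Eq. (3)
        sigma = F.softplus((z * self.w_sigma).sum(-1) + self.b_sigma)
        return torch.distributions.Normal(mu, sigma)   # independent across the P outputs
```

The same module is applied identically at every time point; only $\bm{\phi}_{t}$ changes, which is exactly the decoupling illustrated in Figure 1.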
We assume that $\bm{\chi}_{t}$ follows the following generative process $\displaystyle\pi_{t}\sim\mathcal{B}(\lambda);$ $\displaystyle\bm{\chi}_{t}\sim\mathcal{N}(\bm{0},\bm{\Sigma}_{0}),\text{ if }\pi_{t}=0;$ (5) $\displaystyle\bm{\epsilon}_{t}\sim\mathcal{N}(\bm{0},\bm{\Sigma}_{d});$ $\displaystyle\bm{\chi}_{t}=\bm{\chi}_{t-1}+\bm{\epsilon}_{t},\text{ if }\pi_{t}=1.$ $\mathcal{B}$ and $\mathcal{N}$ denote the Bernoulli and normal distributions, respectively. $\pi_{t}\in\\{0,1\\}$ is a binary random variable that either generates the current $\bm{\chi}_{t}$ as a new sample drawn from a global distribution $\mathcal{N}(\bm{0},\bm{\Sigma}_{0})$, or as a continuation from the previous $\bm{\chi}_{t-1}$ following a simple stochastic process in the form of a random walk. The intention is to allow $\bm{\chi}_{t}$ to change both continuously (when $\pi_{t}=1$) through the random walk, and discontinuously (when $\pi_{t}=0$) through the global distribution, which captures the variety of possible changes of $\bm{\chi}_{t}$ in its parameter $\bm{\Sigma}_{0}$. We assume $\bm{\chi}_{t}$ to start at $t=B$, since it controls the _conditional_ distribution, whose first observation occurs at $t=B+1$. For the initial $\bm{\chi}_{B}$, we assume generation from $\mathcal{N}(\bm{0},\bm{\Sigma}_{0})$ as well. Our intention is that $\mathcal{N}(\bm{0},\bm{\Sigma}_{0})$ should be the distribution to generate new $\bm{\chi}_{t}$ whenever there is a drastic change in the conditional distribution of $\bm{y}_{t}$, so at $t=B$ it is natural to use that distribution. Recall that $\bm{\chi}_{t}\in\mathbb{R}^{F}$, with $F=P\times E$. We propose to separate $\bm{\chi}_{t}$ along the $P$ dimensions of $\bm{y}_{t}$ into $P$ groups. For each $i=1,\ldots,P$, we define $\bm{\chi}_{t,i}\in\mathbb{R}^{E}$ as in Eq. 5. The final $\bm{\chi}_{t}$ is the concatenation of $\bm{\chi}_{t,i}$ for all $i$. The intuition is to allow the group of components of $\bm{\chi}_{t}$ modulating each dimension of $\bm{y}_{t}$ to change independently from the others, corresponding to the conditional independence assumption we made in Section 3.2, so we can sample a subset of dimensions of $\bm{y}$ in each iteration during training to scale to high- dimensional time series. Figure 1: Overview. DynaConF is built on the principle of a clean decoupling of stationary conditional distribution modeling (red) and non-stationary dynamics modeling (blue). We predict the parameters ($\bm{\theta}$) of the conditional distribution (green) by aggregating time-invariant local context ($\bm{z}$) and modulating this context with time-variant global dynamics ($\bm{\phi}$) driven by a random walk ($\bm{\chi}$) with Bernoulli-restarts ($\bm{\pi}$). ### 3.4 Learning All parameters of the conditional distribution model and the prior are learned by fitting the model to the historical training data $\\{(\bm{y}_{t},\bm{x}_{t})\\}_{t=1}^{T}$. We train our model by maximizing the marginal log-likelihood, where the latent variables $\bm{\chi}_{t}$ are marginalized out. 
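For intuition about the latent dynamics being marginalized here, the prior of Eq. (5) can be simulated for one group with the short sketch below (our own illustration; diagonal covariances $\sigma_{0}^{2}\bm{I}$ and $\sigma_{d}^{2}\bm{I}$ are assumed for simplicity, whereas the model allows general $\bm{\Sigma}_{0}$ and $\bm{\Sigma}_{d}$).

```python
import numpy as np

def sample_chi_trajectory(T_len, E, lam=0.9, sigma0=1.0, sigmad=0.1, rng=None):
    """Sample one group chi_{t,i} in R^E following Eq. (5): with probability
    (1 - lam) restart from N(0, sigma0^2 I), otherwise take a random-walk step
    with noise N(0, sigmad^2 I)."""
    rng = rng or np.random.default_rng()
    chi = np.zeros((T_len, E))
    chi[0] = sigma0 * rng.standard_normal(E)          # initial draw from N(0, Sigma_0)
    for t in range(1, T_len):
        if rng.random() < lam:                        # pi_t = 1: continue the random walk
            chi[t] = chi[t - 1] + sigmad * rng.standard_normal(E)
        else:                                         # pi_t = 0: restart from the global prior
            chi[t] = sigma0 * rng.standard_normal(E)
    return chi
```

Stacking $P$ such independent groups and adding the static offset $\bm{b}_{\phi}$ from Eq. (4) yields the control variable $\bm{\phi}_{t}$ that modulates the conditional model.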
Given a trajectory of $\bm{\chi}_{B:T}$, the conditional log-likelihood is $\log\mathrm{p}(\bm{y}_{B+1:T}|\bm{y}_{1:B},\bm{x}_{1:T},\bm{\chi}_{B:T})=\sum_{t=B+1}^{T}\log\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t},\bm{\chi}_{t}).$ (6) Marginalizing out $\bm{\chi}_{t}$ gives us the log-likelihood objective $\log\mathrm{p}(\bm{y}_{B+1:T}|\bm{y}_{1:B},\bm{x}_{1:T})=\log\int\mathrm{p}(\bm{y}_{B+1:T}|\bm{y}_{1:B},\bm{x}_{1:T},\bm{\chi}_{B:T})\mathrm{p}(\bm{\chi}_{B:T})\mathrm{d}\bm{\chi}_{B:T}.$ (7) Since the integral is intractable, we instead introduce a variational distribution $\mathrm{q}(\bm{\chi}_{B:T})$ and maximize the following variational lower-bound $\mathcal{L}$ of the log-likelihood in Eq. 7: $\mathcal{L}:=\mathrm{E}_{q}[\log\mathrm{p}(\bm{y}_{B+1:T}|\bm{y}_{1:B},\bm{x}_{1:T},\bm{\chi}_{B:T})]+\mathrm{E}_{q}\left[\log\frac{p(\bm{\chi}_{B:T})}{\mathrm{q}(\bm{\chi}_{B:T})}\right]\leq\log\mathrm{p}(\bm{y}_{B+1:T}|\bm{y}_{1:B},\bm{x}_{1:T}).$ (8) Based on the conditional independence structure of the prior process, we construct the variational distribution $\mathrm{q}(\bm{\chi}_{B:T})$ similarly as an autoregressive process, but we assume a simple Normal distribution at each time step for efficient sampling and back-propagation. At the beginning, we assume $\mathrm{q}(\bm{\chi}_{B})=\mathrm{p}(\bm{\chi}_{B})$. Then, conditioning on the previous $\bm{\chi}_{t-1}$, we recursively define $\mathrm{q}(\bm{\chi}_{t}|\bm{\chi}_{t-1})=\mathcal{N}(\bm{a}_{t}\odot\bm{\chi}_{t-1}+(1-\bm{a}_{t})\odot\bm{m}_{t},\mathrm{diag}(\bm{s}_{t}^{2})),\quad\forall t=1,\ldots,T,$ (9) where $\bm{a}_{t}$, $\bm{m}_{t},\bm{s}_{t}\in\mathbb{R}^{F}$ are variational parameters. Intuitively, $\bm{a}_{t}$ is a gate that chooses between continuing from the previous $\bm{\chi}_{t-1}$, with noise $\mathcal{N}(0,\mathrm{diag}(\bm{s}^{2}_{t})$, and using a new distribution $\mathcal{N}(\bm{m}_{t},\mathrm{diag}(\bm{s}_{t}^{2}))$. We note that both terms in Eq. 8 factorize over time $t=B+1,\ldots,T$ as follows: $\displaystyle\mathcal{L}$ $\displaystyle=\mathrm{E}_{q(\bm{\chi}_{B:T})}\left[\sum_{t=B+1}^{T}\log\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t},\bm{\chi}_{t})\right]+\mathrm{E}_{q(\bm{\chi}_{B:T})}\left[\sum_{t=B+1}^{T}\log\frac{p(\bm{\chi}_{t}|\bm{\chi}_{t-1})}{\mathrm{q}(\bm{\chi}_{t}|\bm{\chi}_{t-1})}\right]$ (10) $\displaystyle=\sum_{t=B+1}^{T}\mathrm{E}_{q(\bm{\chi}_{t})}\left[\log\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t},\bm{\chi}_{t})\right]+\sum_{t=B+1}^{T}\mathrm{E}_{q(\bm{\chi}_{t-1:t})}\left[\log\frac{p(\bm{\chi}_{t}|\bm{\chi}_{t-1})}{\mathrm{q}(\bm{\chi}_{t}|\bm{\chi}_{t-1})}\right].$ In the above derivation, $\mathrm{p}(\bm{\chi}_{B})=\mathrm{q}(\bm{\chi}_{B})$ were canceled out due to the definition of $\mathrm{q}(\bm{\chi}_{B})$. The expectations in this equation can be evaluated by Monte-Carlo sampling from $\mathrm{q}(\bm{\chi})$ with the reparameterization trick [45] for back- propagation. In cases where training efficiency is critical, sequential sampling in the autoregressive posterior might not be feasible, and we leverage Inverse Autoregressive Flows (IAFs; [46]) for parallel sampling. While the latter does not reflect the structure of the prior and true posterior as closely as the autoregressive posterior, it is a viable option in cases where parallel sampling is necessary. 
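Before detailing the IAF variant, the sketch below illustrates, for the univariate case ($P=1$) and with our own variable names, how the autoregressive posterior of Eq. (9) can be sampled and how a single-sample Monte-Carlo estimate of the bound in Eq. (10) can be accumulated; the restart variable $\pi_{t}$ is marginalized analytically inside the prior term, and diagonal covariances are assumed.

```python
import torch
from torch.distributions import Normal

def elbo_one_sample(batches, cond_model, b_phi, a, m, s, lam, sigma0, sigmad):
    """Single-sample Monte-Carlo estimate of the bound in Eq. (10) for one
    chi-group (P = 1). `batches[t]` packages (y_hist, x_ctx, y_target) for the
    t-th prediction step (our own packaging); a, m, s are the variational
    parameters of Eq. (9), each of shape (T-B+1, E); s must be positive."""
    T_len, E = a.shape
    elbo = torch.tensor(0.0)
    chi_prev = sigma0 * torch.randn(E)            # q(chi_B) = p(chi_B) = N(0, sigma0^2 I)
    for t in range(1, T_len):
        q_t = Normal(a[t] * chi_prev + (1 - a[t]) * m[t], s[t])
        chi_t = q_t.rsample()                     # reparameterized sample for backprop
        # log p(chi_t | chi_{t-1}): mixture over the Bernoulli restart variable pi_t
        log_restart = Normal(torch.zeros(E), sigma0).log_prob(chi_t).sum()
        log_walk = Normal(chi_prev, sigmad).log_prob(chi_t).sum()
        log_prior = torch.logaddexp(torch.log1p(torch.tensor(-lam)) + log_restart,
                                    torch.log(torch.tensor(lam)) + log_walk)
        elbo = elbo + log_prior - q_t.log_prob(chi_t).sum()
        # reconstruction term: log p(y_t | y_{t-B:t-1}, x_{t-B:t}, chi_t)
        y_hist, x_ctx, y_t = batches[t]
        elbo = elbo + cond_model(y_hist, x_ctx, chi_t + b_phi).log_prob(y_t).sum()
        chi_prev = chi_t
    return elbo
```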
In the IAF-based posterior, $\mathrm{q}(\bm{\chi}_{B:T})$ can be sampled over $t=B:T$ in parallel by sampling from a standard Normal distribution of dimension $T-B+1$ and transforming the sample through several MADE layers [47]. In our case, each MADE layer not only takes the output from the previous layer as input but is also conditioned on a learnable embedding representing each dimension of $\bm{\chi}_{t}$. Details of this posterior model can be found in Appendix A. We develop optimization procedures based on stochastic gradient descent (SGD) that work in practice on large datasets utilizing our modeling assumptions from Section 3.2 and Section 3.3. Specifically, we alternate between optimizing the conditional distribution model and the prior and posterior of the dynamic model with different sampling strategies to accommodate high dimensionalities and long time spans. More details of our optimization procedures are in Appendix B. ### 3.5 Forecasting At test time we are given the past observations $\bm{y}_{1:T}$ as well as the input features $\bm{x}_{1:T+H}$, including $H$ future steps, and infer the conditional distribution $\mathrm{p}(\bm{y}_{T+1:T+H}|\bm{y}_{1:T},\bm{x}_{1:T+H})$. Based on our modeling assumptions, the latter can be computed as $\displaystyle\mathrm{p}(\bm{y}_{T+1:T+H}|\bm{y}_{1:T},\bm{x}_{1:T+H})$ (11) $\displaystyle=$ $\displaystyle\int\mathrm{p}(\bm{y}_{T+1:T+H}|\bm{y}_{T+1-B:T},\bm{x}_{T+1-B:T+H},\bm{\chi}_{T+1:T+H})\mathrm{p}(\bm{\chi}_{T+1:T+H}|\bm{y}_{1:T},\bm{x}_{1:T})\mathrm{d}\bm{\chi}_{T+1:T+H}.$ The first factor in the integrand can be computed recursively by step-by-step predictions based on our conditional distribution model given $\bm{\chi}_{T+1:T+H}$, $\mathrm{p}(\bm{y}_{T+1:T+H}|\bm{y}_{T+1-B:T},\bm{x}_{T+1-B:T+H},\bm{\chi}_{T+1:T+H})=\prod_{t=T+1}^{T+H}\mathrm{p}(\bm{y}_{t}|\bm{y}_{t-B:t-1},\bm{x}_{t-B:t},\bm{\chi}_{t}).$ (12) The second factor in the integrand can be further factorized into $\mathrm{p}(\bm{\chi}_{T+1:T+H}|\bm{y}_{1:T},\bm{x}_{1:T})=\int\mathrm{p}(\bm{\chi}_{T+1:T+H}|\bm{\chi}_{T})\mathrm{p}(\bm{\chi}_{T}|\bm{y}_{1:T},\bm{x}_{1:T})\mathrm{d}\bm{\chi}_{T}.$ (13) We use Rao-Blackwellized particle filters [48] to infer $\mathrm{p}(\bm{\chi}_{T}|\bm{y}_{1:T},\bm{x}_{1:T})$, so our model can keep adapting to new changes in an online manner. Specifically, we jointly infer $\bm{\pi}_{t}$ and $\bm{\chi}_{t}$ with particles representing $\bm{\pi}_{t}$ and closed-form inference of $\bm{\chi}_{t}$. Once we have the posterior samples of $\mathrm{p}(\bm{\chi}_{T}|\bm{y}_{1:T},\bm{x}_{1:T})$, we use the prior model to sample trajectories of $\bm{\chi}_{T+1:T+H}$ conditioned on the samples of $\bm{\chi}_{T}$. With the sample trajectories of $\mathrm{p}(\bm{\chi}_{T+1:T+H}|\bm{y}_{1:T},\bm{x}_{1:T})$, we sample the trajectories of $\mathrm{p}(\bm{y}_{T+1:T+H}|\bm{y}_{T+1-B:T},\bm{x}_{T+1-B:T+H},\bm{\chi}_{T+1:T+H})$ using the aforementioned step-by-step predictions with our conditional distribution model. ## 4 Experiments Table 1: Baselines Models. Method | S | R | MV | Implementation ---|---|---|---|--- DeepAR [1] | ✓ | | | GluonTS[49, 50] DeepSSM [5] | ✓ | | | GluonTS TransformerMAF [4] | ✓ | ✓ | ✓ | PyTorchTS[51] DeepVAR (I)nd. 
[3] | ✓ | ✓ | ✓ | GluonTS DeepVAR (C)opula [3] | | ✓ | ✓ | GluonTS GP-Scaling [3] | | ✓ | ✓ | GluonTS GP-Copula [3] | | ✓ | ✓ | GluonTS LSTM-NVP [4] | | ✓ | ✓ | PyTorchTS LSTM-MAF [4] | | ✓ | ✓ | PyTorchTS TimeGrad [29] | | ✓ | ✓ | PyTorchTS [S = Synthetic; R = Real-World; MV = Multivariate] We compare our approach with $2$ univariate and $8$ multivariate time series models, both on synthetic (Section 4.1) and real-world (Section 4.2) datasets; see Table 1 for an overview, including references to the relevant literature and implementations. We note that DeepVAR is also called Vec-LSTM in previous works [3, 4]. For our own model we consider two variants: an ablated model without the dynamic updates to the conditional distribution described in Section 3.3 (StatiConF); and our full model including those contributions (DynaConF). In both cases we experiment with different encoder architectures. For synthetic data, we use either a two-layer MLP with $32$ hidden units (* – MLP) or a point-wise linear + $\tanh$ mapping (* – PP) as the encoder. For real-world data, we use an LSTM as the encoder. ### 4.1 Experiments on Synthetic Data #### Datasets For our experiments on synthetic data we simulate four conditionally non- stationary stochastic processes for $T=2500$ time steps. We use the first $1000$ steps as training data, the next $500$ steps as validation data, and the remaining $1000$ steps as test data: * • (AR(1) – Flip) We simulate an AR(1) process, $y_{t}=w_{t}y_{t-1}+\epsilon,\epsilon\sim\mathcal{N}(0,1)$, but resample its coefficient $w_{t}$ from a uniform categorical distribution over $\\{-0.5,+0.5\\}$ every 100 steps to introduce non-stationarity. * • (AR(1) – Dynamic) We simulate the same process as above but now resample $w_{t}$ from a continuous uniform distribution $\mathcal{U}(-1,1)$ every 100 steps. * • (AR(1) – Sin) We simulate the same process as above but now resample $w_{t}$ according to $w_{t}=\sin(2\pi t/T)$. Different from the two processes above, this process has a continuously changing non-stationary conditional distribution with its own time-dependent dynamics. * • (VAR(1) – Dynamic) This process can be viewed as a multivariate generalization of AR(1) – Dynamic and is used in our comparisons with multivariate baselines. We use a four-dimensional VAR process with an identity noise matrix. Similar to the univariate case, we resample the entries of the coefficient matrix from a continuous uniform distribution $\mathcal{U}(-0.8,0.8)$ every 250 steps. In addition, we ensure the stability of the resulting process by computing the eigenvalues of the coefficient matrix and discard it if its largest absolute eigenvalue is greater than $1$. #### Experimental Setup For univariate data (AR(1) – Flip/Sin/Dynamic), we compare our approach with the two univariate baselines DeepAR and DeepSSM. For multivariate data (VAR(1) – Dynamic), most baselines are redundant because their focus is on better observation models, while the underlying temporal backbone is similar. Since our synthetic observation distributions are simple, we compare with two model families that differ in their temporal backbone: DeepVAR (RNN backbone) and TransformerMAF (Transformer backbone). We use a context window size of $200$ to give the model access to the information needed to infer the current parameter of the true conditional distribution. We tried increasing the window size to $500$ on VAR(1) – Dynamic but did not see performance improvements. 
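As a concrete reference for the synthetic benchmarks above, the AR(1) – Dynamic process can be simulated with a few lines of NumPy; the sketch below is our own, with the resampling period, coefficient range, and train/validation/test split following the description in this section.

```python
import numpy as np

def simulate_ar1_dynamic(T_len=2500, period=100, seed=0):
    """AR(1) with a coefficient w_t resampled from U(-1, 1) every `period` steps:
    y_t = w_t * y_{t-1} + eps_t,  eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T_len)
    w = rng.uniform(-1.0, 1.0)
    for t in range(1, T_len):
        if t % period == 0:                 # the conditional distribution changes here
            w = rng.uniform(-1.0, 1.0)
        y[t] = w * y[t - 1] + rng.standard_normal()
    return y

series = simulate_ar1_dynamic()
train, val, test = series[:1000], series[1000:1500], series[1500:]
```

The Flip and Sin variants only differ in how $w_{t}$ is updated.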
We also removed the unnecessary default input features of these models to prevent overfitting. For details of the setup, please see Appendix D. DynaConF is trained with the autoregressive posterior. For evaluation we use a rolling-window approach with a window size of $10$ steps. The final evaluation metrics are the aggregated results from all $100$ test windows. For univariate data we report the continuous ranked probability score (CRPS) [52], a commonly used score to measure how close the predicted distribution is to the true distribution. For multivariate data we also report $\textrm{CRPS}_{\Sigma}$, a popular variant of CRPS for multivariate distributions [3, 29, 4]. Additional results, including an evaluation based on mean squared error (MSE), can be found in Appendix E. In all cases lower values indicate better performance.

#### Results

The results on synthetic data are shown in Table 2. For univariate data, our full model (DynaConF) outperforms its ablated counterpart (StatiConF) by 2.0% – 12.3%, validating the importance of our dynamic adaptation to non-stationary effects. DynaConF – PP is also superior to all univariate baselines, with its closest competitor DeepAR – 10 behind by an average of 6.1%. Furthermore, we note that our model with the pointwise encoder tends to outperform the MLP encoder, both for the ablated and full model. For multivariate data we observe similar trends. Here, our full model (DynaConF – PP) performs 24.3% (CRPS) / 33.8% ($\textrm{CRPS}_{\Sigma}$) better than the ablated model with static conditional distribution (StatiConF – PP) and 22.6% (CRPS) / 31.0% ($\textrm{CRPS}_{\Sigma}$) better than the best-performing baseline (DeepVAR – 160). As before, pointwise encoders tend to perform better than MLP encoders. Since for synthetic data we also have access to the ground-truth models, we include the corresponding scores as a reference and upper bound in terms of performance. Figure 2 shows qualitative results of our model on AR(1) – Flip/Sin/Dynamic. Note that because the encoder in our model is non-linear, the inferred $\bm{\phi}_{t}$ does not necessarily correspond to the original parameter. However, we can clearly see how it differs for Sin (continuous changes) vs Flip/Dynamic (discrete jumps) and how it tracks the ground-truth up to scale/sign.

Figure 2: Qualitative Results. We show one dimension of $\bm{\phi}_{t}$ of DynaConF – PP inferred with particle filters at test time on (a) AR(1) – Flip, (b) AR(1) – Sin, and (c) AR(1) – Dynamic. The red dashed lines show the ground-truth parameters of the conditional distribution varying over time. The blue curves and bands are the medians and 90% confidence intervals of the posterior.

Table 2: Quantitative Evaluation (Synthetic Data). We compare StatiConF/DynaConF to (a) $4$ univariate and (b) $6$ multivariate baselines. For univariate processes (AR(1)-Flip/Sin/Dynamic) we report CRPS. For multivariate processes (VAR(1)-Dynamic) we report both CRPS and $\textrm{CRPS}_{\Sigma}$. Values behind models indicate the number of hidden units.
| Method | CRPS ---|---|--- | AR(1)-F | AR(1)-S | AR(1)-D | GroundTruth | 0.731$\scriptscriptstyle\pm\text{0.001}$ | 0.710$\scriptscriptstyle\pm\text{0.001}$ | 0.624$\scriptscriptstyle\pm\text{0.001}$ | DeepAR – 10 | 0.741$\scriptscriptstyle\pm\text{0.005}$ | 0.764$\scriptscriptstyle\pm\text{0.004}$ | 0.768$\scriptscriptstyle\pm\text{0.011}$ | DeepAR – 40 | 0.740$\scriptscriptstyle\pm\text{0.003}$ | 0.776$\scriptscriptstyle\pm\text{0.002}$ | 0.820$\scriptscriptstyle\pm\text{0.053}$ | DeepAR – 160 | 0.740$\scriptscriptstyle\pm\text{0.001}$ | 0.774$\scriptscriptstyle\pm\text{0.004}$ | 0.801$\scriptscriptstyle\pm\text{0.047}$ | DeepSSM | 0.755$\scriptscriptstyle\pm\text{0.001}$ | 0.761$\scriptscriptstyle\pm\text{0.001}$ | 0.803$\scriptscriptstyle\pm\text{0.001}$ (ours) | StatiConF – MLP | 0.753$\scriptscriptstyle\pm\text{0.001}$ | 0.784$\scriptscriptstyle\pm\text{0.002}$ | 0.764$\scriptscriptstyle\pm\text{0.003}$ StatiConF – PP | 0.752$\scriptscriptstyle\pm\text{0.002}$ | 0.763$\scriptscriptstyle\pm\text{0.001}$ | 0.783$\scriptscriptstyle\pm\text{0.002}$ DynaConF – MLP | 0.750$\scriptscriptstyle\pm\text{0.001}$ | 0.727$\scriptscriptstyle\pm\text{0.019}$ | 0.691$\scriptscriptstyle\pm\text{0.006}$ DynaConF – PP | 0.737$\scriptscriptstyle\pm\text{0.001}$ | 0.721$\scriptscriptstyle\pm\text{0.005}$ | 0.687$\scriptscriptstyle\pm\text{0.010}$ [AR(1)-*: F = Flip; S = Sin; D = Dynamic] (a) Univariate | Method | CRPS | $\textrm{CRPS}_{\Sigma}$ ---|---|---|--- | GroundTruth | 0.496$\scriptscriptstyle\pm\text{0.001}$ | $0.437\scriptscriptstyle\pm\text{0.001}$ | DeepVAR – 10 | 0.797$\scriptscriptstyle\pm\text{0.009}$ | $0.824\scriptscriptstyle\pm\text{0.015}$ | DeepVAR – 40 | 0.792$\scriptscriptstyle\pm\text{0.000}$ | $0.821\scriptscriptstyle\pm\text{0.002}$ | DeepVAR – 160 | 0.787$\scriptscriptstyle\pm\text{0.001}$ | $0.816\scriptscriptstyle\pm\text{0.002}$ | TransformerMAF – 8 | 0.800$\scriptscriptstyle\pm\text{0.001}$ | $0.831\scriptscriptstyle\pm\text{0.008}$ | TransformerMAF – 32 | 0.806$\scriptscriptstyle\pm\text{0.008}$ | $0.855\scriptscriptstyle\pm\text{0.011}$ | TransformerMAF – 128 | 0.866$\scriptscriptstyle\pm\text{0.077}$ | $0.873\scriptscriptstyle\pm\text{0.025}$ (ours) | StatiConF – MLP | 0.806$\scriptscriptstyle\pm\text{0.004}$ | $0.843\scriptscriptstyle\pm\text{0.005}$ StatiConF – PP | 0.805$\scriptscriptstyle\pm\text{0.002}$ | $0.850\scriptscriptstyle\pm\text{0.003}$ DynaConF – MLP | 0.762$\scriptscriptstyle\pm\text{0.036}$ | 0.777$\scriptscriptstyle\pm\text{0.047}$ DynaConF – PP | 0.609$\scriptscriptstyle\pm\text{0.012}$ | 0.563$\scriptscriptstyle\pm\text{0.019}$ (b) Multivariate: VAR(1) – Dynamic ### 4.2 Experiments on Real-World Data #### Datasets We evaluate the proposed method on six publicly available datasets [53, 2, 3]: (Exchange) daily exchange rates of 8 different countries from 1990 to 2016; (Solar) [54] solar power production in 10-minute intervals of 137 PV plants in 2006; (Electricity) [55] hourly electricity consumption of 370 customers from 2012 to 2014; (Traffic) [56] hourly occupancy data at 963 sensor locations in the San Francisco Bay area; (Taxi) rides taken in 30-minute intervals at 1214 locations in New York City in January 2015/2016; (Wikipedia) daily page views of 2000 Wikipedia articles. #### Experimental Setup We use the same train/test splits and the same input features, such as time of the day, as previous works with published code and results [3, 4, 29]. 
For our method, we first train StatiConF and then reuse its learned encoder in DynaConF, so the learning of DynaConF is focused on the dynamic model. DynaConF is trained with the IAF posterior for efficiency. Our models use a two-layer LSTM with 128 hidden units as the encoder, except for the 8-dimensional Exchange data, where the hidden size is 8. We stress again that, different from DeepVAR or LSTM-MAF, we use LSTM as an encoder of $(\bm{y}_{t-B:t-1},\bm{x}_{t-B,t})$ only, so we actually “restart” it at every time step. More details of our hyperparameters can be found in Appendix C. #### Results The results are shown in Table 3. As we can see, for different datasets and different evaluation metrics, the relative performance of each method can be different. This shows the diversity of these datasets and that different models may benefit from dataset-specific structure in different ways. However, DynaConF achieves the best performance more often than all the other baselines. Where it does not outperform, its performance is competitive consistently, unlike the baselines, whose relative performance (compared to others) varies significantly across datasets. We also note that our full model, DynaConF, which adapts to changes in the conditional distribution consistently outperforms our ablated model, StatiConF, which only models a static conditional distribution, except for electricity, where the performance is similar. This shows the effectiveness of modeling the dynamic changes in the conditional distribution. Further results, including CRPSΣ, and standard deviations of MSE and CRPS are in Appendix E. Table 3: Quantitative Evaluation (Real-World Data). We compare the CRPS and MSE scores of StatiConF/DynaConF and 7 baselines on 6 publicly available datasets. Lower values are better. Method | Exchange | Solar | Electricity | Traffic | Taxi | Wikipedia ---|---|---|---|---|---|--- CRPS | MSE | CRPS | MSE | CRPS | MSE | CRPS | MSE | CRPS | MSE | CRPS | MSE | [e-4] | | [e+2] | | [e+5] | | [e-4] | | [e+1] | | [e+7] DeepVAR (I) | 0.013 | 1.6 | 0.434 | 9.3 | 1.059 | 2.1 | 0.168 | 6.3 | 0.586 | 7.3 | 0.379 | 7.2 DeepVAR (C) | 0.009 | 1.9 | 0.384 | 29 | 0.084 | 55 | 0.165 | 15 | 0.416 | 5.1 | 0.247 | 3.8 GP-scaling | 0.017 | 2.9 | 0.415 | 11 | 0.053 | 1.8 | 0.140 | 5.2 | 0.346 | 2.7 | 1.549 | 5.5 GP-Copula | 0.008 | 1.7 | 0.371 | 9.8 | 0.056 | 2.4 | 0.133 | 6.9 | 0.360 | 3.1 | 0.236 | 4.0 LSTM-NVP | 0.010 | 2.4 | 0.365 | 9.1 | 0.059 | 2.5 | 0.172 | 6.9 | 0.327 | 2.6 | 0.333 | 4.7 LSTM-MAF | 0.012 | 3.8 | 0.378 | 9.8 | 0.051 | 1.8 | 0.124 | 4.9 | 0.314 | 2.4 | 0.282 | 3.8 TransformerMAF | 0.012 | 3.4 | 0.368 | 9.3 | 0.052 | 2.0 | 0.134 | 5.0 | 0.377 | 4.5 | 0.274 | 3.1 StatiConF (ours) | 0.011 | 4.4 | 0.343 | 6.6 | 0.059 | 2.0 | 0.143 | 4.1 | 0.311 | 2.2 | 0.427 | 4.1 DynaConF (ours) | 0.010 | 2.6 | 0.338 | 6.4 | 0.058 | 2.1 | 0.140 | 3.9 | 0.308 | 2.2 | 0.296 | 3.8 ## 5 Discussion In this work, we addressed the problem of modeling and forecasting time series with non-stationary conditional distributions. We proposed a new model, DynaConF, that explicitly decouples the time-invariant conditional distribution modeling and the time-variant non-stationarity modeling. We designed specific architectures, developed two types of variational posteriors, and employed Rao-Blackwellized particle filters to allow the model to train efficiently on large multivariate time series and adapt online at test time. 
Results on synthetic and real-world data show that our model can learn and adapt to different types of changes in the conditional distribution parameters and perform competitively or better than state-of-the-art time series forecasting models. Our model currently has the following limitations. (1) Since the variational posterior model complexity scales in $O(T)$, it is challenging to train the model on very long time series. (2) We used a simple observation distribution family in this work for efficient inference, but there are cases where this may impact performance. For future work, it would be interesting to develop new variational posteriors and training algorithms that can scale better and more flexible inference algorithms that can deal with more complex observation distributions. ## References * Salinas et al. [2020] David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. _International Journal of Forecasting_ , 2020. * Lai et al. [2018] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long-and short-term temporal patterns with deep neural networks. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, 2018. * Salinas et al. [2019] David Salinas, Michael Bohlke-Schneider, Laurent Callot, Roberto Medico, and Jan Gasthaus. High-dimensional multivariate forecasting with low-rank Gaussian Copula Processes. In _Advances in Neural Information Processing Systems_ , 2019\. * Rasul et al. [2021a] Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs M. Bergmann, and Roland Vollgraf. Multivariate probabilistic time series forecasting via conditioned normalizing flows. In _International Conference on Learning Representations_ , 2021a. * Rangapuram et al. [2018] Syama Sundar Rangapuram, Matthias W. Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, and Tim Januschowski. Deep state space models for time series forecasting. In _Advances in Neural Information Processing Systems_ , 2018. * Brockwell and Davis [2009] Peter J. Brockwell and Richard A. Davis. _Time Series: Theory and Methods_. 2009\. * Hamilton [1994] James Douglas Hamilton. _Time Series Analysis_. 1994\. * Kwiatkowski et al. [1992] Denis Kwiatkowski, Peter CB Phillips, Peter Schmidt, and Yongcheol Shin. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? _Journal of Econometrics_ , 1992. * Dickey and Fuller [1979] David A. Dickey and Wayne A. Fuller. Distribution of the estimators for autoregressive time series with a unit root. _Journal of the American Statistical Association_ , 1979. * Kim et al. [2022] Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In _International Conference on Learning Representations_ , 2022. * Box et al. [2015] George EP Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. _Time Series Analysis: Forecasting and Control_. 2015\. * Engle [1982] Robert F. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. _Econometrica: Journal of the Econometric Society_ , 1982. * Bollerslev [1986] Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. _Journal of Econometrics_ , 1986. * Kalman [1960] Re E Kalman. 
A new approach to linear filtering and prediction problems. _Transactions of the ASME-Journal of Basic Engineering_ , 1960. * Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. _Neural Computation_ , 1997. * Bai et al. [2018] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. _arXiv preprint arXiv:1803.01271_ , 2018. * Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in Neural Information Processing Systems_ , 2017. * Garnelo et al. [2018a] Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S. M. Ali Eslami. Conditional neural processes. In _International Conference on Machine Learning_ , 2018a. * Garnelo et al. [2018b] Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Eslami, and Yee Whye Teh. Neural processes. _arXiv preprint arXiv:1807.01622_ , 2018b. * Lim and Zohren [2020] Bryan Lim and Stefan Zohren. Time series forecasting with deep learning: A survey. _arXiv:2004.13408 [cs, stat]_ , 2020. * Holt [2004] Charles C. Holt. Forecasting seasonals and trends by exponentially weighted moving averages. _International Journal of Forecasting_ , 2004. * Cleveland et al. [1990] Robert B. Cleveland, William S. Cleveland, Jean E. McRae, and Irma Terpenning. STL: A seasonal-trend decomposition procedure based on loess. _Journal of Official Statistics_ , 1990. * Smyl [2020] Slawek Smyl. A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. _International Journal of Forecasting_ , 2020. * Bandara et al. [2020] Kasun Bandara, Christoph Bergmeir, and Slawek Smyl. Forecasting across time series databases using recurrent neural networks on groups of similar series: A clustering approach. _Expert Systems with Applications_ , 2020. * Oreshkin et al. [2020] Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. _arXiv:1905.10437 [cs, stat]_ , 2020. * Wu et al. [2021] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In _Advances in Neural Information Processing Systems_ , 2021\. * Woo et al. [2022] Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, and Steven Hoi. ETSformer: Exponential Smoothing Transformers for Time-series Forecasting. _arXiv preprint arXiv:2202.01381_ , 2022. * Corani et al. [2021] Giorgio Corani, Alessio Benavoli, and Marco Zaffalon. Time series forecasting with Gaussian Processes needs priors. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_ , 2021. * Rasul et al. [2021b] Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In _Proceedings of the 38th International Conference on Machine Learning_ , 2021b. * Fraccaro et al. [2017] Marco Fraccaro, Simon Kamronn, Ulrich Paquet, and Ole Winther. A disentangled recognition and nonlinear dynamics model for unsupervised learning. In _Advances in Neural Information Processing Systems_ , 2017\. * de Bézenac et al. 
[2020] Emmanuel de Bézenac, Syama Sundar Rangapuram, Konstantinos Benidis, Michael Bohlke-Schneider, Richard Kurle, Lorenzo Stella, Hilaf Hasson, Patrick Gallinari, and Tim Januschowski. Normalizing Kalman filters for multivariate time series analysis. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_ , 2020. * Tang and Matteson [2021] Binh Tang and David S Matteson. Probabilistic transformer for time series analysis. In _Advances in Neural Information Processing Systems_ , 2021\. * Klushyn et al. [2021] Alexej Klushyn, Richard Kurle, Maximilian Soelch, Botond Cseke, and Patrick van der Smagt. Latent matters: Learning deep state-space models. In _Advances in Neural Information Processing Systems_ , 2021\. * Ansari et al. [2021] Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J. Smola, Bernie Wang, and Tim Januschowski. Deep explicit duration switching models for time series. _Advances in Neural Information Processing Systems_ , 2021. * Kurle et al. [2020] Richard Kurle, Syama Sundar Rangapuram, Emmanuel de Bézenac, Stephan Günnemann, and Jan Gasthaus. Deep Rao-Blackwellised particle filters for time series forecasting. _Advances in Neural Information Processing Systems_ , 2020. * Nguyen and Quanz [2021] Nam Nguyen and Brian Quanz. Temporal latent auto-encoder: A method for probabilistic multivariate time series forecasting. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2021. * Le Guen and Thome [2020] Vincent Le Guen and Nicolas Thome. Probabilistic time series forecasting with shape and temporal diversity. _Advances in Neural Information Processing Systems_ , 2020. * Yu et al. [2021] Zhongjie Yu, Fabrizio G. Ventola, and Kristian Kersting. Whittle networks: A deep likelihood model for time series. In _International Conference on Machine Learning_ , 2021. * De Lange et al. [2021] Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2021. * Parisi et al. [2019] German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. _Neural Networks_ , 2019. * Kirkpatrick et al. [2017] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. _arXiv:1612.00796 [cs, stat]_ , 2017. * Gupta et al. [2022] Vibhor Gupta, Jyoti Narwariya, Pankaj Malhotra, Lovekesh Vig, and Gautam Shroff. Continual learning for multivariate time series tasks with variable input dimensions. _arXiv:2203.06852 [cs]_ , 2022. * Nguyen et al. [2018] Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In _International Conference on Learning Representations_ , 2018. * Kurle et al. [2019] Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, and Stephan Günnemann. Continual learning with bayesian neural networks for non-stationary data. In _International Conference on Learning Representations_ , 2019. * Kingma and Welling [2013] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. 
_arXiv preprint arXiv:1312.6114_ , 2013. * Kingma et al. [2017] Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. _arXiv:1606.04934 [cs, stat]_ , 2017. * Germain et al. [2015] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In _Proceedings of the 32nd International Conference on Machine Learning_ , 2015. * Doucet et al. [2000] Arnaud Doucet, Nando de Freitas, Kevin Murphy, and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In _Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence_ , UAI’00, 2000. * Alexandrov et al. [2020] Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, and Yuyang Wang. GluonTS: Probabilistic and Neural Time Series Modeling in Python. _Journal of Machine Learning Research_ , 2020. * glu [2021] GluonTS, 2021. URL https://github.com/awslabs/gluon-ts. * [51] Kashif Rasul. PytorchTS. URL https://github.com/zalandoresearch/pytorch-ts. * Matheson and Winkler [1976] James E. Matheson and Robert L. Winkler. Scoring rules for continuous probability distributions. _Management Science_ , 1976. * [53] Public time series datasets. URL https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets. * [54] Solar power dataset. URL http://www.nrel.gov/grid/solar-power-data.html. * [55] Electricity dataset. URL https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014. * [56] Traffic dataset. URL http://pems.dot.ca.gov. * Smith and Topin [2017] Leslie N. Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. _arXiv preprint arXiv:1708.07120_ , 2017. * Kingma and Ba [2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. ## Appendix A Normalizing-Flow-Based Variational Posterior We develop an alternative variational posterior model based on Inverse Autoregressive Flows (IAFs; [46]). In this model, $\mathrm{q}(\bm{\chi}_{B:T})$ can be sampled jointly over $t=B:T$ in parallel as follows. We again let $\mathrm{q}(\bm{\chi}_{B})=\mathrm{p}(\bm{\chi}_{B})$ and only use IAF to model $\mathrm{q}(\bm{\chi}_{B+1:T})$. Let $T^{\prime}=T-B$. To sample the $i$-th dimension of $\bm{\chi}_{B+1:T}$, we first sample a vector $\bm{z}_{0}$ of dimension $T^{\prime}$ from a standard multivariate normal distribution $\bm{z}_{0}\sim\mathcal{N}(\bm{0},\bm{I}).$ (14) Then, this sample is transformed sequentially for $l=1,\ldots,L$ as $\bm{z}_{l}=\bm{\mu}_{l}+\bm{\sigma}_{l}\odot\bm{z}_{l-1},$ (15) where $\bm{\mu}_{l}$ and $\bm{\sigma}_{l}$ are the output from a neural network, specifically MADE [47], with a soft-plus transformation on the latter to make sure it is positive. The final output $\bm{z}_{L}$ is taken as the sample of the $i$-th dimension of $\bm{\chi}_{B+1:T}$. The input of the MADE at the $l$-th iteration consists of not only the previous output $\bm{z}_{l-1}$ but also an additional learnable embedding for $\bm{\chi}_{B+1:T,i}$, the $i$-th dimension of $\bm{\chi}_{B+1:T}$. We set $L=3$ and use MADE of 2 hidden layers of size 1024 and learnable embeddings of size 512 for all our experiments. 
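To make the sampling procedure concrete, the following minimal sketch (our own simplification; the MADE networks and the learnable per-dimension embedding are replaced by toy callables, not the architecture actually used in the paper) draws one IAF sample for a single dimension and accumulates its log-density:

```python
import numpy as np

def sample_iaf_dimension(made_layers, embedding, t_len, rng):
    """One IAF sample for a single dimension of chi_{B+1:T} (Eqs. 14-15), with log q(z_L).
    `made_layers` are callables (z, embedding) -> (mu, sigma_raw) standing in for MADE."""
    z = rng.standard_normal(t_len)                        # Eq. (14): base sample z_0 ~ N(0, I)
    log_q = -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi))   # log-density of the base sample
    for made in made_layers:
        mu, sigma_raw = made(z, embedding)
        sigma = np.logaddexp(0.0, sigma_raw)              # soft-plus keeps the scale positive
        z = mu + sigma * z                                # Eq. (15): affine autoregressive update
        log_q -= np.sum(np.log(sigma))                    # subtract log|det(dz_l / dz_{l-1})|
    return z, log_q

# Toy stand-in for a MADE layer: outputs at step t depend only on z_{t-1} (autoregressive).
def toy_made(z, embedding):
    prev = np.concatenate([[0.0], z[:-1]])
    return 0.1 * prev + embedding, 0.05 * prev

rng = np.random.default_rng(0)
sample, log_density = sample_iaf_dimension([toy_made] * 3, embedding=0.0, t_len=20, rng=rng)
```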
To compute $\mathrm{q}(\bm{\chi}_{B+1:T})$, we also need to compute the determinants of the Jacobians $|{\mathrm{d}\bm{z}_{l}}/{\mathrm{d}\bm{z}_{l-1}}|,l=1,\ldots,L$, in addition to the density of the base distribution, which is the standard normal $\mathcal{N}(\bm{0},\bm{I})$. Due to the structure of MADE and the definition of IAF (Eq. 15), the Jacobian determinants can be easily computed from $\bm{\sigma}_{l}$.

## Appendix B Optimization Procedure

In contrast to existing time series models, we utilize a flexible variational posterior with a number of parameters on the order of $O(T)$ and a structured prior model to account for conditional distribution changes over time. However, jointly optimizing over these variational parameters and the parameters in the conditional distribution model itself using stochastic gradient descent can be prohibitively demanding on computational resources, especially GPU memory. Instead, we propose an alternating optimization procedure to learn these parameters. Specifically, instead of optimizing by stochastic gradient descent (SGD) over all parameters in both the conditional distribution model and the prior and variational posterior models, we learn the model by alternating the optimization of the parameters in the conditional distribution model and the prior and variational posterior models. For the former, we condition on the samples from the current posterior model while optimizing the conditional distribution model on randomly sampled sub-sequences from the time series using SGD. For the latter, we fix the conditional distribution model, sample from the variational posterior model over the entire time series either sequentially or in parallel, depending on the variational posterior model we use, and compute the loss by passing the samples through the conditional distribution, prior, and posterior models. Then we perform an update on the prior and posterior model parameters using the gradient of the loss. When the time series is high-dimensional and GPU memory becomes a constraint during training, we randomly sample a subset of _observation_ dimensions for each batch, since the loss decomposes over the observation dimensions in our model.

## Appendix C Hyperparameters

On synthetic data, we use the validation set for early stopping and to choose the best model for both the baselines and StatiConF. For DynaConF, we keep training as long as the loss is decreasing on the training set. We perform 50 updates per epoch. We use 32 hidden units for our 2-layer MLP encoder. For the baselines, we report their results with different hidden sizes, including the default ones. On real-world data, we use the “1cycle” learning rate scheduler [57] and manually choose the number of epochs for StatiConF. After training StatiConF, we reuse its encoders in DynaConF, so it only needs to learn the dynamic model. For DynaConF, we train until the loss on the training set converges. For our models, we use Adam [58] as the optimizer with the default learning rate of $0.001$ unless the learning rate is controlled by the “1cycle” learning rate scheduler. The dimension of the latent vector $\bm{z}_{t,i}$ (see Section 3.2) is set to $E=4$ across all the experiments. Because of the diversity of the real-world datasets, we further apply techniques to stabilize training. Specifically, for all datasets, we use the mean and standard deviation to shift and scale each dimension of the time series.
For Exchange, we use the mean and standard deviation of the recent past data in a moving context window. For the other datasets, we simply use the global mean and standard deviation of each dimension computed using the whole training set. In all cases, for forecasting, the output from the model is inversely scaled and shifted back for evaluation. Different from the rest of the datasets, Taxi and Wikipedia consist of counts. The time series in Taxi are relatively small counts, while the time series in Wikipedia are very large and can have extreme values. To stabilize training, we apply the dequantization technique used in [4] to Taxi, where we add a small noise from the uniform distribution $\mathcal{U}(-0.5,0.5)$ to the target time series $\bm{y}$. On Wikipedia, we use the quantiles (0.02 and 0.95) computed from the recent past data in a moving context window to Winsorize extreme values. ## Appendix D Experiment Setup Details On synthetic data, we keep most baseline hyperparameters to their default values but make the following changes to account for properties of our synthetic data: (1) To reduce overfitting we remove any unnecessary input features from the models and use only past observations with time lag $1$ as input. (2) To allow the models to adapt to changes in the conditional distribution we increase the context window size to $200$. This allows the models to see enough observations generated with the latest ground-truth distribution parameters, so the models have the necessary information to adapt to the current distribution. For VAR(1) – Dynamic, we also tried extending it to $500$, but it did not improve the performance. (3) DeepSSM allows modeling of trend and seasonality, but since our synthetic data do not have those, we explicitly remove those components from the model specification to avoid overfitting; (4) DeepVAR allows modeling of different covariance structures in the noise, such as diagonal, low-rank, and full-rank. Since our synthetic data follow a diagonal covariance structure in the noise, we explicitly specify that in DeepVAR. We run all experiments for three different random seeds independently and calculate and report the mean and standard deviation of each evaluation metric for each model. On synthetic datasets, we use 1000 sample paths to empirically estimate the predicted distributions for all models. On real-world datasets, we use 100 sample paths. We use three evaluation metrics: mean squared error (MSE), continuous ranked probability score (CRPS)[52] and CRPSΣ. Assume that we observe $y$ at time $t$ but a probabilistic forecasting model predicts the distribution of $y$ to be $F$. MSE is widely used for time series forecasting, and for a probabilistic forecasting model, where the mean of the distribution is used for point prediction, it is defined as $\text{MSE}(F,y)=(\mathrm{E}_{z\sim F}[z]-y)^{2}$ (16) for a single time point $t$ and averaged over all the time points in the test set. CRPS has been used for evaluating how close the predicted probability distribution is to the ground-truth distribution and is defined as $\text{CRPS}(F,y)=\int_{\mathbb{R}}(F(z)-\mathbb{I}[y\leq z])^{2}\mathrm{d}{z},$ (17) for a single time point $t$ and averaged over all the time points in the test set, where $\mathbb{I}$ denotes the indicator function. Generally, $F(z)$ can be approximated by the empirical distribution of the samples from the predicted distribution. 
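For illustration (not code from the paper), the empirical approximation of (17) can be computed directly from forecast samples via the standard identity $\text{CRPS}(F,y)=\mathrm{E}|X-y|-\frac{1}{2}\mathrm{E}|X-X^{\prime}|$, where $X,X^{\prime}$ are independent draws from $F$:

```python
import numpy as np

def crps_from_samples(samples, y):
    """Empirical CRPS (Eq. 17) with F replaced by the empirical CDF of `samples`,
    using CRPS(F, y) = E|X - y| - 0.5 * E|X - X'| for X, X' i.i.d. from F."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# Example: 100 sample paths at a single time point versus the observed value.
rng = np.random.default_rng(0)
forecast_samples = rng.normal(loc=1.0, scale=0.5, size=100)
print(crps_from_samples(forecast_samples, y=1.2))
```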
Both MSE and CRPS can be applied to multivariate time series by computing the metric on each dimension and then averaging over all the dimensions. CRPSΣ is another metric that has been used in recent works [3, 4, 29] to evaluate multivariate probabilistc forecasting results. To compute it, the sum of the time series across all the dimensions is computed and then compared with the sum of the prediction using CRPS. That is $\text{CRPS}_{\Sigma}(F_{\Sigma},\sum_{i}y_{i})=\int_{\mathbb{R}}(F_{\Sigma}(z)-\mathbb{I}[\sum_{i}y_{i}\leq z])^{2}\mathrm{d}{z},$ (18) where $F_{\Sigma}$ is the distribution of the sum of the predicted values $\sum_{i}z_{i},z_{i}\sim F_{i}(z)$ and $i$ denotes the dimension $i$ of the time series. $F_{\Sigma}$ is usually approximated by the empirical distribution of the samples summed across the dimensions. ## Appendix E Additional Experiment Results Table 4 and 5 show the MSE results of the baselines and our models on the univariate and multivariate processes respectively. Table 6 shows the CRPSΣ scores of our models on the real-world datasets, while Table 7 and 8 show the full CRPS and MSE results of our models with means and standard deviations. The results of the baselines on the real-world data are from [4, 29]. Table 4: Quantitative Evaluation (Synthetic Data). MSE results on univariate processes AR(1)-Flip/Sin/Dynamic. Values behind architectures indicate the number of hidden units. | Method | MSE ---|---|--- | AR(1)-Flip | AR(1)-Sin | AR(1)-Dynamic | GroundTruth | 1.2$\scriptscriptstyle\pm\text{9.2e-4}$ | 1.6$\scriptscriptstyle\pm\text{3.0e-3}$ | 2.1$\scriptscriptstyle\pm\text{5.3e-3}$ | DeepAR – 10 | 1.3$\scriptscriptstyle\pm\text{1.7e-2}$ | 1.8$\scriptscriptstyle\pm\text{1.8e-2}$ | 3.2$\scriptscriptstyle\pm\text{7.2e-2}$ | DeepAR – 40 | 1.3$\scriptscriptstyle\pm\text{8.9e-3}$ | 1.8$\scriptscriptstyle\pm\text{1.3e-2}$ | 3.6$\scriptscriptstyle\pm\text{3.8e-1}$ | DeepAR – 160 | 1.3$\scriptscriptstyle\pm\text{2.8e-3}$ | 1.8$\scriptscriptstyle\pm\text{9.6e-3}$ | 3.5$\scriptscriptstyle\pm\text{3.3e-1}$ | DeepSSM | 1.3$\scriptscriptstyle\pm\text{2.2e-3}$ | 1.8$\scriptscriptstyle\pm\text{2.8e-3}$ | 3.3$\scriptscriptstyle\pm\text{3.0e-3}$ (ours) | StatiConF – MLP | 1.3$\scriptscriptstyle\pm\text{4.0e-3}$ | 1.9$\scriptscriptstyle\pm\text{7.7e-3}$ | 3.3$\scriptscriptstyle\pm\text{3.5e-2}$ StatiConF – PP | 1.3$\scriptscriptstyle\pm\text{4.7e-3}$ | 1.8$\scriptscriptstyle\pm\text{5.4e-3}$ | 3.3$\scriptscriptstyle\pm\text{6.2e-3}$ DynaConF – MLP | 1.3$\scriptscriptstyle\pm\text{4.0e-3}$ | 1.6$\scriptscriptstyle\pm\text{9.2e-2}$ | 2.6$\scriptscriptstyle\pm\text{5.2e-2}$ DynaConF – PP | 1.2$\scriptscriptstyle\pm\text{5.5e-3}$ | 1.6$\scriptscriptstyle\pm\text{2.7e-2}$ | 2.6$\scriptscriptstyle\pm\text{5.1e-2}$ Table 5: Quantitative Evaluation (Synthetic Data). MSE results on multivariate process VAR(1)-Dynamic. Values behind architectures indicate the number of hidden units. 
| Method | MSE ---|---|--- | GroundTruth | 2.8$\scriptscriptstyle\pm\text{7.6e-3}$ | DeepVAR – 10 | 8.4$\scriptscriptstyle\pm\text{5.6e-2}$ | DeepVAR – 40 | 8.4$\scriptscriptstyle\pm\text{2.7e-3}$ | DeepVAR – 160 | 8.3$\scriptscriptstyle\pm\text{2.2e-2}$ | TransformerMAF – 8 | 8.5$\scriptscriptstyle\pm\text{2.9e-2}$ | TransformerMAF – 32 | 8.5$\scriptscriptstyle\pm\text{3.7e-2}$ | TransformerMAF – 128 | 9.4$\scriptscriptstyle\pm\text{1.2e0}$ (ours) | StatiConF – MLP | 8.5$\scriptscriptstyle\pm\text{8.5e-2}$ StatiConF – PP | 8.5$\scriptscriptstyle\pm\text{2.3e-2}$ DynaConF – MLP | 7.9$\scriptscriptstyle\pm\text{5.7e-1}$ DynaConF – PP | 4.5$\scriptscriptstyle\pm\text{3.0e-1}$ Table 6: Quantitative Evaluation (Real-World Data). Full CRPSΣ results of StatiConF/DynaConF and 8 baselines with means and standard deviations on 6 publicly available datasets. Lower values are better. Method | Exchange | Solar | Electricity | Traffic | Taxi | Wikipedia ---|---|---|---|---|---|--- DeepVAR (I) | $0.008\scriptscriptstyle\pm\text{0.001}$ | $0.391\scriptscriptstyle\pm\text{0.017}$ | $0.025\scriptscriptstyle\pm\text{0.001}$ | $0.087\scriptscriptstyle\pm\text{0.041}$ | $0.506\scriptscriptstyle\pm\text{0.005}$ | $0.133\scriptscriptstyle\pm\text{0.002}$ DeepVAR (C) | $0.007\scriptscriptstyle\pm\text{0.000}$ | $0.319\scriptscriptstyle\pm\text{0.011}$ | $0.064\scriptscriptstyle\pm\text{0.008}$ | $0.103\scriptscriptstyle\pm\text{0.006}$ | $0.326\scriptscriptstyle\pm\text{0.007}$ | $0.241\scriptscriptstyle\pm\text{0.033}$ GP-Scaling | $0.009\scriptscriptstyle\pm\text{0.000}$ | $0.368\scriptscriptstyle\pm\text{0.012}$ | $0.022\scriptscriptstyle\pm\text{0.000}$ | $0.079\scriptscriptstyle\pm\text{0.000}$ | $0.183\scriptscriptstyle\pm\text{0.395}$ | $1.483\scriptscriptstyle\pm\text{1.034}$ GP-Copula | $0.007\scriptscriptstyle\pm\text{0.000}$ | $0.337\scriptscriptstyle\pm\text{0.024}$ | $0.024\scriptscriptstyle\pm\text{0.002}$ | $0.078\scriptscriptstyle\pm\text{0.002}$ | $0.208\scriptscriptstyle\pm\text{0.183}$ | $0.086\scriptscriptstyle\pm\text{0.004}$ LSTM-NVP | $0.006\scriptscriptstyle\pm\text{0.003}$ | $0.331\scriptscriptstyle\pm\text{0.020}$ | $0.024\scriptscriptstyle\pm\text{0.001}$ | $0.078\scriptscriptstyle\pm\text{0.001}$ | $0.175\scriptscriptstyle\pm\text{0.001}$ | $0.078\scriptscriptstyle\pm\text{0.001}$ LSTM-MAF | 0.005$\scriptscriptstyle\pm\text{0.003}$ | $0.315\scriptscriptstyle\pm\text{0.023}$ | $0.021\scriptscriptstyle\pm\text{0.000}$ | $0.069\scriptscriptstyle\pm\text{0.002}$ | $0.161\scriptscriptstyle\pm\text{0.002}$ | $0.067\scriptscriptstyle\pm\text{0.001}$ TransformerMAF | 0.005$\scriptscriptstyle\pm\text{0.003}$ | $0.301\scriptscriptstyle\pm\text{0.014}$ | $0.021\scriptscriptstyle\pm\text{0.000}$ | $0.056\scriptscriptstyle\pm\text{0.001}$ | $0.179\scriptscriptstyle\pm\text{0.002}$ | $0.063\scriptscriptstyle\pm\text{0.003}$ TimeGrad | $0.006\scriptscriptstyle\pm\text{0.001}$ | $0.287\scriptscriptstyle\pm\text{0.020}$ | $0.021\scriptscriptstyle\pm\text{0.001}$ | $0.044\scriptscriptstyle\pm\text{0.006}$ | 0.114$\scriptscriptstyle\pm\text{0.020}$ | 0.049$\scriptscriptstyle\pm\text{0.002}$ StatiConF | 0.006$\scriptscriptstyle\pm\text{0.001}$ | 0.255$\scriptscriptstyle\pm\text{0.045}$ | 0.020$\scriptscriptstyle\pm\text{0.004}$ | 0.033$\scriptscriptstyle\pm\text{0.002}$ | 0.149$\scriptscriptstyle\pm\text{0.013}$ | 0.069$\scriptscriptstyle\pm\text{0.009}$ DynaConF | 0.006$\scriptscriptstyle\pm\text{0.001}$ | 0.242$\scriptscriptstyle\pm\text{0.052}$ | 0.021$\scriptscriptstyle\pm\text{0.003}$ | 
0.032$\scriptscriptstyle\pm\text{0.002}$ | 0.146$\scriptscriptstyle\pm\text{0.014}$ | 0.081$\scriptscriptstyle\pm\text{0.006}$ Table 7: Quantitative Evaluation (Real-World Data).. Full CRPS results with means and standard deviations on 6 publicly available datasets. Lower values are better. Method | Exchange | Solar | Electricity | Traffic | Taxi | Wikipedia ---|---|---|---|---|---|--- DeepVAR (I) | 0.013$\scriptscriptstyle\pm\text{0.000}$ | 0.434$\scriptscriptstyle\pm\text{0.012}$ | 1.059$\scriptscriptstyle\pm\text{0.001}$ | 0.168$\scriptscriptstyle\pm\text{0.037}$ | 0.586$\scriptscriptstyle\pm\text{0.004}$ | 0.379$\scriptscriptstyle\pm\text{0.004}$ DeepVAR (C) | 0.009$\scriptscriptstyle\pm\text{0.000}$ | 0.384$\scriptscriptstyle\pm\text{0.010}$ | 0.084$\scriptscriptstyle\pm\text{0.006}$ | 0.165$\scriptscriptstyle\pm\text{0.004}$ | 0.416$\scriptscriptstyle\pm\text{0.004}$ | 0.247$\scriptscriptstyle\pm\text{0.001}$ GP-Scaling | 0.017$\scriptscriptstyle\pm\text{0.000}$ | 0.415$\scriptscriptstyle\pm\text{0.009}$ | 0.053$\scriptscriptstyle\pm\text{0.000}$ | 0.140$\scriptscriptstyle\pm\text{0.002}$ | 0.346$\scriptscriptstyle\pm\text{0.348}$ | 1.549$\scriptscriptstyle\pm\text{1.017}$ GP-Copula | 0.008$\scriptscriptstyle\pm\text{0.000}$ | 0.371$\scriptscriptstyle\pm\text{0.022}$ | 0.056$\scriptscriptstyle\pm\text{0.002}$ | 0.133$\scriptscriptstyle\pm\text{0.001}$ | 0.360$\scriptscriptstyle\pm\text{0.201}$ | 0.236$\scriptscriptstyle\pm\text{0.000}$ LSTM-NVP | 0.010$\scriptscriptstyle\pm\text{0.001}$ | 0.365$\scriptscriptstyle\pm\text{0.020}$ | 0.059$\scriptscriptstyle\pm\text{0.001}$ | 0.172$\scriptscriptstyle\pm\text{0.001}$ | 0.327$\scriptscriptstyle\pm\text{0.001}$ | 0.333$\scriptscriptstyle\pm\text{0.001}$ LSTM-MAF | 0.012$\scriptscriptstyle\pm\text{0.003}$ | 0.378$\scriptscriptstyle\pm\text{0.032}$ | 0.051$\scriptscriptstyle\pm\text{0.000}$ | 0.124$\scriptscriptstyle\pm\text{0.002}$ | 0.314$\scriptscriptstyle\pm\text{0.003}$ | 0.282$\scriptscriptstyle\pm\text{0.002}$ TransformerMAF | 0.012$\scriptscriptstyle\pm\text{0.003}$ | 0.368$\scriptscriptstyle\pm\text{0.001}$ | 0.052$\scriptscriptstyle\pm\text{0.000}$ | 0.134$\scriptscriptstyle\pm\text{0.001}$ | 0.377$\scriptscriptstyle\pm\text{0.002}$ | 0.274$\scriptscriptstyle\pm\text{0.007}$ StatiConF | 0.011$\scriptscriptstyle\pm\text{0.001}$ | 0.343$\scriptscriptstyle\pm\text{0.031}$ | 0.059$\scriptscriptstyle\pm\text{0.003}$ | 0.143$\scriptscriptstyle\pm\text{0.001}$ | 0.311$\scriptscriptstyle\pm\text{0.006}$ | 0.427$\scriptscriptstyle\pm\text{0.001}$ DynaConF | 0.010$\scriptscriptstyle\pm\text{0.000}$ | 0.338$\scriptscriptstyle\pm\text{0.035}$ | 0.058$\scriptscriptstyle\pm\text{0.004}$ | 0.140$\scriptscriptstyle\pm\text{0.002}$ | 0.308$\scriptscriptstyle\pm\text{0.005}$ | 0.296$\scriptscriptstyle\pm\text{0.009}$ Table 8: Quantitative Evaluation (Real-World Data).. Full MSE results with means and standard deviations on 6 publicly available datasets. Lower values are better. 
Method | Exchange | Solar | Electricity | Traffic | Taxi | Wikipedia ---|---|---|---|---|---|--- [e-4] | [e+2] | [e+5] | [e-4] | [e+1] | [e+7] DeepVAR (I) | 1.6 | 9.3 | 2.1 | 6.3 | 7.3 | 7.2 DeepVAR (C) | 1.9 | 29 | 55 | 15 | 5.1 | 3.8 GP-Scaling | 2.9 | 11 | 1.8 | 5.2 | 2.7 | 5.5 GP-Copula | 1.7 | 9.8 | 2.4 | 6.9 | 3.1 | 4.0 LSTM-NVP | 2.4 | 9.1 | 2.5 | 6.9 | 2.6 | 4.7 LSTM-MAF | 3.8 | 9.8 | 1.8 | 4.9 | 2.4 | 3.8 TransformerMAF | 3.4 | 9.3 | 2.0 | 5.0 | 4.5 | 3.1 StatiConF | 4.4$\scriptscriptstyle\pm\text{2.0e-4}$ | 6.6$\scriptscriptstyle\pm\text{1.2e2}$ | 2.0$\scriptscriptstyle\pm\text{2.8e4}$ | 4.1$\scriptscriptstyle\pm\text{3.1e-6}$ | 2.2$\scriptscriptstyle\pm\text{7.1e-1}$ | 4.1$\scriptscriptstyle\pm\text{4.6e5}$ DynaConF | 2.6$\scriptscriptstyle\pm\text{1.6e-5}$ | 6.4$\scriptscriptstyle\pm\text{1.3e2}$ | 2.1$\scriptscriptstyle\pm\text{5.1e4}$ | 3.9$\scriptscriptstyle\pm\text{4.0e-6}$ | 2.2$\scriptscriptstyle\pm\text{6.9e-1}$ | 3.8$\scriptscriptstyle\pm\text{1.6e5}$
# Composition of rough singular integral operators on rearrangement invariant Banach type spaces

Jiawei Tan
Jiawei Tan: School of Mathematical Sciences Beijing Normal University Laboratory of Mathematics and Complex Systems Ministry of Education Beijing 100875 People’s Republic of China<EMAIL_ADDRESS>
and Qingying Xue∗
Qingying Xue: School of Mathematical Sciences Beijing Normal University Laboratory of Mathematics and Complex Systems Ministry of Education Beijing 100875 People’s Republic of China<EMAIL_ADDRESS>

###### Abstract.

Let $\Omega$ be a homogeneous function of degree zero satisfying the vanishing condition on the unit sphere $\mathbb{S}^{n-1}(n\geq 2)$. Let $T_{\Omega}$ be the convolution singular integral operator with kernel ${\Omega(x)}{|x|^{-n}}$. In this paper, when $\Omega\in L^{\infty}(\mathbb{S}^{n-1})$, we consider the quantitative weighted bounds of the composite operators of $T_{\Omega}$ on rearrangement invariant Banach function spaces. These spaces contain the classical Lorentz spaces and Orlicz spaces as special examples. Weighted boundedness of the composite operators on rearrangement invariant quasi-Banach spaces is also given.

###### Key words and phrases: rough singular integral operator, composite operator, rearrangement invariant Banach function spaces, bilinear sparse operators.

2010 Mathematics Subject Classification. Primary 42B20, Secondary 42B35. The authors were partly supported by the National Key R&D Program of China (No. 2020YFA0712900) and NNSF of China (No. 12271041). ∗ Corresponding author, e-mail address<EMAIL_ADDRESS>

## 1\. Introduction and main results

This paper aims to establish the quantitative weighted boundedness for the composition of rough singular integral operators in rearrangement invariant Banach spaces and quasi-Banach spaces. It is worth pointing out that the classical Lorentz spaces, Orlicz spaces and Marcinkiewicz spaces are special examples of these spaces. The study of these rearrangement invariant function spaces has a long history. Indeed, in 1955, Lorentz [40] first showed that the Hardy-Littlewood maximal operator $M$ is bounded on a rearrangement invariant Banach function space $\mathbb{X}$ if and only if $p_{\mathbb{X}}>1$. Subsequently, Boyd [5] proved that the Hilbert transform $H$ is also bounded on $\mathbb{X}$ if and only if $1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty.$ Here $p_{\mathbb{X}}$ and $q_{\mathbb{X}}$ denote the Boyd indices of $\mathbb{X}$ (see Section 2.1 below). These results were originally proved for Banach spaces, but they were later generalized to the quasi-Banach case with the same restrictions on the Boyd indices in [42]. Since then, many other contributions have enriched the literature on this subject; we refer the readers to [4, 21, 18] and the references therein. In particular, by using sparse domination, Anderson and Hu [2] obtained the quantitative weighted bounds for the maximal truncated singular integral operator on rearrangement invariant Banach spaces.

We now give a brief review of the study of rough singular integrals. Let $\Omega$ be a homogeneous function of degree zero, $\Omega\in L^{1}(\mathbb{S}^{n-1})$, satisfying the vanishing condition on the unit sphere $\mathbb{S}^{n-1}(n\geq 2)$ as follows

(1.1) $\int_{\mathbb{S}^{n-1}}\Omega(y)d\sigma(y)=0,$

where $d\sigma(y)$ denotes the Lebesgue measure restricted to $\mathbb{S}^{n-1}$.
In 1956, Calderón and Zygmund [8] introduced the following rough homogeneous singular integral operator

(1.2) $T_{\Omega}f(x)=\text{p.v.}\int_{\mathbb{R}^{n}}\frac{\Omega\left(y/|y|\right)}{|y|^{n}}f(x-y)dy.$

Using the method of rotations, Calderón and Zygmund [8] demonstrated the $L^{p}$ $(1<p<\infty)$ boundedness of $T_{\Omega}$ whenever $\Omega$ is odd and $\Omega\in L^{1}(\mathbb{S}^{n-1})$, or $\Omega$ is even, $\Omega\in L\log L(\mathbb{S}^{n-1})$ and satisfies (1.1). In 1979, Connett [16], Ricci and Weiss [47] independently showed that $\Omega\in H^{1}({\mathbb{S}^{n-1}})$ is sufficient to warrant the $L^{p}$ boundedness of $T_{\Omega}$. Here $H^{1}({\mathbb{S}^{n-1}})$ denotes the Hardy space on ${\mathbb{S}^{n-1}}$, which contains $L\log L({\mathbb{S}^{n-1}})$ as a proper subspace. We now turn to the weak endpoint case and the weighted case. This area has flourished and has been enriched by several important works. Among them are the celebrated works of Christ, Christ and Rubio de Francia, Hofmann, Seeger, and Tao for weak type $(1,1)$ bounds of $T_{\Omega}$. In particular, in 1996, Seeger [48] proved that $T_{\Omega}$ is bounded from $L^{1}(\mathbb{R}^{n})$ to $L^{1,\infty}(\mathbb{R}^{n})$ under the sufficient condition $\Omega\in L\log L\left({\mathbb{S}^{n-1}}\right).$ For the weighted case, it was Duoandikoetxea and Rubio de Francia [19] who obtained the $L^{p}(\mathbb{R}^{n},w(x)dx)$-boundedness of $T_{\Omega}$ for $1<p<\infty$ provided that $\Omega\in L^{\infty}(\mathbb{S}^{n-1})$ and $w$ is a Muckenhoupt $A_{p}$ weight. This result was later improved in [20] and [51]. It is worth mentioning that, in 2017, Hytönen et al. [33] obtained the quantitative weighted boundedness of $T_{\Omega}.$ For other works related to $T_{\Omega}$, we refer the readers to [12, 47, 39, 25] and the references therein.

In general, there are two distinct approaches to the study of singular integral operators: one may regard such an operator either as a principal value operator of convolution type or as a Fourier multiplier operator defined by

$\widehat{T_{m}f}(\xi)=m(\xi)\hat{f}(\xi),$

where $m\in L^{\infty}\left(\mathbb{R}^{n}\right)$ and $\hat{f}$ denotes the Fourier transform of $f$. Let $\Omega\in L\log L(\mathbb{S}^{n-1})$ be homogeneous of degree zero and satisfy the vanishing condition (1.1). Then the following identity relates $m$ and $\Omega$:

$m(\xi)=\int_{\mathbb{S}^{n-1}}\Omega\left(y^{\prime}\right)\left(\log\frac{1}{\left|\xi\cdot y^{\prime}\right|}-\frac{i\pi}{2}\operatorname{sgn}\left(\xi\cdot y^{\prime}\right)\right)d\sigma(y^{\prime}).$

However, unfortunately, this identity does not provide an exact correspondence between the various auxiliary conditions assumed on $m$ and $\Omega$.

One of the fundamental questions in operator theory is what the composition of two operators is. This question is of great importance and has attracted a lot of attention. Indeed, it is known that the composition of singular integral operators arises typically in the study of the algebra of singular integrals (see [6, 9]) and of non-coercive boundary-value problems for elliptic equations (see [43, 45]). The answer to this question for $T_{\Omega}$ is easily obtained by using the Fourier multiplier representation: the composition of two singular integral operators is an operator of the same form, and the multiplier of the composition is the product of the two multipliers (see [49]). This answer is so elegant and useful that it actually forms the basis for the calculus of pseudo-differential operators.
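For instance (a standard illustration, independent of the results of this paper), the Riesz transforms $R_{j}$, $1\leq j\leq n$, are exactly the operators $T_{\Omega_{j}}$ with the odd kernels $\Omega_{j}(y^{\prime})=c_{n}y_{j}^{\prime}\in L^{\infty}(\mathbb{S}^{n-1})$ for a dimensional constant $c_{n}$, and their multipliers are $m_{j}(\xi)=-i\xi_{j}/|\xi|$. The multiplier of a composition is then simply the product of the multipliers, $\widehat{R_{j}R_{k}f}(\xi)=-\frac{\xi_{j}\xi_{k}}{|\xi|^{2}}\hat{f}(\xi),$ so that, in particular, $\sum_{j=1}^{n}R_{j}^{2}=-\mathrm{Id}.$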
What happens if the operators are given in the form of principal value integrals? Coifman and Meyer [14] considered the composition of classical Calderón-Zygmund operators and pointed out that if $T_{1},T_{2}$ are two Calderón-Zygmund operators, $T_{2}^{*}$ is the adjoint operator of $T_{2}$ and $T_{1}(1)=T_{2}^{*}(1)=0$, then the composite operator $T_{1}T_{2}$ is also a Calderón-Zygmund operator. It then follows that the composition of Calderón-Zygmund operators is still of strong $(p,p)$ type and weak $(1,1)$ type. In 2018, Benea and Bernicot [3] used the sparse domination method to relax the above conditions to $T_{1}(1)=0$ or $T^{*}_{2}(1)=0,$ and the weighted boundedness of the composite operator of Calderón-Zygmund operators can also be obtained. In addition, an earlier result for the Hardy-Littlewood maximal operator $M$ was obtained by Carozza and Passarelli di Napoli [10]. They showed that the following weak type endpoint estimate holds for the $k$-th iteration of $M$:

$\left|\left\\{x\in\mathbb{R}^{n}:M^{k}f(x)>\lambda\right\\}\right|\lesssim\int_{\mathbb{R}^{n}}\Psi_{k-1}\left(\frac{|f(x)|}{\lambda}\right)dx,\quad\lambda>0,$

where $M^{k}$ is the $k$-th iteration of $M$ and $\Psi_{\beta}(t)=t\log^{\beta}(\mathrm{e}+t)$ for $0\leq\beta<\infty.$ By sparse domination, the results in [3] imply that if $T_{1},T_{2}$ are two Calderón-Zygmund operators with $T_{1}(1)=0$, then for any $1<p<\infty,1<q<p$, and $w\in A_{p/q}\left(\mathbb{R}^{n}\right)$,

$\left\|T_{1}T_{2}f\right\|_{L^{p}\left(\mathbb{R}^{n},w\right)}\lesssim[w]_{A_{p/q}}^{\max\left\\{\frac{1}{p-q},1\right\\}}\|f\|_{L^{p}\left(\mathbb{R}^{n},w\right)},$

where the precise definitions of $A_{p}\left(\mathbb{R}^{n}\right)$ weights and $A_{p}$ constants are given in Section 2. It was Hu [27] who proved the weighted bound for the composite operator $T_{1}T_{2}$ without the assumption $T_{1}(1)=0.$ Recently, Hu [26] established the weighted weak type endpoint estimate for $T_{1}T_{2}.$ Still more recently, using the method of sparse domination, more precise weighted estimates for the composition of rough homogeneous singular integral operators were obtained by Hu, Lai and Xue [29].

###### Theorem A ([29]).

Let $\Omega_{1},\Omega_{2}$ be homogeneous of degree zero, have mean value zero and $\Omega_{1},\Omega_{2}\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$. Then for $p\in(1,\infty)$ and $w\in A_{p}\left(\mathbb{R}^{n}\right)$, $\displaystyle\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{L^{p}\left(\mathbb{R}^{n},w\right)}\lesssim$ $\displaystyle{[w]_{A_{p}}^{\frac{1}{p}}\left([w]_{A_{\infty}}^{\frac{1}{p^{\prime}}}+[\sigma]_{A_{\infty}}^{\frac{1}{p}}\right)\left([\sigma]_{A_{\infty}}+[w]_{A_{\infty}}\right)}$ $\displaystyle\times\min\left\\{[\sigma]_{A_{\infty}},[w]_{A_{\infty}}\right\\}\|f\|_{L^{p}\left(\mathbb{R}^{n},w\right)},$ where $p^{\prime}=p/(p-1),\sigma=w^{-1/(p-1)}.$

As we mentioned at the beginning of this paper, the main purpose of this paper is to obtain the boundedness of the composition of rough singular integral operators $T_{\Omega_{1}}T_{\Omega_{2}}$ on rearrangement invariant Banach spaces (RIBFS in the sequel) and quasi-Banach spaces (RIQBFS in the sequel) whenever both $\Omega_{1}$ and $\Omega_{2}$ belong to $L^{\infty}\left(\mathbb{S}^{n-1}\right)$. We summarize our first main result as follows.

###### Theorem 1.1.

Let $\Omega_{1},\Omega_{2}$ be homogeneous of degree zero, satisfy the vanishing condition (1.1) and $\Omega_{1},\Omega_{2}\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$.
Let $1<r<\infty$ and $\mathbb{X}$ be a RIBFS with $1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty$, then there exist $q,q_{0}>1$ such that $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\lesssim\left\\{\begin{array}[]{ll}[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}/r}}^{\frac{2}{rq}}\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }r<p_{\mathbb{X}}\leq q_{\mathbb{X}},w\in A_{\frac{p_{\mathbb{X}}}{r}};\\\ {[w]_{A_{\infty}}^{2}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\left([w]_{A_{\infty}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }1<p_{\mathbb{X}}<q_{0},w\in A_{p_{\mathbb{X}}}.\end{array}\right.$ ###### Remark 1.2. If we set $\mathbb{X}=L^{p}$ with $1<p<\infty$, then $p_{\mathbb{X}}=q_{\mathbb{X}}=p$ and the result in Theorem 1.1 covers the conclusion in Theorem A as a special case. Furthermore, to the best knowledge of the author, even the quantitative weighted estimates for a single rough singular integral operator $T_{\Omega}$ on $\mathbb{X}$ is new. As a consequence of Theorem 1.1, it follows that: ###### Corollary 1.3. Let $\Omega_{1},\Omega_{2}$ be homogeneous of degree zero, have mean value zero and $\Omega_{1},\Omega_{2}\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$. Let $1<r<\infty$ and $\mathbb{X}$ be a RIQBFS, which is $p$-convex for some $p>0$. If $p<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty$, then there exist $q,q_{0}>1$ such that $\left\||T_{\Omega_{1}}T_{\Omega_{2}}f|^{\frac{1}{p}}\right\|_{\mathbb{X}(w)}\lesssim\left\\{\begin{array}[]{ll}[w]_{A_{\infty}}^{\frac{2}{p}}[w]_{A_{\frac{p_{\mathbb{X}}}{pr}}}^{\frac{2}{prq}}\left\||f|^{\frac{1}{p}}\right\|_{\mathbb{X}(w)},&pr<p_{\mathbb{X}},w\in A_{\frac{p_{\mathbb{X}}}{pr}};\\\ {[w]_{A_{\infty}}^{\frac{2}{p}}}[w]_{A_{\frac{p_{\mathbb{X}}}{p}}}^{\frac{1}{p_{\mathbb{X}}}}\left([w]^{\frac{1}{p}}_{A_{\infty}}+[w]_{A_{\frac{p_{\mathbb{X}}}{p}}}^{\frac{1}{p_{\mathbb{X}}}}\right)\left\||f|^{\frac{1}{p}}\right\|_{\mathbb{X}(w)},&p_{\mathbb{X}}<pq_{0},w\in A_{\frac{p_{\mathbb{X}}}{p}}.\end{array}\right.$ It is well known that some important facts may be significantly different between Banach spaces and quasi-Banach spaces. For example, the direction of the Hölder’s inequality in $L^{p}(0<p<1)$ and $L^{q}(q\geq 1)$ is opposite. When $\mathbb{X}$ is a space of rearrangement invariant quasi-Banach type, we obtain the following quantitative weighted bounds for the composition of rough singular integral operators. ###### Theorem 1.4. Let $\Omega_{1},\Omega_{2}$ be homogeneous of degree zero, have mean value zero and $\Omega_{1},\Omega_{2}\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$. Let $\mathbb{X}$ be a RIQBFS, which is p-convex with $0<p\leq 1$. If $1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<2p-\frac{1}{1+p_{\mathbb{X}}}p$, then for every $w\in A_{{p_{\mathbb{X}}}}$, $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\lesssim\left([w]_{A_{\infty}}^{1+\frac{1}{p}}+[w]_{A_{\infty}}^{2+\frac{1}{p}}\right)\left([w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)}.$ This paper is organized as follows. In Section 2, we present some lemmas and related definitions, such as RIBFS and RIQBFS, dyadic cubes, some maximal operators and bi-sublinear sparse operators. The proofs of Theorem 1.1 and Corollary 1.3 will be given in Section 3. In Section 4, we demonstrate Theorem 1.4. An application of Theorem 1.1 will be given in Section 5. 
In what follows, $C$ always denotes a positive constant that is independent of the main parameters involved but whose value may differ from line to line. For any $a,b\in\mathbb{R},a\lesssim b$ $(a\gtrsim b,$ respectively) denotes that there exists a constant $C>0$ such that $a\leq Cb;$ and $a\simeq b$ denotes $a\lesssim b$ and $b\lesssim a.$ $p^{\prime}$ will always denote the conjugate of $p,$ namely, $1/p+1/p^{\prime}=1$. ## 2\. Preliminary First, we recall some basic properties for RIBFS, RIQBFS, sparse family and Orlicz maximal operators. ### 2.1. RIBFS and RIQBFS Let’s start with some simple definitions. $\circ$ Basic definitions of RIBFS. Let $\mathcal{M}$ be the set of measurable functions on $\left(\mathbb{R}^{n},dx\right)$ and $\mathcal{M}^{+}$ be the nonnegative ones. A rearrangement invariant Banach norm is a mapping $\rho:\mathcal{M}^{+}\mapsto[0,\infty]$ such that the following properties hold: 1. i). $\rho(f)=0\Leftrightarrow f=0,$ a.e.; $\rho(f+g)\leq\rho(f)+\rho(g);\rho(af)=a\rho(f)$ for $a\geq 0$; 2. ii). If $0\leq f\leq g,$ a.e., then $\rho(f)\leq\rho(g)$; 3. iii). If $f_{n}\uparrow f,$ a.e., then $\rho\left(f_{n}\right)\uparrow\rho(f)$; 4. iv). If $E$ is a measurable set such that $|E|<\infty,$ then $\rho\left(\chi_{E}\right)<\infty,$ and $\int_{E}fdx\leq$ $C_{E}\rho(f),$ for some constant $0<C_{E}<\infty,$ depending on $E$ and $\rho,$ but independent of $f$; 5. v). $\rho(f)=\rho(g)$ if $f$ and $g$ are equimeasurable, that is, $d_{f}(\lambda)=d_{g}(\lambda),\lambda\geq 0$ where $d_{f}\left(d_{g}\right.$ respectively) denotes the distribution function of $f$ ($g$ respectively). By means of $\rho,$ a rearrangement invariant Banach function space (RIBFS) $\mathbb{X}$ can be defined: $\mathbb{X}=\left\\{f\in\mathcal{M}:\|f\|_{\mathbb{X}}:=\rho(|f|)<\infty\right\\}.$ Let $\mathbb{X}^{\prime}$ be the associated space of $\mathbb{X}$, which is also a Banach function space given by $\mathbb{X}^{\prime}=\left\\{f\in\mathcal{M}:\|f\|_{\mathbb{X}^{\prime}}:=\sup\left\\{\int_{\mathbb{R}^{n}}fgdx:g\in\mathcal{M}^{+},\rho(g)\leq 1\right\\}<\infty\right\\}.$ Note that in the present setting, $\mathbb{X}$ is a RIBFS if and only if $\mathbb{X}^{\prime}$ is a $\mathrm{RIBFS}$ ([4, Chapter 2, Corollary 4.4]). By definition, the following generalized Hölder’s inequality holds for any $f\in\mathbb{X},g\in\mathbb{X}^{\prime}$ : $\int_{\mathbb{R}^{n}}|fg|dx\leq\|f\|_{\mathbb{X}}\|g\|_{\mathbb{X}^{\prime}}.$ A key fact in a RIBFS $\mathbb{X}$ is that the Lorentz-Luxemburg theorem holds: $\|f\|_{\mathbb{X}}=\sup\left\\{\left|\int_{\mathbb{R}^{n}}fgdx\right|:g\in\mathbb{X}^{\prime},\|g\|_{\mathbb{X}^{\prime}}\leq 1\right\\}.$ Recall that the decreasing rearrangement function $f^{*}$ is defined by $f^{*}(t)=\inf\left\\{\lambda\geq 0:d_{f}(\lambda)\leq t\right\\},t\geq 0.$ An important property of $f^{*}$ is that it is equimeasurable with $f$. This allows one to obtain a representation of $\mathbb{X}$, i.e., Luxemburg’s representation theorem ([4, Chapter 2, Theorem 4.10]), which asserts that there exists a RIBFS $\overline{\mathbb{X}}$ over $\left(\mathbb{R}^{+},dt\right)$ such that $f\in\mathbb{X}$ if and only if $f^{*}\in\overline{\mathbb{X}}$, and in this case $\|f\|_{\mathbb{X}}=\left\|f^{*}\right\|_{\overline{\mathbb{X}}}$. From this it can be seen that the mapping $f\mapsto f^{*}$ is an isometry. 
In addition, notice that $\overline{\mathbb{X}}^{\prime}=\overline{\mathbb{X}^{\prime}}$ and $\|f\|_{\mathbb{X}^{\prime}}=\left\|f^{*}\right\|_{\overline{\mathbb{X}}^{\prime}}$ hold for the associated space. $\circ$ Weighted versions of RIBFS $\mathbb{X}$. Before we consider weighted versions of RIBFS $\mathbb{X}$, we need to recall the definitions of Muckenhoupt weights. Let $w$ be a non-negative locally integrable function defined on $\mathbb{R}^{n}.$ We say that a weight $w$ belongs to the Muckenhoupt class $A_{p}$ with $1<p<\infty,$ if $[w]_{A_{p}}:=\sup_{Q}\left(\frac{1}{|Q|}\int_{Q}w(x)dx\right)\left(\frac{1}{|Q|}\int_{Q}w(x)^{1-p^{\prime}}dx\right)^{p-1}<\infty,$ and for $w\in A_{\infty},$ $[w]_{A_{\infty}}:=\sup_{Q}\frac{1}{w(Q)}\int_{Q}M\left(w\chi_{Q}\right)(x)dx,$ where the supremum is taken over all cubes $Q\subset\mathbb{R}^{n}.$ The $A_{1}$ constant is defined by $[w]_{A_{1}}:=\sup_{x\in\mathbb{R}^{n}}\frac{Mw(x)}{w(x)},$ where $M$ is the Hardy-Littlewood maximal operator. Some important properties of weights are listed in the following lemmas. ###### Lemma 2.1 ([44]). Let $1<p<\infty$ and let $w\in A_{p}.$ Then $w\in A_{p-\varepsilon}$ with $\varepsilon:=\varepsilon(p)=\frac{p-1}{1+2^{n+1}[\sigma]_{A_{\infty}}}$ where $\sigma=w^{1-p^{\prime}}$ is the dual weight. Furthermore $[w]_{A_{p-\varepsilon}}\leq 2^{p-1}[w]_{A_{p}}.$ ###### Lemma 2.2 ([32]). Let $w\in A_{\infty}\left(\mathbb{R}^{n}\right)$. Then for any cube $Q$ and $\delta\in\left(1,1+\frac{1}{2^{11+n}[w]_{A_{\infty}}}\right]$, $\left(\frac{1}{|Q|}\int_{Q}w^{\delta}(x)\mathrm{d}x\right)^{\frac{1}{\delta}}\leq\frac{2}{|Q|}\int_{Q}w(x)\mathrm{d}x.$ We now give the weighted version of the RIBFS $\mathbb{X}$. First, the distribution function and the decreasing rearrangement with respect to $w$ are defined by $w_{f}(\lambda)=w\left(\\{x\in\mathbb{R}^{n}:|f(x)|>\lambda\\}\right);\quad f_{w}^{*}(t)=\inf\left\\{\lambda\geq 0:w_{f}(\lambda)\leq t\right\\}.$ In this way, the weighted version of the space $\mathbb{X}$ is given by $\mathbb{X}(w)=\left\\{f\in\mathcal{M}:\|f\|_{\mathbb{X}(w)}:=\left\|f_{w}^{*}\right\|_{\overline{\mathbb{X}}}<\infty\right\\}.$ Then, the same procedure applying on the associate spaces yields $\mathbb{X}^{\prime}(w)=\mathbb{X}(w)^{\prime}$ (see [18, p. 168]). $\circ$ Boyd indices and $r$ exponent. Next, we recall the Boyd indices of a RIBFS, which are closely related to some interpolation properties, see [4, Chapter 3] for more details. We start with the dilation operator $D_{t}$ of $\mathbb{X}$, $D_{t}f(s)=f\left(\frac{s}{t}\right),0<t<\infty,f\in\overline{\mathbb{X}},$ and its norm $h_{\mathbb{X}}(t)=\left\|D_{t}\right\|_{\overline{\mathbb{X}}\mapsto\overline{\mathbb{X}}},0<t<\infty.$ The lower and upper Boyd indices are defined, respectively, by the following form: $p_{\mathbb{X}}=\lim_{t\rightarrow\infty}\frac{\log t}{\log h_{\mathbb{X}}(t)}=\sup_{1<t<\infty}\frac{\log t}{\log h_{\mathbb{X}}(t)},\quad q_{\mathbb{X}}=\lim_{t\rightarrow 0^{+}}\frac{\log t}{\log h_{\mathbb{X}}(t)}=\inf_{0<t<1}\frac{\log t}{\log h_{\mathbb{X}}(t)}.$ A simple calculation shows that $1\leq p_{\mathbb{X}}\leq q_{\mathbb{X}}\leq\infty,$ which follows from the fact that $h_{\mathbb{X}}(t)$ is submultiplicative. In order to give the above definition a general explanation, we consider a special case. 
If $\mathbb{X}=L^{p}$ with $1<p<\infty$, then $h_{\mathbb{X}}(t)=t^{\frac{1}{p}}$ and thus $p_{\mathbb{X}}=q_{\mathbb{X}}=p.$ The relationship between the Boyd indices of $\mathbb{X}$ and $\mathbb{X}^{\prime}$ is as follows: $p_{\mathbb{X}^{\prime}}=\left(q_{\mathbb{X}}\right)^{\prime}$ and $q_{\mathbb{X}^{\prime}}=\left(p_{\mathbb{X}}\right)^{\prime},$ where $p$ and $p^{\prime}$ are conjugate exponents (see [41, Chapter 11, Corollary 11.6]). Now, we consider the following $r$ exponent of a RIBFS $\mathbb{X}$ with $0<r<\infty$: $\mathbb{X}^{r}=\left\\{f\in\mathcal{M}:|f|^{r}\in\mathbb{X}\right\\},$ and the norm $\|f\|_{\mathbb{X}^{r}}=\left\||f|^{r}\right\|_{\mathbb{X}}^{\frac{1}{r}}$. By the definition of Boyd indices it is easy to verify that $p_{\mathbb{X}^{r}}=p_{\mathbb{X}}\cdot r$ and $q_{\mathbb{X}^{r}}=q_{\mathbb{X}}\cdot r$. $\circ$ The case of RIQBFS. In analogy with the $L^{p}$ spaces, for each $r\geq 1,$ $\mathbb{X}^{r}$ is still a RIBFS when $\mathbb{X}$ is a RIBFS. However, if $0<r<1,$ the space $\mathbb{X}^{r}$ is not necessarily a Banach space (see [18, p. 269]). Hence, it is natural to consider the quasi-Banach case. To this end, we first give the definition of the quasi-Banach function norm. We say a mapping $\rho^{\prime}:\mathcal{M}^{+}\mapsto[0,\infty)$ is a rearrangement invariant quasi-Banach function norm if $\rho^{\prime}$ satisfies the basic conditions i), ii), iii) and v) with the triangle inequality replaced by the quasi-triangle inequality: $\rho^{\prime}(f+g)\leq C\left(\rho^{\prime}(f)+\rho^{\prime}(g)\right),$ where $C$ is an absolute positive constant. Similarly, a rearrangement invariant quasi-Banach function space (RIQBFS) is the collection of all measurable functions $f$ satisfying $\rho^{\prime}(|f|)<\infty$. In order to pass between RIBFS and RIQBFS, we consider the following $p$-convexity property with $p>0$ of a RIQBFS $\mathbb{X}$ (see [24, p. 3]): $\left\|\left(\sum_{j=1}^{N}\left|f_{j}\right|^{p}\right)^{\frac{1}{p}}\right\|_{\mathbb{X}}\lesssim\left(\sum_{j=1}^{N}\left\|f_{j}\right\|_{\mathbb{X}}^{p}\right)^{\frac{1}{p}}.$ A very important fact is that the $p$-convex property is equivalent to $\mathbb{X}^{\frac{1}{p}}$ being a RIBFS (see [18, p. 269]). According to the above results and using Lorentz-Luxemburg’s theorem again, we have $\|f\|_{\mathbb{X}}\simeq\sup\left\\{\left(\int_{\mathbb{R}^{n}}|f(x)|^{p}g(x)dx\right)^{\frac{1}{p}}:g\in\mathcal{M}^{+},\|g\|_{\mathbb{Y}^{\prime}}\leq 1\right\\},$ where $\mathbb{Y}^{\prime}$ is the associated space of the RIBFS $\mathbb{Y}=\mathbb{X}^{\frac{1}{p}}$. Moreover, for each $w\in A_{\infty}$ and $0<r<\infty$, one may define $\mathbb{X}(w)$ for a RIQBFS $\mathbb{X}$, and it satisfies $\mathbb{X}(w)^{r}=\mathbb{X}^{r}(w)$ (see [18, p. 269]). ###### Remark 2.3. A remark on the $p$-convexity assumption is in order. As Grafakos and Kalton ([24]) put it, “all practical spaces are $p$-convex for some $p>0$”; indeed, only very few spaces fail to be $p$-convex for any $p>0$ (see [34]). ### 2.2. Young function and Orlicz maximal operators In this subsection, we present some fundamental facts about Young functions and Orlicz local averages which will play an important role in our analysis. We refer the readers to [46] for more information.
Firstly, let $\Phi$ be the set of functions $\phi:[0,\infty)\longrightarrow[0,\infty)$ which are non-negative, increasing and such that $\lim_{t\rightarrow\infty}\phi(t)=\infty$ and $\lim_{t\rightarrow 0}\phi(t)=0.$ If $\phi\in\Phi$ is convex we say that $\phi$ is a Young function. Next, we can define the average of the Luxemburg norm, namely $\phi$-norm, of $f$ over a cube $Q$ as $\|f\|_{\phi(\mu),Q}:=\inf\left\\{\lambda>0:\frac{1}{\mu(Q)}\int_{Q}\phi\left(\frac{|f(x)|}{\lambda}\right)d\mu\leq 1\right\\}.$ For the sake of notation, if $\mu$ is the Lebesgue measure, we write $\|f\|_{\phi,Q}$, and we denote $\|f\|_{\phi(w),Q},$ if $\mu=wdx$ is an absolutely continuous measure with respect to the Lebesgue measure. Each Young function $\phi$ enjoys the following generalized Hölder’s inequality: $\frac{1}{\mu(Q)}\int_{Q}|fg|d\mu\leq 2\|f\|_{\phi(\mu),Q}\|g\|_{\bar{\phi}(\mu),Q},$ where $\bar{\phi}(t)=\sup_{s>0}\\{st-\phi(s)\\}$ is the complementary function of $\phi$. Then we can naturally represent the Orlicz maximal operator $M_{\phi}f$ associated to the Young function $\phi$ with the following form: $M_{\phi}f(x):=\sup_{x\in Q}\|f\|_{\phi,Q}.$ Finally, we present some particular examples of maximal operators related to certain Young functions. * • If $\phi(t)=t^{r}$ with $r>1,$ then $M_{\phi}=M_{r}$. * • If we consider $\phi(t)=t\log^{\alpha}(e+t)$ with $\alpha>0,$ then $\bar{\phi}(t)\simeq e^{t^{1/\alpha}}-1$ and we denote $M_{\phi}=M_{L(\log L)^{\alpha}}$. We have that $M\leq M_{\phi}\lesssim M_{r}$ for all $1<r<\infty,$ moreover, it can be proved that $M_{\phi}\simeq M^{l+1},$ where $\alpha=l\in\mathbb{N}$ and $M^{l+1}$ is $M$ iterated $l+1$ times. * • $M_{\phi}=M_{L(\log L)^{\alpha}(\log\log L)^{\beta}}$ given by the function $\phi(t)=t\log^{\alpha}(e+t)\log^{\beta}(e+\log(e+t))$ with $\alpha,\beta>0.$ ### 2.3. Sparse family We also need a system of dyadic calculus from [37, 38], so in this subsection, we introduce a part of it. ###### Definition 2.4. By a dyadic lattice $\mathcal{D},$ we mean a collection of cubes which satisfies the following properties: 1. (i). For any $Q\in\mathcal{D}$ its sidelength $\ell_{Q}$ is of the form $2^{k},k\in\mathbb{Z}$; 2. (ii). $Q\cap R\in\\{Q,R,\emptyset\\}$ for any $Q,R\in\mathcal{D}$; 3. (iii). The cubes of a fixed sidelength $2^{k}$ form a partition of $\mathbb{R}^{n}$. An interesting and crucial theorem in dyadic calculus is Three Lattice Theorem, which asserts that given a dyadic lattice $\mathcal{D},$ there exist $3^{n}$ dyadic lattices $\mathcal{D}_{1},\ldots,\mathcal{D}_{3^{n}}$ such that for each cube $Q\in\mathcal{D}$, we can find a cube $R_{Q}$ in some $\mathcal{D}_{j}$ such that $Q\subseteq R_{Q}$ and $3l_{Q}=l_{R_{Q}}$. According to the method of taking dyadic lattice $\mathcal{D}$ in Definition 2.4, we can give the definition of sparse family $\mathcal{S}$ as follows. ###### Definition 2.5. Let $\mathcal{D}$ be a dyadic lattice. $\mathcal{S}\subset\mathcal{D}$ is called a $\eta$-sparse family with $\eta\in(0,1)$ if for every cube $Q\in\mathcal{S},$ $\left|\bigcup_{P\in\mathcal{S},P\subsetneq Q}P\right|\leq(1-\eta)\left|Q\right|.$ There is another equivalent definitions of sparsity for a collection of sets. If we define $E(Q)=Q\backslash\bigcup_{P\in\mathcal{S},P\subsetneq Q}P,$ then it is easy to deduce that the sets $E(Q)$ are pairwise disjoint and $|E(Q)|\geq\eta|Q|$. 
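To illustrate Definition 2.5, consider the following standard example. In $\mathbb{R}^{n}$, take the decreasing chain of cubes $Q_{k}=[0,2^{-k})^{n}$, $k\geq 0$, and let $\mathcal{S}=\left\\{Q_{k}\right\\}_{k\geq 0}$. For each $Q_{k}$ we have $\bigcup_{P\in\mathcal{S},P\subsetneq Q_{k}}P=Q_{k+1}$ and $\left|Q_{k+1}\right|=2^{-n}\left|Q_{k}\right|$, so $\mathcal{S}$ is an $\eta$-sparse family with $\eta=1-2^{-n}$; equivalently, the sets $E(Q_{k})=Q_{k}\backslash Q_{k+1}$ are pairwise disjoint and satisfy $|E(Q_{k})|=(1-2^{-n})|Q_{k}|$. By contrast, the full collection $\mathcal{D}(Q_{0})$ of all dyadic subcubes of $Q_{0}$ is not $\eta$-sparse for any $\eta\in(0,1)$, since for every $Q\in\mathcal{D}(Q_{0})$ the cubes of $\mathcal{D}(Q_{0})$ strictly contained in $Q$ already cover $Q$.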
Let $\mathcal{D}$ be a dyadic lattice and let $\mathcal{S}\subseteq\mathcal{D}$ be an $\eta$-sparse family. The sparse operator is defined by $\mathcal{A}_{r,\mathcal{S}}f(x)=\sum_{Q\in\mathcal{S}}\langle|f|\rangle_{r,Q}\chi_{Q}(x)=\sum_{Q\in\mathcal{S}}\left(\frac{1}{|Q|}\int_{Q}|f(y)|^{r}dy\right)^{\frac{1}{r}}\chi_{Q}(x),$ where $r>0$ and $\langle|f|\rangle_{r,Q}^{r}=\frac{1}{|Q|}\int_{Q}|f(y)|^{r}dy$. Furthermore, associated with the constants $\beta\in[0,\infty)$ and $r_{1},r_{2},r\in[1,\infty)$, we can define the bilinear sparse operators $\mathcal{A}_{\mathcal{S};L(\log L)^{\beta},L^{r}}$ and $\mathcal{A}_{\mathcal{S};L^{r_{1}},L^{r_{2}}}$ by $\mathcal{A}_{\mathcal{S};L(\log L)^{\beta},L^{r}}(f,g)=\sum_{Q\in\mathcal{S}}|Q|\|f\|_{L(\log L)^{\beta},Q}\langle|g|\rangle_{r,Q},$ $\mathcal{A}_{\mathcal{S};L^{r_{1}},L^{r_{2}}}(f,g)=\sum_{Q\in\mathcal{S}}|Q|\langle|f|\rangle_{r_{1},Q}\langle|g|\rangle_{r_{2},Q}.$ With these definitions at hand, we can now make precise what it means for an operator $T$ to satisfy a bilinear sparse domination. More precisely, for $\beta,q\in(0,\infty)$, we say that a sublinear operator $T$ acting on $\cup_{p\geq 1}L^{p}\left(\mathbb{R}^{n}\right)$ enjoys a $\left(L(\log L)^{\beta},L^{q}\right)$-bilinear sparse domination with bound $A$, if for each bounded function $f$ with compact support, there exists a sparse family $\mathcal{S}$ of cubes, such that $\left|\int_{\mathbb{R}^{n}}g(x)Tf(x)dx\right|\leq A\mathcal{A}_{\mathcal{S},L(\log L)^{\beta},L^{q}}(f,g),$ holds for all bounded functions $g$. The following bilinear sparse domination results are crucial in our analysis ([29, Corollary 5.1]). ###### Lemma 2.6 ([29]). Let $\Omega_{1},\Omega_{2}$ be homogeneous of degree zero, have mean value zero and $\Omega_{1},\Omega_{2}\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$. Let $r\in(1,3/2]$. Then for each bounded function $f$ with compact support, there exists a $\frac{1}{2\cdot 9^{n}}$-sparse family of cubes $\mathcal{S}=\\{Q\\}$, and functions $J_{1}$ and $J_{2}$, such that for each function $g$, (2.1) $\displaystyle\left|\int_{\mathbb{R}^{n}}J_{1}(x)g(x)\mathrm{d}x\right|\lesssim r^{\prime}\mathcal{A}_{\mathcal{S};L(\log L),L^{r}}(f,g),$ $\displaystyle\left|\int_{\mathbb{R}^{n}}J_{2}(x)g(x)\mathrm{d}x\right|\lesssim r^{\prime 2}\mathcal{A}_{\mathcal{S};L^{1},L^{r}}(f,g),$ and for a.e. $x\in\mathbb{R}^{n}$, (2.2) $T_{\Omega_{1}}T_{\Omega_{2}}f(x)=J_{1}(x)+J_{2}(x).$ Here we give a brief introduction to sparse domination, which has been a very active research area in harmonic analysis in recent years. It helps to simplify the proofs of some well-known results and can even be used to track the dependence of the operator norm on the weight. For example, Lerner [36] introduced a class of sparse operators and gave an alternative and simple proof of the $A_{2}$ conjecture. Later on, Lerner [37] obtained quantitative weighted bounds for Calderón-Zygmund operators satisfying a Hölder-Lipschitz condition. Since then, great attention has been paid to the study of sparse bounds. We refer the readers to [11, 15, 17] and the references therein for more information. ## 3\. Proofs of Theorem 1.1 and Corollary 1.3 This section is devoted to the proofs of Theorem 1.1 and Corollary 1.3. To prove Theorem 1.1, we need the following lemma from [2, Lemma 3.3]. ###### Lemma 3.1 ([2]). Let $\mathbb{X}$ be a RIQBFS which is $p$-convex for some $0<p\leq 1$.
If $1<p_{\mathbb{X}}<\infty$, then for all $w\in A_{p_{\mathbb{X}}},$ we have $\|M\|_{\mathbb{X}(w)\mapsto\mathbb{X}(w)}\leq C[w]_{A_{p_{\mathbb{X}}}}^{1/p_{\mathbb{X}}},$ where $C$ is an absolute constant only depending on $p_{\mathbb{X}}$ and $n$. ###### Proof of Theorem 1.1. We define the weighted dyadic Hardy-Littlewood maximal operators $M_{w}^{\mathcal{D}}$ and $M_{w,r}^{\mathcal{D}}$ by $M_{w}^{\mathcal{D}}f(x):=\sup_{x\in R,R\in\mathcal{D}}\frac{1}{w(R)}\int_{R}|f(y)|w(y)dy,\quad f\in L_{\operatorname{loc}}^{1}\left(\mathbb{R}^{n}\right),$ $M_{w,r}^{\mathcal{D}}f(x):=\sup_{x\in R,R\in\mathcal{D}}\left(\frac{1}{w(R)}\int_{R}|f(y)|^{r}w(y)dy\right)^{\frac{1}{r}},\quad f\in L_{\operatorname{loc}}^{r}\left(\mathbb{R}^{n}\right),$ where $w\in A_{\infty}$ and $\mathcal{D}$ is the given dyadic grid. First, we consider the case of $1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<2-\frac{1}{1+p_{\mathbb{X}}}=:q_{0}.$ For a fixed $w\in A_{p_{\mathbb{X}}}$, by (2.2), we obtain $\displaystyle\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}$ $\displaystyle=\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left|\int_{\mathbb{R}^{n}}T_{\Omega_{1}}T_{\Omega_{2}}f(x)g(x)w(x)dx\right|$ $\displaystyle\leq\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left|\int_{\mathbb{R}^{n}}J_{1}(x)g(x)w(x)dx\right|+\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left|\int_{\mathbb{R}^{n}}J_{2}(x)g(x)w(x)dx\right|$ $\displaystyle=:I_{1}+I_{2},$ where $I_{i}=\sup\limits_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left|\int_{\mathbb{R}^{n}}J_{i}(x)g(x)w(x)dx\right|,i=1,2.$ Consider the contribuion of $I_{1}$. Take and fix any $g\in\mathbb{X}^{\prime}(w)$ with $\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1$, by (2.1), $\left|\int_{\mathbb{R}^{n}}J_{1}(x)g(x)w(x)dx\right|\lesssim r^{\prime}\sum_{Q\in\mathcal{S}}\left\|f\right\|_{L(\log{L}),Q}{\langle|gw|\rangle}_{r,Q}|Q|.$ A direct computation gives that $\displaystyle{\langle|gw|\rangle}_{r,Q}$ $\displaystyle=\left(\frac{1}{|Q|}\int_{Q}|g(x)w(x)|^{r}dx\right)^{\frac{1}{r}}=\left(\frac{1}{|Q|}\int_{Q}|g(x)|^{r}|w(x)|^{\frac{1}{s}}|w(x)|^{r-\frac{1}{s}}dx\right)^{\frac{1}{r}}$ $\displaystyle\leq\left(\frac{1}{|Q|}\int_{Q}|g(x)|^{rs}w(x)dx\right)^{\frac{1}{rs}}\left(\frac{1}{|Q|}\int_{Q}|w(x)|^{(r-\frac{1}{s})s^{\prime}}dx\right)^{\frac{1}{rs^{\prime}}}.$ Take appropriate $r$ and $s$ such that $(r-\frac{1}{s})s^{\prime}<1+\frac{1}{2^{11+n}[w]_{A_{\infty}}}.$ For example, let us choose $r=1+\frac{1}{2^{13+n}p_{\mathbb{X}}[w]_{A_{\infty}}},s=1+\frac{1}{2p_{\mathbb{X}}},$ then $(r-\frac{1}{s})s^{\prime}=1+\frac{1}{2^{13+n}p_{\mathbb{X}}[w]_{A_{\infty}}}+\frac{1}{2^{12+n}[w]_{A_{\infty}}}<1+\frac{1}{2^{11+n}[w]_{A_{\infty}}}.$ Thus, combining Lemma 2.2, we deduce that $\displaystyle\frac{1}{|Q|}\int_{Q}(w(x))^{(r-\frac{1}{s})s^{\prime}}dx$ $\displaystyle\leq\left(\frac{2}{|Q|}\int_{Q}w(x)dx\right)^{1-\frac{1}{rs}}.$ Now one can get $\displaystyle{\langle|gw|\rangle}_{r,Q}\cdot|Q|$ $\displaystyle\leq\left(\frac{1}{|w(Q)|}\int_{Q}|g(x)|^{rs}w(x)dx\right)^{\frac{1}{rs}}w(Q)=:g_{Q,w}^{rs}\cdot w(Q).$ Taking into account the generalized Hölder’s inequality and recalling that $M^{2}$ is $M$ iterated 2 times, we have $\displaystyle\sum_{Q\in\mathcal{S}}\left\|f\right\|_{L(\log L),Q}{\langle|gw|\rangle}_{r,Q}|Q|$ $\displaystyle\leq\sum_{Q\in\mathcal{S}}\left\|f\right\|_{L(\log L),Q}w(Q)g_{Q,w}^{rs}$ $\displaystyle\leq\sum_{B\in\mathcal{B}}\left\|f\right\|_{L(\log L),B}g_{B,w}^{2s}\sum_{\begin{subarray}{c}R\in\mathcal{S}\\\ \pi(R)=B\end{subarray}}w(R)$ $\displaystyle\leq 
C(n)[w]_{A_{\infty}}\sum_{B\in\mathcal{B}}\left\|f\right\|_{L(\log L),B}w(B)g_{B,w}^{2s}$ $\displaystyle=C(n)[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}\sum_{B\in\mathcal{B}}\left\|f\right\|_{L(\log L),B}g_{B,w}^{2s}\chi_{B}(x)w(x)dx$ $\displaystyle\leq C(n)[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}M_{L(\log L)}f(x)M_{w,2s}^{\mathcal{D}}g(x)w(x)dx$ $\displaystyle\leq C(n)[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}M^{2}f(x)M_{w,2s}^{\mathcal{D}}g(x)w(x)dx$ $\displaystyle\leq C(n)[w]_{A_{\infty}}\left\|M^{2}f\right\|_{\mathbb{X}(w)}\left\|M_{w,2s}^{\mathcal{D}}g\right\|_{\mathbb{X}^{\prime}(w)},$ where $\mathcal{B}$ is the family of the principal cubes in the usual sense and $\pi(R)$ is the minimal principal cube which contains $R$. That is, $\mathcal{B}=\cup_{k=0}^{\infty}\mathcal{B}_{k}$ with $\mathcal{B}_{0}:=\\{$ maximal cubes in $\mathcal{S}\\}$ and $\mathcal{B}_{k+1}:=\underset{B\in\mathcal{B}_{k}}{\cup}\operatorname{ch}_{\mathcal{B}}(B),\quad\operatorname{ch}_{\mathcal{B}}(B)=\\{R\subsetneq B\text{ maximal s.t. }\tau(R)>2\tau(B)\\},$ where $\tau(R)=\|f\|_{{L(\log L)},R}g_{R,w}^{2s}$. Now we observe that, using Lemma 3.1, $\displaystyle I_{1}$ $\displaystyle\leq C(n)[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}\left\|\left(M_{w}^{\mathcal{D}}(|g|^{2s})\right)^{\frac{1}{2s}}\right\|_{\mathbb{X}^{\prime}(w)}$ $\displaystyle=C(n)[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}\left\|\left(M_{w}^{\mathcal{D}}(|g|^{2s})\right)\right\|_{\mathbb{X}^{\prime\frac{1}{2s}}(w)}^{\frac{1}{2s}}$ $\displaystyle\lesssim[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}\left\||g|^{2s}\right\|_{\mathbb{X}^{\prime\frac{1}{2s}}(w)}^{\frac{1}{2s}}$ $\displaystyle\leq[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)},$ where in the second-to-last inequality we have used the boundedness of $M_{w}^{\mathcal{D}}$ (see [18, Theorem 3.2]). Indeed, using the condition $q_{\mathbb{X}}<\frac{2s}{2s-1}$, it is easy to deduce that $p_{\mathbb{X}^{\prime\frac{1}{2s}}}=\frac{p_{{\mathbb{X}}^{\prime}}}{2s}=\frac{{(q_{\mathbb{X}})}^{{}^{\prime}}}{2s}>1.$ Now we turn our attention to the estimate of $I_{2}.$ For this case, arguing as in the first case, we obtain $\displaystyle\sum_{Q\in\mathcal{S}}\langle|f|\rangle_{Q}\langle|gw|\rangle_{r,Q}|Q|$ $\displaystyle\lesssim\sum_{Q\in\mathcal{S}}\langle|f|\rangle_{Q}g_{Q,w}^{rs}w(Q)$ $\displaystyle\leq\sum_{Q\in\mathcal{S}}\frac{1}{w(Q)}\left(\int_{Q}(Mf(x))^{\frac{1}{2}}(M_{w,2s}^{\mathcal{D}}g(x))^{\frac{1}{2}}w(x)dx\right)^{2}.$ This fact, together with Lemma 2.6, yields (3.1) $\left|\int_{\mathbb{R}^{n}}J_{2}(x)g(x)w(x)dx\right|\lesssim r^{\prime 2}\sum_{Q\in\mathcal{S}}\frac{1}{w(Q)}\left(\int_{Q}(Mf(x))^{\frac{1}{2}}(M_{w,2s}^{\mathcal{D}}g(x))^{\frac{1}{2}}w(x)dx\right)^{2}.$ Now we note that $\mathcal{S}$ is a $\frac{1}{2\cdot 9^{n}}$-sparse family (i.e., for any $Q\in\mathcal{S},$ there exists $E(Q)$ such that $|E(Q)|\geq\frac{1}{2\cdot 9^{n}}|Q|$).
Hence, for each dyadic cube $Q\in\mathcal{S}$, we have (3.2) $\displaystyle\sum_{R\subseteq Q}w(R)=\sum_{R\subseteq Q}\frac{w(R)}{|R|}|R|$ $\displaystyle\leq 2\cdot 9^{n}\sum_{R\subseteq Q}\frac{w(R)}{|R|}\cdot|E(R)|$ $\displaystyle\leq 2\cdot 9^{n}\sum_{R\subseteq Q}\int_{E(R)}M(w\chi_{R})(x)dx$ $\displaystyle\leq 2\cdot 9^{n}\int_{Q}M(w\chi_{Q})(x)dx$ $\displaystyle\leq 2\cdot 9^{n}[w]_{A_{\infty}}w(Q).$ Combining (3.2) with (3.1) and the Carleson embedding theorem, together with the generalized Hölder’s inequality, one may obtain $\displaystyle I_{2}$ $\displaystyle\lesssim[w]_{A_{\infty}}^{2}\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\sum_{Q\in\mathcal{S}}\frac{1}{w(Q)}\left(\int_{Q}(Mf(x))^{\frac{1}{2}}(M_{w,2s}^{\mathcal{D}}g(x))^{\frac{1}{2}}w(x)dx\right)^{2}$ $\displaystyle\lesssim[w]_{A_{\infty}}^{3}\left\|Mf\right\|_{\mathbb{X}(w)}\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left\|M_{w,2s}^{\mathcal{D}}g\right\|_{\mathbb{X}^{\prime}(w)}$ $\displaystyle\leq[w]_{A_{\infty}}^{3}[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left\||g|^{2s}\right\|_{\mathbb{X}^{\prime\frac{1}{2s}}(w)}^{\frac{1}{2s}}$ $\displaystyle\leq[w]_{A_{\infty}}^{3}[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}.$ This inequality, combined with the estimate of $I_{1}$ yields $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\lesssim[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\left([w]_{A_{\infty}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)}.$ To end the proof we consider the case of $r<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty,$ where $1<r<\infty$ is any fixed constant. If $w\in A_{p_{\mathbb{X}}/r},$ $\displaystyle\left\|T_{\Omega_{1}}f\right\|_{\mathbb{X}(w)}$ $\displaystyle=\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\left|\int_{\mathbb{R}^{n}}T_{\Omega_{1}}f(x)g(x)w(x)dx\right|$ $\displaystyle\lesssim\sup_{\|g\|_{\mathbb{X}^{\prime}(w)}\leq 1}\sum_{Q\in{\mathcal{S}}}\langle|f|\rangle_{r,Q}\langle|gw|\rangle_{1,Q}|Q|,$ where the last step follows from [15, Theorem A] or [35, Corollary 3.4]. By a direct computation and the generalized Hölder’s inequality, it follows that $\displaystyle\sum_{Q\in{\mathcal{S}}}\langle|f|\rangle_{r,Q}\langle|gw|\rangle_{1,Q}|Q|$ $\displaystyle=\sum_{Q\in{\mathcal{S}}}\langle|f|\rangle_{r,Q}\frac{1}{w(Q)}\int_{Q}|g(x)|w(x)dxw(Q)$ $\displaystyle\lesssim[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}M_{r}f(x)M_{w}^{\mathcal{D}}g(x)w(x)dx$ $\displaystyle\lesssim[w]_{A_{\infty}}\left\|M_{r}f\right\|_{\mathbb{X}(w)}\left\|M_{w}^{\mathcal{D}}g\right\|_{\mathbb{X}^{\prime}(w)}$ $\displaystyle\lesssim[w]_{A_{\infty}}\left\|M_{r}f\right\|_{\mathbb{X}(w)}\left\|g\right\|_{\mathbb{X}^{\prime}(w)},$ where in the first inequality we apply the Carleson embedding theorem. 
Hence, by the fact that $\left\|M_{r}f\right\|_{\mathbb{X}(w)}\lesssim[w]_{A_{p_{\mathbb{X}}/r}}^{\frac{1}{rq}}\left\|f\right\|_{\mathbb{X}(w)}$ (see [50, Lemma 3.1]), where $q=p_{\mathbb{X}}/{r}-\varepsilon(p_{\mathbb{X}}/{r})$ and $\varepsilon(p)$ is defined in Lemma 2.1, we obtain $\left\|T_{\Omega_{1}}f\right\|_{\mathbb{X}(w)}\leq C[w]_{A_{\infty}}[w]_{A_{p_{\mathbb{X}}/{r}}}^{\frac{1}{rq}}\left\|f\right\|_{\mathbb{X}(w)}.$ This inequality, combined with the definition of $T_{\Omega_{1}}T_{\Omega_{2}}$, gives $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\leq C[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}/{r}}}^{\frac{2}{rq}}\left\|f\right\|_{\mathbb{X}(w)}.$ This finishes the proof of the second case. Therefore, we obtain $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\lesssim\left\\{\begin{array}[]{ll}[w]_{A_{\infty}}^{2}[w]_{A_{p_{\mathbb{X}}/{r}}}^{\frac{2}{rq}}\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }r<p_{\mathbb{X}}\leq q_{\mathbb{X}},w\in A_{\frac{p_{\mathbb{X}}}{r}};\\\ {[w]_{A_{\infty}}^{2}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\left([w]_{A_{\infty}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }1<p_{\mathbb{X}}<q_{0},w\in A_{p_{\mathbb{X}}},\end{array}\right.$ where $q_{0}=2-\frac{1}{1+p_{\mathbb{X}}}$. ∎ Using Theorem 1.1, we can easily deduce Corollary 1.3. ###### Proof of Corollary 1.3. Recall that if $\mathbb{X}$ is a $p$-convex RIQBFS, then $\mathbb{X}^{\frac{1}{p}}$ is a RIBFS. This, together with Theorem 1.1, leads to the desired result. In fact, it suffices to prove that for any $r>0$, $\left\||f|^{r}\right\|^{1/r}_{\mathbb{X}(w)}=\left\|f\right\|_{\mathbb{X}^{r}(w)}$ and $p_{\mathbb{X}^{r}}=r\cdot p_{\mathbb{X}},q_{\mathbb{X}^{r}}=r\cdot q_{\mathbb{X}}$ hold for a RIQBFS $\mathbb{X}$. To see this, using the fact that $\mathbb{X}^{1/p}$ is a RIBFS, we have $p_{\mathbb{X}^{r}}=p_{\mathbb{X}^{\frac{1}{p}pr}}=pr\cdot p_{\mathbb{X}^{\frac{1}{p}}}.$ On the other hand, $p_{\mathbb{X}}=p_{\mathbb{X}^{\frac{1}{p}p}}=p\cdot p_{\mathbb{X}^{\frac{1}{p}}}.$ Therefore $p_{\mathbb{X}^{r}}=r\cdot p_{\mathbb{X}}$, and the same argument works for $q_{\mathbb{X}}$. From the definition of the norm of $\mathbb{X}^{r},$ it is easy to check that $\left\|f\right\|_{\mathbb{X}^{r}(w)}=\left\||f|^{pr}\right\|^{\frac{1}{pr}}_{\mathbb{X}^{\frac{1}{p}}(w)}$ and $\left\||f|^{r}\right\|_{\mathbb{X}(w)}=\left\||f|^{pr}\right\|^{\frac{1}{p}}_{\mathbb{X}^{\frac{1}{p}}(w)}.$ Then we have $\left\||f|^{r}\right\|^{1/r}_{\mathbb{X}(w)}=\left\|f\right\|_{\mathbb{X}^{r}(w)}.$ ∎ ## 4\. Proof of Theorem 1.4 This section is devoted to the proof of Theorem 1.4. Using the sparse domination method, we will show that a bilinear sparse domination holds for the composition of rough singular integral operators, which implies Theorem 1.4. We start with some definitions. For a linear operator $T$ and $1\leq r<\infty$, we define the corresponding grand maximal truncated operator $\mathscr{M}_{T,r}$ by $\mathscr{M}_{T,r}f(x):=\sup_{Q\ni x}|Q|^{-\frac{1}{r}}\left\|T\left(f\chi_{\mathbb{R}^{n}\backslash 3Q}\right)\chi_{Q}\right\|_{L^{r}\left(\mathbb{R}^{n}\right)},$ where the supremum is taken over all cubes $Q\subset\mathbb{R}^{n}$ containing $x$. The operator $\mathscr{M}_{T,r}$ was introduced by Lerner [35] and plays a key role in establishing bilinear sparse dominations of rough operators. Let $T_{1},T_{2}$ be two linear operators.
We define the grand maximal operator $\mathscr{M}_{T_{1}T_{2},r}^{*}$ by $\mathscr{M}_{T_{1}T_{2},r}^{*}f(x):=\sup_{Q\ni x}\left(\frac{1}{|Q|}\int_{Q}\left|T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3Q}T_{2}\left(f\chi_{\mathbb{R}^{n}\backslash 9Q}\right)\right)(\xi)\right|^{r}\mathrm{~{}d}\xi\right)^{\frac{1}{r}}.$ Now, we can state bilinear sparse domination of the composition operator associated with RIQBFS $\mathbb{X}.$ ###### Lemma 4.1. Let $1<r<\frac{3}{2}$, $0\leq\beta_{1},\beta_{2}<\infty.$ Let $T_{1},T_{2}$ be two linear operators satisfying additionally the following conditions 1. (i). the operator $T_{1}$ is bounded on $L^{r^{\prime}}\left(\mathbb{R}^{n}\right)$ with bound $A$; 2. (ii). there exists $A_{0}>0$ such that for each $\lambda>0$, $\left|\left\\{x\in\mathbb{R}^{n}:\left|T_{1}T_{2}f(x)\right|>\lambda\right\\}\right|\lesssim\int_{\mathbb{R}^{n}}\frac{A_{0}|f(x)|}{\lambda}\log^{\beta_{1}}\left(\mathrm{e}+\frac{A_{0}|f(x)|}{\lambda}\right)\mathrm{d}x;$ 3. (iii). there exists $A_{1},A_{2}>0$ such that for each $\lambda>0$, $\left|\left\\{x\in\mathbb{R}^{n}:\mathscr{M}_{T_{1},r^{\prime}}T_{2}f(x)>\lambda\right\\}\right|\lesssim\int_{\mathbb{R}^{n}}\frac{A_{1}|f(x)|}{\lambda}\log^{\beta_{1}}\left(\mathrm{e}+\frac{A_{1}|f(x)|}{\lambda}\right)\mathrm{d}x$ and $\left|\left\\{x\in\mathbb{R}^{n}:\mathscr{M}_{T_{2},r^{\prime}}f(x)>\lambda\right\\}\right|\lesssim\int_{\mathbb{R}^{n}}\frac{A_{2}|f(x)|}{\lambda}\log^{\beta_{2}}\left(\mathrm{e}+\frac{A_{2}|f(x)|}{\lambda}\right)\mathrm{d}x.$ Then for each $0<p\leq 1$ and a bounded function $f$ with compact support, there exists a $\frac{1}{2\cdot 9^{n}}$-sparse family of cubes $\mathcal{S}=\\{Q\\}$, and functions $H_{1}$ and $H_{2}$, such that for each function $g$, $\int_{\mathbb{R}^{n}}|H_{1}(x)|^{p}|g(x)|dx\lesssim\left(A_{0}^{p}+A_{1}^{p}\right)\mathcal{A}^{(p,1)}_{\mathcal{S};L(\log L)^{\beta_{1}},L^{(r^{{}^{\prime}}/p)^{{}^{\prime}}}}(f,g),$ $\int_{\mathbb{R}^{n}}|H_{2}(x)|^{p}|g(x)|dx\lesssim A^{p}A_{2}^{p}\mathcal{A}^{(p,1)}_{\mathcal{S};L(\log L)^{\beta_{2}},L^{(r^{{}^{\prime}}/p)^{{}^{\prime}}}}(f,g),$ and for a.e. $x\in\mathbb{R}^{n}$, $T_{1}T_{2}f(x)=H_{1}(x)+H_{2}(x),$ where $\mathcal{A}^{(a,b)}_{\mathcal{S};L(\log L)^{\beta},L^{t}}(f,g):=\sum_{Q\in\mathcal{S}}|Q|\|f\|^{a}_{L(\log L)^{\beta},Q}\langle|g|\rangle^{b}_{t,Q}$ with $0\leq a,b,\beta<\infty,1\leq t<\infty.$ ###### Proof of Lemma 4.1. For any $0<p\leq 1$, in order to establish bilinear sparse domination over operator $(T_{1}T_{2}f)^{p}$, we are going to follow the scheme of the proof of [35, Theorem 3.1], together with some ideas in [26]. For a fixed cube $Q_{0}$, we should also consider a local version of operators $\mathscr{M}_{T_{2},r^{\prime}}$ and $\mathscr{M}_{T_{1}T_{2},r^{\prime}}^{*}$ by $\mathscr{M}_{T_{2};r^{\prime};Q_{0}}f(x)=\sup_{Q\ni x,Q\subset Q_{0}}|Q|^{-\frac{1}{r^{\prime}}}\left\|\chi_{Q}T_{2}\left(f\chi_{3Q_{0}\backslash 3Q}\right)\right\|_{L^{r^{\prime}}\left(\mathbb{R}^{n}\right)}$ and $\mathscr{M}_{T_{1}T_{2},r^{\prime};Q_{0}}^{*}f(x)=\sup_{Q\ni x,Q\subset Q_{0}}\left(\frac{1}{|Q|}\int_{Q}\left|T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3Q}T_{2}\left(f\chi_{9Q_{0}\backslash 9Q}\right)\right)(\xi)\right|^{r^{\prime}}\mathrm{d}\xi\right)^{\frac{1}{r^{\prime}}},$ respectively. 
Then we define three sets $E_{i}$ with $i=1,2,3$ by $E_{1}=\left\\{x\in Q_{0}:\left|T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\right|>DA_{0}\|f\|_{L(\log L)^{\beta_{1}},9Q_{0}}\right\\};$ $E_{2}=\left\\{x\in Q_{0}:\mathscr{M}_{T_{2},r^{\prime};Q_{0}}f(x)>DA_{2}\|f\|_{L(\log L)^{\beta_{2},},9Q_{0}}\right\\};$ $E_{3}=\left\\{x\in Q_{0}:\mathscr{M}_{T_{1}T_{2},r^{\prime};Q_{0}}^{*}f(x)>DA_{1}\|f\|_{L(\log L)^{\beta_{1}},9Q_{0}}\right\\},$ with $D$ a positive constant. Let $E=\cup_{i=1}^{3}E_{i}.$ Hence, by our hypothesis $(ii),$ $(iii)$ and [29, Lemma 4.3], taking $D$ large enough, we deduce that $|E|\leq\frac{1}{2^{n+2}}\left|Q_{0}\right|.$ Now, by using the Calderón-Zygmund decomposition to the function $\chi_{E}$ on $Q_{0}$ at height $\lambda=$ $\frac{1}{2^{n+1}}$, we obtain pairwise disjoint cubes $\left\\{P_{l}\right\\}_{l}\subset\mathcal{D}\left(Q_{0}\right)$ such that $\chi_{E}(x)\leq\frac{1}{2^{n+1}}$ for a.e. $x\notin\bigcup_{l}P_{l}$. Together with this we immediately obtain $\left|E\backslash\bigcup_{l}P_{l}\right|=0.$ At the same time, for each $l\geq 1$, we also have $\frac{1}{2^{n+1}}\left|P_{l}\right|\leq\left|P_{l}\cap E\right|\leq\frac{1}{2}\left|P_{l}\right|.$ Observe that $\sum_{l}\left|P_{l}\right|\leq 2^{n+1}|E|\leq\frac{1}{2}\left|Q_{0}\right|,$ $P_{l}\cap E^{c}\neq\emptyset$. Let $\displaystyle G_{1}(x):=$ $\displaystyle T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\chi_{Q_{0}\backslash\cup_{l}P_{l}}(x)$ $\displaystyle+\sum_{l}T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3P_{l}}T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)\right)(x)\chi_{P_{l}}(x).$ Then we have the following claim: $\int_{\mathbb{R}^{n}}|G_{1}(x)|^{p}|g(x)|dx\lesssim\left(A_{0}^{p}+A_{1}^{p}\right)\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q_{0}}\left|Q_{0}\right|.$ Indeed, noting that $0<p\leq 1$ and the definition of $G_{1}$ we have $\displaystyle|G_{1}(x)|^{p}\leq$ $\displaystyle\left|T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\right|^{p}\chi_{Q_{0}\backslash\cup_{l}P_{l}}(x)$ $\displaystyle+\sum_{l}|T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3P_{l}}T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)\right)(x)|^{p}\chi_{P_{l}}(x)$ $\displaystyle=:L_{1}(x)+\sum_{l}L^{p}_{2,l}(x),$ where $L_{2,l}(x)=|T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3P_{l}}T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)\right)(x)|\chi_{P_{l}}(x)$ with $l=1,2,\ldots$. 
Therefore, $\displaystyle\int_{\mathbb{R}^{n}}|G_{1}(x)|^{p}|g(x)|dx$ $\displaystyle\leq\int_{Q_{0}\backslash\cup_{l}P_{l}}L_{1}(x)|g(x)|dx+\sum_{l}\int_{P_{l}}L_{2,l}^{p}(x)|g(x)|dx$ $\displaystyle\leq\left(\int_{Q_{0}\backslash E}+\int_{E\backslash\cup_{l}P_{l}}\right)L_{1}(x)|g(x)|dx+\sum_{l}\int_{P_{l}}L_{2,l}^{p}(x)|g(x)|dx$ $\displaystyle=\int_{Q_{0}\backslash E}L_{1}(x)|g(x)|dx+\sum_{l}\int_{P_{l}}L_{2,l}^{p}(x)|g(x)|dx,$ where the last inequality is due to the fact that $\left|E\backslash\bigcup_{l}P_{l}\right|=0.$ Note that for any $x\in Q_{0}\backslash E$ yields that $x\notin E_{1}$ which implies that $\left|T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\right|\leq DA_{0}\|f\|_{L(\log L)^{\beta_{1}},9Q_{0}}.$ This estimate combined with Hölder’s inequality for any $t\geq 1$ yields (4.1) $\displaystyle\int_{Q_{0}\backslash E}L_{1}(x)|g(x)|dx$ $\displaystyle\leq D^{p}A_{0}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}{\langle|g|\rangle}_{1,Q_{0}}|Q_{0}|$ $\displaystyle\leq D^{p}A_{0}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}{\langle|g|\rangle}_{t,Q_{0}}|Q_{0}|.$ For each $l$, using the fact that $P_{l}\cap E^{c}\neq\emptyset,$ we observe that there exists some $x_{l}\in P_{l}\cap E^{c}$ such that $\inf_{\xi\in P_{l}}\mathscr{M}_{T_{1}T_{2},r^{\prime};Q_{0}}^{*}f(\xi)\leq\mathscr{M}_{T_{1}T_{2},r^{\prime};Q_{0}}^{*}f(x_{l}).$ Combining this inequality and by the Hölder’s inequality with $\frac{r^{\prime}}{p}\geq r^{\prime}>1$, it gives (4.2) $\displaystyle\sum_{l}\int_{P_{l}}L_{2,l}^{p}(x)|g(x)|dx$ $\displaystyle\lesssim\sum_{l}\left(\int_{P_{l}}L_{2,l}^{r^{\prime}}(x)dx\right)^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle=\sum_{l}\left(\frac{1}{|P_{l}|}\int_{P_{l}}L_{2,l}^{r^{\prime}}(x)dx\right)^{\frac{p}{r^{\prime}}}|P_{l}|^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq\sum_{l}\left(\inf_{\xi\in P_{l}}\mathscr{M}_{T_{1}T_{2},r^{\prime};Q_{0}}^{*}f(\xi)\right)^{p}|P_{l}|^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq D^{p}A_{1}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}\sum_{l}|P_{l}|^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq D^{p}A_{1}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}\left(\sum_{l}|P_{l}|\right)^{\frac{p}{r^{\prime}}}\left(\sum_{l}\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq A_{1}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}\left|Q_{0}\right|^{1-\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}\left(\int_{Q_{0}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle=A_{1}^{p}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q_{0}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q_{0}}\left|Q_{0}\right|.$ Combining this bounds with (4.1), we see that our claim holds with $t=(\frac{r^{\prime}}{p})^{\prime}$. 
Let $G_{2}(x)$ be the function defined by $G_{2}(x):=\sum_{l}T_{1}\left(\chi_{3P_{l}}T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)\right)(x)\chi_{P_{l}}(x).$ For each function $g$, applying the Hölder’s inequality and the $L^{r^{\prime}}$ boundedness of $T_{1}$ yield $\displaystyle\int_{\mathbb{R}^{n}}|G_{2}(x)|^{p}|g(x)|dx$ $\displaystyle\leq\sum_{l}\left(\int_{P_{l}}\left|T_{1}\left(\chi_{3P_{l}}T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)\right)(x)\right|^{r^{\prime}}(x)dx\right)^{\frac{p}{r^{\prime}}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},P_{l}}\left|P_{l}\right|^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq A^{p}\sum_{l}\left(\int_{3P_{l}}\left|T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)(x)\right|^{r^{\prime}}dx\right)^{\frac{p}{r^{\prime}}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},P_{l}}\left|P_{l}\right|^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq 3^{n}A^{p}\sum_{l}\left(\frac{1}{|3P_{l}|}\int_{3P_{l}}\left|T_{2}\left(f\chi_{9Q_{0}\backslash 9P_{l}}\right)(x)\right|^{r^{\prime}}dx\right)^{\frac{p}{r^{\prime}}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},P_{l}}\left|P_{l}\right|.$ The same reasoning as what we have done for $G_{1}$ then gives (4.3) $\displaystyle\int_{\mathbb{R}^{n}}|G_{2}(x)|^{p}|g(x)|dx$ $\displaystyle\lesssim A^{p}\sum_{l}\left(\inf_{\xi\in P_{l}}\mathscr{M}_{T_{2},r^{\prime};Q_{0}}^{*}f(\xi)\right)^{p}|P_{l}|^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\lesssim A^{p}A_{2}^{p}\|f\|^{p}_{L(\log L)^{\beta_{2}},9Q_{0}}\sum_{l}|P_{l}|^{\frac{p}{r^{\prime}}}\left(\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq A^{p}A_{2}^{p}\|f\|^{p}_{L(\log L)^{\beta_{2}},9Q_{0}}\left(\sum_{l}|P_{l}|\right)^{\frac{p}{r^{\prime}}}\left(\sum_{l}\int_{P_{l}}|g(x)|^{(\frac{r^{\prime}}{p})^{\prime}}dx\right)^{\frac{1}{(\frac{r^{\prime}}{p})^{\prime}}}$ $\displaystyle\leq A^{p}A_{2}^{p}\|f\|^{p}_{L(\log L)^{\beta_{2}},9Q_{0}}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q_{0}}\left|Q_{0}\right|.$ We note that at each point $x\in\mathbb{R}^{n},$ $T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\chi_{Q_{0}}(x)=G_{1}(x)+G_{2}(x)+\sum_{l}T_{1}T_{2}\left(f\chi_{9P_{l}}\right)(x)\chi_{P_{l}}(x).$ Observe that the last term on the right-hand side is consistent with the form on the left-hand side, so we can iterate with $T_{1}T_{2}\left(f\chi_{9P_{l}}\right)(x)\chi_{P_{l}}(x)$ instead of $T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\chi_{Q_{0}}(x),$ and so on. For fixed $j_{1},\ldots,j_{m-1}\in\mathbb{Z}^{+}$, let $\\{Q_{0}^{j_{1}\ldots j_{m-1}j_{m}}\\}_{j_{m}}$ be the cubes obtained at the $m$-th stage of the decomposition process to the cube $Q_{0}^{j_{1}\ldots j_{m-1}}$, where $\\{Q_{0}^{j_{1}}\\}=\\{P_{j}\\}$ . For each fixed $j_{1}\ldots,j_{m}$, define the functions $G_{Q_{0},1}^{j_{1}\ldots j_{m}}f$ and $G_{Q_{0},2}^{j_{1}\ldots j_{m}}f$ by $G_{Q_{0},1}^{j_{1}\ldots j_{m}}f(x)=T_{1}\left(\chi_{\mathbb{R}^{n}\backslash 3Q_{0}^{j_{1}\ldots j_{m}}}T_{2}\left(f\chi_{9Q_{0}^{j_{1}\ldots j_{m-1}}\backslash 9Q_{0}^{j_{1}\ldots j_{m}}}\right)\right)(x)\chi_{Q_{0}^{j_{1}\ldots j_{m}}}(x)$ and $G_{Q_{0},2}^{j_{1}\ldots j_{m}}f(x)=T_{1}\left(\chi_{3Q_{0}^{j_{1}\ldots j_{m}}}T_{2}\left(f\chi_{9Q_{0}^{j_{1}\ldots j_{m-1}}\backslash 9Q_{0}^{j_{1}\ldots j_{m}}}\right)\right)(x)\chi_{Q_{0}^{j_{1}\ldots j_{m}}}(x),$ respectively. 
Let $\mathcal{F}=\left\\{Q_{0}\right\\}\cup_{m=1}^{\infty}\cup_{j_{1},\ldots,j_{m}}\left\\{Q_{0}^{j_{1}\ldots j_{m}}\right\\}$. It is easy to check that $\mathcal{F}\subset\mathcal{D}\left(Q_{0}\right)$ is a $\frac{1}{2}$-sparse family with $\sum_{l}\left|P_{l}\right|\leq\frac{1}{2}\left|Q_{0}\right|$. Then (4.4) $\displaystyle G_{Q_{0},1}(x)=T_{1}$ $\displaystyle T_{2}\left(f\chi_{9Q_{0}}\right)\chi_{Q_{0}\backslash\cup_{j_{1}}Q_{0}^{j_{1}}}(x)$ $\displaystyle+\sum_{m=1}^{\infty}\sum_{j_{1},\ldots,j_{m}}T_{1}T_{2}\left(f\chi_{9Q_{0}^{j_{1}\ldots j_{m}}}\right)\chi_{Q_{0}^{j_{1}\ldots j_{m}}\backslash\cup_{j_{m+1}}Q_{0}^{j_{1}\ldots j_{m+1}}}(x)$ $\displaystyle+\sum_{m=1}^{\infty}\sum_{j_{1},\ldots,j_{m}}G_{Q_{0},1}^{j_{1}\ldots j_{m}}f(x)\chi_{Q_{0}^{j_{1}\ldots j_{m}}}(x).$ Similarly to what we did for $G_{2}$, we define the function $G_{Q_{0},2}$ by $G_{Q_{0},2}(x)=\sum_{m=1}^{\infty}\sum_{j_{1}\ldots j_{m}}G_{Q_{0},2}^{j_{1}\ldots j_{m}}f(x)\chi_{Q_{0}^{j_{1}\ldots j_{m}}}(x).$ Then for a.e. $x\in\mathbb{R}^{n}$, $T_{1}T_{2}\left(f\chi_{9Q_{0}}\right)(x)\chi_{Q_{0}}(x)=G_{Q_{0},1}(x)+G_{Q_{0},2}(x).$ We are now ready to combine all our ingredients to finish the proof. In fact, by applying (4.2), (4.3) and (4.1) with $t=(\frac{r^{\prime}}{p})^{\prime},$ we obtain (4.5) $\int_{\mathbb{R}^{n}}|G_{Q_{0},1}(x)|^{p}|g(x)|dx\lesssim\left(A_{0}^{p}+A_{1}^{p}\right)\sum_{Q\in\mathcal{F}}\|f\|^{p}_{L(\log L)^{\beta_{1}},9Q}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q}\left|Q\right|,$ (4.6) $\int_{\mathbb{R}^{n}}|G_{Q_{0},2}(x)|^{p}|g(x)|dx\lesssim\left(A^{p}A_{2}^{p}\right)\sum_{Q\in\mathcal{F}}\|f\|^{p}_{L(\log L)^{\beta_{2}},9Q}\langle|g|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q}\left|Q\right|.$ Observe that $\bigcup_{l}Q_{l}=\mathbb{R}^{n}$, where the cubes $Q_{l}$’s have disjoint interiors and $\operatorname{supp}f\subset 9Q_{l}$ for each $l$. To see this, we begin by taking a cube $Q_{0}$ such that $\operatorname{supp}f\subset Q_{0}$, and cover $9Q_{0}\backslash Q_{0}$ by $9^{n}-1$ congruent cubes $Q_{l}$. For every $l$, $Q_{0}\subset 9Q_{l}$. We continue in the same way for $27Q_{0}\backslash 9Q_{0}$, and so on. It is easy to check that the cubes $Q_{l}$ obtained in this process, together with $Q_{0}$, satisfy our requirement. Applying the above argument to each $Q_{l}$, we obtain a family $\mathcal{F}_{l}$ for which the estimates (4.5) and (4.6) hold. Moreover, for a.e. $x\in\mathbb{R}^{n}$, (4.7) $T_{1}T_{2}f(x)=\sum_{l}G_{Q_{l},1}f(x)+\sum_{l}G_{Q_{l},2}f(x)=:H_{1}f(x)+H_{2}f(x).$ Let $\mathcal{S}$ denote $\left\\{9Q:Q\in\bigcup_{l}\mathcal{F}_{l}\right\\}.$ From the definitions of $\mathcal{F}_{l}$, it is easy to check that $\mathcal{S}$ is a $\frac{1}{2\cdot 9^{n}}$-sparse family. This fact, together with (4.7), gives the desired result. ∎ Applying Lemma 4.1 to the rough singular integral operators $T_{\Omega_{1}}$ and $T_{\Omega_{2}}$, we can complete the proof of Theorem 1.4. ###### Proof of Theorem 1.4. Let $T_{1},T_{2}$ be two linear operators satisfying the conditions in Lemma 4.1. Note that $\mathbb{X}$ is a $p$-convex RIQBFS, which implies that $\mathbb{Y}=\mathbb{X}^{\frac{1}{p}}$ is a RIBFS.
Then Lemma 4.1 tells us that for a.e.$x\in\mathbb{R}^{n},$ $T_{1}T_{2}f(x)=H_{1}(x)+H_{2}(x).$ Therefore, (4.8) $\displaystyle\left\|T_{1}T_{2}f\right\|_{\mathbb{X}(w)}$ $\displaystyle\lesssim\left\|H_{1}\right\|_{\mathbb{X}(w)}+\left\|H_{2}\right\|_{\mathbb{X}(w)}$ $\displaystyle\backsimeq\sum_{i=1}^{2}\sup_{\left\|g\right\|_{\mathbb{Y}^{\prime}(w)}\leq 1}\left(\int_{\mathbb{R}^{n}}|H_{i}(x)|^{p}g(x)w(x)dx\right)^{\frac{1}{p}}$ $\displaystyle=:\sup_{\left\|g\right\|_{\mathbb{Y}^{\prime}(w)}\leq 1}\left(\mathcal{L}_{1}(g)\right)^{\frac{1}{p}}+\sup_{\left\|g\right\|_{\mathbb{Y}^{\prime}(w)}\leq 1}\left(\mathcal{L}_{2}(g)\right)^{\frac{1}{p}},$ where $\mathcal{L}_{i}(g)=\int_{\mathbb{R}^{n}}|H_{i}(x)|^{p}g(x)w(x)dx,i=1,2.$ Consider first the estimate of $\mathcal{L}_{1}(g).$ By Lemma 4.1, we have $\displaystyle\mathcal{L}_{1}(g)$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})\mathcal{A}^{(p,1)}_{\mathcal{S};L(\log L)^{\beta_{1}},L^{(r^{{}^{\prime}}/p)^{{}^{\prime}}}}(f,gw)$ $\displaystyle=(A_{0}^{p}+A_{1}^{p})\sum_{Q\in\mathcal{S}}\left\|f\right\|_{L(\log L)^{\beta_{1}},Q}^{p}\langle|gw|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q}|Q|.$ Observe that $(\frac{r^{\prime}}{p})^{\prime}\leq r$ for $0<p\leq 1,$ and then a direct calculation shows that $|Q|\langle|gw|\rangle_{(\frac{r^{\prime}}{p})^{\prime},Q}\leq\langle|gw|\rangle_{r,Q}|Q|\leq w(Q)\cdot g_{Q,w}^{rs},$ where the definitions of $s$ and $g_{Q,w}^{rs}$ are as the same as in the proof of Theorem 1.1. Hence, by the Carleson embedding theorem, $\displaystyle\mathcal{L}_{1}(g)$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})\sum_{Q\in\mathcal{S}}\left\|f\right\|_{L(\log L)^{\beta_{1}},Q}^{p}w(Q)g_{Q,w}^{rs}$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}\left(M_{L(\log L)^{\beta_{1}}}f(x)\right)^{p}M_{w,2s}^{\mathcal{D}}g(x)w(x)dx$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})[w]_{A_{\infty}}\int_{\mathbb{R}^{n}}\left(M^{\beta_{1}+1}f(x)\right)^{p}M_{w,2s}^{\mathcal{D}}g(x)w(x)dx,$ which, together with the generalized Hölder’s inequality, implies that $\displaystyle\mathcal{L}_{1}(g)$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})[w]_{A_{\infty}}\left\|(M^{\beta_{1}+1}f)^{p}\right\|_{\mathbb{Y}(w)}\left\|M_{w,2s}^{\mathcal{D}}g\right\|_{\mathbb{Y}^{\prime}(w)}$ $\displaystyle=(A_{0}^{p}+A_{1}^{p})[w]_{A_{\infty}}\left\|M^{\beta_{1}+1}f\right\|_{\mathbb{X}(w)}^{p}\left\|M_{w}^{\mathcal{D}}(g^{2s})\right\|_{\mathbb{Y}^{\prime\frac{1}{2s}}(w)}^{\frac{1}{2s}}$ $\displaystyle\lesssim(A_{0}^{p}+A_{1}^{p})[w]_{A_{\infty}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{1}+1}{p_{\mathbb{X}}}p}\left\|f\right\|^{p}_{\mathbb{X}(w)},$ where the last inequality follows from Lemma 3.1 and the fact $p_{\mathbb{Y}^{\prime\frac{1}{2s}}}=\frac{p_{\mathbb{Y}^{\prime}}}{2s}=\frac{1}{2s}\frac{q_{\mathbb{X}}}{q_{\mathbb{X}}-p}>1.$ Therefore (4.9) $\left\|H_{1}\right\|_{\mathbb{X}(w)}\lesssim(A_{0}^{p}+A_{1}^{p})^{\frac{1}{p}}[w]_{A_{\infty}}^{\frac{1}{p}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{1}+1}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}.$ Now we turn to the proof of $\left\|H_{2}\right\|_{\mathbb{X}(w)}.$ The same reasoning as what we have done for $\mathcal{L}_{1}(g)$ yields that $\mathcal{L}_{2}(g)=\int_{\mathbb{R}^{n}}|H_{2}(x)|^{p}|g(x)|w(x)dx\lesssim A^{p}A_{2}^{p}[w]_{A_{\infty}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{2}+1}{p_{\mathbb{X}}}p}\left\|f\right\|_{\mathbb{X}(w)}^{p}\left\|g\right\|_{\mathbb{Y}^{\prime}(w)}.$ By taking the supermum over $\|g\|_{\mathbb{Y}^{\prime}(w)}\leq 1,$ we obtain (4.10) 
$\left\|H_{2}\right\|_{\mathbb{X}(w)}\lesssim AA_{2}[w]_{A_{\infty}}^{\frac{1}{p}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{2}+1}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)}.$ Combining the above estimates (4.8), (4.9) and (4.10), we conclude that $\left\|T_{1}T_{2}f\right\|_{\mathbb{X}(w)}\lesssim\left[(A_{0}^{p}+A_{1}^{p})^{\frac{1}{p}}+AA_{2}\right][w]_{A_{\infty}}^{\frac{1}{p}}\left([w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{1}+1}{p_{\mathbb{X}}}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{\beta_{2}+1}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)}.$ Finally, we set $T_{1}=T_{\Omega_{1}},T_{2}=T_{\Omega_{2}},A_{0}=1,A_{1}=A=A_{2}=r^{\prime},\beta_{1}=1,\beta_{2}=0.$ This together with the proof of Corollary 5.1 in [29] yields $\left\|T_{\Omega_{1}}T_{\Omega_{2}}f\right\|_{\mathbb{X}(w)}\lesssim\left([w]_{A_{\infty}}^{1+\frac{1}{p}}+[w]_{A_{\infty}}^{2+\frac{1}{p}}\right)\left([w]_{A_{p_{\mathbb{X}}}}^{\frac{1}{p_{\mathbb{X}}}}+[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\right)\left\|f\right\|_{\mathbb{X}(w)}.$ This finishes the proof of Theorem 1.4. ∎ ## 5\. Applications This section is devoted to an application of Theorem 1.1. We consider the boundedness of certain non-standard Calderón-Zygmund operators with rough kernels in RIBFS $\mathbb{X}$. For fixed $n\geq 2$, let $\Omega$ be a function homogeneous of degree zero, integrable on the unit sphere $\mathbb{S}^{n-1}$, and satisfying the vanishing moment condition that for all $1\leq j\leq n$, (5.1) $\int_{\mathbb{S}^{n-1}}\Omega\left(x\right)x_{j}d\sigma(x)=0,$ where $x_{j}$ $(1\leq j\leq n)$ denotes the $j$-th coordinate of $x\in\mathbb{R}^{n}$. Note that the vanishing condition here is different from (1.1). Let $A$ be a function on $\mathbb{R}^{n}$ whose derivatives of order one are in $\mathrm{BMO}\left(\mathbb{R}^{n}\right)$, namely, $\nabla A\in\mathrm{BMO}$. Then we can define the non-standard rough Calderón-Zygmund operator $T_{\Omega,A}$ by (5.2) $T_{\Omega,A}f(x)=\text{p.v.}\int_{\mathbb{R}^{n}}\frac{\Omega(x-y)}{|x-y|^{n+1}}\left(A(x)-A(y)-\nabla A(y)\cdot(x-y)\right)f(y)dy.$ The dual operator of $T_{\Omega,A}$ has the following form: $\widetilde{T}_{\Omega,A}f(x)=\text{p.v.}\int_{\mathbb{R}^{n}}\frac{\Omega(x-y)}{|x-y|^{n+1}}(A(x)-A(y)-\nabla A(x)\cdot(x-y))f(y)dy.$ The operator ${T}_{\Omega,A}$ is closely related to the Calderón commutator, of interest in PDE, and was first studied by Cohen [13]. An interesting aspect of this operator is that it may fail to satisfy the classical standard kernel condition even when $\Omega$ is smooth. This is the main reason it is called a non-standard singular integral operator. We refer the reader to [13, 31, 28] and their references for more details on this topic. It is worth mentioning that Hu et al. [30] recently obtained the endpoint $L\log L$ type estimate and the $L^{p}$ boundedness of ${T}_{\Omega,A}$ with $\Omega\in L(\log L)^{2}(\mathbb{S}^{n-1}).$ In addition, they obtained the following result: ###### Theorem C ([30]). Let $\Omega\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$ be homogeneous of degree zero, satisfy the vanishing condition (5.1), and $A$ be a function on $\mathbb{R}^{n}$ with derivatives of order one in $\mathrm{BMO}\left(\mathbb{R}^{n}\right)$.
Then for $p\in(1,\infty)$ and $w\in A_{p}\left(\mathbb{R}^{n}\right)$, the following weighted norm inequality holds $\left\|T_{\Omega,A}f\right\|_{L^{p}\left(\mathbb{R}^{n},w\right)}\lesssim[w]_{A_{p}}^{\frac{1}{p}}\left([w]_{A_{\infty}}^{\frac{1}{p}}+[\sigma]_{A_{\infty}}^{\frac{1}{p}}\right)[\sigma]_{A_{\infty}}\min\left\\{[\sigma]_{A_{\infty}},[w]_{A_{\infty}}\right\\}\|f\|_{L^{p}\left(\mathbb{R}^{n},w\right)}.$ As in [30, Theorem 4.11] and [30, Theorem 5.6], by Theorem 1.1, we obtain ###### Theorem 5.1. Let $\Omega$ be homogeneous of degree zero, have the vanishing moment (5.1) and $\Omega\in L^{\infty}\left(\mathbb{S}^{n-1}\right)$, and $A$ be a function on $\mathbb{R}^{n}$ with derivatives of order one in $\mathrm{BMO}\left(\mathbb{R}^{n}\right)$. Let $1<r<\infty$ and $\mathbb{X}$ be a RIBFS with $1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty$, then there exist $q>1$ such that $\left\|T_{\Omega,A}f\right\|_{\mathbb{X}(w)}\lesssim\left\\{\begin{array}[]{ll}[w]_{A_{\infty}}[w]_{A_{p_{\mathbb{X}}/r}}^{\frac{1}{rq}}\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }r<p_{\mathbb{X}}\leq q_{\mathbb{X}}<\infty,w\in A_{\frac{p_{\mathbb{X}}}{r}};\\\ {[w]_{A_{\infty}}^{2}}[w]_{A_{p_{\mathbb{X}}}}^{\frac{2}{p_{\mathbb{X}}}}\left\|f\right\|_{\mathbb{X}(w)},&\text{ if }1<p_{\mathbb{X}}\leq q_{\mathbb{X}}<2-\frac{1}{1+p_{\mathbb{X}}},w\in A_{p_{\mathbb{X}}}.\end{array}\right.$ ## References * [1] M. Akcoglu, R. L. Jones and P. Schwartz, Variation in probability, ergodic theory and analysis, Illinois J. Math. 42 (1) (1998), 154-177. * [2] T. C. Anderson and B. Hu, A unified method for maximal truncated Calderón-Zygmund operators in general function spaces by sparse domination, Proc. Edinb. Math. Soc. 63 (2) (2020), 229-247. * [3] C. Benea and F. Bernicot, Conservation de certaines propriétés á travers un contrôle épars d’un opérateur et applications au projecteur de Leray–Hopf, Ann. Inst. Fourier(Grenoble), 68 (2018), 2329-2379. * [4] C. Bennett and R. Sharply, Interpolation of Operators, Academic Press, New York, (1988). * [5] D. W. Boyd, The Hilbert transform on rearrangement-invariant spaces, Canad. J. Math. 19 (1967), 599-616. * [6] A. P. Calderón, Algebras of singular integral operators, Proceedings of the International Congress Mathematicians, (1968), 393-395. * [7] A. P. Calderón and A. Zygmund, On the existence of certain singular integrals, Acta Math. 88 (1952), 85-139. * [8] A. P. Calderón and A. Zygmund, On singular integrals, Amer. J. Math. 78 (1956), 289-309. * [9] A. P. Calderón and A. Zygmund, Algebras of certain singular operators, Am. J. Math. 78 (1956), 310-320. * [10] N. Carozza and A. Passarelli di Napoli, Composition of maximal operators, Publ. Mat. 40 (1996), 397-409. * [11] M. E. Cejas, K. Li, C. Pérez and I.P. Rivera-Ríos, Vector-valued operators, optimal weighted estimates and the $C_{p}$ condition, Sci. China Math. 63 (7) (2020), 1339-1368. * [12] M. Christ and J. -L. Rubio de Francia, Weak type (1, 1) bounds for rough operators, II, Invent. Math. 93 (1988), 225-237. * [13] J. Cohen, A sharp estimate for a multilinear singular integral in $\mathbb{R}^{n}$, Indiana Univ. Math. J. 30 (1981), 693-702. * [14] R. R. Coifman and Y. Meyer, Wavelets: Calderón-Zygmund Operators and Multilinear Operators, Cambridge University Press, Cambridge, 1997. * [15] J. M. Conde-Alonso, A. Culiuc, F. Di Plinio and Y. Ou, A sparse domination principle for rough singular integrals, Anal. PDE. 10 (5) (2017), 1255-1284. * [16] W. C. Connett, Singular integral near $L^{1}$, Proc. Sympos. Pure Math. 
35 (1979), 163-165. * [17] A. Culiuc, F. Di Plinio and Y. Ou, Uniform sparse domination of singular integrals via dyadic shifts, Math. Res. Lett. 2 (1) (2018), 21-42. * [18] G. Curbera, J.Cuerva, J. Martell and C. Pérez, Extrapolation with weights, rearrangement invariant function spaces, modular inequalities and applications to singular integrals, Adv. Math. 203 (2006), 256-318. * [19] J. Duoandikoetxea and J. L. Rubio de Francia, Maximal and singular integrals via Fourier transform estimates, Invent. Math. 84 (1986), 541-561. * [20] J. Duoandikoetxea, Weighted norm inequalities for homogeneous singular integrals, Trans. Am. Math. Soc. 336 (1993), 869-880. * [21] D. E. Edmunds and W. D. Evans, Hardy Operators, Function Spaces and Embeddings, Springer, Berlin, 2004. * [22] L. Grafakos, Classical Fourier Analysis, 3rd ed., Graduate Texts in Mathematics, 249, Springer, New York, 2014. * [23] L. Grafakos, Modern Fourier Analysis, 3rd ed., Graduate Texts in Mathematics, 250, Springer, New York, 2014. * [24] L. Grafakos and N. Kalton, Some remarks on multilinear maps and interpolation, Math. Ann. 319 (2001), 151-180. * [25] L. Grafakos, A. Stefanov, Convolution Calderón-Zygmund singular integral operators with rough kernels, Analysis of Divergence, Appl. Numer. Harmon. Anal. (1999), 119-143. * [26] G. Hu, Weighted weak type endpoint estimates for the composition of Calderón-Zygmund operators, J. Aust. Math. Soc. 109 (2020), 320-339. * [27] G. Hu, Quantitative weighted bounds for the composition of Calderón-Zygmund operators, Banach J. Math. Anal. 13 (2019), 133-150. * [28] Y. Hu, An estimate for multilinear singular integrals on $\mathbb{R}^{d}$, Beijing Daxue Xuebao, (1985), no. 3, 19-26. 552 (Chinese. English summary) * [29] G. Hu, X. Lai and Q. Xue, On the composition of rough singular integral operators, J. Geom. Anal., 31 (3) (2021), 2742-2765. * [30] G. Hu, X. Tao, Z. Wang and Q. Xue, On the boundedness of non-standard rough singular integral operators, arXiv: 2203.05249 [math.CA] (2022). * [31] G. Hu and D. Yang, Sharp function estimates and weighted norm inequalities for multilinear singular integral operators, Bull. London Math. Soc. 35 (2003), 759-769. * [32] T. P. Hytönen and C. Pérez, Sharp weighted bounds involving $A_{\infty}$, Anal. PDE. 6 (2013), 777-818. * [33] T. Hytönen, L. Roncal and O.Tapiola, Quantitative weighted estimates for rough homogeneous singular integrals, Israel J. Math. 218 (2017), 133-164. * [34] W. B. Johnson and G. Schechtman, Sums of independent random variables in rearrangement invariant function spaces, Ann. Probab. 17 (1989), 789-808. * [35] A. K. Lerner, A weak type estimates for rough singular integrals, Rev. Mat. Iberoam. 35 (5) (2019), 1583-1602. * [36] A. K. Lerner, A simple proof of the $A_{2}$ conjecture, Int. Math. Res. Not. 24 (14) (2013), 3159-3170. * [37] A. K. Lerner, On an estimate of Calderón-Zygmund operators by dyadic positive operators, J. Anal. Math. 121 (2013), 141-161. * [38] A. K. Lerner and F. Nazarov, Intuitive dyadic calculus: the basics, Expo. Math. 37(3) (2019), 225-265. * [39] K. Li, C. Pérez, I. P. Rivera-Rios and L. Roncal, Weighted norm inequalities for rough singular integral operators, J. Geom. Anal. 29 (2019), 2526-2564. * [40] G. G. Lorentz, Majorants in spaces of integrable functions, Amer. J. Math. 77 (1955), 484-492. * [41] L. Maligranda, Orlicz Spaces and Interpolation, Seminars in Mathematics 5, Univ. Estadual de Campinas, Campinas, 1989. * [42] S. J. 
Montgomery-Smith, The Hardy Operator and Boyd Indices, Interaction between Functional Analysis, Harmonic Analysis, and Probability, Lecture Notes in Pure and Appl. Math. 175 (1996), 359-364. * [43] A. Nagel, F. Ricci, E. M. Stein and S. Wainger, Algebras of singular integral operators with kernels controlled by multiple norms, Mem. Am. Math. Soc. 256 (1230) (2018), vii+141. * [44] C. Pérez, Singular integrals and weights. (English summary) Harmonic and geometric analysis, 91-143, Adv. Courses Math. CRM Barcelona, Birkhäuser/Springer Basel AG, Basel, 2015. * [45] D. H. Phone and E. M. Stein, Some further classes of pseudo-differential and singular integral operators arising in boundary-value problems I, composition of operators, Amer. J. Math. 104 (1982), 141-172. * [46] M. M. Rao and Z. Ren, Theory of Orlicz Spaces, Marcel Dekker, New York, 1991. * [47] F. Ricci and G. Weiss, A characterization of $H^{1}(\Sigma_{n-1})$, Proc. Symp. Pure Math. 35 (part I) (1979), 289-294. * [48] A. Seeger, Singular integral operators with rough convolution kernels, J. Amer. Math. Soc. 9 (1996), 95-105. * [49] R. S. Strichartz, Compositions of singular integral operators, J. Funct. Anal. 49 (1982), 91-127. * [50] J. Tan and Q. Xue, Weighted variation inequalities for singular integrals and commutators in rearrangement invariant Banach and quasi-Banach spaces, Acta Math. Sin. (Engl. Ser.) 39 (2023), no. 7, 1389-1413. * [51] D. K. Watson, Weighted estimates for singular integrals via Fourier transform estimates, Duke Math. J. 60 (1990), 389-399. * [52] Y. Wen, H. Wu and Q. Xue, Sparse dominations and weighted variation inequalities for singular integrals and commutators, J. Geom Anal. 32 (2022), Paper No. 297, 30 pp.
# Temporal-Spatial Processing of Event Camera Data via Delay-Loop Reservoir Neural Network Richard Lau ([email protected]), Anthony Tylan-Tyler ([email protected]), Lihan Yao ([email protected]), Rey de Castro Roberto ([email protected]), Robert Taylor ([email protected]), Isaiah Jones <EMAIL_ADDRESS> (Peraton Labs, 331 Newman Springs Road, Red Bank NJ, 07701, USA) ###### Abstract This paper describes a temporal-spatial model for video processing with special applications to processing event camera videos. We propose to study a conjecture motivated by our previous study of video processing with a delay-loop reservoir (DLR) neural network [1, 2], which we call the _Temporal-Spatial Conjecture_ (TSC). The TSC postulates that there is significant information content carried in the temporal representation of a video signal and that machine learning algorithms would benefit from separate optimization of the spatial and temporal components for intelligent processing. To verify or refute the TSC, we propose a _Video Markov Model_ (VMM) which decomposes the video into spatial and temporal components and estimates the mutual information (MI) of these components. Since computation of video mutual information is complex and time consuming, we use a Mutual Information Neural Estimation (MINE) network [10] to estimate the bounds of the mutual information. Our result shows that the temporal component carries significant MI compared to that of the spatial component. This finding has often been overlooked in the neural network literature. In this paper, we exploit this new finding to guide our design of a _delay-loop reservoir_ neural network for event camera classification, which results in an _18% improvement_ in classification accuracy. _Keywords_ Temporal-Spatial Conjecture, Video Markov Model, Mutual Information, Mutual Information Neural Estimate, Delay Loop Reservoir, Event Camera, Edge Application. ## 1 Introduction While there are many Artificial Intelligence/Machine Learning (AI/ML) techniques in the literature that are used in practice for classification and prediction of video signals, most employ techniques that are handcrafted by experts in the field, often without theoretical justifications [5, 6]. In this paper, which is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) on “Reservoir-based Event Camera for Intelligent Processing on the Edge (RECIPE)”, we decompose a video signal into temporal and spatial components and postulate a Temporal-Spatial Conjecture (TSC) that suggests benefits from separating temporal and spatial processing to reduce training time and complexity. To validate or refute the TSC, we propose a Video Markov Model (VMM) that captures the inter-dependence of key components of the visual signal. The VMM quantifies the dependence by measuring their mutual information (MI). Results from the VMM provide insight to help explain why certain approaches work and possibly guidance for the construction of improved AI/ML algorithms. The TSC study was motivated by a prior result from our previous work in HyDDENN [1, 2], where we had found that separate temporal/spatial processing in a drone/bird classifier leads to an improvement in Signal-to-Noise ratio compared to a classifier that processes the combined temporal-spatial information. In this paper, we aim to quantify this advantage. To demonstrate the TSC, we apply the lessons learned from the VMM to process event camera videos.
Recent years have seen growing research efforts in “Event Cameras”, which provide a new way of visual sensing in the emerging field of neuromorphic engineering. Event cameras are fundamentally different from traditional visual cameras, and have created a paradigm shift in how visual information is acquired. A traditional camera samples the scene periodically (e.g. 30 frames/sec) and captures light intensity on all the pixels. In contrast, an event camera records only differential pixel intensity information upon scene changes. This represents a fundamentally different method of capturing visual information and leads to the following benefits: 1) the sensed data is sparse and significantly smaller, and thus reduces storage and transport requirements by a factor of 1000×; 2) ultra-high temporal resolution and low latency (in microseconds); 3) very high dynamic range (140 dB); and 4) low power consumption. These characteristics have great impact on the next generation of sensors deployed in both military and commercial systems, including self-driving automobiles, high-speed Unmanned Aerial Vehicles (UAVs), robotics, ultra-high-speed data transfer, storage reduction, and smart surveillance. To quantify the potential improvement, we apply the TSC-inspired architecture to process event video via a Delay-Loop Reservoir Neural Network (DLR) for classifying and predicting spike signals from event-based cameras. Advantages of the DLR for edge networks have been studied and reported in [2]. In this paper, we demonstrate how the TSC is used to improve the DLR for processing event videos. This paper is organized as follows. In Section 2, we describe the TSC and its motivation, followed by defining the VMM structure. Section 2 also includes a description of the DVS event dataset that we use to perform the study. Section 3 gives a detailed description of the Mutual Information Neural Estimate approach for estimating the mutual information for the VMM. It then describes the results when the VMM is applied to the DVS data. Motivated by the TSC result and insight, Section 4 applies the Delay-Loop Reservoir (DLR) to classification of the DVS data. It compares performance results between a direct DLR and a modified DLR inspired by the TSC, and explains how lessons learned from the TSC help to design a better architecture for the DLR. Section 5 discusses future research in extension of the study and implications of the TSC. Section 6 gives the final conclusion. ## 2 Temporal Spatial Conjecture The study of the TSC is motivated by understanding the fundamental theory behind processing video signals. Video processing algorithms often process the video signal as a 3-dimensional tensor of space and time, which requires a large amount of memory and computational resources. Since a video signal is made up of consecutive video frames, each of which is a static image, the spatial content of nearby frames is often highly correlated or even repetitive. However, most current AI/ML techniques used for classification and prediction of video signals do not exploit this correlation. While well-designed AI/ML algorithms, often handcrafted by experts in the field, can extract this correlation to achieve the goal, the algorithms are excessively computationally demanding, which becomes a significant problem for efficient edge applications. The current study asks the following question: Are there efficient ways to break up the video signal before processing that may lead to more efficient resource usage?
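To make the event representation used throughout this paper concrete, the following is a minimal sketch (ours, for illustration only) of how a raw event stream of (timestamp, x, y, polarity) tuples can be aggregated into frame tensors. The 30 ms window, 128×128 resolution, two polarity channels and 200-frame observations follow the DVS gesture setup described later in Section 2.2; the function name and the synthetic data are assumptions.

```python
import numpy as np

def events_to_frames(events, n_frames=200, window_us=30_000, height=128, width=128):
    """Aggregate an event stream into frames.

    events: array of shape (N, 4) with columns (timestamp_us, x, y, polarity in {0, 1}),
            sorted by timestamp. Returns a tensor of shape (n_frames, 2, height, width)
    counting events per pixel and polarity within each consecutive time window.
    """
    frames = np.zeros((n_frames, 2, height, width), dtype=np.float32)
    t0 = events[:, 0].min()
    for t, x, y, p in events:
        idx = int((t - t0) // window_us)
        if idx >= n_frames:
            break                      # keep only the first n_frames windows (~6 s)
        frames[idx, int(p), int(y), int(x)] += 1.0
    return frames

# Tiny synthetic example: 1,000 random events spread over ~6 seconds.
rng = np.random.default_rng(0)
events = np.column_stack([
    np.sort(rng.integers(0, 6_000_000, size=1_000)),   # timestamps in microseconds
    rng.integers(0, 128, size=1_000),                   # x coordinate
    rng.integers(0, 128, size=1_000),                   # y coordinate
    rng.integers(0, 2, size=1_000),                     # polarity
])
frames = events_to_frames(events)
print(frames.shape)   # (200, 2, 128, 128)
```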
We start by postulating a conjecture, called the Temporal-Spatial Conjecture (TSC), which states that separating the temporal and spatial processing is beneficial in visual processing to reduce training time and complexity. The TSC was motivated by a prior result from our previous work in HyDDENN [1, 2], where we had found that separating temporal/spatial processing in a drone/bird classifier leads to an improvement in Signal-to-Noise ratio of 4 dB compared to a classifier that processes the combined temporal-spatial information. In this paper, we build an analytic tool to quantify this advantage. ### 2.1 Video Model To analyze the TSC, we build a Video Markov Model (VMM) for analysis of the information content of the components of the video signal. We first represent an event video ($V$) as a set of contiguous event frames $F_{t}$, where $t=1,\ldots,D$ indexes the time samples. The frames are sampled in time, which can be non-uniform. For notational simplicity we assume uniform sampling in the following. Thus, we write $V=\{F_{t}\},\qquad t=1,\ldots,D.$ The VMM is created via two procedures. First it decomposes the event-based video signal ($V$) into a spatial component ($S$) and a temporal component ($T$). The decomposition should satisfy two criteria: 1. Each component $S$, $T$ is a subset of the events of the original signal $V$. 2. The union of $S$ and $T$ carries information that approaches that of the entire signal. Such information is measured with respect to an AI/ML application such as predicting or classifying the label of the video. Figure 1: Video Markov Model As shown in Figure 1, the spatial component $S$ is defined to be a set of sampled event frames such that the frames have negligible correlation among them. Thus, we write $S=\{F_{t_{u}}\},\qquad t_{u}=\text{time index of uncorrelated frames}.$ There are different ways to obtain the spatial component. One way is to randomly shuffle the event frames so that they are uncorrelated. Another method is to sparsely sample the event frames such that there is little correlation between the samples. Note that both methods satisfy criterion 1 given above. However, the latter method is usually preferred since it reduces processing and storage complexity. While the spatial component modifies and reduces temporal content, the temporal component retains temporal information but allows reduction of the spatial information. There are numerous ways that $T$ can be defined to satisfy the VMM criteria. We propose to define $T$ as clusters of events in contiguous frames. Thus, we write $T=\{C_{t}\},\qquad t=1,\ldots,D,$ where the $C_{t}$ are clusters of the events in the frame at time $t$. This definition does not place stringent requirements on the structure of the clusters but requires that the temporal information be unchanged from that of the original video signal. The second part of the VMM is to compute or learn the mutual information for 3 configurations as shown in Figure 2. We describe the computation and estimation of these MIs $I$ in more detail in the following section. The amount of MI carried in these models provides an answer to the TSC and hints at how to design efficient AI/ML architectures. Figure 2: VMM sub structure diagrams for study of TSC ### 2.2 Event Data For testing of the VMM and of the subsequent event data classification, we use the Dynamic Vision Sensor (DVS) gesture dataset [7]. DVS is a low-power, real-time, event-based hand gesture recognition system.
It is the first such system implemented entirely on event-based hardware. DVS captures changes in pixel brightness, transmitting data only when a change is detected, which significantly reduces power consumption compared to traditional frame-based cameras. The DVS dataset comprises 11 hand gesture categories from 29 subjects under 3 illumination conditions (see Figure 3). Figure 3: Examples from 8 of the 11 classes in the DVS dataset Figure 4: RGB captures of gestures, the corresponding event stream, and the aggregation of events into frames. To process the data, the event stream is first partitioned in time as shown in Figure 4, where the events in a partition, i.e. a time window, are aggregated into one frame. In the DVS gesture dataset, each frame collects events over a time window of 30 milliseconds. By sequencing 200 consecutive frames, an observation represents $\sim$6 seconds in time. ## 3 Mutual Information Extraction Solving the TSC requires computation of the mutual information, $I$, which is a difficult task due to the large size of the data. This section describes how we use the Mutual Information Neural Estimate (MINE) [10] method to estimate $I$. ### 3.1 Mutual Information The mutual information between two variables is a measure of how dependent they are on each other. For a pair of variables where one is known, the mutual information quantifies how much is known about the unknown variable from the known variable. If the mutual information is high, then the unknown variable is well constrained by the known variable. If the mutual information is low, then the unknown variable is relatively unconstrained by the known variable. Applying this to the TSC, we can use the mutual information to see how well the separate spatial and temporal information can be associated with the video label. The mutual information is defined as the difference between the entropy of a single variable, $H(X)=-\mathbb{E}_{P}[\log p(x)]$, where $\mathbb{E}_{P}[x]$ is the expectation value of $x$ under the distribution $P$, and the conditional entropy $H(X|Y)$: $I(X,Y)=H(X)-H(X|Y).$ The mutual information between the two variables is constrained from above by $I(X,Y)\leq\min(H(X),H(Y)).$ This constraint arises from $I(X,Y)=I(Y,X)$ and the fact that $\min(H(X|Y))=0$, attained when $X$ is fully determined by $Y$. Thus the maximum mutual information occurs when one variable is fully determined by the other, and is then constrained by which variable has the lowest entropy. This constraint will allow us to determine when the mutual information is 'high' as well as to compare $I(V_{S},V_{C})$ and $I(V_{T},V_{C})$, where $V_{S}$ is the spatial information of the video, $V_{T}$ is the temporal information of the video, and $V_{C}$ is the video class label. ### 3.2 Mutual Information Neural Estimate Determining the mutual information $I(X,Y)$ can be done when the joint distribution of the variables is known. In our case, these distributions are not known and, further complicating matters, the video data is very high dimensional. For the individual frames, for example, the probability distribution is over a 32768-dimensional space. Due to its high dimensionality, the sample frames are likely to lie very far from each other, making the estimation of the whole distribution infeasible. In order to overcome this limitation, we use the Mutual Information Neural Estimation (MINE) [10] to estimate the mutual information between pairs of variables.
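As a small illustration of the definitions above (not part of the MINE machinery described next), the following sketch computes $I(X,Y)=H(X)-H(X|Y)$ for two discrete variables from a known joint distribution and checks the bound $I(X,Y)\leq\min(H(X),H(Y))$; the joint table is an arbitrary example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries are ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_xy):
    """I(X;Y) = H(X) - H(X|Y), from a joint table p_xy[i, j] = P(X=i, Y=j)."""
    p_x = p_xy.sum(axis=1)            # marginal of X
    p_y = p_xy.sum(axis=0)            # marginal of Y
    h_x = entropy(p_x)
    # H(X|Y) = sum_j P(Y=j) * H(X | Y=j)
    h_x_given_y = sum(
        p_y[j] * entropy(p_xy[:, j] / p_y[j])
        for j in range(p_xy.shape[1]) if p_y[j] > 0
    )
    return h_x - h_x_given_y, h_x, entropy(p_y)

# Illustrative joint distribution over a 4-state X and a 3-state Y (sums to 1).
joint = np.array([[0.10, 0.05, 0.05],
                  [0.05, 0.20, 0.05],
                  [0.05, 0.05, 0.20],
                  [0.10, 0.05, 0.05]])
mi, h_x, h_y = mutual_information(joint)
print(f"I(X;Y) = {mi:.3f} bits, bound min(H(X), H(Y)) = {min(h_x, h_y):.3f} bits")
assert mi <= min(h_x, h_y) + 1e-9     # the bound used when quoting "percent of maximum"
```

Dividing an MI value by the smaller of the two entropies mirrors the "percentage of maximum" figures quoted in the next subsection.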
Figure 5: A diagrammatic representation of the MINE algorithm MINE is a neural network which estimates the mutual information using an approximate calculation of the Kullback-Leibler (KL) divergence. To begin, the mutual information can be rewritten as the KL divergence between the joint distribution $P(X,Y)$ and the product of the marginal distributions $P_{X}\otimes P_{Y}$: $I(X,Y)=D_{KL}(P(X,Y)\hskip 2.0pt||\hskip 2.0ptP(X)\otimes P(Y)).$ The KL divergence between two distributions is defined as $D_{KL}(P||Q)=\mathbb{E}_{P}\left[\log\left(\frac{P}{Q}\right)\right].$ $D_{KL}$ can be estimated using the Donsker-Varadhan representation as $D_{KL}(P||Q)\geq\sup_{T^{\prime}\in\mathcal{F}}\mathbb{E}_{P}[T^{\prime}]-\log\left(\mathbb{E}_{Q}[e^{T^{\prime}}]\right),$ where $T^{\prime}$ is a function and $\mathcal{F}$ is a subset of functions. Using this inequality, the neural network shown in Figure 5 attempts to learn the function $T^{\prime}$ which maximizes the Donsker-Varadhan representation of the KL divergence. Figure 6: An example of the trajectories making up $V_{T}$ from a video of the ‘clapping hands’ class of the DVS128 Gesture dataset Applying MINE to video data requires us to rigorously define the video spatial ($V_{S}$) and temporal ($V_{T}$) information. We choose to represent $V_{S}$ as a segment of 5 full resolution images covering 0.5s of video (a flattened 5x2x128x128 tensor). $V_{T}$ is represented as a flattened 8x200 matrix of the trajectories of 4 centroids defined by $k$-means clustering on the event stream, which has been sliced along time into segments of 0.027s (see Figure 6). Using these definitions, the MINE can be used to calculate the entropy of each variable as $I(X,X)=H(X)$. This gives us $H(V_{S})>24$, and $H(V_{C})\geq 2.49$. As the MINE estimate is established to be tight, we use $H(V_{C})$ as the maximum value for both $I(V_{S},V_{C})$ and $I(V_{T},V_{C})$. Thus we can directly compare the values of $I(V_{S},V_{C})$ and $I(V_{T},V_{C})$ to understand the relative information content of each. Next, we can use MINE to estimate $I(V_{S},V_{C})\geq 2.13$ (85% of maximum) and $I(V_{T},V_{C})\geq 2.31$ (92% of maximum). ### 3.3 Implication of Mutual Information on TSC These results suggest that for event camera data, the bulk of the information for labelling the video data is in the temporal information. As we highlighted before, it is difficult to separate the spatial and temporal information. For example, $V_{T}$ is composed of the spatial locations of cluster centroids over time, which still includes spatial information. Similarly, our definition of $V_{S}$ covers 0.5s of events over 5 frames (0.1s of events compiled into each frame), which includes changes over time. If, instead, a single frame compiled from 0.1s of events is used for $V_{S}$, the mutual information drops to 1.26 (50% of maximum), suggesting that the spatial information alone contains significantly less of the information necessary for labelling the videos. An additional complication of the spatial information is its high dimensionality. At a resolution of 2x128x128, the resulting vector has 32768 components. In order to simplify the spatial information while retaining the temporal information, which appears to contain the bulk of the video information, we propose to downsample the spatial resolution to 2x8x8 over 200 frames, giving vectors of 25600 components, a 21% reduction in size relative to a single full-resolution frame while covering the whole video and still retaining the bulk of the spatial information.
Labelling this the full video information, $V$, we find that $I(V,V_{C})\geq 2.44$ (98% of maximum). Thus, by simplifying the spatial information through downsampling while retaining all of the temporal information, we are able to improve our result by 6%. This result is significant as it motivates us to design a corresponding DLR to take advantage of the insight learned. This will be verified in a DLR application in the following section. ## 4 DLR Classification of Event Camera Data This section illustrates how the insight gained from the TSC study motivates an improvement to the Delay Loop Reservoir (DLR) algorithm for event camera processing. ### 4.1 DLR Algorithm Figure 7: Delay-Loop Reservoir structure As shown in Figure 7, the Delay Loop Reservoir (DLR) [2] is a nonlinear, high-dimensional model within the reservoir computing framework. The DLR operates on serialized stream data by sequentially upsampling each sample into high dimensions via random weights. The resulting high-dimensional representation is then processed by the core delay loop, which serves two functions: 1) it injects nonlinearity using a hyperbolic tangent; and 2) it enables temporal processing via a leaky factor which combines past temporal data with current data. The leaky factor thus controls the degree of “memory” of the DLR, and can be used to select frequency components for processing. After loop processing, the data is classified by a linear classifier such as a ridge regressor. Note that the only trained stage of the entire DLR is the ridge regressor; the delay loop itself does not require training. This is a main difference between reservoir-based NNs and other NNs. The simplicity of the DLR architecture leads to practical implementation in hardware, which has been demonstrated in an FPGA implementation as reported in [2]. ### 4.2 Initial Results of the DLR Algorithm We first apply the DLR to the 11-class classification task of the DVS dataset, with tuning parameters including the leaky factor, loop dimensionality, and ridge regression regularization. Our initial result showed an accuracy of 76%. We noted that this sub-par performance can be attributed to an overfitting problem. Overfitting occurs when a model learns the training data too well, including its noise and outliers, and therefore performs poorly on unseen data. This is especially prevalent for event camera data, where noise among events in an event stream may become a ‘crutch’ for the model. As an illustration, Figure 8 shows the spurious noisy events that cause overfitting. The red blob in the event stream has been memorized by the model during training. Whenever the spurious event appears, the model predicts class $i$. However, in the test set, an event stream of class $i$ does not contain this specific event. Figure 8: Example of spurious events in the event stream ### 4.3 TSC-inspired modification of the DLR Algorithm As illustrated above, the problem of overfitting in event classification is that the DLR tries to classify using the noisy data. It does a good job on the training data, and indeed this is what we found: during training, the DLR achieves close to 100% accuracy, while its accuracy on the test data falls to 76%. In traditional NNs, methods to mitigate overfitting include reducing the number of weights in the NN or generating new training data by data augmentation. These would not work for the DLR, as the loop has no training parameters and regularization in the ridge regressor was already used.
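For reference, the following is a minimal sketch of the DLR computation described in Section 4.1: fixed random upsampling to a high-dimensional state, a leaky loop update with a hyperbolic-tangent nonlinearity, and a ridge-regression readout as the only trained stage. The dimensions, leaky factor and ridge penalty shown are illustrative assumptions rather than the tuned values used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlr_states(x_seq, dim=500, leak=0.3):
    """Run a serialized input sequence through a simplified delay-loop reservoir.

    x_seq: (T, d_in) sequence of input samples.
    Returns the final reservoir state (dim,) used as the feature vector.
    The loop itself has no trained parameters: W_in is fixed and random.
    """
    d_in = x_seq.shape[1]
    W_in = rng.standard_normal((dim, d_in)) * 0.1   # fixed random upsampling weights
    state = np.zeros(dim)
    for x in x_seq:
        # leaky combination of the past state with the nonlinearly injected new sample
        state = (1.0 - leak) * state + leak * np.tanh(W_in @ x)
    return state

def ridge_readout(features, labels, reg=1e-2):
    """Train the only learned stage: a ridge regressor mapping states to one-hot labels."""
    F, Y = np.asarray(features), np.asarray(labels)
    return np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), F.T @ Y)

# Toy usage: 20 sequences of 200 frames, each frame already downsampled to 2x8x8 = 128 values.
X = rng.standard_normal((20, 200, 128))
Y = np.eye(11)[rng.integers(0, 11, size=20)]        # one-hot labels for 11 gesture classes
feats = np.stack([dlr_states(seq) for seq in X])
W_out = ridge_readout(feats, Y)
pred = np.argmax(feats @ W_out, axis=1)
print(pred.shape)   # (20,)
```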
Data augmentation is also not desirable as it adds complexity to the model. Figure 9: TSC-inspired preprocessing for DLR improvement Instead, we apply a data reduction module to the DLR, as inspired by the TSC result. This modification is shown in Figure 9. The Sparse Spatial module lowers the sampling rate of the events, while the Temporal module captures frames from the event stream. To tackle the overfitting problem, we substantially reduce the spatial sampling rate while retaining sufficient temporal resolution, which is consistent with the TSC result that the temporal component carries more information for label classification. Using this modification, the new DLR architecture is shown in Figure 10. Figure 10: Modified DLR Architecture The main addition in Figure 10 is the Spatial Filter and Down-sample module. We apply a low-pass filter to remove high-frequency noise that could lead to aliasing artifacts before subsampling. The subsampling process reduces the image size by keeping every $n$-th pixel in each row and column, and discarding the rest. The procedure $g$ applied to pixel coordinate $(i,j)$ is given by $g(i,j)=\sum_{k,l}f(k,l)\,h\!\left(i-\tfrac{k}{r},\,j-\tfrac{l}{r}\right),$ where $f(k,l)$ is the smoothing filter and $h(i-\frac{k}{r},j-\frac{l}{r})$ subsamples the blurred image to dimensions $r$ by $r$. Figure 11 shows the effect of the sampling process for various subsampling rates. Figure 11: Image of subsampling effect Figure 12: DLR Accuracy with respect to different frame sub-sampling Figure 12 shows the performance improvement of the modified DLR architecture. It shows that the largest improvement is achieved by downsampling the event frame to 8 × 8 event pixel frames. This result cannot be adequately explained by traditional methods to tackle overfitting in machine learning, as traditional NN methods focus on spatial learning. However, this result can be fully explained from the viewpoint of the TSC, as the temporal component carries more information. Since noise is most prominent in the spatial component, we can substantially reduce the frame size, thereby reducing overfitting to a minimum without affecting the temporal component. Using the modified DLR architecture, with fine tuning of other DLR parameters including the leaky factor and loop dimensionality, we were able to achieve an accuracy of 89.6% for the 11-class DVS data classification. This is in comparison with an accuracy of 96% [11] when attention neural networks are used. Thus, the DLR has achieved on-par performance with a state-of-the-art attention neural network, but with a substantial reduction in training time and computation complexity as measured by Multiply-and-Accumulate units [2]. ## 5 Future Work This paper is motivated by exploring processing of the temporal component of a video signal, as this area of research seems not to have been a focus in machine learning algorithm development. The proposed VMM validates an important result for applying the DLR to event data by optimizing the spatial component. We have not investigated optimization with respect to the temporal domain. For example, what would be an optimum frame structure? And what would be a joint spatial-temporal optimum structure? We would also like to study the impact of the TSC on other applications such as object segmentation and object recognition. ## 6 Conclusion This paper describes a temporal-spatial model for video processing with special applications to processing event camera videos.
We first describe the Temporal-Spatial Conjecture, which postulates that there is significant information content carried in the temporal representation of a video signal. The TSC also advocates that machine learning algorithms would benefit from separate optimization of the spatial and temporal components for intelligent processing. We described the Video Markov Model, which decomposes videos into spatial and temporal components and estimates the mutual information of these components with respect to machine learning functions. The VMM study found that the temporal component carries more information than its spatial counterpart. This finding is important, and it was exploited to help design a modification to the delay-loop reservoir algorithm for classification of DVS event data. The modified DLR algorithm tackles the overfitting problem successfully and achieved an 18% improvement in classification accuracy, thereby validating the TSC. ## 7 Acknowledgement This paper is based upon work supported by the DARPA MTO contract N6600122C4025 on “Reservoir-based Event Camera for Intelligent Processing on the Edge (RECIPE)". We thank Dr. Whitney Mason and Dr. Timothy Klausutis, Program Managers, for their vision on the TSC and DLR and guidance of the program, Greg Jones for his guidance and technical discussions on the program, and Justin Mauger (CIV USN NIWC PACIFIC CA (USA)) for many technical discussions. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of DARPA or the U.S. Government. ## References * [1] DARPA AIE on Hyper Dimensional Data Enabled Neural Networks (HyDDENN), A.I. Exploratory PA-19-03-03, December 2019. * [2] Richard Lau, Lihan Yao, Todd Huster, William Johnson, Stephen Arleth, Justin Wong, Devin Ridge, Michael Fletcher, William C. Headley. Scaled-Time-Attention Robust Edge Network. CoRR abs/2107.04688 (2021). * [3] DARPA AIE on Photonic Edge AI Compact Hardware (PEACH), DARPA-PA-18-02-05. * [4] Guy Van der Sande, Daniel Brunner and Miguel C. Soriano. Advances in photonic reservoir computing. Nanophotonics, 6(3), 2017. * [5] He Wang, Edmond S. L. Ho, Hubert P. H. Shum, Zhanxing Zhu. Spatio-Temporal Manifold Learning for Human Motions via Long-Horizon Modeling. IEEE Trans. Vis. Comput. Graph. 27(1): 216-227 (2021). * [6] Li Yao, Atousa Torabi, Kyunghyun Cho. “Describing Videos by Exploiting Temporal Structure,” arXiv:1502.08029v5 [stat.ML], 1 Oct 2015. * [7] A. Amir, B. Taba, D. Berg, T. Melano, J. McKinstry, C. Di Nolfo, T. Nayak, A. Andreopoulos, G. Garreau, M. Mendoza, J. Kusnitz, M. Debole, S. Esser, T. Delbruck, M. Flickner, and D. Modha. "A Low Power, Fully Event-Based Gesture Recognition System," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017. * [8] Pierot et al. "Learning to Detect Objects with a 1 Megapixel Event Camera" (Prophesee 1 Megapixel Automotive Detection Dataset), NeurIPS 2020. * [9] Lukoševičius, M., & Jaeger, H. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127-149, 2009. * [10] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, Devon Hjelm. "Mutual Information Neural Estimation," Proceedings of the 35th International Conference on Machine Learning, PMLR 80:531-540, 2018. * [11] Sabater, L. Montesano, A. Murillo. "Event Transformer. A sparse-aware solution for efficient event data processing," arXiv preprint arXiv:1804.09028, 2018.
* [12] T. J. O’Shea, T. Roy and T. C. Clancy "Over-the-Air Deep Learning Based Radio Signal Classification," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, Feb. 2018. * [13] Deepsig18 data: https://www.deepsig.ai/datasets. Vectorized deepsig18 data https://drive.google.com/open?id=1vrzz1Dbf98E-Q79-3CFjGNS7sw1HJVM6. * [14] Richard Lau, T. Woodward “See-through Obscurants via Compressive Sensing in Degraded Visual Environment,” SPIE April 2015. doi:10.1117/12.2178039
# Managing Clouds in Cloud Platforms Hassan Gobjuka (Verizon, 919 Hidden Ridge, Irving, TX 75038, Email: <EMAIL_ADDRESS>) and Kamal A. Ahmat (Department of Information Technology, City University of New York, New York, NY 11101, Email: <EMAIL_ADDRESS>) ## I Motivation Managing cloud services is a fundamental challenge in today's virtualized environments. These challenges equally face both providers and consumers of cloud services. The issue becomes even more challenging in virtualized environments that support mobile clouds. Cloud computing platforms such as Amazon EC2 provide customers with flexible, on-demand resources at low cost. However, they fail to provide the seamless infrastructure management and monitoring capabilities that many customers may need. For instance, Amazon EC2 doesn’t fully support automated discovery of cloud services and it requires a private set of authentication credentials. Salesforce.com, on the other hand, does not provide monitoring access to its underlying systems. Moreover, these systems fail to provide infrastructure monitoring of heterogeneous and legacy systems that don’t support agents. In this work, we explore how to build a cloud management system that combines heterogeneous management of virtual resources with comprehensive management of physical devices. We propose an initial prototype of an automated cloud management and monitoring framework. Our ultimate goal is to develop a framework that has the capability of automatically tracking configuration and relationships while providing full event management, measuring performance and testing thresholds, and measuring availability consistently. Armed with such a framework, operators can make better decisions quickly and more efficiently. ## II Challenges These tasks are achieved through agentless monitoring of the cloud’s infrastructure. While traditional network management methods suffer from inherited difficulties [1, 2], implementing a seamless network management and monitoring framework entails several new challenges: * • Discovering the relationship of virtualized resources to the underlying physical infrastructure. * • Minimizing the overhead of monitoring and problem determination across a physical and virtualized infrastructure. * • Handling security-related constraints that may affect data collection, which is probably one of the most serious issues. * • Taking response actions regarding a particular virtual or physical device within the hard response deadline time frame. In agentless monitoring systems, this can be ensured only by implementing a high number of threads, which in turn increases complexity. * • Dealing with infrastructure management issues such as root-cause analysis, which becomes more complex. ## III Design and Implementation We propose an event-based model where events are placed on an in-memory publish/subscribe bus on the Management Server, enabling a high throughput of events. The event bus architecture, depicted in Figure 1, enables any authorized mediator to create events on the bus, and any authorized consumer to access events from the bus. Events on the bus show the current status of infrastructure components. Figure 1: An initial prototype of our cloud management and monitoring system. The framework will provide a set of APIs to simplify the creation of consumer and mediator applications. A set of language extensions and Web services will be used to enable Perl, Ruby, or Java scripts to create events on the bus.
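As a sketch of the kind of consumer and mediator API envisioned here (the class and method names are hypothetical, not an existing product interface), a minimal in-memory publish/subscribe bus might look as follows:

```python
from collections import defaultdict
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    """A normalized infrastructure event placed on the bus by a mediator."""
    source: str                 # e.g. a hypervisor, VM, or physical switch identifier
    kind: str                   # e.g. "cpu.threshold", "link.down"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class EventBus:
    """In-memory publish/subscribe bus: mediators publish, consumers subscribe by event kind."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, kind, callback):
        self._subscribers[kind].append(callback)

    def publish(self, event):
        for callback in self._subscribers[event.kind]:
            callback(event)

# Hypothetical usage: a monitoring consumer reacting to events created by a collector mediator.
bus = EventBus()
bus.subscribe("cpu.threshold", lambda e: print(f"ALERT from {e.source}: {e.payload}"))
bus.publish(Event(source="vm-42", kind="cpu.threshold", payload={"cpu": 97}))
```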
To support a high level of reliability and scalability, the Distributed Collector subsystem will be multi-threaded. Furthermore, events will be normalized from any source into a common format, which will enable consistent processing. ## References * [1] T. Benson, A. Akella, and D. A. Maltz, Unraveling the complexity of network management, In NSDI, 2009. * [2] H. Gobjuka and Y. Breitbart, Ethernet Topology Discovery for Networks with Incomplete Information, IEEE/ACM Transactions on Networking, 2010, In Press.
# Deep Learning in Target Space Michael Fairbank <EMAIL_ADDRESS> Spyridon Samothrakis <EMAIL_ADDRESS> Luca Citi <EMAIL_ADDRESS> Department of Computer Science and Electronic Engineering University of Essex Colchester, CO4 3SQ, UK ###### Abstract Deep learning uses neural networks which are parameterised by their weights. The neural networks are usually trained by tuning the weights to directly minimise a given loss function. In this paper we propose to re-parameterise the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, it is possible to calculate the weights which make the firing strengths best meet those targets. It is argued that using targets for training addresses the problem of exploding gradients, by a process which we call cascade untangling, and makes the loss-function surface smoother to traverse, and so leads to easier, faster training, and also potentially better generalisation, of the neural network. It also allows for easier learning of deeper and recurrent network structures. The necessary conversion of targets to weights comes at an extra computational expense, which is in many cases manageable. Learning in target space can be combined with existing neural-network optimisers, for extra gain. Experimental results show the speed of using target space, and examples of improved generalisation, for fully-connected networks and convolutional networks, and the ability to recall and process long time sequences and perform natural-language processing with recurrent networks. Keywords: Deep Learning, Neural Networks, Targets, Exploding Gradients, Cascade Untangling ## 1 Introduction A feed-forward artificial neural network (NN) is a function $f(\vec{x},\vec{w})$, parameterised by a weights vector $\vec{w}$, that maps an input vector $\vec{x}$ to an output vector $\vec{y}=f(\vec{x},\vec{w})$. This paper initially considers feed-forward fully-connected layered NNs with ${n_{\mathrm{L}}}$ layers, as illustrated in Figure 1. Figure 1: Example feed-forward NN with structure “3-2-3-2-3”, with five layers (${n_{\mathrm{L}}}=5$). An input vector $\vec{x}\in\mathbb{R}^{3}$ (in this example) is fed in from the left. Data propagates along the forward arrows (weights) causing nodes to fire, layer by layer, eventually producing output vector $\vec{y}\in\mathbb{R}^{3}$. The precise equations governing a NN are given in Section 2.1. Bias weights are not shown here, and this NN does not include shortcut connections. NNs can be used in many problem domains, including pattern recognition, classification and function approximation (Bishop, 1995; Goodfellow et al., 2016). There are also numerous industrial and scientific applications for NNs, including vision, neurocontrol, language translation, image captioning, reinforcement learning and game playing (Silver et al., 2017; Mnih et al., 2015; Karpathy and Fei-Fei, 2015; Fairbank et al., 2014a, b; Sutskever et al., 2014; Samothrakis et al., 2016). Training a NN means deciding upon an appropriate value for the weights vector $\vec{w}$ so that the NN performs the desired task successfully. This training process is usually an iterative numerical method that works by trying continually to adjust $\vec{w}$ so as to minimise some real-valued loss function $L(\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{n_{\mathrm{p}}},\vec{w})$ for a given set of ${n_{\mathrm{p}}}$ example input vectors $(\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{n_{\mathrm{p}}})$.
In a supervised- learning task, the loss function is designed so that when minimised, each output vector $\vec{y}_{i}=f(\vec{x}_{i},\vec{w})$ matches as possible closely some given data label or desired value $\vec{y}^{*}_{i}$, for each input vector $\vec{x}_{i}$ for $i\in\\{1,\ldots,{n_{\mathrm{p}}}\\}$. In unsupervised tasks, the loss function would represent some other objective, for example a penalty in a reinforcement-leaning problem, or an ability to reconstruct or group the input data. The loss function measures how well the NN is achieving its desired task and its value at each point in weight space creates a surface, which the training process attempts to traverse to find a suitably low point. Most training algorithms use the gradient of the error function with respect to the weights, $\frac{\partial{L}}{\partial{\vec{w}}}$, which is calculated by the celebrated backpropagation algorithm (Werbos, 1974; Rumelhart et al., 1986). Two major difficulties for training are that the loss surface can be very crinkly in places, making the algorithms very slow, and also that the surface may be riddled with sub-optimal local minima and saddle points. It is these problems that the various training algorithms in existence, including novel activation functions and weight-initialisation schemes, are designed to overcome, to varying extents. When a NN processes an input vector $\vec{x}$, as illustrated in Figure 1, the internal (hidden) neurons and output neurons in it will fire at different strengths, or activations. Hence there is a real number, the activation strength, associated with each node. These activation values can be gathered together for all hidden layers and the output layer to form a single vector, $\vec{a}$. Hence for each input vector $\vec{x}_{i}$, and given set of weights $\vec{w}$, there will be an associated activation vector $\vec{a}_{i}$. Given the NN weights $\vec{w}$ and several input vectors $\\{\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{n_{\mathrm{p}}}\\}$, the set of vectors $\\{\vec{a}_{1},\vec{a}_{2},\ldots,\vec{a}_{n_{\mathrm{p}}}\\}$ is uniquely determined by the equations that govern the NN’s operation. Conversely, given an arbitrary set of target activation vectors, $\\{\vec{a}_{1},\vec{a}_{2},\ldots,\vec{a}_{n_{\mathrm{p}}}\\}$, and corresponding input vectors, $\\{\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{n_{\mathrm{p}}}\\}$, a relatively cheap calculation using linear algebra could take place to uniquely determine the weight vector $\vec{w}$ that most closely achieves the set of target- activation vectors. Therefore the training process could work by iteratively improving the targets, instead of the weights. That is the central idea of this paper: to do NN training in target space (the space of all possible sets $\\{\vec{a}_{1},\vec{a}_{2},\ldots,\vec{a}_{n_{\mathrm{p}}}\\}$) instead of the usual weight space (the space of all possible $\vec{w}$). The motivation for switching from weight-space learning to target space is now discussed. With weight-space learning, any small adjustment to a weight in an early layer shown in Fig. 1 will make the activations coming out of that layer change by a correspondingly small amount. However these changed activations will have a knock-on effect in changing the activations in the next layer, and so on with each subsequent layer, often forming a cascade of changes which reverberate through the later layers. 
If the subsequent layers’ neurons are all close to their firing thresholds, or are on a particularly steep part of the activation function, then the small change in the early layer could have a catastrophic scrambling effect on the NN output. This is why the error surface in weight space is so crinkly, or even chaotic (Skorokhodov and Burtsev, 2019; Phan and Hagan, 2013). This is not a desirable property for any learning strategy to have to cope with. Another way of stating that a small change to a weight causes a catastrophic scrambling of behaviour, is to say that the sensitivity of the loss function with respect to that weight is very large. This is referred to as the exploding-gradients problem (Hochreiter and Schmidhuber, 1997a), and we hypothesise that this is the main reason why NNs with many layers are difficult to train using standard backpropagation. With target space, any small change to the targets for one layer will still cause a correspondingly small change to the activations of that layer. But then the algorithm that tries to match the node activations to their targets in the subsequent layers will try to choose the weights intelligently so the disturbance to later layers is minimised, an effect which we call cascade untangling (see Fig. 2). If successful, this should minimise the disturbance caused by the initial small change, and hence make the error surface in target space much smoother than that of weight space, directly addressing the exploding-gradients problem. Increased smoothness of the surface will also reduce the number of local minima in it, and make the crevices in it wider and easier to follow by gradient descent. This should be increasingly beneficial for NNs with many layers, and even more so for recurrent neural networks (RNNs) where the output of a neural network is looped back to be combined with subsequent inputs, causing data to cycle around the network many times. We discuss target-space techniques for RNNs in Section 4.1, but initally focus on feed-forward networks. If the cascade untangling of target-space learning works as intended, then the path explored by gradient-descent should be more direct and hence reach lower minima. This could contribute to better generalisation by the neural network (Nakkiran et al., 2020). Furthermore, the resulting loss-function surface in target space should be smoother in general, and in particular it may be flatter at the final resting place of the optimisation process. This could also contribute to better generalisation, since flat minima are hypothesised by Hochreiter and Schmidhuber (1997b) to produce better generalisation than a sharper minimum (although this is an area for further research because it might not be straightforward to directly compare the flatness between two different parameterisations of a loss-surface (Dinh et al., 2017)). The experimental results given in this paper show that using target space does indeed allow for gaining better performance in the training of deeper networks than occurs with weight space, and includes examples of improved generalisation and improved number of training iterations required for feed- forward networks, recurrent networks and convolutional layered networks; but with a higher computational cost per training iteration (due to the linear algebra process which converts from target space to weight space). We argue that this extra cost motivates choosing deeper but narrow network architectures, when training a network in target space. 
Figure 2: In this analogy, a ball bounces deterministically down through a grid of pins, like in the game bagatelle. This represents a neural network processing a batch of input vectors and producing a batch of output vectors. The objective of training the neural network is to arrange the pins to make the ball bounce into a region of minimum loss at the bottom. Each row of pins represents a layer of weights in the neural network (but the number of pins in each row is unrelated to the number of nodes in that network layer). The $x$-coordinate of the ball’s launch position represents a whole training batch of input vectors, compressed down to a single $x$-coordinate for this visualisation. Likewise, the ball’s horizontal position at each layer of pins represents all of the hidden-state activation vectors at that layer, for each training pattern, compressed down to a single $x$-coordinate. With weight-space learning, we consider what effect sliding Layer 1 of pins sideways will have on the final destination of the ball – clearly it will often catastrophically scramble the ball’s trajectory (exploding gradients). In contrast, with target-space learning, whenever Layer 1’s pins are moved, the positions of the lower rows of pins automatically adjust themselves to try to stabilise the ball’s trajectory as much as possible. This represents “cascade untangling”. In target space, learning takes place by actively bending segments of the ball’s zig-zag trajectory, while causing only minimal disturbances to the other trajectory segments. The rest of the paper is structured as follows. In the rest of this section, we discuss related published work. In Section 2 we define the main target-space algorithm for feed-forward layered neural networks, and then discuss background technical information about the method in Section 3. In Section 4 we show how the method can be extended to convolutional and recurrent neural networks. In Section 5, we give experimental results for feed-forward, convolutional and recurrent neural networks. Finally, in Section 6, we give conclusions. ### 1.1 Related work Target-space techniques were originally proposed by Rohwer (1990) under the name of “moving targets”, and then re-proposed under different names by Atiya and Parlos (2000); Enrique Castillo and Alonso-Betanzos (2006). There are some technical difficulties with these early works, which were later identified and improved upon by Carreira-Perpinan and Wang (2014), and follow-up work. These prior target-space methods, and their modern variants, are described in more detail in Section 3.4. Other modern deep-learning methods enable the training of deep networks in a different way from target space. Some of these are described here. Exploding gradients in deep neural networks were first analysed by Bengio et al. (1994) and Hochreiter and Schmidhuber (1997a). They also identified and defined the opposite problem, vanishing gradients, which also occurs in deep and recurrent networks. The solution proposed by Hochreiter and Schmidhuber (1997a), Long Short-Term Memory (LSTM) networks, focuses on solving vanishing gradients in recurrent networks, and is very effective, especially at spotting and exploiting patterns in long time sequences.
The target-space solution we propose focuses only on addressing exploding gradients, but when combined with a powerful optimiser like Adam, can also learn and exploit long time sequences (even compared to LSTM networks); as shown in Sections 5.3-5.4. Glorot et al. (2011) identified that vanishing and exploding gradients could largely be controlled by changing the non-linear functions used which affect the node’s firing activation. They proposed to replace the conventional logistic-sigmoid and hyperbolic-tangent function by a rectified linear function, $\operatorname{ReLU}(x)$. Since their proposed activation function has a maximum gradient of 1, it limits the scale of a cascade of changes arising from any perturbed weight, and hence eases training of deep networks. It does not entirely prevent the gradients from decaying/exploding though, since the magnitude of the gradients are also amplified proportional to the magnitude of the weights in each layer (Hochreiter and Schmidhuber, 1997a). Furthermore, the rectified linear function produces some problems of its own, with its unbound magnitude of its output; which can lead to infinities appearing, particularly in recurrent networks. These infinities make the proposed $\operatorname{ReLU}$ activation function inappropriate for recurrent networks. We compare and include our method with a variant of this activation function in Section 5. Another significant recent breakthrough in training deep networks has been through the careful choice of the magnitude by which weights are randomised before training commences. The magnitudes derived by Glorot and Bengio (2010) and He et al. (2015) are carefully chosen so that the mean and variance in activations of each node remain 0 and 1 respectively, regardless of the depth of the network. This prevents the activations at each layer growing without bound, or saturating on the flat parts of the $\tanh$ activation function, and thus prevent gradients from decaying or exploding. Batch Normalisation (BN) (Ioffe and Szegedy, 2015) is a powerful method for helping with the training of deep networks. This method can be viewed as a simplification and close relative of target space, and also similar in aim as the above weight-initilisation methods, in that BN prevents the activations of nodes at subsequent layers from growing or saturating without bound. BN works by setting an individual “target” for the mean $\mu$ and standard-deviation $\sigma$ for every node in a layer. These are applied to normalise the entire training batch passing through the given node. This normalisation can help by performing some limited form of cascade untangling, but to a lesser extent than target space does, since with BN the targets are just summary statistics for a whole node. BN is proven to work well in practice, and there has been some discussion on how it works so well (Santurkar et al., 2018). BN also has a relatively low computational cost compared to target space. However target space can do a better job of cascade untangling and training deep networks. We describe empirical comparisons of BN to target space in Section 5. ## 2 Target-Space Algorithm for Layered Feed-Forward Networks In the first two subsections we describe the notation for ordinary weight- space learning for neural networks. The target-space algorithm is then defined in the subsequent subsections. 
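As a point of reference for the notation that follows, a minimal weight-space training step for a small layered network of the kind shown in Figure 1 can be sketched as below (NumPy, tanh activations, mean-squared-error loss; the layer sizes, learning rate and random data are illustrative assumptions only). Sections 2.3-2.4 keep the same forward pass but re-parameterise what is updated, replacing the weight update with an update on targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes for a "3-2-3-2-3" network as in Figure 1 (no shortcut connections).
sizes = [3, 2, 3, 2, 3]
weights = [rng.standard_normal((sizes[j + 1], sizes[j] + 1)) * 0.5   # extra column for the bias
           for j in range(len(sizes) - 1)]

def forward(x, weights):
    """Feed-forward pass; returns the activations of every layer (input included)."""
    activations = [x]
    for W in weights:
        a = np.concatenate([np.ones((1, x.shape[1])), activations[-1]])  # prepend bias node
        activations.append(np.tanh(W @ a))
    return activations

def sgd_step(x, y_target, weights, eta=0.05):
    """One weight-space gradient-descent step on the mean-squared-error loss."""
    acts = forward(x, weights)
    delta = (acts[-1] - y_target) * (1.0 - acts[-1] ** 2)   # dL/dS at the output layer
    for j in reversed(range(len(weights))):
        a = np.concatenate([np.ones((1, x.shape[1])), acts[j]])
        grad = delta @ a.T / x.shape[1]
        if j > 0:   # back-propagate (excluding the bias column) before updating this layer
            delta = (weights[j][:, 1:].T @ delta) * (1.0 - acts[j] ** 2)
        weights[j] -= eta * grad
    return weights

# One step on a random batch of 8 patterns.
X = rng.standard_normal((3, 8))
Y = rng.standard_normal((3, 8))
weights = sgd_step(X, Y, weights)
```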
### 2.1 Terminology, feed-forward and training mechanisms for a Neural Network We extend the basic NN architecture described in Figure 1 to act on a batch of size ${n_{\mathrm{b}}}$ patterns simultaneously. Concatenate the batch of input column vectors $\\{\vec{x}_{b_{1}},\vec{x}_{b_{2}},\ldots,\vec{x}_{b_{n_{\mathrm{b}}}}\\}$ side by side into a single matrix $X$ with ${n_{\mathrm{b}}}$ columns. Then we can define a feed-forward neural network (FFNN) as a function that maps this matrix, $X$, to an output matrix, $Y$. The network is split into ${n_{\mathrm{L}}}$ layers of nodes, each node having an activation function, $g:\mathbb{R}\rightarrow\mathbb{R}$, and there being a matrix of weights between each pair of layers. The activation function $g$ is usually smooth, monotonic and non-linear. Common choices are $g(x)=\tanh(x)$ or the $\operatorname{ReLU}$ function (Glorot et al., 2011). The layers, respectively, consist of $d_{1}$, $d_{2}$, $\ldots$, $d_{{n_{\mathrm{L}}}}$ nodes, as shown in Figure 1. Thus $X\in\mathbb{R}^{d_{1}\times{n_{\mathrm{b}}}}$ and $Y\in\mathbb{R}^{d_{n_{\mathrm{L}}}\times{n_{\mathrm{b}}}}$. In the most general case, each layer $j$ is connected to each later layer $k>j$, via a matrix of weights $W_{j,k}\in\mathbb{R}^{d_{k}\times d_{j}}$. The network is then said to have “all shortcut connections”. However in the more common case, shortcut connections are not included and the only non-zero weight matrices are between consecutive layers. Each node has a bias which can be implemented by having an extra “layer 0” which contains just one node that always has activation of unity. Thus for each layer $j$, $W_{0,j}\in\mathbb{R}^{d_{j}\times 1}$ is a column vector of weights coming from layer 0, which represent bias values for layer $j$. The activations are calculated layer-by-layer, according to Algorithm 1. We allow the function $g$ to be applied to a vector or matrix in an elementwise manner, i.e. $(g(A))^{ij}:=g(A^{ij})$, for all $i$ and $j$. In line 4 of the algorithm, $\mathbb{I}\left({j}\right)$ denotes the set of integer layer- numbers of all layers that feed forwards into layer $j$. So for example, for a fully-connected layered network with all shortcut connections, $\mathbb{I}\left({3}\right)=\\{0,1,2\\}$. Algorithm 1 Feed-Forward Dynamics 1: $A_{0}\leftarrow[1\ 1\ \ldots\ 1]$ {Bias nodes $\in\mathbb{R}^{1\times{n_{\mathrm{b}}}}$; a row vector of 1s} 2: $A_{1}\leftarrow X$ {Input matrix. $X\in\mathbb{R}^{d_{1}\times{n_{\mathrm{b}}}}$.} 3: for $j=2$ to ${n_{\mathrm{L}}}$ do 4: $S_{j}\leftarrow\sum_{k\in\mathbb{I}\left({j}\right)}W_{k,j}A_{k}$ {Sums received by each node. $S_{j}\in\mathbb{R}^{d_{j}\times{n_{\mathrm{b}}}}$.} 5: $A_{j}\leftarrow g(S_{j})$ {Apply activation function. $A_{j}\in\mathbb{R}^{d_{j}\times{n_{\mathrm{b}}}}.$} 6: end for 7: $Y\leftarrow A_{{n_{\mathrm{L}}}}$ {Output Matrix. $Y\in\mathbb{R}^{d_{{n_{\mathrm{L}}}}\times{n_{\mathrm{b}}}}$.} Running the feed-forward algorithm with an input matrix $X$ generates a sequence of intermediate work-space matrices, $A_{j}$ and $S_{j}$ for all layers $j$, whose elements hold the activations and sums, respectively, of each layer’s nodes. These matrices and the output matrix $Y$ are to be retained for later use. The $p^{\mathrm{th}}$ column of each matrix $X$, $A_{j}$, $S_{j}$ and $Y$ all correspond to the same pattern $p$. To train the neural-network, we first define the loss function, or error function, $L:(X,\vec{w})\rightarrow\mathbb{R}$, where $\vec{w}$ is a vector of all of the weights in the network. 
For supervised learning, the most common loss functions are the mean-squared error and cross-entropy loss. Then, we seek to minimise $L$ with respect to $\vec{w}$ using gradient descent: $\displaystyle\Delta\vec{w}=-\eta\frac{\partial{L}}{\partial{\vec{w}}}.$ (1) This weight update is applied iteratively, with a small positive learning rate $\eta$. The learning rate $\eta$ can be changed over training time, or a more advanced optimiser could be used to try to accelerate learning (e.g. RPROP (Riedmiller and Braun, 1993), conjugate gradients (Møller, 1993), Levenberg- Marquardt (Bishop, 1995), RMSProp (Tieleman and Hinton, 2012), or Adam (Kingma and Ba, 2014)). To compute the gradients in the right-hand side of (1) efficiently, we can use the back-propagation algorithm (Werbos, 1974; Rumelhart et al., 1986), or equivalently automatic differentiation packages provided by a neural-network software library (Rall, 1981; Werbos, 2005; Abadi et al., 2016). ### 2.2 Stacked Layer Input-Matrix and Weight-Matrix Notation For layer $j$, define ${A}_{{[0:{j})}}$ to be shorthand form for a vertically stacked block matrix of all the $A_{k}$ matrices that provide an input to layer $j$, i.e. for all the $k\in\mathbb{I}\left({j}\right)$. For example, for a simple layered feed-forward network we would have, $\displaystyle{A}_{{[0:{j})}}:=\begin{pmatrix}A_{0}\cr A_{j-1}\end{pmatrix},$ (2a) (where $A_{0}$ is the layer of bias nodes), and if all shortcut connections were present, then this would become, $\displaystyle{A}_{{[0:{j})}}$ $\displaystyle:=\begin{pmatrix}A_{0}\cr A_{1}\cr\vdots\cr A_{j-1}\end{pmatrix}.$ (2b) Also define ${W}_{{[0:{j}]}}$ as a side-by-side block concatenation of all the weight matrices that input to layer $j$. For example, with for a simple layered feed-forward network, we would get: $\displaystyle{W}_{{[0:{j}]}}:=\begin{pmatrix}W_{0,j}&W_{(j-1),j}\end{pmatrix},$ (3a) and, if all shortcut connections were present, we would get, $\displaystyle{W}_{{[0:{j}]}}$ $\displaystyle:=\begin{pmatrix}W_{0,j}&W_{1,j}&\ldots&W_{(j-1),j}\end{pmatrix}.$ (3b) This simplifies the formula for the NN feed-forward equations; line 4 of Algorithm 1 becomes, $S_{j}\leftarrow{W}_{{[0:{j}]}}{A}_{{[0:{j})}}.$ (4) ### 2.3 Using Targets to Parameterise a Neural Network Instead of Weights So far the neural-network parameters have been the weights $\vec{w}$. We now describe how we can switch the representation to “targets”. Define the matrices $T_{2}$, $T_{3}$, …, $T_{n_{\mathrm{L}}}$, to be the “target matrices” for each layer. These have the same dimensions as the corresponding $S_{j}$ matrices. In the target-space approach, the set of $T_{j}$ matrices will be the learnable parameters, replacing the role of the weight matrices. The weight matrices are relegated into calculated quantities that are dependent on the $T_{j}$ matrices. The target matrix for each layer $T_{j}$ holds the “targets” for the $S_{j}$ matrix at that layer; hence we want to choose the weights which make the $S_{j}$ matrices get as close as possible to the $T_{j}$ matrices, or to minimise $\left|\left|{S_{j}-T_{j}}\right|\right|$, where $\left|\left|{\cdot}\right|\right|$ denotes the Frobenius norm. To simplify computational complexity, we do this in a greedy layer-by-layer manner. 
Substituting (4) shows that we therefore need to find $\displaystyle{W}_{{[0:{j}]}}=\arg\min_{W}\left[\left|\left|{W{A}_{{[0:{j})}}-T_{j}}\right|\right|^{2}+\lambda\left|\left|{W}\right|\right|^{2}\right],$ (5) where the $\lambda\left|\left|{W}\right|\right|^{2}$ term is included to provide Tikhonov regularisation, which ensures that the solution in $W$ is unique and kept reasonably small. The minimisation in (5) is a standard least- squares problem from linear algebra, with solution ${W}_{{[0:{j}]}}=T_{j}{\left({A}_{{[0:{j})}}\right)^{\dagger}},$ (6) where the $\dagger$ indicates a regularised pseudoinverse matrix, defined by $A^{\dagger}:={A}^{T}(A{A}^{T}+\lambda I)^{-1}.$ (7) Here $A{A}^{T}$ is referred to as the Gramian matrix, $\lambda\geq 0$ specifies the amount of Tikhonov regularisation, and $I$ is the identity matrix. The presence of $\lambda I$ in (7) prevents the occurrence of non- invertible matrices.111An alternative to Tikhonov regularisation would be to use the Truncated Singlular Value Decomposition pseudoinverse, but this was avoided because the truncation means the derivatives are not as smooth. However the SVD (or similar decompositions) may be used to implement (7) in practice, to obtain improved numerical stability. Hence the layer weights and activations can be calculated layer by layer. The full method by which the weights are calculated from the target matrices is given in Algorithm 2. Algorithm 2 Converting Targets to Weights, in a FFNN, with Sequential Cascade Untangling (SCU) 1: $A_{0}\leftarrow[1\ 1\ \ldots\ 1]$ {Bias nodes. $A_{0}\in\mathbb{R}^{1\times{\overline{n}_{\mathrm{b}}}}$.} 2: $A_{1}\leftarrow\overline{X}$ {Input matrix. $\overline{X}\in\mathbb{R}^{d_{1}\times{\overline{n}_{\mathrm{b}}}}$.} 3: for $j=2$ to ${n_{\mathrm{L}}}$ do 4: ${W}_{{[0:{j}]}}\leftarrow T_{j}{\left({A}_{{[0:{j})}}\right)^{\dagger}}$ {Calculates weights to layer $j$. $T_{j}\in\mathbb{R}^{d_{j}\times{\overline{n}_{\mathrm{b}}}}$.} 5: $S_{j}\leftarrow{W}_{{[0:{j}]}}{A}_{{[0:{j})}}$ {$S_{j}\in\mathbb{R}^{d_{j}\times{\overline{n}_{\mathrm{b}}}}$.} 6: $A_{j}\leftarrow g(S_{j})$ {$A_{j}\in\mathbb{R}^{d_{j}\times{\overline{n}_{\mathrm{b}}}}$.} 7: end for The main inputs to this algorithm are an input matrix $\overline{X}$ with batch size ${\overline{n}_{\mathrm{b}}}$, and a list of target matrices $T_{j}$. The main outputs of this algorithm are the realised weight matrices, $W_{j}$. The quantities $S_{j}$ and $A_{j}$ are work-space matrices. Because ${A}_{{[0:{j})}}$ is a shorthand for a stack of activation matrices $A_{j}$, as defined in (2), it is intended that the changes to $A_{j}$ in line 6 will immediately affect the ${A}_{{[0:{j})}}$ matrices referenced in line 4 for higher values of $j$. This is what carries forwards the changes of an earlier layer, so that they can be corrected for by a later layer. Once these weight matrices are obtained, they are then used in Alg. 1 to calculate the actual NN output. Note that Alg. 2 followed by Alg. 1 are run back-to-back, in that order, and can therefore be viewed as one continuous computational graph. (This is in contrast with some prior published work on target space, e.g. Carreira-Perpinan and Wang, 2014, where there are alternating phases of updating the $W_{j}$ matrices followed by updating $T_{j}$ matrices. In our method, the $W_{j}$ matrices are defined as functions of the $T_{j}$ matrices, and there are no alternating phases.) Alg. 
2 is designed to work with a potentially different input batch $\overline{X}$ from the input matrix $X$ used to evaluate the output of the main network via Alg. 1. This separation aids the use of mini-batches when training the network, which is discussed further in Sec. 3.1. Note that because Alg. 2 uses a different input matrix ($\overline{X}$) from the input matrix $X$ used by Alg. 1, the work-space matrices $S_{j}$ and $A_{j}$ in Alg. 2 are a different set of variables from those in Alg. 1. Note that the aim of matching the $S_{j}$ matrices to their targets by (5) will not be achieved exactly. In general, where the number of patterns ${\overline{n}_{\mathrm{b}}}$ is larger than the rank of the weight matrix, matching the targets exactly will be impossible. Hence we carry forward the disturbances actually achieved in the $S_{j}$ matrices, as opposed to the disturbances intended by the $T_{j}$ matrices, in line 6 of Alg. 2. The subsequent layers’ targets then continue to try to dampen down this disturbance, taking into account the fact that the previous layer’s targets will not have been met exactly, so that the subsequent cascade of changes is always minimised as much as possible. Hence we refer to the algorithm as having sequential cascade untangling (SCU). We found SCU to be much more effective when training neural networks than the alternative of assuming targets are met exactly, which would be implemented by replacing line 6 of Alg. 2 by $\displaystyle A_{j}\leftarrow g(T_{j}).$ (8) Since this approach does not carry forward the actual cascade of changes beyond just one layer, we call this alternative approach “optimistic cascade untangling” (OCU), and this is what prior published research (for example, Rohwer, 1990; Atiya and Parlos, 2000; Enrique Castillo and Alonso-Betanzos, 2006) has always done. Experiments in Sec. 5.1 (Fig. 6) show a significant improvement in performance from using SCU over OCU on the Two-Spirals classification problem, and experiments in Sections 5.3 and 5.4 show the advantage it gives in recurrent neural networks. ### 2.4 Calculating the Learning Gradient in Target Space The previous subsection described an algorithm which converts targets to weights. The next objective is to be able to do gradient descent in target space, i.e. with respect to the targets themselves. Algorithm 2 can be viewed as a mapping function $m$ from targets to weights, such that $\displaystyle\vec{w}=m(\overline{X},\vec{\tau}),$ (9) where $\vec{\tau}$ is a shorthand for the vector of all target matrices flattened and concatenated together. Given such a differentiable mapping function, $m$, we can define the loss function $L$ in terms of the targets (which we will denote as $L^{\prime}$), as follows: $\displaystyle L^{\prime}(X,\vec{\tau}):=L(X,m(\overline{X},\vec{\tau}))$ (10) Consequently, using the chain rule we can convert gradient descent in weight space to gradient descent in target space: $\displaystyle\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\right)^{T}\frac{\partial{L}}{\partial{\vec{w}}},$ (11) where $\frac{\partial{m}}{\partial{\vec{\tau}}}$ uses Jacobian matrix notation, and $\frac{\partial{L}}{\partial{\vec{w}}}$ and $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ are treated as column vectors.
This gradient $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ allows us to perform gradient descent in target space, directly on the main neural-network objective function, via $\displaystyle\Delta\vec{\tau}=-\eta\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}.$ (12) Algorithm 3 applies (11) to calculate the $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}$ matrices, for Algorithm 2’s mapping method. In this code, $A\odot B$ means the Hadamard or elementwise product. The algorithm uses workspace matrices $\delta A_{j}$ and $\delta S_{j}$, which are identically dimensioned to their non-prefixed counterparts, for each layer $j$. The matrix $\delta{A}_{{[0:{j})}}$ is built up of $\delta A_{k}$ matrices, in the same way as Equation (2), and similarly $\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}$ is composed of $\frac{\partial{L}}{\partial{W_{k,j}}}$ matrices like (3). It is assumed that these matrices point to the same underlying data, so for example, changing $\delta{A}_{{[0:{3})}}$ will immediately affect $\delta A_{2}$, and vice versa. Algorithm 3 Calculation of Learning Gradient in Target Space 0: $S_{j}$, $A_{j}$ and $W_{j}$ matrices calculated by Alg. 2 for input matrix $\overline{X}$, and $\frac{\partial{L}}{\partial{W_{k,j}}}$ matrices calculated by back-propagation applied to Alg. 1 for input matrix $X$. 1: $\forall j,\delta A_{j}\leftarrow 0$ 2: for $j={n_{\mathrm{L}}}$ to $2$ step $-1$ do 3: $\delta S_{j}\leftarrow(\delta A_{j})\odot g^{\prime}(S_{j})$ 4: $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}\leftarrow\left(\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}+(\delta S_{j}){{A}_{{[0:{j})}}}^{T}\right)({A}_{{[0:{j})}}^{\dagger})^{T}$ 5: $\delta{A}_{{[0:{j})}}\leftarrow\delta{A}_{{[0:{j})}}+{{W}_{{[0:{j}]}}}^{T}(\delta S_{j}-\frac{\partial{L^{\prime}}}{\partial{T_{j}}})+({A}_{{[0:{j})}}{A}_{{[0:{j})}}^{T}+\lambda I)^{-1}\left(\left(\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}\right)^{T}+{{A}_{{[0:{j})}}}(\delta S_{j})^{T}\right)(T_{j}-S_{j})$ 6: end for The useful outputs of the algorithm are the quantities $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}$, for all layers $j$, which can be written collectively as $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$. Hence the algorithm gives $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$, which can be used to perform gradient descent in target space (12). As with weight-space gradient descent, a more advanced optimiser might be applied to achieve a speed up. The target-gradient computation algorithm (Alg. 3) is derived in Appendix B. The most interesting part of the derivation is the differentiation under the matrix inverse operation. This was omitted by prior research (Rohwer, 1990; Enrique Castillo and Alonso-Betanzos, 2006), which indicates that their learning gradients were incorrect. Our informal experiments (not recorded here) showed that this severely reduced performance of those prior algorithms. Modern automatic-differentiation (Abadi et al., 2016) libraries correctly handle differentiation under a matrix inverse, but as this step is non-obvious to derive manually, we have included the explicit algorithm here. Alternatively, if Alg. 2 followed by Alg. 1 followed by the calculation of $L$ is passed through an automatic-differentiation library, then $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ will be calculated correctly, automatically. 
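As a concrete illustration of this automatic-differentiation route, the following is a minimal sketch (not the authors' implementation) of target-space gradient descent for a plain feed-forward network without shortcut connections, with tanh activations and a squared-error loss; the layer sizes, $\lambda$ and the learning rate are illustrative choices only.

```python
import jax
import jax.numpy as jnp

LAM = 0.001  # Tikhonov regularisation, as in (7)

def reg_pinv(A):
    """Regularised pseudoinverse A^T (A A^T + lam I)^{-1} of eq. (7)."""
    return A.T @ jnp.linalg.inv(A @ A.T + LAM * jnp.eye(A.shape[0]))

def targets_to_weights(targets, X_bar):
    """Sketch of Alg. 2 (SCU): convert a list of target matrices into weight matrices."""
    A = X_bar
    weights = []
    for T in targets:                                        # layers 2 .. n_L
        A_in = jnp.vstack([jnp.ones((1, A.shape[1])), A])    # bias row + previous layer
        W = T @ reg_pinv(A_in)                               # eq. (6)
        S = W @ A_in
        A = jnp.tanh(S)                                      # SCU: carry forward the realised S_j
        weights.append(W)
    return weights

def forward(weights, X):
    """Sketch of Alg. 1, run with the generated weights."""
    A = X
    for W in weights:
        A = jnp.tanh(W @ jnp.vstack([jnp.ones((1, A.shape[1])), A]))
    return A

def loss_in_target_space(targets, X_bar, X, Y):
    """L'(tau) = L(X, m(X_bar, tau)), eq. (10); squared-error loss for illustration."""
    return jnp.mean((forward(targets_to_weights(targets, X_bar), X) - Y) ** 2)

# Toy regression problem: 2 inputs, hidden layers of width 5, 1 output, batch of 32.
key = jax.random.PRNGKey(0)
X_bar = jax.random.normal(key, (2, 32))          # fixed input matrix used for the targets
X, Y = X_bar, jnp.sin(X_bar[:1])                 # here the training batch equals X_bar
targets = [0.1 * jax.random.normal(jax.random.fold_in(key, i), (d, 32))
           for i, d in enumerate([5, 5, 1])]

grads = jax.grad(loss_in_target_space)(targets, X_bar, X, Y)   # dL'/d tau, as in (11)
targets = [T - 0.01 * g for T, g in zip(targets, grads)]       # one step of (12)
```

In practice a more advanced optimiser such as Adam would replace the plain gradient step, and the output activation and loss would be matched to the task (e.g. softmax with cross-entropy for classification).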
The algorithmic complexity to implement one iteration of target-space learning is derived (under various assumptions) in Appendix A.1 to be approximately $4{\overline{n}_{\mathrm{b}}}/{n_{\mathrm{b}}}$ times larger than time taken to implement one iteration of weight-space learning. Note that in this ratio, ${\overline{n}_{\mathrm{b}}}$ is the batch-size used for the target space matrix $\overline{X}$, and ${n_{\mathrm{b}}}$ is the batch size for the weight-space input matrix $X$. Hence if smaller mini-batches are used to acquire the weight-space gradient than are used in the target-space algorithms, then the time per iteration of the target-space algorithm (which cannot use tiny mini-batches) would become increasingly large in comparison to the weight-space calculations. Hence in the extreme case of pattern-by-pattern learning (${n_{\mathrm{b}}}=1$), the target-space algorithm would be slower by a very significant factor of approximately $4{\overline{n}_{\mathrm{b}}}$. In the experiments of Section 5.1, we use ${\overline{n}_{\mathrm{b}}}={n_{\mathrm{b}}}$, and the resulting theoretical ratio of 4 holds out well empirically. ## 3 Technical Aspects for Target-Space Implementations The previous section has defined the main target-space method. We now consider some technical aspects, including how to use mini-batching, the effects of choice of $\lambda$, how to initialise the target variables at the start of training, detail of differences between this method and previous published target-space work, and convergence properties of our method. ### 3.1 Mini-batching and the Choice of $\overline{X}$ For very large datasets, it becomes prohibitively expensive to compute $\frac{\partial{L}}{\partial{\vec{w}}}$ for the whole dataset. Hence with very large datasets, it is standard practice in deep-learning to use mini-batches; that is to operate on a smaller, randomly chosen, subset of the training data in any one training iteration, with ${n_{\mathrm{b}}}\ll{n_{\mathrm{p}}}$. The mini-batch chosen would be used to build the input matrix $X$ inputted to Alg. 1. Using mini-batching also introduces a stochastic element to the optimisation process, which is also beneficial in finding flatter final minima in the loss-function surface, and thus improving generalisation (Bottou, 2010; Masters and Luschi, 2018). As noted in Section 2.3, it is possible to use a different $X$ for the computation of $\frac{\partial{L}}{\partial{\vec{w}}}$ by backpropagation through Alg. 1 from the $\overline{X}$ used in the target-space calculations of Algs. 2-3. But unlike the random mini-batches which may be used for calculating $\frac{\partial{L}}{\partial{\vec{w}}}$, the $\overline{X}$ used for target space must be fixed; because every time we shuffle the mini-batches in $\overline{X}$, the corresponding learnable quantities $T_{j}$ would have their meaning scrambled, which would disrupt learning. For computational efficiency, it is possible for the patterns in $\overline{X}$ to be a mini-batch, i.e. a subset of the entire training set, or even a fixed random matrix222See Sec. 5.4 for an example of this.. But it must be a fixed matrix. The larger ${\overline{n}_{\mathrm{b}}}$ is (where ${\overline{n}_{\mathrm{b}}}$ is the number of columns in $\overline{X}$), the more computationally expensive things will become. So how large should ${\overline{n}_{\mathrm{b}}}$ be? Ideally, ${\overline{n}_{\mathrm{b}}}$ should be sufficiently large so that the Gramian matrix in (7) would not have any zero eigenvalues. 
The more non-zero eigenvalues this product has, i.e. the more linearly independent columns in each $A_{j}$, the more useful the pseudoinverses calculated will be in performing cascade untangling (defined in Section 1). If there are too few patterns in $\overline{X}$ then it will mean that target-space learning will not be able to generate usefully full-rank weight matrices in any layer where the number of layer inputs exceeds ${\overline{n}_{\mathrm{b}}}$, which can limit the representation capabilities of the neural network (see section 5.1, Fig. 7, for an example.) Since the side-dimension of $A_{j}A_{j}^{T}$ is equal to the number of inputs to layer $j$, as a rule of thumb, we recommend to set ${\overline{n}_{\mathrm{b}}}$ to be preferably as large as the widest layer in the network, and more so if the computational expense can be spared; as this will usually ensure the Gramian matrix is full rank. Achieving this while also maintaining computational efficiency motivates the use of network architectures which are deep and narrow, as opposed to architectures with a large number of nodes to each hidden layer. ### 3.2 Choice of $\lambda$ For choosing $\lambda$ in equation (7): if it is too large then the effect of the pseudoinverses in (7) will be dulled in their ability to perform cascade untangling. Hence for large $\lambda$, the benefits of target-space learning start to disappear. If $\lambda$ is too small, then the inverse might become close-to-singular. This would mean small changes in $A_{j}$ make large changes to the generated weight matrices, and hence the learning gradients in target space would become too steep. If instability in learning is observed, then $\lambda$ could be increased, to try to remove any particularly steep gradients in target space caused by the matrix inversion process. We used either $\lambda=0.001$ or $\lambda=0.1$ in all experiments in this paper. Note that the $\lambda$ in equation (5) is performing L2 regularisation only on the mapping between targets and weights. It does not limit the final magnitude of the weights in the neural network, since there is no restriction of the magnitude of $T$ in equation (6), and there is no cost on the magnitude of $T$ appearing in the main training objective function $L$. Hence, this L2 regularisation should not be confused with a desire to apply L2 regularisation on the weights of the neural network (weight decay), which would have the intention of regularising the neural network into having smaller magnitude weights. If that was required, then explicit weight decay terms (on the magnitudes of $W$) should be added into $L$. ### 3.3 Target initialisation At the start of training, the layer target matrices $T_{j}$ need to be randomised. We used a truncated normal distribution, with mean 0 and a fixed variance to randomise each element of each $T_{j}$ matrix. Since these initial layer targets have the same fixed variance at every layer, the variance of the magnitudes of the layer activations should be the same at every layer of the initially-randomised network. This is in contrast to weight-space initialisation, where unless the initial randomised weight magnitudes are chosen very carefully (such as by using the methods proposed by He et al. (2015); Glorot and Bengio (2010)), then the activations at subsequent layers can grow exponentially, eventually either saturating or becoming zero. We have empirically found that it may be beneficial to run Alg. 
2 once immediately after the initial targets are randomised, to compute the weight matrices and $S_{j}$, and then to apply $T_{j}\leftarrow S_{j},\ \forall j,$ (13) exactly once before training commences. This simply projects the newly- randomised targets on to the hypersurface through target space which represents the subset of targets which are exactly achievable. This step is done in all of the target space experiments presented in this paper. It remains to be seen how much value this step adds, although our informal experiments seemed to show some benefit in our recurrent neural-network experiments. ### 3.4 Relationship to Prior Target-Space Research The work by Rohwer (1990) is a stand-out early work on target space which we discuss here, along with more recent notable work, particularly those following on from Carreira-Perpinan and Wang (2012). Some of the prior work is dedicated to recurrent networks (e.g. Atiya and Parlos, 2000), some is dedicated to feed-forward networks with one hidden layer (Enrique Castillo and Alonso-Betanzos, 2006), and some (especially more recent publications) is dedicated to general deep architectures (e.g. Rohwer, 1990; Carreira-Perpinan and Wang, 2012; Lee et al., 2015a, b; Taylor et al., 2016; Zhang et al., 2016; Frerix et al., 2017). In some of the prior works, the process which converts targets into weights seeks to minimise $\left|\left|{g(S_{j})-T_{j}}\right|\right|$ or $\left|\left|{S_{j}-g^{-1}(T_{j})}\right|\right|$ instead of $\left|\left|{S_{j}-T_{j}}\right|\right|$. Unfortunately there is no closed- form solution to minimise $\left|\left|{g(S_{j})-T_{j}}\right|\right|$ with respect to the weights, and the second option $\left|\left|{S_{j}-g^{-1}(T_{j})}\right|\right|$ requires the function $g$ to be invertible and the domain of $T_{j}$ to be restricted to the range of $g$. Early prior published work (Rohwer, 1990; Atiya and Parlos, 2000; Enrique Castillo and Alonso-Betanzos, 2006) is only applicable to the sum-of-squared loss function, and hence only to supervised regression problems. A significant defect of these early target-space methods, which probably held back their greater adoption, is that instead of optimising the main objective function $L$, they instead optimise an intermediate loss function, similar in concept (ignoring bias and shortcut connections) to $E(X,\vec{\tau})=\sum_{j}\left|\left|{W_{j}g(T_{j-1})-T_{j}}\right|\right|^{2},$ (14) instead of the true sum-of-squares cost function, $E(X,\vec{\tau})=\sum_{j}\left|\left|{W_{j}g(S_{j-1})-T_{j}}\right|\right|^{2}.$ (15) They aim to minimise (14) with respect to the variables $T_{j}$, subject to each $W_{j}$ satisfying (6), and subject to the final layer’s targets satisfying $T_{n_{\mathrm{L}}}=Y^{*}$, where $Y^{*}$ is the target data in the supervised regression problem. If (14) is successfully minimised down to zero then it will follow that $T_{j}=S_{j}$ for all $j$, and (14) will match (15), and so the supervised learning problem will be solved. However seeing as it is in general impossible to achieve a zero error in (14), it means that the first network layer will fail to achieve $S_{1}=T_{1}$ exactly, and hence the “input” to the second layer in (14), namely $g(T_{1})$, will be wrong. This misalignment between $S_{j}$ and $T_{j}$ will grow more and more as the layer number $j$ increases. The end result is that local minima in (14) do not align with local minima in (15), and so gradient descent on (14) does not actually minimise the intended loss function. 
This was a crucial error limiting the applicability of the methods by Rohwer (1990) and Enrique Castillo and Alonso- Betanzos (2006). Additionally the work by Rohwer (1990) and Enrique Castillo and Alonso-Betanzos (2006) make an incorrect derivative calculation in computing the learning gradient, by omitting to differentiate through the matrix inverse operation of equation (7). A related error of following the wrong gradient descent direction appears in the work of Atiya and Parlos (2000). They approximate $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}=0$ for all $j<{n_{\mathrm{L}}}$, which is incorrect since cascade untangling can never occur perfectly. Later work rectifies these problems. The work by Carreira-Perpinan and Wang (2012) refers to the target variables as auxiliary coordinates. They solve the problems associated with (14) by instead using a bespoke objective function that is something like a weighted sum between (14) and (15), and where the weighting towards (15) is gradually increased during learning. This ensures that it is (15) that is finally optimised, while benefitting from the easier learning of (14) in earlier training. However their method requires alternating phases of minimisation with respect to $W_{j}$ followed by minimising with respect to $T_{j}$; and then both of these phases need interlacing with increasing the weighting of (15) versus (14). Our method streamlines this process by having a single optimisation to do, which avoids zig-zagging through the search space, and allows for acceleration methods to be applied. But in comparison, their method increases the decoupling of the layers by successfully using an equation based on (14) for the majority of the learning process. Frerix et al. (2017) extend upon the work of Carreira-Perpinan and Wang (2012) but they modify the cost function so that the targets within it are anchored to the forward-propagated activations (by an equation similar to (13); so that the targets are no-longer free variables to be learned). This modification creates an implicit quadratic cost function attached to each layer (similar to (14)) which enables the use of a semi-implicit optimisation algorithm based on proximal updates. The proximal updates can converge under much higher learning rates than would be possible with ordinary gradient descent. In “Difference Target Propagation”, Lee et al. (2015b) define a method which uses learnable targets for each hidden layer. In this method, the target at one layer $T_{j}$ is iteratively set to $L_{P}^{-1}(T_{j+1})$, where $L_{P}^{-1}$ is an inverse function of the layer’s forward-propagation function, and where this inverse (being generally an unknown function) is learned by an auto-associative network which learns to model $L_{P}$ for each given network layer. This method potentially allows training of networks with discrete activation functions. In “Deeply-Supervised Nets”, Lee et al. (2015a) add an extra support-vector machine classifier for the output of each layer. This provides extra training information; a kind of target for each hidden layer, which proves very effective in training deep classification networks. Taylor et al. (2016) use learnable targets for both the $A_{j}$ and $S_{j}$ matrices, and update these learnable variables with iterative application of a closed-form Bregman method, which trains the network to solve the objective function, without needing to use any form of gradient descent. Zhang et al. 
(2016) use a similar iterative scheme to train neural networks to generate supervised hash codes. In summary, much of the prior work shows the potential and power of target space, and the recent prior work addresses the problems appearing earlier in novel ways. Our work provides several notable further enhancements and alternatives to the prior work, particularly regarding the introduction of the SCU method, which we show in our experiments is beneficial to performance. Furthermore, none of the prior work shows how to separate the input matrix $\overline{X}$ (which is used for calculating the weights from targets) from the input matrix $X$ (which is used to run the neural network in Alg. 1). Our work also introduces the correction of gradient calculations through the pseudoinverse operation (which is necessary to apply (11) correctly); the separation of the main objective’s loss function from the intermediate closed-form least-squares minimisation; and the introduction of mini-batches. The simplicity of the method, and the view of searching in “target space”, gives a single, simple, gradient-descent objective, i.e. (12), which can easily be combined with existing acceleration schemes such as Adam. ### 3.5 Convergence Properties and Representation Capabilities of Target Space The target-space gradient descent update (12) is derived to be true gradient descent on the loss function $L(\vec{x},m(\overline{X},\vec{\tau}))$ with respect to $\vec{\tau}$. The loss function $L$ is the main learning objective function, as chosen by the practitioner. For example, for a regression problem, $L$ could be the mean-squared error, or for classification problems, it could be cross-entropy loss. A potential source of confusion is that there is a second loss function appearing in the least-squares sub-problem given by (5), and also that the targets in each layer will not usually be matched exactly; but this least- squares sub-problem is completely separate from the neural-network’s main objective function, $L$. To see this more clearly, the mapping from targets to weights, $\vec{w}=m(\overline{X},\vec{\tau})$, given by Alg. 2, could be replaced by any other well-defined differentiable mapping function. Regardless of what the differentiable function $m$ is, and regardless of how well any targets are matched or not matched, gradient descent is still performed on the main neural-network objective function $L$ by (11) and (12). Any sufficiently small step size in target space by (12) will yield a decrease in $L$, since, to first order: $\displaystyle\Delta L$ $\displaystyle\approx\left(\Delta\vec{\tau}\right)^{T}\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ (ignoring higher order terms) $\displaystyle=-\eta\left(\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}\right)^{T}\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ (by (12)) $\displaystyle\leq 0$ (16) Since the function $L$ has a lower bound, $L$ will decrease monotonically but not beyond that bound. Hence convergence of $L(\vec{x},m(\overline{X},\vec{\tau}))$ to some limit is guaranteed. Similarly, the standard convergence proofs for gradient descent with appropriately chosen step sizes apply here (Bertsekas, 1999, Section 1.2.2). Since the differentiable mapping function $m$ is arbitrary, the convergence guarantees work just as well as for the OCU and SCU variants described in Section 2.3. 
The difference is that we hope that the target-space loss-surface is smoother in one variant than the other (and that both variants are smoother than in weight space), and therefore they will produce faster convergence and better generalisation (which can only be justified empirically; see discussion in Section 1 and empirical results in Section 5). For any given set of weights we can run the neural network forwards, and can capture the sums $S_{j}$ at each layer $j$, and assign these to the targets at each layer, by (13). Ignoring the Tikhonov regularisation in (5), this will mean the targets will generate weights approximately equal to the given set of weights that we started with. This shows that any point in weight space has at least one equivalent representation in target space, such that $m$ is a many- to-one function, and hence any local minimum in weight space could be reached by gradient descent from an appropriate random start point in target space. An important question is the relationships between “solutions” (i.e. stationary points) of the target-space problem, $L(\vec{x},m(\overline{X},\vec{\tau}))$, and those of the original weight- space one, $L(\vec{x},\vec{w})$. Equation (11) shows that whenever $\frac{\partial{L}}{\partial{\vec{w}}}=0$, we must also have $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0$. Furthermore, when Alg. 2 is used to define the mapping function $m$, Appendix C shows that whenever $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0$, we must also have $\frac{\partial{L}}{\partial{\vec{w}}}=0$. Hence any stationary point in target space is also a stationary point in weight space, and also the reverse is true. For a step in target space $\Delta\vec{\tau}$, applying a first-order Taylor- Series Expansion of (9) gives: $\displaystyle\Delta\vec{w}$ $\displaystyle\approx\frac{\partial{m}}{\partial{\vec{\tau}}}\left(\Delta\vec{\tau}\right)$ (first-order Taylor Series) $\displaystyle=-\eta\frac{\partial{m}}{\partial{\vec{\tau}}}\left(\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}\right)$ (by (12)) $\displaystyle=-\eta\frac{\partial{m}}{\partial{\vec{\tau}}}\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\right)^{T}\frac{\partial{L}}{\partial{\vec{w}}}$ (by (11)) (17) Comparing (17) to (1) shows that a first-order approximation to gradient descent in target space via (12) is equivalent to descent in weight space, but where each weight-space direction is multiplied by a positive semi-definite preconditioner matrix $\frac{\partial{m}}{\partial{\vec{\tau}}}\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\right)^{T}$. However by explicitly working in target space, we get the benefit of being able to apply an acceleration procedure to the descent steps in target space, such as Adam, and still retain the convergence guarantees proven for that acceleration method. We would lose these guarantees if we applied the semi- definite preconditioner matrix in weight space, and then applied Adam afterwards. Also, rather than viewing target space simply as weight space with this particular preconditioner, we have found empirically that issuing (17) directly can be more unstable than using the exact function $\vec{w}=m(\overline{X},\vec{\tau})$, presumably due to the first-order approximation used in (17); although this is an area for further research. If mini-batching is used to generate samples of $X$, then the expectation of the gradient descent direction in target space can be derived as follows. 
Denote the sampled mini-batch as $\hat{X}$, and $\hat{L}:=L(\hat{X},\vec{w})$, and $\mathbb{E}_{\hat{X}}$ to be the expectation operator with respect to $\hat{X}$. Then, $\displaystyle\mathbb{E}_{\hat{X}}\left(\Delta\vec{w}\right)$ $\displaystyle=-\eta\mathbb{E}_{\hat{X}}\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\frac{\partial{m}}{\partial{\vec{\tau}}}^{T}\frac{\partial{\hat{L}}}{\partial{\vec{w}}}\right)$ (by (17)) $\displaystyle=-\eta\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\frac{\partial{m}}{\partial{\vec{\tau}}}^{T}\right)\mathbb{E}_{\hat{X}}\left(\frac{\partial{\hat{L}}}{\partial{\vec{w}}}\right)$ $\displaystyle=-\eta\left(\frac{\partial{m}}{\partial{\vec{\tau}}}\frac{\partial{m}}{\partial{\vec{\tau}}}^{T}\right)\frac{\partial{L}}{\partial{\vec{w}}}.$ (18) The second line above follows because the mapping function $m$ is independent of the sample chosen $\hat{X}$, for a given $\vec{w}=m(\overline{X},\vec{\tau})$. The final line concludes that even though mini-batching may be used, with $\overline{X}$ independent of $\hat{X}$, the expectation of the learning gradient in target space will still produce a preconditioned descent step on the loss function on the whole dataset $L$. ## 4 Specific Deep Architectures The target-space method can be extended to different neural architectures and layer types. Here we show specifically how the method can be extended to convolutional neural networks and recurrent neural networks. ### 4.1 Application to RNNs Recurrent neural networks (RNNs) are a powerful architecture of neural networks, which extend the feed-forward network by having one or more recurrent (backward pointing) weights. These feedback connections allow information from previous inputs be retained and to contribute extra information to subsequent inputs to the network. This creates short-term memory, which allows the network to remember and act on past inputs, enabling a RNN to potentially have much greater functionality than a FFNN, potentially allowing it to act like an agent interacting with an environment. Successful RNN applications are in areas such as neurocontrol, time-series analysis, image captioning, language translation, and question answering (Karpathy and Fei-Fei, 2015; Fairbank et al., 2014a, b; Sutskever et al., 2014; Samothrakis et al., 2016). However RNNs are generally more difficult to train than feed- forward networks, with major challenges being vanishing or exploding learning gradients, making it difficult for a RNN to remember information over long time sequences. This section describes how a RNN can be trained in target-space. Target-space methods potentially allow RNNs to tackle more complex time sequences and data- processing tasks which previously have been very challenging for RNNs to solve. A simplified recurrent architecture is shown in Fig. 3. This architecture consumes ${n_{\mathrm{t}}}$ input matrices $X^{({t})}$, one at each time step ${t}\in\\{1,...,{n_{\mathrm{t}}}\\}$, and produces ${n_{\mathrm{t}}}$ output matrices $Y^{({t})}$. At each time step, data from an input matrix $X^{({t})}$ enters the RNN from the left and propagates forwards in the usual manner. When data reaches the “context layer”, layer ${c_{\mathrm{L}}}$, it loops back to the start of the RNN, and is combined with the next input matrix to go through the RNN again. Data loops around the recurrent layers many times, each time also passing through the exit layers which perform some final post-processing on the data to deliver the output matrices $Y^{({t})}$. 
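Figure 3 and Algorithm 4 below make this dataflow precise; as a complement, the following compact NumPy sketch shows the same recurrence for a single fully-recurrent hidden layer, with bias nodes omitted and shapes chosen purely for illustration (it is not the exact architecture of Fig. 3).

```python
import numpy as np

def rnn_forward(weights, X_seq):
    """Simplified recurrence: at each step the new input is combined with the
    fed-back context activations from the previous step (cf. Alg. 4 below)."""
    W_in, W_ctx, W_out = weights
    d_hidden, n_b = W_ctx.shape[0], X_seq[0].shape[1]
    A_ctx = np.zeros((d_hidden, n_b))            # initial context units are zero
    outputs = []
    for X_t in X_seq:                            # loop over time steps t = 1..n_t
        S = W_in @ X_t + W_ctx @ A_ctx           # combine input with recurrent feedback
        A_ctx = np.tanh(S)                       # context-layer activations
        outputs.append(W_out @ A_ctx)            # "exit layer" (here a single linear map)
    return outputs

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_b, n_t = 1, 5, 2, 4, 6
weights = (rng.standard_normal((d_hidden, d_in)),
           rng.standard_normal((d_hidden, d_hidden)),
           rng.standard_normal((d_out, d_hidden)))
X_seq = [rng.standard_normal((d_in, n_b)) for _ in range(n_t)]
Y_seq = rnn_forward(weights, X_seq)              # n_t output matrices, each (d_out, n_b)
```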
Figure 3: Diagram showing dataflow in a Recurrent Neural Network (RNN). Arrows show dataflow. Each rectangle shows a layer of nodes in the neural network; the layers with only a single rectangle are those that make no transformation to the incoming data. The data cycles around the network multiple times in “loops”, each loop indexed by ${t}$. Algorithm 4 describes the process in greater detail. The “exit layers” do any necessary post-processing on the data. Extra shortcut connections, or repeated recurrent structure, may be present to obtain different RNN architectures. Pseudocode is given in Alg. 4. In this notation, layer 0 is reserved for the bias nodes; layer 1 is for the input matrices $X^{({t})}$, and layer 2 is for feedback received from the later context layer ${c_{\mathrm{L}}}$. Superscript numbers in brackets indicate the time step, ${t}$. Algorithm 4 Recurrent NN Dynamics 0: On entry, require ${n_{\mathrm{t}}}$ input matrices ${X}^{({t})}\in\mathbb{R}^{d_{1}\times{n_{\mathrm{p}}}}.$ 1: $A_{c_{\mathrm{L}}}^{(0)}\leftarrow 0$ {Initial context units are zero} 2: for ${t}=1$ to ${n_{\mathrm{t}}}$ do 3: $A_{0}^{({t})}\leftarrow[1\ 1\ \ldots\ 1]$ {Bias nodes} 4: $A_{1}^{({t})}\leftarrow X^{({t})}$ {${t}^{\mathrm{th}}$ input matrix.} 5: $A_{2}^{({t})}\leftarrow A_{c_{\mathrm{L}}}^{({t}-1)}$ {Feedback from context layer} 6: for $j=3$ to ${n_{\mathrm{L}}}$ do 7: $S_{j}^{({t})}\leftarrow{W}_{{[0:{j}]}}{A}_{{[0:{j})}}^{({t})}$ 8: $A_{j}^{({t})}\leftarrow g(S_{j}^{({t})})$ 9: end for 10: $Y^{({t})}\leftarrow A_{{n_{\mathrm{L}}}}^{({t})}$ {${t}^{\mathrm{th}}$ output matrix. $Y^{({t})}\in\mathbb{R}^{d_{n_{\mathrm{L}}}\times{n_{\mathrm{p}}}}$.} 11: end for Each input matrix $X^{({t})}$ may itself contain a batch of several patterns (one in each column). Hence the matrices $A^{({t})}_{j}$ and $S^{({t})}_{j}$ have dimension $d_{j}\times{n_{\mathrm{b}}}$. An appropriate loss function $L$ would be chosen that is a function of some or all of the $Y^{({t})}$ matrices, and then the gradient of this loss function with respect to the weights of the network, $\frac{\partial{L}}{\partial{\vec{w}}}$, can be found by automatic differentiation, using, for example, backpropagation through time (Werbos, 1990), in execution time $O({n_{\mathrm{b}}}{n_{\mathrm{t}}}{n_{\mathrm{w}}})$, where ${n_{\mathrm{w}}}$ is the number of weights in the network. Then, assuming weight-space is being used, an iterative optimizer would use this gradient information to tune $\vec{w}$, and train the network. To incorporate target-space learning for an RNN, the intermediate objective is to make all the $S_{j}^{({t})}$ coming from Alg. 4 match as closely as possible some given target matrices $T_{j}^{({t})}$, for all time steps ${t}$. Hence, considering line 7 of Alg.
4, the objective is to choose a weight matrix ${W}_{{[0:{j}]}}$ so as to achieve, ${W}_{{[0:{j}]}}{A}_{{[0:{j})}}^{({t})}\approx T_{j}^{({t})}\ \ \text{for all $1\leq{t}\leq{n_{\mathrm{t}}}$}\text{,}$ or equivalently to achieve, as closely as possible, ${W}_{{[0:{j}]}}{A}_{{[0:{j})}}^{(:)}\approx T_{j}^{(:)},$ where we have defined ${A}_{{[0:{j})}}^{(:)}:=\begin{pmatrix}{A}_{{[0:{j})}}^{(1)}&{A}_{{[0:{j})}}^{(2)}&\ldots&{A}_{{[0:{j})}}^{({n_{\mathrm{t}}})}\end{pmatrix},\ 3\leq j\leq{n_{\mathrm{L}}}$ (19a) and $T_{j}^{(:)}:=\begin{pmatrix}T_{j}^{(1)}&T_{j}^{(2)}&\ldots&T_{j}^{({n_{\mathrm{t}}})}\end{pmatrix},\ 3\leq j\leq{n_{\mathrm{L}}}$ (19b) The least-squares solution to this is the same as in (6) and (7): $\displaystyle{W}_{{[0:{j}]}}=T_{j}^{(:)}{\left({A}_{{[0:{j})}}^{(:)}\right)^{\dagger}},$ (20) however, since this is an RNN, we now have the problem that it is not possible to know the values of ${A}_{{[0:{j})}}^{(:)}$ until the network has been run by Alg. 4; but that algorithm cannot be run until equation (20) is solved. To break out of this cyclic dependency, we can approximate using the “optimistic” cascade untangling (OCU), given by (8), and therefore just set: $\displaystyle A_{j}^{({t})}\leftarrow g\left(T_{j}^{({t})}\right)\ \forall{t}.$ (21) This OCU step only needs to be done on the context layer which feeds backward connections to the input layers. For the rest of the layers, it is preferable to use the SCU method. Alg. 5 shows how to do this in detail. This algorithm calculates the weights of an RNN from a given list of target matrices $T_{j}^{({t})}$, using the SCU method wherever possible, and the OCU method for the recurrent layer. The algorithm includes in line 10 an attempt to correct the error introduced by the OCU step once the exit layers (shown in Fig. 3) are reached. To modify the algorithm into a fully OCU method, we would replace line 8 by equation (21), and delete lines 7 and 10. Algorithm 5 Conversion of Targets to Weights for an RNN (using SCU) 0: On entry, require ${\overline{n}_{\mathrm{t}}}$ input matrices $\overline{X}^{({t})}\in\mathbb{R}^{d_{1}\times{\overline{n}_{\mathrm{b}}}}.$ 1: $A_{0}^{({t})}\leftarrow[1\ 1\ \ldots\ 1]\ \forall{t}$ {Bias nodes} 2: $A_{1}^{({t})}\leftarrow\overline{X}^{({t})}\ \forall{t}$ 3: $A_{c_{\mathrm{L}}}^{({t})}\leftarrow g\left(T_{c_{\mathrm{L}}}^{({t})}\right)\ \forall{t}$ {Estimates $A_{c_{\mathrm{L}}}^{({t})}$ matrices by the OCU method.} 4: $A_{2}^{(:)}\leftarrow\begin{pmatrix}0&A_{c_{\mathrm{L}}}^{(1)}&A_{c_{\mathrm{L}}}^{(2)}&\ldots&A_{c_{\mathrm{L}}}^{({n_{\mathrm{t}}}-1)}\end{pmatrix}$ {Applies recurrent feedback from layer ${c_{\mathrm{L}}}$ to layer 2. Hence $A_{2}^{(:)}$ is a block shifted-right version of $A_{c_{\mathrm{L}}}^{(:)}$.} 5: for $j=3$ to ${n_{\mathrm{L}}}$ do 6: ${W}_{{[0:{j}]}}\leftarrow T_{j}^{(:)}{\left({A}_{{[0:{j})}}^{(:)}\right)^{\dagger}}$ {Calculates weights to layer $j$} 7: $S_{j}^{(:)}\leftarrow{W}_{{[0:{j}]}}{A}_{{[0:{j})}}^{(:)}.$ 8: $A_{j}^{(:)}\leftarrow g\left(S_{j}^{(:)}\right)$ {SCU method} 9: if $j={c_{\mathrm{L}}}$ then 10: Use the newly calculated ${W}_{{[0:{j}]}}$ matrices (for $3\leq j\leq{c_{\mathrm{L}}}$) to run Alg. 4 (using $\overline{X}^{({t})}$ as the input matrices), up to layer ${c_{\mathrm{L}}}$, to obtain the true ${A}_{{[0:{{c_{\mathrm{L}}}+1})}}^{(:)}$ matrices. {This is an attempt to correct for the OCU estimation made in line 3.} 11: end if 12: end for For the reasons discussed in Sec.
3.1, the content and length of the target- space input matrices, $\overline{X}^{(t)}$ for $t=1,\ldots,{\overline{n}_{\mathrm{t}}}$, may differ from the content and length of the weight-space input matrices ($X^{(t)}$ for $t=1,\ldots,{n_{\mathrm{t}}}$). This algorithm merely outputs a set of weights of the RNN. The RNN would then have to be run separately, using Alg. 4, to obtain the set of output matrices $Y^{({t})}$. Since Alg. 5 defines the mapping from targets to weights, it is possible to calculate the learning gradient with respect to the targets (first going via $\frac{\partial{L}}{\partial{\vec{w}}}$) using automatic differentiation, and hence train the RNN in target space. For example, if Alg. 5 followed by Alg. 4 is passed to an auto-differentiation toolbox, then the toolbox will be able to correctly calculate $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$ by differentiation through both algorithms sequentially. Section 5 shows experiments which do this, with successful results. The bottleneck in algorithmic complexity for Alg. 5 is in forming the Gramian matrix $AA^{T}$, which will take ${n_{\mathrm{i}}}^{2}{\overline{n}_{\mathrm{b}}}{\overline{n}_{\mathrm{t}}}$ flops by direct multiplication. This is similar to a full forward-unroll of the RNN with the input matrix $\overline{X}$. Hence the relative complexity of running Alg. 5 using $\overline{X}$, compared to Alg. 4 using input matrix $X$, is approximately ${\overline{n}_{\mathrm{b}}}{\overline{n}_{\mathrm{t}}}/{n_{\mathrm{b}}}{n_{\mathrm{t}}}$. This motivates a choice of using a small value of ${\overline{n}_{\mathrm{t}}}$ where possible. See Section 5.4 for an example. ### 4.2 Application to Convolutional Neural Networks Convolutional Neural Networks (CNNs) represent one of the most powerful modern deep-learning architectures and are particularly applicable to vision problems. The key innovation of the convolutional neural network is the 2D-convolution operation: a smaller weight matrix is “convolved” (i.e. a sliding dot product is performed) with the source image to calculate the activations in the next layer. The convolutional operation means the weight matrix connecting one layer to the next can be much smaller than that of a fully connected network; and also that this smaller group of weights, the convolutional “kernel”, will be applied to multiple patches of the image. This reuse helps in generalisation, and helps preserve spatial relationships in the image from one layer to the next. A CNN network structure is usually comprised of a mixture of layer types - including one or more convolutional layers, one or more down-sampling (max- pooling) layers, flattening operations that reduce a tensor from rank 4 down to rank 2, and one or more regular fully-connected layers (as described in Section 2). Further details of how these layers all work and are arranged with each other are given by LeCun et al. (1998). In generating a target-space method for training a CNN, it is only the convolutional layers and fully-connected layers that have any weights, and so only those two layer types that need modifying. Each convolutional layer takes as input a 2D image, of size width$\times$height, with a third depth dimension representing a number of input channels. Together with the batch size, ${n_{\mathrm{b}}}$, this input image is a rank-4 tensor, of shape [${n_{\mathrm{b}}}$, input_height, input_width, input_channels]. 
The convolutional kernel that acts on it is a rank-4 tensor of shape [kernel_height, kernel_width, input_channels, output_channels], and the layer’s final output is a rank-4 tensor of shape [${n_{\mathrm{b}}}$, output_height, output_width, output_channels]. The entire convolutional layer’s operation can be split into 6 steps: 1. 1. Flatten the kernel to a 2-D matrix with shape [kernel_height$\times$kernel_width$\times$input_channels, output_channels]. Call this matrix $W$. 2. 2. Extract image patches from the input tensor, and reshape them, to form a patches matrix $A$ of shape [num_patches, kernel_height$\times$kernel_width$\times$input_channels], where num_patches$={n_{\mathrm{b}}}\times$output_height$\times$output_width. 3. 3. Multiply the kernel matrix $W$ by the patches matrix $A$, obtaining $S=WA$. 4. 4. Add in the bias to $S$. 5. 5. Reshape the result back into rank-4 tensor of shape [${n_{\mathrm{b}}}$, output_height, output_width, output_channels] 6. 6. Apply the activation function $g$. To optimise this process, so as to be able to easily modify it for target- space training, we first combine the bias addition of step 4 with the matrix multiplication of step 3. This can be achieved by adding an extra row of 1s into $A$, as was done in equation (2a), and an extra column of weights to $W$, as was done in equation (3a). Then we need a target matrix $T$ of the same dimension as $S$ in line 3. Given this target matrix and the matrix $A$, we can derive the weights which best achieve the targets using the same least-squares process as with equation (6), i.e. $W=TA^{\dagger}$. This derived weight matrix $W$ is then used to calculate the actual product $S=WA$, and steps 5 and 6 (the reshape and activation function) are applied, completing the convolutional layer’s behaviour. The fully-connected layers are handled with their own target matrices and least-squares solution, as in Alg. 2. The rest of the layer-types in the CNN are unchanged - down-sampling does not use any targets (or weights), and nor does the reshape operation. Automatic differentiation can be used to compute the necessary learning gradients. The algorithmic complexity for the target-space CNN layer is derived in Appendix A.2, and is shown to be slower than the corresponding weight-space CNN layer by a factor which is bounded above by approximately $(3(k_{\mathrm{h}}k_{\mathrm{w}})+1){\overline{n}_{\mathrm{b}}}/{n_{\mathrm{b}}}$, where $k_{\mathrm{h}}$ and $k_{\mathrm{w}}$ are the kernel height and width, respectively. This is not a constant bound, even when ${\overline{n}_{\mathrm{b}}}={n_{\mathrm{b}}}$, unlike that found for the fully-connected network.333In future work, it is possible to remove this numerator factor of $k_{\mathrm{h}}k_{\mathrm{w}}$, since with a stride-length of 1 there is significant overlap between patches in the matrix $A$, and therefore optimisations can be made when forming $AA^{T}$. Hence there is an incentive in target space to choose CNN architectures with smaller kernel matrices, or to only use a subset of patches when forming the pseudoinverse matrix. In the CNN architectures used in the experiments of Section 5.2, the ratio is empirically found to be around 7 (with a 3-by-3 kernel), which is considerably better than the theoretical upper-bound. Part of this improvement might be down to the fact that the backward pass of automatic differentiation can reuse the expensive matrix products and inverses computed in the forward pass. 
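The following is a rough NumPy sketch of the six steps above, specialised to stride 1 with “valid” padding and with the bias row folded directly into the patches matrix; the kernel size, the value of $\lambda$, the leaky-ReLU activation and the use of the same batch for the patches and the targets are illustrative assumptions rather than a prescription, and this is not the authors' implementation.

```python
import numpy as np

def extract_patches(x, kh, kw):
    """Step 2: x of shape (n_b, H, W, C_in) -> patches matrix A of shape
    (kh*kw*C_in, n_b*out_h*out_w), using stride 1 and 'valid' padding."""
    n_b, H, W, C = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    patches = np.empty((n_b, out_h, out_w, kh * kw * C))
    for i in range(out_h):
        for j in range(out_w):
            patches[:, i, j, :] = x[:, i:i + kh, j:j + kw, :].reshape(n_b, -1)
    return patches.reshape(n_b * out_h * out_w, kh * kw * C).T

def conv_layer_from_targets(x, T, kh=3, kw=3, lam=0.1):
    """Target-space convolutional layer: solve W = T A^dagger, then apply S = W A."""
    n_b, H, W_img, C = x.shape
    out_h, out_w = H - kh + 1, W_img - kw + 1
    A = extract_patches(x, kh, kw)
    A = np.vstack([np.ones((1, A.shape[1])), A])          # bias folded into A (step 4)
    W = T @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(A.shape[0]))   # W = T A^dagger
    S = W @ A                                             # realised pre-activations (step 3)
    out_channels = T.shape[0]
    S = S.T.reshape(n_b, out_h, out_w, out_channels)      # step 5: back to a rank-4 tensor
    return np.maximum(S, 0.2 * S), W                      # step 6: leaky-ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8, 3))                     # batch of 4 small RGB images
T = rng.standard_normal((16, 4 * 6 * 6))                  # target matrix: 16 output channels
out, kernel = conv_layer_from_targets(x, T)               # out.shape == (4, 6, 6, 16)
```

A practical implementation would use the framework's built-in patch-extraction operation and would differentiate through this mapping with automatic differentiation, exactly as in the fully-connected case.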
This completes the description of how to use target space with a conventional CNN architecture. ## 5 Experiments In this section we show the performance of the target-space method on the Two- Spirals benchmark problem, and on four classic small-image vision benchmark problems for convolutional neural networks, and then we demonstrate the target-space method on some bit-stream manipulation tasks and a sentiment- analysis task for recurrent neural networks. The experiments show the effectiveness of the target-space method, in ability to train deep networks and produce improved generalisation. There are improved generalisation results on the CNN vision benchmarks compared to the equivalent weight-space method applied to the same CNN architecture. In the recurrent network tasks, it shows the target-space method being able to solve problems with long time-sequences, which appear to be intractable in weight space. All experiments were implemented using Python and Tensorflow v1.14 on a Tesla K80 GPU.444Source code for experiments is available at https://github.com/mikefairbank/dlts_paper_code Shading in graphs indicates 95% confidence intervals as calculated by the Python Seaborn package. ### 5.1 Two-Spirals Experiments The Two-Spirals classification problem consists of 194 two-dimensional training points, arranged in two interleaving spiral shapes, corresponding to the two output classes, each spiral revolving through three complete revolutions. The training and test sets are shown in Fig. 4. The test set was created as the angular midpoints between consecutive training points. A layered network architecture was used, with dimensions 2-5-5-5-2, and with all shortcut connections, following Riedmiller and Braun (1993). The cross- entropy loss function was used for training, and the $\tanh$ activation function used on all hidden layers, with softmax on the output layer. Fig. 4 shows the output function of two trained networks, mapped to a single scalar output, and visually indicates that the solutions attained in target space are smoother and capture the essence of the problem better than in weight space.555Although it should be noted that Levenberg Marquardt and conjugate gradient training can produce similarly nice solutions as the left figure. Figure 4: Typical results for the two-spirals trained network, after 4,000 Adam iterations; target space versus weight space. Red/blue crosses denote test set; circles denote the training set. Grey-scale background indicates network output for the given $(x,y)$-coordinate input. Smoothness of the target-space result shows how successful generalisation is more likely. Fig. 5-left shows the problem being solved using gradient-descent with optimal learning rates empirically determined as $\eta=10$ for target space and $\eta=0.1$ for weight space. The results show that with optimal learning rates, the target-space algorithm can fully learn the two-spirals problem’s training set, and generalise well to the test set, in around 1,000 epochs; compared to around 40,000 epochs for weight space to mostly learn the training set only. It does not seem possible to generalise as well to the test set in weight space, likely due to the unevenness appearing in Fig. 4-right. Figure 5: Results for Two-Spirals learning, using Batch Gradient Descent (on left) and Adam optimiser (on right). Fig. 5-right shows results when the Adam optimiser was used, and shows a similar outcome. 
The learning rate used was 0.01, which was found to be beneficial to both target space and weight space on this problem. In this problem, the target-space gradient descent converges to a solution in fewer epochs than Adam in weight space. These results all seem consistent with the target-space motivation for making the loss-function surface smoother, and the minima commonly found lead to better generalisation. In our implementation the processing time was on average 3.5 times longer for each target-space training iteration compared to each weight-space iteration. In all experiments, the full data-set was used in all training batches (${n_{\mathrm{b}}}={\overline{n}_{\mathrm{b}}}=194$). With target space, $\lambda=0.001$ was used for equation (7), and initial targets were randomised using a truncated normal distribution with $\sigma=1$, followed by the projection given by (13). For weight-space learning, the weights were randomised using the method of Glorot and Bengio (2010). Fig. 6 shows the effectiveness of the Sequential Cascade untangling (SCU) variant against the Optimistic Cascade untangling (OCU) target-space algorithm (described in Section 2.3), and indicates that the SCU method is more stable and effective than the OCU method. The same graph also shows that Batch Normalisation does not seem to help on this problem and network size, and in fact performs worse in weight space than without batch normalisation. Batch normalisation does significantly help though in the CNN experiments described in the next subsection. Figure 6: Results for Two-Spirals learning, using Adam Optimiser, comparing two forms of target space: Optimistic Cascade untangling (OCU) versus Sequential Cascade untangling (SCU), and against Batch Normalisation in weight space. Fig. 7 demonstrates the sensitivity of the target-space algorithm to two of its key hyper-parameters. The left diagram shows that reducing ${\overline{n}_{\mathrm{b}}}$, the number of patterns appearing in $\overline{X}$ (see Section 3.1), reduces the representation capability of the weights generated by equation (6). In each experimental trial, a random subset of ${\overline{n}_{\mathrm{b}}}$ columns of $X$ was chosen to form $\overline{X}$. The results show that as ${\overline{n}_{\mathrm{b}}}$ reduces below the size of the narrowest network layer (which is 17 in this network), the weight matrices generated from the targets become low-rank, and it is no longer possible to fully learn the training set. Fig. 7-right shows that as the $\lambda$ used in (7) increases, the ability of the algorithm also reduces (see Section 3.2). Furthermore, with $\lambda\ll 10^{-4}$ the algorithm stopped due to numerical errors causing non-invertible matrices to appear. Figure 7: Sensitivity of the Target Space algorithm to algorithm hyper- parameters ${\overline{n}_{\mathrm{b}}}$ and $\lambda$. ### 5.2 CNN Experiments In this set of experiments we train convolutional neural networks on the following four classic small-image classification problems: * • The MNIST digit dataset: 60,000 training samples of 28-by-28 grey-scale pixellated hand-written numeric digits, each labelled from 0-9, and a test set of 10,000 samples (LeCun et al., 2010). * • MNIST-Fashion dataset: 60,000 28x28 grayscale images of 10 labelled fashion categories, along with a test set of 10,000 images (Xiao et al., 2017). * • CIFAR10 dataset: 50,000 32x32 colour training images, labelled over 10 categories, and 10,000 test images (Krizhevsky et al., 2009). 
* • CIFAR100 dataset: 50,000 32x32 colour training images, labelled over 100 categories, and 10,000 test images (Krizhevsky et al., 2009). All of these datasets were used as training data without any modification to the training images. For example, we did not use any data-augmentation techniques, such as image rescaling and distortion, which are known to help improve neural-network performance (and to be necessary to achieve state-of- the art classification performance). The networks used here all had six compound convolutional/pooling layers, each of which consisted of a convolutional operation (with a square kernel of size $m\times m$, applied with stride-length 1 with “same” padding, and $c$ output channels) followed by an application of the activation function, followed by (possibly) an application of max-pooling (with a square kernel of size $k\times k$, and applied with stride-length $k$). Each max pooling operation of side length $k$ reduces the side-length of the image by factor $k$. Hence each compound convolutional layer can be summarised by a 3-tuple $(m,c,k)$, with $k=1$ if no max-pooling is used. Using this 3-tuple notation, the network architectures considered are listed in Table 1. After the convolutional layers, the layer output is flattened, and then passed through a number of fully-connected (dense) layers, as described in Table 1. Benchmark | Convolutional Layers | ---|---|--- Problem | (Convolution size - Number of channels - Max Pool size) | Dense Layers MNIST | (3-16-1)-(3-16-2)-(3-32-1)-(3-32-2)-(3-64-1)-(3-64-2) | 128-10 MNIST-Fashion | (3-16-1)-(3-16-2)-(3-32-1)-(3-32-2)-(3-64-1)-(3-64-2) | 128-10 CIFAR-10 | (3-32-1)-(3-32-2)-(3-64-1)-(3-64-2)-(3-128-1)-(3-128-2) | 128-10 CIFAR-100 | (3-32-1)-(3-32-2)-(3-64-1)-(3-64-2)-(3-128-1)-(3-128-2) | 512-128-100 Table 1: Convolutional Network Architectures considered for MNIST Problem All non-final layers used the “leaky-relu” activation function (Maas et al., 2013) defined by, $\operatorname{LReL}(x)=\max(x,0.2x),$ (22) and the final layer used softmax activation. Leaky-relu was found to slightly be better than the $\operatorname{ReLU}$ function, since it leaves fewer zeros in the activations which can potentially stall learning after the weights are initially randomised; and also can potentially make the Gramian matrix in (7) low rank. The networks were trained with the cross-entropy loss function and the Adam optimizer, with learning rate $0.001$ for weight-space learning, and $0.01$ for target-space learning. Mini-batches of size ${n_{\mathrm{b}}}=100$ were randomly generated at each iteration, for computing the $\frac{\partial{L}}{\partial{\vec{w}}}$ gradient. A fixed mini-batch of size ${\overline{n}_{\mathrm{b}}}=100$ was used for the targets’ input matrix $\overline{X}$. In weight space, the weight initialisation used magnitudes defined by He et al. (2015), which are derived to work well with $\operatorname{LReL}$. In target space, the targets values were all initially randomised with a truncated normal distribution with standard deviation 0.1, followed by the projection operation given by (13). $\lambda=0.1$ was used in equation (7). Results are shown in Table 2. The results show the target-space method helping generalisation performance, both with and without dropout (Srivastava et al., 2014), and when comparing against weight space both with and without batch- normalisation; and with ensemble architectures. 
The benefit of target space is noticeable in the latter 3 benchmark problems; mostly so in the most challenging benchmark problem, i.e. CIFAR100. The two CIFAR problems were given a time budget of 24 GPU hours to train each network. This allowed approximately 640 epochs in target space, and 5300 epochs in weight space (lowering to 4000 epochs when batch-norm was used). The two MNIST problems received a 8 GPU-hour time budget, resulting in approximately 480/3000/2400 epochs for target-space/weight space/BN, respectively. Hence roughly seven times more processing time was required per epoch for the target-space algorithms compared to the weight-space algorithms. Algorithm (no dropout) | MNIST | MNIST-Fashion | CIFAR-10 | CIFAR-100 ---|---|---|---|--- Weight Space | $99.26(\pm 0.01)\%$ | $91.6(\pm 0.1)\%$ | $77.9(\pm 0.2)\%$ | $40.3(\pm 0.7)\%$ Weight Space + Batch Normalisation | $\textbf{99.41}(\pm 0.04)\%$ | $91.6(\pm 0.2)\%$ | $80.7(\pm 0.2)\%$ | $46.5(\pm 0.6)\%$ Target Space | $99.29(\pm 0.04)\%$ | $\textbf{92.2}(\pm 0.2)\%$ | $\textbf{82.6}(\pm 0.3)\%$ | $\textbf{50.5}(\pm 0.2)\%$ Algorithm (with dropout) | MNIST | MNIST-Fashion | CIFAR-10 | CIFAR-100 Weight Space | $99.50(\pm 0.06)\%$ | $93.12(\pm 0.01)\%$ | $82.90(\pm 0.09)\%$ | $52.22(\pm 0.04)\%$ Weight Space + Batch Normalisation | $\textbf{99.58}(\pm 0.01)\%$ | $\textbf{94.07}(\pm 0.09)\%$ | $86.70(\pm 0.01)\%$ | $59.7(\pm 0.2)\%$ Target Space | $99.55(\pm 0.03)\%$ | $93.7(\pm 0.1)\%$ | $\textbf{87.4}(\pm 0.1)\%$ | $\textbf{60.4}(\pm 0.1)\%$ Algorithm (with dropout + ensemble) | MNIST | MNIST-Fashion | CIFAR-10 | CIFAR-100 Weight Space | 99.49 % | 93.99 % | 85.43 % | 56.85 % Weight Space + Batch Normalisation | 99.6 % | 94.5 % | 88.19 % | 62.51 % Target Space | 99.62 % | 94.34 % | 88.81 % | 63.24 % Table 2: Test-Set Accuracies for CNN Experiments, on Standard Datasets When dropout was used, it was applied with a dropout probability of 0.2 to all non-final dense layers, and all even-numbered convolutional layers. The results show that dropout provides useful benefit to both weight-space learning and target-space learning. When dropout was used in target space, dropout was independently applied during both the feed-forward algorithm used to calculate $\frac{\partial{L}}{\partial{\vec{w}}}$ using the mini-batch input matrix $X$, and the feed-forward algorithm to map from target space to weight space using the fixed input matrix $\overline{X}$.666Generalisation results were noticeably worse if dropout in target space was applied to either one of these two stages without the other. When batch normalisation was used, it was applied to every convolutional layer and to every non-final dense layer. Batch normalisation is only applicable to weight-space learning. In target space learning, the targets for each layer already define the batch mean and standard-deviation which batch normalisation hopes to specify; making the combination of batch normalisation with target space redundant. The error margins in Table 2 are calculated as the standard-deviations of just two trials; but are sufficiently small to convey the trend adequately. When the ensemble of networks were used, the outputs of the two networks created in the two trials were averaged after softmax. Ensemble networks can usually generalise better than any of their constituent networks individually, assuming the outputs of the constituent networks are somewhat independent of each other. 
In this scenario the independence comes from different initial randomisation, different shuffling of mini-batches, and different choices of the $\overline{X}$ matrix used by the target-space algorithm. The results show that target space and weight space are assisted by using such an ensemble; even one comprised of only two networks.

### 5.3 Bit-Stream Recurrent Neural-Network Experiments

In this section we describe two recurrent neural-network experiments regarding remembering and manipulating streams of bits. The first experiment is to memorise and recall a random stream of bits. The RNN receives a new random bit at every time step $t$, and must output the bit it saw at the time step $t-N$, where $N$ is the delay length. As the delay length is increased, the problem gets harder, since more bits must be memorised. For example if the delay length is $N=2$, and the RNN receives a bit stream such as “1,1,1,1,0,1” (with the most recent bits appearing at the right) then the RNN is expected to produce an output stream “-,-,1,1,1,1”. (The first two outputs in the sequence, each indicated here by “-”, are ignored, since the delay length in this example is 2.) The neural network has architecture $1-(N+3)-2$, with the hidden layer being fully connected to itself with recurrent connections (corresponding to setting ${c_{\mathrm{L}}}=3$ in Fig. 3). The hidden layer used $\tanh$ activation functions, and the final layer used softmax with the cross-entropy loss function. The loss function was made to ignore the first $N$ outputs in the stream (since these are undefined). The $N+3$ recurrent hidden nodes are enough to allow the network to remember the most recent $N$ bits (with 3 spare nodes to add a little flexibility in solution), as required; for example the RNN could learn to manipulate the remembered bits with a rotate-right bit-wise operation, so as to successfully queue and recall the bits, and forget about bits older than $N$. A batch size of 8,000 random bit streams of length ${n_{\mathrm{t}}}=N+50$ was used to train the network. Random mini-batches of size ${n_{\mathrm{b}}}=100$ were used during each training iteration. A fixed mini-batch of size ${\overline{n}_{\mathrm{b}}}=100$ with ${\overline{n}_{\mathrm{t}}}={n_{\mathrm{t}}}$ was used for the target-space matrices $\overline{X}^{(t)}$. In weight space, the weight initialisation used magnitudes defined by Glorot and Bengio (2010). In target space, the target values were randomised with a truncated normal distribution with standard deviation 1, followed by a projection by equation (13). This projection step seemed to improve results for the target-space experiments. The networks were trained with 50,000 iterations of the Adam optimiser, with learning rate 0.001 for both weight space and target space, and with $\lambda=0.1$ for target space. A result was considered a success if a classification accuracy $\geq 99\%$ was achieved on the test set at any training iteration; otherwise it was a failure. Results are shown in Fig. 8 for various delay lengths. They show that the target-space method is able to learn sequences with a delay length of around two to three times as long as the weight-space methods are capable of, with a significantly less steep rise in the number of training iterations required for success; and that the target-space SCU method is significantly stronger than the target-space OCU method.

Figure 8: Memorisation of a delayed binary stream of bits using a RNN.
The left graph shows the ratio of trials which were successful in learning $>99\%$ of the output bits correctly (in a test set). The right-hand graph shows, for those successful trials, the average iteration number at which success was first achieved.

For comparison, an extra experiment was made using an LSTM network. Here the $N+3$ hidden nodes were replaced by $N+3$ LSTM memory cells. The LSTM network was trained in weight space, again using Adam for 50,000 iterations. Results are shown in the same Fig. 8. This trial shows that the LSTM network does not seem to help in solving this problem in weight space. In a second RNN experiment, we modify the task from pure memorisation into one of binary addition. In this experiment, the target output is the binary sum of the stream of bits with the $N$-step delayed stream. To ease binary addition, the stream is assumed to arrive in bit-wise little-endian form. For example, if $N=2$, and the bit stream received is “1,0,1,1,0,1”, then the target output stream that the RNN must learn is “-,-,0,0,0,1”, which is calculated by binary addition (with the digits written little-endian, least-significant bit first): 1101+1011=00011. Here the target output stream terminated before the final carry bit could be delivered, so only the 0001 remained. As this problem was slightly harder than the previous one, since the relationship between the target-bit sequence and the past sequence is quite well disguised (the relationship has similarities to a delayed XOR problem but there is also a hidden carry-bit process to discover), we gave the recurrent network $N+5$ hidden recurrent nodes, i.e. two more than previously. Results are shown in Fig. 9. The experimental conditions are otherwise unchanged from the previous RNN experiment. In this experiment the strength of the target-space methods is again shown, with the SCU method again being capable of coping with delay lengths two to three times as long as the weight-space methods, and with better scaling of the number of iterations required. The strength of the SCU method’s results confirms the value of lines 8 and 10 in Alg. 5, when compared to the OCU method.

Figure 9: Addition of a delayed binary stream of bits using a RNN.

In both of these RNN experiments, the SCU method significantly beats the LSTM network. It therefore seems that the exploding-gradients problem (which target-space networks are designed to address) is more significant in this problem than the vanishing-gradients problem (which LSTM networks are designed to address). A complication in making this comparison is that Adam was used. Adam might have been picking up and aggressively accelerating tiny components of the gradients in target space, thus counteracting the vanishing gradients and helping the target-space methods compete with LSTM. Possibly in a noisier problem environment, it will not be possible to accelerate such tiny gradients due to the low signal-to-noise ratio. In that case a combination of LSTM plus target space could be attempted.

### 5.4 RNN Movie-Review Sentiment Analysis

In this final experiment we trained a RNN to solve the natural-language processing task of sentiment analysis for 50,000 movie reviews from the Internet Movie Database (IMDB) website. In this binary classification task, each review is labelled as either positive or negative. The dataset was obtained from the Tensorflow/Keras packages, with a 50-50 training/test-set split, using options of only including the top 5000 most frequent words, and padding/truncating all reviews to a length of 500 words each.
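The data pipeline just described can be reproduced with the standard tf.keras IMDB loader, under the assumption that this is the loader meant by “the Tensorflow/Keras packages”; the variable names below are illustrative.

```python
import tensorflow as tf

# 50,000 reviews with a 50-50 train/test split, keeping only the 5000 most frequent words.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=5000)

# Pad/truncate every review to exactly 500 word indices.
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=500)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=500)

print(x_train.shape, x_test.shape)  # (25000, 500) (25000, 500)
```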
A word-embedding vector of length 32 was used to encode each word from the vocabulary of size 5000 (Bengio et al., 2003; Mikolov et al., 2013). Once each word is converted into an embedded vector, the neural-network architecture is the same as in the previous experiment, but with 32 inputs, 100 nodes in the recurrent layer, and two output nodes. Each embedded word of a review is fed to the RNN one-by-one, making the sequence length ${n_{\mathrm{t}}}=500$. Only the final output matrix of the neural network, $Y^{(500)}$, is observed. Results are shown in Fig. 10 and are summarised in Table 3, and show that the target-space method’s performance slightly exceeds that of the LSTM network, and significantly exceeds that of ordinary neural networks trained in weight space.

Figure 10: Results for Movie Sentiment Analysis RNN Problem.

Algorithm / Network Type | Best Test Accuracy | Average GPU time per Epoch (s)
---|---|---
Weight Space | $79.0(\pm 5.2)\%$ | 51.3
Weight Space + LSTM | $87.3(\pm 0.6)\%$ | 111.3
Target Space | $87.7(\pm 0.2)\%$ | 64.9

Table 3: Results for Movie Sentiment Analysis RNN Problem

All neural networks were trained using Adam with learning rate 0.001, and mini-batch sizes of ${n_{\mathrm{b}}}=40$. The target-space algorithm used $\lambda=0.001$. Weights and targets were initially randomised as in the previous subsection. Word embeddings were also initially randomised (using a normal distribution with $\mu=0$ and $\sigma=0.1$). Hence all weight and target matrices, and the embedding vectors, were learned in an end-to-end training process. To customise the target-space method to handle word embeddings efficiently, a fixed sequence of target-space input matrices $\overline{X}^{(t)}$ was chosen, for a sequence length of just ${\overline{n}_{\mathrm{t}}}=60$, and mini-batch size ${\overline{n}_{\mathrm{b}}}=40$. For efficiency, it was chosen that these matrices would represent some already-embedded word sequences. Hence each matrix $\overline{X}^{(t)}\in\mathbb{R}^{32\times 40}$, for $t=1,\ldots,60$. Each of the $\overline{X}^{(t)}$ matrices was generated using a uniform random distribution in the range [-1,1], and then held constant throughout training. The lower sequence length ${\overline{n}_{\mathrm{t}}}=60$ improves the algorithmic complexity factor (given at the end of Section 4.1), and results in a more competitive target-space training time in Table 3. Even though this sequence length (${\overline{n}_{\mathrm{t}}}=60$) was less than the true sequence length (${n_{\mathrm{t}}}=500$), the combination of fixed matrices $\overline{X}^{(t)}$ and target matrices $T_{j}^{(t)}$ provides enough information to define the weight matrices ${W}_{{[0:{j}]}}$ unambiguously using Alg. 5; even though $\overline{X}^{(t)}$ are fixed random matrices, and therefore do not conform to any valid movie-review style of writing. Hence the learning gradient $\frac{\partial{L}}{\partial{\vec{w}}}$ (which now includes the gradient of the learnable embedding matrix, $W_{embed}\in\mathbb{R}^{5000\times 32}$), can be calculated in weight space, using the full sequence lengths (500), and then converted to a target-space gradient $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}$, using Alg. 5, followed by automatic differentiation. To compute the gradient $\frac{\partial{L}}{\partial{W_{embed}}}$ in target space, in order to optimise those learnable variables too, we just used its value in weight space, without any modification.
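For reference, a minimal sketch of the ordinary weight-space baseline for this task (a learnable embedding of length 32, 100 recurrent $\tanh$ nodes, two softmax outputs, Adam with learning rate 0.001 and mini-batches of 40) is given below; the use of tf.keras and of `SimpleRNN` as the recurrent cell are assumptions, and the target-space reparameterisation with the fixed $\overline{X}^{(t)}$ matrices (Alg. 5) is not shown.

```python
import tensorflow as tf

baseline = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),  # learnable W_embed (5000 x 32)
    tf.keras.layers.SimpleRNN(100, activation="tanh"),         # 100 recurrent nodes
    tf.keras.layers.Dense(2, activation="softmax"),            # positive / negative
])
baseline.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                 loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Illustrative usage with the padded IMDB arrays from the earlier sketch; the epoch count is arbitrary.
# baseline.fit(x_train, y_train, batch_size=40, epochs=5, validation_data=(x_test, y_test))
```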
## 6 Conclusions

The target-space method provides an alternative search space in which to train deep and recurrent neural networks. The theory and experiments indicate that the loss-function surfaces being optimised are indeed smoother and easier to optimise in target space than in weight space. This increased smoothness potentially leads to easier solution of problems and to better generalisation capabilities in the final neural networks produced.

Using target space comes at an added computational expense. In fully connected networks, where the batch sizes for $X$ and $\overline{X}$ roughly match, this is usually a modest constant cost of approximately 3 or 4 times as much computation per training iteration. With CNNs it can be more, being around 7 times in the CNN architectures considered in this paper, and more so if wider convolutional kernels are used. With the RNN experiments, which can be considered as extremely deep and narrow networks, the timings were of a similar order of magnitude between weight space and target space. It is hoped that by careful choice of architecture, focusing on deeper networks with narrower hidden layers (possibly with several narrower layers running in parallel, which has already been proven as a powerful design by Xie et al., 2017, in their “ResNeXt” CNN design), and avoiding pattern-by-pattern learning, these costs can be minimised. It has been shown how to combine mini-batching with target space. The lack of mini-batching has historically been a major Achilles heel in the adoption of some previous sophisticated optimisers (for example conjugate gradients or Levenberg-Marquardt) with very large datasets.

Target-space methods are particularly promising in recurrent neural-network environments. In the examples given, problems with sequence lengths that were previously intractable have been solved, and the LSTM results were surpassed in a natural-language problem. This is despite the fact that LSTM networks have extra features, such as memory gates, which make the learning task easier; target-space learning has still managed to make ordinary RNNs outperform them. In the feed-forward problems given, target space has consistently produced better generalisation in deeper neural networks. The theoretical motivation for target space, namely that using targets should be able to untangle the cascades of changes caused during training, with a beneficial outcome, appears to be borne out. Hence target space aims to directly address the recognised “exploding-gradients” problem which exists in deep learning. Regarding a hypothetical future of neural networks being able to produce simple programs similar to those formed by human programmers, we hypothesise that whenever a neural network gets to a really interesting point of training, the neural activations will often all be very close to their firing thresholds, and the exploding-gradients problem becomes really significant in blocking further learning. For example, if the neural-network training process had somehow successfully managed to build a series of interlocking XOR gates, which were almost all working well together so as to implement a conventional computer program out of those logic gates, then the scrambling of behaviour from any potential infinitesimal weight change would always make learning destabilise in weight space.
The target-space approach is designed to be helpful in these circumstances, and would seem to have more chance of making further progress than a simple weight-space search would. Our experimental results show that recurrent neural networks trained in target space, over long time sequences combined with data processing, outperform the equivalent LSTM networks. Hence it seems that in those problems at least, the exploding-gradients problem is more significant than vanishing gradients; at least when Adam is allowed to accelerate the small gradients in target space. This is particularly paradoxical when it is noted that the objective of the target-space cascade untangling is to dampen down learning gradients even more, thus amplifying the vanishing-gradients problem.

Many significant deep-learning innovations exist in prior published work. These include the closely-related method of batch normalisation, plus modern activation functions, optimisers, and weight-initialisation techniques. Many of these are more computationally efficient than target space, but are maybe slightly less effective; and some can be combined with target space. Sophisticated neural architectures, such as LSTM, CNNs, and more recently, attention models, Differentiable Neural Computers and Neural Turing Machines (Graves et al., 2014, 2016), exist, which all add to neural-network functionality, and which could all in principle be trained in target space. So in final conclusion, the target-space method seems to be a powerful additional tool which has tremendous potential for the enhancement of deep learning.

## Appendix A Target-Space Algorithmic Complexity Calculations

In this appendix we derive the algorithmic complexity for the main target-space algorithms. In these derivations, we ignore the computation of activation functions, and matrix additions, assuming these are dwarfed by matrix-multiplication operations.

### A.1 Computational Complexity for Fully-Connected Feed-forward Networks

First we consider the main target-space algorithm for feed-forward neural networks (i.e. Algorithms 2-3). For a given layer $j$, the input matrix to that layer is ${A}_{{[0:{j})}}$, the weight matrix is ${W}_{{[0:{j}]}}$ and the target matrix is $T_{j}$. For brevity, we will denote these three matrices without subscripts, as $A$, $W$ and $T$. Let $n_{\mathrm{i}}$ be shorthand for the number of inputs to the layer (i.e. the number of rows in $A$) and let $n_{\mathrm{o}}$ be the number of outputs from the layer (i.e. the number of rows in $T$). Since $A\in\mathbb{R}^{n_{\mathrm{i}}\times{\overline{n}_{\mathrm{b}}}}$, if ${\overline{n}_{\mathrm{b}}}>n_{\mathrm{i}}$ then direct multiplication to form the Gramian $AA^{T}$ will take ${n_{\mathrm{i}}}^{2}{\overline{n}_{\mathrm{b}}}$ floating-point operations (flops). Assuming matrix inversion takes roughly $n^{3}$ flops, and since the Gramian is of shape $n_{\mathrm{i}}\times n_{\mathrm{i}}$, the formation of $(AA^{T}+\lambda I)^{-1}$ will take a further $(n_{\mathrm{i}})^{3}$ flops. The formation of the product with $A^{T}$ in equation (7) will take a further $(n_{\mathrm{i}})^{2}{\overline{n}_{\mathrm{b}}}$ flops. Since $T\in\mathbb{R}^{n_{\mathrm{o}}\times{\overline{n}_{\mathrm{b}}}}$, the multiplication by $T$ in equation (6) will take a further $n_{\mathrm{i}}n_{\mathrm{o}}{\overline{n}_{\mathrm{b}}}$ flops.
Hence summing these four terms gives the total time to form the pseudoinverse and calculate the weight matrix in (6), as $(n_{\mathrm{i}})^{3}+2(n_{\mathrm{i}})^{2}{\overline{n}_{\mathrm{b}}}+n_{\mathrm{i}}n_{\mathrm{o}}{\overline{n}_{\mathrm{b}}}$. If however ${\overline{n}_{\mathrm{b}}}<n_{\mathrm{i}}$, then the matrix $A$ is taller than it is wide, and (7) can be rearranged using the Woodbury matrix identity into an equivalent but more efficient form:

$A^{\dagger}:=(A^{T}{A}+\lambda I)^{-1}{A}^{T}.$ (23)

If this version is used, then the computational complexity is identically derived, resulting in the same flop-count expression but with all occurrences of $n_{\mathrm{i}}$ and ${\overline{n}_{\mathrm{b}}}$ swapped. Hence the resulting overall flop count for calculating $W$ by a pseudoinverse, assuming the faster of the two equations (7) and (23) is used, is

$\text{Flop count for $W$ calculation}=\begin{cases}(n_{\mathrm{i}})^{3}+2(n_{\mathrm{i}})^{2}{\overline{n}_{\mathrm{b}}}+n_{\mathrm{i}}n_{\mathrm{o}}{\overline{n}_{\mathrm{b}}}&\text{if $n_{\mathrm{i}}<{\overline{n}_{\mathrm{b}}}$}\\\ ({\overline{n}_{\mathrm{b}}})^{3}+2({\overline{n}_{\mathrm{b}}})^{2}n_{\mathrm{i}}+n_{\mathrm{i}}n_{\mathrm{o}}{\overline{n}_{\mathrm{b}}}&\text{otherwise}\end{cases}$ (24)

Once the $W$ matrix for the layer is formed, the feed-forward calculation of the product $S_{j}=WA$ takes place, which is the same computational complexity as is required in ordinary weight space, i.e. requiring

$\text{Flop count for $S_{j}$ calculation}=n_{\mathrm{i}}n_{\mathrm{o}}{\overline{n}_{\mathrm{b}}}$ (25)

If it can be assumed that the number of nodes in each layer of the neural network is approximately the same, so that $d_{j}=\overline{d}$ for all $j$, and no shortcut connections are present, then we can assume that $n_{\mathrm{i}}\approx n_{\mathrm{o}}\approx\overline{d}$ (ignoring the single input from the bias node). If, as advocated in Section 3.1, we further assume that the size of the batch ${\overline{n}_{\mathrm{b}}}$ is larger than $\overline{d}$ (so that also ${\overline{n}_{\mathrm{b}}}>n_{\mathrm{i}}$), then summing the expressions in (24) and (25) and simplifying shows that the flop count for each layer of the target-space Alg. 2 is bounded above by $4\overline{d}^{2}{\overline{n}_{\mathrm{b}}}$. In comparison, the weight-space forward-pass algorithm for a single layer is just given by (25), i.e. $\overline{d}^{2}{n_{\mathrm{b}}}$ flops. Hence the ratio of computation between target space and weight space is approximately upper-bounded by $(4{\overline{n}_{\mathrm{b}}}/{n_{\mathrm{b}}})$. Since automatic differentiation produces backward computations of similar algorithmic complexity to the forward pass, the overall computation ratio for forward-and-backward passes between target space and weight space, when summed over all layers, is still approximately $(4{\overline{n}_{\mathrm{b}}}/{n_{\mathrm{b}}})$.

### A.2 Computational Complexity for a CNN layer in Target Space

We now derive the computational complexity of the CNN target-space layer. Notate the convolutional kernel width and height by $k_{\mathrm{w}}$ and $k_{\mathrm{h}}$ respectively, and the number of input and output channels by $n_{\mathrm{ic}}$ and $n_{\mathrm{oc}}$ respectively. Let $n_{\mathrm{patch}}$ be the number of image patches to be taken from each image.
Since the number of inputs operated on by the flattened $W$ matrix is $n_{\mathrm{i}}=k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}$, and the number of outputs is $n_{\mathrm{o}}=n_{\mathrm{oc}}$, and the number of columns in the patches matrix $A$ is $n_{\mathrm{b^{\prime}}}={\overline{n}_{\mathrm{b}}}n_{\mathrm{patch}}$, then substituting these factors into (24) gives a total flop count for the formation of $W$ as:

CNN Flop count for $W$ formation $\displaystyle=\begin{cases}(k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}})^{3}+2(k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}})^{2}n_{\mathrm{b^{\prime}}}+k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}n_{\mathrm{oc}}n_{\mathrm{b^{\prime}}}&\text{if $k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}<n_{\mathrm{b^{\prime}}}$}\\\ (n_{\mathrm{b^{\prime}}})^{3}+2(n_{\mathrm{b^{\prime}}})^{2}k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}+k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}n_{\mathrm{oc}}n_{\mathrm{b^{\prime}}}&\text{otherwise}\end{cases}$ (26)

In contrast, the weight-space CNN forward pass only requires the formation of $S$, where the flop count is given by (25), which equates to only $k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}}n_{\mathrm{oc}}{n_{\mathrm{b}}}n_{\mathrm{patch}}$ flops. If we argue as in Section A.1 that $n_{\mathrm{b^{\prime}}}>n_{\mathrm{i}}$ (which is quite probable with the large number of image patches being processed by a CNN), and $n_{\mathrm{oc}}\approx n_{\mathrm{ic}}$, then the flop count in target space is bounded above by

CNN Flop count for $W$ formation $\displaystyle\lessapprox 3(k_{\mathrm{h}}k_{\mathrm{w}}n_{\mathrm{ic}})^{2}n_{\mathrm{b^{\prime}}}+k_{\mathrm{h}}k_{\mathrm{w}}(n_{\mathrm{ic}})^{2}n_{\mathrm{b^{\prime}}}$ $\displaystyle=(3(k_{\mathrm{h}}k_{\mathrm{w}})+1)k_{\mathrm{h}}k_{\mathrm{w}}(n_{\mathrm{ic}})^{2}{\overline{n}_{\mathrm{b}}}n_{\mathrm{patch}}$ (27)

and hence the ratio of the flop count in target space to that in weight space is bounded above by approximately $(3(k_{\mathrm{h}}k_{\mathrm{w}})+1){\overline{n}_{\mathrm{b}}}/{n_{\mathrm{b}}}$.

## Appendix B Derivation of Algorithm 3

### B.1 Preliminary Definitions

Single-Entry Matrix: Define $[J^{ij}]$ to be the single-entry matrix whose element at row $m$ and column $n$ is equal to $\begin{cases}1&\text{if $m=i$ and $n=j$}\cr 0&\text{otherwise}\end{cases}$ (Petersen and Pedersen, 2012). This is useful when differentiating a matrix with respect to one of its elements, since $\frac{\partial{A}}{\partial{A^{ij}}}=[J^{ij}]$, with $[J^{ij}]$ having the same dimensions as $A$.

Raised Indices Notation: Define upper indices (without parentheses) after a matrix variable to indicate the matrix element, so that for example $A^{ij}$ is the element of $A$ with row index $i$ and column index $j$. Define raised indices $[ij]$ in square brackets after a scalar function $f(i,j)$ to mean the whole matrix whose element at row $i$ and column $j$ is $f(i,j)$. For example, $\left({A^{ij}}\right)^{[ij]}\equiv A$, and $\left({A^{ji}}\right)^{[ij]}\equiv A^{T}$.

Frobenius Inner Product, $\left<A,B\right>_{\\!F}$: For two $m\times n$ matrices $A$ and $B$, define $\left<A,B\right>_{\\!F}:=\sum_{\forall i,j}A^{ij}B^{ij}$. This inner product is useful when using the chain rule; for example, if $X$, $Y$ and $Z$ are matrices with $X=X(Y)$ and $Y=Y(Z)$ then $\frac{\partial{X^{mn}}}{\partial{Z^{ij}}}=\left<\frac{\partial{X^{mn}}}{\partial{Y}},\frac{\partial{Y}}{\partial{Z^{ij}}}\right>_{\\!F}$.
Furthermore, if $L(X)$ is a scalar function, then $\frac{\partial{L}}{\partial{Y}}=\left({\left<\frac{\partial{X}}{\partial{Y^{ij}}},\frac{\partial{L}}{\partial{X}}\right>_{\\!F}}\right)^{[ij]}$. ### B.2 Basic Lemma for Combining Frobenius Inner Product with Single-entry Matrix A useful result for combining the inner product with $[J^{ij}]$ is $\left({\left<A[J^{ij}]B,C\right>_{\\!F}}\right)^{[ij]}=A^{T}CB^{T}$ (28) since $\left<A[J^{ij}]B,C\right>_{\\!F}=\sum_{mn}(A[J^{ij}]B)^{mn}C^{mn}=\sum_{mn}\left(\sum_{pq}A^{mp}[J^{ij}]^{pq}B^{qn}\right)C^{mn}$ $=\sum_{mn}(A^{mi}B^{jn}C^{mn})=(A^{T}CB^{T})^{ij}$. Similarly, $\left({\left<A[J^{ij}]^{T}B,C\right>_{\\!F}}\right)^{[ij]}=BC^{T}A$ (29) ### B.3 Matrix Differentiation Differentiating a scalar by a matrix gives an identically dimensioned matrix, e.g. $\left(\frac{\partial{L}}{\partial{X}}\right)^{ij}:=\frac{\partial{L}}{\partial{X^{ij}}}$. Similarly for differentiating a matrix by a scalar: $\left(\frac{\partial{X(a)}}{\partial{a}}\right)^{ij}:=\frac{\partial{X^{ij}(a)}}{\partial{a}}$. Matrix differentiation follows the usual product rule: $\displaystyle\frac{\partial{AB}}{\partial{X^{mn}}}$ $\displaystyle=\frac{\partial{A}}{\partial{X^{mn}}}B+A\frac{\partial{B}}{\partial{X^{mn}}}.$ (30) For example, if $A$,$B$ and $C$ are constant matrices, then $\displaystyle\frac{\partial{AXBXC}}{\partial{X^{mn}}}$ $\displaystyle=A[J^{mn}]BXC+AXB[J^{mn}]C.$ The derivative of an inverse matrix $A^{-1}$ is $\frac{\partial{A^{-1}}}{\partial{A^{ij}}}=-A^{-1}[J^{ij}]A^{-1}$ (Brookes, 2011). Combining this with the product rule gives $\displaystyle\frac{\partial{(BB^{T}+\lambda I)^{-1}}}{\partial{B^{ij}}}$ $\displaystyle=-(BB^{T}+\lambda I)^{-1}\left([J^{ij}]B^{T}+B[J^{ij}]^{T}\right)(BB^{T}+\lambda I)^{-1}$ $\displaystyle=-(BB^{T}+\lambda I)^{-1}[J^{ij}]B^{\dagger}-(B^{\dagger})^{T}[J^{ij}]^{T}(BB^{T}+\lambda I)^{-1}$ (31) And so, $\displaystyle\frac{\partial{B^{\dagger}}}{\partial{B^{ij}}}=$ $\displaystyle\frac{\partial{B^{T}(BB^{T}+\lambda I)^{-1}}}{\partial{B^{ij}}}$ (by (7)) $\displaystyle=$ $\displaystyle[J^{ij}]^{T}(BB^{T}+\lambda I)^{-1}-B^{T}\frac{\partial{(BB^{T}+\lambda I)^{-1}}}{\partial{B^{ij}}}$ (by product rule) $\displaystyle=$ $\displaystyle[J^{ij}]^{T}(BB^{T}+\lambda I)^{-1}-\big{(}B^{\dagger}[J^{ij}]B^{\dagger}+B^{\dagger}B[J^{ij}]^{T}(BB^{T}+\lambda I)^{-1}\big{)}$ (by (31)) $\displaystyle=$ $\displaystyle(I-B^{\dagger}B)[J^{ij}]^{T}(BB^{T}+\lambda I)^{-1}-B^{\dagger}[J^{ij}]B^{\dagger}$ (32) ### B.4 Ordered Partial Derivatives Define the notation $\frac{\partial{}}{\partial{{}^{*}}}$ to be the ordered partial derivatives (Werbos, 1974), which take into account cascading changes to all later layers’ weights and activations by Algorithm 2. For example $\frac{\partial{\vec{w}}}{\partial{{}^{*}A_{j}^{mn}}}$ describes how all the layers’ weights would change according to Algorithm 2 if a small perturbation was forced to occur to $A_{j}^{mn}$. For a layer $j$, define $\delta{A_{j}}:=\left({\frac{\partial{\vec{w}}}{\partial{{}^{*}A_{j}^{mn}}}\frac{\partial{L}}{\partial{\vec{w}}}}\right)^{[mn]}$. This matrix accounts for what effect a small change to $A_{j}$ will have on $L$, solely through the effect of cascading changes to later layers’ weights via alg. 2. 
Note that $\delta{A}$ is subtly different from $\frac{\partial{L}}{\partial{{}^{*}A}}$ since at the final layer $\delta{A_{n_{\mathrm{L}}}}=0$ (since there are no later layers whose weights can change), but $\frac{\partial{L}}{\partial{{}^{*}A_{n_{\mathrm{L}}}}}=\frac{\partial{L}}{\partial{Y}}\neq 0$. Similarly, define $\delta{S_{j}}:=\left({\frac{\partial{\vec{w}}}{\partial{{}^{*}S_{j}^{mn}}}\frac{\partial{L}}{\partial{\vec{w}}}}\right)^{[mn]}$ and $\delta{{W}_{{[0:{j}]}}}:=\left({\frac{\partial{\vec{w}}}{\partial{{}^{*}{W}_{{[0:{j}]}}^{mn}}}\frac{\partial{L}}{\partial{\vec{w}}}}\right)^{[mn]}$. ### B.5 Derivation of Algorithm 3 Define $\delta{{A}_{{[0:{j})}}}$ to be the composite of $\delta{A_{j}}$ matrices in the same way that the ${A}_{{[0:{j})}}$ matrices are composed of $A_{j}$ matrices, analogous to Eq. (2). The matrices ${W}_{{[0:{j}]}}$, ${A}_{{[0:{j})}}$, $T_{j}$, $Y_{j}$, $A_{j}$ and $S_{j}$ are for an arbitrary layer $j$. Throughout the following, all these matrices refer to the same subscripted value of $j$, therefore we omit this subscript to ease presentation. To avoid the clash of variable names between $A_{j}$ and ${A}_{{[0:{j})}}$, we define $B\equiv{A}_{{[0:{j})}}$ and $A\equiv A_{j}$ as shorthand. First we give useful results for $\frac{\partial{W}}{\partial{B^{mn}}}$ and $\delta{W}$: $\displaystyle\frac{\partial{W}}{\partial{B^{mn}}}=$ $\displaystyle T\big{[}(I-B^{\dagger}B)[J^{mn}]^{T}(BB^{T}+\lambda I)^{-1}-B^{\dagger}[J^{mn}]B^{\dagger}\big{]}$ (by (6) and (32)) $\displaystyle=$ $\displaystyle(T-S)[J^{mn}]^{T}(BB^{T}+\lambda I)^{-1}-W[J^{mn}]B^{\dagger}$ (by (6) and (4)) (33) The derivation for $\delta{W}=\left({\frac{\partial{\vec{w}}}{\partial{{}^{*}{W}_{{[0:{j}]}}^{mn}}}\frac{\partial{L}}{\partial{\vec{w}}}}\right)^{[mn]}$ in Equation (34) starts by adding two terms. The first term, $\frac{\partial{L}}{\partial{W}}$, accounts for the contribution from the changing weights in that particular layer. The second term, $\left({\left<\delta{S},\frac{\partial{S}}{\partial{W^{pq}}}\right>_{\\!F}}\right)^{[pq]}$, accounts for the cascading changes to all later layers’ weights (by the definition of $\delta{S}$). $\displaystyle\delta{W}=$ $\displaystyle\frac{\partial{L}}{\partial{W}}+\left({\left<\delta{S},\frac{\partial{S}}{\partial{W^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ $\displaystyle=$ $\displaystyle\frac{\partial{L}}{\partial{W}}+\left({\left<\delta{S},[J^{mn}]B\right>_{\\!F}}\right)^{[mn]}$ (by (4) and (30)) $\displaystyle=$ $\displaystyle\frac{\partial{L}}{\partial{W}}+(\delta{S})B^{T}$ (by (28)) (34) To derive a formula that calculates $\frac{\partial{L^{\prime}}}{\partial{T}}$ for a particular layer given $\frac{\partial{L}}{\partial{\vec{w}}}$, we first note that changing $T^{mn}$ for one layer will initially just change the weights of that layer, according to $\frac{\partial{W}}{\partial{T^{mn}}}$. Then cascading changes to the later layers’ weights will occur via Algorithm 2, as a consequence of this initial single layer’s change of weights, and therefore all these cascading effects are represented by $\delta{W}$. 
Combining these two factors with the Frobenius inner-product gives:

$\displaystyle\frac{\partial{L^{\prime}}}{\partial{T}}=$ $\displaystyle\left({\left<\delta{W},\frac{\partial{W}}{\partial{T^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ $\displaystyle=$ $\displaystyle\left({\left<\delta{W},[J^{mn}]B^{\dagger}\right>_{\\!F}}\right)^{[mn]}$ (by (6) and (30)) $\displaystyle=$ $\displaystyle\left(\frac{\partial{L}}{\partial{W}}+(\delta{S})B^{T}\right)(B^{\dagger})^{T}$ (by (28) and (34)) (35)

This requires calculation of the $\delta{S}$ matrices for each layer. Since $A^{mn}=g(S^{mn})$, the chain rule gives

$\delta{S}=\delta{A}\odot g^{\prime}(S)$ (36)

The derivation for $\delta{B}$ is given in Equation (37). The first line of this derivation consists of two terms which are present, respectively, because changing $B$ will change the weights for that layer directly (via the equation $W=TB^{\dagger}$), and will also change the sums for that layer directly (via the equation $S=WB$). The effects of these two changes are what the terms $\delta{W}$ and $\delta{S}$, respectively, are defined to represent.

$\displaystyle\delta{B}=$ $\displaystyle\left({\left<\delta{W},\frac{\partial{W}}{\partial{B^{mn}}}\right>_{\\!F}+\left<\delta{S},\frac{\partial{S}}{\partial{B^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ $\displaystyle=$ $\displaystyle\left({\left<\delta{W},\left((T-S)[J^{mn}]^{T}(BB^{T}+\lambda I)^{-1}-W[J^{mn}]B^{\dagger}\right)\right>_{\\!F}+\left<\delta{S},W[J^{mn}]\right>_{\\!F}}\right)^{[mn]}$ (by (33), (4) and (30)) $\displaystyle=$ $\displaystyle(BB^{T}+\lambda I)^{-1}(\delta{W})^{T}(T-S)-W^{T}(\delta{W})(B^{\dagger})^{T}+W^{T}\delta{S}$ (by (28) and (29)) $\displaystyle=$ $\displaystyle W^{T}\left(\delta{S}-\frac{\partial{L^{\prime}}}{\partial{T}}\right)+\bigg{[}(BB^{T}+\lambda I)^{-1}\left(\frac{\partial{L}}{\partial{W}}+(\delta{S})B^{T}\right)^{T}(T-S)\bigg{]}$ (by (35) and (34)) (37)

This enables us to find $\delta{B}$ from $\delta{S}$ for a particular layer. Since $\delta{{A}_{{[0:{j})}}}\equiv\delta{B}$ is composed of $\delta{A_{j-1}}$, and $\delta{A_{n_{\mathrm{L}}}}=0$, we can calculate the $\delta{A}$ matrices backwards, layer by layer. Thus equations (35), (36) and (37) give lines 4, 3, and 5 of Alg. 3 respectively.

## Appendix C Proof that a stationary point in target space corresponds to a stationary point in weight space

In this appendix we show that if ${\vec{\tau}}^{*}$ is a stationary point for the target-space problem, i.e. $\left.\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}\right|_{\vec{\tau}={\vec{\tau}}^{*}}=0$, then the corresponding vector of weights ${\vec{w}}^{*}$ obtained from ${\vec{\tau}}^{*}$ through Algorithm 2 is a stationary point for the resulting weight-space problem, i.e. $\left.\frac{\partial{L}}{\partial{\vec{w}}}\right|_{\vec{w}={\vec{w}}^{*}}=0$. After a preliminary definition and two lemmas, the main theorem and proof follow.

Definition: Let

$\displaystyle\qquad A^{\ddagger}:=A+\lambda\,(A^{+})^{T},$ (38)

where $A^{+}$ denotes the (non-regularised) Moore-Penrose pseudoinverse (Golub and Van Loan, 2013, Section 5.5.2).

###### Lemma 1

For any real-valued matrix $A$, the following identity holds: $A\,A^{\dagger}A^{\ddagger}=A$.

Proof We use the singular value decomposition (SVD) to prove this. Let the shape of $A$ be $m\times n$, and the rank of $A$ be $r$.
Let the full SVD of $A$ be given by $\displaystyle A=USV^{T},$ (39) where the matrices $U\in\mathbb{R}^{m\times m}$, $S\in\mathbb{R}^{m\times n}$ and $V\in\mathbb{R}^{n\times n}$, the only non-zero elements of $S$ are on its leading diagonal, and where $U$ and $V$ are orthogonal. Since $A$ is rank $r$, the matrix $S$ will have its first $r$ diagonal elements as non-zero and the remaining elements all zero. Hence we can partition $S$ into block-matrix form as follows: $\displaystyle S=\begin{pmatrix}\Sigma&0\\\ 0&0\end{pmatrix},$ (40) where $\Sigma$ is a diagonal matrix of shape $r\times r$, and the zeros are rectangular matrices of appropriate shape so as to make $S\in\mathbb{R}^{m\times n}$. Since $\Sigma$ is square and full rank, its Moore-Penrose pseudoinverse simplifies into an ordinary inverse: $\displaystyle\Sigma^{+}=\Sigma^{-1}.$ (41) Using the SVD, and repeatedly cancelling orthogonal self-products such as $U^{T}U$ and $V^{T}V$, we can write: $\displaystyle A^{\,\dagger\,}$ $\displaystyle=VS^{T}(SS^{T}+\lambda I)^{-1}U^{T}$ (by (7)) (42) $\displaystyle A^{+}$ $\displaystyle=VS^{+}U^{T}$ (Moore-Penrose SVD) (43) $\displaystyle A^{\,\ddagger\,}$ $\displaystyle=U(S+\lambda S^{+})V^{T}$ (by (38) and (43)) (44) Substituting these into the left-hand side of the lemma’s identity, and cancelling orthogonal self-products, gives, $\displaystyle A\,A^{\dagger}A^{\ddagger}$ $\displaystyle=USS^{T}(SS^{T}+\lambda I)^{-1}(S+\lambda S^{+})V^{T}$ $\displaystyle=U\begin{pmatrix}\Sigma^{2}&0\\\ 0&0\end{pmatrix}\begin{pmatrix}(\Sigma^{2}+\lambda I)^{-1}&0\\\ 0&\lambda^{-1}I\end{pmatrix}\begin{pmatrix}\Sigma+\lambda\Sigma^{-1}&0\\\ 0&0\end{pmatrix}V^{T}$ (by (40) and (41)) $\displaystyle=U\begin{pmatrix}\Sigma^{2}(\Sigma^{2}+\lambda I)^{-1}(\Sigma+\lambda\Sigma^{-1})&0\\\ 0&0\end{pmatrix}V^{T}$ (block multiplication) $\displaystyle=U\begin{pmatrix}\Sigma(\Sigma^{2}+\lambda I)^{-1}(\Sigma^{2}+\lambda\Sigma\Sigma^{-1})&0\\\ 0&0\end{pmatrix}V^{T}$ (commute diagonal matrices) $\displaystyle=A$ (by (39) and (40)) This proves the lemma. Remark: Lemma 1 is analogous to the identity for the non-regularised Moore- Penrose pseudoinverse given by $A\,A^{+}A=A$ (Golub and Van Loan, 2013, Section 5.5.2), which also holds for any real-valued matrix $A$. ###### Lemma 2 When the weights are calculated from the targets by Algorithm 2, given any layer $j$, such that $j={n_{\mathrm{L}}}$ or $\frac{\partial{L}}{\partial{{W}_{{[0:{k}]}}}}=0$ for all $k>j$, we have $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}=0\Rightarrow\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}=0$. 
Proof First, let us write explicitly the derivative of the loss function $L$ with respect to the weights of layer $j$:

$\displaystyle\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}$ $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}},\frac{\partial{S_{j}}}{\partial{{W}_{{[0:{j}]}}^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}},[J^{mn}]{A}_{{[0:{j})}}\right>_{\\!F}}\right)^{[mn]}$ (by (4) and (30)) $\displaystyle=\frac{\partial{L}}{\partial{S_{j}}}\,{A}_{{[0:{j})}}^{T},$ (by (28)) (45)

And similarly, let us explicitly state its derivative with respect to the targets of the same layer:

$\displaystyle\frac{\partial{L^{\prime}}}{\partial{T_{j}}}$ $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}}+\sum_{k>j}\left({\left<\frac{\partial{L}}{\partial{{W}_{{[0:{k}]}}}},\frac{\partial{{W}_{{[0:{k}]}}}}{\partial{S_{j}^{pq}}}\right>_{\\!F}}\right)^{[pq]},\frac{\partial{S_{j}}}{\partial{T_{j}^{mn}}}\right>_{\\!F}}\right)^{[mn]}.$ (46)

In this equation, instead of following (35), we have used the chain rule to produce an expression that explicitly connects $\frac{\partial{L^{\prime}}}{\partial{T}}$ to $\frac{\partial{L}}{\partial{S}}$. The summation in (46) evaluates to $\delta{S}_{j}$ defined in Section B.4, which accounts for the effects of the weights in all later layers $k>j$ which will change by Alg. 2 as a result of a change to $T_{j}$. Following on from the initial assumptions of this lemma, namely that either $j={n_{\mathrm{L}}}$ or the condition $\frac{\partial{L}}{\partial{{W}_{{[0:{k}]}}}}=0$ holds for all $k>j$, the summation vanishes and (46) reduces to:

$\displaystyle\frac{\partial{L^{\prime}}}{\partial{T_{j}}}$ $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}},\frac{\partial{S_{j}}}{\partial{T_{j}^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}},\frac{\partial{\left(T_{j}{\left({A}_{{[0:{j})}}\right)^{\dagger}}{A}_{{[0:{j})}}\right)}}{\partial{T_{j}^{mn}}}\right>_{\\!F}}\right)^{[mn]}$ (by (4) and (6)) $\displaystyle=\left({\left<\frac{\partial{L}}{\partial{S_{j}}},[J^{mn}]{\left({A}_{{[0:{j})}}\right)^{\dagger}}{A}_{{[0:{j})}}\right>_{\\!F}}\right)^{[mn]}$ (by (30)) $\displaystyle=\frac{\partial{L}}{\partial{S_{j}}}\,{A}_{{[0:{j})}}^{T}\,\left({A}_{{[0:{j})}}^{T}\right)^{\dagger}.$ (by (28)) (47)

Aiming for a contradiction, let us assume that $\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}\neq 0$. Then one can choose an appropriately sized column vector $u$ such that $\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}\,u\neq 0$. We now consider a second vector $v=\big{(}{A}_{{[0:{j})}}^{T}\big{)}^{\ddagger}u$, using (38), and write:

$\displaystyle\frac{\partial{L^{\prime}}}{\partial{T_{j}}}\,v$ $\displaystyle=\frac{\partial{L}}{\partial{S_{j}}}\,{A}_{{[0:{j})}}^{T}\,\big{(}{A}_{{[0:{j})}}^{T}\big{)}^{\dagger}\,v$ (by (47)) (48) $\displaystyle=\frac{\partial{L}}{\partial{S_{j}}}\,{A}_{{[0:{j})}}^{T}\,\big{(}{A}_{{[0:{j})}}^{T}\big{)}^{\dagger}\,\big{(}{A}_{{[0:{j})}}^{T}\big{)}^{\ddagger}\,u$ (by the definition of $v$) $\displaystyle=\frac{\partial{L}}{\partial{S_{j}}}\,{A}_{{[0:{j})}}^{T}\,u$ (by Lemma 1) $\displaystyle=\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}\,u.$ (by (45)) (49)

While we initially assumed that the last line (49) is non-zero, the first line (48) must be zero, due to the hypothesis $\frac{\partial{L^{\prime}}}{\partial{T_{j}}}=0$. This contradiction proves the lemma.
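The two pseudoinverse facts relied upon above, the Woodbury-rearranged form (23) and Lemma 1's identity, are straightforward to confirm numerically; the following is a small NumPy sanity check using an arbitrarily chosen rank-deficient matrix and $\lambda=0.1$ (both values are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.1

# Rank-deficient test matrix A of shape m x n with rank r < min(m, n).
m, n, r = 6, 9, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Regularised pseudoinverse, Eq. (7), and its Woodbury-rearranged form, Eq. (23).
A_dag = A.T @ np.linalg.inv(A @ A.T + lam * np.eye(m))
A_dag_woodbury = np.linalg.inv(A.T @ A + lam * np.eye(n)) @ A.T
assert np.allclose(A_dag, A_dag_woodbury)

# A-double-dagger, Eq. (38), built from the ordinary Moore-Penrose pseudoinverse.
A_ddag = A + lam * np.linalg.pinv(A).T

# Lemma 1: A A_dagger A_ddagger = A, even though A is rank deficient.
assert np.allclose(A @ A_dag @ A_ddag, A)
print("Eq. (23) and Lemma 1 verified numerically.")
```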
###### Theorem 3

When the weights are calculated from the targets by Algorithm 2, we have $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0\implies\frac{\partial{L}}{\partial{\vec{w}}}=0$.

Proof We shall prove this result by induction, by first showing it holds for the last layer (used as the base case), and then showing that if it holds for all subsequent layers then it must also hold for the current layer (the inductive step). Lemma 2 explicitly handles the case where $j={n_{\mathrm{L}}}$, thus the base-case claim, that $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0$ implies $\frac{\partial{L}}{\partial{{W}_{{[0:{{n_{\mathrm{L}}}}]}}}}=0$, is true. Next we consider the inductive step, i.e. that if $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0$ and $\frac{\partial{L}}{\partial{{W}_{{[0:{k}]}}}}=0$ for all $k>j$, then $\frac{\partial{L}}{\partial{{W}_{{[0:{j}]}}}}$ must be zero. Again, Lemma 2 applies here, since it explicitly applies to $\frac{\partial{L}}{\partial{{W}_{{[0:{k}]}}}}=0$ for all $k>j$, and therefore the inductive step is also true. This completes the proof by induction.

This final theorem concludes the proof that $\frac{\partial{L^{\prime}}}{\partial{\vec{\tau}}}=0$ implies $\frac{\partial{L}}{\partial{\vec{w}}}=0$, i.e. that a stationary point for the target-space problem is also a stationary point for the corresponding weight-space problem obtained through Algorithm 2.

## References
# A study of the hydrostatic mass bias dependence and evolution within The Three Hundred clusters

Giulia Gianfagna,1,2 Elena Rasia,3,4 Weiguang Cui,5,6 Marco De Petris,2 Gustavo Yepes,6,7 Ana Contreras-Santos,6 and Alexander Knebe,6,7,8 1INAF, Istituto di Astrofisica e Planetologia Spaziali, via Fosso del Cavaliere 100, 00133 Rome, Italy 2Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro, 5-00185 Roma, Italy 3IFPU - Institute for Fundamental Physics of the Universe, Via Beirut 2, 34014 Trieste, Italy 4INAF Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131, Trieste, Italy 5Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK 6Departamento de Física Teórica, Módulo 15, Facultad de Ciencias, Universidad Autónoma de Madrid, E-28049 Madrid, Spain 7Centro de Investigación Avanzada en Física Fundamental (CIAFF), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain 8International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009, Australia E-mail: (Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

We use a set of about 300 simulated clusters from The Three Hundred Project to calculate their hydrostatic masses and evaluate the associated bias by comparing them with the true cluster mass. Over a redshift range from 0.07 to 1.3, we study the dependence of the hydrostatic bias on redshift, concentration, mass growth, dynamical state, mass, and halo shapes. We find almost no correlation between the bias and any of these parameters. However, there is clear evidence that the scatter of the mass-bias distribution is larger for low-concentration objects, for high mass growth, and more generically for disturbed systems. Moreover, we carefully study the evolution of the bias of twelve clusters throughout a major-merger event. We find that the hydrostatic-mass bias follows a particular evolution track along the merger process: an initial significant increase of the bias at the beginning of the merger is followed by a constant plateau until the end of the merger, when there is a dramatic decrease in the bias before the cluster finally becomes relaxed again. This large variation of the bias is in agreement with the large scatter of the hydrostatic bias for dynamically disturbed clusters. These objects should be avoided in cosmological studies because their exact relaxation phase is difficult to predict, hence their mass bias cannot be trivially accounted for.

###### keywords: methods: numerical – galaxies: clusters: general – galaxies: clusters: intracluster medium – large-scale structure of Universe.

## 1 Introduction

Galaxy clusters are the most massive gravitationally-bound structures in the Universe and, as such, their mass plays a key role in the estimation of the cosmological parameters. The majority of their mass is composed of Dark Matter (DM, almost 80%); the remaining part is composed of stars (galaxies) and hot gas, which is diffused between the galaxies and is called the Intra-Cluster Medium, ICM (see Kravtsov & Borgani, 2012, for a review).
The mass of these objects in observations can be estimated in several ways: from galaxy kinematics, applying the virial theorem to the distribution of cluster galaxies (for example Li et al., 2021), or directly from the velocity distribution of cluster galaxies (Zwicky, 1937; Pratt et al., 2019; Tian et al., 2021; Hernández-Lang et al., 2022); from scaling laws, linking their total mass to several observable quantities, like X-ray luminosity or SZ (Sunyaev-Zeldovich) brightness (Nagarajan et al., 2018; Bulbul et al., 2019); from weak lensing (Von der Linden et al., 2014; Hoekstra et al., 2015; Okabe & Smith, 2016; Artis et al., 2022) and, recently, also by combining the mentioned methods and using machine learning (Ntampaka et al., 2015; De Andres et al., 2022). Another frequently used approach is to derive the cluster mass under the assumption of hydrostatic equilibrium (HE). This method uses temperature, density and pressure profiles of the hot gas extracted from X-ray observations alone or combined with SZ effect observations (see Pratt et al. 2019 for a recent review). Three assumptions are at the basis of the HE: the ICM must trace the cluster potential well, it must be spherically symmetric, and its pressure must be purely thermal. As shown in a recent review by Gianfagna et al. (2021), the HE mass evaluated in simulated samples on average underestimates the true mass. In this paper we investigate this subject further, and in particular the dependence of the bias on several other parameters that characterise the cluster dynamical state. Numerical simulation studies find that the bias has a negligible dependence on redshift (Piffaretti & Valdarnini, 2008; Le Brun et al., 2017; Henson et al., 2017; Gianfagna et al., 2021). On the contrary, observational data are consistent with a mass bias that decreases with redshift, see for instance Salvati et al. (2019) and Wicker et al. (2022) on Planck clusters. This dependence could be linked to a mass dependence of the bias, as clusters observed at higher redshifts tend to have a higher mass. However, the mass dependence is not detected in simulations, either using the spectroscopic-like weighted temperature (Mazzotta et al., 2004) to estimate the HE mass (Piffaretti & Valdarnini, 2008; Le Brun et al., 2017; Henson et al., 2017; Pearce et al., 2019; Ansarifard et al., 2020; Barnes et al., 2021) or the mass-weighted temperature (note that the spectroscopic-like weighted temperature is more similar to the X-ray temperature obtained from X-ray spectroscopic analysis of observed data). Several authors have also studied the dependence of the bias on the dynamical state; the general findings again point towards no correlation, even if there is a hint that disturbed objects have a larger scatter than relaxed ones (Piffaretti & Valdarnini, 2008; Rasia et al., 2012; Nelson et al., 2014; Henson et al., 2017; Ansarifard et al., 2020). By comparing results in the literature, it appears that the hydrostatic mass bias does not depend on various cosmological-simulation ingredients, such as the baryon physics included in the simulations, the mass resolution of the particles or the number of objects in the samples (Gianfagna et al., 2021). Given this consistency among simulated results, the discrepancy between simulation predictions and observational results on the redshift dependence of the bias is still puzzling.
This could be due to the smaller observational samples, to a different mass sampling at high redshift, and to the uncertainty in the estimation of the total physical mass in observations. The selection may also play a key role, for example the inclusion of disturbed objects. Indeed, all of the HE assumptions are violated during major merger events, which can induce strong dynamical perturbations, adding non-thermal support to the equilibrium budget (Nelson et al., 2012; Rasia et al., 2012, 2013; Angelinelli et al., 2020; Sereno et al., 2021). Therefore, the hydrostatic bias can significantly deviate from the expectation. In post-merger situations, the merging object might have already been destroyed and thus it cannot be distinguished as a separate structure. Applying the hydrostatic-equilibrium technique to these cases (assuming they are in a relaxed state) will lead to a misinterpretation. Studying the extreme cases in simulations, where the dynamical state is clearly identified, will allow us to better understand the situations when the HE assumption is strongly violated and how the HE mass bias is impacted by mergers.

The organisation of this paper is as follows: in Section 2 we introduce The Three Hundred set of galaxy clusters, and in Section 3 we give an overview of the hydrostatic equilibrium method. In Section 4 the ICM radial profiles are introduced, along with their analysis, and we introduce the parameters which will be used to characterise the dynamical state, the accretion and the geometry of clusters. In Section 5 we provide our results, and we conclude in Section 6.

## 2 The Three Hundred Project

This work is based on clusters simulated for The Three Hundred Project, introduced in Cui et al. (2018). This project is based on the resimulations of a set of 324 Lagrangian regions extracted from the MultiDark Planck (MDPL2) dark-matter-only (DM) cosmological simulation (Klypin et al., 2016) with different simulation codes. MDPL2 consists of a periodic cube of comoving length $1h^{-1}$ Gpc and $3840^{3}$ N-body particles. The Lagrangian regions are identified at $z=0$ around the most massive objects found in the parent simulation with a comoving radius of $15h^{-1}$ Mpc. The particles in those massive clusters and in their surroundings were traced back to $z=120$ to generate the initial conditions for the zoomed-in re-simulations. The original mass resolution was kept in the central region, and those particles were split into dark matter and gas particles based on the cosmological baryon fraction $\Omega_{b}$. The high-resolution dark-matter-particle mass is $1.87\times 10^{9}\rm M_{\odot}$, while the initial gas mass is $3.48\times 10^{8}\rm M_{\odot}$. In the outer region, the dark matter particles were degraded with multiple levels of mass refinements in several concentric shells. The 324 zoomed-in regions have been hydro-dynamically re-simulated with three different baryon models: GADGET-X (Rasia et al., 2015), GADGET-MUSIC (Sembolini et al., 2013) and GIZMO-Simba (Davé et al., 2019; Cui et al., 2022). In this work we use the GADGET-X simulated clusters. This code includes several radiative processes, like gas cooling, star formation, thermal feedback from supernovae, chemical evolution and enrichment, and supermassive black holes with AGN feedback (see Cui et al., 2018, for details). The simulations assume a standard cosmological model according to the Planck Collaboration et al.
(2016) results: $h=0.6777$ for the reduced Hubble parameter, $n=0.96$ for the primordial spectral index, $\sigma_{8}=0.8228$ for the amplitude of the mass density fluctuations in a sphere of $8h^{-1}$ Mpc comoving radius, $\Omega_{\Lambda}=0.692885$, $\Omega_{m}=0.307115$, and $\Omega_{b}=0.048206$ respectively for the density parameters of dark energy, dark matter, and baryonic matter.

### 2.1 Sample

For the analysis presented here, we select the most massive cluster for each region that does not include any low-resolution particles within its virial radius. The evolution of the bias is tested at 9 redshifts, covering the redshift range between $z=0.07$ and $z=1.32$. The mean mass and the number of objects considered at each redshift are reported in Table 1 for the three studied overdensities: $\Delta=$ 2500, 500 and 200. These indicate the radius $R_{\rm\Delta}$ of a sphere whose mean density is either 2500, 500 or 200 times the critical density of the Universe at the considered cosmic time:

$\rho_{\rm crit}(z)=\frac{3H_{0}^{2}}{8\pi G}\left[\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}\right],$ (1)

where $H_{0}$ is the Hubble constant and $G$ the gravitational constant.

Table 1: The redshifts analysed in this work are listed in the first column. The number of objects is in the second column. The third, fourth and fifth columns give the mean masses at the three nominal overdensities 2500, 500 and 200, in units of $10^{14}M_{\odot}$.

$z$ | $N$ | $<M_{2500}>$ $\rm[10^{14}M_{\odot}]$ | $<M_{500}>$ $\rm[10^{14}M_{\odot}]$ | $<M_{200}>$ $\rm[10^{14}M_{\odot}]$
---|---|---|---|---
1.32 | 277 | 0.47 | 1.36 | 1.99
1.22 | 281 | 0.55 | 1.58 | 2.31
0.99 | 304 | 0.78 | 2.26 | 3.29
0.78 | 305 | 1.07 | 3.04 | 4.45
0.59 | 305 | 1.48 | 4.05 | 5.91
0.46 | 300 | 1.73 | 4.89 | 7.22
0.33 | 297 | 2.09 | 5.73 | 8.46
0.22 | 298 | 2.43 | 6.73 | 9.97
0.07 | 290 | 2.98 | 8.19 | 12.20

## 3 The hydrostatic equilibrium mass

As mentioned in the introduction, the hydrostatic equilibrium hypothesis is often at the basis of the procedure used to estimate the mass of galaxy clusters from X-ray observations (Kravtsov & Borgani, 2012; Ettori et al., 2013). This assumption foresees that the gas thermal pressure, which naturally leads to expansion of the gas, is balanced by the gravitational forces, which, on the other hand, cause the system to collapse. The equilibrium is expressed as the equality between the gradient of the thermal pressure and that of the gravitational potential, and it is supposed to hold at every radial distance from the cluster centre. Connecting the gravitational potential to the mass, under the spherical assumption, leads to the following formula for the total mass inside a sphere of radius $r$:

$M_{\rm HE,SZ}(<r)=-\frac{r^{2}}{G\rho_{\rm g}(r)}\frac{{\rm d}P_{\rm th}(r)}{{\rm d}r}$ (2)

where $\rho_{\rm g}$ and $P_{\rm th}$ are the density and the thermal pressure of the gas, respectively. We will refer to this mass as $M_{\rm HE,SZ}$, since the radial pressure profile can be derived from Compton-$y$ parameter maps provided by cluster SZ observations at millimetre wavelengths.
Assuming the equation of state of an ideal gas, the cluster mass can also be estimated with the gas density and temperature separately:

$M_{\rm HE,X}(<r)=-\frac{rk_{\rm B}T(r)}{G\mu m_{\rm p}}\left[\frac{{\rm d}\ln\rho_{\rm g}(r)}{{\rm d}\ln r}+\frac{{\rm d}\ln T(r)}{{\rm d}\ln r}\right].$ (3)

We refer to this HE mass formulation as $M_{\rm HE,X}$ because historically this expression was used in the analysis of X-ray observations (Ansarifard et al., 2020). The detailed computation of the HE mass in our simulated sample will be introduced in the next section. This mass is compared with the true total mass of the systems, obtained by summing over all dark matter, star and gas particle masses inside a fixed aperture radius. The mass bias, $b_{\rm SZ}$ or $b_{\rm X}$, is defined as

$b=\frac{M_{\rm true}-M_{\rm HE}}{M_{\rm true}}.$ (4)

This bias can be either positive, when the HE mass underestimates the true cluster mass, or negative, when there is an overestimation of the true mass. A null bias results in a perfect estimate of the true mass through HE.

## 4 Methods

In this work, we aim to estimate the two hydrostatic masses, as presented in Eqs. (2) and (3). The ICM temperature, gas density and pressure radial profiles play a fundamental role in the calculation of these masses. In this section, we describe how we compute them in The Three Hundred simulations, together with other relevant quantities which we use to study the correlations with the bias. The quantities linked to the study of the mass bias during a major merger event will be described in Section 5.2.

### 4.1 The ICM profiles

The cluster profiles are extracted in logarithmically equal-spaced radial bins, centred at the maximum density (mostly coinciding with the minimum of the potential well, see also Cui et al. 2016; Ansarifard et al. 2020). Only gas particles with density below the star-formation density threshold and with temperature above 0.3 keV are considered as ICM for the calculation; this is to ensure that we are selecting the hot particles which can generate X-ray or SZ signals, in order to match the observations. Once the profiles are extracted, we fit them with analytic models, which will be described below. We then use the analytical best-fitting curves to estimate the hydrostatic masses. Following this strategy allows us to avoid the small fluctuations that are present in the 3D numerical profiles, which strongly impact the calculation of the derivatives in the hydrostatic-equilibrium mass equations (see Gianfagna et al., 2021). On average, this still allows us to fit well the large fluctuations (bumps or dips in the profiles) caused by substructures, which should instead be taken into account. This is valid especially for the temperature and density models, which have the largest number of parameters. The pressure profile model could instead smooth out some large fluctuations, due to its smaller number of parameters, leading to a less reliable SZ bias, but the two biases are compatible with each other (see Section 5) and the scatter is not significantly different. The best-fit procedure for each profile is performed in the radial range [0.2-3]$\rm R_{500}$. This range was chosen to include the three studied overdensities: $\Delta=$ 2500, 500 and 200. We study the goodness of the fits using the $\chi^{2}$ test and find that all the fits can be classified as good fits.
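As a minimal illustration of how Eqs. (2)-(4) can be evaluated once analytic best-fitting profiles are available, the sketch below assumes profile callables that return the gas density in $\rm M_{\odot}/Mpc^{3}$, the mass-weighted temperature in keV and the thermal pressure in $\rm keV/cm^{3}$ as functions of radius in Mpc (the units used for the best-fit parameters reported below), takes the logarithmic slopes by finite differences on the analytic curves, and adopts a mean molecular weight $\mu=0.59$, which is an assumption not specified in the text; astropy is used only for unit handling, and all function names are illustrative.

```python
import numpy as np
from astropy import units as u, constants as const

MU = 0.59  # assumed mean molecular weight of the ICM (not specified in the text)

def log_slope(profile, r_mpc, eps=1e-4):
    """Numerical logarithmic slope d ln f / d ln r of a profile callable f(r), r in Mpc."""
    return (np.log(profile(r_mpc * (1 + eps))) -
            np.log(profile(r_mpc * (1 - eps)))) / (2 * eps)

def m_he_x(r_mpc, rho_gas, temp_kev):
    """Eq. (3): X-ray-like HE mass from gas density and temperature profile callables."""
    r = r_mpc * u.Mpc
    kT = temp_kev(r_mpc) * u.keV
    slopes = log_slope(rho_gas, r_mpc) + log_slope(temp_kev, r_mpc)
    return (-(r * kT) / (const.G * MU * const.m_p) * slopes).to(u.M_sun)

def m_he_sz(r_mpc, rho_gas, pressure_kev_cm3):
    """Eq. (2): SZ-like HE mass from gas density and thermal pressure profile callables."""
    r = r_mpc * u.Mpc
    P = pressure_kev_cm3(r_mpc) * u.keV / u.cm**3
    dP_dr = P * log_slope(pressure_kev_cm3, r_mpc) / r   # dP/dr = (P/r) dlnP/dlnr
    rho = rho_gas(r_mpc) * u.M_sun / u.Mpc**3
    return (-(r**2) / (const.G * rho) * dP_dr).to(u.M_sun)

def bias(m_true, m_he):
    """Eq. (4): hydrostatic mass bias b = (M_true - M_HE) / M_true."""
    return float((m_true - m_he) / m_true)
```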
#### 4.1.1 Gas Density

The 3D gas density profiles are estimated as the total gas mass in a spherical shell, divided by the shell volume: $\rho=\sum m_{\rm i}/V_{\rm shell}$. The model chosen to fit this profile is the one proposed by Vikhlinin et al. (2006): $\rho_{g}(r)=\rho_{0}^{2}\frac{(r/r_{\rm d})^{-a}}{(1+(r/r_{\rm d})^{2})^{3b-a/2}}\frac{1}{(1+(r/r_{\rm s})^{c})^{e/c}}+\frac{\rho_{02}^{2}}{(1+(r/r_{\rm d2})^{2})^{3b_{2}}},$ (5) where we fix the parameter $c$ equal to 3 and we impose the limitation $e<5$. The other 8 parameters are left free. Often in the literature this model is simplified by discarding the second beta model; here we keep it in order to have a reliable fit also in the cluster region near $\rm R_{2500}$. For reference, we report in Table 2 the medians of all best-fit parameters computed from the sample at $z=0.07$.

Table 2: The medians and $16^{\rm th}-84^{\rm th}$ percentiles of the best-fit free parameters of Eq. 5 for the gas density profiles, together with the reduced $\chi^{2}$. Only the results for the $z=0.07$ clusters are shown. The $c$ parameter is fixed to 3.

$\rho_{0}$ [$10^{13}\rm M_{\odot}/Mpc^{3}$] | $r_{d}$ [Mpc] | $r_{s}$ [Mpc] | $a$
---|---|---|---
$3.5^{+1.5}_{-1.3}$ | $0.71^{+0.20}_{-0.17}$ | $0.73^{+0.65}_{-0.29}$ | $1.2^{+1.2}_{-0.9}$
$b$ | $e$ | $\rho_{0,2}$ [$10^{13}\rm M_{\odot}/Mpc^{3}$] | $r_{d,2}$ [Mpc]
$2.6^{+0.4}_{-0.7}$ | $2.5^{+0.4}_{-0.6}$ | $2.4^{+0.6}_{-0.8}$ | $0.95^{+0.36}_{-0.28}$
$b_{2}$ | $\chi^{2}$ | |
$1.2^{+0.3}_{-0.2}$ | $1.1^{+0.8}_{-0.4}$ | |

#### 4.1.2 Gas Temperature

The mass-weighted temperature profile is estimated as a weighted average over the gas particles in the same spherical shells used for the gas density: $T=\frac{\sum_{i}(m_{i}T_{i})}{\sum_{i}m_{i}},$ (6) where $m_{i}$ and $T_{i}$ are the mass and temperature of the $i$-th gas particle. Using the mass-weighted temperature is the optimal choice for hydrostatic equilibrium studies, which theoretically connect the temperature with the gravitational mass (Biffi et al., 2014), and it is preferable to the spectroscopic-like temperature (Mazzotta et al., 2004). The analytical model for the mass-weighted temperature used in this work was also introduced by Vikhlinin et al. (2006): $T(r)=T_{0}\ \frac{x+\tau}{x+1}\ \frac{(r/r_{\rm t})^{-\alpha}}{(1+(r/r_{\rm t})^{\beta})^{\gamma/\beta}},\qquad x=(r/r_{\rm cool})^{\alpha_{\rm cool}}.$ (7) All the 8 parameters are free to vary. Their medians for the $z=0.07$ clusters are reported in Table 3.

Table 3: The medians and $16^{\rm th}$ and $84^{\rm th}$ percentiles of the best-fit free parameters of Eq. 7 and the reduced $\chi^{2}$, computed for all $z=0.07$ clusters.

$T_{0}$ [keV] | $\tau$ | $r_{t}$ [Mpc] | $\alpha$ | $\beta$ | $\gamma$
---|---|---|---|---|---
$15.3^{+56.4}_{-10.3}$ | $0.37^{+0.32}_{-0.33}$ | $2.3^{+22.6}_{-1.29}$ | $0.29^{+0.71}_{-0.29}$ | $4.6^{+7.4}_{-4.6}$ | $1.3^{+2.2}_{-1.3}$
$r_{cool}$ [Mpc] | $\alpha_{cool}$ | $\chi^{2}$ | | |
$1.4^{+1.5}_{-0.7}$ | $5.4^{+4.6}_{-2.8}$ | $1.4^{+0.4}_{-0.1}$ | | |

#### 4.1.3 Gas Pressure

The radial thermal pressure profile is measured from the gas density and temperature of each gas particle. It is modelled by the generalised Navarro-Frenk-White (gNFW) model (Nagai et al., 2007): $P(r)=\frac{P_{0}}{x^{i}(1+x^{g})^{\frac{h-i}{g}}},$ (8) where $x=r/r_{s}$ is a dimensionless radial distance normalised to the scale radius, $r_{s}$.
The parameters $h$ and $i$ are the outer and inner slopes, respectively, and $g$ describes the steepness of the transition between the two regimes. This model has 4 free parameters, since $i=0.31$ is fixed (Arnaud et al., 2010). The medians of the best-fit parameters for the $z=0.07$ sample are listed in Table 4.

Table 4: The medians and $16^{\rm th}$ and $84^{\rm th}$ percentiles of the best-fit free parameters of Eq. 8 and the reduced $\chi^{2}$, computed for all $z=0.07$ clusters. The $i$ parameter is fixed equal to $0.31$.

$P_{0}$ [$10^{-2}\rm keV/cm^{3}$] | $r_{s}$ [Mpc] | $g$ | $h$ | $\chi^{2}$
---|---|---|---|---
$2.6^{+5.5}_{-1.5}$ | $3.1^{+6.9}_{-2.5}$ | $1.1^{+1.3}_{-0.4}$ | $8.2^{+6.8}_{-4.0}$ | $1.1^{+0.8}_{-0.3}$

### 4.2 The NFW concentration

Similarly to the thermodynamical profiles, we compute the total density profile, which includes the contribution of all particles in the simulations (dark matter, gas, stars). The total mass profile is fitted with an NFW model (Navarro et al., 1997): $M(<r)=M_{0}\left[\log(1+x)-\frac{x}{1+x}\right].$ (9) As before, the radial coordinate is normalised to the scale radius, $x=r/r_{s}$. This parameter, together with the normalisation, is derived from a best-fitting procedure applied in the radial range between 100 kpc and $R_{200}$, which roughly reproduces a typical radial range used in weak-lensing analyses. The concentration within $R_{500}$ is defined from the scale radius: $c_{500}=R_{500}/r_{s}$.

### 4.3 Relative mass growth

We straightforwardly define the cluster mass growth as the relative mass difference between an initial time, $t_{1}$, and a final time, $t_{2}$: $\frac{\Delta M/M}{{\rm d}t}=\frac{(M(t_{2})-M(t_{1}))/M(t_{1})}{[t_{2}-t_{1}]}.$ (10) Specifically, we estimated the mass growth from the merger tree between $z_{2}=0.33$ and $z_{1}=0.46$, so that $t_{2}-t_{1}$ corresponds to about 1 Gyr (precisely, it is equal to $1.046$ Gyr). This interval corresponds to 5 simulation snapshots.

### 4.4 Dynamical state parameter

The dynamical state of the clusters has been inferred from the combination of two indicators computed in 3D (Neto et al., 2007; Cui et al., 2017; Cialone et al., 2018; De Luca et al., 2021):

* $f_{s}=M_{\rm sub}/M_{\Delta}$, the fraction of cluster mass included in substructures inside a fixed overdensity;
* $\Delta_{\rm r}=|\textbf{r}_{\delta}-\textbf{r}_{\rm cm}|/R_{\Delta}$, the offset between the positions of the maximum density peak and the centre of mass of the cluster within $R_{\Delta}$, normalised to that aperture radius.

When correlating the dynamical state and the HE mass bias, we evaluate all quantities of interest at the same overdensity value. For a cluster to be classified as relaxed, both indicators should be smaller than 0.1 (Cialone et al., 2018), while both should be greater than 0.1 for a disturbed one. The other cases are classified as intermediate or hybrid. The percentage of relaxed clusters is almost 50% at low redshifts, and decreases to 30% at high redshifts, while the percentage of disturbed clusters is 20% at low redshifts and increases to 30% at high redshifts (De Luca et al., 2021). As previously said, in our study we consider one combined parameter as in Haggar et al. (2020), but derived exclusively from the two mentioned indicators as in De Luca et al.
(2021): $\chi_{\rm DS}=\sqrt{\frac{2}{\left(\frac{\Delta_{\rm r}}{0.1}\right)^{2}+\left(\frac{f_{s}}{0.1}\right)^{2}}}.$ (11) The $\chi_{\rm DS}$ parameter provides a continuous way to classify the state of the clusters, with the relaxed ones satisfying $\chi_{\rm DS}>1$. We use this parameter to study the dependence of the HE bias on the dynamical state (see Section 5.1.4). An extensive study of the relaxation state of The Three Hundred clusters, including its dependence on redshift, is presented in De Luca et al. (2021).

### 4.5 Triaxiality

In order to test how much the lack of spherical symmetry affects the HE mass estimate, we calculate, using all the particles within $R_{500}$, the three axes of the inertia tensor as in Vega-Ferrero et al. (2017). We then use the ratio $c/a$ and the triaxiality of the halos: $t=\frac{1-(b/a)^{2}}{1-(c/a)^{2}},$ (12) where $a,b$ and $c$ indicate the major, intermediate, and minor axes, respectively. Depending on the value of the parameter $t$, the halo is considered oblate if $0<t<1/3$, triaxial if $1/3<t<2/3$, and prolate if $2/3<t<1$. More simply, when $c/a$ is equal to $1$ the cluster is spherical.

## 5 Results

In this section we present the results of our work. First, in Section 5.1, we study the dependence of the HE mass bias on redshift, mass, and all the other parameters: the concentration, the mass growth, the relaxation parameter and the triaxiality. We performed all these analyses at the three overdensities ($\Delta=2500,500,200$) and at all considered redshifts. However, most of the figures related to this part of the work (with the exception of the first) are only presented at $\Delta=500$ and at one specific redshift, because we do not find a strong dependence of the results on either of these two quantities. Secondly, in Section 5.2, we analyse how the HE mass bias varies throughout strong merger events. In this case, we only analyse the behaviour of the bias at $\rm R_{200}$, since we take as reference the merger analysis in Contreras-Santos et al. (2022), which considers the entire cluster volume and thus overdensity 200.

### 5.1 Bias correlations

#### 5.1.1 Redshift dependence

The variation of $b_{SZ}$ and $b_{X}$ along the redshift range is presented in Fig. 1 at the 3 overdensities and for the different cluster dynamical states defined with the dynamical-state parameter, $\chi_{\rm DS}$. For all overdensities, the un-relaxed clusters (purple shaded region) have the largest scatter. Interestingly, this large scatter is mostly caused by the presence of low bias values; the reason will be presented in Section 5.2. No dependence of the bias on redshift up to $z=1.25$ is detected, in agreement with Le Brun et al. 2017; Henson et al. 2017; Salvati et al. 2018; Koukoufilippas et al. 2020. Furthermore, we notice that the median $b_{2500}$ is very similar to the median $b_{500}$, and that both are close to 0.1 (a 10% bias) and as such systematically lower than $b_{200}$ (close to $\sim 0.2$). This is in agreement with Gianfagna et al. (2021), which showed a declining $b$ from the outer to the inner radii. This occurs despite the differences both in the code (GADGET2 versus GADGET-X) and in the baryon models. Note, however, that the SZ and X median $b_{500}$ in Gianfagna et al. (2021) are slightly larger than in this work.
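For reference, the dynamical-state and shape classifiers used throughout this section can be written compactly as in the sketch below, which implements Eq. (11), the 0.1 thresholds of Section 4.4 and the triaxiality of Eq. (12). The function names and the handling of the boundary cases are illustrative choices, not taken from the analysis code of the paper.

```python
import numpy as np

def chi_ds(delta_r, f_s):
    """Combined dynamical-state parameter of Eq. (11); relaxed clusters have chi_DS > 1."""
    return np.sqrt(2.0 / ((delta_r / 0.1) ** 2 + (f_s / 0.1) ** 2))

def dynamical_state(delta_r, f_s):
    """Relaxed / disturbed / hybrid classification from the two 3D indicators (Section 4.4)."""
    if delta_r < 0.1 and f_s < 0.1:
        return "relaxed"
    if delta_r > 0.1 and f_s > 0.1:
        return "disturbed"
    return "hybrid"

def triaxiality(a, b, c):
    """Eq. (12), with a, b, c the major, intermediate and minor axes of the inertia tensor."""
    return (1.0 - (b / a) ** 2) / (1.0 - (c / a) ** 2)

def shape_class(a, b, c):
    """Oblate / triaxial / prolate classification of Section 4.5."""
    t = triaxiality(a, b, c)
    if t < 1.0 / 3.0:
        return "oblate"
    if t < 2.0 / 3.0:
        return "triaxial"
    return "prolate"
```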
The dispersion of the biases at $R_{2500}$ seems to be slightly larger than at the other overdensities; this is probably due to the larger deviations in the cluster core properties, which can marginally affect the profiles at $R_{2500}$. In The Three Hundred, the simulations produce a more diverse variety of cores with respect to the GADGET2 code, and in some cases the simulated clusters are extremely peaked at the centre (see Campitiello et al., 2022).

Figure 1: The redshift evolution of the biases, $b_{\rm SZ}$ (left panels) and $b_{\rm X}$ (right panels). The median values of the bias for all, relaxed and un-relaxed clusters are represented with dark cyan crosses, green diamonds and purple stars, respectively. The shaded regions represent the $16^{th}$ and $84^{th}$ percentiles. The biases estimated at $\rm R_{200}$, $\rm R_{500}$ and $\rm R_{2500}$ are shown in the top, middle and bottom panels, respectively. The dashed lines show the 0 and 0.2 bias for reference.

#### 5.1.2 Concentration

The concentration parameter $c_{500}$ of a halo is representative of the halo’s central density. In a disturbed system the concentration is typically lower, since the X-ray peak might have been destroyed by a merger event, which could also have brought more mass into the external regions. For this reason it is interesting to check whether there is any correlation between the HE mass bias and the NFW concentration parameter. The bias as a function of the concentration is represented in Fig. 2, where the relaxed, disturbed and hybrid clusters are shown with different symbols and colours. Hybrid clusters are those with either $f_{s}<0.1$ or $\Delta_{r}<0.1$. The two quantities do not show any dependence, as is clear from the trend of the median value, shown with a black line. We notice, however, that the scatter in the bias decreases substantially towards larger concentration values. This is quantified by the half difference between the $16^{\rm th}$ and the $84^{\rm th}$ percentiles, $d_{2}=0.5\times(p_{84}-p_{16})$, reported in Table 5 together with the bias percentiles. The value of $d_{2}$ decreases from the low-concentration (third row) to the high-concentration clusters (first row) by 70-80%. This trend of reduced scatter with increasing concentration is expected, because most of the clusters with higher concentration are relaxed.

Figure 2: The bias dependence on the concentration at $\rm R_{500}$ and at $z=0.07$: $b_{\rm SZ}$ is represented in the left panel, $b_{\rm X}$ in the right one. The black line represents the median bias value, with the shaded regions as the 16th and 84th percentiles. The relaxed clusters are shown as green diamonds, the disturbed as purple stars and the hybrids as blue dots.

Table 5: Table of the biases for the 50 clusters with the highest (first row) and lowest (third row) NFW concentration and the remaining clusters (second row) for the $z=0.59$ sample. We report the median value ($m$), the $16^{\rm th}$ and $84^{\rm th}$ percentiles ($p_{16}$ and $p_{84}$), and their half difference ($d_{2}$) at $R_{500}$.
$c_{500}$ | $b_{SZ}$ | | | | $b_{X}$ | | |
---|---|---|---|---|---|---|---|---
 | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$ | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$
high | 0.11 | 0.07 | 0.19 | 0.06 | 0.12 | 0.05 | 0.24 | 0.10
med | 0.11 | 0.00 | 0.18 | 0.09 | 0.13 | 0.00 | 0.24 | 0.12
low | 0.06 | -0.03 | 0.22 | 0.12 | 0.11 | -0.07 | 0.24 | 0.15

#### 5.1.3 Relative mass growth

The dependence of the bias at $z=0.33$ on the mass growth recorded in the last Gyr (specifically, since $z=0.46$) is shown in the top panel of Fig. 3, where both quantities are estimated at $\rm R_{500}$. The respective median values for high-, low-, and intermediate-accretion clusters are listed in Table 6. The correlation is again very weak: the Pearson correlation coefficient is -0.20 for $b_{\rm SZ}$ and -0.36 for $b_{\rm X}$. However, the systems with the largest mass growth also have the largest dispersion. This is expected, because the systems with the largest mass growth are also the most disturbed (purple stars in Fig. 3). This relation will be discussed further below; here we point out that clusters with large mass accretion will also have abundant non-thermal gas motions and, thus, a large non-thermal pressure component. The same trend is seen for the quantities estimated at $\rm R_{200}$, with a Pearson correlation coefficient of 0.16 for $b_{\rm SZ}$ and 0.21 for $b_{\rm X}$. In the bottom panel of Fig. 3 the difference between the bias at $z_{2}=0.33$ and at $z_{1}=0.46$ is shown ($\Delta b=b(z_{2})-b(z_{1})$). The variation in the bias for disturbed clusters (at large accretion rates) is negative, implying that the HE mass bias overall decreases. This phenomenon, which might seem surprising, will be studied in more detail in Section 5.2, but we can already see from the top panel the origin of this trend: after a major merger (with a mass variation of at least 50%, $\Delta M/M>0.5$), the mass computed with Eqs. 2 or 3 can be significantly larger than the true mass, with a large scatter as well (see also Nelson et al., 2012).

Table 6: Table of the bias values for the 50 clusters with the largest mass growth (first row), the 50 with the smallest (last row) and the remaining clusters (second row) for the $z=0.33$ sample. We report the median value ($m$), the $16^{\rm th}$ and $84^{\rm th}$ percentiles ($p_{16}$ and $p_{84}$), and their half difference ($d_{2}$) at $R_{500}$.

Mass growth | $b_{SZ}$ | | | | $b_{X}$ | | |
---|---|---|---|---|---|---|---|---
 | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$ | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$
high | 0.09 | -0.05 | 0.17 | 0.11 | 0.09 | -0.12 | 0.20 | 0.16
med | 0.10 | -0.02 | 0.17 | 0.08 | 0.12 | 0.00 | 0.22 | 0.11
low | 0.14 | 0.06 | 0.22 | 0.08 | 0.24 | 0.10 | 0.35 | 0.13

Overall, Fig. 3 shows that the mass growth is a parameter capable of distinguishing between relaxed and disturbed clusters, as also seen in Fig. 4. Indeed, we find a correlation between the mass-growth parameter and $\chi_{\rm DS}$ equal to -0.46, both at $\rm R_{500}$ (shown in the figure) and at $\rm R_{200}$. This good correlation is actually expected, since one of the indicators defining $\chi_{\rm DS}$ is the mass in substructures. That said, there is a certain level of contamination, with the presence of disturbed objects with reduced mass growth or, vice versa, relaxed objects with significant mass growth.
Notice, however, that both these peculiar cases have an HE bias value which is quite consistent with the average expectation of the samples, while extreme bias values such as those with $b<-0.2$ are only present for the largest mass growth.

Figure 3: Top panel: the bias dependence on the mass growth $\frac{\Delta M/M}{dt}$ is represented. $\frac{\Delta M/M}{dt}$ is estimated from $z=0.46$ to $z=0.33$, which corresponds to 1 Gyr, while the biases are estimated at $z=0.33$. Bottom panel: the bias variation from $z=0.46$ to $z=0.33$ as a function of $\frac{\Delta M/M}{dt}$. The relaxed and disturbed clusters are represented with green diamonds and purple stars, respectively, while the hybrids with blue dots. The cluster dynamical states shown in this plot are estimated at $z=0.33$.

Figure 4: The relaxation parameter at $z=0.33$ is represented as a function of the mass growth from $z=0.46$ to $z=0.33$, all estimated at $\rm R_{500}$. The dot-dashed line represents the threshold below which a cluster is considered disturbed.

#### 5.1.4 Relaxation parameter

The bias as a function of the relaxation parameter is represented in Fig. 5, where, by definition, the coloured points of relaxed and disturbed clusters are completely separated. Weak or no correlation is found: the highest Pearson correlation coefficients are only 0.13 for $b_{\rm SZ}$ and 0.23 for $b_{\rm X}$ at $z=1.32$, and 0.01 for both $b_{\rm SZ}$ and $b_{\rm X}$ at $z=0.07$. Also in this case, the disturbed clusters show a wider dispersion than the relaxed ones, see Table 7, where the half difference between the bias percentiles is almost a factor of 2 larger for the lowest than for the highest $\chi_{\rm DS}$ clusters. This is in agreement with Piffaretti & Valdarnini (2008); Rasia et al. (2012); Nelson et al. (2014); Henson et al. (2017); Ansarifard et al. (2020).

Figure 5: The biases are represented as a function of the relaxation parameter $\chi_{\rm DS}$. The hybrid, relaxed and disturbed clusters are represented with blue dots, green diamonds and purple stars, respectively. In the top panel we show the quantities at $z=0.07$ and in the bottom one at $z=1.3$. The lines are the results of a simple linear fit. The black line and the shaded region represent the median bias and the percentiles (16th and 84th).

Table 7: Table of the bias values for the 50 highest (first row) and 50 lowest (third row) $\chi_{\rm DS}$ clusters and for the remaining clusters (second row) for the $z=0.59$ sample. We report the median value ($m$), the $16^{\rm th}$ and $84^{\rm th}$ percentiles ($p_{16}$ and $p_{84}$), and their half difference ($d_{2}$) at $R_{500}$.

$\chi_{\rm DS}$ | $b_{SZ}$ | | | | $b_{X}$ | | |
---|---|---|---|---|---|---|---|---
 | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$ | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$
high | 0.10 | 0.05 | 0.16 | 0.06 | 0.12 | 0.05 | 0.23 | 0.09
med | 0.11 | -0.02 | 0.19 | 0.09 | 0.14 | 0.01 | 0.25 | 0.12
low | 0.11 | -0.04 | 0.21 | 0.13 | 0.07 | -0.12 | 0.20 | 0.16

#### 5.1.5 Total mass

We continue by investigating the correlation between the HE bias and the cluster mass at $\rm R_{500}$ in Fig. 6. Since we detect no variation with cosmic time (see Fig. 1), in this plot we simultaneously show the biases at four redshifts (1.32, 0.78, 0.33, 0.07) to increase the halo mass range. We find no dependence of the bias on the total mass of the clusters, in agreement with Piffaretti & Valdarnini (2008); Le Brun et al. (2017); Henson et al. (2017); Pearce et al. (2019); Ansarifard et al.
(2020); Barnes et al. (2021).

Figure 6: The biases are represented as a function of the cluster total mass, both at $\rm R_{500}$. The redshifts 1.32, 0.78, 0.33, 0.07 are represented in red, blue, magenta and green, respectively. The black line and the shaded region represent the binned median and the 16th-84th percentiles.

#### 5.1.6 Triaxiality

We study the dependence of the bias at $R_{500}$ on the sphericity of the halos, one of the main assumptions of HE, quantified with the parameter $t$ and with $c/a$, the ratio of the minor to the major axis. All the particles within $R_{500}$ are used to estimate the halo shape (Vega-Ferrero et al., 2017). The relation between the biases and $c/a$ is represented in Fig. 7; the results for $t$ are not shown, being very similar. From that figure we notice that both the bias and the amplitude of its scatter seem to be uncorrelated with the axial ratio. However, if we restrict to the most aspherical cases (third row in Table 8), we see that both biases increase by almost 50% with respect to the objects with the highest $c/a$ values. Note that the halo shape estimated from the total mass is similar to the one based on the hot gas (see Velliscig et al., 2015, for details). Therefore, we expect a similar result to hold when the halo shape is estimated from the gas particles only.

Figure 7: The biases at $\rm R_{500}$ are represented as a function of the axis ratio $c/a$. The hybrid, relaxed and disturbed clusters are represented with blue dots, green diamonds and purple stars, respectively. Both $b_{SZ}$ and $b_{X}$ are shown for $z=0.33$. The median value of the bias is represented with the black line, the 16th and 84th percentiles with the shaded region.

Table 8: Table of the bias values for the 50 clusters with the highest (first row) and lowest (last row) axial ratio and for the remaining clusters (second row) for the $z=0.46$ sample. We report the median value ($m$), the $16^{\rm th}$ and $84^{\rm th}$ percentiles ($p_{16}$ and $p_{84}$), and their half difference ($d_{2}$) at $R_{500}$.

$c/a$ | $b_{SZ}$ | | | | $b_{X}$ | | |
---|---|---|---|---|---|---|---|---
 | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$ | $m$ | $p_{16}$ | $p_{84}$ | $d_{2}$
high | 0.09 | 0.01 | 0.18 | 0.07 | 0.11 | -0.00 | 0.18 | 0.09
med | 0.09 | 0.00 | 0.17 | 0.08 | 0.14 | -0.02 | 0.23 | 0.13
low | 0.12 | 0.02 | 0.24 | 0.11 | 0.20 | 0.07 | 0.32 | 0.12

### 5.2 HE mass bias and the merger history of the clusters

Unlike observations, hydrodynamical simulations allow us to track the whole history of a cluster, and thus to follow a merging event through all its phases. Specifically, in this subsection we study major merger events, which are identified when a halo experiences a very rapid mass increase, primarily caused by the accretion of a single massive object (Contreras-Santos et al., 2022). Our main goal is to understand the evolution of the bias throughout these violent merging processes. To define and compute the times relevant to the merger event, we closely follow the definitions in Contreras-Santos et al. (2022), to which we refer for a more detailed explanation. In summary, instead of using the absolute time as the variable describing the merger history, which can vary with redshift, we normalise it to the halo dynamical time: $t_{\rm dyn}=\sqrt{\frac{3}{4\pi}\frac{1}{200G\rho_{\rm crit}}}.$ (13) Since the critical density depends only on the cosmology and on the redshift, see Eq. (1), the dynamical time does not depend on the cluster features, but evolves only with redshift.
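As a small numerical check of Eq. (13), the sketch below evaluates the dynamical time with astropy, using a flat $\Lambda$CDM cosmology with the Planck-like $h$ and $\Omega_{m}$ values quoted in Section 2 as a stand-in for the exact cosmology of the simulation.

```python
import numpy as np
import astropy.units as u
import astropy.constants as const
from astropy.cosmology import FlatLambdaCDM

# Flat LCDM with the h and Omega_m values of Section 2 (an approximation of the run's cosmology).
cosmo = FlatLambdaCDM(H0=67.77 * u.km / u.s / u.Mpc, Om0=0.307115)

def t_dyn(z):
    """Halo dynamical time of Eq. (13): t_dyn = sqrt(3 / (4 pi * 200 * G * rho_crit(z)))."""
    rho_crit = cosmo.critical_density(z)
    return np.sqrt(3.0 / (4.0 * np.pi * 200.0 * const.G * rho_crit)).to(u.Gyr)

# Example: the dynamical time decreases with redshift because rho_crit(z) increases.
# print(t_dyn(0.07), t_dyn(1.32))
```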
As described before, we only consider major merger events, defined as events that produce a mass increase of 100% within half of the dynamical time, $\frac{\Delta M}{M}=\frac{M_{f}-M_{i}}{M_{i}}\geq 1,$ (14) where $M_{i}$ is the initial mass and $M_{f}$ is the final mass. Note that in this case the masses refer to the overdensity of 200. We use this larger radius to better capture the evolution inside the virial region. The merger event can easily be characterised by 4 particular redshifts (see Figure 1 of Contreras-Santos et al. 2022):

* $z_{\rm before}$: the last time before the merger begins. It is characterised by a relaxation parameter with high values (typically larger than 1). Soon after, the two systems start to influence each other and the relaxation parameter decreases;
* $z_{\rm start}$: the merging halo enters the main cluster's $R_{200}$ and, as a consequence, the mass of the latter starts to grow;
* $z_{\rm end}$: the end point of the merger, identified as the moment when the mass growth stops and $\chi_{\rm DS}$ starts to grow again, implying that the relaxation process is beginning;
* $z_{\rm after}$: the end of the whole merger phase, when the cluster approaches a relaxed state, with $\chi_{\rm DS}$ close to or above $1$.

In this work, we analyse how the bias evolves during the major merger event and how much time is needed for the bias to return to the average value of the relaxed population. In our sample, we select 12 clusters which experience a major merger, from $z_{\rm before}$ to $z_{\rm after}$, within the investigated redshift range [0.07, 1.32]. In the top panel of Fig. 8 the two stacked biases, shown as mean bias values with the $16^{th}-84^{th}$ percentiles as error bars, are plotted as a function of $\Delta t/t_{\rm dyn}$, which is defined as $\frac{\Delta t}{t_{\rm dyn}}=\frac{t_{0+i}-t_{0}}{t_{\rm dyn}},$ (15) where $t_{0}$ corresponds to the analysed redshift right before $z_{\rm before}$, and $t_{0+i}$ corresponds to the analysed redshifts following $t_{0}$. The vertical grey shaded areas represent the $\pm 1\sigma$ regions of the averaged times of the before, start, end and after phases of the merger. The yellow shaded region shows the $\pm 1\sigma$ region of the SZ bias for the relaxed clusters at $\rm R_{200}$, averaged over all the redshifts. At $\Delta t/t_{\rm dyn}=0$ both biases are in agreement with the typical values of relaxed clusters. Right at the beginning of the merger phase ($z_{\rm before}$), the biases increase to $\sim 0.25-0.3$ due to the incoming substructure, which causes the true mass of the cluster to increase faster than the HE mass, leading to an increase of the bias. The bias then stays almost constant until the end of the accretion of the secondary object; in this phase the HE mass always underestimates the true mass of the object. A small increase of the bias is detected around $z_{\rm end}$, followed by a quick decrease that lasts until $z_{\rm after}$, where a minimum is reached. Right at this time, around the end of the merger phase, the HE bias can even reach negative values. The in-fall of the substructure generates shocks propagating outward. These shocks cause a steep increase of the derivatives of the pressure, gas density and temperature profiles (see the bottom panel described later), leading to an increase of the HE mass and a negative bias. This trend is in agreement with Bennett & Sijacki (2022) and Nelson et al. (2012), who conducted similar studies on simulated clusters.
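For reference, the merger selection of Eq. (14) and the time normalisation of Eq. (15) used in this subsection can be summarised in a few lines. This is an illustrative sketch with assumed function names, not the actual analysis code; in particular, enforcing that the mass increase happens within half a dynamical time, and locating the four characteristic redshifts, is left to the caller.

```python
import numpy as np

def is_major_merger(m200_initial, m200_final):
    """Major-merger criterion of Eq. (14): Delta M / M >= 1 (a 100% mass increase),
    with both masses measured at overdensity 200. The requirement that the increase
    happens within half a dynamical time must be enforced when choosing the two snapshots."""
    return (m200_final - m200_initial) / m200_initial >= 1.0

def normalised_times(t_snapshots, t0, t_dynamical):
    """Eq. (15): elapsed time since t0 (the snapshot right before z_before), in units of t_dyn."""
    return (np.asarray(t_snapshots) - t0) / t_dynamical
```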
When the cluster approaches a relaxation phase at $\sim z_{\rm after}$, the bias returns to a value that is closer to the mean $b$ of relaxed clusters. This trend is again in agreement with what is seen in the FABLE simulated clusters (Bennett & Sijacki, 2022). In the middle panel of Fig. 8 we show the evolution of the relaxation parameter during the merger process. The clusters are classified as relaxed at $t_{0}$ by definition; then, as the secondary cluster approaches (before $z_{\rm start}$), $\chi_{\rm DS}$ drops. The relaxation parameter shows some fluctuations between $z_{\rm start}$ and $z_{\rm end}$, and finally grows again after the secondary object is completely incorporated into the main cluster. At the end of the merger phase ($z_{\rm after}$), by definition $\chi_{\rm DS}$ approaches 1. Notice that even after 4 dynamical times from the beginning of the merger the main cluster does not reach the original relaxation state, consistent with Contreras-Santos et al. (2022). In the bottom panel, we show the relative evolution of all the quantities entering the HE mass equations (both the thermodynamic quantities and their derivatives). To be consistent with the analysis shown in the upper panels, all the values are computed at $R_{200}$. The pressure derivative is represented with blue triangles, the temperature and its derivative with olive green dots and triangles, and the gas density and its derivative with orange dots and triangles. The temperature derivative does not show any particular trend and fluctuates around the original value with large error bars. The temperature itself, instead, grows immediately as the secondary object approaches and keeps growing until the end of the process; this is expected, as it is a consequence of the shock heating produced by the merger. The density does not show any particular change until the secondary structure merges into the main halo after $z_{\rm start}$. The slope of the gas density at $R_{200}$ also increases due to the presence of the massive companion. In both cases, the variation continues up to $z_{\rm end}$ before reaching a plateau. Finally, the pressure derivative lies midway between the temperature and the gas density derivatives. The increase in the derivatives of the gas density, pressure and temperature, due to the shocks generated in the merger or to the sloshing of the subclusters moving around the cluster core, drives the HE masses of Eqs. 2 and 3 closer to, or even above, the true mass. At $\rm R_{500}$ these considerations are still valid; the behaviour of all the quantities is the same. To conclude, the bias shows a stable evolution along the whole major merger event, albeit with slight differences between $b_{SZ}$ and $b_{X}$. However, the dramatic change of the bias value, especially in the period between $z_{\rm end}$ and $z_{\rm after}$, when the cluster is identified as un-relaxed, is the origin of the large scatter of the bias for the disturbed systems shown in all previous figures. For this reason, it is not advisable to estimate HE masses for these dynamically un-relaxed clusters, unless we can correctly identify their state in the merging event, which can be very difficult in observations.

Figure 8: Top panel: the stacked biases at $\rm R_{200}$ (means and standard deviations) are represented as a function of the difference in time from $t_{0}$, divided by the dynamical time. The SZ bias is represented in green, while the X-ray one in orange.
The yellow shaded region represents the bias range of the relaxed clusters. The null bias is represented with a dashed line. Central panel: the stacked relaxation parameter estimated inside $\rm R_{200}$; the means and the standard deviations are represented as a function of $\Delta t/t_{\rm dyn}$. The dashed line represents the threshold between relaxed and disturbed clusters. Bottom panel: each thermodynamic quantity which enters the estimation of the HE mass (and bias), divided by its value before the merger (indicated with a subscript 0), is shown as a function of $\Delta t/t_{\rm dyn}$. The dashed line shows the value of 1 (the quantity does not change with respect to the epoch prior to the merger). The temperature and its derivative are represented by olive green dots and triangles, respectively; the density and its derivative by orange dots and triangles; and the pressure derivative by blue triangles. In the plot the means and the standard deviations are represented. In all the panels, the vertical grey shaded areas represent the $\pm 1\sigma$ regions of the averaged times of the before, start, end and after phases of the merger.

## 6 Conclusions

Galaxy cluster mass plays a major role in cluster cosmology, and several different methods are used to estimate it. In this paper we focus on the hydrostatic equilibrium approximation, which makes use of the temperature, pressure and density radial profiles of the ICM. These 3D profiles are computed from The Three Hundred simulations, which include a set of almost 300 simulated galaxy clusters that we studied at 10 different redshifts. From the profiles, we recover the masses under the HE approximation and we estimate the bias with respect to the true cluster total mass. In this work the focus is specifically on the connection between the hydrostatic mass bias and different cluster properties, such as the dynamical state, the cluster total mass, the mass-growth rate, the NFW concentration and the axial ratio. Moreover, we follow the bias during 12 major-merger events to understand its evolution along these events. The main findings are as follows:

* We do not find any correlation between the biases, estimated at different radii, and the redshift, in agreement with other simulations.
* The bias and its scatter are influenced by the radius within which the bias is estimated, i.e. $R_{2500}$, $R_{500}$ and $R_{200}$ in this work. The largest bias is at $R_{200}$ (almost 20%), while at $R_{500}$ and $R_{2500}$ the median value is around 10%.
* There is no correlation between the hydrostatic mass bias and the dynamical state of the clusters as measured with different indicators: the dynamical-state parameter $\chi_{\rm DS}$, the NFW concentration, the relative mass growth, and the triaxiality. However, whenever one of these parameters takes values typically associated with disturbed objects (e.g. low $\chi_{\rm DS}$, low concentration, high mass growth), we detect a bias scatter that can be almost twice that of the relaxed systems. The scatter has, instead, little dependence on mass, redshift, and sphericity.
* A moderate correlation is found between the relaxation parameter and the mass growth. This is expected because the larger the mass growth, the more likely the cluster is to be disturbed.
* By stacking clusters experiencing major merger events we find that the main object becomes more disturbed along the merger, with a slightly higher $b$ when the secondary structure collides.
After $z_{\rm end}$, the halo mass growth stops and $b$ drops significantly because of the outward-propagating shocks generated by the in-falling substructure. These shocks cause a steep increase of the derivatives of the pressure, gas density and temperature profiles, even leading the HE mass to overestimate the true mass. At the end of the merger phase, when clusters get closer to a relaxed dynamical state with $\chi_{\rm DS}\sim 1$, the bias assumes the typical values of relaxed clusters.

Although no correlation is found between the HE bias and the dynamical state of the cluster at a given redshift, we find that selecting a sample with the typical characteristics of regular objects, such as high NFW concentration, low mass accretion rate, high $\chi_{\rm DS}$, and high $c/a$ ratio, leads to a reduced cluster-to-cluster scatter. We also find a correlation between the various merger phases and the HE bias, which can be close to 0 when the cluster is at the end of the merger process. This, however, occurs when the cluster is not yet relaxed, confirming that including not-yet-relaxed clusters could largely increase the bias scatter. Our results are in agreement with other simulations, where a large bias is found to be generated mostly by deviations from spherical symmetry (which is at the basis of HE) and temperature inhomogeneities, but also by the presence of substructures or gas motions, which generate a non-thermal pressure component. In the first part of the paper, we have shown that the HE bias is not dependent on the latter, while in the last part of the paper we explore how the bias changes during a merger. Here we show that the presence of a substructure, and consequently a merger, can indeed deeply affect the cluster dynamical state and the HE mass bias, even leading to an overestimation of the true mass. During the merger events (see Fig. 8) the bias is almost always positive, i.e. the HE mass underestimates the true cluster mass. However, even with the highest bias value reached during the merger events, we can only come close to the bias suggested by Planck. Therefore, we need to look for other explanations of this bias tension. The next-generation satellites, like XRISM (X-Ray Imaging and Spectroscopy Mission), hopefully Athena, and the proposed probe LEM (Line Emission Mapper), but also the eROSITA data analysis, will give precise indications of the gas velocities and velocity dispersions and, consequently, of the cluster dynamical state.

## Acknowledgements

The authors would like to thank the Referee for the constructive and helpful comments. The simulations used in this work have been performed in the MareNostrum Supercomputer at the Barcelona Supercomputing Center, thanks to CPU time granted by the Red Española de Supercomputación. WC is supported by the STFC AGP Grant ST/V000594/1 and the Atracción de Talento Contract no. 2020-T1/TIC-19882 granted by the Comunidad de Madrid in Spain. He further acknowledges the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A01 and CMS-CSST-2021-B01. GY acknowledges financial support from the MICIU/FEDER (Spain) under project grant PGC2018-094975-C21. MDP acknowledges support from Sapienza Università di Roma thanks to Progetti di Ricerca Medi 2019, RM11916B7540DD8D. AK is supported by the Ministerio de Ciencia, Innovación y Universidades (MICIU/FEDER) under research grant PGC2018-094975-C21 and further thanks Piero Umiliani for ‘svezia, inferno e paradiso’.
## Data Availability The data underlying this article were produced as part of The Three Hundred Project (Cui et al., 2018). They will be shared on request to The Three Hundred Collaboration, at https://www.the300-project.org. ## References * Angelinelli et al. (2020) Angelinelli M., Vazza F., Giocoli C., Ettori S., Jones T. W., Brunetti G., Brüggen M., Eckert D., 2020, MNRAS, 495, 864–885 * Ansarifard et al. (2020) Ansarifard S., et al., 2020, A&A, 634, A113 * Arnaud et al. (2010) Arnaud M., Pratt G. W., Piffaretti R., Böhringer H., Croston J. H., Pointecouteau E., 2010, A&A, 517, 1 * Artis et al. (2022) Artis E., Melin J.-B., Bartlett J., Murray C., 2022, EPJ Web of Conferences, 257, 00004 * Barnes et al. (2021) Barnes D. J., Vogelsberger M., Pearce F. A., Pop A.-R., Kannan R., Cao K., Kay S. T., Hernquist L., 2021, MNRAS, 506 * Bennett & Sijacki (2022) Bennett J. S., Sijacki D., 2022, MNRAS, 514, 313 * Biffi et al. (2014) Biffi V., Sembolini F., De Petris M., Valdarnini R., Yepes G., Gottlöber S., 2014, MNRAS, 439, 588 * Bulbul et al. (2019) Bulbul E., et al., 2019, ApJ, 871, 50 * Campitiello et al. (2022) Campitiello M. G., et al., 2022, A&A, 665, A117 * Cialone et al. (2018) Cialone G., De Petris M., Sembolini F., Yepes G., Baldi A. S., Rasia E., 2018, MNRAS, 477, 139 * Contreras-Santos et al. (2022) Contreras-Santos A., et al., 2022, MNRAS, 511, 2897 * Cui et al. (2016) Cui W., et al., 2016, MNRAS, 456, 2566 * Cui et al. (2017) Cui W., Power C., Borgani S., Knebe A., Lewis G. F., Murante G., Poole G. B., 2017, MNRAS, 464, 2502 * Cui et al. (2018) Cui W., et al., 2018, MNRAS, 480, 2898 * Cui et al. (2022) Cui W., et al., 2022, MNRAS, 514, 977 * Davé et al. (2019) Davé R., Anglés-Alcázar D., Narayanan D., Li Q., Rafieferantsoa M. H., Appleby S., 2019, MNRAS, 486, 2827 * De Andres et al. (2022) De Andres D., et al., 2022, EPJ Web of Conferences, 257, 00013 * De Luca et al. (2021) De Luca F., De Petris M., Yepes G., Cui W., Knebe A., Rasia E., 2021, MNRAS, 504, 5383 * Ettori et al. (2013) Ettori S., Donnarumma A., Pointecouteau E., Reiprich H. T., Giodini S., Lovisari L., Schmidt R. W., 2013, Space Sci Rev, 177, 119–154 * Gianfagna et al. (2021) Gianfagna G., et al., 2021, MNRAS, 502, 5115 * Haggar et al. (2020) Haggar R., Gray M. E., Pearce F. R., Knebe A., Cui W., Mostoghiu R., Yepes G., 2020, MNRAS, 492, 6074 * Henson et al. (2017) Henson M. A., Barnes D. J., Kay S. T., McCarthy I. G., Schaye J., 2017, MNRAS, 465, 3361 * Herná ndez-Lang et al. (2022) Herná ndez-Lang D., et al., 2022, MNRAS, 517, 4355 * Hoekstra et al. (2015) Hoekstra H., Herbonnet R., Muzzin A., Babul A., Mahdavi A., Viola M., Cacciato M., 2015, MNRAS, 449, 685 * Klypin et al. (2016) Klypin A., Yepes G., Gottlöber S., Prada F., Heß S., 2016, MNRAS, 457, 4340 * Koukoufilippas et al. (2020) Koukoufilippas N., Alonso D., Bilicki M., Peacock J. A., 2020, MNRAS, 491, 5464 * Kravtsov & Borgani (2012) Kravtsov A. V., Borgani S., 2012, ARA&A, 50, 353 * Le Brun et al. (2017) Le Brun A. M. C., McCarthy I. G., Schaye J., Ponman T. J., 2017, MNRAS, 466, 4442 * Li et al. (2021) Li Q., Han J., Wang W., Cui W., Li Z., Yang X., 2021, MNRAS, 505, 3907 * Mazzotta et al. (2004) Mazzotta P., Rasia E., Moscardini L., Tormen G., 2004, MNRAS, 354, 10 * Nagai et al. (2007) Nagai D., Kravtsov A. V., Vikhlinin A., 2007, ApJ, 668, 1 * Nagarajan et al. (2018) Nagarajan A., et al., 2018, MNRAS, 488, 1728 * Navarro et al. (1997) Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493 * Nelson et al. (2012) Nelson K., Rudd D. 
H., Shaw L., Nagai D., 2012, ApJ, 751, 121 * Nelson et al. (2014) Nelson K., Lau E. T., Nagai D., Rudd D. H., Yu L., 2014, ApJ, 782, 107 * Neto et al. (2007) Neto A. F., et al., 2007, MNRAS, 381, 1450 * Ntampaka et al. (2015) Ntampaka M., Trac H., Sutherland D. J., Battaglia N., Póczos B., Schneider J., 2015, ApJ, 803, 50 * Okabe & Smith (2016) Okabe N., Smith G. P., 2016, MNRAS, 461, 3794 * Pearce et al. (2019) Pearce F. A., Kay S. T., Barnes D. J., Bower R. G., Schaller M., 2019, MNRAS, 491, 1622 * Piffaretti & Valdarnini (2008) Piffaretti R., Valdarnini R., 2008, A&A, 491, 71 * Planck Collaboration et al. (2016) Planck Collaboration et al., 2016, A&A, 594, A13 * Pratt et al. (2019) Pratt G. W., Arnaud M., Biviano A., Eckert D., Ettori S., Nagai D., Okabe N., Reiprich T. H., 2019, Space Sci Rev, 215 * Rasia et al. (2012) Rasia E., et al., 2012, New Journal of Physics, 14, 055018 * Rasia et al. (2013) Rasia E., Borgani S., Ettori S., Mazzotta P., Meneghetti M., 2013, ApJ, 776, 39 * Rasia et al. (2015) Rasia E., et al., 2015, ApJ, 813, L17 * Salvati et al. (2018) Salvati L., Douspis M., Aghanim N., 2018, A&A, 614, A13 * Salvati et al. (2019) Salvati L., Douspis M., Ritz A., Aghanim N., Babul A., 2019, A&A, 626, A27 * Sembolini et al. (2013) Sembolini F., Yepes G., De Petris M., Gottlöber S., Lamagna L., Comis B., 2013, MNRAS, 429, 323 * Sereno et al. (2021) Sereno M., Lovisari L., Cui W., Schellenberger G., 2021, MNRAS, 507, 5214–5223 * Tian et al. (2021) Tian Y., Cheng H., McGaugh S. S., Ko C.-M., Hsu Y.-H., 2021, ApJ Letters, 917, L24 * Vega-Ferrero et al. (2017) Vega-Ferrero J., Yepes G., Gottlöber S., 2017, MNRAS, 467, 3226 * Velliscig et al. (2015) Velliscig M., et al., 2015, MNRAS, 453, 721 * Vikhlinin et al. (2006) Vikhlinin A., Kravtsov A., Forman W., Jones C., Markevitch M., Murray S. S., Speybroeck L. V., 2006, ApJ, 640, 691 * Von der Linden et al. (2014) Von der Linden A., et al., 2014, MNRAS, 443, 1973–1978 * Wicker et al. (2022) Wicker R., Douspis M., Salvati L., Aghanim N., 2022, EPJ Web of Conferences, 257, 00046 * Zwicky (1937) Zwicky F., 1937, ApJ, 86, 217
# A criterion for hypersymmetry on discrete groupoids

F. Flores

2010 Mathematics Subject Classification: Primary 43A20, Secondary 47L65, 47L30. Key Words: Groupoid, symmetric Banach algebra, Fell bundle, C*-algebra.

###### Abstract

Given a Fell bundle $\mathscr{C}\overset{q}{\to}\Xi$ over the discrete groupoid $\Xi$, we study the symmetry of the associated Hahn algebra $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ in terms of the isotropy subgroups of $\Xi$. We prove that $\Xi$ is symmetric (resp. hypersymmetric) if and only if all of the isotropy subgroups are symmetric (resp. hypersymmetric). We also characterize hypersymmetry using Fell bundles with constant fibers, showing that for discrete groupoids, 'hypersymmetry' is equivalent to 'rigid symmetry'.

## 1 Introduction

This article treats the symmetry of certain Banach ∗-algebras connected with Fell bundles over discrete groupoids. The study of symmetry for groupoid algebras started not long ago, with Austad and Ortega [1], and found a continuation in [2], where Jauré, Măntoiu and the author first treated the problem of symmetry for Fell bundles over discrete groupoids.

###### Definition 1.1. A Banach ∗-algebra $\mathfrak{B}$ is called symmetric if the spectrum of $b^{*}b$ is positive for every $b\in\mathfrak{B}$ (this happens if and only if the spectrum of any self-adjoint element is real).

In this regard, the main result of the paper is the following.

###### Theorem 1.2. Let $\Xi$ be a discrete groupoid. Then:

1. The algebra $\ell^{\infty,1}(\Xi)$ is symmetric if and only if every isotropy group is symmetric.
2. The algebra $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ is symmetric for every Fell bundle $\mathscr{C}$ over $\Xi$ if and only if every isotropy group is hypersymmetric.

This result is interesting, as it reduces the question of the (hyper)symmetry of a given groupoid to the (hyper)symmetry of its isotropy subgroups, and the study of group $\ell^{1}$-algebras is far more developed. For example, see [1, 2, 4, 10, 13, 9, 5, 8] and the references therein. The article is divided into three parts: Section 2 deals with preliminaries; there we introduce Fell bundles and the Hahn algebra of a Fell bundle, and we define what (hyper)symmetry means for a groupoid. In Section 3 we introduce a characterization of hypersymmetry using only Fell bundles with constant fibers. It is analogous to the characterization obtained for groups by Jauré and Măntoiu in [4]. Finally, Section 4 deals with the proof of Theorem 1.2. This is achieved by writing a general discrete groupoid as a disjoint union of transitive groupoids and proving that transitive groupoids are isomorphic to transformation groupoids. Using the available theory for groups then yields the desired result.

## 2 Preliminaries

Let $\Xi$ be a groupoid, with unit space $\mathcal{U}:=\Xi^{(0)}$, source map ${\rm d}$ and range map ${\rm r}$. The ${\rm d}$- and ${\rm r}$-fibers are $\Xi_{u}=\\{\xi\in\Xi\mid{\rm d}(\xi)=u\\}$ and $\Xi^{u}=\\{\xi\in\Xi\mid{\rm r}(\xi)=u\\}$. The set of composable pairs is $\Xi^{(2)}\\!:=\\{(x,y)\\!\mid\\!{\rm r}(y)={\rm d}(x)\\}\,.$ The isotropy group associated with the unit $u\in\mathcal{U}$ is $\Xi_{u}^{u}=\Xi_{u}\cap\Xi^{u}$; it is a subgroupoid of $\Xi$ which happens to be a group. We endow the groupoid $\Xi$ with the discrete topology. Let us introduce an important class of groupoids which will be useful.

###### Definition 2.1.
A groupoid $\Xi$ is called transitive if for any pair $u,v\in\mathcal{U}$, there exists $x\in\Xi$ such that ${\rm r}(x)=u$ and ${\rm d}(x)=v$. The concept of a transitivity is borrowed from the theory of dynamical systems. And it comes from a natural but hidden action of the groupoid on its unit space. So the groupoid is transitive if and only if this action is transitive, cf [3, Example 2.2, Corollary 7.8]. In this article we are going to work with Fell bundles $\mathscr{C}\overset{q}{\to}\Xi$ over the groupoid ${\Xi}$ (see [6, 11]). A Fell bundle is composed of fibers and it satisfies that each fiber $\mathfrak{C}_{x}\\!:=q^{-1}(\\{x\\})$ is a Banach space with norm $\parallel\\!\cdot\\!\parallel_{\mathfrak{C}_{x}}$ , the topology of $\mathscr{C}$ coincides with the norm topology on each fiber, there are antilinear continuous involutions $\mathfrak{C}_{x}\ni\\!a\to a^{\bullet}\\!\in\mathfrak{C}_{x^{-1}}$ and for all $(x,y)\in\Xi^{(2)}$ there are continuous multiplications $\mathfrak{C}_{x}\\!\times\\!\mathfrak{C}_{y}\ni(a,b)\to a\bullet b\in\mathfrak{C}_{xy}$ satisfying the following axioms valid for $a\in\mathfrak{C}_{x}\,,b\in\mathfrak{C}_{y}\,$ and $(x,y)\in\Xi^{(2)}$ : * • $\parallel\\!ab\\!\parallel_{\mathfrak{C}_{xy}}\,\leq\,\parallel\\!a\\!\parallel_{\mathfrak{C}_{x}}\parallel\\!b\\!\parallel_{\mathfrak{C}_{y}}$ , * • $(ab)^{\bullet}=b^{\bullet}a^{\bullet}$, * • $\parallel\\!a^{\bullet}a\\!\parallel_{\mathfrak{C}_{{\rm d}(x)}}=\,\parallel\\!a\\!\parallel_{\mathfrak{C}_{x}}^{2}$ , * • $a^{\bullet}a$ is positive in $\mathfrak{C}_{{\rm d}(x)}$ . From these axioms it follows that $\mathfrak{C}_{x}$ is a $C^{*}$-algebra for every unit $x\in\mathcal{U}$ . Sometimes we simply write $\mathscr{C}=\bigsqcup_{x\in\Xi}\mathfrak{C}_{x}$ for the Fell bundle. Our object of study is the Hahn algebra $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ adapted to Fell bundles [11], which in our case it is formed by the sections $\Phi:\Xi\to\mathfrak{C}$ (thus satisfying $\Phi(x)\in\mathfrak{C}_{x}$ for every $x\in\Xi$) that can be obtained as a limit of finitely-supported sections in the Hahn-type norm $\lVert\Phi\rVert_{\infty,1}\,:=\max\Big{\\{}\sup_{u\in\mathcal{U}}\sum_{{\rm r}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{C}_{x}}\,,\,\sup_{u\in\mathcal{U}}\sum_{{\rm d}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{C}_{x}}\\!\Big{\\}}.$ (2.1) It is a Banach ∗-algebra under the multiplication $(\Phi*\Psi)(x):=\sum_{yz=x}\Phi(y)\bullet\Psi\big{(}z)$ (2.2) and the involution $\Phi^{*}(x):=\Phi(x^{-1})^{\bullet}.$ (2.3) ###### Remark 2.2. Let us point out some of the nature of the functions in $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$. If $\Phi_{n}\in\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ is a sequence of sections with finite support and $\Phi_{n}\to\Phi$, then the convergence is uniform. Indeed, let $x\in\Xi$ and observe that $\displaystyle\lVert\Phi_{n}(x)-\Phi(x)\rVert_{\mathfrak{C}_{x}}$ $\displaystyle\leq\sum_{y\in\Xi^{r(x)}}\lVert\Phi_{n}(y)-\Phi(y)\rVert_{\mathfrak{C}_{y}}$ $\displaystyle\leq\sup_{u\in\mathcal{U}}\sum_{y\in\Xi^{u}}\lVert\Phi_{n}(y)-\Phi(y)\rVert_{\mathfrak{C}_{y}}$ $\displaystyle\leq\lVert\Phi_{n}-\Phi\rVert_{\ell^{\infty,1}(\Xi\,\mid\,\mathscr{C})}.$ So $\lim_{n}\sup_{x\in\Xi}\lVert\Phi_{n}(x)-\Phi(x)\rVert_{\mathfrak{C}_{x}}=0$. This implies that the function $\Phi$ has countable support and vanishes at $\infty$. We denote by $C^{*}(\Xi\,|\,\mathscr{C})$ the enveloping $C^{*}$-algebra of the Hahn algebra $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$. 
It is a known fact that $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ is a dense ∗-subalgebra of $C^{*}(\Xi\\!\mid\\!\mathscr{C})$.

###### Definition 2.3. (i) The discrete groupoid $\Xi$ is called symmetric if the convolution Banach ∗-algebra $\ell^{\infty,1}(\Xi)$ is symmetric. (ii) The discrete groupoid $\Xi$ is called hypersymmetric if, given any Fell bundle $\mathscr{C}\\!=\bigsqcup_{x\in\Xi}\mathfrak{C}_{x}$, the Banach ∗-algebra $\ell^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$ is symmetric.

###### Example 2.4. If $\Pi\subset X\\!\times\\!X$ is an equivalence relation on $X$, one can make $\Xi=\Pi$ a discrete groupoid by defining the operations $\begin{split}{\rm d}(x,y)=(y,y)\,,\quad&{\rm r}(x,y)=(x,x)\,,\quad(x,y)(y,z)=(x,z)\,,\quad(x,y)^{-1}\\!=(y,x)\,.\end{split}$ The unit space is $\,\mathcal{U}={\sf Diag}(X)$ and it gets canonically identified with $X$ via the homeomorphism $(x,x)\mapsto x$. In this case all of the isotropy groups correspond to the trivial group $\Pi_{u}^{u}=\\{(u,u)\\}$, so Theorem 4.9 guarantees that $\Pi$ is hypersymmetric. A particular example is the so-called pair groupoid, $\Pi=X\times X$.

## 3 A characterization of hypersymmetry for discrete groupoids

In [2] we introduced some special Fell bundles arising from Hilbert bundles to characterize the hypersymmetry of a discrete groupoid. However, in this paper we improve this characterization: one can actually verify hypersymmetry by looking at much simpler algebras, associated with Fell bundles with constant fibers. Let us make precise the Fell bundles of interest.

###### Definition 3.1. By a (left) groupoid action of a discrete groupoid $\Xi$ on the $C^{*}$-bundle $\mathscr{A}\\!:=\bigsqcup_{u\in\mathcal{U}}\mathfrak{A}_{u}\overset{p}{\to}\mathcal{U}$ over its unit space we understand a continuous map $\mathscr{A}\rtimes\Xi:=\\{(\alpha,x)\in\mathscr{A}\\!\times\\!\Xi\\!\mid\\!p(\alpha)={\rm d}(x)\\}\ni(\alpha,x)\to\mathcal{T}(\alpha,x)\equiv\mathcal{T}_{x}(\alpha)\in\mathscr{A},$ satisfying the axioms (a) $p\big{[}\mathcal{T}_{x}(\alpha)\big{]}={\rm r}(x)$, $\forall\,x\in\Xi\,,\,\alpha\in\mathfrak{A}_{{\rm d}(x)}$; (b) each $\mathcal{T}_{x}$ is a ∗-isomorphism $:\mathfrak{A}_{{\rm d}(x)}\\!\to\mathfrak{A}_{{\rm r}(x)}$; (c) $\mathcal{T}_{u}={\rm id}_{\mathfrak{A}_{u}}$, $\forall\,u\in\mathcal{U}$; (d) if $(x,y)\in\Xi^{(2)}$ and $(\alpha,y)\in\mathscr{A}\rtimes\Xi$, then $\big{(}\mathcal{T}_{y}(\alpha),x\big{)}\in\mathscr{A}\rtimes\Xi$ and $\mathcal{T}_{xy}(\alpha)=\mathcal{T}_{x}\big{[}\mathcal{T}_{y}(\alpha)\big{]}$.

###### Definition 3.2. Let $\mathcal{T}$ be a groupoid action of $\Xi$ on the $C^{*}$-bundle $\mathscr{A}\\!:=\bigsqcup_{u\in\mathcal{U}}\mathfrak{A}_{u}\overset{p}{\to}\mathcal{U}$. We define its associated Fell bundle as follows. The underlying space is $\mathscr{C}_{\mathcal{T}}:=\mathscr{A}\rtimes\Xi$, with the topology inherited from the product topology and the obvious projection $q$. We endow it with the operations $(\alpha,x)\bullet(\beta,y):=(\alpha\mathcal{T}_{x}(\beta),xy)\,,\textup{ when }(x,y)\in\Xi^{(2)}$ and $(\alpha,x)^{\bullet}:=(\mathcal{T}_{x^{-1}}(\alpha^{*}),x^{-1})$ to get a Fell bundle over $\Xi$.
A section is now a map $\Phi:\Xi\to\mathscr{C}_{\mathcal{T}}$ such that $\Phi(x)\equiv\big{(}\varphi(x),x\big{)}\in\mathfrak{C}_{x}=\mathfrak{A}_{{{\rm r}}(x)}\\!\times\\!\\{x\\}\,,\quad\forall\,x\in\Xi\,.$ So we may identify every section $\Phi\in\ell^{\infty,1}(\Xi\,|\,\mathscr{C}_{\mathcal{T}})$ with a function $\Phi:\Xi\to\mathscr{A}$ (note the abuse of notation), such that $\Phi(x)\in\mathfrak{A}_{{{\rm r}}(x)}$. We will also denote the algebra $\ell^{\infty,1}(\Xi\,|\,\mathscr{C}_{\mathcal{T}})$ by $\ell^{\infty,1}_{\mathcal{T}}(\Xi,\mathscr{A})$ to recall its particular nature. If the action $\mathcal{T}$ is trivial, meaning that $\mathfrak{A}_{u}\equiv\mathfrak{A}$ for all $u\in\mathcal{U}$ and $\mathcal{T}_{x}\equiv{\rm id}_{\mathfrak{A}}$ for all $x\in\Xi$, then we denote the resulting algebra simply by $\ell^{\infty,1}(\Xi,\mathfrak{A})$.

###### Remark 3.3. Let $\Phi,\Psi\in\ell^{\infty,1}_{\mathcal{T}}(\Xi,\mathscr{A})$. In this case, one may write the algebraic laws as $\big{[}\Phi*\Psi\big{]}(x)=\\!\sum_{y\in\Xi^{{\rm r}(x)}}\\!\Phi(y)\mathcal{T}_{y}\big{[}\Psi(y^{-1}x)\big{]}\,,$ (3.1) $\Phi^{*}(x)=\mathcal{T}_{x}\big{[}\Phi(x^{-1})\big{]}^{*},$ (3.2) and the Hahn-type norm as $\parallel\\!\Phi\\!\parallel_{\ell^{\infty,1}_{\mathcal{T}}(\Xi,\mathscr{A})}=\max\Big{\\{}\sup_{u\in\mathcal{U}}\sum_{{\rm r}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{A}_{{\rm r}(x)}}\,,\,\sup_{u\in\mathcal{U}}\sum_{{\rm d}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{A}_{{\rm r}(x)}}\\!\Big{\\}}\,.$ (3.3)

###### Lemma 3.4. Let $\mathscr{C}\\!=\bigsqcup_{x\in\Xi}\mathfrak{C}_{x}$ be a Fell bundle over the discrete groupoid $\Xi$. Then there exists an isometric ∗-monomorphism $\varphi:\ell^{\infty,1}(\Xi\,|\,\mathscr{C})\to\ell^{\infty,1}\big{(}\Xi,C^{*}(\Xi\,|\,\mathscr{C})\big{)}.$ (3.4)

###### Proof. Set $\mathfrak{A}:=C^{*}(\Xi\,|\,\mathscr{C})$ and, for every $x\in\Xi$, we embed $\mathfrak{C}_{x}$ into $\ell^{\infty,1}(\Xi\,|\,\mathscr{C})\subset\mathfrak{A}$, by setting for each $a\in\mathfrak{C}_{x}$ $(\theta_{x}a)(y):=a\ \ {\rm if}\ \ y=x\,,\quad(\theta_{x}a)(y):=0_{\mathfrak{C}_{x}}\ \ {\rm if}\ \ y\neq x\,.$ It is not hard to prove that if $(x,y)\in\Xi^{(2)}$, then $\theta_{x}a*\theta_{y}b=\theta_{xy}(a\bullet b)\quad\textup{ and }\quad(\theta_{x}a)^{*}=\theta_{x^{-1}}a^{\bullet}$ hold. On the other hand, one also has $\lVert\theta_{x}a\rVert_{\mathfrak{A}}=\lVert a\rVert_{\mathfrak{C}_{x}}$: If $x\in\mathcal{U}$, this equality holds because $\theta_{x}:\mathfrak{C}_{x}\to\mathfrak{A}$ is a ∗-monomorphism of $C^{*}$-algebras, hence isometric. If $x\in\Xi$ is not a unit, then one may apply the (now standard) trick $\lVert\theta_{x}a\rVert^{2}_{\mathfrak{A}}=\lVert(\theta_{x}a)^{*}*(\theta_{x}a)\rVert_{\mathfrak{A}}=\lVert\theta_{x^{-1}x}(a^{\bullet}\bullet a)\rVert_{\mathfrak{A}}=\lVert a^{\bullet}\bullet a\rVert_{\mathfrak{C}_{x^{-1}x}}=\lVert a\rVert^{2}_{\mathfrak{C}_{x}}$ to conclude that $\theta_{x}$ preserves the mentioned norms.
This allows us to successfully define $\varphi(\Phi)(x)=\theta_{x}\Phi(x),\textup{ for }\Phi\in\ell^{\infty,1}(\Xi\,|\,\mathscr{C})$ and have an isometry: $\displaystyle\lVert\varphi(\Phi)\rVert_{\ell^{\infty,1}(\Xi,{\mathfrak{A}})}$ $\displaystyle=\max\Big{\\{}\sup_{u\in\mathcal{U}}\sum_{{\rm r}(x)=u}\\!\parallel\\!\theta_{x}\Phi(x)\\!\parallel_{\mathfrak{A}}\,,\,\sup_{u\in\mathcal{U}}\sum_{{\rm d}(x)=u}\\!\parallel\\!\theta_{x}\Phi(x)\\!\parallel_{\mathfrak{A}}\\!\Big{\\}}$ $\displaystyle=\max\Big{\\{}\sup_{u\in\mathcal{U}}\sum_{{\rm r}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{C}_{x}}\,,\,\sup_{u\in\mathcal{U}}\sum_{{\rm d}(x)=u}\\!\parallel\\!\Phi(x)\\!\parallel_{\mathfrak{C}_{x}}\\!\Big{\\}}$ $\displaystyle=\lVert\Phi\rVert_{\ell^{\infty,1}(\Xi\,|\,\mathscr{C})}.$ Now we check that $\varphi$ is an ∗-homomorphism, $\displaystyle\big{[}\varphi(\Phi)*\varphi(\Psi)\big{]}(x)$ $\displaystyle=\sum_{yz=x}\theta_{y}\Phi(y)*\theta_{z}\Psi(z)$ $\displaystyle=\theta_{x}\sum_{yz=x}\Phi(y)\bullet\Psi(z)$ $\displaystyle=\varphi(\Phi*\Psi)(x)$ and $\varphi(\Phi^{*})(x)=\theta_{x}\Phi^{*}(x)=\theta_{x}\Phi(x^{-1})^{\bullet}=\big{[}\theta_{x^{-1}}\Phi(x^{-1})\big{]}^{*}=\varphi(\Phi)(x^{-1})^{*}=\varphi(\Phi)^{*}(x).$ This finishes the proof.∎ ###### Remark 3.5. Observe that $\ell^{\infty,1}(\Xi,\mathfrak{A})\cong\ell^{\infty,1}(\Xi)\,\hat{\otimes}\,\mathfrak{A}$, where $\hat{\otimes}$ denotes the projective tensor product. Indeed, given $(\varphi,a)\in\ell^{\infty,1}(\Xi)\times\mathfrak{A}$, define the function $\varphi\otimes a$ by $\big{[}\varphi\otimes a\big{]}(x):=a\varphi(x),\quad\forall x\in\Xi.$ The map $(\varphi,a)\mapsto\varphi\otimes a$ defined in $\ell^{\infty,1}(\Xi)\times\mathfrak{A}\to\ell^{\infty,1}(\Xi,\mathfrak{A})$ is bilinear, has norm $1$ (it satisfies $\lVert\varphi\otimes a\rVert_{\ell^{\infty,1}(\Xi,\mathfrak{A})}\leq\lVert a\rVert_{\mathfrak{A}}\lVert\varphi\rVert_{\ell^{\infty,1}(\Xi)}$) and it extends to ∗-isomorphism. $\iota:\ell^{\infty,1}(\Xi)\,\hat{\otimes}\,\mathfrak{A}\to\ell^{\infty,1}(\Xi,\mathfrak{A})$ The following corollary effectively reduces our concerns to the study of tensor products, just as in the group case (cf. [4, Theorem 2.4]). ###### Corollary 3.6. A discrete groupoid $\Xi$ is hypersymmetric if and only if the Banach ∗-algebra $\ell^{\infty,1}(\Xi,\mathfrak{A})\cong\ell^{\infty,1}(\Xi)\,\hat{\otimes}\,\mathfrak{A}$ is symmetric, for every $C^{*}$-algebra $\mathfrak{A}$. ###### Proof. Every algebra of the form $\ell^{\infty,1}(\Xi,\mathfrak{A})$ comes from a Fell bundle (recall Definition 3.2), so it is symmetric if $\Xi$ is hypersymmetric. On the other hand, because of Lemma 3.4, given any Fell bundle $\mathscr{C}$, $\ell^{\infty,1}(\Xi\,|\,\mathscr{C})$ may be identified as a closed ∗-subalgebra of some algebra of the form $\ell^{\infty,1}(\Xi,\mathfrak{A})$. So it will be symmetric if the latter is symmetric, by [12, Theorem 11.4.2]. ∎ ###### Remark 3.7. In the group case, this condition has been called ’rigid symmetry’ by many authors (including myself) and it was introduced by Leptin and Poguntke in [9]. It can also be seen in [1, 2, 4, 10, 13]. Before going into the following sections, let us simplify some notation. Let $\mathfrak{A},\mathfrak{B}$ be Banach ∗-algebras. We will denote by $\mathfrak{A}\hookrightarrow\mathfrak{B}$ the fact that there exists an isometric ∗-monomorphism $\iota:\mathfrak{A}\to\mathfrak{B}$. 
In this language, the conclusion of Lemma 3.4 can be written as $\ell^{\infty,1}(\Xi\,|\,\mathscr{C})\hookrightarrow\ell^{\infty,1}\big{(}\Xi,C^{*}(\Xi\,|\,\mathscr{C})\big{)}$. On the other hand, the dual notation $\mathfrak{A}\twoheadrightarrow\mathfrak{B}$ means that there exists some contractive ∗-epimorphism $\pi:\mathfrak{A}\to\mathfrak{B}$. ## 4 The result for discrete groupoids The rest of the paper is devoted to proving Theorem 4.9. We will follow the strategy detailed in the introduction, starting with a complete (well-known) characterization of discrete transitive groupoids as group transformation groupoids. ###### Definition 4.1. Let $\gamma$ be a continuous action of the (discrete) group ${\sf G}$ on the topological space $X$. We define the transformation groupoid $\Xi:={\sf G}\ltimes_{\gamma}\\!X$ associated to it as follows. As a topological space, ${\sf G}\ltimes_{\gamma}\\!X$ is just ${\sf G}\times X$; the maps ${\rm r},{\rm d}$ are given by ${\rm d}(a,x)=x$ and ${\rm r}(a,x)=\gamma_{a}(x)$. The composition is $\big{(}b,\gamma_{a}(x)\big{)}(a,x):=(ba,x)$ and inversion reads $(a,x)^{-1}:=\big{(}a^{-1}\\!,\gamma_{a}(x)\big{)}$. The unit space is $\mathcal{U}=\\{{{\sf e}}\\}\times X\equiv X$. ###### Proposition 4.2. Let $\Xi$ be a discrete transitive groupoid with isotropy group ${\sf G}$. There is an abelian group structure on $\mathcal{U}$ and an action $\gamma$ of ${\sf G}^{\prime}={\sf G}\times\mathcal{U}$ on $\mathcal{U}$ such that $\Xi\cong{\sf G}^{\prime}\ltimes_{\gamma}\mathcal{U}$. ###### Proof. Let us give $\mathcal{U}$ some abelian group structure with additive notation and define the action of ${\sf G}^{\prime}$ by $\gamma_{(\alpha,w)}(v)=w+v$. In order to construct an isomorphism $\varphi$, fix some $u\in\mathcal{U}$ and realize ${\sf G}$ as $\Xi_{u}^{u}$. Since $\Xi$ is transitive, for every $v\in\mathcal{U}$, there exists an arrow $z_{v}\in\Xi$ such that ${\rm d}(z_{v})=v$ and ${\rm r}(z_{v})=u$. Then $\varphi:{\sf G}^{\prime}\ltimes_{\gamma}\mathcal{U}\to\Xi\textup{ defined by }\varphi\big{(}(x,w),v\big{)}=z^{-1}_{w+v}xz_{v},\textup{ is the required isomorphism.}$ Let us verify that $\varphi$ is indeed a groupoid isomorphism: * (i) If $\big{(}(x_{1},w_{1}),w_{2}+v\big{)}\big{(}(x_{2},w_{2}),v\big{)}=\big{(}(x_{1}x_{2},w_{1}+w_{2}),v\big{)}$, then $\displaystyle\varphi\big{(}(x_{1}x_{2},w_{1}+w_{2}),v\big{)}$ $\displaystyle=z^{-1}_{w_{1}+w_{2}+v}x_{1}x_{2}z_{v}$ $\displaystyle=z^{-1}_{w_{1}+w_{2}+v}x_{1}z_{w_{2}+v}z_{w_{2}+v}^{-1}x_{2}z_{v}$ $\displaystyle=\varphi\big{(}(x_{1},w_{1}),w_{2}+v\big{)}\varphi\big{(}(x_{2},w_{2}),v\big{)}$ * (ii) If $\big{(}(x,w),v\big{)}\in{\sf G}^{\prime}\ltimes_{\gamma}\mathcal{U}$, then $\big{(}(x,w),v\big{)}^{-1}=\big{(}(x^{-1},-w),w+v\big{)}$ and $\displaystyle\varphi\big{(}(x^{-1},-w),w+v\big{)}$ $\displaystyle=z^{-1}_{v}x^{-1}z_{w+v}$ $\displaystyle=(z^{-1}_{w+v}xz_{v})^{-1}=\varphi\big{(}(x,w),v\big{)}^{-1}.$ * (iii) The inverse function $\varphi^{-1}$ is given by $\varphi^{-1}(\xi)=\big{(}(z_{{\rm r}(\xi)}\xi z_{{\rm d}(\xi)}^{-1},{\rm r}(\xi)-{\rm d}(\xi)),{\rm d}(\xi)\big{)}$. ∎ ###### Remark 4.3. It follows from Proposition 4.2 that the pair groupoid $\mathcal{U}\times\mathcal{U}$ is isomorphic to $\mathcal{U}\ltimes_{\gamma|_{\mathcal{U}}}\mathcal{U}$. (Here ${\sf G}=\\{{\sf e}\\}$ is the trivial group.) While the commutativity of $\mathcal{U}$ was not essential for the previous proof (any group structure would have worked), it will be of vital importance in the future (cf. the proof of Theorem 4.7).
That is why it will be occasionally remarked in the following propositions. The following lemma requires some notation. Given a $C^{*}$-algebra $\mathfrak{A}$ and a group action $\gamma:{\sf G}\to{\rm Bij}(\mathcal{U})$, denote by $\Gamma$ the ${\sf G}$-action on $\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})$ satisfying $\Gamma_{x}(f)(u)=f\big{(}\gamma_{x^{-1}}(u)\big{)}$. ###### Lemma 4.4. Suppose that the discrete group ${\sf G}$ acts on $\mathcal{U}$ via $\gamma$ and $\mathfrak{A}$ is a $C^{*}$-algebra. Then $\ell^{1}_{\Gamma}\big{(}{\sf G},\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})\big{)}\twoheadrightarrow\ell^{\infty,1}({\sf G}\ltimes_{\gamma}\mathcal{U},\mathfrak{A}).$ (4.1) ###### Proof. Define $\pi:\ell^{1}_{\Gamma}\big{(}{\sf G},\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})\big{)}\to\ell^{\infty,1}({\sf G}\ltimes_{\gamma}\mathcal{U},\mathfrak{A})$ by the formula $\pi(\Phi)(x,u)=\Phi(x)\big{(}\gamma_{x}(u)\big{)}$. This map is well-defined and clearly surjective, while it also satisfies $\displaystyle\lVert\pi(\Phi)\rVert_{\ell^{\infty,1}({\sf G}\ltimes_{\gamma}\mathcal{U},\mathfrak{A})}$ $\displaystyle=\max\Big{\\{}\sup_{u\in\mathcal{U}}\sum_{x\in{\sf G}}\\!\parallel\\!\Phi(x)\big{(}\gamma_{x}(u)\big{)}\\!\parallel_{\mathfrak{A}}\,,\,\sup_{u\in\mathcal{U}}\sum_{x\in{\sf G}}\\!\parallel\\!\Phi(x)(u)\\!\parallel_{\mathfrak{A}}\\!\Big{\\}}$ $\displaystyle\leq\sum_{x\in{\sf G}}\sup_{u\in\mathcal{U}}\\!\parallel\\!\Phi(x)(u)\\!\parallel_{\mathfrak{A}}=\sum_{x\in{\sf G}}\lVert\Phi(x)\rVert_{\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})}=\lVert\Phi\rVert_{\ell^{1}_{\Gamma}({\sf G},\mathcal{C}_{0}(\mathcal{U},\mathfrak{A}))}$ and $\displaystyle\pi(\Phi*\Psi)(x,u)$ $\displaystyle=\big{[}\Phi*\Psi\big{]}(x)\big{(}\gamma_{x}(u)\big{)}$ $\displaystyle=\Big{[}\sum_{y\in{\sf G}}\Phi(y)\Gamma_{y}\big{[}\Psi(y^{-1}x)\big{]}\Big{]}\big{(}\gamma_{x}(u)\big{)}$ $\displaystyle=\sum_{y\in{\sf G}}\Phi(y)\big{(}\gamma_{x}(u)\big{)}\Psi(y^{-1}x)\big{(}\gamma_{y^{-1}x}(u)\big{)}$ $\displaystyle=\sum_{y\in{\sf G}}\pi(\Phi)(y,\gamma_{y^{-1}x}(u))\pi(\Psi)(y^{-1}x,u)$ $\displaystyle=\big{[}\pi(\Phi)*\pi(\Psi)\big{]}(x,u).$ Finally, we see that $\displaystyle\pi(\Phi^{*})(x,u)$ $\displaystyle=\Phi^{*}(x)\big{(}\gamma_{x}(u)\big{)}$ $\displaystyle=\Gamma_{x}\big{[}\Phi(x^{-1})^{*}]\big{(}\gamma_{x}(u)\big{)}=\Phi(x^{-1})(u)^{*}=\big{[}\pi(\Phi)(x^{-1},\gamma_{x}(u))\big{]}^{*}=\pi(\Phi)^{*}(x,u).$ Hence $\pi$ is a contractive ∗-epimorphism. ∎ Lemma 4.4 will be used in the following form, which allows us to focus on the study of $\ell^{1}$-algebras arising from group $C^{*}$-dynamical systems. ###### Corollary 4.5. If $\ell^{1}_{\Gamma}\big{(}{\sf G},\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})\big{)}$ is a symmetric Banach ∗-algebra, then $\ell^{\infty,1}({\sf G}\ltimes_{\gamma}\mathcal{U},\mathfrak{A})$ is also symmetric. ###### Proof. In Lemma 4.4, we showed that $\ell^{1}_{\Gamma}\big{(}{\sf G},\mathcal{C}_{0}(\mathcal{U},\mathfrak{A})\big{)}\twoheadrightarrow\ell^{\infty,1}({\sf G}\ltimes_{\gamma}\mathcal{U},\mathfrak{A})$, so the conclusion follows from [12, Theorem 11.4.2]. ∎ ###### Proposition 4.6. Let $({\sf G}\times{\sf H},\Gamma,\mathfrak{A})$ be a $C^{*}$-dynamical system and assume that ${\sf G}$ acts trivially on $\mathfrak{A}$. Then one has $\ell^{1}_{\Gamma}({\sf G}^{\prime},\mathfrak{A})\cong\ell^{1}({\sf G})\,\hat{\otimes}\,\ell^{1}_{\Gamma}({\sf H},\mathfrak{A}),$ (4.2) where ${\sf G}^{\prime}:={\sf G}\times{\sf H}$. ###### Proof. 
Observe that $\iota:\ell^{1}_{\Gamma}({\sf G}^{\prime},\mathfrak{A})\to\ell^{1}\big{(}{\sf G},\ell^{1}_{\Gamma}({\sf H},\mathfrak{A})\big{)}\quad\textup{ defined by }\quad\iota(\Phi)(x)(y)=\Phi(x,y)$ is an isometric ∗-isomorphism. Indeed, bijectivity is direct, while $\displaystyle\lVert\iota(\Phi)\rVert_{\ell^{1}({\sf G},\ell^{1}_{\Gamma}({\sf H},\mathfrak{A}))}$ $\displaystyle=\sum_{x\in{\sf G}}\lVert\iota(\Phi)(x)\rVert_{\ell^{1}_{\Gamma}({\sf H},\mathfrak{A})}=\sum_{x\in{\sf G}}\sum_{y\in{\sf H}}\lVert\Phi(x,y)\rVert_{\mathfrak{A}}=\lVert\Phi\rVert_{\ell^{1}_{\Gamma}({\sf G}^{\prime},\mathfrak{A})}$ and, identifying $(1_{\sf G},b)\in{\sf G}^{\prime}$ with $b\in{\sf H}$, $\displaystyle\big{[}\iota(\Phi)*\iota(\Psi)\big{]}(x)(y)$ $\displaystyle=\Big{[}\sum_{a\in{\sf G}}\iota(\Phi)(a)*\iota(\Psi)(a^{-1}x)\Big{]}(y)$ $\displaystyle=\sum_{a\in{\sf G}}\sum_{b\in{\sf H}}\iota(\Phi)(a)(b)\Gamma_{b}\big{[}\iota(\Psi)(a^{-1}x)(b^{-1}y)\big{]}$ $\displaystyle=\sum_{(a,b)\in{\sf G}\times{\sf H}}\Phi(a,b){\Gamma}_{(a,b)}\big{[}\Psi(a^{-1}x,b^{-1}y)\big{]}$ $\displaystyle=\iota(\Phi*\Psi)(x)(y).$ Finally, $\displaystyle\iota(\Phi^{*})(x)(y)=\Phi^{*}(x,y)=\Gamma_{y}\big{[}\Phi(x^{-1},y^{-1})^{*}\big{]}=\iota(\Phi)(x^{-1})^{*}(y)=\iota(\Phi)^{*}(x)(y).$ So $\ell^{1}_{\Gamma}({\sf G}^{\prime},\mathfrak{A})\cong\ell^{1}\big{(}{\sf G},\ell^{1}_{\Gamma}({\sf H},\mathfrak{A})\big{)}\cong\ell^{1}({\sf G})\,\hat{\otimes}\,\ell^{1}_{\Gamma}({\sf H},\mathfrak{A})$. ∎ We can put together the previous lemmas to obtain the following result, which is Theorem 1.2 for transitive groupoids. ###### Theorem 4.7. Let $\Xi$ be a discrete transitive groupoid with isotropy subgroup ${\sf G}$. If ${\sf G}$ is symmetric (resp. hypersymmetric), then $\Xi$ is symmetric (resp. hypersymmetric). ###### Proof. Let us divide the proof into two cases, the first one being about hypersymmetry. Because of Proposition 4.2, we may assume that $\Xi={\sf G}^{\prime}\ltimes_{\gamma}{\sf H}$, with ${\sf G}^{\prime}={\sf G}\times{\sf H}$ and $\gamma_{(g,h)}(k)=h+k$. First suppose that ${\sf G}$ is hypersymmetric. Because of Corollaries 3.6 and 4.5, it is enough to show that ${\sf G}\times{\sf H}$ is hypersymmetric (or ’rigidly symmetric’, see Remark 3.7). But this follows from the fact that ${\sf H}$ is abelian and [9, Theorem 7]. Now suppose that ${\sf G}$ is only symmetric. Note that $\gamma|_{\sf H}$ coincides with the action of ${\sf H}$ on itself by left translation, so we will denote it by ${\rm lt}$. Then $\displaystyle\ell^{1}_{\Gamma}\big{(}{\sf G}^{\prime},\mathcal{C}_{0}({\sf H})\big{)}$ $\displaystyle\overset{(4.2)}{\cong}\ell^{1}({\sf G})\,\hat{\otimes}\,\ell^{1}_{\rm lt}\big{(}{\sf H},\mathcal{C}_{0}({\sf H})\big{)}$ $\displaystyle\overset{(3.4)}{\hookrightarrow}\ell^{1}({\sf G})\,\hat{\otimes}\,\ell^{1}({\sf H})\,\hat{\otimes}\,\big{(}{\sf H}\ltimes_{\rm lt}\mathcal{C}_{0}({\sf H})\big{)}$ $\displaystyle\cong\ell^{1}({\sf H})\,\hat{\otimes}\,\ell^{1}\big{(}{\sf G},\mathcal{K}(\ell^{2}({\sf H}))\big{)}.$ The final isomorphism holds because of the Stone-von Neumann theorem: ${\sf H}\ltimes_{\rm lt}\mathcal{C}_{0}({\sf H})\cong\mathcal{K}\big{(}\ell^{2}({\sf H})\big{)}$. $\ell^{1}\big{(}{\sf G},\mathcal{K}(\ell^{2}({\sf H}))\big{)}$ is symmetric because of [5, Theorem 1], hence $\ell^{1}({\sf H})\,\hat{\otimes}\,\ell^{1}\big{(}{\sf G},\mathcal{K}(\ell^{2}({\sf H}))\big{)}$ is symmetric because of [8, Theorem 5].
We conclude that $\ell^{1}_{\Gamma}\big{(}{\sf G}^{\prime},\mathcal{C}_{0}({\sf H})\big{)}$ and thus $\ell^{\infty,1}(\Xi)$ are symmetric. ∎ ###### Corollary 4.8. The pair groupoid over a discrete set $\mathcal{U}$ is hypersymmetric. Now we will proceed to upgrade Theorem 4.7 to the case of general discrete groupoids. ###### Theorem 4.9. A discrete groupoid $\Xi$ is symmetric (resp. hypersymmetric) if and only if its isotropy subgroups are symmetric (resp. hypersymmetric). ###### Proof. It is obvious that the symmetry (resp. hypersymmetry) of $\Xi$ implies the symmetry (resp. hypersymmetry) of its isotropy groups ($\ell^{1}(\Xi_{u}^{u},\mathfrak{A})\hookrightarrow\ell^{\infty,1}(\Xi,\mathfrak{A})$ in a natural way). As this ’only if’ part is clear, let us prove the ’if’ part. Let $\mathfrak{A}$ be a $C^{*}$-algebra and $\Phi$ be an arbitrary section in $\ell^{\infty,1}(\Xi,\mathfrak{A})$. $\Phi$ has a countable support (see Remark 2.2), and since every discrete groupoid can be decomposed as a disjoint union of discrete transitive subgroupoids, we can find a countable number of disjoint transitive subgroupoids $\\{\Xi(i)\\}_{i\in\mathbb{N}}$, such that ${\rm supp}(\Phi)\subset\bigcup_{i=1}^{\infty}\Xi(i)$. Define $\Phi_{n}\in\ell^{\infty,1}(\Xi,\mathfrak{A})$ as $\Phi_{n}(x):=\left\\{\begin{array}[]{ll}\Phi(x)&\textup{if\ }x\in\bigcup_{i=1}^{n}\Xi(i),\\\ 0_{\mathfrak{A}}&\textup{if\ }x\not\in\bigcup_{i=1}^{n}\Xi(i).\\\ \end{array}\right.$ We have that $\lim_{n}\Phi_{n}=\Phi$ in $\ell^{\infty,1}(\Xi,\mathfrak{A})$, but more importantly, ${\rm Spec}_{\ell^{\infty,1}(\Xi,\mathfrak{A})}(\Phi)=\bigcup_{n=1}^{\infty}{\rm Spec}_{\ell^{\infty,1}(\Xi,\mathfrak{A})}(\Phi_{n}).$ (4.3) So, to conclude, it is enough to prove that ${\rm Spec}_{\ell^{\infty,1}(\Xi,\mathfrak{A})}(\Phi_{n})\subset\mathbb{R}$ for every $n$, whenever $\Phi^{*}=\Phi$. Indeed, if $\Phi^{*}=\Phi$, after fixing $n$ we see that $\Phi_{n}^{*}=\Phi_{n}$ and $\ell^{\infty,1}(\Xi,\mathfrak{A})$ contains an isometric ∗-isomorphic copy of $\ell^{\infty,1}(\bigcup_{i=1}^{n}\Xi(i),\mathfrak{A})$, obtained by extending the sections in the latter algebra with zeros outside its original domain. So by definition, one has that $\Phi_{n}\in\ell^{\infty,1}(\bigcup_{i=1}^{n}\Xi(i),\mathfrak{A})\subset\ell^{\infty,1}(\Xi,\mathfrak{A})$, which implies that ${\rm Spec}_{\ell^{\infty,1}(\Xi,\mathfrak{A})}(\Phi_{n})\subset{\rm Spec}_{\ell^{\infty,1}(\bigcup_{i=1}^{n}\Xi(i),\mathfrak{A})}(\Phi_{n}).$ But $\ell^{\infty,1}(\bigcup_{i=1}^{n}\Xi(i),\mathfrak{A})\cong\bigoplus_{i=1}^{n}\ell^{\infty,1}(\Xi(i),\mathfrak{A})$ and since the (finite) direct sum of symmetric Banach ∗-algebras is symmetric, ${\rm Spec}_{\ell^{\infty,1}(\bigcup_{i=1}^{n}\Xi(i),\mathfrak{A})}(\Phi_{n})\subset\mathbb{R}$ and the result follows. ∎ ###### Example 4.10. Let $\Xi={\sf G}\ltimes_{\gamma}\\!X$ be a transformation groupoid (see Definition 4.1), where $X$ is discrete. In this case, $\mathcal{U}=X$ and $\Xi_{u}^{u}=\\{g\in{\sf G}\mid\gamma_{g}(u)=u\\}\times\\{u\\}$, which is identifiable with the stabilizer of $u$, namely ${\rm Stab}_{\gamma}(u)\leq{\sf G}$. ###### Remark 4.11. We will finish this paper with a bit of wishful thinking: It is reasonable to believe that the groupoid analog of [2, Theorem 3.3] could be true. For a general locally compact groupoid one can define symmetry and hypersymmetry for $\Xi$ as the symmetry of the Hahn algebras $L^{\infty,1}(\Xi)$ and $L^{\infty,1}(\Xi\\!\mid\\!\mathscr{C})$, respectively. So we may pose two problems: 1.
$(i)$ Is the (hyper)symmetry of a Hausdorff locally compact groupoid $\Xi$ implied by the (hyper)symmetry of its discretization $\Xi^{\rm dis}$? 2. $(ii)$ Is Corollary 3.6 still valid for étale groupoids? In view of Theorem 1.2, $(i)$ is equivalent to 1. $(i^{\prime})$ Is the (hyper)symmetry of a Hausdorff locally compact groupoid $\Xi$ implied by the (hyper)symmetry of its discretized isotropy groups $(\Xi_{u}^{u})^{\rm dis}$? ## References * [1] A. Austad and E. Ortega: _Groupoids and Hermitian Banach ∗-Algebras_, arXiv:2105.14793 [math.FA]. * [2] F. Flores, D. Jauré and M. Măntoiu: _Symmetry for algebras associated to Fell bundles over groups and groupoids_, to appear in Journal of Operator Theory. * [3] F. Flores and M. Măntoiu: _Topological dynamics for groupoid actions_, to appear in Groups, Geometry and Dynamics. * [4] D. Jauré and M. Măntoiu: _Symmetry and Spectral Invariance for Topologically Graded $C^{*}$-Algebras and Partial Action Systems_, Bull. London Math. Soc. 54(4), 1448–1469, (2022). * [5] W. Kugler: _On the Symmetry of Generalized $L^{1}$-Algebras_, Math. Z. 168(3), 241–262, (1979). * [6] A. Kumjian: _Fell Bundles over Groupoids_, Proc. Amer. Math. Soc. 126(4), 1115–1125, (1998). * [7] H. Leptin: _Symmetrie in Banachschen Algebren_, Arch. Math. (Basel), 27(4), 394–400, (1976). * [8] H. Leptin: _Ideal Theory in Group Algebras of Locally Compact Groups_, Inventiones Mathematicae 31, 259–278, (1976). * [9] H. Leptin and D. Poguntke: _Symmetry and Nonsymmetry for Locally Compact Groups_, J. Funct. Anal. 33(2), 119–134, (1979). * [10] M. Măntoiu: _Symmetry and Inverse Closedness for Banach $C^{*}$-Algebras Associated to Discrete Groups_, Banach Journal of Mathematical Analysis, 9(2), 289–310, (2015). * [11] P. Muhly and D. Williams: _Equivalence and Disintegration Theorems for Fell Bundles and Their $C^{*}$-Algebras_, Dissertationes Math. (Rozprawy Mat.), 456, 1–57, (2008). * [12] T.W. Palmer: _Banach Algebras and the General Theory of ∗-Algebras_, Vol. II. ∗-Algebras, Encyclopedia of Mathematics and its Applications, 69. Cambridge University Press, Cambridge, 2001. * [13] D. Poguntke: _Rigidly Symmetric $L^{1}$-Group Algebras_, Seminar Sophus Lie, 2, 189–197, (1992). ADDRESS F. Flores, Department of Mathematics, University of Virginia, 114 Kerchof Hall, 141 Cabell Dr, Charlottesville, Virginia, United States. E-mail: <EMAIL_ADDRESS>
# Separating Subversion Forcing Principles Hiroshi Sakai Graduate School of System Informatics, Kobe University, Rokko-dai 1-1, Nada, Kobe, 657-8501, Japan<EMAIL_ADDRESS>and Corey Bacal Switzer Institute of Discrete Mathematics and Geometry, Vienna University of Technology, Wiedner Hauptstrasse 8-10, 1040 Vienna, Austria <EMAIL_ADDRESS> ###### Abstract. We study a family of variants of Jensen’s _subcomplete forcing axiom_, $\mathsf{SCFA}$, and _subproper forcing axiom_, $\mathsf{SubPFA}$. Using these we develop a general technique for proving non-implications of $\mathsf{SCFA}$, $\mathsf{SubPFA}$, and their relatives, and give several applications. For instance we show that $\mathsf{SCFA}$ does not imply $\mathsf{MA}^{+}(\sigma$-closed$)$ and $\mathsf{SubPFA}$ does not imply Martin’s Maximum. ###### Key words and phrases: ###### 2010 Mathematics Subject Classification: 03E17, 03E35, 03E50 _Acknowledgments:_ The first author would like to thank JSPS for the support through grant numbers 18K03397 and 21K03338. The second author would like to thank the Austrian Science Fund (FWF) for the generous support through grant number Y1012-N35. ## 1\. Introduction In this paper we study variants of subcomplete and subproper forcing classes with an eye towards investigating and distinguishing their forcing principles. Subcomplete and subproper forcing are two classes of forcing notions introduced by Jensen in [13] in connection with the extended Namba problem; see [14, Section 6.4], and see Definition 1.4 below for precise definitions. Both are iterable with revised countable support and significantly generalize $\sigma$-closed and proper forcing notions respectively, while allowing, under some circumstances, new $\omega$-sequences of ordinals to be added to cardinals of uncountable cofinality. As such each comes with a forcing axiom (consistent relative to a supercompact cardinal). The forcing axiom for subcomplete forcing in particular, dubbed $\mathsf{SCFA}$ by Jensen in [11, 14], is especially interesting as it is consistent with $\diamondsuit$ while implying some of the strong, structural consequences of $\mathsf{MM}$ such as $\mathrm{SCH}$ and strong reflection principles including the failure of square principles; see [14, Section 4]. Since their initial introduction subcomplete and subproper forcing have been tied to several applications and received further treatment; see, for instance, [6, 8, 9, 10]. Already in [10] the second author and Fuchs found (seemingly) more general classes, dubbed “$\infty$-subcomplete” and “$\infty$-subproper”, each containing the corresponding non-“$\infty$” version, and proved a variety of iteration and preservation theorems. The main theorem in that work was that the forcing axiom for $\infty$-subcomplete forcing notions, $\infty$-$\mathsf{SCFA}$, is compatible with a large variety of behavior on $\aleph_{1}$ when $\mathsf{CH}$ fails. For instance, $\aleph_{1}=\mathfrak{d}<\mathfrak{c}=\aleph_{2}$ and the existence of Souslin trees are both consistent with $\infty$-$\mathsf{SCFA}+\neg\mathsf{CH}$. In this paper we combine the $\infty$-versions of these forcing classes with a further parametrization “above $\mu$” for cardinals $\mu$, initially investigated, somewhat sparingly, by Jensen in [12, Section 3].
This leads to a large family of forcing axioms $\infty$-$\mathsf{SubPFA}\upharpoonright\mu$ and $\infty$-$\mathsf{SCFA}\upharpoonright\mu$, where $\infty$-$\mathsf{SubPFA}$ and $\infty$-$\mathsf{SCFA}$ coincide with $\infty$-$\mathsf{SubPFA}\upharpoonright 2^{\aleph_{0}}$ and $\infty$-$\mathsf{SCFA}\upharpoonright 2^{\aleph_{0}}$ respectively. The main motivation of this work is to investigate how these axioms relate to one another and to other, more well known axioms such as $\mathsf{MM}$ and $\mathsf{MA}^{+}(\sigma\mbox{-closed})$. Formal definitions will be given in the second part of this introduction and Section 2 but the definitions of these axioms alongside well known results provides almost immediately that the following diagram of implications holds with $2^{\aleph_{0}}\leq\nu<\mu$ cardinals. $\mathsf{MM}$$\mathsf{MA}^{+}(\sigma{\rm- closed})$$\infty$-$\mathsf{SubPFA}$$\infty$-$\mathsf{SubPFA}\upharpoonright\nu$$\infty$-$\mathsf{SubPFA}\upharpoonright\mu$$\mathsf{PFA}$$\forall\kappa\neg\square_{\kappa}$$\infty$-$\mathsf{SCFA}$$\infty$-$\mathsf{SCFA}\upharpoonright\nu$$\infty$-$\mathsf{SCFA}\upharpoonright\mu$$\forall\kappa\geq 2^{\aleph_{0}}\neg\square_{\kappa}$$\forall\kappa\geq\nu^{\aleph_{0}}\neg\square_{\kappa}$$\forall\kappa\geq\mu^{\aleph_{0}}\neg\square_{\kappa}$ Figure 1. Subversion forcing principles and then some The main result of this work is that essentially no arrows are missing from Figure 1 above. ###### Main Theorem 1.1. Let $2^{\aleph_{0}}\leq\nu\leq\lambda<\mu=\lambda^{+}$ be cardinals with $\nu^{\omega}<\mu$. Assuming the consistency of a supercompact cardinal, the implications given in Figure 1 are complete in the sense that if no composition of arrows exists from one axiom to another then there is a model of $\mathsf{ZFC}$ in which the implication fails222Except for the trivial $\forall\kappa\neg\square_{\kappa}\to\forall\kappa\geq 2^{\aleph_{0}}\neg\square_{\kappa}$ which did not fit aesthetically into the picture.. As a corollary of this theorem and its proof we obtain separations of several “subversion” forcing principles from other, more well-studied reflection principles and forcing axioms. For instance we show the following. ###### Theorem 1.1 (See Theorem 3.1). Assuming the consistency of a supercompact cardinal, $\mathsf{SCFA}$ does not imply the failure of $\square_{\aleph_{1}}$ when $\mathsf{CH}$ fails. ###### Corollary 1.2. Assuming the consistency of a supercompact cardinal, $\mathsf{SCFA}$ does not imply $\mathsf{MA}^{+}(\sigma{\mbox{\rm-closed}})$. The rest of this paper is organized as follows. In the next subsection of this introduction we give relevant background and terminology. In the next Section we introduce the variants $\infty$-subcompleteness and $\infty$-subproperness above $\mu$ and discuss some of their properties. In Section 3 we study the forcing axioms associated to these classes and show, amongst other things, that they are distinct as well as the fact $\infty$-$\mathsf{SCFA}$ implies neither $\mathsf{MA}^{+}(\sigma{\rm-closed})$ nor $\neg\square_{\kappa}$ for any $\kappa<2^{\omega}$. In Section 4 we continue this investigation and show that $\infty$-$\mathsf{SubPFA}$ does not imply $\mathsf{MM}$. Section 5 concludes with some final remarks and open problems. ### 1.1. Preliminaries We conclude this introduction with the key definitions we will use throughout, beginning with that of subproperness and subcompleteness. 
These are the two classes of forcing notions defined by Jensen in [14] which have found several applications; see e.g. [17, 13, 6, 9]. More discussion of these concepts can be found in [14] or [10]. Before giving the definitions we will need one preliminary notion. Below we denote by $\mathsf{ZFC}^{-}$ the axioms of $\mathsf{ZFC}$ without the power set axiom. ###### Definition 1.3. A transitive set $N$ (usually a model of $\mathsf{ZFC}^{-}$) is _full_ if there is an ordinal $\gamma$ so that $L_{\gamma}(N)\models\mathsf{ZFC}^{-}$ and $N$ is regular in $L_{\gamma}(N)$, i.e. for all $x\in N$ and $f\in L_{\gamma}(N)$, if $f:x\to N$ then ${\rm ran}(f)\in N$. ###### Definition 1.4. Let $\mathbb{P}$ be a forcing notion and let $\delta(\mathbb{P})$ be the least size of a dense subset of $\mathbb{P}$. 1. (1) We say that $\mathbb{P}$ is _subcomplete_ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta$, if $\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}$ is generic then there is a $p\in\mathbb{P}$ so that if $p\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta$ 2\. $\sigma^{\prime}``\bar{G}\subseteq G$ 3\. ${\rm Hull}^{N}(\delta(\mathbb{P})\cup{\rm ran}(\sigma))={\rm Hull}^{N}(\delta(\mathbb{P})\cup{\rm ran}(\sigma^{\prime}))$ 2. (2) We say that $\mathbb{P}$ is _subproper_ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $p\in N\cap\mathbb{P}$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta$, there is a $q\in\mathbb{P}$ so that $q\leq p$ and if $q\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta$ 2\. $(\sigma^{\prime})^{-1}``G$ is $\bar{\mathbb{P}}$-generic over $\bar{N}$ 3\. ${\rm Hull}^{N}(\delta(\mathbb{P})\cup{\rm ran}(\sigma))={\rm Hull}^{N}(\delta(\mathbb{P})\cup{\rm ran}(\sigma^{\prime}))$ Note that the special case where $\sigma=\sigma^{\prime}$ is properness (for subproperness) and (up to forcing equivalence) $\sigma$-closedness (for subcompleteness). It was pointed out in [10] that the “Hulls” condition 3) in both definitions is somewhat unnatural. Indeed it is never used in applications and appears solely for the purpose of proving the iteration theorem [14, Theorem 3]. In [10] Fuchs and the second author showed that by iterating with Miyamoto’s _nice iterations_ this condition could be avoided. As such it makes sense to define the following. ###### Definition 1.5. Let $\mathbb{P}$ be a forcing notion. 1.
(1) We say that $\mathbb{P}$ is $\infty$-_subcomplete_ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta$, if $\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}$ is generic then there is a $p\in\mathbb{P}$ so that if $p\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta$ 2\. $\sigma^{\prime}``\bar{G}\subseteq G$ 2. (2) We say that $\mathbb{P}$ is $\infty$-_subproper_ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $p\in N\cap\mathbb{P}$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta$, there is a $q\in\mathbb{P}$ so that $q\leq p$ and if $q\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta$ 2\. $(\sigma^{\prime})^{-1}``G$ is $\bar{\mathbb{P}}$-generic over $\bar{N}$ To be clear this is just the same as the definitions of the “non-$\infty$” versions, simply with the additional “Hulls” condition removed. As mentioned these classes come with an iteration theorem. ###### Theorem 1.6 (Theorem 3.19 (for Subcomplete) and Theorem 3.20 (for Subproper) of [10]). Let $\gamma$ be an ordinal and $\langle\mathbb{P}_{\alpha},\dot{\mathbb{Q}}_{\alpha}\;|\;\alpha<\gamma\rangle$ be a nice iteration in the sense of Miyamoto so that for all $\alpha<\gamma$ we have $\Vdash_{\mathbb{P}_{\alpha}}$“$\dot{\mathbb{Q}}_{\alpha}$ is $\infty$-subproper (respectively $\infty$-subcomplete)”. Then $\mathbb{P}_{\gamma}$ is $\infty$-subproper (respectively $\infty$-subcomplete). We note that the above theorem in the case of $\infty$-subproper forcing was first proved independently by Miyamoto in [16]. A consequence of this theorem (initially observed for the non-$\infty$ versions by Jensen) is that, modulo a supercompact cardinal, these classes have a consistent forcing axiom. ###### Definition 1.7. Let $\Gamma$ be a class of forcing notions. The _forcing axiom for_ $\Gamma$, denoted $\mathsf{FA}(\Gamma)$, is the statement that for all $\mathbb{P}$ in $\Gamma$ and any $\omega_{1}$-sequence of dense subsets of $\mathbb{P}$, say $\\{D_{i}\;|\;i<\omega_{1}\\}$, there is a filter $G\subseteq\mathbb{P}$ which intersects every $D_{i}$. If $\Gamma$ is the class of ($\infty$-)subproper forcing notions we denote $\mathsf{FA}(\Gamma)$ by ($\infty$-)$\mathsf{SubPFA}$. Similarly if $\Gamma$ is the class of ($\infty$-)subcomplete forcing notions we denote $\mathsf{FA}(\Gamma)$ by ($\infty$-)$\mathsf{SCFA}$. It is not known whether up to forcing equivalence each class is simply equal to its “$\infty$”-version or if their corresponding forcing axioms are equivalent. However, since the “$\infty$” versions are more general (or appear to be) and avoid the unnecessary technicality of computing hulls, we will work with them in this paper. Nearly everything written here, however, could be formulated for the “non-$\infty$” versions equally well, though we leave the translation to the oddly interested reader.
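For orientation we also recall, in its standard formulation (cf. [5]), the axiom $\mathsf{MA}^{+}(\sigma\mbox{-closed})$ appearing in Figure 1 and Corollary 1.2: for every $\sigma$-closed forcing notion $\mathbb{P}$, every $\omega_{1}$-sequence $\\{D_{i}\;|\;i<\omega_{1}\\}$ of dense subsets of $\mathbb{P}$ and every $\mathbb{P}$-name $\dot{S}$ for a stationary subset of $\omega_{1}$, there is a filter $G\subseteq\mathbb{P}$ which intersects every $D_{i}$ and for which the interpretation $\dot{S}^{G}=\\{\alpha<\omega_{1}\;|\;\exists p\in G\ (p\Vdash\check{\alpha}\in\dot{S})\\}$ is stationary in $\omega_{1}$. This is only a reminder of a standard notion and will not be used in the definitions of this section.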
If $\Gamma\subseteq\Delta$ then $\mathsf{FA}(\Delta)$ implies $\mathsf{FA}(\Gamma)$ so we get the following collection of implications, which are part of Figure 1. ###### Proposition 1.8. $\mathsf{MM}\to\infty\mbox{-}\mathsf{SubPFA}\to\mathsf{PFA}$ and $\mathsf{MM}\to\infty\mbox{-}\mathsf{SubPFA}\to\infty\mbox{-}\mathsf{SCFA}$ Here $\mathsf{MM}$, known as _Martin’s Maximum_ and introduced in [5], is the forcing axiom for forcing notions which preserve stationary subsets of $\omega_{1}$ (all $\infty$-subproper forcing notions have this property) and $\mathsf{PFA}$ is the forcing axiom for proper forcing notions. It is known from the work of Jensen, see also [10] that none of the above implications can be reversed with the exception of whether $\mathsf{SubPFA}$ implies $\mathsf{MM}$. In this paper we will show the consistency of $\mathsf{SubPFA}+\neg\mathsf{MM}$, see Theorem 4.1 below. On that note we move to our last preliminary. Many of the theorems in this paper involve showing that we can preserve some fragment of $\infty$-$\mathsf{SCFA}$ (or $\infty$-$\mathsf{SubPFA}$) via a forcing killing another fragment of it. Towards this end we will need an extremely useful theorem due to Cox. Below recall that a class of forcing notions $\Gamma$ is _closed under restrictions_ (see Definition 39 of [2]) if for all $\mathbb{P}\in\Gamma$ and all $p\in\mathbb{P}$ the lower cone $\mathbb{P}\upharpoonright p:=\\{q\in\mathbb{P}\;|\;q\leq p\\}\in\Gamma$. One can check that both the classes of $\infty$-subcomplete and $\infty$-subproper forcing notions (as well as the restrictions “above $\mu$” defined in Section 2) have this property. ###### Theorem 1.9 (Cox, see Theorem 20 of [2]). Let $\Gamma$ be a class of forcing notions closed under restrictions and assume $\mathsf{FA}(\Gamma)$ holds. Let $\mathbb{P}$ be a forcing notion. Suppose that for every $\mathbb{P}$-name $\dot{\mathbb{Q}}$ for a forcing notion in $\Gamma$ there is a $\mathbb{P}*\dot{\mathbb{Q}}$-name $\dot{\mathbb{R}}$ for a forcing notion so that the following hold: 1. (1) $\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}$ is in $\Gamma$ 2. (2) If $j:V\to N$ is a generic elementary embedding, $\theta\geq|\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}|^{+}$ is regular in $V$ and a) $H_{\theta}^{V}$ is in the wellfounded part of $N$; b) $j``H_{\theta}^{V}\in N$ has size $\omega_{1}$ in $N$; c) ${\rm crit}(j)=\omega_{2}^{V}$ d) There exists a $G*H*K$ in $N$ that is $(H_{\theta}^{V},\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}})$-generic Then $N$ believes that $j``G$ has a lower bound in $j(\mathbb{P})$ Then $\Vdash_{\mathbb{P}}\mathsf{FA}(\Gamma)$ i.e. $\mathbb{P}$ preserves the forcing axiom for $\Gamma$. See [2] for more on strengthenings and generalizations of this wide ranging theorem. ## 2\. $\infty$-Subcompleteness and $\infty$-Subproperness above $\mu$ Most theorems in this paper filter through the notions of $\infty$-_subcompleteness (respectively $\infty$-subproperness) above_ $\mu$ for a cardinal $\mu$. These are technical strengthenings of $\infty$-subcompleteness (respective $\infty$-subproperness). In this section we define these strengthenings as well as make some elementary observations which will be used in rest of the paper. ###### Definition 2.1. Let $\mu$ be a cardinal and $\mathbb{P}$ a forcing notion. 1. 
(1) We say that $\mathbb{P}$ is $\infty$-_subcomplete above_ $\mu$ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=\mathbb{P},s,\theta,\mu$, if $\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}$ is generic then there is a $p\in\mathbb{P}$ so that if $p\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=\mathbb{P},s,\theta,\mu$ 2\. $\sigma^{\prime}``\bar{G}\subseteq G$ 3\. $\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}$ 2. (2) We say that $\mathbb{P}$ is $\infty$-_subproper above_ $\mu$ if for all sufficiently large $\theta$, $\tau>\theta$ so that $H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}$, $s\in N$, $p\in N\cap\mathbb{P}$, $\sigma:\bar{N}\prec N$ countable, transitive and full with $\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=p,\mathbb{P},s,\theta,\mu$, there is a $q\in\mathbb{P}$ so that $q\leq p$ and if $q\in G$ is $\mathbb{P}$-generic over $V$ then in $V[G]$ there is a $\sigma^{\prime}:\bar{N}\prec N$ so that 1\. $\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=p,\mathbb{P},s,\theta,\mu$ 2\. $(\sigma^{\prime})^{-1}``G$ is $\bar{\mathbb{P}}$-generic over $\bar{N}$ 3\. $\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}$ Concretely being $\infty$-subcomplete above $\mu$ simply means that $\mathbb{P}$ is $\infty$-subcomplete and, moreover, for any $\sigma:\bar{N}\prec N$ the corresponding $\sigma^{\prime}$ (in $V[G]$) witnessing the $\infty$-subcompleteness can be arranged to agree with $\sigma$ “up to $\mu$”, i.e. on the ordinals in $\sigma^{-1}``\mu$ (and idem for $\infty$-subproperness). The “non-$\infty$” versions of these classes were first introduced by Jensen in [13] under the names “$\mu$-subcompleteness” and “$\mu$-subproperness”. They were investigated further by Fuchs in [7] who introduced the terminology “above $\mu$” and made several of the elementary observations we repeat below. Following Fuchs (as opposed to Jensen) we have moved the parameter $\mu$ to the end to avoid the awkwardness of “$\mu$-$\infty$-subcomplete/$\mu$-$\infty$-subproper”. The following is immediate from the definitions. ###### Observation 2.2. Let $\mu<\nu$ be cardinals. If $\mathbb{P}$ is $\infty$-subcomplete (respectively $\infty$-subproper) above $\nu$ then it is $\infty$-subcomplete (respectively $\infty$-subproper) above $\mu$ and it is $\infty$-subcomplete (respectively $\infty$-subproper) (without any restriction). It is easy to see that being $\infty$-subcomplete (respectively, $\infty$-subproper) is equivalent to being $\infty$-subcomplete (respectively $\infty$-subproper) above $\omega_{1}$; however, more is true, an observation due independently to the first author and Fuchs (see [7, Observation 4.2]). ###### Proposition 2.3. Let $\mathbb{P}$ be a forcing notion. $\mathbb{P}$ is $\infty$-subcomplete (respectively $\infty$-subproper) if and only if $\mathbb{P}$ is $\infty$-subcomplete above $2^{\aleph_{0}}$ (respectively $\infty$-subproper above $2^{\aleph_{0}}$).
As noted above, this proposition (in the case of subcompleteness) is proved as Observation 4.2 of [7] but we give a detailed proof in order to help the reader get accustomed to $\infty$-subversion forcing as well as to include the mild difference of subproperness. However, let us note that essentially the point is that, using the definable well order in $L_{\tau}[A]$, the reals of $\bar{N}$ code the cardinality of the continuum. ###### Proof. We prove the case of $\infty$-subcomplete and leave the reader to check the case of $\infty$-subproper. Let $\mathbb{P}$ be a forcing notion. It is immediate as noted above that if $\mathbb{P}$ is $\infty$-subcomplete above $2^{\omega}$ then it is $\infty$-subcomplete so we need to check just the reverse direction. Thus assume that $\mathbb{P}$ is $\infty$-subcomplete and let $\tau>\theta$ be cardinals so that $\sigma:\bar{N}\prec N:=L_{\tau}[A]$ with $H_{\theta}\subseteq N$ be as in the definition of $\infty$-subcompleteness. Let $\bar{G}$ be a $\bar{\mathbb{P}}$-generic filter over $\bar{N}$ with $\bar{\mathbb{P}}$ the preimage of $\mathbb{P}$ under $\sigma$. Finally let $p\in\mathbb{P}$ force that there is a $\sigma^{\prime}:\bar{N}\prec N$ so that $\sigma^{\prime}(\bar{\mathbb{P}})=\mathbb{P}$ and $\sigma^{\prime}``\bar{G}\subseteq G$ for any generic $G\ni p$ (the existence of such a condition is the heart of the definition of $\infty$-subcompleteness of course). We need to show that $p$ forces that $\sigma^{\prime}\upharpoonright 2^{\aleph_{0}}=\sigma\upharpoonright 2^{\aleph_{0}}$, where, to be clear, $2^{\aleph_{0}}$ denotes cardinal (as computed in $\bar{N}$) which bijects onto the continuum (as defined in $\bar{N}$). To avoid confusion let us denote the cardinal $2^{\aleph_{0}}=\kappa$ (in $V$ and hence $N$) and the preimage of $\kappa$ in $\bar{N}$ under $\sigma$ as $\bar{\kappa}$. First note that by the absoluteness of $\omega$ we have that for all reals $x\in\bar{N}$ it must be the case that $\sigma(x)=\sigma^{\prime}(x)=x$ (and being a real is absolute between $\bar{N}$ and $V$). Moreover, since $N=L_{\tau}[A]$ there is a definable well order of the universe, and in particular there is a definable bijection of the reals onto $\kappa$, say $f:2^{\omega}\to\kappa$. By elementarity in $\bar{N}$ there is a definable bijection $\bar{f}:2^{\omega}\cap\bar{N}\to\bar{\kappa}$. But since $f$ is definable we have $\sigma(\bar{f})=\sigma^{\prime}(\bar{f})=f$ and hence for all $\alpha\in\bar{\kappa}$ we get $\sigma(\alpha)=\sigma(\bar{f}(\bar{f}^{-1}(\alpha)))=\sigma^{\prime}(\bar{f}(\bar{f}^{-1}(\alpha)))=\sigma^{\prime}(\alpha)$, as needed. ∎ In fact $2^{\aleph_{0}}$ is the best possible in $\mathsf{ZFC}$. Jensen showed that Namba forcing is $\infty$-subcomplete above $\omega_{1}$ assuming $\mathsf{CH}$ while it is not even $\infty$-subproper above $\omega_{2}$ in $\mathsf{ZFC}$, a consequence of the next observation. ###### Lemma 2.4. Let $\mu$ be a cardinal. 1. (1) If $\mathbb{P}$ is $\infty$-subproper above $\mu$ then any new countable set of ordinals less than $\mu$ added by $\mathbb{P}$ is covered by an old countable set of ordinals (less than $\mu$). In particular if $\Vdash_{\mathbb{P}}$“${\rm cf}(\mu)=\omega$” then ${\rm cf}(\mu)=\omega$ (in $V$). 2. (2) If $\mathbb{P}$ is $\infty$-subcomplete above $\mu$ then $\mathbb{P}$ adds no new countable sets of ordinals below $\mu$. ###### Proof. 
The proofs of both are similar to the corresponding proofs that every new countable set of ordinals added by a proper forcing notion is contained in an old countable set of ordinals and that $\sigma$-closed forcing notions do not add new countable sets of ordinals at all, respectively. The point is that to show the corresponding fact “below $\mu$” one only needs $\infty$-subproperness (respectively $\infty$-subcompleteness) above $\mu$. Let us begin with the first item. Assume that $\mu$ is a cardinal, $\mathbb{P}$ is $\infty$-subproper above $\mu$, $p\in\mathbb{P}$ and $\dot{x}$ is a $\mathbb{P}$-name so that $p\Vdash\dot{x}:\omega\to\mu$. We need to find a $q\leq p$ and a countable $X\subseteq\mu$ so that $q\Vdash{\rm im}(\dot{x})\subseteq\check{X}$. To this end, let $\sigma:\bar{N}\prec N$ be as in the definition of $\infty$-subproperness above $\mu$ with $\sigma(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}$. We claim that $X:=\sigma``\bar{\mu}$ is as needed. Indeed, by the definition of $\infty$-subproperness above $\mu$ there is a $q\leq p$ forcing that there is an embedding $\sigma^{\prime}:\bar{N}\prec N$ so that $\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}$ and $\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}$. Fix such a $q$ and let $q\in G$ be $\mathbb{P}$-generic over $V$. Note that by elementarity we have that $\bar{p}\Vdash\dot{\bar{x}}(\check{n})\in\bar{\mu}$ for all $n<\omega$. Also since $\bar{G}:=(\sigma^{\prime})^{-1}``G$ is $\bar{\mathbb{P}}$-generic over $\bar{N}$ we can find in $\bar{N}$ ordinals $\bar{\mu}_{n}<\bar{\mu}$ so that $\bar{N}[\bar{G}]\models\dot{\bar{x}}^{\bar{G}}(n)=\bar{\mu}_{n}$ for each $n<\omega$. Finally we have that $\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}$ and hence, by elementarity again alongside the fact that $\sigma\upharpoonright\bar{\mu}=\sigma^{\prime}\upharpoonright\bar{\mu}$, we get $N[G]\models\dot{x}^{G}(n)=\sigma^{\prime}(\bar{\mu}_{n})=\sigma(\bar{\mu}_{n})\in X$ for all $n<\omega$. This completes the proof of the first item. The second case is nearly verbatim, noting that since for $\infty$-subcomplete forcing we can choose $\bar{G}$ in the ground model, we can actually define $\sigma``\dot{\bar{x}}^{\bar{G}}=(\sigma^{\prime})``\dot{\bar{x}}^{\bar{G}}$ in $V$. ∎ As mentioned before Lemma 2.4, an immediate consequence is the following. ###### Lemma 2.5. Namba forcing is not $\infty$-subproper above $\omega_{2}$. In particular Namba forcing is not $\infty$-subproper if $\mathsf{CH}$ fails. Finally we end this section with some observations about the associated forcing axioms for the classes we have been discussing. ###### Definition 2.6. Let $\mu$ be a cardinal. Denote by $\infty$-$\mathsf{SubPFA}\upharpoonright\mu$ the forcing axiom for forcing notions $\mathbb{P}$ which are $\infty$-subproper above $\mu$ and $\infty$-$\mathsf{SCFA}\upharpoonright\mu$ the same for $\mathbb{P}$ which are $\infty$-subcomplete above $\mu$. The following is immediate by Observation 2.2. ###### Proposition 2.7. Let $\mu<\nu$ be cardinals. We have that $\infty$-$\mathsf{SCFA}$ implies $\infty$-$\mathsf{SCFA}\upharpoonright\mu$, which in turn implies $\infty$-$\mathsf{SCFA}\upharpoonright\nu$. Similarly for the variants of $\infty$-$\mathsf{SubPFA}$. In the next section we will show that (in many cases) the reverse implications do not hold.
## 3\. Separating the $\infty$-$\mathsf{SCFA}\upharpoonright\mu$ Principles In this section we show that under certain cardinal arithmetic assumptions $\infty$-$\mathsf{SCFA}\upharpoonright\nu$ does not imply $\infty$-$\mathsf{SCFA}\upharpoonright\mu$ for $\mu<\nu$. Before proving this general theorem we introduce our technique with the simple example of separating $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{1}$ from $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$. This involves showing that adding a $\square_{\omega_{1}}$-sequence to a model of $\infty$-$\mathsf{SCFA}$ preserves $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$. However, it follows from a result of Jensen (see [14]) that $\mathsf{SCFA}+\mathsf{CH}$ implies the failure of $\square_{\omega_{1}}$. ### 3.1. The Case of $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$: Adding Non-Reflecting Structures of Size $\aleph_{2}$ Recall that for an uncountable cardinal $\lambda$ a $\square_{\lambda}$-sequence is a sequence $\langle C_{\alpha}\;|\;\alpha\in\lambda^{+}\cap{\rm Lim}\rangle$ so that for all $\alpha$ the following hold: 1. (1) $C_{\alpha}$ is club in $\alpha$ 2. (2) ${\rm ot}(C_{\alpha})\leq\lambda$ 3. (3) For each $\beta\in{\rm lim}(C_{\alpha})$ we have that $C_{\alpha}\cap\beta=C_{\beta}$ We recall the poset $\mathbb{P}_{0}$ from [3, Example 6.6] for adding a square sequence. Conditions $p\in\mathbb{P}_{0}$ are functions so that the domain of $p$ is $(\beta+1)\cap{\rm Lim}$ for some $\beta\in\lambda^{+}\cap{\rm Lim}$ and 1. (1) For all $\alpha\in{\rm dom}(p)$ we have that $p(\alpha)$ is club in $\alpha$ with order type $\leq\lambda$; and 2. (2) If $\alpha\in{\rm dom}(p)$ then for each $\beta\in{\rm lim}(p(\alpha))$ we have $p(\alpha)\cap\beta=p(\beta)$. The order is end extension. We remark that a moment’s reflection confirms that this poset is $\sigma$-closed. Moreover it is ${<}\lambda^{+}$-strategically closed (see [3]). In particular it preserves cardinals up to $\lambda^{+}$. ###### Theorem 3.1. Assume $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$ and let $\mathbb{P}_{0}$ be the forcing notion defined above for adding a $\square_{\omega_{1}}$-sequence. Then $\Vdash_{\mathbb{P}_{0}}$ $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$. In particular if the existence of a supercompact cardinal is consistent with $\mathsf{ZFC}$ then $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}+\square_{\omega_{1}}$ is consistent as well. Before proving this theorem we need to define one more poset. Recall that if $G\subseteq\mathbb{P}_{0}$ is generic and $\vec{\mathcal{C}}_{G}=\langle C_{\alpha}\;|\;\alpha\in\lambda^{+}\cap{\rm Lim}\rangle$ is the generic $\square_{\lambda}$-sequence added by $G$ then for any cardinal $\gamma<\lambda$ we can _thread the square sequence_ via the following poset, $\mathbb{T}_{G,\gamma}$. Conditions are closed, bounded subsets $c\subseteq\lambda^{+}$ so that $c$ has order type $<\gamma$, and for all limit points $\beta\in c$ we have that $\beta\cap c=C_{\beta}$. See [4, §6] and [15, p.7] for more on this threading poset. The point is the following. ###### Fact 3.2 (Lemma 6.9 of [4]). Let $\gamma<\lambda$ be cardinals, $\mathbb{P}_{0}$ the forcing notion described above for adding a $\square_{\lambda}$-sequence and $\dot{\mathbb{T}}_{\dot{G},\gamma}$ be the $\mathbb{P}_{0}$-name for the forcing to thread the generic square sequence with conditions of size $<\gamma$. Then $\mathbb{P}_{0}*\dot{\mathbb{T}}_{\dot{G},\gamma}$ has a dense $<\gamma$-closed subset. We can now prove Theorem 3.1. ###### Proof.
We let $\mathbb{P}_{0}$ be the forcing described above for adding a $\square_{\omega_{1}}$-sequence (so $\lambda=\omega_{1}$). Let $\gamma=\aleph_{1}$, so in $V^{\mathbb{P}_{0}}$ the threading poset $\dot{\mathbb{T}}:=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}$ consists of countable closed subsets of $\omega_{2}$. We want to apply Theorem 1.9 to $\mathbb{P}_{0}$. Note that if $\dot{\mathbb{Q}}$ is a $\mathbb{P}_{0}$-name for a forcing notion which is $\infty$-subcomplete above $\omega_{2}$, then $\dot{\mathbb{T}}=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}$ is absolute between $V^{\mathbb{P}_{0}}$ and $V^{\mathbb{P}_{0}*\dot{\mathbb{Q}}}$ by Lemma 2.4 (2). ###### Claim 3.3. It is enough to show that for any $\mathbb{P}_{0}$-name $\dot{\mathbb{Q}}$ for a forcing notion which is $\infty$-subcomplete above $\omega_{2}$, the three-step iteration $\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}$ is $\infty$-subcomplete above $\omega_{2}$. ###### Proof of Claim. This is because $\mathbb{T}$ adds a lower bound to $j``G$ as described in the statement of Theorem 1.9. In more detail, let $\dot{\mathbb{Q}}$ be a $\mathbb{P}_{0}$-name for a forcing notion which is $\infty$-subcomplete above $\omega_{2}$; we want to show that for $\dot{\mathbb{R}}=\dot{\mathbb{T}}$ the hypotheses of Theorem 1.9 are satisfied assuming that $\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}$ is $\infty$-subcomplete above $\omega_{2}$. Since this is exactly the first clause we only need to concern ourselves with the second one. Recall that, relativized to this situation, this says that if $j:V\to N$ is a generic elementary embedding, $\theta\geq|\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}|^{+}$ is regular in $V$ and a) $H_{\theta}^{V}$ is in the wellfounded part of $N$; b) $j``H_{\theta}^{V}\in N$ has size $\omega_{1}$ in $N$; c) ${\rm crit}(j)=\omega_{2}^{V}$; and d) there exists a $G*H*K$ in $N$ that is $(H_{\theta}^{V},\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}})$-generic, then $N$ believes that $j``G$ has a lower bound in $j(\mathbb{P}_{0})$. So fix some $\theta$ and $j:V\to N$ as described in a) to d). Note that $j``G=G$ by c) and the fact that $G$ is coded as a subset of $\omega_{2}^{V}$. Thus it suffices to find a lower bound of $G$ in $j(\mathbb{P}_{0})$. The point now is that since $G*H*K\in N$ we can in particular form $\bigcup K\in N$, which is a club subset of $\omega_{2}^{V}=\sup_{p\in G}{\rm dom}(p)$ and coheres with all of the elements of $G$, and hence $(\bigcup G)\cup\langle\omega_{2}^{V},\bigcup K\rangle$ is as needed. ∎ Let us now show that $\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}$ is $\infty$-subcomplete above $\omega_{2}$. Let $\tau>\theta$ be sufficiently large cardinals and $\sigma:\bar{N}\prec N=L_{\tau}[A]\supseteq H_{\theta}$ be as in the definition of $\infty$-subcompleteness above $\omega_{2}$. Let $\sigma(\bar{\mathbb{P}}_{0},\dot{\bar{\mathbb{Q}}},\dot{\bar{\mathbb{T}}})=\mathbb{P}_{0},\dot{\mathbb{Q}},\dot{\mathbb{T}}$. Let $\bar{G}*\bar{H}*\bar{K}$ be $\bar{\mathbb{P}}_{0}*\dot{\bar{\mathbb{Q}}}*\dot{\bar{\mathbb{T}}}$-generic over $\bar{N}$. There are a few things to note. First let us point out that $\bar{G}$ and $\bar{K}$ are (coded as) subsets of $\bar{\omega}_{2}$, the second uncountable cardinal from the point of view of $\bar{N}$ (so $\sigma(\bar{\omega}_{2})=\omega_{2}$).
Next note that $\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}$ is forcing equivalent to $\mathbb{P}_{0}*\dot{\mathbb{T}}*\dot{{\mathbb{Q}}}$ since both $\dot{\mathbb{Q}}$ and $\dot{\mathbb{T}}$ are in $V^{\mathbb{P}_{0}}$, and the same for the “bar” versions in $\bar{N}$. Now note that since $\mathbb{P}_{0}*\dot{\mathbb{T}}$ has a $\sigma$-closed dense subset, $\sigma``\bar{G}*\bar{K}$ has a lower bound (in $N$), say $(p,t)$ ($t$ is in the ground model and the $\sigma$-closed dense subset is simply the collection of conditions whose second coordinate is a check name decided by $p$). By $\sigma$-closedness, $(p,t)$ forces that there is a unique lift of $\sigma:\bar{N}\prec N$ to some $\sigma_{0}:\bar{N}[\bar{G}]\prec N[G]$ with $\sigma_{0}(\bar{G})=G$ for any $\mathbb{P}_{0}$-generic $G\ni p$ (technically we need to work in the extension by $\mathbb{P}_{0}*\dot{\mathbb{Q}}$, but we only want to specify the embedding of the $\bar{\mathbb{P}}_{0}$ extension). Fix such a $G$ (from which $\sigma_{0}$ is defined) and work in $V[G]$. Note that $\sigma_{0}``\bar{K}=\sigma``\bar{K}$ has $t\in N$ as a lower bound. Since $\mathbb{Q}:=\dot{\mathbb{Q}}^{G}$ is $\infty$-subcomplete above $\omega_{2}$, we can apply the definition of $\infty$-subcompleteness to $\sigma_{0}:\bar{N}[\bar{G}]\prec N[G]$ to obtain a condition $q:=\dot{q}^{G}\in\mathbb{Q}$ so that if $H\ni q$ is $\mathbb{Q}$-generic over $V[G]$ then in $V[G][H]$ there is a $\sigma_{1}:\bar{N}[\bar{G}]\prec N[G]$ so that $\sigma_{1}(\bar{G},\bar{\mathbb{P}}_{0},\dot{\bar{\mathbb{Q}}}^{\bar{G}},\dot{\bar{\mathbb{T}}}^{\bar{G}})=G,\mathbb{P}_{0},\mathbb{Q},\mathbb{T}$ where $\mathbb{T}\in V[G]$ is $\dot{\mathbb{T}}^{G}$, $\sigma_{1}``\bar{H}\subseteq H$ and $\sigma_{1}\upharpoonright\bar{\omega}_{2}=\sigma\upharpoonright\bar{\omega}_{2}$. Note also that by condensation we have that $\bar{N}=L_{\bar{\tau}}[\bar{A}]$ and hence we can ensure that $\sigma_{1}\upharpoonright\bar{N}:\bar{N}\prec N$. Now by the first observation above we know that since $\bar{G}$ and $\bar{K}$ are coded as subsets of $\bar{\omega}_{2}$ it must be the case that in fact $\sigma_{1}\upharpoonright\bar{G}=\sigma\upharpoonright\bar{G}$ and idem for $\bar{K}$. In particular $(p,t)$ is still a lower bound of $\sigma_{1}``\bar{G}*\bar{K}$. Putting all of these observations together now ensures that the triple $(p,\dot{q},t)\in\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}$ forces that $\sigma_{2}:=\sigma_{1}\upharpoonright\bar{N}$ witnesses that the three-step iteration is $\infty$-subcomplete above $\omega_{2}$, as needed. ∎ Note the following corollary of Theorem 3.1. ###### Corollary 3.4. $\infty$-$\mathsf{SCFA}$ does not imply $\mathsf{MA}^{+}(\sigma{\mbox{\rm-closed}})$ assuming the consistency of a supercompact cardinal. In particular $\infty$-$\mathsf{SCFA}$ does not imply $\mathsf{SCFA}^{+}$. ###### Proof. Begin with a model of $\infty$-$\mathsf{SCFA}+2^{\aleph_{0}}=2^{\aleph_{1}}=\aleph_{2}$ (for instance a model of $\mathsf{MM}$). Force with $\mathbb{P}_{0}$ to add a $\square_{\omega_{1}}$-sequence. Then $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$ and $\square_{\omega_{1}}$ hold in the extension by Theorem 3.1. But, since $\mathbb{P}_{0}$ does not collapse cardinals (by $2^{\aleph_{1}}=\aleph_{2}$) or add reals, the continuum is still $\aleph_{2}$, hence $\infty$-$\mathsf{SCFA}$ holds, yet $\mathsf{MA}^{+}(\sigma\mbox{-closed})$ fails since this axiom implies that $\square_{\kappa}$ fails for all $\kappa$; see [5].
∎ The proof of Theorem 3.1 can be generalized in many ways. Observe that very little about $\mathbb{P}_{0}$ is used. For instance, an almost analogous proof gives the following. ###### Theorem 3.5. Assume $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$. The forcing $\mathbb{S}_{\omega_{2}}$ to add an $\omega_{2}$-Souslin tree preserves $\infty$-$\mathsf{SCFA}\upharpoonright\omega_{2}$. ###### Sketch. Let $\mathbb{S}_{\omega_{2}}$ be the standard forcing to add an $\omega_{2}$-Souslin tree: conditions are binary trees $p\subseteq 2^{<\omega_{2}}$ of size $<\aleph_{2}$ ordered by end extension. This adds an $\omega_{2}$-Souslin tree and is $\sigma$-closed. Let $\dot{T}_{\dot{G}}$ be the canonical name for the tree added, i.e. if $G\subseteq\mathbb{S}_{\omega_{2}}$ is generic over $V$ then $(\dot{T}_{\dot{G}})^{G}=\bigcup G$. Let $\dot{\mathbb{Q}}$ be a $\mathbb{S}_{\omega_{2}}$-name for a forcing notion which is $\infty$-subcomplete above $\omega_{2}$. As before it is enough to show that $\mathbb{S}_{\omega_{2}}*\dot{\mathbb{Q}}*\dot{T}_{\dot{G}}$ is $\infty$-subcomplete above $\omega_{2}$ where $\dot{T}_{\dot{G}}$ is the name for the tree as a forcing notion. The proof now proceeds exactly as before, noting that, since $\dot{T}_{\dot{G}}\in V^{\mathbb{S}_{\omega_{2}}}$, we have again that $\mathbb{S}_{\omega_{2}}*\dot{\mathbb{Q}}*\dot{T}_{\dot{G}}$ is forcing equivalent to $\mathbb{S}_{\omega_{2}}*\dot{T}_{\dot{G}}*\dot{\mathbb{Q}}$, and the generic is coded by a subset of $\omega_{2}$, hence not moved by the new embedding added by $\dot{\mathbb{Q}}$. ∎ We have the following corollary, similar to Corollary 3.4 above, by invoking a model of $\infty$-$\mathsf{SCFA}+2^{\aleph_{0}}=\aleph_{2}$. ###### Corollary 3.6. Assuming the consistency of a supercompact cardinal, we have the consistency of $\mathsf{SCFA}+\neg\mathsf{CH}+\neg{\rm TP}(\omega_{2})$. Here ${\rm TP}(\omega_{2})$ is the tree property at $\omega_{2}$, i.e. the statement that there are no $\omega_{2}$-Aronszajn trees. This result contrasts with [18, Corollary 4.1] which shows that under Rado’s Conjecture, another forcing axiom-like statement compatible with $\mathsf{CH}$, ${\rm TP}(\omega_{2})$ is equivalent to $\neg\mathsf{CH}$. ### 3.2. The General Case The proof of Theorem 3.1 can easily be generalized to establish that for any cardinal $\mu$ adding a $\square_{\mu}$-sequence via $\mathbb{P}_{0}$ preserves $\infty$-$\mathsf{SCFA}\upharpoonright\mu^{+}$. ###### Theorem 3.7. Let $\mu$ be an uncountable cardinal and assume $\infty$-$\mathsf{SCFA}\upharpoonright\mu^{+}$ holds. If $\mathbb{P}_{0}$ is the forcing from the previous subsection to add a $\square_{\mu}$-sequence then $\mathbb{P}_{0}$ preserves $\infty$-$\mathsf{SCFA}\upharpoonright\mu^{+}$. ###### Proof. In $V^{\mathbb{P}_{0}}$ let $\dot{\mathbb{T}}:=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}$. We only give the proof of the claim obtained from Claim 3.3 by replacing $\omega_{2}$ with $\mu^{+}$. The rest of the proof is exactly the same as Theorem 3.1 (just replace $\omega_{2}$ with $(\mu^{+})^{V}$). Suppose $j:V\to N$, $\theta$ and $G*H*K$ are as in the proof of Claim 3.3. Let $\beta:=(\mu^{+})^{V}=\sup_{p\in G}{\rm dom}(p)$. Then $\bigcup K\in N$ is a club subset of $\beta$ and coheres with all of the elements of $G$. Note that all initial segments of $\bigcup K$ are countable sets in $V$. So $K^{*}:=j``\bigcup K$ is club in $\beta^{*}:=\sup(j``\beta)$ and coheres with all of the elements of $G^{*}:=j``G$. Hence $(\bigcup G^{*})\cup\langle\beta^{*},K^{*}\rangle$ is a lower bound of $j``G$ in $j(\mathbb{P}_{0})$.
∎

Note that in particular $\mathsf{SCFA}$ does not imply $\square_{\mu}$ for any $\mu<2^{\aleph_{0}}$. On the other hand, as in Figure 1 in the introduction, we have the following theorem, which is essentially known.

###### Theorem 3.8.

Let $2^{\aleph_{0}}\leq\nu\leq\kappa<\mu=\kappa^{+}$ be cardinals with $\nu^{\omega}<\mu$. Modulo the existence of a supercompact cardinal, $\infty\mbox{-}\mathsf{SCFA}\upharpoonright\mu+\neg\infty\mbox{-}\mathsf{SCFA}\upharpoonright\nu$ is consistent.

###### Proof.

By Theorem 3.7 we know that $\infty$-$\mathsf{SCFA}\upharpoonright\mu$ is consistent with $\square_{\kappa}$, hence it suffices to show that $\infty$-$\mathsf{SCFA}\upharpoonright\nu$ implies the failure of $\square_{\kappa}$. This is essentially known, though it needs to be pieced together from a few sources. First, to get the failure of $\square_{\kappa}$, Jensen uses the forcing notion (at $\kappa$) from [14, Lemma 6.3]. Second, [8, Lemma 3.5] implies that this forcing notion is indeed $\infty$-subcomplete above $\nu$ under the cardinal arithmetic assumptions mentioned in the theorem statement. See the proof of [8, Lemma 3.5] and the discussion therein for more details. ∎

## 4. Separating $\mathsf{MM}$ from $\mathsf{SubPFA}$

In this section we prove the following result.

###### Theorem 4.1.

Assume there is a supercompact cardinal. Then there is a forcing extension in which $\infty$-$\mathsf{SubPFA}$ holds but $\mathsf{MM}$ fails. In particular, modulo the large cardinal assumption, $\infty$-$\mathsf{SubPFA}$ does not imply $\mathsf{MM}$.

The idea behind this theorem is a combination of the proof technique from [1, Theorem 2.6] and the proof of Theorem 3.1. Starting from a model of $\mathsf{MM}$, we will force to add a non-reflecting stationary set to $2^{\aleph_{0}}$ ($=\aleph_{2}$ since $\mathsf{MM}$ holds). This kills $\mathsf{MM}$ by the results of [5] but will preserve $\infty$-$\mathsf{SubPFA}$ by an argument similar to that of [1, Theorem 2.6]. The interesting difference is that $\infty$-$\mathsf{SubPFA}$ (in fact $\mathsf{SCFA}$) implies that there are no non-reflecting stationary sets above the continuum (and more), so here $\aleph_{2}$ matters, which is not true for $\mathsf{PFA}$ (though the proof for $\aleph_{2}$ is the same for $\mathsf{PFA}$ and its “subversion”). We begin by recalling the relevant definitions.

###### Definition 4.2.

Let $\kappa$ be a cardinal of uncountable cofinality and $S\subseteq\kappa$. For a limit ordinal $\alpha<\kappa$ of uncountable cofinality, we say that $S$ reflects to $\alpha$ if $S\cap\alpha$ is stationary in $\alpha$. We say that $S$ is non-reflecting if it does not reflect to any $\alpha<\kappa$ of uncountable cofinality.

###### Fact 4.3 (See [5, Theorem 9]).

$\mathsf{MM}$ implies that for every regular $\kappa>\aleph_{1}$ every stationary subset of $\kappa\cap{\rm Cof}(\omega)$ reflects.

Compare this with the following.

###### Fact 4.4 (See [14, Lemma 6]).

$\mathsf{SCFA}$ implies that for every regular $\kappa>2^{\aleph_{0}}$ every stationary subset of $\kappa\cap{\rm Cof}(\omega)$ reflects.

###### Remark 1.

Note that in [14] it is claimed that $\mathsf{SCFA}$ implies that the above holds for all $\kappa>\aleph_{1}$, regardless of the size of the continuum. However, Sean Cox observed (private communication; see also the discussion in [8] preceding Lemma 3.5) that the proof only works for $\kappa>2^{\aleph_{0}}$. In light of Theorem 3.1, this bound is optimal.
There is a natural forcing notion to add a non-reflecting stationary subset $S\subseteq\kappa\cap{\rm Cof}(\omega)$ for a fixed regular cardinal $\kappa$. The definition and basic properties are given in Example 6.5 of [3]. We record the basics here for reference.

###### Definition 4.5.

Fix a regular cardinal $\kappa>\aleph_{1}$. The forcing notion $\mathbb{NR}_{\kappa}$ is defined as follows. Conditions are functions $p$ with domain the set of countably cofinal ordinals below some ordinal $\alpha<\kappa$, mapping into $2$, with the property that if $\beta\leq{\rm sup}({\rm dom}(p))$ has uncountable cofinality, then there is a set $c\subseteq\beta$ club in $\beta$ which is disjoint from $p^{-1}(1)=\\{\alpha\in{\rm dom}(p)\mid p(\alpha)=1\\}$. The extension relation is simply $q\leq_{\mathbb{NR}_{\kappa}}p$ if and only if $q\supseteq p$.

Proofs of the following can be found in [3].

###### Proposition 4.6.

For any regular $\kappa>\aleph_{1}$ the forcing $\mathbb{NR}_{\kappa}$ has the following properties.

1. $\mathbb{NR}_{\kappa}$ is $\sigma$-closed.
2. $\mathbb{NR}_{\kappa}$ is $\kappa$-strategically closed and in particular preserves cardinals.
3. If $G\subseteq\mathbb{NR}_{\kappa}$ is generic then $S_{G}:=\bigcup_{p\in G}p^{-1}(1)$ is a non-reflecting stationary subset of $\kappa$.

We neglect to give the definition of strategic closure since we will not need it beyond the fact stated above; see [4] or [3] for a definition. Let $\kappa$ be as above, $G\subseteq\mathbb{NR}_{\kappa}$ be generic over $V$ and let $S_{G}:=\bigcup_{p\in G}p^{-1}(1)$ be the generic non-reflecting stationary set. We want to define a forcing to kill $S_{G}$ (this will be the “$\dot{\mathbb{R}}$” in our application of Theorem 1.9). Specifically we will define a forcing notion $\mathbb{Q}_{S_{G}}$ so that forcing with $\mathbb{Q}_{S_{G}}$ will add a club to $\kappa\setminus S_{G}$ and hence kill the stationarity of $S_{G}$. Note that since $S_{G}$ is non-reflecting, its complement must also be stationary and indeed has to be fat, i.e., it contains continuous sequences of arbitrary length $\alpha<\kappa$ arbitrarily high up.

###### Definition 4.7.

Borrowing the notation from the previous paragraph, define the forcing notion $\mathbb{Q}_{S_{G}}$ as the set of closed, bounded subsets of $\kappa\setminus S_{G}$ ordered by end extension.

Clearly the above forcing generically adds a club to the complement of $S_{G}$, thus killing its stationarity; see [3, Definition 6.10]. It is also $\omega$-distributive. We are now ready to prove Theorem 4.1.

###### Proof of Theorem 4.1.

Assume $\infty$-$\mathsf{SubPFA}$ holds (the consistency of this is the only application of the supercompact). Note that the continuum is $\aleph_{2}$ and will remain so in any cardinal-preserving forcing extension which adds no reals. Let $\mathbb{P}=\mathbb{NR}_{\aleph_{2}}$, $G\subseteq\mathbb{P}$ be generic over $V$ and work in $V[G]$. Obviously in this model we have “there is a non-reflecting stationary subset of $\aleph_{2}$” and thus $\mathsf{MM}$ fails by Fact 4.3. We need to show that $\infty$-$\mathsf{SubPFA}$ holds. We will apply Theorem 1.9 much as in the proof of Theorem 3.1. Let $\dot{\mathbb{Q}}$ be a $\mathbb{P}$-name for an $\infty$-subproper forcing notion and let $\dot{\mathbb{R}}$ name $\mathbb{Q}_{S_{\dot{G}}}$ in $V^{\mathbb{P}*\dot{\mathbb{Q}}}$ (NOT just in $V^{\mathbb{P}}$; this is different from the proof of Theorem 3.1 and is crucial).
By exactly the same argument as in the proof of Theorem 3.1, it suffices to show that $\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}$ is $\infty$-subproper (in $V$). This is because (2) from Theorem 1.9 follows from the fact that, borrowing the notation from that theorem applied to our situation, $\dot{\mathbb{R}}$ shoots a club through the complement of $S_{G}$; hence $j``S_{G}=S_{G}$ is non-stationary in its supremum and so has a lower bound in $N$. So we show that $\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}$ is $\infty$-subproper. This is very similar to the proof of Theorem 3.1 but enough details are different to warrant repeating everything for completeness.

Let $\tau>\theta$ be sufficiently large cardinals and $\sigma:\bar{N}\prec N=L_{\tau}[A]\supseteq H_{\theta}$ be as in the definition of $\infty$-subproperness. Let $\sigma(\bar{\mathbb{P}},\dot{\bar{\mathbb{Q}}},\dot{\bar{\mathbb{R}}},\bar{\omega_{2}})=\mathbb{P},\dot{\mathbb{Q}},\dot{\mathbb{R}},\omega_{2}$. Let $(p_{0},\dot{q}_{0},\dot{r}_{0})$ be a condition in $\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}$ with $\sigma(\bar{p}_{0},\dot{\bar{q}}_{0},\dot{\bar{r}}_{0})=(p_{0},\dot{q}_{0},\dot{r}_{0})$. Applying the $\sigma$-closure of $\mathbb{P}$, we can find a $\bar{\mathbb{P}}$-generic $\bar{G}$ over $\bar{N}$ and a condition $p\leq p_{0}$ so that $p$ is a lower bound on $\sigma``\bar{G}$ and, letting $\alpha={\rm sup}(\sigma``\bar{\omega}_{2})$, we have $p(\alpha)=0$ (i.e. $p$ forces $\alpha$ to not be in the generic stationary set). Let us assume $p\in G$ and note that this condition forces $\sigma``\bar{G}\subseteq G$ and hence $\sigma$ lifts uniquely to a $\tilde{\sigma}:\bar{N}[\bar{G}]\prec N[G]$ such that $\tilde{\sigma}(\bar{G})=G$ and $\alpha:={\rm sup}(\sigma``\bar{\omega}_{2})\notin S_{G}$. Let $\bar{\mathbb{Q}}=\dot{\bar{\mathbb{Q}}}^{\bar{G}}$ as computed in $\bar{N}[\bar{G}]$ and let $\bar{q}_{0}=\dot{\bar{q}}_{0}^{\bar{G}}\in\bar{N}[\bar{G}]$. Applying the fact that $\dot{\mathbb{Q}}$ is forced to be $\infty$-subproper, let $q\leq q_{0}=\tilde{\sigma}(\bar{q}_{0})$ be a condition forcing that if $H\subseteq\mathbb{Q}$ is $V$-generic with $q\in H$ then there is a $\sigma^{\prime}\in V[G][H]$ so that $\sigma^{\prime}:\bar{N}[\bar{G}]\prec N[G]$ as in the definition of $\infty$-subproperness (with respect to $\tilde{\sigma}$). Note that, as in the proof of Theorem 3.1, $\sigma^{\prime}\upharpoonright\bar{N}:\bar{N}\prec N$ and $\sigma^{\prime}\upharpoonright\bar{\omega}_{2}=\sigma\upharpoonright\bar{\omega}_{2}$. Let $\tilde{\sigma}^{\prime}:\bar{N}[\bar{G}][\bar{H}]\to N[G][H]$ be the lift of $\sigma^{\prime}$, where $\bar{H}=(\sigma^{\prime})^{-1}``H$.

###### Claim 4.8.

In $V[G][H]$ the set $S_{G}$ does not contain a club.

###### Proof of Claim.

Since $\aleph_{2}$ is the continuum in $V[G]$, note that $\omega_{2}^{V[G]}$ remains uncountably cofinal in $V[G][H]$ (though of course it can be collapsed to $\omega_{1}$). Suppose towards a contradiction that $S_{G}$ contains a club and note that since we chose $\theta$, etc., sufficiently large, we have $N[G][H]\models$ “$\exists C$ which is club and $C\subseteq S_{G}$”. By elementarity there is a $\bar{C}\in\bar{N}[\bar{G}][\bar{H}]$ so that $\bar{N}[\bar{G}][\bar{H}]\models\bar{C}\subseteq\bar{S}_{G}\;{\rm is\,club}$ where $\bar{H}:=(\sigma^{\prime})^{-1}``H$ is $\bar{\mathbb{Q}}$-generic over $\bar{N}[\bar{G}]$ by the definition of $\infty$-subcompleteness and the choice of $q$.
But now note that if $C=\tilde{\sigma}^{\prime}(\bar{C})$, then $C\cap\alpha$ is cofinal in $\alpha$ by elementarity, so $\alpha\in C$; but $\alpha\notin S_{G}$, which is a contradiction. ∎

Given the claim, we know that $\omega_{2}^{V}\setminus S_{G}$ is a stationary set in $V[G][H]$ and hence $\mathbb{R}:=\dot{\mathbb{R}}^{G*H}$ is the forcing to shoot a club through a stationary set. Let $\bar{\mathbb{R}}\in\bar{N}[\bar{G}][\bar{H}]$ be $\dot{\bar{\mathbb{R}}}^{\bar{G}*\bar{H}}$. Note that for each $\beta\in\bar{N}\cap\bar{\omega}_{2}$ it is dense (in $\bar{N}[\bar{G}][\bar{H}]$) that there is a condition $\bar{r}\in\bar{\mathbb{R}}$ with $\beta\in{\rm dom}(\bar{r})$. It follows that if $\bar{K}$ is generic for $\bar{\mathbb{R}}$ over $\bar{N}[\bar{G}][\bar{H}]$ with $\bar{K}\ni\bar{r}_{0}:=\dot{\bar{r}}_{0}^{\bar{G}*\bar{H}}$ then $\tilde{\sigma^{\prime}}``\bar{K}$ unions to a club in $\alpha\setminus S_{G}$. Since $\alpha\notin S_{G}$, we have that $r:=\bigcup\tilde{\sigma^{\prime}}``\bar{K}\cup\\{\alpha\\}$ is a condition in $\mathbb{R}$ which is a lower bound on $\tilde{\sigma^{\prime}}``\bar{K}$ and hence $r\leq\dot{r}_{0}^{G*H}$. Finally, let $K\ni r$ be $\mathbb{R}$-generic over $V[G][H]$. It is now easy to check that the condition $(p,\dot{q},\dot{r})$ and $\sigma^{\prime}\upharpoonright\bar{N}$ collectively witness the $\infty$-subproperness of $\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}$ so we are done. ∎

We note that, by the same proof, adding a non-reflecting stationary subset of $\mu\cap{\rm Cof}(\omega)$ for larger cardinals $\mu$ preserves $\infty$-$\mathsf{SubPFA}\upharpoonright\mu$. The following therefore holds.

###### Theorem 4.9.

Let $2^{\aleph_{0}}\leq\mu\leq\lambda<\nu=\lambda^{+}$ be cardinals with $\mu^{\omega}<\nu$. Modulo the existence of a supercompact cardinal, $\infty$-$\mathsf{SubPFA}\upharpoonright\nu+\neg\infty$-$\mathsf{SubPFA}\upharpoonright\mu$ is consistent.

The proof of this theorem completes the proof of all non-implications involved in Main Theorem 2.

## 5. Conclusion and Open Questions

We view this paper, alongside its predecessor [10], as showing, amongst other things, that the continuum forms an interesting dividing line for subversion forcing: below the continuum the “sub” plays no role, as witnessed by the fact that the same non-implications can hold as those that hold for the non-sub versions. Above the continuum, it adds considerable strength to the associated forcing axioms. However, as of now we only know how to produce models of $\mathsf{SCFA}$ in which the continuum is either $\aleph_{1}$ or $\aleph_{2}$. The most pressing question in this area is therefore whether consistently $\mathsf{SCFA}$ can co-exist with a larger continuum.

###### Question 1.

Is $\mathsf{SCFA}$ consistent with the continuum $\aleph_{3}$ or greater?

We note here that the most obvious attempt to address this question, i.e., starting with a model of $\mathsf{SCFA}$ and adding $\aleph_{3}$-many reals with, e.g., ccc forcing, does not work; this observation is due to the first author.

###### Lemma 5.1.

Suppose $\mathbb{P}$ is a proper forcing notion adding a real. Then $\mathsf{SCFA}$ fails in $V^{\mathbb{P}}$.

All that is needed about “properness” here is that being proper implies that stationary subsets of $\kappa\cap{\rm Cof}(\omega)$ are preserved. The proof of this is standard and generalizes the proof of Lemma 2.4 above (swapping subproper for proper and removing the bound by the continuum).

###### Proof.

Assume $\mathbb{P}$ is proper. Let $G$ be a $\mathbb{P}$-generic filter over $V$.
For a contradiction, assume $\mathsf{SCFA}$ holds in $V[G]$. Take a regular cardinal $\nu>2^{\omega}$ in $V[G]$. In $V$, take stationary partitions $\langle A_{k}:k<\omega\rangle$ of $\nu\cap{\rm Cof}(\omega)$ and $\langle D_{i}:i<\omega\rangle$ of $\omega_{1}$. In $V[G]$, take a subset $r$ of $\omega$ which is not in $V$. Let $\\{k(i)\\}_{i<\omega}$ be the increasing enumeration of $r$. By [14, Lemma 7.1], in $V[G]$, there is an increasing continuous function $f:\omega_{1}\to\nu$ such that $f[D_{i}]\subseteq A_{k(i)}$ for all $i<\omega$. Let $\alpha:={\rm sup}({\rm range}(f))$. Then, in $V[G]$, we have that $r=\\{k\in\omega:A_{k}\cap\alpha$ is stationary in $\alpha\\}$. But the set $\\{k\in\omega:A_{k}\cap\alpha$ is stationary in $\alpha\\}$ is absolute between $V$ and $V[G]$ since $\mathbb{P}$ is proper and hence preserves stationary sets of points of countable cofinality. But then $r$ is in $V$, which is a contradiction. ∎

This shows that either $\mathsf{SCFA}$ implies the continuum is at most $\aleph_{2}$ (though given the results of this paper this seems difficult to prove by methods currently available), or else new techniques for obtaining $2^{\aleph_{0}}\geq\aleph_{3}$ are needed, which is well known to be in general an open and difficult area on the frontiers of set theory.

## References

* [1] Robert E. Beaudoin. The proper forcing axiom and stationary set reflection. Pacific J. Math., 149(1):13–24, 1991.
* [2] Sean Cox. Forcing axioms, approachability and stationary set reflection. Journal of Symbolic Logic, 86(2), 2021.
* [3] James Cummings. Iterated forcing and elementary embeddings. In Matthew Foreman and Akihiro Kanamori, editors, Handbook of Set Theory, pages 775–883. Springer, Dordrecht, 2010.
* [4] James Cummings, Matthew Foreman, and Menachem Magidor. Squares, scales and stationary reflection. Journal of Mathematical Logic, 1(01):35–98, 2001.
* [5] Matthew Foreman, Menachem Magidor, and Saharon Shelah. Martin’s maximum, saturated ideals, and nonregular ultrafilters. I. Annals of Mathematics, 127(1):1–47, 1988.
* [6] Gunter Fuchs. Diagonal reflection on squares. Archive for Mathematical Logic, 58(1-2):1–26, 2019.
* [7] Gunter Fuchs. Aronszajn tree preservation and bounded forcing axioms. Journal of Symbolic Logic, 86(1):293–315, 2021.
* [8] Gunter Fuchs. Canonical fragments of strong reflection principles. Journal of Mathematical Logic, 21(3):2150023, 2021.
* [9] Gunter Fuchs and Kaethe Minden. Subcomplete forcing, trees and generic absoluteness. Journal of Symbolic Logic, 83(3):920–938, 2018.
* [10] Gunter Fuchs and Corey Bacal Switzer. Iteration theorems for subversions of forcing classes, 2020.
* [11] Ronald B. Jensen. Forcing axioms compatible with CH. Handwritten Notes available at http://www-irm.mathematik.hu-berlin.de/ raesch/org/jensen.htm.
* [12] Ronald B. Jensen. Iteration theorems for subcomplete and related forcings. Handwritten Notes available at http://www-irm.mathematik.hu-berlin.de/ raesch/org/jensen.html.
* [13] Ronald B. Jensen. Subproper and subcomplete forcing. Handwritten Notes available at http://www-irm.mathematik.hu-berlin.de/ raesch/org/jensen.html.
* [14] Ronald B. Jensen. Subcomplete forcing and L-forcing. In C. Chong, Qi Feng, Ted A. Slaman, and W. Hugh Woodin, editors, E-Recursion, Forcing and $C^{*}$-Algebras, volume 27 of Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore, pages 83–182. World Scientific, Singapore, 2014.
* [15] Chris Lambie-Hanson and Menachem Magidor.
On the strengths and weaknesses of weak squares. In James Cummings and Ernst Schimmerling, editors, Appalachian Set Theory: 2006–2012, pages 301–330. Cambridge University Press, London, 2012.
* [16] Tadatoshi Miyamoto. A class of preorders iterated under a type of RCS. RIMS Kokyuroku, (1754):81–90, 2011.
* [17] Corey Bacal Switzer. Alternative Cichoń Diagrams and Forcing Axioms Compatible with CH. PhD thesis, The Graduate Center, The City University of New York, 2020.
* [18] Víctor Torres-Pérez and Liuzhen Wu. Strong Chang’s conjecture and the tree property at $\omega_{2}$. Topology Appl., 196(part B):999–1004, 2015.
# ToolTango: Common sense Generalization in Predicting Sequential Tool Interactions for Robot Plan Synthesis

Shreshth Tuli <EMAIL_ADDRESS> Department of Computing, Imperial College London, UK

Rajas Bansal∗ <EMAIL_ADDRESS> Department of Computer Science, Stanford University, USA

Rohan Paul <EMAIL_ADDRESS> Department of Computer Science and Engineering / Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, India

Mausam <EMAIL_ADDRESS> Department of Computer Science and Engineering / Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, India

∗ Most work done when the authors were undergraduate students at Indian Institute of Technology Delhi, India.

###### Abstract

Robots assisting us in environments such as factories or homes must learn to make use of objects as tools to perform tasks, for instance using a tray to carry objects. We consider the problem of learning commonsense knowledge of when a tool may be useful and how its use may be composed with other tools to accomplish a high-level task instructed by a human. Specifically, we introduce a novel neural model, termed ToolTango, that first predicts the next tool to be used, and then uses this information to predict the next action. We show that this joint model can inform the learning of a fine-grained policy, enabling the robot to use a particular tool in sequence, and adds significant value in making the model more accurate. ToolTango encodes the world state, comprising objects and symbolic relationships between them, using a graph neural network and is trained using demonstrations from human teachers instructing a virtual robot in a physics simulator. The model learns to attend over the scene using knowledge of the goal and the action history, finally decoding the symbolic action to execute. Crucially, we address generalization to unseen environments where some known tools are missing, but alternative unseen tools are present. We show that by augmenting the representation of the environment with pre-trained embeddings derived from a knowledge base, the model can generalize effectively to novel environments. Experimental results show at least 48.8-58.1% absolute improvement over the baselines in predicting successful symbolic plans for a simulated mobile manipulator in novel environments with unseen objects. This work takes a step in the direction of enabling robots to rapidly synthesize robust plans for complex tasks, particularly in novel settings.

## 1 Introduction

Advances in autonomy have enabled robots to enter human-centric domains such as homes and factories, where we envision them performing general-purpose tasks such as transport, assembly, and clearing. Such tasks require a robot to interact with objects, often using them as _tools_. For example, a robot asked to “take fruits to the kitchen” can use a _tray_ for carrying items, a _stick_ to fetch objects beyond physical reach and may use a _ramp_ to reach elevated platforms. Previous work has shown that the ability to predict the possible use of tools for a given task is often useful in guiding a robot’s task planner towards plans likely to be feasible (?, ?, ?, ?, ?). In this work, we consider the problem of predicting _which_ objects could be used as tools and _how_ their use can be composed for a task. In essence, we focus on the ability to predict appropriate tools that can guide the robot towards feasible and efficient plans and delegate the issue of dexterous tool manipulation to prior work (?, ?).
Learning to predict task-directed tool interactions poses several challenges. First, real environments (a household or factory-like domain) are typically large, where an expansive number of tool interactions may be possible (e.g., objects supporting others while transporting). Acquiring data for all feasible tool objects or exploring the space of tool interactions is challenging for any learner. Second, the usefulness of a tool varies with context. For example, placing milk in the cupboard may require the robot to elevate itself vertically using a ramp if the milk is placed at a height unreachable by the robot, but if the milk is kept on a table, a simple tray might suffice. Third, the robot may encounter new environments populated with novel objects not encountered during training. Hence, the agent’s model must be able to _generalize_ by reasoning about interactions with novel objects unseen during training.

Humans possess innate commonsense knowledge about contextual use of tools for an intended goal (?). For example, a human actor when asked to move objects is likely to use trays, boxes, or even improvise with a new object with a flat surface. We aim at providing this commonsense to a robotic agent, so that it can generalize its knowledge to unseen tools, based on shared context and attributes of seen tools (see Figure 1).

Figure 1: ToolTango acquires commonsense knowledge from human demonstrations leveraging graph-structured world representation, knowledge-based corpora and goal-conditioned attention to perform semantic tasks. Our aim is to acquire commonsense knowledge to develop a generalized goal-conditioned policy for a robot.

This paper takes a step in the direction of enabling robots to learn how to perform high-level tasks such as compositional tool use in semantic tasks, particularly in novel environments. It makes four main contributions.

As the first contribution, we present a crowd-sourced dataset of human-instructed plans, where a human teacher guides a simulated mobile manipulator to leverage objects as tools to perform multi-step actions such as assembly, transport and fetch tasks. The process results in a corpus of $\sim\\!\\!1,500$ human-demonstrated robot plans involving diverse goal settings and environment scenes.

Second, the dataset mentioned above is first used to supervise a (1-step) neural imitation learner that predicts tool applicability given the knowledge of the world state and the intended goal. We show how learning a dense embedding for the environment and that of a background knowledge base can enable the model to generalize to novel scenes with new object instances that may share semantic attributes with objects seen during training, a common problem in state-of-the-art task and policy learning approaches. We introduce a graph neural architecture, ToolNet, that encodes both the metric and relational attributes of the world state as well as available taxonomic resources such as $\mathrm{ConceptNet}$ (?). The ToolNet model predicts tool use by learning an attention over entities that can potentially serve as tools. Implicitly, the model acquires knowledge about primitive spatial characteristics (typically an output of a mapping system) and semantic attributes (typically contained in taxonomic resources), enabling generalization to novel contexts with previously unseen objects.

Third, we present an imitation learner that uses the same dataset to make action predictions towards the intended goal.
We term this model the Tool Interaction Prediction Network for Generalized Object environments (Tango). Similar to ToolNet, Tango also encodes the world state using a graph neural network and learns to attend over the scene using knowledge of the goal and the action history, finally decoding the symbolic action to execute. The action predictions are interleaved with physics simulation (or execution) steps, which obviates the need for modeling the complex effects of actions inherent in tool interactions.

As a final contribution, we combine the two models ToolNet and Tango into a single architecture. Specifically, the joint model first makes the predictions for the next tool, and then uses this information to make a better action prediction. We show that this can inform the learning of a fine-grained policy, enabling the robot to use a particular tool in sequence, and adds significant value in making the model more accurate. We term this joint model ToolTango.

Experimental evaluation with a simulated mobile manipulator demonstrates (a) accurate prediction of tool interaction sequences with high executability/goal-attainment likelihood, (b) common sense generalization to novel scenes with unseen object instances, and (c) robustness to unexpected errors during execution. Additionally, compared to Tango, we demonstrate the benefit of using ToolNet predictions for specific complex settings requiring multiple tools to reach the goal state. Experiments show that in previously seen settings, ToolTango gives an absolute improvement of 3.38-5.59% in reaching a goal state for a simulated mobile manipulator compared to the state-of-the-art Tango model. In unseen settings, the unified model gives 2.48-3.58% improvement compared to Tango. In comparison to a simple affordance prediction approach, the proposed model performs better in learning the preconditions for tool use. Further, the use of a neural model enables higher goal-reaching performance and faster training compared to a vanilla reinforcement learning approach.

A preliminary version of ToolNet that performs single tool prediction using only the initial environment state was presented as part of the Workshop on Advances & Challenges in Imitation Learning in Robotics at the Robotics: Science and Systems (RSS) 2020 conference (?). Our Tango model was presented as part of the International Joint Conference on Artificial Intelligence (IJCAI) 2021 (?). This journal submission presents a substantially detailed exposition of the ToolNet and Tango models and includes additional supporting background material. Moreover, it also extends ToolNet to predict the _next_ tool at each step of a multi-step action execution, instead of a single tool for the whole sequence, as in the original paper. This paper also combines the two models into ToolTango, and establishes its superior performance.

The remainder of the paper is organized as follows. Section 2 overviews related work. Section 3 formulates the problem of predicting tool interaction sequences. Section 4 details the technical approach and presents the ToolTango model. The data collection platform and the dataset are detailed in Section 5. Section 6 provides the experimental details and results. Finally, Sections 7 and 8 summarize the work and lay out avenues for future work. The associated code, data set and videos are available at https://github.com/reail-iitd/tango. Links to all supplementary material are given in Appendix A.
## 2 Related Work

### 2.1 Classical Planning

Reaching goal states from a given world state through symbolic actions is closely tied to the domain of task and motion planning (TAMP) (?). The literature presents a large volume of work in this domain, ranging from constraint satisfaction to search-based methods. For instance, ? (?) present a planner to compose physics-based tool interactions using a logic-based symbolic planner. Similarly, ? (?) provide an implementation-agnostic interface between a task and a motion planner to combine both for goal-directed planning. ? (?) extend PDDL descriptions to include generic and declarative specifications enabling the planning to be domain-independent. These methods only utilize the task properties through action constraints. For instance, the knowledge that a tray can carry multiple objects is only encoded through the constraint that an object can be placed on it. Such methods can thus produce any feasible plan, but they do not leverage commonsense knowledge to ensure that a goal state is reached in a few steps. Some recent works, such as that of ? (?), aim to learn object importance through human demonstrations. However, such methods can only work on objects seen previously in training and cannot generalize to unseen objects.

We consider a part of the broad TAMP framework that focuses on determining the set of actions that are likely to take an agent to the goal state while treating motion planning as a behavioural routine. We build our work on the observation that in domains with a large number of states and possible interactions, task planning itself becomes challenging. We consider home and factory-like domains inspired by VirtualHome (?) and similar related works. We ensure that in our domains the number of objects is large and they can be contained within/supported or transported by other objects as tools (details in Section 5). Similar to the work by ? (?), our model learns to prune away irrelevant objects, additionally considering a domain with richer inter-object interactions. Consequently, our learner makes additional use of semantic properties and exploits correlations between actions, and outputs interactions that are likely to lead to successful plans.

### 2.2 Learning tool manipulation skills

Learning control policies for manipulating tools has received recent attention in robotics. ? (?) and ? (?) learn tool manipulation policies from human demonstrations. ? (?) and ? (?) learn physics models and _effects_, enabling goal-directed compositional use. ? (?) address the problem of learning primitive physical decompositions of tool-like objects through their physical and geometric attributes, enabling their human-like use. ? (?) learn physical properties of objects from unlabeled videos. ? (?) and ? (?) learn to interact with objects in a self-supervised setup. Efforts such as ? (?) and ? (?) plan tool interactions by modeling contact and force interactions. Another set of efforts focuses on incorporating the ability to discover the use of objects as tools and using them for plan completion. Rich symbolic architectures such as ICARUS (?), KDAP (?) and ROAR (?, ?) attempt to model tool use via PDDL-like descriptions expressing the applicability and post-effects of tool use. In particular, ICARUS (?) models bridge or staircase construction via lifted symbolic concepts which can be grounded to real or imagined objects in the environment.
The framework presented in this paper takes inspiration from such classic works and builds on the use of learned representations for generalization to new scenes and objects. Further, instead of encoding such knowledge via a symbolic representation for each tool, the framework acquires such knowledge in a data-driven way from human demonstrations. Thus, the aforementioned works focus on learning _how_ to manipulate a tool. Our paper considers the complementary problem of predicting _which_ objects may serve as tools for a given task while delegating the issue of tool manipulation to the works mentioned earlier.

The works of ? (?) and ? (?) consider the specific problem of learning the physical motion of tools (such as a spatula, hammer or mug) from kinesthetic demonstration trajectories. Further, the authors present a framework to learn corrections or replacements, enabling improvisation when the actual task execution scenario differs from the taught demonstration. The focus of that work is on learning the physical motion of tools for short-range tasks. This work, in contrast, sets aside the problem of dexterous tool manipulation and focuses on utilizing tools to reach goal states (e.g., using a box to transfer multiple objects). This work also extends planning to leverage multiple tools to attain goals (using boxes and later using ramps). Rather than use fine-grained kinesthetic teaching, we instead use demonstrations of long-range plan executions. The two efforts are highly complementary. The tool-use trajectories learned by prior work could potentially be used in our framework to accomplish long-range tasks. Similarly, such works can use the framework presented here to compose skills and generalize to unseen tools.

### 2.3 Learning symbolic action sequences

Others address the problem of acquiring knowledge for completing high-level task specifications. ? (?, ?) create a knowledge base of task decompositions as _action sketches_ and learn to translate sketches to executable plans. These efforts rely on causal knowledge of the sequences of sub-steps required to achieve an activity, which are then contextually grounded. Instead, this work learns the compositional tool use required to achieve the task without any causal sequence as input. ? (?) learn task decompositions from human demonstration videos. However, their work does not explicitly model the physical constraints of the robot and does not generalize to new environments. ? (?) and ? (?) learn trajectory-aware manipulation of deformable objects using a non-rigid registration method and human demonstrations. These methods can effectively handle visual variation in manipulating objects; however, they cannot be extended to generate action sequences to reach goal constraints given an unseen environment. ? (?) take a similar approach by collecting natural language corpora describing high-level tasks and learn to associate instructions with spatial attention over the scene. Other works study _relational_ planning domains, where a model is trained using RL on small problems, but tested zero-shot on large problems (?, ?). ? (?) extend this to an imitation learning framework, but these works cannot generalize over tools. ? (?) present a symbolic system where a robot imitates demonstrations from a single teacher. In new environments, it adapts the plan by performing object replacements using ConceptNet relation edges. A rich set of approaches learn affordances by mapping objects to preferred locations or learning the co-use of objects to accomplish a task.
For instance, ? (?) provide a method of acquiring and transferring such knowledge to new tasks by learning to map between objects in the old and new environments in a task-dependent manner. This work does not explicitly attempt to learn such associations and presents a restricted generalization capacity. Instead, such learning is implicit in the learned policy that predicts a sequence of robot actions to achieve the goal while using tools (e.g., fetching the tool, using it and attaining its post-effects). Our approach draws inspiration from the above-mentioned works in that we learn to predict tools that can be considered as sub-goals to guide planning for a high-level task. In comparison to these approaches, our method provides two key contributions. First, we explicitly model the physical constraints arising from a mobile manipulator interacting in the workspace. Second, instead of learning actions predicated on specific object instances, we address generalization to new object instances using primitive spatial and semantic characteristics. We propose a neural model trained using a corpus of multiple and varied demonstrations provided by several teachers. Our model uses a dense embedding of semantic concepts, enabling generalization beyond relationships explicitly stored in ConceptNet.

### 2.4 Commonsense knowledge in instruction following

Acquisition of common sense knowledge has been previously explored for the task of robot instruction following (?). ? (?) present a symbolic knowledge base for procedural knowledge of tasks that is utilized for interpreting underspecified task instructions. Efforts such as ? (?) propose a similar database encoding common sense knowledge about object affordances (objects and their common locations). Others such as ? (?) learn motion preferences implicit in commands. ? (?) ground instructions for recipe preparation tasks. Their model can generalize to new recipes, but only in environments with previously _seen_ objects. In contrast, our model generalizes to worlds with previously _unseen_ tools. Others, such as ? (?) and (?), explore the problem of robot tool construction, i.e., creating tools from parts available in the environment. However, the limited set of commonsense concepts, such as attachment in (?), restricts the space for generalization in such works. A rule-based approach typically does not scale well to develop a robust generalization model. We thus adopt a commonsense-embedding-based approach that utilizes ConceptNet vectors to encapsulate the concepts and generalize to unseen tools and object settings. ? (?) present an instruction grounding model that leverages common sense taxonomic and affordance knowledge learned from linguistic co-associations. ? (?) consider the problem of learning physical common sense associated with objects and interactions required to achieve tasks from language-only data sets. They study this problem in the context of question-answering to enable synthesis of textual responses that capture such physical knowledge. The aforementioned approaches predict latent constraints or affordances for a specified task. This work additionally predicts the _sequence_ of tool interactions, implicitly learning the causal relationships between tool use and effects. Specifically, this paper focuses on learning common sense tool use in the context of following instructions that require multiple object interactions to attain the intended goal.
### 2.5 Synthetic Interaction Datasets

Virtual environments have been used to collect human demonstrations for high-level tasks. ? (?) introduce a knowledge base of actions required to perform activities in a virtual home environment. ? (?) provide a vision-language dataset translating symbolic actions for a high-level activity to attention masks in ego-centric images. ? (?) curate data sets that provide a sequence of _How-To_ instructions for tasks such as preparing recipes. ? (?) present an affordance detection dataset for tool parts with geometric features. Others such as ? (?), ? (?) and ? (?) present simulation environments and data sets for tasks such as learning spatial affordances, situated interaction or learning low-level motor skills. The existing data sets possess two limitations that make them less usable for the learning task addressed in this work. First, the data sets are collected using human actors or avatars but do not explicitly model a robot in their environment. Though virtual agents serve as a proxy for the robot, they preclude modeling of the physical constraints and the range of tasks a robot can perform. Second, a majority of the data sets aim at visual navigation and limited physical interaction with objects. They are less amenable to interactions (e.g., containment, pushing and attachment) inherent in tool use. Data sets utilized in the robotic tool use literature, including the UMD Part Affordance Dataset (?), are confined mainly to local use, such as finding the appropriate tool part for tool use and manipulation. Such data sets are less amenable to multi-stage plans in large workspaces.

## 3 Problem Formulation

### 3.1 Robot and Environment Model

We consider a mobile manipulator operating in a known environment populated with objects. An object is associated with a pose, a geometric model and symbolic states such as $\mathrm{Open/Closed}$, $\mathrm{On/Off}$, etc. We consider object relations such as (i) _support_: e.g., a block supported on a tray, (ii) _containment_: items placed inside a box/carton, (iii) _attachment_: a nail attached to a wall, and (iv) _contact_: a robot grasping an object. Let $s$ denote the world state that maintains (i) metric information: object poses, and (ii) symbolic information: object states, class type and object relations such as $\mathrm{OnTop}$, $\mathrm{Near}$, $\mathrm{Inside}$ and $\mathrm{ConnectedTo}$. Let $s_{0}$ denote the initial world state and $\mathcal{O}(\cdot)$ denote a map from world state $s$ to the set of object instances $O=\mathcal{O}(s)$ populating state $s$. Let $\tau$ denote the set of tool objects that the robot can use in its plan. Note that only movable objects in the scene are considered as potential tools. Hence, $\tau\subseteq\mathcal{O}(s)$. Online, the robot may encounter _unseen_ objects in its environment. Let $A$ denote the robot’s symbolic action space. An action $a\in A$ is abstracted as $I(o^{1},o^{2})$, with an action type predicate $I\in\mathcal{I}$ that affects the states of objects $o^{1}\in O$ and $o^{2}\in O$, for instance, $\mathrm{Move(fruit_{0},tray_{0})}$. Here, the arity of the interaction can be 2 or 1 depending on the interaction type. In the case of arity 1, we drop the second object. Each action in our formulation is realized as a set of object relations in the environment that belong to $\\{\mathrm{OnTop},\mathrm{Near},\mathrm{Inside},\mathrm{ConnectedTo}\\}$. The preconditions and postconditions of actions are taken directly from prior work (?).
Realization of these conditions using geometric properties is described in Appendix D. We shall also use the notion of a timestamp as a subscript to indicate the prediction for each state in the execution sequence. The space of robot interactions includes grasping, releasing, pushing, moving an object to another location or inducing discrete state changes (e.g., opening/closing an object, operating a switch or using a mop). We assume the presence of an underlying low-level metric planner, encapsulated as a robot _skill_, which realizes each symbolic action or returns failure if the action is infeasible. Robot actions are stochastic, modeling execution errors (unexpected collisions) and unanticipated outcomes (objects falling, changing the symbolic state). Let $\mathcal{T}(\cdot)$ denote the transition function. The successor state $s_{t+1}=\mathcal{T}(s_{t},a_{t})$ upon taking the action $a_{t}$ in state $s_{t}$ is generated from a physics simulator. Let $\eta_{t}=\\{a_{0},a_{1},\dots,a_{t-1}\\}$ denote the _action history_ till time $t$.

### 3.2 Semantic Goals and Interactions

The robot’s goal is to perform tasks such as transporting or delivering objects to appropriate destinations, making an assembly, clearing or packing items, or performing abstract tasks such as illuminating or cleaning the room. The robot is instructed by providing a _declarative_ goal $g$ expressing the symbolic constraint between world objects (?). For example, the _declarative_ goal “place milk in fridge” can be expressed as a constraint $\mathrm{Inside(milk_{0},fridge_{0})}$ between specific object instances. There may be multiple instances of the same object, e.g., a milk carton, in the environment. The sub-index specifies which instance is being referred to. Another example is the task of moving all fruits onto the kitchen table, which can be expressed as a list of constraints $\mathrm{OnTop(apple_{0},table_{0})}$, $\mathrm{OnTop(orange_{0},table_{0})}$ and $\mathrm{OnTop(banana_{0},table_{0})}$. Finally, the robot must synthesize a plan to satisfy the goal constraints. Goal-reaching plans may require using some objects as tools, for instance, using a container for moving items, or a ramp to negotiate an elevation.

### 3.3 Predicting Tool Interactions

We assume that the robot is primed with a set of primitive symbolic actions but lacks knowledge about how object characteristics can facilitate their use in attaining high-level goals. Hence, the robot cannot predict the use of tray-like objects in transportation tasks, or the use of a stick to fetch an object at a distance. An exception is discovering such characteristics via explicit simulation, which may be infeasible or intractable in large planning domains. Our goal is to learn common sense knowledge about _when_ an object can be used as a tool and _how_ its use can be sequenced for goal-reaching plans. We aim at learning a policy $\pi$ that estimates the next action $a_{t}$ conditioned on the goal $g$, the current world state and the action history $\eta_{t}$, such that the robot’s _goal-reaching_ likelihood is maximized. We adopt the MAXPROB-MDP (?) formalism and estimate a policy that maximizes the goal-reaching likelihood from the given state. MAXPROB-MDP can be equivalently viewed as an infinite-horizon, undiscounted MDP with a zero reward for non-goal states and a positive reward for goal states (?).
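To make the preceding formulation concrete, the following is a minimal Python sketch of how the symbolic part of a world state $s$, a declarative goal $g$ and the Boolean goal check could be represented. The class and field names are illustrative assumptions for exposition and are not the data structures of the released code, which additionally tracks metric poses and object extents.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

Relation = Tuple[str, str, str]   # e.g. ("Inside", "milk_0", "fridge_0")

@dataclass
class WorldState:
    """Symbolic portion of a world state s (poses and sizes omitted for brevity)."""
    objects: Set[str] = field(default_factory=set)                    # object instances O(s)
    states: Dict[str, Dict[str, str]] = field(default_factory=dict)   # discrete states, e.g. {"fridge_0": {"door": "Closed"}}
    relations: Set[Relation] = field(default_factory=set)             # OnTop / Near / Inside / ConnectedTo facts

def goal_check(s: WorldState, g: List[Relation]) -> bool:
    """Checks whether every declarative goal constraint is entailed by the state."""
    return all(constraint in s.relations for constraint in g)

# Example: the goal "place milk in fridge" over specific object instances.
s = WorldState(
    objects={"milk_0", "fridge_0", "table_0"},
    states={"fridge_0": {"door": "Closed"}},
    relations={("OnTop", "milk_0", "table_0"), ("Near", "table_0", "fridge_0")},
)
g = [("Inside", "milk_0", "fridge_0")]
print(goal_check(s, g))   # False: the goal constraint is not yet satisfied
```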
Formally, let $P^{\pi}\left(s,g\right)$ denote the _goal-probability_ function that represents the likelihood of reaching the goal $g$ from a state $s$ on following $\pi$. Let $S^{\pi_{s}}_{t}$ be a random variable denoting the state resulting from executing the policy $\pi$ from state $s$ for $t$ time steps. Let $\mathcal{G}(s,g)$ denote the Boolean _goal check_ function that determines if the intended goal $g$ is entailed by a world state $s$ as $\mathcal{G}(s,g)\in\\{\mathrm{True(T)},\mathrm{False(F)}\\}$. The policy learning objective is formulated as maximizing the likelihood of reaching a state satisfying the goal $g$ from an initial state $s_{0}$, denoted as

$\displaystyle\max_{\pi}P^{\pi}(s_{0},g)=\max_{\pi}\sum_{t=1}^{\infty}P\bigg{(}\mathcal{G}(S^{\pi_{s_{0}}}_{t},g)=\mathrm{T}:\mathcal{G}(S^{\pi_{s_{0}}}_{t^{\prime}},g)=\mathrm{F},~{}\forall t^{\prime}\in[1,t)\bigg{)}.$ (1)

The policy is modeled as a function $f_{\theta}(.)$ parameterized by $\theta$ that determines the next action for a given world state, the robot’s action history and the goal as $a_{t}=f_{\theta}\left(s_{t},g,\eta_{t}\right)$. We adopt an imitation learning approach and learn the function $f_{\theta}(.)$ from demonstrations by human teachers. Let $\mathcal{D}_{\mathrm{Train}}$ denote the corpus of $N$ goal-reaching plans,

$\mathcal{D_{\mathrm{Train}}}=\\{(s_{0}^{i},g^{i},\\{s_{j}^{i},a_{j}^{i}\\})\mid i\in\\{1,\ldots,N\\},j\in\\{0,t_{i}-1\\}\\},$ (2)

where the $i^{th}$ datum consists of the initial state $s^{i}_{0}$, the goal $g^{i}$ and a state-action sequence $\\{(s^{i}_{0},a^{i}_{0}),\dots,(s^{i}_{t_{i}-1},a^{i}_{t_{i}-1})\\}$ of length $t_{i}$. The set of human demonstrations elucidates common sense knowledge about _when_ and _how_ tools can be used for attaining provided goals. The data set $\mathcal{D}_{\mathrm{Train}}$ supervises an imitation loss between the human demonstrations and the model predictions, resulting in learned parameters $\theta^{*}$. Online, the robot uses the learned model to sequentially predict actions and execute them in the simulation environment until the goal state is attained. We also consider the _open-world_ case where the robot may encounter instances of _novel_ object categories _unseen_ in training, necessitating _zero-shot_ generalization.

## 4 Technical Approach

We aim at predicting the next robot action $a_{t}$, given the world state $s_{t}$, the intended goal $g$ and the history $\eta_{t}$ of past actions taken by the robot. We consider learning to predict multi-step plans requiring complex interaction of objects as tools in possibly novel environments unseen during training. Our technical approach is built on three insights. First, we factor the overall learning task of predicting the action for each environment state by first predicting the tool required to perform the task and then predicting the required interaction. This tool-conditioned action prediction enables us to decouple the learning problem and make the model training tractable. Second, we incorporate learning dense embeddings of the robot’s environment as well as semantic representations for words trained on existing symbolic knowledge corpora. The learned representation allows the robot to generalize its predictions to novel environments populated with novel objects that may share semantic attributes with those encountered during training. Finally, we turn to a corpus of human-demonstrated plans to train our learner. The corpus elucidates the common sense knowledge of sequencing tool interactions for a task.
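Before detailing the constituent encoders, the sketch below summarizes the closed-loop online use of the learned policy described above: the model proposes a symbolic action, the simulator (or robot skill layer) executes it, and the loop repeats until the goal check succeeds. The `simulator.step` interface, the step cap, and the function names are assumptions for illustration rather than the interface of the released code; `goal_check` refers to the earlier sketch.

```python
def execute_policy(f_theta, simulator, s0, g, max_steps=50):
    """Closed-loop rollout of a learned policy f_theta(s, g, history)."""
    s, history = s0, []
    for _ in range(max_steps):
        if goal_check(s, g):             # G(s, g): stop once every goal constraint holds
            return history, True
        a = f_theta(s, g, history)       # next symbolic action, e.g. ("Move", "fruit_0", "tray_0")
        s = simulator.step(s, a)         # stochastic transition T(.) realised by the physics engine
        history.append(a)                # action history eta_t grows with each executed action
    return history, goal_check(s, g)
```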
Formally, we introduce an imitation learner, realized as a hyper-parametric function (a neural network model) denoted as $f_{\theta}$ as follows:

$a_{t}=f_{\theta}\left(s_{t},g,\eta_{t}\right)=f^{act}_{\theta}\left(f^{goal}_{\theta}\left(f^{state}_{\theta}\left(s_{t}\right),g,f^{hist}_{\theta}\left(\eta_{t}\right)\right),f^{tool}_{\theta}(s_{t},g,\eta_{t})\right).$ (3)

It adopts an object-centric graph representation, learning a state encoding that fuses metric-semantic information about objects in the environment via the function $f^{state}_{\theta}\left(\cdot\right)$. The function $f^{hist}_{\theta}\left(\cdot\right)$ encodes the action history. The model learns to attend over the world state conditioned on the declarative goal and the history of past actions through $f^{goal}_{\theta}\left(\cdot\right)$. We leverage an adapted version of our tool likelihood prediction model ToolNet, denoted as $f^{tool}_{\theta}\left(\cdot\right)$. Finally, the learned encodings are decoded as the next action for the robot to execute via $f^{act}_{\theta}\left(\cdot\right)$. Crucially, the predicted action is grounded over an _a-priori_ unknown state and type of objects in the environment. The predicted action is executed in the environment and the updated state and action history are used for estimation at the next time step. Using a neural network as a hyper-parametric function for action prediction enables us to leverage ConceptNet embeddings to generalize and scale with the size of object sets, goal and interaction types. We utilize an imitation learning model with the dataset described in equation (2) to learn the $\theta$ parameters of the neural network. The constituent model components are detailed next.

### 4.1 Graph Structured World Representation

Figure 2: The updated ToolNet model encodes the metric-semantic world state using graph convolution (GGCN) and fully connected (FCN) layers. The model uses goal information and the robot’s action history to attend over a task-specific context, finally predicting the likelihood scores for each tool in the environment. A graph-structured representation and inclusion of pre-trained word embeddings (from a knowledge base) facilitate generalization in predicting interactions in novel contexts with new objects unseen in training.

#### 4.1.1 Semantic Reasoning

We first describe the $f^{state}_{\theta}\left(\cdot\right)$ function in equation (3). We denote the robot’s current world state $s_{t}$ as an object-centric graph $G_{t}=(O,R)$. Each node in the graph represents an object instance $o\in O=\mathcal{O}(s_{t})$. The edge set consists of binary relationships $\mathrm{OnTop}$, $\mathrm{ConnectedTo}$, $\mathrm{Near}$ and $\mathrm{Inside}$ between objects $R\subseteq O\times O$. Let $l_{o}\in\\{0,1\\}^{p}$ represent the discrete object states for the object $o$ (e.g., $\mathrm{Open/Closed}$, $\mathrm{On/Off}$). Here, $p$ represents the number of possible states of all objects. Next, the model incorporates a pre-trained function $\mathcal{C}(\cdot)$ that embeds a word (such as the token of an object class or a relation) into a dense distributed representation, such that semantically close tokens appear close in the learned space (?). The use of such embeddings enables generalization, which we discuss subsequently. Consider the example goal of “Place the box in the cupboard”. In this case, we want to satisfy the relationship $\mathrm{Inside}$ for the cupboard and the box. The entire environment can be considered as a graph where relations like $\mathrm{OnTop}(box,table)$ may hold.
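As a concrete illustration of this graph construction, the sketch below shows how node features and relation-typed edge lists might be assembled for the running example; the graph-to-graph GGCN updates described next operate on exactly this structure. The helper names, the embedding lookup, and the reuse of the earlier `WorldState` sketch are assumptions for exposition, not the released implementation.

```python
import numpy as np

RELATION_TYPES = ["OnTop", "Near", "Inside", "ConnectedTo"]

def build_scene_graph(state, embed, state_dim):
    """Assemble node features [l_o ; e_o] and per-relation edge lists for the GGCN input.

    `state` is a symbolic world state as sketched earlier, `embed(token)` is any
    pre-trained word-embedding lookup (e.g. ConceptNet-informed vectors) returning a
    fixed-size vector, and `state_dim` is p, the number of discrete object-state bits.
    """
    nodes = sorted(state.objects)
    index = {o: i for i, o in enumerate(nodes)}

    feats = []
    for o in nodes:
        l_o = np.zeros(state_dim)          # multi-hot discrete states (schematically left empty here)
        e_o = embed(o.rsplit("_", 1)[0])   # class token of the instance, e.g. "box_0" -> "box"
        feats.append(np.concatenate([l_o, e_o]))

    # One adjacency list per relation type, so messages can be aggregated per relation.
    edges = {r: [] for r in RELATION_TYPES}
    for rel, src, dst in state.relations:
        edges[rel].append((index[src], index[dst]))

    return np.stack(feats), edges

# For the goal "Place the box in the cupboard", a scene containing OnTop(box_0, table_0)
# would contribute the edge (box_0 -> table_0) to edges["OnTop"].
```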
The pretrained function $\mathcal{C}(\cdot)$ can be any pretrained language model. Let $e_{o}=\mathcal{C}(o)\in\mathcal{R}^{q}$ denote the $q$-dimensional embedding for an object instance $o$. The embeddings $l_{o}$ and $e_{o}$ model object attributes that initialize the state of each object node in the graph. The local context for each $o$ is incorporated via a Gated Graph Convolution Network (GGCN) (?), which performs message passing between 1-hop vertices on the graph. Following (?), the gating stage is realized as a Gated Recurrent Unit (GRU) resulting in _graph-to-graph_ updates as:

$\displaystyle\begin{split}r_{o}^{0}&=\mathrm{tanh}\left(W_{r}\left[l_{o};e_{o}\right]+b^{r}\right),\\\\[-3.0pt] x^{k}_{o}&=\sum_{j\in R}\sum_{o^{\prime}\in N^{j}(o)}W_{j}^{k}r_{o^{\prime}}^{k-1},\\\\[-3.0pt] r^{k}_{o}&=\mathrm{GRU}\left(r^{k-1}_{o},x^{k}_{o}\right).\\\\[-3.0pt] \end{split}$ (4)

Here, the messages for object $o$ are aggregated over neighbors $N^{j}(o)$ connected by relation $j$ ($\forall j\in R$) during $n$ convolutions, resulting in an embedding $r^{n}_{o}$ for each object instance in the environment. Unlike a fully-connected neural network, a GGCN model facilitates inference over the relations across objects for an input object-centric graph. These relations are crucial to predict the relevant actions and achieve the intended goal. GRUs in the convolution iterations facilitate stable training of the model (?).

#### 4.1.2 Fusing Metric Information.

Next, we incorporate the metric information associated with objects. Let $pose_{o}$ and $size_{o}$ represent the pose and size/extent (along xyz axes) for each object instance. The pose includes the spatial position and the orientation of the object. We do not explicitly provide the point-cloud geometric model; we abstract out the surface information that may be required to infer the utility of objects as tools and instead rely on ConceptNet embeddings for this purpose. However, in our evaluation (Section 6), we utilize CAD models that closely represent the surface properties to model the physics of tool use. Another point to note here is that, for the specific task of utilizing an object as a tool, the pose information contributes little. However, the pose information is required to accurately identify the system state. For instance, the pose information of the door of a fridge indicates whether the fridge is open or closed. This information is given as a categorical value, i.e., the state vector $l_{o}$, to the GGCN model. To explicitly pass this information to the FCN model, we use the pose vectors of objects as inputs. The properties are encoded using a $d$-layer Fully Connected Network (FCN) with a Parameterized ReLU (PReLU) (?) activation as:

$\displaystyle\begin{split}m^{0}_{o}&=\mathrm{PReLU}\left(W_{mtr}^{0}[pose_{o};size_{o}]+b_{mtr}^{0}\right)\\\\[-3.0pt] m^{k}_{o}&=\mathrm{PReLU}\left(W_{mtr}^{k}m^{k-1}_{o}+b_{mtr}^{k}\right),\end{split}$ (5)

resulting in the metric encoding $m^{d}_{o}$ for each object in the scene. A world state encoding (for $s_{t}$) is obtained by fusing the semantic and metric embeddings as

$f^{state}_{\theta}(s_{t})=\\{\tilde{s}_{t}^{o}\\!=\\![r^{n}_{o};m^{d}_{o}]|\ \forall o\in{\cal O}(s_{t})\\}.$ (6)

_Late fusion_ of the two encodings allows downstream predictors to exploit them independently.

### 4.2 Encoding the Action History

Next, we define the $f^{hist}_{\theta}\left(\cdot\right)$ function in equation (3). The task of deciding the next action is informed by the agent’s action history in two ways.
First, sequential actions are often temporally correlated. For example, a placing task often involves moving close to the box, opening it and then placing an item inside it. If the previous action involved moving close to a box, this information helps the model leverage localized contextual information and not jump to manipulating another object in the goal specification. This, in tandem with the goal-conditioned attention, facilitates seamless action sequence generation. Hence, maintaining the action history can help in predicting the next action. Second, the set of actions the robot executed in the past provides a local context indicating the objects the robot may utilize in the future.

Formally, we encode the temporal action history $\eta_{t}$ using an $\mathrm{LSTM}$. We define the action encoding $\mathcal{A}(a_{t-1})$ of $a_{t-1}=I_{t-1}(o^{1}_{t-1},o^{2}_{t-1})$, independent of the object set, as $\mathcal{A}(a_{t-1})=[\vec{I}_{t-1};\mathcal{C}(o^{1}_{t-1});\mathcal{C}(o^{2}_{t-1})]$, where $\vec{I}_{t-1}$ is a one-hot vector over possible interaction types $\mathcal{I}$, and $\mathcal{C}(o^{1}_{t-1})$ and $\mathcal{C}(o^{2}_{t-1})$ represent the word embeddings of the object instances $o^{1}_{t-1}$ and $o^{2}_{t-1}$. At each time step $t$, the $\mathrm{LSTM}$ encoder takes in the encoding of the previous action, $\mathcal{A}(a_{t-1})$, and outputs the updated encoding $\tilde{\eta}_{t}$, given as $\tilde{\eta}_{t}=\mathrm{LSTM}(\mathcal{A}(a_{t-1}),\tilde{\eta}_{t-1})$. This results in the embedding vector

$f^{hist}_{\theta}(\eta_{t})=\tilde{\eta}_{t}.$ (7)

### 4.3 Goal-conditioned Attention

We now define the $f^{goal}_{\theta}\left(\cdot\right)$ function in equation (3). The goal $g$ consists of symbolic relations (e.g., $\mathrm{Inside}$, $\mathrm{OnTop}$) between object instances (e.g., carton and cupboard) that must be true at the end of the robot’s plan execution. The declarative goal input to the model is partitioned as relations $g_{rel}$ and the object instances specified in the goal $g_{obj}$. The resulting encodings are denoted as $\tilde{g}_{rel}$ and $\tilde{g}_{obj}$:

$\tilde{g}_{rel}=\frac{1}{|g_{rel}|}\sum_{j\in g_{rel}}\mathcal{C}(j)\hskip 5.69054pt\mathrm{and}\hskip 5.69054pt\tilde{g}_{obj}=\frac{1}{|g_{obj}|}\sum_{o\in g_{obj}}\mathcal{C}(o).$

Next, the goal encoding and the action history encoding $\tilde{\eta}_{t}$ are used to learn attention weights over objects in the environment (?) such that

$\displaystyle\begin{split}\epsilon_{o}=\mathrm{softmax}\left(W_{g}[\tilde{s}_{t}^{o};\tilde{g}_{obj};\tilde{\eta}_{t}]+b_{g}\right),\end{split}$ (8)

where $\tilde{s}_{t}^{o}$ is obtained from $f^{state}_{\theta}(s_{t})$ in (6) and $\tilde{\eta}_{t}$ is obtained from $f^{hist}_{\theta}(\eta_{t})$ in (7). This results in the attended scene encoding $\Omega_{t}$ as:

$\Omega_{t}=f^{goal}_{\theta}(\tilde{s}_{t}^{o},\tilde{g}_{obj},\tilde{\eta}_{t})=\sum_{o\in O}\mathrm{\epsilon_{o}}\tilde{s}_{t}^{o}.$ (9)

The attention mechanism aligns the goal information with the scene, learning a task-relevant context and relieving the model from reasoning about objects in the environment unrelated to the task, which may be numerous in realistic environments.

### 4.4 Tool Likelihood Prediction

Now we define the function that predicts the likelihood scores for each tool in the environment state, i.e., the $f^{tool}_{\theta}\left(\cdot\right)$ function in equation (3).
This likelihood score corresponds to the probability that a particular tool could be used to reach a goal state for a given environment state and declarative goal specification. In order to allow the model to generalize to unseen tools, instead of predicting over the pre-defined tool set $\hat{\tau}$, we allow the model to predict a likelihood score for a tool $t$ (which may not be present in any of the scenes in the training set) using its object embedding ($e_{t}=\mathcal{C}(t)$, as described in Section 4.1.1). This recurrence is shown in the factored tool likelihood module in Figure 2. The prediction is made using the encoding of the state, i.e., the attended scene embedding $\Omega_{t}$, the relational description of the goal $\tilde{g}_{rel}$ and the action history encoding $\tilde{\eta}_{t}$. The likelihood of each tool $t$ is computed as $p_{t}=\mathrm{sigmoid}(W[\Omega_{t};\tilde{g}_{rel};\tilde{\eta}_{t};e_{t}])\ \forall\ t\in\hat{\tau},$ (10) where $\Omega_{t}$ is obtained using (9) and $\tilde{\eta}_{t}$ is obtained using (7). At inference we predict over a tool set $\tau$, which may differ from the fixed tool set seen during training; this admits tools at inference time that are unseen during training. This factored style of likelihood prediction gives our model the flexibility to make likelihood predictions for unseen tools. We then define the likelihood score for each object as $p_{t}$ for tools and $0$ otherwise, where $p_{t}$ is the probability that tool $t$ can be used to complete the goal. Specifically, $f^{tool}_{\theta}(s_{t},g_{t},\eta_{t})=\\{p_{o}\text{ if }o\in\tau\text{ else }0\ |\ \forall o\in{\cal O}(s_{t})\\}.$ (11) Unlike the original ToolNet model (?), the $f^{tool}_{\theta}\left(\cdot\right)$ function in this work predicts the tool likelihood scores at each time-step $t$ and utilizes the action history encoding $\tilde{\eta}_{t}$ to capture temporal context. Figure 3: The ToolTango neural model is architecturally similar to ToolNet but also incorporates the tool likelihood scores predicted by the ToolNet model, finally decoding the next symbolic action for the robot to execute. The interaction type and objects are predicted auto-regressively and in a factored style. ### 4.5 Robot Action Prediction We now use the encoded information about the world state, goal and action history to decode the next symbolic action $a_{t}=I_{t}(o^{1}_{t},o^{2}_{t})$. The three components $I_{t}$, $o^{1}_{t}$ and $o^{2}_{t}$ are predicted auto-regressively: the prediction of the interaction $I_{t}$ is used for the prediction of the first object $o^{1}_{t}$, and both predictions are used for the second object prediction $o^{2}_{t}$. For the object predictors $o^{1}_{t}$ and $o^{2}_{t}$, instead of predicting over a predefined set of objects, our model predicts a likelihood score for each object $o\in O$ based on its object embedding $\tilde{s}_{t}^{o}$ and tool likelihood score $p_{o}$ from (11). It then selects the object with the highest likelihood score.
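The attended scene encoding of equations (8)–(9) and the factored tool scores of equation (10) can be sketched as follows; the function and argument names are illustrative assumptions, and the scoring layers are assumed to be simple linear maps rather than the exact layers used in the released model.

```python
import torch
import torch.nn as nn

def goal_conditioned_attention(obj_encs, goal_obj, eta, attn_layer):
    """Sketch of eqs. (8)-(9): attention over objects conditioned on goal and history.
    obj_encs: (N, state_dim) fused object encodings s~_t^o; goal_obj: (goal_dim,);
    eta: (hist_dim,) action-history encoding; attn_layer: nn.Linear(..., 1)."""
    n = obj_encs.shape[0]
    ctx = torch.cat([obj_encs, goal_obj.expand(n, -1), eta.expand(n, -1)], dim=-1)
    eps = torch.softmax(attn_layer(ctx).squeeze(-1), dim=0)    # eq. (8)
    return (eps.unsqueeze(-1) * obj_encs).sum(dim=0)           # Omega_t, eq. (9)

def tool_likelihoods(omega, g_rel, eta, tool_embs, score_layer):
    """Sketch of eq. (10): factored likelihood p_t for each candidate tool.
    tool_embs: (T, embed_dim) ConceptNet embeddings e_t; T may differ from the
    tool set seen during training, which is what enables unseen-tool prediction."""
    ctx = torch.cat([omega, g_rel, eta])                        # shared context
    scores = [score_layer(torch.cat([ctx, e_t])) for e_t in tool_embs]
    return torch.sigmoid(torch.stack(scores).squeeze(-1))       # p_t per tool

# Example usage with illustrative dimensions:
# attn_layer  = nn.Linear(256 + 300 + 128, 1)
# score_layer = nn.Linear(256 + 300 + 128 + 300, 1)
# omega = goal_conditioned_attention(obj_encs, g_obj, eta, attn_layer)
# p     = tool_likelihoods(omega, g_rel, eta, tool_embs, score_layer)
```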
The resulting factored likelihood allows the model to generalize to an _a-priori_ unknown number and types of object instances: $\displaystyle I_{t}=\mathrm{argmax}_{I\in\mathcal{I}}\left(\mathrm{softmax}(W_{I}[\Omega_{t};\tilde{g}_{rel};\tilde{\eta}_{t}]+b_{I})\right),$ (12) $\displaystyle\begin{split}o^{1}_{t}&=\mathrm{argmax}_{o\in{O}}\alpha_{t}^{o}\\\ &=\mathrm{argmax}_{o\in{O}}\ (\sigma(W_{\alpha}[\Omega_{t};\tilde{g}_{rel};\tilde{\eta}_{t};e_{o};p_{o};\vec{I}_{t}]+b_{\alpha}))\mathrm{,}\end{split}$ (13) $\displaystyle o^{2}_{t}=\mathrm{argmax}_{o\in{O}}\ (\sigma(W_{\beta}[\Omega_{t};\tilde{g}_{rel};\tilde{\eta}_{t};e_{o};p_{o};\vec{I}_{t};\alpha_{t}^{o}]+b_{\beta})).$ (14) Here $\alpha^{o}_{t}$ denotes the likelihood prediction of the first object. Finally, we impose grammar constraints (denoted as $\Lambda$) at inference time based on the number of arguments that the predicted interaction $I_{t}$ accepts. If $I_{t}$ accepts only one argument, only $o^{1}_{t}$ is selected, otherwise both are used. Thus, predicted action for time-step $t$ is denoted as $a_{t}=f^{act}_{\theta}(\Omega_{t},\tilde{g}_{rel},\tilde{\eta}_{t})=\Lambda[I_{t}(o^{1}_{t},o^{2}_{t})].$ (15) This action is then executed by the robot in simulation. Our simulation model follows closely to the one developed by ? (?) wherein symbolic actions such as $\mathrm{moveTo}$, $\mathrm{pick}$ and $\mathrm{place}$ are realized as relations in the environment state using a PyBullet simulator. For instance, the $\mathrm{moveTo}$ action changes the xyz coordinates of the robot such that it is $\mathrm{Near}$ the target object. Similarly, $\mathrm{pick}$ establishes a $\mathrm{connectedTo}$ relation between the target object and robot arm. For implementation specific details, visit https://github.com/reail-iitd/tango/wiki. The executed action and resulting world state is provided as input to the model for predicting the action at the next time step. The $f^{act}_{\theta}\left(\cdot\right)$ function denotes the ToolTango model. In our original Tango model presented in (?), the $p_{o}$ tool likelihood score was not part of equations (13) and (14). ### 4.6 Word Embeddings Informed by a Knowledge Base ToolTango uses word embedding function $\mathcal{C}(\cdot)$ that provides a dense vector representation for word tokens associated with object class and relation types. Contemporary models use word embeddings acquired from language modeling tasks (?). We adopt embeddings that are additionally informed by an existing knowledge graph $\mathrm{ConceptNet}$ (?) that provides a sufficiently large knowledge graph connecting words with edges expressing relationships such as $\mathrm{SimilarTo}$, $\mathrm{IsA}$, $\mathrm{UsedFor}$, $\mathrm{PartOf}$ and $\mathrm{CapableOf}$. Word embeddings (?) can be _retro-fitted_ such that words related using knowledge graph embeddings are also close in the embedding space (?). Using such (pre- trained) embeddings incorporates _general purpose_ relational knowledge to facilitate richer generalization for downstream policy learning. Thus, given an object, say $apple$, the word embedding function $\mathcal{C}(.)$ returns a dense $\mathrm{ConceptNet}$ vector that corresponds to this token. The complete sequence of steps is summarized in Figure 3. ### 4.7 Model Training We decompose the action prediction task into first predicting tool, i.e., the $f^{tool}_{\theta}\left(\cdot\right)$ function and then evaluating the $f^{act}_{\theta}\left(\cdot\right)$ function. We also decompose model training. 
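Before describing the two training stages, the auto-regressive decoding of equations (12)–(15) can be sketched as below. The grammar constraint $\Lambda$ is reduced to an arity check, and all layer names and feature orderings are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def decode_action(omega, g_rel, eta, obj_embs, p_obj,
                  interaction_fc, obj1_fc, obj2_fc, arity):
    """Sketch of eqs. (12)-(15): auto-regressive prediction of I_t, o1_t, o2_t.
    obj_embs: (N, embed_dim) object embeddings e_o; p_obj: (N,) tool likelihoods
    (0 for non-tools); arity: dict mapping interaction index -> argument count."""
    ctx = torch.cat([omega, g_rel, eta])
    interaction = torch.softmax(interaction_fc(ctx), dim=-1)             # eq. (12)
    i_t = int(interaction.argmax())
    i_onehot = torch.zeros_like(interaction)
    i_onehot[i_t] = 1.0

    n = obj_embs.shape[0]
    base = torch.cat([ctx.expand(n, -1), obj_embs,
                      p_obj.unsqueeze(-1), i_onehot.expand(n, -1)], dim=-1)
    alpha = torch.sigmoid(obj1_fc(base).squeeze(-1))                      # eq. (13)
    o1 = int(alpha.argmax())
    beta = torch.sigmoid(
        obj2_fc(torch.cat([base, alpha.unsqueeze(-1)], dim=-1)).squeeze(-1))  # eq. (14)
    o2 = int(beta.argmax())

    # grammar constraint Lambda: drop the second argument for unary interactions
    return (i_t, o1, None) if arity[i_t] == 1 else (i_t, o1, o2)          # eq. (15)
```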
First, we train the tool prediction model. The loss used to train this model is Binary Cross-Entropy with each $t\in\hat{\tau}$ acting as a class. The class label, $y^{i}_{t}$ is assigned 1 if it is used in a given demonstration, $i$ and 0 otherwise. We also use categorical weights based on plan execution time to encourage shorter plans (?). However, the knowledge of the time taken for different plans has not been injected into the model. In order to make this notion explicit to the model we use loss weighting such that for the dataset $\mathcal{D_{\mathrm{Train}}}$ defined in (2), $\mathcal{L}=-\sum_{i}\alpha_{i}\sum_{j}\sum_{t\in\tau}y^{i}_{j,t}\log(p^{i}_{j,t})+(1-y^{i}_{j,t})\log(1-p^{i}_{j,t}),$ (16) where, $p^{i}_{j,t}$ is obtained from $f^{tool}_{\theta}(s^{i}_{j},g^{i},\eta^{i}_{j})$ for each datapoint in $\mathcal{D_{\mathrm{Train}}}$ and $\alpha_{i}$ is a multiplier that is high for optimal plans (shortest among human demonstrations) and low otherwise. For a trained tool prediction model, we then train the $f^{act}_{\theta}\left(\cdot\right)$ ToolTango model, fine-tuning the weights of our updated ToolNet function $f^{tool}_{\theta}\left(\cdot\right)$. Here too we use the Binary Cross-Entropy loss, with the loss for the three predictors (action and the two objects) being added independently. Additional model training and hyperparameter details are given in Appendix B. This decomposed training style has direct implications on the training stability. First training ToolNet to predict the likelihood scores for tools enables action prediction informed by the likelihood scores of each tool in the environment. ## 5 Data Collection Platform and Annotation Domain | Plan lengths | Objects interact- | Tools used | Sample objects | Sample goal specifications ---|---|---|---|---|--- ed with in a plan | in a plan Home | 23.25$\pm$12.65 | 4.12$\pm$1.97 | 0.93$\pm$0.70 | floor1, wall, fridge123, cupboard123, tables1, couch1, big-tray1, tray1, book1, paper, cubes, light switch4, bottle, box2, fruits, chair15, stick, dumpster2, milk carton, shelf1, glue6, tape6, stool15, mop8, sponge8, vacuum8, dirt7, door2 | 1\. Place milk in fridge, 2. Place fruits in cupboard, 3. Remove dirt from floor, 4. Stick paper to wall, 5. Put cubes in box, 6. Place bottles in dumpster, 7. Place a weight on paper, 8. Illuminate the room. Factory | 38.77$\pm$23.17 | 4.38$\pm$1.85 | 1.44$\pm$0.97 | floor1, wall, ramp, worktable1, tray1, box2, crates1, stick, long-shelf1, lift1, cupboard123, drill4, hammer49, ladder5, trolley2, brick, blow dryer48, spraypaint4, welder4, generator4, gasoline, coal, toolbox2, wood cutter4, 3D printer4, screw9, nail9, screwdriver49, wood, platform1, oil7, water7, board, mop8, glue6, tape6, stool15 | 1\. Stack crated on platform, 2. Stick paper to wall, 3. Fix board on wall, 4. Turn on the generator, 5. Assemble and paint parts, 6. Move tools to workbench, 7. Clean spilled water, 8. Clean spilled oil. Table 1: Dataset characteristics. The average plan length measured as the length of action sequence), number of objects interacted in plan and number of tools used in plans with object and goal sets for Home and Factory domains. Object positions were sampled using Gaussian distribution. Objects in bold can be used as tools. Legend:- 1: surface, 2: can open/close, 3: container, 4: can operate, 5: can climb, 6: can apply, 7: can be cleaned, 8: cleaning agent, 9: can 3D print. Objects in bold can be used as tools. Object affordances derived from properties in the ConceptNet graph (?). 
Stool/ladder are objects used to represent a tool for raising the height of the robot. ### 5.1 Data Collection Environment We develop a low-fidelity simulation environment where the robot can take actions and interact with objects. The low-level physics of motion is less important as we are primarily concerned with the symbolic effects on the world state. To do this, we use PyBullet, a physics simulator (?), to generate home and factory-like environments populated with a virtual mobile manipulator (a Universal Robotics (UR5) arm mounted on a Clearpath Husky mobile base). The world state is object-centric including the metric locations of objects and the discrete states of symbolic attributes and relations. The objects in the domains were derived from real-world home and factory scenes that span Facebook Replica Dataset (?). These scenes were made to look as photo realistic as possible. The objects on other hand were chosen to span the YCB dataset (?). The CAD model for each object obtained from open-source repositories such as the Google 3D Warehouse111 https://3dwarehouse.sketchup.com/. This was done in order to keep the simulated physics as real as possible. The set of objects in the two domains are listed in Table 1. Each object in the environment is associated with a metric location and physical extent and may optionally possess discrete states such as $\mathrm{Open/Closed}$, $\mathrm{On/Off}$, etc. The world model is assumed to possess spatial notions such as $\mathrm{Near}$ or $\mathrm{Far}$. Objects in the world model can be supported by, contained within or connected with other objects (or the agent). Hence, we include semantic relations such as $\mathrm{OnTop}$, $\mathrm{Inside}$, $\mathrm{ConnectedTo}$ etc. More details in Appendix D. Robot Actions --- Push, Climb up/down, Open/Close, Switch on/off, Drop, Pick, Move to, Operate device, Clean, Release material on surface, Push until force Object Attributes Grabbed/Free, Outside/Inside, On/Off, Open/Close, Sticky/Not Sticky, Dirty/Clean, Welded/Not Welded, Drilled/Not Drilled, Driven/Not Driven, Cut/Not Cut, Painted/Not Painted Semantic Relations On top, Inside, Connected to, Near Metric Properties Position, Orientation, Size Table 2: Domain Representation. Robot symbolic actions, semantic attributes, relations to describe the world state and objects populating the scene in Home and Factory Domains. The robot possesses a set of behaviours or symbolic actions such as $\mathrm{Moving}$ towards an object, $\mathrm{Grasping}$, $\mathrm{Releasing/Dropping}$ or $\mathrm{Pushing}$ an object or $\mathrm{Operating}$ an entity to imply actions that induce discrete state changes such as opening the door before exiting, turning on a switch etc. We assume that the robot’s actions can be realized by the presence of an underlying controller. We encode the geometric requirements for actions as symbolic pre-conditions. Examples include releasing an object from the gripper before grasping another, opening the door before trying to exit the room. The set of possible actions and the range of interactions are listed in Table 2. The robot is tasked with goals that involve multiple interactions with objects derived from standardized data sets (?). These goals include: (a) _transporting_ objects from one region to another (including space on top of or inside other objects), (b) _fetching_ objects, which the robot must reach, grasp and return with, and (c) _inducing state changes_ such as illuminating the room or removing dirt from floor. 
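To make the object-centric representation of Table 2 concrete, the following is a small illustrative sketch of how a world state and a declarative goal could be encoded; the class and field names are assumptions for exposition, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectState:
    """One object instance in the object-centric world state (cf. Table 2)."""
    name: str                                        # e.g. "milk carton"
    position: Tuple[float, float, float]             # metric position (xyz)
    orientation: Tuple[float, float, float, float]   # quaternion
    size: Tuple[float, float, float]                 # extent along xyz
    attributes: Dict[str, bool] = field(default_factory=dict)  # e.g. {"Open": False}

@dataclass
class WorldState:
    objects: Dict[str, ObjectState]
    # semantic relations as (relation, subject, object) triples,
    # e.g. ("OnTop", "milk carton", "fridge")
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

# A declarative goal is a conjunction of relations that must hold at the end of
# execution, e.g. the "place milk in fridge" goal from Table 1:
goal = [("Inside", "milk carton", "fridge")]
```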
We assume that the robot is instructed by providing _declarative_ goals. For example, the task of moving all fruits on top of the kitchen table can be modeled as a set of intended constraints among the interacting objects. The effects of actions such as pushing or moving are simulated via a motion planner and propagated to the next time step. Abstract actions such as attachment, operating a tool or grasping/releasing objects are encoded symbolically as the establishment or release of constraints. The simulation for these actions is coarse and considers their symbolic effects, forgoing the exact motions/skills needed to implement them. We assume that the robot can realize abstract actions through low-level routines. We present the human instructors with a scene and the goal conditions to be attained, and receive actions specified in a custom grammar. Each instruction is grounded in the world, and the virtual environment shows its effects before the instructor inputs the next action. This is repeated until a goal state is reached. Figure 4 illustrates the interactive platform used for data collection. Using such a curated dataset, the robot must learn to synthesize a plan of executable actions that satisfies the goal constraints. The presence of a rich space of interactions gives rise to plans with multiple interactions between objects. For example, “packing items into a basket and carrying the basket to the goal region”, “using a stick to fetch and drop an object beyond reach into a box”, “using a ramp/stool to elevate itself to fetch an object”. ### 5.2 Annotated Corpus To curate an imitation learning dataset, we recruit human instructors and provide them with goals. They instruct the robot by specifying a sequence of symbolic actions (one at a time) to achieve each goal. Each action is simulated so that they can observe its effects and the new world state. We encourage the instructors to complete the task as quickly as possible, making use of available tools in the environment. To familiarize them with the simulation platform, we conduct tutorial sessions before data collection. Our resulting dataset consists of diverse plans with different action sets and object interactions. We collected plan traces from $\mathrm{12}$ human subjects using domain randomization with $\mathrm{10}$ scenes and $8$ semantic goals, resulting in a corpus of $\mathrm{708}$ and $\mathrm{784}$ plans for the home and factory domains. Figures 5(a) and 5(b) show the number of interactions for the 10 most-interacted objects and the frequency of the 10 most frequent actions, respectively. The complete set of objects and goals is given in Table 1. We also perform data augmentation by perturbing the metric states in the human plan trace, performing random replacements of scene objects and validating plan feasibility in simulation. The process results in $\mathrm{3540}$ and $\mathrm{3920}$ plans, respectively. Variation was observed in an instructor’s tool use across different goals, and within similar goals depending on context. The annotated corpus was split $(75\%:25\%)$ into a Training data set and a Test data set used to evaluate model accuracy. A $10\%$ fraction of the training data was used as the Validation set for hyper-parameter search. No data augmentation was performed on the Test set. Figure 4: Data Collection Interface. The human teacher instructs (left) a virtual mobile manipulator robot by specifying symbolic actions. The human-instructed plan is simulated and visualized (right). The user is shown the goal to be completed in text form.
They select the interaction type, first object and second object to instruct the robot. The interface can be seen in action in the video at https://www.youtube.com/watch?v=lUWU3rK1Gno. (a) Object interactions (b) Symbolic actions Figure 5: Data Set Characteristics. Distribution of plans by plan length for the home and factory domains; frequency of interaction for the top 10 objects and frequency of the top 10 actions. The collected data set contains diverse interactions in complex spaces. ### 5.3 Generalization Test Set In order to assess the model’s capacity to generalize to unseen worlds, we curate a second test environment populated by instances of novel object types placed at randomized locations. The following sampling strategies were used: (i) _Position_ : perturbing and exchanging object positions in a scene. (ii) _Alternate_ : removal of the most frequently used tool in demonstrated plans, evaluating the ability to predict the alternative, next-best tool to use. (iii) _Unseen_ : replacing an object with a similar object that is not present in training. (iv) _Random_ : replacing a tool with a randomly picked object which is _unrelated_ to the task. (v) _Goal_ : replacing the goal objects (objects included as part of the goal specifications). For example: milk and fridge in the goal “put milk inside fridge”. Here, milk may be replaced with a similar object, such as an apple, and fridge by another container, such as a cupboard. This process resulted in a Generalization Test set with $7460$ (goal, plan) pairs. ## 6 Experiments We first present the results corresponding to high-level tool prediction that leverages an updated ToolNet model. We then present the goal-reaching performance of Tango and the unified ToolTango model. Finally, we present a detailed analysis of the resulting plans in terms of generalization, robustness to execution errors and plan efficiency. ### 6.1 Sequential Tool Prediction #### 6.1.1 Baseline We compare against the basic GGCN model of Section 4.1.1 as our baseline model. This model is similar to the ResActGraph model proposed by ? (?) and its encoder incorporates technical ideas from recent imitation learning works (?) on action prediction. In the subsequent discussion, we keep the hyperparameter set a similar size across all models performing the same task. #### 6.1.2 Evaluation Metrics We first compare the accuracy of the updated ToolNet model on the test and generalization test sets. We test ToolNet’s tool prediction capabilities in two settings. In the first setting, we use the dataset as described in Section 5 and split it according to the scene instance, i.e., home and factory. We use accuracy as our performance measure. Here, a tool prediction is deemed correct if the predicted tool is used in at least one of the various annotated plans for the $(\rm{goal,scene})$ pair and incorrect otherwise. Model | Test Set | Generalization | Generalization Test cases ---|---|---|--- | Home | Factory | Home | Factory | Position | Alternate | Unseen | Random | Goal Baseline (GGCN) | 32.01 | 20.89 | 21.22 | 19.50 | 8.73 | 6.29 | 12.22 | 9.81 | 46.31 Updated ToolNet | 79.87 | 75.12 | 80.16 | 78.01 | 75.55 | 58.83 | 60.74 | 49.97 | 94.10 Table 3: A comparison of the updated ToolNet model with its previous version and the GGCN baseline, on the accuracy of predicting tool sequences rather than a single tool. Highest scores are shown in bold. #### 6.1.3 Results Table 3 shows the final accuracies on the test set and generalization test set, with individual accuracies for each test type.
On the regular test set, ToolNet outperforms the baseline by 47.87 (149% higher) and 54.23 (259% higher) accuracy points on Home and Factory domains, respectively. A similar pattern is found in the generalization test set, where each improvement is 58.94 (278% higher) and 58.51 (300% higher) accuracy points for Home and Factory domains. The reasoning behind the improvements is as follows. The goal-conditioned attention gives major improvement in the Goal generalization test type, since goal objects get replaced in those examples. Explicitly biasing the model to use the features of those objects (through conditioned attention) increases their importance, and likely reduces overfitting. An example for Alternate case is when _generator_ is specified in the goal, and wood (the fuel for generator, and the most likely tool) is made absent from the scene. The model could err in giving attention to the _wood- cutter_ tool, which is often correlated with wood. However, conditioned attention gives low attention to _wood-cutter_ and predicts _gasoline_ , instead. Moreover, factored likelihood predictions helps the most in Unseen cases, since without this component, the model cannot predict any unseen tool. A decent performance of earlier models in this case is attributed to alternative possible correct answers (any alternative seen tool or no-tool) due to multiple annotations per scenario. The $\mathrm{ConceptNet}$ embeddings likely contain commonsense knowledge about unseen tools and objects, for example, whether a new tool is flat or not (which should help in ascertaining whether it can be used for transport or not). Using these embeddings makes huge improvement in Unseen cases where entirely new objects are to be predicted as tools. Finally, giving higher weight to optimal plans allows the model to differentiate tools by plan execution time and not human usage frequency. This helps in improved metric generalization, predicting nearby tools in the Position test case. Overall, the complete architecture provides the maximum generalization accuracy among all models. ### 6.2 Action Prediction #### 6.2.1 Baselines We compare to the following three baseline models. (1) _ResActGraph_ model (?), augmented with $\mathrm{FastText}$ embeddings (?). (2) _Affordance-only_ baseline inspired from (?) that learns a co-association between tasks and tools, implemented by excluding the graph convolutions and attention from Tango. (3) _Vanilla Deep Q-Learning (DQN)_ approach (?) that learns purely by interactions with a simulator, receiving positive reward for reaching a goal state. #### 6.2.2 Tool Prediction Accuracy Our experiments use the following accuracy metrics for model evaluation: (i) _Action prediction accuracy_ : the fraction of tool interactions predicted by the model that matched the human demonstrated action $a_{t}$ for a given state $s_{t}$, and (ii) _Plan execution accuracy_ : the fraction of estimated plans that are successful, i.e., can be executed by the robot in simulation and attain the intended goal (in max. $50$ steps). We first present the results for the vanilla Tango model and later present improvements with the ToolTango. #### 6.2.3 Comparing Tango with Baselines Table 4 (top half) compares the Tango model performance with the baseline models. The Tango model shows a $14-23$ point increase in _Action prediction accuracy_ and a $66-71$ points increase in the _Plan execution accuracy_ when compared to the _ResActGraph_ baseline. 
Note that the ResActGraph model learns a scene representation assuming a fixed and known set of object types and hence can only generalize to new randomized scenes of known objects. In contrast, the Tango model can not only generalize to randomized scenes with known object types (sharing the GGCN backbone with ResActGraph) but can to novel scenes new object types (relying on dense semantic embeddings) and an a-priori unknown number of instances (enabled by a factored likelihood). The _Affordance-only_ baseline model is confined to learning the possible association between a tool object type and the task specified by the human (largely ignoring the environment context). This approach addresses only a part of our problem as it ignores the sequential decision making aspect, where tools may need to be used in sequence to attain a goal. Finally, the vanilla DQN baseline achieves less than $20\%$ policy accuracy (even after a week of training). In contrast, the Tango model shows accurate results after training on imitation data for $12-18$ hours. The challenges in scaling can be attributed to the problem size ($\approx$1000 actions), long plans, sparse and delayed rewards (no reward until goal attainment). Model | Action Prediction | Plan Execution | Generalization Plan Execution Accuracy ---|---|---|--- | Home | Factory | Home | Factory | Home | Factory | Position | Alternate | Unseen | Random | Goal Baseline (ResActGraph) | 27.67 | 45.81 | 26.15 | 0.00 | 12.38 | 0.00 | 0.00 | 0.00 | 0.00 | 25.10 | 9.12 Affordance Only | 46.22 | 52.71 | 52.12 | 20.39 | 44.10 | 4.82 | 17.84 | 47.33 | 29.31 | 29.57 | 34.85 DQN | - | \- | 24.82 | 17.77 | 15.26 | 2.23 | 0.00 | 0.00 | 12.75 | 9.67 | 4.21 Tango | 59.43 | 60.22 | 92.31 | 71.42 | 91.30 | 60.49 | 93.44 | 77.47 | 81.60 | 59.68 | 59.41 Model Ablations \- GGCN (World Representation) | 59.43 | 60.59 | 84.61 | 27.27 | 78.02 | 38.70 | 70.42 | 58.79 | 60.00 | 56.35 | 38.64 \- Metric (World Representation) | 58.8 | 60.84 | 84.61 | 62.34 | 72.42 | 51.83 | 59.68 | 67.19 | 60.79 | 84.47 | 21.70 \- Goal-Conditioned Attn | 53.14 | 60.35 | 53.85 | 11.69 | 37.02 | 8.80 | 35.33 | 15.05 | 32.14 | 41.67 | 6.51 \- Temporal Action History | 45.91 | 49.94 | 24.61 | 0.00 | 8.55 | 0.00 | 0.00 | 0.00 | 0.00 | 30.56 | 1.15 \- Factored Likelihood | 61.32 | 61.34 | 95.38 | 85.71 | 34.22 | 43.44 | 90.50 | 14.82 | 30.65 | 64.67 | 53.26 \- ConceptNet | 63.52 | 60.35 | 89.23 | 57.14 | 81.86 | 56.97 | 82.33 | 68.61 | 74.57 | 65.73 | 47.92 \- Constraints | 57.23 | 57.74 | 64.62 | 37.66 | 62.98 | 41.95 | 84.95 | 45.39 | 39.99 | 36.11 | 84.85 \- Auto-regression | 56.60 | 60.22 | 69.23 | 50.65 | 73.75 | 53.32 | 71.24 | 66.43 | 61.27 | 70.51 | 34.66 Table 4: A comparison of _Action prediction_ and _Plan execution_ accuracies for the baseline, the proposed Tango model, and ablations. Results are presented for test and generalization data sets (under five sampling strategies) derived from the home and factory domains. Accuracy Prediction is the percentage of predicted actions matching the human input on the Test set. Plan Execution is the percentage of plans successfully executed in the Test set. Generalization Plan Accuracy is the percentage of plans successfully executed in the set. Highest scores shown in bold. Next, we assess the zero-shot transfer setting, i.e., whether the model can perform common sense generalization in worlds with new objects unseen in training. 
The same table shows that the plans predicted by Tango lead to an increase of up to $56$ points in plan execution accuracy on the Generalization Test set over the best-performing baseline model. This demonstrates accurate prediction and use of unseen tool objects for a given goal. Specifically, in the home domain, if the _stool_ is not present in the scene, the model is able to use a _stick_ instead to fetch far-away objects. Similarly, the robot can predict the use of a box for transporting objects even if it has only seen the use of a tray for moving objects during training. The ResActGraph model is unable to adapt to novel worlds and obtains zero points in several generalization tests. The poorer performance of the _Affordance-only_ model can again be attributed to the fact that planning tool interactions involves sequential decision-making. Even if the robot can use affordance similarity to replace a _tray_ object with a _box_, it still needs to predict opening the _box_ before placing an item in it for the plan to execute successfully. This is corroborated by the drop in performance of $52.3$ points for this model on the Unseen generalization tests. Finally, the vanilla DQN model lacks a clear mechanism for transferring to novel settings and hence shows poor generalization in our experiments. #### 6.2.4 Ablation Analysis of Tango Components We analyze the importance of each component of the Tango model by performing an ablation study. Table 4 (lower half) presents the results. For a fair comparison, the model capacities remain the same during the ablation experiments. The model builds on the _GGCN_ environment representation encoding the inter-object and agent-object relational properties. The ablation of the GGCN component results in a reduction of 22% in the generalization accuracy in the factory domain (where tools may be placed at multiple levels in the factory). The inclusion of this component allows the robot to leverage relational properties such as OnTop to predict the use of tools such as a ramp to negotiate an elevation or a stick to fetch an object immediately beyond the manipulator’s reach. The _Metric_ component encodes the metric properties of objects in the scene, such as position and size. Experiments demonstrate its effectiveness in predicting tool interactions based on the relative sizes of interacting objects. E.g., the model successfully predicts that _fruits_ can be transported using a _tray_ but larger _cartons_ require a _box_ for the same task. The ablation of this component leads to a reduction of $10.2$ points in the Alternate generalization tests, as the ablated model is unable to adapt the tool choice to unseen objects with sizes different from those seen during training. Next, we assess the impact of removing the _Goal-Conditioned Attention_ component. This experiment shows a significant reduction ($\approx 50$ points) in the _Plan execution accuracy_ on the Generalization Test set, particularly in scenes with a large number of objects. The attention mechanism allows learning of a restricted context of tool objects that may be useful for attaining the provided goal, in essence filtering away goal-irrelevant objects populating the scene. Additionally, note that the inclusion of this component allows tool predictions to be _goal-aware_. Consequently, we observe that ablating this component leads to a reduction of $53$ points in the Goal generalization test set, where the goal objects are perturbed.
The _Action History_ component utilizes the agent’s past interactions for the purpose of predicting the next tool interaction. The inclusion of this component allows learning of correlated and commonly repeated action sequences. For instance, the task of exiting from a room typically involves a plan fragment that includes moving to a door, opening it and exiting from the door and are commonly observed in a number of longer plans. The ablation of this component leads to erroneous predictions where a particular action in a common plan fragment is missing or incorrectly predicted. E.g., a robot attempting to pick an object inside an enclosure without opening the lid. In our experiments, we observe that ablating the model leads to a significant decrease in goal reach-ability, causing a $70$ point decrease in the _Plan Execution accuracy_ and $72$ point drop in the _Generalization accuracy_. The need for generalization to novel scenes implies that our model cannot assume a fixed and a-priori known set of objects that the robot can interact with. Generalization to an arbitrary number of objects in the scene is accomplished by factoring model predictions over individual objects in a recurrent manner. Ablating the factored likelihood components results in a simpler model that performs predictions over a known fixed-size object set. The simplified model displays a higher action-prediction and plan-execution accuracies in known world. Crucially, we observe that ablating this component results in a significant decrease of $51$ and $63$ points in the Unseen and the Alternate generalization test sets. Finally, the _ConceptNet embeddings_ are important for semantic generalization to unseen tool types. We replace ConceptNet embeddings with $\mathrm{FastText}$ (?) embeddings in the _-ConceptNet_ model to show their importance. The _-ConceptNet_ model shows poorer generalization (6.5% decrease) as it models word affinity as expressed in language only. $\mathrm{ConceptNet}$ embedding space models relational affinity between objects as present in the knowledge-base. Model | Action Prediction | Plan Execution | Generalization Plan Execution Accuracy ---|---|---|--- | Home | Factory | Home | Factory | Home | Factory | Position | Alternate | Unseen | Random | Goal TANGO | 59.43 | 60.22 | 92.31 | 71.42 | 91.30 | 60.49 | 93.44 | 77.47 | 81.60 | 59.68 | 59.41 ToolTANGO | 64.12 | 63.72 | 95.69 | 77.01 | 92.88 | 62.97 | 94.38 | 88.12 | 92.71 | 78.01 | 64.43 Table 5: A comparison of TANGO with ToolTANGO model. #### 6.2.5 Comparing Tango with ToolTango Table 5 compares the performance of ToolTango with the Tango model. ToolTango gives improved scores in both domains and for every generalization test case. The Action prediction accuracy of ToolTango is higher than Tango by 4.69 and 3.50 points for Home and Factory domains, respectively. Similarly, we see an increase in the Plan execution accuracy both for Test and Generalization Test sets. We get 3.38- 5.59 higher points on Test and 1.58-2.48 points in Generalization Test set for Home and Factory domains. The improvement is higher in Factory domain where the probability of using multiple tools in the same plan to reach a goal state is higher. This is particularly due to the independent tool prediction in ToolTango that aids action prediction in complex settings requiring longer action sequences (more details in Section 6.3). 
The performance improvement is highest for the Alternate and Random cases, where the tools are replaced by an alternate tool or another non-tool object. This shows the importance of independent tool-likelihood score prediction in the ToolTango model. Figure 6: A simulated robot manipulator uses ToolTango to synthesize tool interactions in novel contexts with unseen objects. (top) ToolTango predicts the use of a box when a tray is unavailable. (bottom) ToolTango predicts the use of a hammer and nails when screws are unavailable. Figure 7: The model predicts the instance of the tray (on the left) that is closer to the fruits (the goal objects) rather than the other instance (on the right). Figure 8: Interleaved action prediction and execution enables adaptation in case of unexpected errors during action execution. (top) The robot recovers after the milk carton falls. (bottom) It recovers after a fruit falls from the tray by picking it up again. ### 6.3 Analysis of Resulting Plans #### 6.3.1 Evidence of Generalization Figure 6 shows the robot using the learned model to synthesize a plan for a declarative goal. Here, if the goal is to transport fruits, the human demonstrates the use of a _tray_, and the model never sees a _box_ during training, ToolTango still uses a _box_ in a scene where the tray is absent, showing that it is able to predict semantically similar tools for task completion. Similarly, for the goal of fixing a board on the wall, if humans use _screws_, the agent uses _nails_ and a _hammer_ when screws are absent from the scene. Figure 7 shows how the model uses the position information of tool objects to predict the tool closer to the goal object or the agent. The world representation encodes the metric properties of objects (position and orientation), which allows the robot to interact with nearer tool objects. Figure 9: Scatter plot comparing the lengths of plans obtained from model predictions and those from human demonstrations. The plans generated by ToolTango are of similar length to those of human demonstrators. #### 6.3.2 Robustness to Errors Figure 8 shows the robustness to unexpected errors and stochasticity in action execution. Consider the task of “fetching a carton”, where the milk carton is on an elevated platform; the model predicts the use of a stool to elevate itself. The carton falls due to errors during action execution. The robot then infers that the stool is no longer required and directly fetches the carton. Similarly, for the task of “storing away the fruits in the cupboard”, the robot predicts the use of a tray for the transport task. During execution, an apple falls off the tray. The robot correctly re-collects the apple. #### 6.3.3 Plan Efficiency Comparison Figure 9 compares the length of robot plans predicted by the learned model against human-demonstrated plans. We observe that, on average, the predicted plan lengths are close to the human-demonstrated ones. In 12% of cases, the plans predicted by ToolTango utilize tools satisfying the goal condition in fewer steps than the human-demonstrated plan. Figure 10: Test cases where Tango fails but ToolTango reaches the goal state. These are typically cases with complex, long plans in which multiple tools are used. Tango predicts the stool in the first case and the coal in the second case, both of which are unreachable. ToolTango predicts reachable tools. (Top) Here the robot needs to place the milk carton, which is at a higher location that is unreachable without a stool.
In this test case, the stool is unreachable, so the model uses a stick to reach the milk carton. (Bottom) Switching on the generator requires fuel. In this test case, coal is unreachable/unavailable, so the model instead uses gasoline to turn on the generator. Figure 11: Execution accuracy of inferred plans against plan length for Tango and ToolTango. The latter gives a higher goal-reaching performance than the former for plans with longer action sequences. Figure 12: An analysis of fractional errors during plan execution using the learned ToolTango model. The horizontal axis denotes the total errors for the ablated model. The absolute values of the total errors are shown in Table 4. #### 6.3.4 Analysis of Independent Tool Prediction Figure 10 demonstrates the advantage of independent tool prediction and how it augments ToolTango to improve goal-reaching performance. The figure shows two cases where the Tango model is unable to reach the goal state, but ToolTango reaches the goal. The first example shows a setting where a milk carton is placed on top of the fridge and a stool is kept directly in front of the fridge. In this case, Tango predicts the use of the stool to reach the carton; however, it first opens the fridge door, making the stool inaccessible. On the other hand, ToolTango predicts using a stick to drop the carton on top of the table and is able to place it inside the fridge. The second example shows a case where the generator needs to be powered on after adding a fuel source. Here, Tango predicts picking the coal that is on top of the shelf. However, it does not use any tool to elevate itself and the execution returns an unreachable-object error. Instead, ToolTango predicts the use of gasoline that is placed on the ground and is able to execute the plan successfully. Independent tool prediction also allows the model to perform better in complex settings requiring longer action sequences and the use of multiple tools. Figure 11 assesses model accuracy against the lengths of the inferred plans. We observe that ToolTango has a higher plan execution accuracy for (goal, scene) pairs where the average plan length to attain a goal state is high. For instance, if the goal is to place fruits inside the cupboard and all the fruits are on top of the fridge, the robot needs to use a tool to first reach the fruits and then use another tool to carry them to the cupboard. In this case, the Tango model directly tries to pick unreachable fruits or predicts an incorrect tool to reach the fruits. ToolTango is able to execute the complex task by successively using a stool and a tray. ## 7 Limitations and Future Work ### 7.1 Scaling to Longer Plans As we observed in Figure 11, the plan execution accuracy decreases by 20% on the Test sets and 30% on the Generalization Test sets for longer plans. This merits investigation into planning abstractions (?) for scaling to longer plan lengths in realistic domains. Figure 12 analyzes the errors encountered during plan execution using actions predicted by the proposed model. In 27% of the cases, the model misses a pre-requisite action required to satisfy a pre-condition of the subsequent action. For example, missing the need to open the door before exiting the room (object unreachable $19\%$) or missing opening the cupboard before picking an object inside it (object inside enclosure $8\%$). There is scope for improvement here by incorporating explicit causal structure (?).
### 7.2 Partially observable environments Figure 13: Likelihood estimates for exploring different objects where a “screw” might be found. In realistic scenarios, the robot’s environment may only be partially known due to the limited view and scope of the robot’s sensors. For example, tools such as screws may be stored away in a tool box and may not be easily observed by the robot. In such a scenario, we expect the robot to learn to predict possible locations for exploration based on common sense knowledge. For example, the robot should explore the tool box. Further, the sub-geometry information may be fed into the model to determine the set of objects in the environment. In order to address partially observed worlds, we can extend the prediction model as follows. Instead of learning attention only over candidate objects, we can learn a _generalized_ attention over spatial relations modeled in the graph network. Such an extension allows the model to predict a preference order for locations that the robot should explore to find the required tool. Figure 13 illustrates an example where the robot predicts that the screws may be found in the toolbox or on the workbench. Please note that this result is indicative of the possibility of using the model in partially known worlds and will be investigated in detail as part of future work. ### 7.3 Extension to realistic robot control settings The action representations adopted in this work are inspired by the task and motion planning communities (?, ?). We simulate a mobile manipulator (Clearpath Robotics Husky with a mounted Universal Robotics UR5 arm). The robot’s action space is modeled as high-level skills (or behaviors), each in turn realized using a low-level motion plan or controller that is parameterized at run time. For navigation actions, we use a standard state-space planner (A* with Manhattan distance as a heuristic) to compute a coarse collision-free path and realize point-to-point motion through a velocity-based controller. For manipulation, we consider a 6-degree-of-freedom model of the manipulator. For object manipulation, a crane-grasp was implemented in which the arm moves to a pre-grasp position above the intended object of interest. The arm motion was realized using a joint-state controller that moves the arm to the target end-effector pose. The precise manipulation of tools is abstracted as follows: once the robot end-effector makes contact, a rigid joint is established for subsequent motions until it is released. As mentioned in Section 1, learning the precise manipulation of tools is not the focus of this work and can be delegated to a method such as the one proposed by ? (?). All robot actions are executed in the PyBullet physics engine with a mesh-based model of objects and a URDF model for the robot manipulation system. The technical details appear in Appendix D. The entire implementation of the robot behaviors and the simulation environment is available at https://github.com/reail-iitd/tango for the use of the research community. ## 8 Conclusions This paper proposes ToolTango, a novel neural architecture that learns a policy to attain intended goals as tool-interaction sequences, leveraging a fusion of semantic and metric representations, goal-conditioned attention, and knowledge-base corpora. ToolTango is trained using a data set of human-instructed robot plans with simulated world states in home- and factory-like environments. It uses an independent model that predicts likelihood scores for each tool at each time-step.
The imitation learner demonstrates accurate commonsense generalization to environments with novel object instances using the learned knowledge of shared spatial and semantic characteristics. It also shows the ability to adapt to erroneous situations and stochasticity in action execution. Finally, ToolTango synthesizes a sequence of tool interactions with a high accuracy of goal-attainment. Acknowledgments Mausam is supported by an IBM SUR award, grants by Google, Bloomberg and 1MG, Jai Gupta chair fellowship, and a Visvesvaraya faculty award by Govt. of India. Rohan Paul acknowledges support from Pankaj Gupta Faculty Fellowship and DST’s Technology Innovation Hub (TIH) for Cobotics. Shreshth Tuli is supported by the President’s PhD Scholarship at Imperial College London. We thank the IIT Delhi HPC facility and Prof. Prem Kalra and Mr. Anil Sharma at the CSE VR Lab for compute resources. We thank Mr. Pulkit Sapra and Prof. P. V. M. Rao for assistance with CAD model creation. We thank ? (?) for sharing the implementation for baseline comparison. We are grateful to anonymous turkers and student volunteers for assisting in the data collection. ## References * Allen et al. Allen, K. R., Smith, K. A., and Tenenbaum, J. (2019). Rapid trial-and-error learning in physical problem solving.. In CogSci, p. 90. * Antunes et al. Antunes, A., Saponaro, G., Dehban, A., Jamone, L., Ventura, R., Bernardino, A., and Santos-Victor, J. (2015). Robotic tool use and problem solving based on probabilistic planning and learned affordances. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. Workshop Learn. Object Affordances Fundamental Step to Allow Prediction Plan. Tool use. * Bae et al. Bae, H., Kim, G., Kim, J., Qian, D., and Lee, S. (2019). Multi-robot path planning method using reinforcement learning. Applied Sciences, 9(15), 3057. * Bahdanau et al. Bahdanau, D., Cho, K. H., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015. * Bansal et al. Bansal, R., Tuli, S., Paul, R., and Mausam (2020). Toolnet: Using commonsense generalization for predicting tool use for robot plan synthesis. In Workshop on Advances & Challenges in Imitation Learning for Robotics at Robotics Science and Systems (RSS). * Bisk et al. Bisk, Y., Zellers, R., Bras, R. L., Gao, J., and Choi, Y. (2020). Piqa: Reasoning about physical commonsense in natural language. In AAAI. * Boteanu et al. Boteanu, A., Kent, D., Mohseni-Kabir, A., Rich, C., and Chernova, S. (2015). Towards robot adaptability in new situations. In 2015 AAAI Fall Symposium Series. Citeseer. * Calli et al. Calli, B., Singh, A., Bruce, J., Walsman, A., Konolige, K., Srinavasa, S. S., Abbeel, P., and Dollar, A. M. (2017). YCB Benchmarking Project: Object Set, Data Set and Their Applications. Journal of The Society of Instrument and Control Engineers, 56(10), 792–797. * Chen et al. Chen, H., Tan, H., Kuntz, A., Bansal, M., and Alterovitz, R. (2020). Enabling robots to understand incomplete natural language instructions using commonsense reasoning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 1963–1969. IEEE. * Choi et al. Choi, D., Langley, P., and To, S. T. (2018). Creating and using tools in a hybrid cognitive architecture.. In AAAI Spring Symposia. * Coumans and Bai Coumans, E., and Bai, Y. (2016). Pybullet, a python module for physics simulation for games, robotics and machine learning. In GitHub repository. * Driess et al. 
Driess, D., Oguz, O., Ha, J.-S., and Toussaint, M. (2020). Deep visual heuristics: Learning feasibility of mixed-integer programs for manipulation planning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9563–9569. IEEE. * Finn et al. Finn, C., Yu, T., Zhang, T., Abbeel, P., and Levine, S. (2017). One-shot visual imitation learning via meta-learning. In Conference on Robot Learning, pp. 357–368. PMLR. * Fitzgerald et al. Fitzgerald, T., Goel, A., and Thomaz, A. (2018). Human-guided object mapping for task transfer. ACM Transactions on Human-Robot Interaction (THRI), 7(2), 1–24. * Fitzgerald et al. Fitzgerald, T., Goel, A., and Thomaz, A. (2021). Modeling and learning constraints for creative tool use. Frontiers in Robotics and AI, 8. * Gajewski et al. Gajewski, P., et al. (2019). Adapting everyday manipulation skills to varied scenarios. In 2019 International Conference on Robotics and Automation (ICRA), pp. 1345–1351. IEEE. * Garg et al. Garg, S., Bajpai, A., and Mausam (2019). Size independent neural transfer for RDDL planning. In Benton, J., Lipovetzky, N., Onaindia, E., Smith, D. E., and Srivastava, S. (Eds.), Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling, ICAPS 2018, Berkeley, CA, USA, July 11-15, 2019, pp. 631–636. AAAI Press. * Garg et al. Garg, S., Bajpai, A., and Mausam (2020). Symbolic network: Generalized neural policies for relational mdps. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, Vol. 119 of Proceedings of Machine Learning Research, pp. 3397–3407. PMLR. * Garrett et al. Garrett, C. R., et al. (2021). Integrated task and motion planning. Annu. Rev. Control Robot. Auton. Syst, 2021(4), 1–30. * Garrett et al. Garrett, C. R., Lozano-Pérez, T., and Kaelbling, L. P. (2020). Pddlstream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 30, pp. 440–448. * He et al. He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. * Hermans et al. Hermans, T., Rehg, J. M., and Bobick, A. (2011). Affordance prediction via learned object attributes. In ICRA: Workshop on Semantic Perception, Mapping, and Exploration, pp. 181–184. * Holladay et al. Holladay, R., Lozano-Pérez, T., and Rodriguez, A. (2019). Force-and-motion constrained planning for tool use. In IROS. * Huang et al. Huang, D.-A., Nair, S., Xu, D., Zhu, Y., Garg, A., Fei-Fei, L., Savarese, S., and Niebles, J. C. (2019). Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8565–8574. * Huang et al. Huang, S. H., Pan, J., Mulcaire, G., and Abbeel, P. (2015). Leveraging appearance priors in non-rigid registration, with application to manipulation of deformable objects. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 878–885. IEEE. * Hübner et al. Hübner, J. F., Bordini, R. H., and Wooldridge, M. (2006). Programming declarative goals using plan patterns. In International Workshop on Declarative Agent Languages and Technologies, pp. 123–140. Springer. * Jain et al. Jain, A., Das, D., Gupta, J. K., and Saxena, A. (2015). 
Planit: A crowdsourcing approach for learning to plan paths from large scale preference feedback. In ICRA, pp. 877–884. * Kho et al. Kho, G., Hung, C., and Cunningham, H. (2014). Robo brain: Massive knowledge base for robots. In Cornell Univ., USA, Tech. Rep. * Kingma and Ba Kingma, D. P., and Ba, J. (2014). Adam: A method for stochastic optimization. In CoRR arXiv:1412.6980. * Kolobov et al. Kolobov, A., Mausam, Weld, D. S., and Geffner, H. (2011). Heuristic search for generalized stochastic shortest path mdps. In ICAPS. * Kroemer et al. Kroemer, O., Ugur, E., Oztop, E., and Peters, J. (2012). A kernel-based approach to direct action perception. In 2012 IEEE international Conference on Robotics and Automation, pp. 2605–2610. IEEE. * Lee et al. Lee, A. X., Gupta, A., Lu, H., Levine, S., and Abbeel, P. (2015). Learning from multiple demonstrations using trajectory-aware non-rigid registration with applications to deformable object manipulation. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5265–5272. IEEE. * Levihn and Stilman Levihn, M., and Stilman, M. (2014). Using environment objects as tools: Unconventional door opening. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2502–2508. IEEE. * Li et al. Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R. (2015). Gated graph sequence neural networks. In arXiv preprint arXiv:1511.05493. * Liao et al. Liao, Y.-H., Puig, X., Boben, M., Torralba, A., and Fidler, S. (2019). Synthesizing environment-aware activities via activity sketches. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6291–6299. * Liu et al. Liu, Z., Freeman, W. T., Tenenbaum, J. B., and Wu, J. (2018). Physical primitive decomposition. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19. * Lynch et al. Lynch, C., Khansari, M., Xiao, T., Kumar, V., Tompson, J., Levine, S., and Sermanet, P. (2020). Learning latent plans from play. In Conference on Robot Learning, pp. 1113–1132. PMLR. * Mandlekar et al. Mandlekar, A., Zhu, Y., Garg, A., Booher, J., Spero, M., Tung, A., Gao, J., Emmons, J., Gupta, A., Orbay, E., et al. (2018). Roboturk: A crowdsourcing platform for robotic skill learning through imitation. In Conference on Robot Learning, pp. 879–893. PMLR. * Mausam and Kolobov Mausam, and Kolobov, A. (2012). Planning with Markov decision processes: An AI perspective. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1), 1–210. * Migimatsu and Bohg Migimatsu, T., and Bohg, J. (2020). Object-centric task and motion planning in dynamic environments. IEEE Robotics and Automation Letters, 5(2), 844–851. * Mikolov et al. Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., and Joulin, A. (2018). Advances in pre-training distributed word representations. In LREC. * Misra et al. Misra, D. K., Sung, J., Lee, K., and Saxena, A. (2016). Tell me dave: Context-sensitive grounding of natural language to manipulation instructions. IJRR, 35(1-3), 281–300. * Myers et al. Myers, A., Teo, C. L., Fermüller, C., and Aloimonos, Y. (2015). Affordance detection of tool parts from geometric features. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1374–1381. IEEE. * Nair et al. Nair, A., Chen, D., Agrawal, P., Isola, P., Abbeel, P., Malik, J., and Levine, S. (2017). Combining self-supervised learning and imitation for vision-based rope manipulation. 
In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2146–2153. IEEE. * Nair and Chernova Nair, L., and Chernova, S. (2020). Feature guided search for creative problem solving through tool construction. In Frontiers in Robotics and AI, p. 205. Frontiers. * Nair et al. Nair, L., Srikanth, N. S., Erickson, Z. M., and Chernova, S. (2019a). Autonomous tool construction using part shape and attachment prediction.. In Robotics: Science and Systems. * Nair et al. Nair, S., Zhu, Y., Savarese, S., and Fei-Fei, L. (2019b). Causal induction from visual observations for goal directed tasks. In CoRR arXiv:1910.01751. * Nyga and Beetz Nyga, D., and Beetz, M. (2018). Cloud-based probabilistic knowledge services for instruction interpretation. In Robotics Research, pp. 649–664. Springer. * Nyga et al. Nyga, D., Roy, S., Paul, R., Park, D., Pomarlan, M., Beetz, M., and Roy, N. (2018). Grounding robot plans from natural language instructions with incomplete world knowledge. In Conference on Robot Learning, pp. 714–723. * Park et al. Park, D., Noseworthy, M., Paul, R., Roy, S., and Roy, N. (2019). Inferring task goals and constraints using bayesian nonparametric inverse reinforcement learning. In Proceedings of the 3rd Conference on Robot Learning (CoRL). * Puig et al. Puig, X., Ra, K., Boben, M., Li, J., Wang, T., Fidler, S., and Torralba, A. (2018). Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8494–8502. * Sarathy and Scheutz Sarathy, V., and Scheutz, M. (2018). Macgyver problems: Ai challenges for testing resourcefulness and creativity. Advances in Cognitive Systems, 6, 31–44. * Scalise et al. Scalise, R., Li, S., Admoni, H., Rosenthal, S., and Srinivasa, S. S. (2018). Natural language instructions for human–robot collaborative manipulation. The International Journal of Robotics Research, 37(6), 558–565. * Sharma et al. Sharma, S., Gupta, J., Tuli, S., Paul, R., and Mausam (2022a). Goalnet: Inferring conjunctive goal predicates from human plan demonstrations for robot instruction following. In International Conference on Automated Planning and Scheduling (ICAPS) - Planning and Reinforcement Learning Workshop. * Sharma et al. Sharma, V., Arora, D., Geißer, F., Mausam, and Singla, P. (2022b). Symnet 2.0: Effectively handling non-fluents and actions in generalized neural policies for RDDL relational MDPs. In Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI). * Shridhar et al. Shridhar, M., Thomason, J., Gordon, D., Bisk, Y., Han, W., Mottaghi, R., Zettlemoyer, L., and Fox, D. (2020). Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10740–10749. * Silver et al. Silver, T., et al. (2021). Planning with learned object importance in large problem instances using graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 11962–11971. * Speer et al. Speer, R., Chin, J., and Havasi, C. (2017). ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI. * Speer et al. Speer, R., Chin, J., and Havasi, C. (2019). ConceptNet Numberbatch, the best pre-computed word embeddings you can use. In GitHub repository. * Srivastava et al. Srivastava, S., Fang, E., Riano, L., Chitnis, R., Russell, S., and Abbeel, P. (2014). 
Combined task and motion planning through an extensible planner-independent interface layer. In 2014 IEEE international conference on robotics and automation (ICRA), pp. 639–646. IEEE. * Straub et al. Straub, J., Whelan, T., Ma, L., Chen, Y., Wijmans, E., Green, S., Engel, J. J., Mur-Artal, R., Ren, C., Verma, S., et al. (2019). The replica dataset: A digital replica of indoor spaces. In CoRR arXiv:1906.05797. * Toussaint et al. Toussaint, M. A., Allen, K. R., Smith, K. A., and Tenenbaum, J. B. (2018). Differentiable physics and stable modes for tool-use and manipulation planning. In Robotics: Science and Systems Foundation. * Tuli et al. Tuli, S., Bansal, R., Paul, R., and Mausam (2021). Tango: Commonsense generalization in predicting tool interactions for mobile manipulators. In International Joint Conference on Artificial Intelligence (IJCAI). * Vega-Brown and Roy Vega-Brown, W., and Roy, N. (2020). Asymptotically optimal planning under piecewise-analytic constraints. In Algorithmic Foundations of Robotics XII, pp. 528–543. Springer. * Wu et al. Wu, J., Lim, J. J., Zhang, H., Tenenbaum, J. B., and Freeman, W. T. (2016). Physics 101: Learning physical object properties from unlabeled videos. In British Machine Vision Conference. * Wu et al. Wu, J., Yildirim, I., Lim, J. J., Freeman, B., and Tenenbaum, J. (2015). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (Eds.), Advances in Neural Information Processing Systems 28, pp. 127–135. Curran Associates, Inc. * Xie et al. Xie, A., Ebert, F., Levine, S., and Finn, C. (2019). Improvisation through physical understanding: Using novel objects as tools with visual foresight. In Robotics at Robotics Science and Systems (RSS). * Zhu et al. Zhu, Y., Tremblay, J., Birchfield, S., and Zhu, Y. (2021). Hierarchical planning for long-horizon manipulation with geometric and symbolic scene graphs. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6541–6548. IEEE. ## A Supplementary Material All our code is available as a GitHub repository under BSD-2 License https://github.com/reail-iitd/tango. Instructions for reproducing the results are given at https://github.com/reail-iitd/tango/wiki. A supplementary video demonstrating our data collection platform and model’s generalization capability is available at https://www.youtube.com/watch?v=lUWU3rK1Gno. ## B Hyperparameter Details We detail the hyper-parameters for the ToolTango architecture introduced in this paper. * • _Graph Structured World Representation._ The Gated Graph Convolution Network (GGCN) was implemented with $4$-hidden layers, each of size $128$, with convolutions across $2$ time steps for every relation passing through a layer normalized GRU cell. The Parameterized ReLU activation function with a $0.25$ negative input slope was used in all hidden layers. * • _Word Embeddings._ The word embeddings (derived from $\mathrm{ConceptNet}$) were of size $300$. Additionally, the semantic state of each object was encoded as a one-hot vector of size $29$. Typically, there were $35$ and $45$ objects in the home and factory domains respectively. * • _Fusing Metric Information._ The metric encodings were generated from the metric information associated with objects using a $2$-layer Fully Connected Network (FCN) with $128$-sized layers. 
* • _Encoding Action History._ A Long Short Term Memory (LSTM) layer of size $128$ was used to encode the action history using the generalized action encoding $\mathcal{A}(I_{t}(o^{1}_{t},o^{2}_{t}))$.
* • _Goal-conditioned Attention._ The attention network was realized as a $1$-layer FCN of layer size $128$ with a $\mathrm{softmax}$ layer at the end.
* • _Tool Prediction._ To predict the tool-likelihood score $p_{t}$, a $3$-layer FCN was used, with each hidden layer of size $128$ and an output layer of size $1$ with a sigmoid activation function.
* • _Action Prediction._ To predict the action $I_{t}$, a $3$-layer FCN was used, with each hidden layer of size $128$ and an output layer of size $|\mathcal{I}|$. $I_{t}$ was converted to a one-hot encoding $\vec{I}_{t}$. This, together with the object embedding $e_{o}$, was passed to the $o^{1}_{t}$ predictor via an FCN. This FCN consists of $3$ hidden layers of size $128$ and a final layer of size $1$ with a sigmoid activation (for the likelihood). The $\vec{I}_{t}$ and $o^{1}_{t}$ likelihoods were sent to the $o^{2}_{t}$ predictor to predict likelihoods for all object embeddings $e_{o}$. This part was realized as a $3$-layer FCN with hidden layers of size $128$ and a final layer of size $1$ with a sigmoid activation function.
* • _Training parameters._ Model training used a learning rate of $5\times 10^{-4}$. The Adam optimizer (Kingma and Ba, 2014) with a weight decay parameter of $10^{-5}$ and a batch size of $1$ was used. An early stopping criterion on the _action prediction accuracy_, computed on the validation set, was applied for convergence, with training limited to a maximum of $200$ epochs.

## C World Scenes

Figure 14 illustrates the object-centric graph representation of the 10 world scenes we used for the Home and Factory domains. The agent node is shown in green, tools in black, and objects with states in red. The relations of Close are shown in green (only populated for the agent), On in black, Inside in red, and Stuck to in blue. The legend mapping node IDs to objects is given below:

Home: 0: floor, 1: walls, 2: door, 3: fridge, 4: cupboard, 5: husky, 6: table, 7: table2, 8: couch, 9: big-tray, 10: book, 11: paper, 12: paper2, 13: cube gray, 14: cube green, 15: cube red, 16: tray, 17: tray2, 18: light, 19: bottle blue, 20: bottle gray, 21: bottle red, 22: box, 23: apple, 24: orange, 25: banana, 26: chair, 27: ball, 28: stick, 29: dumpster, 30: milk, 31: shelf, 32: glue, 33: tape, 34: stool, 35: mop, 36: sponge, 37: vacuum, 38: dirt.

Factory: 0: floor warehouse, 1: 3D printer, 2: assembly station, 3: blow dryer, 4: board, 5: box, 6: brick, 7: coal, 8: crate green, 9: crate peach, 10: crate red, 11: cupboard, 12: drill, 13: gasoline, 14: generator, 15: glue, 16: hammer, 17: ladder, 18: lift, 19: long shelf, 20: mop, 21: nail, 22: oil, 23: paper, 24: part1, 25: part2, 26: part3, 27: platform, 28: screw, 29: screwdriver, 30: spraypaint, 31: stick, 32: stool, 33: table, 34: tape, 35: toolbox, 36: trolley, 37: wall warehouse, 38: water, 39: welder, 40: wood, 41: wood cutter, 42: worktable, 43: ramp, 44: husky, 45: tray.

(a) A sample home scene. (b) A sample factory scene. Figure 14: Sample World Scenes in Home and Factory domains

## D Realization of Object Relations

Figure 15: Manipulation region of an object

To implement the various relations among objects, PyBullet allows us to define _manipulation regions_ around objects. These regions are virtual regions inside which a robot can manipulate that object. A visual description of this region is shown in Figure 15. We now define two object types.
1. Standalone object: Such an object is defined as a collection of stereolithographic tetrahedrons with visual and collision properties. Each such object has a single manipulation region, denoted as $MR(object)$, which is the region around the object within which the robot is considered to be in the "vicinity" of the object and hence can manipulate it. Examples: cubes, fruits, milk, etc.
2. Containing object: Such an object is a standalone object with an additional containment region, denoted as $CR(object)$, which is a proper subset of the manipulation region; if another object's center lies inside this region, that object is said to be "contained inside" the containing object. Examples: box, cupboard, and fridge.

Every object has a center, denoted by $C(object)$, defined as the arithmetic mean of the centers of its tetrahedrons. We now define the two basic relation constraints; other relations are combinations or variants of these (a minimal code sketch of the two checks follows this list):

1. Inside: An object $A$ is said to be contained inside a containing object $B$ if $C(A)\in CR(B)$.
2. On Top: An object $A$ is said to be on top of another object $B$ if $C(A)_{z}>C(B)_{z}$ and $MR(A)\cap MR(B)\neq\emptyset$, where $C(O)_{z}$ denotes the $z$ component of the center of object $O$.
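As a concrete illustration of these two relation constraints, the following minimal Python sketch checks them for axis-aligned regions. The `Region` and `SceneObject` classes, the helper names, and the example coordinates are illustrative assumptions for this sketch only; the actual simulator uses PyBullet's geometric regions rather than simple boxes.

```python
import numpy as np

# Minimal sketch of the Inside / On Top checks described above.
# Axis-aligned box regions are an assumption of this sketch, not the
# paper's actual PyBullet implementation.

class Region:
    """Axis-aligned box given by its min/max corners."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)

    def contains(self, point):
        return bool(np.all(self.lo <= point) and np.all(point <= self.hi))

    def intersects(self, other):
        return bool(np.all(self.lo <= other.hi) and np.all(other.lo <= self.hi))

class SceneObject:
    def __init__(self, name, center, manip_region, contain_region=None):
        self.name = name
        self.center = np.asarray(center, float)   # C(object)
        self.MR = manip_region                    # manipulation region
        self.CR = contain_region                  # containment region (containing objects only)

def is_inside(a, b):
    """Inside: C(A) lies in CR(B); B must be a containing object."""
    return b.CR is not None and b.CR.contains(a.center)

def is_on_top(a, b):
    """On Top: C(A)_z > C(B)_z and the manipulation regions overlap."""
    return a.center[2] > b.center[2] and a.MR.intersects(b.MR)

# Example: an apple whose center lies inside a box's containment region.
box = SceneObject("box", [0, 0, 0.5],
                  Region([-0.6, -0.6, 0.0], [0.6, 0.6, 1.2]),
                  Region([-0.4, -0.4, 0.1], [0.4, 0.4, 1.0]))
apple = SceneObject("apple", [0.1, 0.0, 0.6],
                    Region([-0.1, -0.2, 0.4], [0.3, 0.2, 0.8]))
print(is_inside(apple, box), is_on_top(apple, box))
```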
Stability of solar atmospheric structures harboring standing slow waves

An analytical model in a compressible plasma

M. Geeraerts and T. Van Doorsselaere

Centre for mathematical Plasma Astrophysics (CmPA), Mathematics Department, KU Leuven, Celestijnenlaan 200B bus 2400, B-3001 Leuven, Belgium

In the context of the solar coronal heating problem, one possible explanation for the high coronal temperature is the release of energy by magnetohydrodynamic (MHD) waves. The energy transfer is believed to be possible, among others, by the development of the Kelvin-Helmholtz instability (KHI) in coronal loops. Our aim is to determine if standing slow waves in solar atmospheric structures such as coronal loops, and also prominence threads, sunspots, and pores, can trigger the KHI due to the oscillating shear flow at the structure's boundary. We used linearized nonstationary MHD to work out an analytical model in a Cartesian reference frame. The model describes a compressible plasma near a discontinuous interface separating two regions of homogeneous plasma, each harboring an oscillating velocity field with a constant amplitude which is parallel to the background magnetic field and aligned with the interface. The obtained analytical results were then used to determine the stability of said interface, both in coronal and photospheric conditions. We find that the stability of the interface is determined by a Mathieu equation. Depending on the parameters of this equation, the interface can either be stable or unstable. For coronal as well as photospheric conditions, we find that the interface is stable with respect to the KHI. Theoretically, it can, however, be unstable with respect to a parametric resonance instability, although it seems physically unlikely. We conclude that, in this simplified setup, a standing slow wave does not trigger the KHI without the involvement of additional physical processes.

§ INTRODUCTION

Although it has been known for a long time that the corona is three orders of magnitude hotter than the photosphere, a definite explanation for this unexpected feature has continued to elude researchers. One possible explanation that has been advanced over the years, and which is supported by both observations and theoretical models, is the deposition of the energy of magnetohydrodynamic (MHD) waves into the corona [Parnell & De Moortel, 2012, Arregui, 2015, Van Doorsselaere et al., 2020]. There are several mechanisms that allow for the conversion of wave energy into heating of the surrounding plasma. Possibilities include dissipation by Ohmic resistivity, values of which were calculated by Kovitya & Cram, 1983 for the photosphere and used by Chen et al., 2018, Geeraerts et al., 2020, and Chen et al., 2020 to infer its importance in damping slow waves in a photospheric pore. Mode coupling [Pascoe et al., 2010, Pascoe et al., 2012, Hollweg et al., 2013, De Moortel et al., 2016] and resonant absorption, both in the Alfvén [Hollweg & Yang, 1988, Hollweg et al., 1990, Goossens et al., 1992, Goossens et al., 2002, Soler et al., 2013] and cusp [Cadez et al., 1997, Erdélyi et al., 2001, Soler et al., 2009, Yu et al., 2017] continua, have also been studied for their potential to damp waves by transferring their energy to local oscillations. Another mechanism for damping waves in atmospheric structures, and one which has received considerable attention recently, is the Kelvin-Helmholtz instability (KHI).
Indeed, the transition of wave energy to turbulence and plasma heating on smaller scales in the coronal plasma is known to be facilitated by this instability [Heyvaerts & Priest, 1983, Ofman et al., 1994, Karpen et al., 1994, Karampelas et al., 2017, Afanasyev et al., 2019, Hillier et al., 2020, Shi et al., 2021]. The KHI has been observed in the solar atmosphere, for example on coronal mass ejection flanks [Ofman & Thompson, 2011, Foullon et al., 2011], and, more recently, on coronal loops [Samanta et al., 2019]. Slow magnetosonic waves were observed in solar coronal loops more than a decade ago, both as propagating waves and standing waves. Upwardly propagating disturbances along coronal loops have been observed, for example, by Berghmans & Clette, 1999, Nightingale et al., 1999, and De Moortel et al., 2000. These have been interpreted as propagating slow modes through a theoretical model by Nakariakov et al., 2000. Other observations of disturbances in coronal loops by the SUMER spectrometer have been analyzed and interpreted by Wang et al., 2002, Wang et al., 2003, Wang et al., 2003, Wang et al., 2007 as standing slow modes, whereas Kumar et al., 2013 and Mandal et al., 2016 reported the observation of a reflecting, also referred to as sloshing, slow wave in a coronal loop. Wang, 2011 provides a review of observations and modeling of standing slow waves in coronal loops. These modes have also been studied through numerical simulations, for example, by De Moortel & Hood, 2003, who found that thermal conduction is an important damping mechanism for propagating slow waves in coronal conditions, and by Mandal et al., 2016, who reported about the frequency dependent damping of propagating slow waves in a coronal loop. Structures in the lower solar atmosphere, such as sunspots and pores in the photosphere, have also been observed to harbor slow magnetosonic waves [Dorotovič et al., 2008, Dorotovič et al., 2014, Morton et al., 2011, Grant et al., 2015, Moreels et al., 2015, Freij et al., 2016], which were then classified as either surface or body waves [Moreels et al., 2013, Keys et al., 2018]. The KHI and its growth rate at the interface between two aligned sheared stationary flows have been known for a long time [Chandrasekhar, 1961]. Although in the purely hydrodynamics (HD) model the instability always develops in the presence of a shear flow, in the MHD model an aligned magnetic field can prevent its triggering. A natural question that then arises in the context of wave heating is under which conditions the KHI would develop in the presence of an oscillating shear flow. Indeed, it is possible that waves propagating or standing in solar coronal structures, such as loops, can trigger the KHI on the boundary and hereby convey some of their energy to the plasma background. Zaqarashvili et al., 2015, for example, studied the KHI in rotating and twisted magnetized jets in the solar atmosphere in the presence of a stationary shear flow and used their derivations to discuss the stability of standing kink and torsional Alfvén waves at the velocity antinodes. They found that the standing waves are always unstable whereas the propagating waves are stable. Transverse oscillations are known to be unstable to the KHI in coronal loops and numerical studies on this topic include Terradas et al., 2008, Antolin et al., 2014, Antolin et al., 2015, Antolin et al., 2017, Magyar et al., 2015, Guo et al., 2019, Karampelas et al., 2019 and Pascoe et al., 2020. 
There have also been several analytical studies regarding the stability of the interface between oscillating sheared flows. Kelly, 1965 looked at the HD case of two parallel sheared flows aligned with the interface, whereas Roberts, 1973 studied the same setup in the MHD model. More recently, Barbulescu et al., 2019 and Hillier et al., 2019 investigated the stability of transverse oscillations in coronal loops by modeling them locally at the loop boundary as a cartesian interface between sheared background flows, with the background velocity perpendicular to the background magnetic field. Each of these studies revealed that the interface between oscillating sheared flows is always unstable, in contrast to the constant sheared flows case. All of these studies relying on the simplifying assumption that the fluid is incompressible, it is worth asking whether their conclusions remain unchanged when compression is included. The goal of this paper is therefore twofold. Firstly, it aims at extending the known incompressible model of Roberts, 1973 for a plasma with an oscillating velocity field aligned with both the magnetic field and the interface to a compressible version. The focus lies on finding expressions for the eigenfunctions and, in particular, to derive their evolution over time. The interest in doing this lies in identifying the shortcomings made by the approximation of the incompressible model, at the cost of considerably more involved analytical derivations. This is the subject of Sections 2 and 3. Secondly, it aims at expressing more general instability conditions as compared to the incompressible model, by taking into account the subtleties arising due to the inclusion of compression. In particular, it will allow us to assess the stability of certain solar atmospheric structures harboring a standing slow wave. We will do so by using this model as a local approximation of the structure's boundary, at the position where the velocity shear is the greatest (for instance in a cusp resonance). We will also compare our findings for the local stability of slow waves to those of Barbulescu et al., 2019 and Hillier et al., 2019 for the local stability of fast kink waves. This is the subject of Section 4. § MODEL We derived an analytical model for a compressible plasma at the boundary of solar atmospheric structures which can, in a first rough approximation, be modeled as a cylinder with a discontinuous boundary separating two regions of homogeneous plasma with different properties. Such structures include coronal loops, prominence threads, sunspots and photospheric pores. The physical setup here is the same as in Roberts, 1973, except we included compression. We point out that, in a more realistic setup, a smooth transition layer would have to be included at the interface. This would result in the possibility of resonance occuring between the main oscillation of the structure and local slow mode oscillations in the boundary layer, as studied theoretically by Yu et al., 2017. The analytical derivations of Goossens et al., 2021 show that, when slow waves are resonantly absorbed in the cusp continuum, both the azimuthal component of vorticity and the parallel component of the plasma displacement are large. The huge amount of vorticity could indicate the possibility of the KHI developing in those conditions. However, the sharp spatial variation of the parallel displacement and the truly discontinuous interface separating two plasma regions are absent from the present model. 
This should be kept in mind when drawing conclusions regarding shear flows in resonances. In order to be able to make progress in the analytical derivations, the model uses a Cartesian coordinate system $(x, y, z)$, where $x=0$ is the interface and the $z$-direction is the longitudinal direction (i.e., along the cylinder's axis). The region $x<0$ represents the interior of the structure, whereas the region $x>0$ represents the surrounding plasma. This is a model for the local stability at the boundary, in the region of the structure where the shear in longitudinal velocity is the greatest, that is to say, at an antinode of the longitudinal component of the velocity eigenfunction. For the longitudinally fundamental slow mode, this would be at the middle of the structure (see Figure <ref>), whereas for the first longitudinal overtone, for example, this would be at a quarter of the structure's length measured from either end. The time variable is denoted by $t$.

Figure: Sketch of a longitudinal cut along the axis of a coronal loop harboring a fundamental standing slow body sausage mode, at the velocity's maximal amplitude (left) and the local Cartesian model at the boundary (right). The arrows on the left represent the velocity field and their lengths are to scale with the relative local magnitude of the field for a slow body sausage mode at a given time. The lengths and directions of the magnetic field arrows on the right figure are consistent for that same slow mode, whereas for the velocity arrows only the direction is consistent (the length of the exterior velocity arrow having been increased for visual clarification).

After linearizing the ideal MHD equations, the equilibrium quantities are denoted by the subscript $0$ and the perturbed quantities are denoted by the subscript $1$. The regions on each side of the interface are two homogeneous but different plasmas. This means each region has its own values for the background quantities, which are assumed spatially constant. The background magnetic field is assumed to be a straight and constant axial field along the $z$-coordinate: $\pmb{B}_0 = B_{0z} \pmb{1}_z$. Furthermore, we assume that the background flow is oscillating with a certain frequency $\omega_0$, which represents the frequency of the standing slow wave: $\pmb{v}_0 = V_0 \cos \left( \omega_0 t \right) \pmb{1}_z$. It would, of course, be more accurate to also include a background oscillation for the other quantities, such as magnetic field, pressure, and density. In the context of solar atmospheric structures, the magnetic field oscillations that occur because of this external forcing in the background can, however, be neglected in a first approximation with respect to the strong longitudinal magnetic field. As for the density and pressure background oscillations, they can be neglected in this model because they are in antiphase with respect to the velocity in their longitudinal profile and thus have a node where the longitudinal component of velocity has an antinode. For slow modes, the longitudinal component of the velocity is typically much larger than its other components (see left of Figure <ref>), which can thus be neglected at the former's antinode as well. The perturbed density, thermal pressure, velocity and magnetic field are denoted with $\rho_1$, $p_1$, $\pmb{v}_1$ and $\pmb{B}_1$, respectively. In what follows the perturbed Eulerian total pressure will be used as well, which is given by $P_1 = p_1 + \frac{\pmb{B}_0 \cdot \pmb{B}_1}{\mu_0}$.
Although gravity certainly has a role in solar atmospheric wave dynamics, it is neglected in this model in order to make some analytical progress. The linearized compressible ideal MHD equations can, under these assumptions, be written as follows:
\begin{align}
&\d\frac{D \rho_1}{D t} +\rho_0 \l( \nabla \cdot \pmb{v}_1 \r) = 0\text{,} \label{eq1}\\
& \rho_1 \frac{\partial \pmb{v}_0}{\partial t} + \rho_0 \frac{D \pmb{v}_1}{D t} = -\nabla P_1 + \frac{1}{\mu_0} \left( \pmb{B}_0 \cdot \nabla \right) \pmb{B}_1 \text{,} \label{eq2}\\
&\frac{D \pmb{B}_1}{D t} = - \pmb{B}_0 \left( \nabla \cdot \pmb{v}_1 \right) + \left( \pmb{B}_0 \cdot \nabla \right) \pmb{v}_1 \text{,} \label{eq3}\\
&\frac{D p_1}{D t} + \rho_0 v_{\text{s}}^2 \l( \nabla \cdot \pmb{v}_1 \r) = 0\text{,} \label{eq4}
\end{align}
where $\frac{D f}{Dt} = \frac{\partial f}{\partial t} + \left(\pmb{v}_0 \cdot \nabla\right) f$ is the Lagrangian derivative of a quantity $f$, $v_{\text{s}} = \sqrt{\frac{\gamma p_0}{\rho_0}}$ is the speed of sound and $\mu_0$ the magnetic permeability of free space. The first equation, Eq. (<ref>), has this simpler form because $\nabla \rho_0 = 0$ in each region (i.e., both inside and outside the cylinder). Since the background quantities depend only on $x$ and $t$, the perturbed quantities can be Fourier-analyzed in the $y$ and $z$ coordinates and are thus assumed to have the following form: $f_1 = \overline{f}_1(x,t) \exp \l\{ i \l( k_y y + k_z z \r) \r\}$, for each of the perturbed quantities $p_1$, $\rho_1$, $v_{1x}$, $v_{1 y}$, $v_{1z}$, $B_{1x}$, $B_{1 y}$ and $B_{1z}$.

§ EXPRESSIONS FOR THE PERTURBED QUANTITIES

In this section, we derive the governing equations for the evolution of linear MHD perturbations in a compressible plasma for the model described in the previous section. In the next section, we then try to use the obtained information to describe the stability of the interface in compressible plasma conditions.

§.§ The central quantity $\nabla \cdot \pmb{\xi}$

In what follows, an expression is derived for the compression term $\nabla \cdot \pmb{\xi}$. This term is central to finding expressions for all other physical quantities, in the sense that they can all be derived solely from it. By using the Lagrangian displacement $\pmb{\xi}$ defined by $\frac{D \pmb{\xi}}{D t} = \pmb{v}_1$, it can be shown (see Appendix <ref>) that the compression term $\nabla \cdot \pmb{\xi}$ must satisfy the following partial differential equation:
\begin{align}
&\d\frac{D^4 \l( \nabla \cdot \pmb{\xi} \r)}{Dt^4} \; + \; i k_z \o_0 V_0 \frac{D^2 \l( \sin \l(\o_0 t \r) \l( \nabla \cdot \pmb{\xi} \r) \r)}{Dt^2} \notag\\
& \qquad \; - \; \l( v_A^2 + v_s^2 \r) \frac{D^2}{Dt^2} \l( \frac{\pa^2 \l( \nabla \cdot \pmb{\xi} \r)}{\pa x^2} \r) \; + \; k^2 \l( v_A^2 + v_s^2 \r) \frac{D^2 \l( \nabla \cdot \pmb{\xi} \r)}{Dt^2} \notag\\
& \qquad \; - \; i k_z V_0 \o_0 \sin \l( \o_0 t \r) v_A^2 \frac{\pa^2 \l( \nabla \cdot \pmb{\xi} \r)}{\pa x^2} \notag\\
& \qquad \; + \; i k_z k^2 V_0 \o_0 \sin \l(\o_0 t \r) v_A^2 \l( \nabla \cdot \pmb{\xi} \r) \; - \; k_z^2 v_A^2 v_s^2 \frac{\pa^2 \l( \nabla \cdot \pmb{\xi} \r)}{\pa x^2} \notag \\
& \qquad \; + \; k_z^2 k^2 v_A^2 v_s^2 \l( \nabla \cdot \pmb{\xi} \r) = 0 \text{,} \label{Eqdivxi}
\end{align}
where $k = \sqrt{k_y^2 + k_z^2}$ and $v_A = B_{0z} / \sqrt{\mu_0 \rho_0}$ is the Alfv\'en speed. This partial differential equation is of fourth order in $t$ and second order in $x$. Note that if $V_0 = 0$, such that the time dependence of the background quantities disappears, we retrieve an ordinary differential equation (ODE) of second order in $x$, which is the governing equation for the compression term $\nabla \cdot \pmb{\xi}$ in a plasma without a background flow.
This is the equation used for example in Edwin & Roberts, 1983 to study wave modes in a magnetic cylinder in the framework of ideal MHD in a plasma without background flow. By drawing inspiration from the derivations of Barbulescu et al., 2019, one finds that Eq. (<ref>) can take a simpler form if $\nabla \cdot \pmb{\xi}$ is written as
\begin{equation}\label{f-g}
\nabla \cdot \pmb{\xi} \; = \; f(x,t)\, g(t) \exp \l\{ i \l( k_y y + k_z z \r) \r\} \text{,}
\end{equation}
where $g(t) = \exp \l\{ -i k_z V_0 \sin \l( \o_0 t \r)/\o_0 \r\}$. Since $g(t)$ has modulus equal to $1$ for all $t \in \mathbb{R}$, the stability of $\nabla \cdot \pmb{\xi}$ over time is entirely determined by $f$. Inserting expression (<ref>) into Eq. (<ref>) simplifies this equation considerably, now taking the following form:
\begin{align}
& {\frac {\partial ^{4} f(x,t)}{\partial {t}^{4}}} \; - \; \left( {v_{A}}^{2}+{v_{s}}^{2} \right) {\frac {\partial ^{4} f(x,t)}{ \partial {x}^{2}\partial {t}^{2}}} \notag\\
& \qquad \; + \; \left[ i \omega_{0} k_{z} V_{0} \sin \left( \omega_{0} t \right) + k^2 \left( {v_{A}}^{2}+{v_{s}}^{2} \right) \right] {\frac {\partial ^{2} f(x,t)}{\partial {t}^{2}}} \notag \\
& \qquad \; - \; \left[ i \omega_{0} k_{z} V_{0} {v_{A}}^{2} \sin \left( \omega_{0} t \right)+{k_{z}}^{2}{v_{A}}^{2}{v_{s}}^{2} \right] {\frac {\partial ^{2} f(x,t)}{\partial {x}^{2}}} \notag \\
& \qquad \; + \; 2 i \omega_{0}^2 k_{z} V_{0} \cos \left( \omega_{0} t \right) {\frac {\partial f(x,t)}{\partial t}} \notag \\
& \qquad \; + \; \l[ i \omega_0 k_z V_0 \sin \l( \omega_0 t \r) \l( k^2 v_A^2 - \o_0^2 \r) \r. \notag\\
& \l. \hspace{4.5cm} + k^2 {k_{z}}^{2}{v_{A}}^{2}{v_{s}}^2 \r] f(x,t) = 0 \text{.} \label{Eqf(x,t)}
\end{align}
Equation (<ref>) has a solution in the form of $F(x)G(t)$, where $F$ and $G$ satisfy the following ODEs (with $m$ a constant):
\begin{equation} \label{EqF}
\d\frac{d^2 F(x)}{dx^2} \; + \; m^2 F(x) \; = \; 0
\end{equation}
\begin{align}
&\d\frac{d^4 G(\t)}{d \t^4} \; + \; \l( a_1 + q_1 \sin \l( \t \r) \r) \frac{d^2 G(\t)}{d \t^2} \; + \; q_3 \cos \l( \t \r) \frac{d G(\t)}{d \t} \notag \\
& \hspace{4.0cm} + \; \l( a_2 + q_2 \sin \l( \t \r) \r) G(\t) \; = \; 0 \text{,} \label{EqG}
\end{align}
with $\t = \o_0 t$, and where
\begin{align*}
&\qquad a_1 = \frac{\l(v_A^2 + v_s^2 \r) \l(m^2 + k^2 \r)}{\o_0^2} \text{, } \;\; q_1 = \frac{i k_z V_0}{\o_0} \text{, } \\
&\qquad q_3 = \d\frac{2 i k_z V_0}{\o_0} \text{,} \\
&\qquad a_2 = \frac{k_z^2 v_A^2 v_s^2 \l( m^2 + k^2 \r)}{\o_0^4} \text{, } \;\;\;\;\; q_2 = \frac{i k_z V_0 \l[ \l( m^2 + k^2 \r) v_A^2 - \o_0^2 \r] }{\o_0^3} \text{.}
\end{align*}
These two equations are related by the constant $m$, which occurs in both of them.

§.§.§ The spatial function $F$

Equation (<ref>) has a simple analytical solution, namely
\begin{equation}\label{SolF}
F(x) \; = \; C_1 \exp \l( i m x \r) \; + \; C_2 \exp \l( -i m x \r) \text{,}
\end{equation}
with $C_1$ and $C_2$ arbitrary constants. From this it can be inferred that $m$ plays the role of the wavenumber along $x$, which is the direction normal to the interface. The focus of this paper is on standing slow waves in solar atmospheric structures that can be modeled as a straight cylinder with a circular base and a discontinuous boundary separating two homogeneous but different plasma regions. Therefore, each region has its own value for $m$ (namely $m_i$ for the interior and $m_e$ for the exterior), each being able to only take on specific values depending on the boundary conditions at the interface. Finding expressions for $m_i$ and $m_e$ is, however, not straightforward. This will be discussed in Section <ref>.
We note that for the purpose of studying the local stability of the interface, $m$ will be taken purely imaginary in order to have evanescent spatial profiles in the normal direction.

§.§.§ The temporal function $G$

Equation (<ref>) does not have a simple analytical solution. This equation governs the stability of the compression term $\nabla \cdot \pmb{\xi}$ over time, its parameters depending on the background quantities $v_A$, $v_s$, $V_0$ and $\o_0$. The function $h(t) = G(t) g(t)$, where $G$ is the solution to equation \eqref{EqG}, actually describes the general time evolution of $\nabla \cdot \pmb{\xi}$ in a compressible homogeneous plasma of infinite extent with an oscillating background velocity which is parallel to the straight and constant equilibrium magnetic field. Although there is no closed-form analytical solution, the boundedness of $\nabla \cdot \pmb{\xi}$ over time can be derived from the properties of Eq. \eqref{EqG}. Indeed, this linear fourth-order ODE has periodic coefficients with the same period and hence it falls in the category of equations which obey Floquet theory [Chicone, 2008]. Defining $\tau = \o_0 t$, it can be rewritten as a $4 \times 4$ system of first-order ODEs of the form $\pmb{x}'(\tau) = A(\tau) \pmb{x}(\tau)$, where the coefficient matrix $A(\tau)$ is periodic with a period of $2 \pi$. One can find four linearly independent fundamental solution vectors $\pmb{x}_1(\tau)$, $\pmb{x}_2(\tau)$, $\pmb{x}_3(\tau)$ and $\pmb{x}_4(\tau)$, which depend on their respective initial conditions. The matrix obtained by putting these four vectors into its columns is called a fundamental solution matrix. If we take as initial condition matrix the identity matrix, Floquet theory states that the corresponding fundamental solution matrix (which we denote by $X(\tau)$) evaluated at one period (i.e., at $\tau = 2 \pi$ in our case) is intimately linked to the boundedness of the solutions of the ODE. Indeed, denoting the eigenvalues of $X(2\pi)$ by $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$, Floquet's theorem states that for each distinct eigenvalue $\lambda$, if we write $\lambda = e^{2 \pi \mu}$ where $\mu \in \mathbb{C}$ is called the Floquet exponent, there exists an independent solution of the system which has the form
\begin{equation} \label{FloquetSol}
\pmb{x}(\t) = e^{\mu \t} \pmb{\Phi}(\t) \text{,}
\end{equation}
where $\pmb{\Phi}$ contains $2 \pi$-periodic functions. This means that a solution to Eq. \eqref{EqG} is unstable if and only if $\abs{\lambda} > 1$ for at least one eigenvalue $\lambda$ of $X(2 \pi)$. If there are less than four distinct eigenvalues of $X(2 \pi)$, it is possible there are less than four independent Floquet solutions of the form \eqref{FloquetSol}. If the eigenspace of $X(2 \pi)$ has dimension $4$, there are still four independent eigenvectors $\pmb{\Phi}_i$ and thus four independent Floquet solutions. If the eigenspace of $X(2 \pi)$ has dimension less than $4$, then there are less than four independent Floquet solutions. In that case, extra independent solutions have to be found by using the Jordan normal form of $X(2 \pi)$ (see e.g., Cesari, 1963 for more details). These extra independent Jordan solutions are always unstable. The stability of $\nabla \cdot \pmb{\xi}$ is thus entirely determined by the Floquet exponents $\mu$. A solution to Eq. (<ref>) can be written in the form of a series. Knowing that each independent Floquet solution has the form $e^{\mu \t} \Phi(\t)$ with $\Phi$ a $2 \pi$-periodic function, one can assume that such a basic independent solution can be written as
\begin{equation}\label{Gseries}
G(\t) \; = \; e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \varphi_j \; e^{i j \t} \text{,}
\end{equation}
where the $\varphi_j$ are unknown coefficients. A short numerical sketch of the monodromy computation just described is given below.
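The following Python sketch illustrates the Floquet procedure described above: the fourth-order ODE for $G$ is rewritten as a $4\times 4$ first-order system, the fundamental matrix is integrated over one period, and the eigenvalues (Floquet multipliers) of the monodromy matrix are inspected. All numerical values (speeds, flow amplitude, wavenumbers) are illustrative assumptions, and the hand-written RK4 integrator is a simple stand-in for any ODE solver.

```python
import numpy as np

# Sketch of the monodromy computation for Eq. (EqG); parameter values are
# illustrative assumptions only, not values used in the paper.
v_A, v_s = 1.0, 0.5           # Alfven and sound speeds (normalized)
V_0, omega0 = 0.05, 1.0       # background flow amplitude and frequency
k_y, k_z, m = 0.1, 0.2, 0.3j  # wavenumbers; imaginary m gives an evanescent (surface) profile
K2 = k_y**2 + k_z**2 + m**2   # K^2 = k^2 + m^2

a1 = (v_A**2 + v_s**2) * K2 / omega0**2
q1 = 1j * k_z * V_0 / omega0
q3 = 2j * k_z * V_0 / omega0
a2 = k_z**2 * v_A**2 * v_s**2 * K2 / omega0**4
q2 = 1j * k_z * V_0 * (K2 * v_A**2 - omega0**2) / omega0**3

def A(tau):
    """Companion matrix of the fourth-order ODE for y = (G, G', G'', G''')."""
    M = np.zeros((4, 4), dtype=complex)
    M[0, 1] = M[1, 2] = M[2, 3] = 1.0
    M[3, 0] = -(a2 + q2 * np.sin(tau))
    M[3, 1] = -q3 * np.cos(tau)
    M[3, 2] = -(a1 + q1 * np.sin(tau))
    return M

def monodromy(n_steps=20000):
    """Integrate X' = A(tau) X with X(0) = I over one period 2*pi (classical RK4)."""
    X = np.eye(4, dtype=complex)
    h = 2.0 * np.pi / n_steps
    for i in range(n_steps):
        t = i * h
        s1 = A(t) @ X
        s2 = A(t + 0.5 * h) @ (X + 0.5 * h * s1)
        s3 = A(t + 0.5 * h) @ (X + 0.5 * h * s2)
        s4 = A(t + h) @ (X + h * s3)
        X = X + (h / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)
    return X

multipliers = np.linalg.eigvals(monodromy())
print("Floquet multipliers:", multipliers)
print("unstable" if np.any(np.abs(multipliers) > 1.0 + 1e-6) else "stable")
```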
Writing $\mu=i \nu$ for convenience, the following recurence relation between the coefficients $\varphi_j$ can then be derived by inserting expression (<ref>) into Eq. (<ref>): \begin{equation} \label{recursion} - \varepsilon_j \; \varphi_{j-1} \;\; + \;\; \varphi_j \;\; + \;\; \varepsilon_j \; \varphi_{j+1} \;\; = \;\; 0\text{,} \end{equation} \begin{equation} \label{epsj} \varepsilon_j \; = \; \d\frac{\frac{1}{2} k_z V_0 \o_0 \l[ \l( j + \nu \r)^2 \o_0^2 - K^2 v_A^2 \r]}{\l( j + \nu \r)^4 \o_0^4 - \l( j + \nu \r)^2 \o_0^2 K^2 \l( v_A^2 + v_s^2 \r) + k_z^2 K^2 v_A^2 v_s^2} \end{equation} with $K = \sqrt{k^2 + m^2}$. Equations (<ref>) are nontrivially satisfied if and only if the following infinite determinant vanishes: \begin{equation} \label{Delta} \Delta = \begin{vmatrix} \ddots & \ddots & & & 0\\ \ddots & 1 & \varepsilon_{-1} & & \\ & -\varepsilon_0 & 1 & \varepsilon_0 & \\ & & -\varepsilon_1 & 1 & \ddots \\ 0 & & & \ddots & \ddots \end{vmatrix} \text{.} \end{equation} This is a Fredholm determinant. Denoting with $A$ the operator defined by the corresponding infinite matrix, this Fredholm determinant is well-defined if the operator $A-I$ (where $I$ is the identity operator) is a trace class operator on $\ell^2(\mathbb{Z})$, the Hilbert space of square-summable sequences of complex numbers with entire index. It can be shown that this is the case here (see for example Sträng, 2005 for the method to follow), except if the denominator of one of the $\varepsilon_j$ vanishes. This happens if the perturbation, which is a normal mode with wave vector $\pmb{K} = (m, k_y, k_z)$, satisfies the equation \begin{equation} \label{res} (j+\nu)^2 \o_0^2 \; = \; \d\frac{K^2 \l(v_A^2 + v_s^2 \r)}{2} \l\{ 1 \pm \l[ 1 - \d\frac{4 k_z^2 v_A^2 v_s^2}{K^2 \l( v_A^2 + v_s^2 \r)^2} \r]^{1/2} \r\} \text{,} \end{equation} for a $j \in \mathbb{Z}$. This represents a resonance between the background oscillator with frequency $\o_0$ and a magnetosonic wave, in a homogeneous plasma of infinite extent. In this case $m$ is a free parameter (like $k_y$ and $k_z$) and can potentially take any real value. In the model we describe in this paper, with an interface separating two such homogeneous plasmas, only certain specific values of $m$ (different in both regions) are physically possible. Because surface waves on the interface are actually of interest in this case, $m$ should be imaginary. In Section <ref> we study the resonance of these surface waves with the background oscillator. Considering $\Delta$ as a function of the two unknowns $\nu$ and $m$, it is thus well-defined for every $(\nu,m)$ except in the poles of the $\varepsilon_j$. It can also easily be checked with Eqs. (<ref>) and (<ref>) that $\Delta(\nu, m) = \Delta(\nu +1, m)$ and $\Delta(\nu, m) = \Delta(- \nu, m)$. As previously stated, there are in general four independent Floquet solutions of the form (<ref>) to Eq. (<ref>). Hence, all solutions for $\nu$ of $\Delta(\nu,m) = 0$ (as functions of $m$) which correspond to distinct solutions of the differential equation lie on the strip $-0.5 < \text{Re}\l[ \nu \r] \leq 0.5$. These distinct solutions relate as follows: $\nu_2 = -\nu_1$ and $\nu_4 = -\nu_3$. The four independent Floquet solutions in the most general case are thus of the form $e^{\mu_1 \t} \Phi_1(\t)$, $e^{-\mu_1 \t} \Phi_2(\t)$, $e^{\mu_3 \t} \Phi_3(\t)$ and $e^{-\mu_3 \t} \Phi_4(\t)$, where $\Phi_1, \Phi_2, \Phi_3$ and $\Phi_4$ are $2 \pi$-periodic functions. 
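For illustration, the resonance condition above can be evaluated directly: the denominator of $\varepsilon_j$ vanishes when $(j+\nu)\o_0$ equals one of the two magnetosonic frequencies of the homogeneous plasma. The short Python sketch below computes these two branches for assumed, purely illustrative values of $K$, $k_z$, $v_A$ and $v_s$.

```python
import numpy as np

# Sketch: the two magnetosonic branches at which the denominator of
# epsilon_j vanishes.  All numerical values are illustrative assumptions.
def resonance_frequencies(K, k_z, v_A, v_s):
    """Return the fast (+) and slow (-) branch values of ((j + nu) * omega_0)**2."""
    mean = K**2 * (v_A**2 + v_s**2) / 2.0
    disc = np.sqrt(1.0 - 4.0 * k_z**2 * v_A**2 * v_s**2
                   / (K**2 * (v_A**2 + v_s**2) ** 2))
    return mean * (1.0 + disc), mean * (1.0 - disc)

fast2, slow2 = resonance_frequencies(K=0.5, k_z=0.3, v_A=1.0, v_s=0.5)
print(np.sqrt(fast2), np.sqrt(slow2))  # (j + nu) * omega_0 on resonance
```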
Recalling the earlier discussion in this section, we note that it is possible that there are less than four independent Floquet solutions if at least two eigenvalues $\lambda_j = e^{2 \pi i \nu_j}$ of $X(2 \pi)$ are equal. From the properties of $\Delta$, we can see that this happens in two situations. One possibility is if we have $\text{Re}[\nu_j] = n/2$ for some $n \in \mathbb{Z}$ and $\text{Im}[\nu_j] = 0$, for one of the solutions $\nu_j$. Indeed, the eigenvalues $e^{2 \pi i \nu_j}$ and $e^{-2 \pi i \nu_j}$ are equal in that case. Another possibility is if, for two solutions $\nu_j$ and $\nu_l$, we have $\text{Re}[\nu_j] = \text{Re}[\nu_l] + n$ for some $n \in \mathbb{Z}$ and $\text{Im}[\nu_j] = \text{Im}[\nu_l]$. In this case $e^{2 \pi i \nu_j}$ and $e^{2 \pi i \nu_l}$ will also be equal. We find the following formula for $\Delta$, derived in Appendix <ref>:
\begin{equation} \label{DeltaFormula}
\Delta \; = \; 1 + \d\sum_{n=1}^{\infty} \l[ \d\sum_{j_1 = - \infty}^{\infty} \d\sum_{j_2 = j_1 + 2}^{\infty} \ldots \d\sum_{j_n = j_{n-1} + 2}^{\infty} \l( \d\prod_{l=1}^n \varepsilon_{j_l} \varepsilon_{j_l +1} \r) \r] \text{.}
\end{equation}
It is clear from Eq. (<ref>) that $\varepsilon_j \approx \frac{k_z V_0}{2 j^2 \o_0}$ as $\abs{j} \to \infty$. We can then try to use the fact that $\varepsilon_j$ drops off as $1/j^2$ to approximate the cumbersome formula (<ref>). If we can assume that $\abs{\text{Im}[\nu]} \ll \abs{\text{Re}[\nu]}$, then since $-0.5 < \text{Re}\l[ \nu \r] \leq 0.5$ the quantity $\nu$ becomes negligible with respect to $j$ already for quite small $\abs{j}$ in that case. In Section <ref>, we see that this is a good assumption in both coronal and photospheric conditions, if the Alfvén Mach number $M_A = (V_{0i}-V_{0e})/v_{Ai}$ is small enough. One could then try considering, as a first approximation for $\Delta$, only the first few terms on the right-hand side of Eq. (<ref>). Taking $n=1$ and $j_1 \in \{-1,0,1\}$, this would yield:
\begin{equation} \label{DeltaApprox}
\Delta \;\; \approx \;\; 1 \;\; + \;\; \varepsilon_{-1} \; \varepsilon_0 \;\; + \;\; \varepsilon_0 \; \varepsilon_1 \text{.}
\end{equation}
Equation (<ref>) corresponds to Eq. (<ref>) with the infinite determinant in the right-hand side truncated to a $3 \times 3$ determinant centered on the row with $\varepsilon_0$. If one starts from the incompressible versions of Eqs. (<ref>)-(<ref>), one can derive that $m=ik$ in an incompressible plasma. The approximation of truncating the series in Eq. (<ref>) is only valid for large enough $\abs{j}$ and away from the poles of the $\varepsilon_j$. Under the assumption that $\abs{K} v_A \ll \o_0$ and $\abs{K} v_s \ll \o_0$, this condition is fulfilled. We note that in the solar atmosphere, $v_A$ and $v_s$ are roughly of the same order of magnitude. Since $K^2 = k^2 - \abs{m}^2$ for surface waves, we have $K \to 0$ in the incompressible limit. Eq. (<ref>) thus gives us an approximation for $\Delta$ at least in a weakly compressible plasma, but could maybe even be correct more generally. Analytical solutions for $m_i$ and $m_e$ as functions of $\nu$ can then be derived from Eq. (<ref>). The obtained expressions for $m_i(\nu)$ and $m_e(\nu)$ are very complicated but, introducing numerical values relevant for the physical conditions of the solar structure being considered, we can use them together with a third relation involving $\nu$, $m_i$ and $m_e$.
This other relation can theoretically be the one derived in the next subsection, although in practice one of those we derive in Section <ref> under approximating circumstances will probably be preferred. We note that $\nu$ is the same on both sides of the interface, as will be explained in the next section. In contrast to this, $m$ is in general not identical on both sides. It can also be seen that $\Delta(\nu, m) = \Delta(\nu, -m)$. Hence, if $m$ is a solution then so is $-m$. This is reflected in the form of the spatial function $F$ in Eq. (<ref>). §.§ Other perturbed physical quantities With $h(t) = G(t) g(t)$, the compression term can be written as $\c = F(x) h(t) \exp \{ i (k_y y + k_z z) \}$. From this and Eqs. (<ref>)-(<ref>), the following expressions for the perturbed quantities can then easily be derived: \begin{align} \xi_x &= - \d\frac{i m}{k^2 + m^2} \; \tilde{F}(x)\; h_1(t) \; \e \text{,} \label{xix}\\ \xi_y &= - \d\frac{i k_y}{k^2 + m^2} \; F(x)\; h_2(t) \; \e \text{,} \label{xiy}\\ \xi_z &= - \d\frac{i k_z}{k^2 + m^2} \; F(x)\; h_3(t) \; \e \text{,} \label{xiz}\\ B_{1x} &= B_{0z} \; \d\frac{k_z m}{k^2 + m^2} \; \tilde{F}(x) \; h_1(t) \; \e \text{,} \\ B_{1y} &= B_{0z} \; \d\frac{k_z k_y}{k^2 + m^2} \; F(x) \; h_2(t) \; \e \text{,} \\ B_{1z} &= B_{0z} \; F(x) \l( \d\frac{k_z^2}{k^2 + m^2} \; h_3(t) \; - \; h(t) \r) \e \text{,}\\ p_1 &= -\rho_0 \; v_s^2 \; F(x) \; h(t) \; \e \text{,} \\ \rho_1 &= -\rho_0 \; F(x) \; h(t) \; \e \text{,} \\ P_1 &= \rho_0 \; \l( \d\frac{k_z^2 v_A^2}{k^2 + m^2} h_3(t) \; - \; \l( v_A^2 + v_s^2 \r) h(t) \r) \notag\\ & \hspace{4cm} F(x) \; \e \text{,} \label{P1} \end{align} with $\tilde{F}(x) = C_1 e^{imx} - C_2 e^{-imx}$ and where $h_1$, $h_2$, $h_3$ are still unknown but have to satisfy \begin{equation}\label{h1h2h3h} \d\frac{m^2}{k^2 + m^2} \; h_1(t) \; + \; \d\frac{k_y^2}{k^2 + m^2} \; h_2(t) \; + \; \d\frac{k_z^2}{k^2 + m^2} \; h_3(t) \; = \; h(t) \text{,} \end{equation} by the definition $\c = \partial \xi_x / \partial x + \partial \xi_y / \partial y + \partial \xi_z / \partial z$. Each of the expressions (<ref>)-(<ref>) is different for each region on both sides of the interface. The functions $\tilde{F}$, $F$, $h$, $h_1$, $h_2$ and $h_3$ in particular will have a different version in both regions. For the spatial functions $\tilde{F}$ and $F$, only one of the two coefficients $C_1$ and $C_2$ must be retained in each region. We make the choice that $C_1 = 0$ in the $x<0$ region, and $C_2 = 0$ in the $x>0$ region. Writing $h_1(t) = G_1(t) g(t)$, $h_2(t) = G_2(t) g(t)$ and $h_3(t) = G_3(t) g(t)$, the following expressions for $G_1$, $G_2$ and $G_3$ can be derived from Eqs. (<ref>)-(<ref>), Eqs. (<ref>)-(<ref>) and Eq. (<ref>) (see Appendix <ref>): \begin{align} G_1(\t) \; &= \; G_2(\t) \notag \\ &= \; K^2 v_s^2 \; e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \d\frac{1}{\l(j + \nu \r)^2 \o_0^2 - K^2 v_A^2} \; \varphi_j \; e^{i j \t} \text{,} \\ G_3(\t) \; &= \; \d\frac{K^2}{k_z^2} \; e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \l(1 - \frac{\l( k_y^2+ m^2 \r) v_s^2}{\l(j + \nu \r)^2 \o_0^2 - K^2 v_A^2} \r) \; \varphi_j \; e^{i j \t} \text{.} \end{align} It can also be checked that Eq. (<ref>) is indeed fulfilled with these expressions. 
Now, the following two boundary conditions have to be fulfilled at the interface between the two regions (i.e., at $x=0$) for physical reasons: \begin{align} [ \xi_x ] = 0 \text{,} \label{BCxi}\\ [ P_1 ] = 0 \text{,} \label{BCP1} \end{align} where $[f] = \lim_{x \downarrow 0} f(x) - \lim_{x \uparrow 0} f(x)$ denotes the jump in a quantity $f$ across the interface. We note that, similarly as for the frequency of a normal mode in the case without background flow, $\nu$ is the same inside and outside. It has proven to be too difficult to show mathematically, but it can be explained as follows. When a perturbation is unstable, its growth rate is determined solely by $\nu$: the growth of every perturbed quantity is namely expressed by the factor $e^{-\text{Im}[\nu]t}$. Therefore, since quantities such as $\xi_x$ and $P_1$ have to be continuous at the interface $x=0$, $\nu$ must be the same on both sides of the interface. From the two equations (<ref>) and (<ref>) linking the interior and exterior solutions, a relation can be derived which has to be satisfied in order for the system determined by these equations to have nontrivial solutions: \begin{equation} \label{disp} \d\sum_{j=- \infty}^{\infty} \l( \rho_{0i} m_e \zeta_{A,j} \; + \; \rho_{0e} m_i \zeta_{B,j} \r) \; e^{ij \t} = 0 \text{,} \end{equation} \begin{align*} \zeta_{A,j} &= \d\sum_{l=- \infty}^{\infty} \frac{2 \o_0^2 v_{si}^2 v_{se}^2 \l[ \l( l+\nu \r)^2 \o_0^2 - k_z^2 v_{Ai}^2 \r] \varphi_{l,i} \varphi_{j-l,e}}{\l[ \l( l+\nu \r)^2 \o_0^2 - K_i^2 v_{Ai}^2 \r] \l[ \l(j- l+\nu \r)^2 \o_0^2 - K_e^2 v_{Ae}^2 \r] } \text{,} \\ \zeta_{B,j} &= \d\sum_{l=- \infty}^{\infty} \frac{2 \o_0^2 v_{si}^2 v_{se}^2 \l[ \l( l+\nu \r)^2 \o_0^2 - k_z^2 v_{Ae}^2 \r] \varphi_{l,e} \varphi_{j-l,i}}{\l[ \l( l+\nu \r)^2 \o_0^2 - K_e^2 v_{Ae}^2 \r] \l[ \l(j- l+\nu \r)^2 \o_0^2 - K_i^2 v_{Ai}^2 \r] } \text{.} \end{align*} Equation (<ref>), arising from the boundary conditions at the interface, is usually called the dispersion relation when there is no oscillating background flow. Both $m_i$ and $m_e$ being determined by Eq. (<ref>) (by inserting respectively interior and exterior values for the background quantities), Eq. (<ref>) determines the only remaining unknown, $\nu$. From Eq. (<ref>), the following has to hold for every $j \in \mathbb{Z}$: \begin{equation} \label{mRatios} \d\frac{m_e}{m_i} = -\d\frac{\rho_{0e} \zeta_{B,j}}{\rho_{0i} \zeta_{A,j}} \text{.} \end{equation} For $j = 0$ this gives us the relation \begin{equation} \label{mRatiosj0} \d\frac{m_e}{m_i} = - \d\frac{\rho_{0e}}{\rho_{0i} } \d\frac{\d\sum_{l=- \infty}^{\infty} \frac{ \l[ \l( l+\nu \r)^2 \o_0^2 - k_z^2 v_{Ae}^2 \r] \varphi_{l,e} \varphi_{-l,i}}{\l[ \l( l+\nu \r)^2 \o_0^2 - K_e^2 v_{Ae}^2 \r] \l[ \l(l-\nu \r)^2 \o_0^2 - K_i^2 v_{Ai}^2 \r] } }{ \d\sum_{l=- \infty}^{\infty} \frac{\l[ \l(l -\nu \r)^2 \o_0^2 - k_z^2 v_{Ai}^2 \r] \varphi_{l,e} \varphi_{-l,i}}{\l[ \l( l+\nu \r)^2 \o_0^2 - K_e^2 v_{Ae}^2 \r] \l[ \l(l-\nu \r)^2 \o_0^2 - K_i^2 v_{Ai}^2 \r] } } \text{.} \end{equation} We note that this is not an explicit solution for $m_e/m_i$, because both $m_i$ and $m_e$ appear on the right-hand side, through $K_i$ and $K_e$ respectively. While Eq. (<ref>) seems difficult to use directly, it gives us some information. Indeed, we learn from this equation that, whereas in the incompressible case we have $m_e/m_i = 1$, in the compressible case $m_e/m_i$ seems to depend on $k_z$ as well as $k_y$, the latter appearing only in $K_i$ and $K_e$. 
In the next section, we see that the only perturbation quantities the stability of the interface depends on are $k_z$ and $m_e/m_i$. This means that, while the abstraction is made that only the longitudinal wavenumber $k_z$ is important for the stability of the interface in an incompressible plasma, the perpendicular wavenumber $k_y$ seems to have some influence as well in the more realistic case of a compressible plasma.

§ STABILITY OF THE INTERFACE

§.§ Governing equation

The results derived in the preceding sections permit us to say something about the stability of the interface at the structure's boundary. This stability is determined by the temporal evolution of the normal displacement $\xi_x$, which is governed by the following equation:
\begin{equation} \label{Eqxix}
\d\frac{D^2 \xi_x}{Dt^2} \; + \; k_z^2 v_A^2 \xi_x \; = \; \frac{-1}{\rho_0} \frac{\pa P_1}{\pa x} \text{.}
\end{equation}
There is again a different version of Eq. (<ref>) for each region, namely for $x<0$ and for $x>0$. For each of the two regions, the respective versions of expression (<ref>) for $P_1$ can then be inserted in the respective versions of Eq. (<ref>). These can then be put together by expressing $C_1$ in terms of $C_2$ thanks to Eq. (<ref>), and yield the following equation governing the displacement of the interface over time:
\begin{align}
&\l( \rho_{0e} m_i \d\frac{D_e^2 \l(\xi_x\bigr\rvert_{x=0} \r)}{Dt^2} \; + \; \rho_{0i} m_e \frac{D_i^2 \l(\xi_x\bigr\rvert_{x=0} \r)}{Dt^2} \r) \notag\\
& \hspace{1.5cm} + \; k_z^2 \l( \rho_{0e} v_{Ae}^2 m_i \xi_x\bigr\rvert_{x=0} \; + \; \rho_{0i} v_{Ai}^2 m_e \xi_x\bigr\rvert_{x=0} \r) = 0 \text{,} \label{xixie}
\end{align}
where $D_i/Dt = \pa / \pa t + i k_z V_{0i} \sin (\o_0 t)$, $D_e/Dt = \pa / \pa t + i k_z V_{0e} \sin (\o_0 t)$, and $\xi_x\bigr\rvert_{x=0}$ is $\xi_x$ evaluated at $x=0$. Using a similar trick as was used for $\nabla \cdot \pmb{\xi}$ before, and which was also used by Barbulescu et al., 2019 in a different setup, we write $\xi_x\bigr\rvert_{x=0}$ as
\begin{equation}\label{trick}
\xi_x\bigr\rvert_{x=0}(t) = \eta(t) \exp \l\{\frac{- i A \sin \l( \o_0 t \r)}{\o_0} \r\} \text{,}
\end{equation}
with $A = k_z \d\frac{V_{0e} m_i \rho_{0e} + V_{0i} m_e \rho_{0i}}{m_i \rho_{0e} + m_e \rho_{0i}}$. This assumption changes nothing about the stability of $\xi_x\bigr\rvert_{x=0}(t)$, since the moduli of $\xi_x\bigr\rvert_{x=0}(t)$ and $\eta(t)$ are the same for every $t \in \mathbb{R}^+$, and therefore $\xi_x\bigr\rvert_{x=0}$ is unbounded if and only if $\eta$ is unbounded. Inserting Eq. \eqref{trick} in Eq. \eqref{xixie} greatly simplifies the equation, which can be worked out to yield the following ODE for $\eta$:
\begin{equation}\label{Mathieu}
\d\frac{d^2 \eta(\tau)}{d \tau^2} + \l( a - 2 q \cos \l( 2 \tau \r) \r) \eta(\tau) = 0 \text{,}
\end{equation}
with $\tau = \o_0 t$ and
\begin{align}
a &= \d\frac{k_z^2}{\o_0^2} \d\frac{\rho_{0i} v_{Ai}^2 m_e + \rho_{0e} v_{Ae}^2 m_i}{\rho_{0i} m_e + \rho_{0e} m_i} - \frac{k_z^2 \rho_{0i} \rho_{0e} m_i m_e \l(V_{0i} - V_{0e} \r)^2}{2 \o_0^2 \l( \rho_{0i} m_e + \rho_{0e} m_i \r)^2} \text{,} \label{a1} \\
q &= \d\frac{k_z^2 \rho_{0i} \rho_{0e} m_i m_e \l( V_{0i} - V_{0e} \r)^2}{4 \o_0^2 \l( \rho_{0i} m_e + \rho_{0e} m_i \r)^2} \text{.} \label{q1}
\end{align}
The parameters $a$ and $q$ can be rewritten in normalized form as follows:
\begin{align}
a &= \kappa_z^2 \l[ \d\frac{\ov{m} + r \ov{v}_A^2}{\ov{m} + r} - \frac{r \ov{m} M_A^2}{2 \l( \ov{m} + r \r)^2} \r] \text{,} \label{a2}\\
q &= \kappa_z^2 \d\frac{ r \overline{m} M_A^2}{4 \l( \overline{m} + r \r)^2} \label{q2} \text{,}
\end{align}
where we introduced the normalized quantities $\kappa_z = k_z v_{Ai} / \o_0$, $\ov{v}_A = v_{Ae} / v_{Ai}$, $r = \rho_{0e} / \rho_{0i}$, $\ov{m} = m_e/m_i$ and $M_A = (V_{0i}-V_{0e}) / v_{Ai}$. We notice that the first term of $a$ consists of an expression that resembles the squared kink frequency and which we call the pseudo squared kink expression. The second term of $a$ resembles a Doppler-shifting correction and equals $-2q$. In an incompressible plasma, the first term would be the squared kink frequency (since then $m_i, m_e \to k$). We also note that if there is no background flow (i.e., $V_{0i} = V_{0e} = 0$), we are left with only the pseudo squared kink expression in the factor in front of $\eta(\tau)$ in Eq. \eqref{Mathieu}. The solution is then a surface wave with frequency $\nu$ determined by the dispersion relation
\begin{equation} \label{nuConstFlow}
\nu^2 = \kappa_z^2 \d\frac{\ov{m} + r \ov{v}_A^2}{\ov{m} + r} \text{,}
\end{equation}
where $m_i$ and $m_e$ are determined as functions of $\nu$ through their respective version of Eq. \eqref{DeltaFormula}. Eq. \eqref{nuConstFlow} is a transcendental equation in $\nu$ with more than one solution, which means that there are several different modes in a compressible plasma. This is in contrast with an incompressible plasma, where there is only one mode, for which the expression of the frequency is readily readable from Eq. \eqref{nuConstFlow} and is equal to the kink frequency. These results about modes at an interface in a compressible plasma without background flow are already known (see for example Priest, 2014).

§.§ Instability conditions

Equation \eqref{Mathieu} is the Mathieu equation, which also pertains to the class of equations obeying Floquet theory and which has been extensively studied, for example by McLachlan, 1947. It does not have an analytical solution, but the stability of its solutions as a function of its parameters is well known. The stability of the solutions of the Mathieu equation \eqref{Mathieu} as a function of the parameters $a$ and $q$ is often represented in the so-called stability diagram (see Figure \ref{StabDiag}). The white zones in the figure are regions where the solutions are stable, whereas the gray zones are regions where the solutions are unstable. A numerical check of this stability criterion for a given pair $(a,q)$ is sketched below.
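The following Python sketch decides the stability of Eq. \eqref{Mathieu} numerically for a given $(a,q)$: the two fundamental solutions are integrated over one period $\pi$ of the coefficient, and the standard Hill-equation criterion (bounded solutions if and only if the absolute value of the trace of the monodromy matrix does not exceed $2$) is applied. The sample $(a,q)$ values are illustrative assumptions, chosen to land in a stable zone and in the first resonance tongue, respectively.

```python
import numpy as np

# Sketch of a numerical stability check for Mathieu's equation
# eta'' + (a - 2 q cos(2 tau)) eta = 0 (coefficient period pi).
def mathieu_rhs(tau, y, a, q):
    eta, deta = y
    return np.array([deta, -(a - 2.0 * q * np.cos(2.0 * tau)) * eta])

def monodromy_trace(a, q, n_steps=4000):
    """Classical RK4 integration of the two fundamental solutions over [0, pi]."""
    Y = np.eye(2)  # columns are the two fundamental solutions
    h = np.pi / n_steps
    for i in range(n_steps):
        t = i * h
        k1 = np.column_stack([mathieu_rhs(t, Y[:, j], a, q) for j in range(2)])
        k2 = np.column_stack([mathieu_rhs(t + h/2, (Y + h/2 * k1)[:, j], a, q) for j in range(2)])
        k3 = np.column_stack([mathieu_rhs(t + h/2, (Y + h/2 * k2)[:, j], a, q) for j in range(2)])
        k4 = np.column_stack([mathieu_rhs(t + h, (Y + h * k3)[:, j], a, q) for j in range(2)])
        Y = Y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return np.trace(Y)

def is_stable(a, q):
    """Hill-equation criterion: bounded solutions iff |trace of monodromy| <= 2."""
    return abs(monodromy_trace(a, q)) <= 2.0

# A point in a stable zone, and a point inside the first resonance tongue (a ~ 1).
print(is_stable(a=3.0, q=0.1), is_stable(a=1.0, q=0.2))
```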
\begin{figure} \centering \includegraphics[scale=0.165]{MathDiag3.png} \caption{Stability diagram of the Mathieu equation. The white zones represent regions of the parameter space where the solution is stable, whereas the gray zones represent regions of the parameter space where the solution is unstable.} \label{StabDiag} \end{figure}

§.§.§ Constant shear flow

In order to gain some insight into the nature of the potential instabilities, we follow Hillier et al., 2019, who studied the stability of an interface in a similar model. We start by looking at the case with a constant background shear flow (i.e., we assume $\o_0 = 0$). We have to modify the procedure above a little bit in order to be mathematically correct: instead of Eq. \eqref{trick}, we assume $\xi_x\bigr\rvert_{x=0}(t) = \eta(t) \exp \l\{ - i A t \r\}$ in this particular case. The obtained equation is then
\begin{equation}\label{PseudoMathieu}
\d\frac{d^2 \eta(\tau)}{d \tau^2} + \kappa_z^2 \l[ \d\frac{\ov{m} + r \ov{v}_A^2}{\ov{m} + r} - \frac{r \ov{m} M_A^2}{\l( \ov{m} + r \r)^2} \r]\; \eta(\tau) = 0 \text{.}
\end{equation}
It has normal mode solutions with normalized frequency $\nu$ defined by the dispersion relation
\begin{equation} \label{DispMat1}
\nu^2 = \kappa_z^2 \l[ \d\frac{\ov{m} + r \ov{v}_A^2}{\ov{m} + r} - \frac{r \ov{m} M_A^2}{\l( \ov{m} + r \r)^2} \r] \text{.}
\end{equation}
This transcendental equation in $\nu$ has multiple solutions as well, and thus we see that there are also different modes in the case of a constant shear flow in a compressible plasma. A particular mode will be unstable to the KHI if
\begin{equation} \label{instCondConst}
M_A^2 > \d\frac{\l( \ov{m} + r \r) \l( \ov{m} + r \ov{v}_A^2 \r)}{r \ov{m}}
\end{equation}
is satisfied. Different modes will thus have different instability conditions in a compressible plasma. If we take the incompressible limit, the above derivations match the already well-known results of a constant shear flow in an incompressible plasma (see for example Chandrasekhar, 1961).

§.§.§ Oscillating shear flow

In the case of an oscillatory background flow ($\o_0 \neq 0$), Eq. \eqref{Mathieu} governs the evolution of the normal displacement at the interface over time. In the same way as for Eq. \eqref{EqG}, Floquet theory allows us to write an independent solution to the Mathieu equation as follows [McLachlan, 1947]:
\begin{equation}\label{seriesMathieu}
\eta(\t) \; = \; e^{i \nu \t} \d\sum_{j = -\infty}^{\infty} \phi_j \; e^{2 i j \t} \text{.}
\end{equation}
This resembles a normal mode with frequency $\nu$, but with a periodic function replacing the constant. One can then find the following recursion relation by inserting Eq. \eqref{seriesMathieu} into Eq. \eqref{Mathieu}:
\begin{equation} \label{recursionMathieu}
\epsilon_j \; \phi_{j-1} \;\; + \;\; \phi_j \;\; + \;\; \epsilon_j \; \phi_{j+1} \;\; = \;\; 0\text{,}
\end{equation}
\begin{equation}
\epsilon_j = \d\frac{q}{\l( 2j + \nu \r)^2-a} \text{.}
\end{equation}
If the denominator of one of the $\epsilon_j$ vanishes, there can be a resonance between the background oscillator and the induced surface waves. Now some analytical progress can be made in the limiting case $q \to 0$, corresponding to a small squared shear rate $k_z^2 (V_{0i}-V_{0e})^2$ with respect to the squared frequency $\o_0^2$. Indeed, for $q \ll 1$, we see from Eq. \eqref{recursionMathieu} that in order to have a nontrivial solution \eqref{seriesMathieu} we need the denominator of at least one of the $\epsilon_j$ to vanish.
For that, we need to have
\begin{equation} \label{res}
\l( 2j + \nu \r)^2-a \approx 0
\end{equation}
for a $j \in \mathbb{Z}$. In case $\nu^2 \approx a$, that is to say, if
\begin{equation} \label{j0}
\nu^2 \approx \kappa_z^2 \l[ \d\frac{\ov{m} + r \ov{v}_A^2}{\ov{m} + r} - \frac{r \ov{m} M_A^2}{2 \l( \ov{m} + r \r)^2} \r] \text{,}
\end{equation}
then Eq. \eqref{res} is fulfilled for $j = 0$. Since Eq. \eqref{j0} is again a transcendental equation in $\nu$ with multiple solutions, there are multiple modes possible. These modes are surface waves with frequency $\nu$. Such a mode will be unstable if
\begin{equation} \label{instCondOsc}
M_A^2 > 2 \d\frac{\l( \ov{m} + r \r) \l( \ov{m} + r \ov{v}_A^2 \r)}{r \ov{m}}
\end{equation}
is satisfied. This renders the right-hand side in Eq. \eqref{j0}, which is $a$, negative. Hence, for $q \ll 1$, the region of the parameter space which corresponds to the KHI is $a<0$. It can be seen in Figure \ref{StabDiag} that this region is indeed uniformly unstable. We note that the right-hand side in Eq. \eqref{instCondOsc} is a factor of $2$ larger than in Eq. \eqref{instCondConst}. In a first approximation, we could thus evaluate the minimum value of $M_A$ for the KHI to develop in the case of an oscillatory shear flow with $q \ll 1$ to be $\sqrt{2}$ times higher than in the case of a constant shear flow with $q \ll 1$. This is assuming that the corresponding values of $m_i$ and $m_e$ are equal in both cases. In an incompressible plasma, this is obviously correct. In a compressible plasma, however, there is no reason to suggest a priori that this is the case in general. Nevertheless, the values might still be close, such as in a weakly compressible plasma for example. If, on top of Eq. \eqref{res} being satisfied for $j=0$, it is also satisfied for another $j \in \mathbb{Z}$, then there is a resonance between a mode satisfying Eq. \eqref{j0} and the background oscillator. From $\nu^2 = a$ and Eq. \eqref{res} with $j \neq 0$, this means that we must have
\begin{equation}
a = j^2 \text{.}
\end{equation}
In Figure \ref{StabDiag} we see the regions of resonance instability around $a=j^2$, with $j \in \mathbb{Z}_0$. Hence, for $q \ll 1$, the KHI and the resonance instability are clearly distinct features and are mutually exclusive. As was also mentioned by Hillier et al., 2019 (themselves based on Bender & Orszag, 1999), the dominant resonance in this limit is the one with $j = 1$. Its instability region is bordered by the curves
\begin{equation}
a = j^2 \pm q + O(q^2) \hspace{1cm} \text{ as } q \to 0
\end{equation}
and its maximum growth rate is
\begin{equation}
\text{Im}[\nu] = \d\frac{q}{2} + O(q^2) \hspace{1cm} \text{ as } q \to 0 \text{.}
\end{equation}
We could thus express this growth rate through the equation $\text{Im}[\nu] = q/2$ as an approximation. This is an implicit equation in $\nu$, because $q$ depends on it through $\ov{m}$.

§.§ Solar applications: Relevant region of the parameter space

From Eqs. \eqref{a2} and \eqref{q2}, we see the parameters $a$ and $q$ are related as follows:
\begin{equation} \label{a=f(q)}
a = \l\{ \d\frac{4 \l( \ov{m} + r \r) \l( \ov{m} + r \ov{v}_A^2 \r)}{r \ov{m} M_A^2} - 2 \r\} \; q \text{.}
\end{equation}
In the incompressible limit, $\ov{m} \to 1$ and we find the same relation as Barbulescu et al., 2019 for their model. For fixed values of the background quantities and for a fixed value of $\ov{m}$, Eq. \eqref{a=f(q)} represents a line in the $aq$-plane. A small numerical evaluation of these relations is sketched below.
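As a quick numerical illustration of Eqs. \eqref{a2}, \eqref{q2} and the small-$q$ KHI threshold \eqref{instCondOsc}, the Python sketch below evaluates $a$, $q$, the slope $a/q$, and the minimum Alfvén Mach number for instability. The chosen density ratio $r$, wavenumber ratio $\ov{m}$ and $\kappa_z$ are illustrative assumptions (they are not the values used for the figures and tables of this paper), so the printed slope only reproduces the order of magnitude of the coronal numbers quoted below.

```python
import numpy as np

# Sketch: normalized Mathieu parameters a, q and the q -> 0 KHI threshold.
# r, m_ratio and kappa_z below are assumed, purely illustrative values.
def mathieu_parameters(kappa_z, r, vA_ratio, m_ratio, M_A):
    """Return (a, q) from the normalized quantities defined in the text."""
    q = kappa_z**2 * r * m_ratio * M_A**2 / (4.0 * (m_ratio + r) ** 2)
    a = kappa_z**2 * (m_ratio + r * vA_ratio**2) / (m_ratio + r) - 2.0 * q
    return a, q

def khi_threshold(r, vA_ratio, m_ratio):
    """Minimum M_A**2 for the q -> 0 Kelvin-Helmholtz condition (a < 0)."""
    return 2.0 * (m_ratio + r) * (m_ratio + r * vA_ratio**2) / (r * m_ratio)

# Coronal-like example: vA_ratio = 5/2, with assumed r = 0.2 and m_ratio = 1.
r, vA_ratio, m_ratio = 0.2, 2.5, 1.0
a, q = mathieu_parameters(kappa_z=0.5, r=r, vA_ratio=vA_ratio, m_ratio=m_ratio, M_A=0.066)
print(f"a = {a:.4f}, q = {q:.2e}, a/q = {a/q:.0f}")
print("KHI needs M_A >", np.sqrt(khi_threshold(r, vA_ratio, m_ratio)))
```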
We note, however, that changing the value of $k_z$ and/or $k_y$ a priori changes the value of $m$ as well, and that hence the modes do not all lie on the same line. This is unlike the case of an incompressible plasma, where one stays on the same line when changing $k_z$, as was already found for example by Barbulescu et al. (2019) in a similar model. We find that the slope in Eq. \eqref{a=f(q)} is minimized for $\ov{m} = r \ov{v}_A$, in which case the line's equation is \begin{equation} \label{minSlope} a= \l\{ \frac{ 4 \l( 1+ \overline{v}_A \r)^2 }{M_A^2} - 2 \r\} \; q \text{.} \end{equation} For each set of fixed background quantities, the only physically possible values of $a$ and $q$ thus lie in the region of the stability diagram between the positive $a$-axis and the line defined by Eq. \eqref{minSlope}. We see that, in this model, it is not physically possible to have values of $(q,a)$ in the region below the line through the origin with slope $-2$. \subsubsection{Coronal loops} A first application to a solar atmospheric structure would be a coronal loop, where slow waves occur near footpoints [De Moortel et al., 2002] or during flares [Wang et al., 2002; Kumar et al., 2013]. For a standing slow wave under realistic coronal loop conditions, for which we took $(v_{Ae}, v_{si}, v_{se}) = (5/2,1/2,1/4)\, v_{Ai}$, the line of minimum slope, Eq. \eqref{minSlope}, lies almost on the positive $a$-axis. Indeed, with $\ov{v}_A = 5/2$ and for $M_A = 0.066$ for example, we find the line of minimum slope to be given by $a = 11415q$. If we assume incompressibility and fix $m=1$, Eq. \eqref{a=f(q)} defines a line as well and we find $a=12486q$. For the less realistic value of $M_A = 1$, we find that the line of minimum slope is given by $a=47q$, whereas in the incompressible limit the line \eqref{a=f(q)} becomes $a=52q$. As we can see from Figure \ref{StabDiag}, the region of the $aq$-diagram relevant for coronal loop conditions is thus stable with respect to the KHI. It could be unstable with respect to resonance; however, since in reality a finite cylindrical structure is closed both azimuthally and longitudinally, the respective wavenumbers are quantized and thus take on discrete values. In the $aq$-diagram, the possible values will thus be a set of discrete points $(a_n, q_n)$. Since the vast majority of the relevant region between the positive $a$-axis and the minimum-slope line \eqref{minSlope} is stable, it is likely that this parametric resonance instability is avoided as well. \begin{table}[!htb] \begin{tabular}{|l|l|l|l|} \hline \textbf{conditions} & \textbf{$M_A$} & \textbf{minimum slope} & \textbf{incomp. slope} \\ \hline coronal & 0.066 & 11415 & 12486 \\ \hline photospheric & 0.066 & 1450 & 1615 \\ \hline photospheric & 0.36 & 46 & 52 \\ \hline coronal & 1 & 47 & 52 \\ \hline photospheric & 1 & 4.25 & 5 \\ \hline \end{tabular} \caption{Table summarizing the minimum slope in Eq. \eqref{minSlope} and the slope of Eq. \eqref{a=f(q)} in the incompressible limit, for different atmospheric conditions. The first two rows are for realistic values of the longitudinal velocity for a slow surface wave on a discontinuous boundary of a cylindrical structure. The third row is for realistic values of the longitudinal velocity around the resonant position for a resonant slow wave in a photospheric pore. The last two rows are examples where $M_A$ begins to be unrealistically high, and they show that even then the slopes are not low enough to be in an unstable regime.
For coronal conditions we took $(v_{Ae}, v_{si}, v_{se}) = (5/2,1/2,1/4)\, v_{Ai}$, whereas for photospheric conditions we took $(v_{Ae}, v_{si}, v_{se}) = (1/4,1/2,3/4)\, v_{Ai}$.} \label{tab:my-table} \end{table} We conclude that the oscillating shear velocity of a standing slow wave in a coronal loop would not trigger the KHI on its own, without the involvement of other physical processes. This is in clear contrast with fast kink waves. Indeed, according to the incompressible models of Barbulescu et al. (2019) and Hillier et al. (2019), which are similar to this model but where the velocity and magnetic fields are perpendicular instead of parallel, fast kink waves are predicted to excite the KHI on the loop boundary. This has also been confirmed by numerical simulations. \subsubsection{Photospheric pores} As was mentioned in the introduction, propagating slow waves have also been observed in photospheric pores. It is natural to ask whether standing versions of these modes could trigger the KHI under such conditions. The model can be used to examine the stability of slow waves in these structures as well. However, the qualitative conclusion is the same as in the case of coronal loops: with photospheric pore conditions set to $(v_{Ae}, v_{si}, v_{se}) = (1/4,1/2,3/4)\, v_{Ai}$, we have $\ov{v}_A = 1/4$, and, taking $M_A = 0.066$, the minimum slope is $1450$, whereas the slope of the line defined by Eq. \eqref{a=f(q)} in the incompressible limit is $1615$. As a comparison, even for the unrealistic value of $M_A=1$, the minimum slope is $4.25$ and the slope in Eq. \eqref{a=f(q)} in the incompressible limit is $5$. These values are an order of magnitude lower than their coronal counterparts, but looking at Figure \ref{StabDiag} we see that the same conclusion holds: every possible pair of parameters $(a,q)$ lies in a region that does not permit the development of the KHI. As for a coronal loop in the previous subsection, it is likely that the parametric instability arising from resonance with the driver is also avoided and that the pore therefore remains stable. Additionally, one could investigate the stability of resonant slow waves, which can occur in the presence of a transition layer at the pore's boundary. Indeed, slow surface waves are known to be resonantly absorbed in photospheric conditions. It is worth asking whether the large longitudinal velocity shear that arises around the resonant point could be enough to trigger the KHI. In ideal MHD, the longitudinal velocity of resonant slow modes displays a hyperbolic singularity [Sakurai et al., 1991], but in the presence of finite resistivity this singularity disappears and gives way to a sharp but continuous profile. Kovitya \& Cram (1983) reported values of electrical resistivity in sunspots, with a minimum resistivity corresponding to a magnetic Reynolds number of $10^7$. Erdelyi (1997) derived an expression for the longitudinal Lagrangian displacement in the dissipative layer around the cusp resonance in the presence of finite electrical resistivity (see Eq. (30) therein).
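As a quick arithmetic cross-check of the photospheric slopes quoted above (an illustrative aside, assuming the quoted values of $\ov{v}_A$ and $M_A$), Eq. \eqref{minSlope} with $\ov{v}_A = 1/4$ and $M_A = 1$ gives a minimum slope of
\begin{equation*} \frac{4 \l( 1 + 1/4 \r)^2}{1^2} - 2 = 6.25 - 2 = 4.25 \text{,} \end{equation*}
in exact agreement with the value quoted above, while for $M_A = 0.066$ the same expression gives $6.25/0.066^2 - 2 \approx 1.4 \times 10^3$, close to the quoted slope of $1450$; the small difference presumably reflects rounding of $M_A$.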
Assuming a sinusoidal transition profile with width $l=0.1R$ (where $R$ is the radius of the pore) in the squared cusp and sound speeds, a ratio of external to internal magnetic field of $B_{0ze}/B_{0zi}=0.33$, a cusp resonant position at $r_C=0.955R$ (based on the numerical computations of Goossens et al. (2021)), a value for the perturbed total pressure at the resonant position equal to its value on the interface for the corresponding surface mode in the absence of the transition layer, a real part of the frequency equal to the frequency of the corresponding surface mode in the absence of the transition layer, and a magnetic Reynolds number of $10^7$ to have a lower bound on the resistivity and thus an upper bound on realistic values of the longitudinal velocity at the resonance, we found a value of $M_A=0.36$ around the cusp resonance point for a sausage mode with $k_z R = 2$. This value of $k_z R$ is within the range of validity for the longitudinal wavenumber of the slow surface sausage mode observed in a photospheric pore by Grant et al. (2015). The value found for $M_A$ implies a minimum slope of $46$ in Eq. \eqref{minSlope} and a slope of $52$ for Eq. \eqref{a=f(q)} in the incompressible limit. These values are still well above the values of the slopes found for $M_A=1$ in photospheric conditions. From the model derived in this paper, we therefore conclude that even resonant slow waves do not display a large enough velocity shear for an instability to develop. \subsection{Limitations of the model} We point out that the local model discussed in this paper is only valid under certain conditions. Firstly, the azimuthal wavenumber of the perturbation must be large, such that the azimuthal direction can be approximated by the Cartesian $y$ direction. This means that we must have $k_y R \gg 1$, with $R$ the structure's radius. Secondly, for the amplitude of the oscillating background velocity to be assumed constant, the longitudinal wavelength of the perturbation must be small with respect to the structure's length $L$. We must thus have $k_z L \gg 1$. Lastly, we note that our model might not be suitable in the presence of a smooth transition layer between the interior and the exterior of the structure. Indeed, resonant absorption of the standing slow wave in the background could occur in that case [Yu et al., 2017], which leads to a steep and continuous variation in the longitudinal component of the velocity along the $x$ direction. Although one can use the model to estimate the effect of the longitudinal velocity shear arising in resonant absorption on the stability of the interface, as we did in the last subsection, one must keep in mind that certain assumptions on which the model is based are not fulfilled in the presence of a transition layer. Firstly, there is no true discontinuous interface and, secondly, the assumption of a constant amplitude profile along $x$ for the oscillating background velocity is violated. \section{Conclusion} In this paper, we developed an analytical model for the local stability of a cylindrical solar atmospheric structure harboring a standing slow wave. We assumed that the structure is a straight cylinder with a circular base, of which the boundary is an interface discontinuously separating two homogeneous and compressible plasmas. The magnetic fields on both sides were assumed to be constant, straight and aligned with the interface.
The velocity fields were assumed to be oscillating in time but spatially constant, in order to model the standing slow wave in the background. We used linearized MHD to derive an equation for the compression term, $\c$. All the other perturbed quantities could then be expressed in terms of the solution for $\c$. The spatial part of the solution along the direction normal to the interface could be found analytically to be a normal mode, with wavenumber $m$. In contrast to the incompressible approximation, the value of this wavenumber $m$ in a compressible plasma is different on both sides of the interface. We saw that $m_e/m_i$ depends on both $k_z$ and $k_y$, which entails that the stability of the interface depends on $k_y$ as well as $k_z$. This is in contrast with the incompressible version of this model, in which only $k_z$ has an influence on this stability. We then found that the governing equation for the displacement component of the perturbation normal to the interface is a Mathieu equation. Since the stability of the solutions of this equation as a function of the parameters involved is known from the literature, we were able to describe two kinds of instabilities that could theoretically arise in the presence of an oscillating background shear flow. As an application to the solar atmosphere, we found that the physically relevant region of the parameter space corresponds to an interface that is locally stable with respect to the KHI, both in a coronal loop and in a photospheric pore. Although the interface can be unstable to a parametric resonance between the background slow wave and the induced perturbations on the interface, we concluded that it is unlikely to happen in reality. Even in the case of resonance in the cusp continuum, we concluded from our model that the longitudinal velocity shear of the resonant slow waves is not enough to trigger an instability. We ended by noting that this model is only applicable under specific conditions. It is indeed a local model, in the sense that the spatial scale of azimuthal variations must be small compared to the structure's radius and the spatial scale of longitudinal variations must be small with respect to the loop's length. Furthermore, for a structure with a smooth transition layer the assumptions of a discontinuous transition at the boundary and a constant amplitude for the background oscillating velocity fields would be violated. This needs to be kept in mind when using this model to infer stability conditions around resonances. \begin{acknowledgements} This research was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (TVD via grant agreement No 724326), which MG gratefully acknowledges for his Ph.D. studentship. \end{acknowledgements} \begin{thebibliography}{77} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi [Afanasyev} {et~al.}, 2019] {Afanasyev}, A., {Karampelas}, K., \& {Van Doorsselaere}, T. 2019, \apj, 876, [Antolin} {et~al.}, 2017] {Antolin}, P., {De Moortel}, I., {Van Doorsselaere}, T., \& {Yokoyama}, T.
2017, \apj, 836, 219 [Antolin} {et~al.}, 2015] {Antolin}, P., {Okamoto}, T.~J., {De Pontieu}, B., {et~al.} 2015, \apj, 809, 72 [Antolin} {et~al.}, 2014] {Antolin}, P., {Yokoyama}, T., \& {Van Doorsselaere}, T. 2014, \apjl, 787, L22 [Arregui}, 2015] {Arregui}, I. 2015, Philosophical Transactions of the Royal Society of London Series A, 373, 20140261 [Barbulescu} {et~al.}, 2019] {Barbulescu}, M., {Ruderman}, M.~S., {Van Doorsselaere}, T., \& {Erd{\'e}lyi}, R. 2019, \apj, 870, 108 [Bender \& Orszag, 1999] Bender, C. \& Orszag, S. 1999, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory, Advanced Mathematical Methods for Scientists and Engineers (Springer) [Berghmans} \& {Clette}, 1999] {Berghmans}, D. \& {Clette}, F. 1999, \solphys, 186, 207 [Cadez} {et~al.}, 1997] {Cadez}, V.~M., {Csik}, A., {Erdelyi}, R., \& {Goossens}, M. 1997, \aap, 326, [Cesari, 1963] Cesari, L. 1963, Asymptotic Behavior and Stability Problems in Ordinary Differential Equations (Springer-Verlag) [Chandrasekhar}, 1961] {Chandrasekhar}, S. 1961, {Hydrodynamic and hydromagnetic stability} [Chen} {et~al.}, 2018] {Chen}, S.-X., {Li}, B., {Shi}, M., \& {Yu}, H. 2018, \apj, 868, 5 [Chen} {et~al.}, 2020] {Chen}, S.~X., Li, B., Van~Doorsselaere, T., Goossens, M., \& Geeraerts, M. 2020, \apj, submitted [Chicone, 2008] Chicone, C. 2008, Ordinary Differential Equations with Applications, Texts in Applied Mathematics (Springer New York) [Conway, 1990] Conway, J. 1990, A Course in Functional Analysis, Graduate texts in mathematics [De Moortel} \& {Hood}, 2003] {De Moortel}, I. \& {Hood}, A.~W. 2003, \aap, 408, 755 [De Moortel} {et~al.}, 2000] {De Moortel}, I., {Ireland}, J., \& {Walsh}, R.~W. 2000, \aap, 355, L23 [De Moortel} {et~al.}, 2002] {De Moortel}, I., {Ireland}, J., {Walsh}, R.~W., \& {Hood}, A.~W. 2002, \solphys, 209, 61 [De Moortel} {et~al.}, 2016] {De Moortel}, I., {Pascoe}, D.~J., {Wright}, A.~N., \& {Hood}, A.~W. 2016, Plasma Physics and Controlled Fusion, 58, 014001 [Dorotovi{\v{c}}} {et~al.}, 2014] {Dorotovi{\v{c}}}, I., {Erd{\'e}lyi}, R., {Freij}, N., {Karlovsk{\'y}}, V., \& {M{\'a}rquez}, I. 2014, \aap, 563, A12 [Dorotovi{\v{c}}} {et~al.}, 2008] {Dorotovi{\v{c}}}, I., {Erd{\'e}lyi}, R., \& {Karlovsk{\'y}}, V. 2008, in IAU Symposium, Vol. 247, Waves \& Oscillations in the Solar Atmosphere: Heating and Magneto-Seismology, ed. R.~{Erd{\'e}lyi} \& C.~A. {Mendoza-Briceno}, [Edwin} \& {Roberts}, 1983] {Edwin}, P.~M. \& {Roberts}, B. 1983, \solphys, 88, 179 [Erdelyi}, 1997] {Erdelyi}, R. 1997, \solphys, 171, 49 [Erd{\'e}lyi} {et~al.}, 2001] {Erd{\'e}lyi}, R., {Ballai}, I., \& {Goossens}, M. 2001, \aap, 368, 662 [Foullon} {et~al.}, 2011] {Foullon}, C., {Verwichte}, E., {Nakariakov}, V.~M., {Nykyri}, K., \& {Farrugia}, C.~J. 2011, \apjl, 729, L8 [Freij} {et~al.}, 2016] {Freij}, N., {Dorotovi{\v{c}}}, I., {Morton}, R.~J., {et~al.} 2016, \apj, 817, [Geeraerts} {et~al.}, 2020] {Geeraerts}, M., {Van Doorsselaere}, T., {Chen}, S.-X., \& {Li}, B. 2020, \apj, 897, 120 [Goossens} {et~al.}, 2002] {Goossens}, M., {Andries}, J., \& {Aschwanden}, M.~J. 2002, \aap, 394, L39 [Goossens} {et~al.}, 2021] {Goossens}, M., {Chen}, S.~X., {Geeraerts}, M., {Li}, B., \& {Van Doorsselaere}, T. 2021, \aap, 646, A86 [Goossens} {et~al.}, 1992] {Goossens}, M., {Hollweg}, J.~V., \& {Sakurai}, T. 
1992, \solphys, 138, 233 [Grant} {et~al.}, 2015] {Grant}, S.~D.~T., {Jess}, D.~B., {Moreels}, M.~G., {et~al.} 2015, \apj, 806, [Guo} {et~al.}, 2019] {Guo}, M., {Van Doorsselaere}, T., {Karampelas}, K., {et~al.} 2019, \apj, 870, [Heyvaerts} \& {Priest}, 1983] {Heyvaerts}, J. \& {Priest}, E.~R. 1983, \aap, 117, 220 [Hillier} {et~al.}, 2019] {Hillier}, A., {Barker}, A., {Arregui}, I., \& {Latter}, H. 2019, \mnras, 482, [Hillier} {et~al.}, 2020] {Hillier}, A., {Van Doorsselaere}, T., \& {Karampelas}, K. 2020, \apjl, 897, [Hollweg} {et~al.}, 2013] {Hollweg}, J.~V., {Kaghashvili}, E.~K., \& {Chandran}, B. D.~G. 2013, \apj, 769, 142 [Hollweg} \& {Yang}, 1988] {Hollweg}, J.~V. \& {Yang}, G. 1988, \jgr, 93, 5423 [Hollweg} {et~al.}, 1990] {Hollweg}, J.~V., {Yang}, G., {Cadez}, V.~M., \& {Gakovic}, B. 1990, \apj, 349, [Karampelas} {et~al.}, 2017] {Karampelas}, K., {Van Doorsselaere}, T., \& {Antolin}, P. 2017, \aap, 604, [Karampelas} {et~al.}, 2019] {Karampelas}, K., {Van Doorsselaere}, T., {Pascoe}, D.~J., {Guo}, M., \& {Antolin}, P. 2019, Frontiers in Astronomy and Space Sciences, 6, 38 [Karpen} {et~al.}, 1994] {Karpen}, J.~T., {Dahlburg}, R.~B., \& {Davila}, J.~M. 1994, \apj, 421, 372 [Kelly}, 1965] {Kelly}, R.~E. 1965, Journal of Fluid Mechanics, 22, 547 [Keys {et~al.}, 2018] Keys, P.~H., Morton, R.~J., Jess, D.~B., {et~al.} 2018, The Astrophysical Journal, 857, 28 [Kovitya} \& {Cram}, 1983] {Kovitya}, P. \& {Cram}, L. 1983, \solphys, 84, 45 [Kumar} {et~al.}, 2013] {Kumar}, P., {Innes}, D.~E., \& {Inhester}, B. 2013, \apjl, 779, L7 [Magyar} {et~al.}, 2015] {Magyar}, N., {Van Doorsselaere}, T., \& {Marcu}, A. 2015, \aap, 582, A117 [Mandal} {et~al.}, 2016] {Mandal}, S., {Magyar}, N., {Yuan}, D., {Van Doorsselaere}, T., \& {Banerjee}, D. 2016, \apj, 820, 13 [McLachlan, 1947] McLachlan, N. 1947, Theory and Application of Mathieu Functions (Clarendon [Moreels} {et~al.}, 2015] {Moreels}, M.~G., {Freij}, N., {Erd{\'e}lyi}, R., {Van Doorsselaere}, T., \& {Verth}, G. 2015, \aap, 579, A73 [Moreels} {et~al.}, 2013] {Moreels}, M.~G., {Goossens}, M., \& {Van Doorsselaere}, T. 2013, \aap, 555, [Morton} {et~al.}, 2011] {Morton}, R.~J., {Erd{\'e}lyi}, R., {Jess}, D.~B., \& {Mathioudakis}, M. 2011, \apjl, 729, L18 [Nakariakov} {et~al.}, 2000] {Nakariakov}, V.~M., {Verwichte}, E., {Berghmans}, D., \& {Robbrecht}, E. 2000, \aap, 362, 1151 [Nightingale} {et~al.}, 1999] {Nightingale}, R.~W., {Aschwanden}, M.~J., \& {Hurlburt}, N.~E. 1999, \solphys, 190, 249 [Ofman} {et~al.}, 1994] {Ofman}, L., {Davila}, J.~M., \& {Steinolfson}, R.~S. 1994, \grl, 21, 2259 [Ofman} \& {Thompson}, 2011] {Ofman}, L. \& {Thompson}, B.~J. 2011, \apjl, 734, L11 [Parnell} \& {De Moortel}, 2012] {Parnell}, C.~E. \& {De Moortel}, I. 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 3217 [Pascoe} {et~al.}, 2020] {Pascoe}, D.~J., {Goddard}, C.~R., \& {Van Doorsselaere}, T. 2020, Frontiers in Astronomy and Space Sciences, 7, 61 [Pascoe} {et~al.}, 2012] {Pascoe}, D.~J., {Hood}, A.~W., {de Moortel}, I., \& {Wright}, A.~N. 2012, \aap, 539, A37 [Pascoe} {et~al.}, 2010] {Pascoe}, D.~J., {Wright}, A.~N., \& {De Moortel}, I. 2010, \apj, 711, 990 [Priest, 2014] Priest, E. 2014, Magnetohydrodynamics of the Sun (Cambridge University Press) [Roberts}, 1973] {Roberts}, B. 1973, Journal of Fluid Mechanics, 59, 65 [Sakurai} {et~al.}, 1991] {Sakurai}, T., {Goossens}, M., \& {Hollweg}, J.~V. 1991, \solphys, 133, 227 [Samanta} {et~al.}, 2019] {Samanta}, T., {Tian}, H., \& {Nakariakov}, V.~M. 
2019, \prl, 123, 035102 [Shi} {et~al.}, 2021] {Shi}, M., {Van Doorsselaere}, T., {Guo}, M., {et~al.} 2021, arXiv e-prints, [Simon, 2005] Simon, B. 2005, Trace Ideals and Their Applications, Mathematical surveys and monographs (American Mathematical Society) [Soler} {et~al.}, 2013] {Soler}, R., {Goossens}, M., {Terradas}, J., \& {Oliver}, R. 2013, \apj, 777, [Soler} {et~al.}, 2009] {Soler}, R., {Oliver}, R., {Ballester}, J.~L., \& {Goossens}, M. 2009, \apjl, 695, L166 [Sträng, 2005] Sträng, J.-E. 2005 [Terradas} {et~al.}, 2008] {Terradas}, J., {Andries}, J., {Goossens}, M., {et~al.} 2008, \apjl, 687, L115 [Van Doorsselaere} {et~al.}, 2020] {Van Doorsselaere}, T., {Srivastava}, A.~K., {Antolin}, P., {et~al.} 2020, \ssr, 216, 140 [Wang}, 2011] {Wang}, T. 2011, \ssr, 158, 397 [Wang} {et~al.}, 2007] {Wang}, T., {Innes}, D.~E., \& {Qiu}, J. 2007, \apj, 656, 598 [Wang} {et~al.}, 2002] {Wang}, T., {Solanki}, S.~K., {Curdt}, W., {Innes}, D.~E., \& {Dammasch}, I.~E. 2002, \apjl, 574, L101 [Wang} {et~al.}, 2003] {Wang}, T.~J., {Solanki}, S.~K., {Curdt}, W., {et~al.} 2003{\natexlab{a}}, \aap, 406, 1105 [Wang} {et~al.}, 2003] {Wang}, T.~J., {Solanki}, S.~K., {Innes}, D.~E., {Curdt}, W., \& {Marsch}, E. 2003{\natexlab{b}}, \aap, 402, L17 [Yu} {et~al.}, 2017] {Yu}, D.~J., {Van Doorsselaere}, T., \& {Goossens}, M. 2017, \aap, 602, A108 [Zaqarashvili} {et~al.}, 2015] {Zaqarashvili}, T.~V., {Zhelyazkov}, I., \& {Ofman}, L. 2015, \apj, 813, 123 \end{thebibliography}
\appendix \section{Deriving the governing equation for $\c$} \label{A1} In this appendix, we derive Eq. \eqref{Eqdivxi} from Eqs. \eqref{eq1}-\eqref{eq4}.
Equations \eqref{eq1}, \eqref{eq3} and \eqref{eq4} can be rewritten using $\pmb{v} = \frac{D \pmb{\xi}}{Dt}$ to yield the following expressions for $\rho_1$, $\pmb{B}_1$ and $p_1$: \begin{align} \rho_1 &= -\rho_0 \c \label{rho_1}\\ \pmb{B}_1 &= -\pmb{B}_0 \l( \c \r) + \l( \pmb{B}_0 \cdot \nabla \r) \pmb{\xi} \label{B_1}\\ p_1 &= -\rho_0 v_s^2 \c \text{.} \label{p_1} \end{align} Using Eq. \eqref{rho_1} and Eq. \eqref{p_1}, Eq. \eqref{eq2} can be rewritten as follows: \begin{align} &\o_0 V_0 \sin \l( \o_0 t \r) \l( \c \r) \pmb{1}_z + \d\frac{D^2 \pmb{\xi}}{Dt^2} = \notag\\ & \hspace{3cm} \frac{-1}{\rho_0} \nabla P_1 + i k_z v_A^2 \l[- \pmb{1}_z \l( \c \r) + i k_z \pmb{\xi} \r] \text{,} \label{6bis} \end{align} where $v_A = B_{0z} / \sqrt{\mu_0 \rho_0}$ is the Alfv\'en speed. By now taking the divergence on both sides of Eq. \eqref{6bis} we get the following: \begin{equation} i k_z \o_0 V_0 \sin \l( \o_0 t \r) \l( \c \r) + \d\frac{D^2 \l( \c \r)}{Dt^2} = \frac{-1}{\rho_0} \nabla^2 P_1 \text{,} \label{7} \end{equation} where $\nabla^2$ denotes the Laplace operator. Next, from the definition of total pressure $P_1 = p_1 + \pmb{B}_0 \cdot \pmb{B}_1 / \mu_0$ we obtain the following equation with the use of Eqs. \eqref{B_1} and \eqref{p_1}: \begin{equation} \d\frac{1}{\rho_0} P_1 + (v_A^2 + v_s^2) \l( \c \r) - i k_z v_A^2 \xi_z = 0 \text{.} \label{8} \end{equation} Taking the $z$-component of Eq. \eqref{6bis} and using Eq. \eqref{8} yields \begin{equation} \d\frac{D^2 \xi_z}{Dt^2} = - \o_0 V_0 \sin \l( \o_0 t \r) \l( \c \r) + i k_z v_s^2 \l( \c \r) \text{,} \label{9} \end{equation} while taking twice the Lagrangian derivative on both sides of Eq. \eqref{8} and rearanging the terms yields \begin{equation} \d\frac{D^2 \xi_z}{Dt^2} = \frac{1}{i k_z v_A^2 \rho_0} \frac{D^2 P_1}{Dt^2} +\frac{v_A^2 + v_s^2}{i k_z v_A^2} \frac{D^2 \l( \c \r)}{Dt^2} \text{.} \label{10} \end{equation} We can then combine Eq. \eqref{9} and Eq. \eqref{10} into the following equation: \begin{align} &\d\frac{1}{i k_z v_A^2 \rho_0} \frac{D^2 P_1}{Dt^2} +\frac{v_A^2 + v_s^2}{i k_z v_A^2} \frac{D^2 \l( \c \r)}{Dt^2} = \notag\\ & \hspace{2.5cm} - \o_0 V_0 \sin \l( \o_0 t \r) \l( \c \r) + i k_z v_s^2 \l( \c \r) \text{.} \label{11} \end{align} If we now take twice the Lagrangian derivative on both sides of Eq. \eqref{7} and we take the Laplacian on both sides of Eq. \eqref{11}, we can combine the obtained equations, yielding Eq. \eqref{Eqdivxi}: \begin{align} &\d\frac{D^4 \l( \c \r)}{Dt^4} + i k_z \o_0 V_0 \frac{D^2 \l( \sin \l(\o_0 t \r) \l( \c \r) \r)}{Dt^2} \notag\\ & \qquad - \l( v_A^2 + v_s^2 \r) \frac{D^2}{Dt^2} \l( \frac{\pa^2 \l( \c \r)}{\pa x^2} \r) + k^2 \l( v_A^2 + v_s^2 \r) \frac{D^2 \l( \c \r)}{Dt^2} \notag\\ & \qquad - i k_z V_0 \o_0 \sin \l( \o_0 t \r) v_A^2 \frac{\pa^2 \l( \c \r)}{\pa x^2} \notag\\ & \qquad + i k_z k^2 V_0 \o_0 \sin \l(\o_0 t \r) v_A^2 \l( \c \r) - k_z^2 v_A^2 v_s^2 \frac{\pa^2 \l( \c \r)}{\pa x^2} \notag \\ & \qquad + k_z^2 k^2 v_A^2 v_s^2 \l( \c \r) = 0 \text{,} \end{align} where $k = \sqrt{k_y^2 + k_z^2}$. \section{Deriving the expression for $\Delta$} \label{ExtPow} In this appendix, we show formula \eqref{DeltaFormula} for $\Delta$. We recall the definition of $\Delta$, Eq. 
\eqref{Delta}, for the reader's convenience: \begin{equation} \label{Delta2} \Delta = \begin{vmatrix} \ddots & \ddots & & & 0\\ \ddots & 1 & \varepsilon_{-1} & & \\ & -\varepsilon_0 & 1 & \varepsilon_0 & \\ & & -\varepsilon_1 & 1 & \ddots \\ 0 & & & \ddots & \ddots \end{vmatrix} \text{.} \end{equation} Denoting with $A$ the operator defined by the infinite matrix corresponding to the determinant in the right-hand side of Eq. \eqref{Delta2}, we explained in Section \ref{Gfunction} that $A-I$ (with $I$ the identity operator) is a trace class operator on $\ell^2(\mathbb{Z})$ (except in the poles of the $\varepsilon_j$). This is the Hilbert space of square-summable sequences of complex numbers with entire index, and with inner product $\langle \cdot, \cdot \rangle$ defined by \begin{equation} \langle x, y \rangle = \d\sum_{l=-\infty}^{\infty} x(l) \overline{y(l)} \end{equation} for $x,y \in \ell^2(\mathbb{Z})$. To work out the right-hand side of Eq. \eqref{Delta2}, we use a result from Fredholm theory. It states that, for a trace class operator $M$, the following holds: \begin{equation} \label{FredholmDet} \det(I+M) \; = \; \d\sum_{n=0}^{\infty} \text{Tr} \l( \Lambda^n \l( M \r) \r) \text{,} \end{equation} where $\text{Tr} \l( \Lambda^n \l( M \r) \r)$ is the trace of the $n$th exterior power of $M$ [Simon, 2005]. The $n$th exterior power $\Lambda^n (\ell^2(\mathbb{Z}))$ (with $n \in \mathbb{N}_0$) of $\ell^2(\mathbb{Z})$ is the Hilbert space whose elements are of the form $v_1 \wedge ... \wedge v_n$, that is to say, the exterior product of $v_1$, ..., $v_n$ (in that order) where $v_i \in \ell^2(\mathbb{Z})$ for every $i \in \{1, ..., n \}$, and with a uniquely defined inner product $\langle \cdot , \cdot \rangle^{\otimes n}$ that is determined by the inner product on $\ell^2(\mathbb{Z})$. The vector space $\Lambda^n (\ell^2(\mathbb{Z}))$ is the subspace of the tensor product $\ell^2(\mathbb{Z}) \otimes ... \otimes \ell^2(\mathbb{Z})$ (n times) whose elements are totally antisymmetric tensors of rank $n$. Furthermore, the 0th exterior power $\Lambda^0 (\ell^2(\mathbb{Z}))$ of $\ell^2(\mathbb{Z})$ is $\mathbb{C}$. For a trace class operator $M$ on $\ell^2(\mathbb{Z})$, the $n$th exterior power of $M$ is defined by \begin{align} \Lambda^n (M): \; &\Lambda^n ( \ell^2 (\mathbb{Z})) \to \Lambda^n ( \ell^2 (\mathbb{Z})): \notag\\ &\hspace{2cm} v_1 \wedge ... \wedge v_n \mapsto M v_1 \wedge ... \wedge M v_n \label{Lambda^n(M)} \end{align} and is a trace class operator on $\Lambda^n (\ell^2(\mathbb{Z}))$. Now, if $\{e_j \}_{j \in \mathbb{Z}}$ is an orthonormal basis for a Hilbert space $H$, then the trace of a trace class operator $M$ on $H$ is defined as [Conway, 1990]: \begin{equation} \label{trace} \text{Tr}(M) = \d\sum_{j=-\infty}^{\infty} \langle M e_j, e_j \rangle \text{.} \end{equation} We also have that, if $\{ e_j \}_{j \in \mathbb{Z}}$ is an orthonormal basis of $\ell^2(\mathbb{Z})$, then $\{e_{j_1} \wedge ... \wedge e_{j_n} \}_{j_1 < \cdots < j_n}$ is an orthonormal basis of $\Lambda^n ( \ell^2 (\mathbb{Z}))$ and \begin{equation} \label{innprodextprod} \d \langle e_{j_1} \wedge ... \wedge e_{j_n}, f_{1} \wedge ... \wedge f_{n} \rangle^{\otimes n} = \det \l(\langle e_{j_k}, f_m \rangle_{1 \leq k,m \leq n} \r) \text{,} \end{equation} for any $f_{1} \wedge ... \wedge f_{n} \in \Lambda^n ( \ell^2 (\mathbb{Z}))$ [Simon, 2005]. Hence, from Eqs. 
\eqref{Lambda^n(M)}, \eqref{trace} and \eqref{innprodextprod}, we find for the trace class operator $\Lambda^n \l( M \r)$ on $\Lambda^n (\ell^2(\mathbb{Z}))$ that \begin{align} &\text{Tr} \l(\Lambda^n \l( M \r) \r) \notag\\ &= \d\sum_{j_1 = -\infty}^{\infty} \sum_{j_2 = j_1 + 1}^{\infty} ... \sum_{j_n = j_{n-1} + 1}^{\infty} \ip{ \Lambda^n \l( M \r) \l( e_{j_1} \wedge ... \wedge e_{j_n} \r) , e_{j_1} \wedge ... \wedge e_{j_n} }^{\otimes n} \notag \\ &= \d\sum_{j_1 = -\infty}^{\infty} \sum_{j_2 = j_1 + 1}^{\infty} ... \sum_{j_n = j_{n-1} + 1}^{\infty} \ip{ M e_{j_1} \wedge ... \wedge M e_{j_n} , e_{j_1} \wedge ... \wedge e_{j_n} }^{\otimes n} \notag \\ &= \d\sum_{j_1 = -\infty}^{\infty} \sum_{j_2 = j_1 + 1}^{\infty} ... \sum_{j_n = j_{n-1} + 1}^{\infty} \begin{vmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots\\ a_{n,1} & \cdots & a_{n,n} \end{vmatrix} \text{,} \label{TrExpr1} \end{align} where $a_{k,m} = \langle M e_{j_k}, e_{j_m} \rangle$ for $k,m \in \mathbb{N}_0$. Looking at Eq. \eqref{Delta2}, we see that the trace class operator $M=A-I$ is defined by \begin{equation} M : \ell^2\l(\mathbb{Z} \r) \to \ell^2\l(\mathbb{Z} \r) : \begin{pmatrix} \vdots \\ \varphi(l) \\ \vdots \end{pmatrix} \mapsto \begin{pmatrix} \vdots \\ \varepsilon_l \l[ \varphi \l( l+1 \r) - \varphi \l( l-1 \r) \r] \\ \vdots \end{pmatrix} \text{.} \end{equation} For the orthonormal basis $\{ e_j \}_{j \in \mathbb{Z}}$ of $\ell^2(\mathbb{Z})$ defined by $e_j(l) = \delta_{jl}$ for all $j,l \in \mathbb{Z}$, we then have the following: \begin{equation} M e_{j_k} = M \begin{pmatrix} \vdots \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ 0 \\ \varepsilon_{j_k-1} \\ 0 \\ -\varepsilon_{j_k+1} \\ 0 \\ \vdots \end{pmatrix} \text{,} \end{equation} for every $k \in \mathbb{N}_0$. This means that \begin{align} \ip{M e_{j_k},e_{j_m}} &= \d\sum_{l=-\infty}^{\infty} \l(M e_{j_k} \r)(l) \; \overline{e_{j_m}(l)} \notag \\ &= \begin{cases} \varepsilon_{j_k-1} & \text{if } j_m = j_k -1 \\ - \varepsilon_{j_k+1} & \text{if } j_m = j_k+1 \\ 0 & \text{else}\end{cases} \text{.} \label{innprod} \end{align} We thus find that $\langle M e_{j_k}, e_{j_m} \rangle = 0$ if $\abs{k-m} \ge 2$ (since $j_1 < ... < j_n$) or $k=m$. Hence, from Eq. \eqref{TrExpr1}, \begin{equation} \label{traceExpr2} \text{Tr} \l(\Lambda^n \l( M \r) \r) = \d\sum_{j_1 = -\infty}^{\infty} \sum_{j_2 = j_1 + 1}^{\infty} \ldots \sum_{j_n = j_{n-1} + 1}^{\infty} \Delta_{j_1,...,j_n} \text{,} \end{equation} where we define for any $q \in \mathbb{N}_0$ and $s_1,..., s_q \in \mathbb{N}_0$ \begin{equation} \label{Delta_js} \Delta_{j_{s_1},...,j_{s_q}} = \begin{tikzpicture}[baseline=(current bounding box.center)] \matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter={|},left delimiter={|} ]{ 0 & a_{s_1,s_2} & 0 & & 0 \\ a_{s_2,s_1} & & & & \\ 0 & & & & 0 \\ & & & &a_{s_{q-1},s_q} \\ 0 & & 0 & a_{s_q,s_{q-1}} & 0 \\ } ; \draw[loosely dotted][thick] (m-1-1)-- (m-5-5); \draw[loosely dotted][thick] (m-1-2)-- (m-4-5); \draw[loosely dotted][thick] (m-2-1)-- (m-5-4); \draw[loosely dotted][thick] (m-3-1)-- (m-5-1); \draw[loosely dotted][thick] (m-5-1)-- (m-5-3); \draw[loosely dotted][thick] (m-3-1)-- (m-5-3); \draw[loosely dotted][thick] (m-1-3)-- (m-1-5); \draw[loosely dotted][thick] (m-1-5)-- (m-3-5); \draw[loosely dotted][thick] (m-1-3)-- (m-3-5); \end{tikzpicture} \text{.} \end{equation} We can now rewrite the determinants $\Delta_{j_1,...,j_n}$ composing the terms of the sum in Eq. 
\eqref{traceExpr2}: \begin{eqnarray} \Delta_{j_1,...,j_n} &=&\begin{tikzpicture}[baseline=(current bounding box.center)] \matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter={|},left delimiter={|} ]{ 0 & a_{1,2} & 0 & & 0 \\ a_{2,1} & & & & \\ 0 & & & & 0 \\ & & & &a_{n-1,n} \\ 0 & & 0 & a_{n,n-1} & 0 \\ } ; \draw[loosely dotted][thick] (m-1-1)-- (m-5-5); \draw[loosely dotted][thick] (m-1-2)-- (m-4-5); \draw[loosely dotted][thick] (m-2-1)-- (m-5-4); \draw[loosely dotted][thick] (m-3-1)-- (m-5-1); \draw[loosely dotted][thick] (m-5-1)-- (m-5-3); \draw[loosely dotted][thick] (m-3-1)-- (m-5-3); \draw[loosely dotted][thick] (m-1-3)-- (m-1-5); \draw[loosely dotted][thick] (m-1-5)-- (m-3-5); \draw[loosely dotted][thick] (m-1-3)-- (m-3-5); \end{tikzpicture} \label{TrPowerprev} \\ &=& - a_{1,2} \begin{tikzpicture}[baseline=(current bounding box.center)] \matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter={|},left delimiter={|} ]{ a_{2,1} & a_{2,3} & 0 & & & 0 \\ 0 & 0 & a_{3,4} & 0 & & 0 \\ & a_{4,3} & & & & \\ & 0 & & & & 0 \\ & & & & &a_{n-1,n} \\ 0 & 0 & & 0 & a_{n,n-1} & 0 \\ } ; \draw[loosely dotted][thick] (m-2-2)-- (m-6-6); \draw[loosely dotted][thick] (m-2-3)-- (m-5-6); \draw[loosely dotted][thick] (m-3-2)-- (m-6-5); \draw[loosely dotted][thick] (m-4-2)-- (m-6-2); \draw[loosely dotted][thick] (m-6-2)-- (m-6-4); \draw[loosely dotted][thick] (m-4-2)-- (m-6-4); \draw[loosely dotted][thick] (m-2-4)-- (m-2-6); \draw[loosely dotted][thick] (m-2-6)-- (m-4-6); \draw[loosely dotted][thick] (m-2-4)-- (m-4-6); \draw[loosely dotted][thick] (m-2-1)-- (m-6-1); \draw[loosely dotted][thick] (m-1-3)-- (m-1-6); \end{tikzpicture} \notag \\ &=& - a_{1,2} a_{2,1}\begin{tikzpicture}[baseline=(current bounding box.center)] \matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter={|},left delimiter={|} ]{ 0 & a_{3,4} & 0 & & 0 \\ a_{4,3} & & & & \\ 0 & & & & 0 \\ & & & &a_{n-1,n} \\ 0 & & 0 & a_{n,n-1} & 0 \\ } ; \draw[loosely dotted][thick] (m-1-1)-- (m-5-5); \draw[loosely dotted][thick] (m-1-2)-- (m-4-5); \draw[loosely dotted][thick] (m-2-1)-- (m-5-4); \draw[loosely dotted][thick] (m-3-1)-- (m-5-1); \draw[loosely dotted][thick] (m-5-1)-- (m-5-3); \draw[loosely dotted][thick] (m-3-1)-- (m-5-3); \draw[loosely dotted][thick] (m-1-3)-- (m-1-5); \draw[loosely dotted][thick] (m-1-5)-- (m-3-5); \draw[loosely dotted][thick] (m-1-3)-- (m-3-5); \end{tikzpicture} \notag \\ &=& - a_{1,2} a_{2,1} \Delta_{j_3,...,j_n} \label{TrPower} \text{,} \end{eqnarray} where we expanded along the first row to find the second equality and along the first column to find the third equality. Clearly, we can omit terms that are $0$ from the sum in the right-hand side of Eq. \eqref{traceExpr2}. Now, the factors $a_{1,2}$ and $a_{2,1}$ are different from $0$ only if $j_2 = j_1+1$, in which case \begin{align*} &a_{1,2} = \ip{ M e_{j_1}, e_{j_1+1}} = - \varepsilon_{j_1 + 1} \text{, and}\\ & a_{2,1} = \ip{ M e_{j_1+1}, e_{j_1}} = \varepsilon_{j_1} \text{.} \end{align*} This means that, from Eqs. \eqref{traceExpr2}, \eqref{Delta_js} and \eqref{TrPower}, we have \begin{equation} \label{traceExpr3} \text{Tr} \l(\Lambda^n \l( M \r) \r) = \d\sum_{j_1 = -\infty}^{\infty} \varepsilon_{j_1} \varepsilon_{j_1 + 1} \sum_{j_3 = j_1 + 2}^{\infty} \sum_{j_4 = j_3 + 1}^{\infty} \ldots \sum_{j_n = j_{n-1} + 1}^{\infty} \Delta_{j_3,...,j_n} \text{.} \end{equation} We can now make a distinction between even and odd $n$. 
Indeed, if $n = 2p$ for a $p \in \mathbb{N}_0$, we find the following by continuing to expand every determinant of the form \eqref{Delta_js} in the sum on the right-hand side of Eq. \eqref{traceExpr3} in the same way as we went from Eq. \eqref{TrPowerprev} to \eqref{TrPower}: \begin{equation} \label{TraceEven} \text{Tr} \l(\Lambda^{2p} \l( M \r) \r) = \d\sum_{j_1 = - \infty}^{\infty} \d\sum_{j_2 = j_1 + 2}^{\infty} \ldots \d\sum_{j_p = j_{p-1} + 2}^{\infty} \l( \d\prod_{l=1}^{p} \varepsilon_{j_l} \varepsilon_{j_l +1} \r) \text{.} \end{equation} In contrast to this, if $n=2p+1$ for a $p \in \mathbb{N}$, we find \begin{align} \text{Tr} \l(\Lambda^{2p+1} \l( M \r) \r) &= \d\sum_{j_1 = - \infty}^{\infty} \d\sum_{j_2 = j_1 + 2}^{\infty} \ldots \d\sum_{j_p = j_{p-1} + 2}^{\infty} \l( \d\prod_{l=1}^{p} \varepsilon_{j_l} \varepsilon_{j_l +1} \r) 0 \notag\\ &= 0 \text{.} \label{TrUneven} \end{align} Hence, from Eqs. \eqref{FredholmDet}, \eqref{TraceEven} and \eqref{TrUneven}, and from the fact that $\text{Tr} \l(\Lambda^0 \l( M \r) \r) = 1$ [Simon, 2005], we now find Eq. \eqref{DeltaFormula}: \begin{equation} \Delta \; = \; 1 + \d\sum_{n=1}^{\infty} \l[ \d\sum_{j_1 = - \infty}^{\infty} \d\sum_{j_2 = j_1 + 2}^{\infty} \ldots \d\sum_{j_n = j_{n-1} + 2}^{\infty} \l( \d\prod_{l=1}^n \varepsilon_{j_l} \varepsilon_{j_l +1} \r) \r] \text{.} \end{equation} \section{Deriving solutions for $G_1$, $G_2$ and $G_3$} \label{AppendixG1G2G3} In this appendix, we show how to derive the solutions to the temporal functions $G_1$, $G_2$ and $G_3$. We recall the equations of interest to this matter, Eqs. \eqref{eq1}-\eqref{eq3}, Eqs. \eqref{xix}-\eqref{xiz} and Eq. \eqref{P1} for the reader's convenience: \begin{align} &\d\frac{D \rho_1}{D t} +\rho_0 \l( \nabla \cdot \pmb{v}_1 \r) = 0\text{,} \label{B1}\\ & \rho_1 \frac{\partial \pmb{v}_0}{\partial t} + \rho_0 \frac{D \pmb{v}_1}{D t} = -\nabla P_1 + \frac{1}{\mu_0} \left( \pmb{B}_0 \cdot \nabla \right) \pmb{B}_1 \text{,} \label{B2}\\ &\frac{D \pmb{B}_1}{D t} = - \pmb{B}_0 \left( \nabla \cdot \pmb{v}_1 \right) + \left( \pmb{B}_0 \cdot \nabla \right) \pmb{v}_1 \text{,} \label{B3}\\ &\xi_x = - \d\frac{i m}{k^2 + m^2} \; \tilde{F}(x)\; h_1(t) \; \e \text{,} \label{B5}\\ &\xi_y = - \d\frac{i k_y}{k^2 + m^2} \; F(x)\; h_2(t) \; \e \text{,} \label{B6}\\ &\xi_z = - \d\frac{i k_z}{k^2 + m^2} \; F(x)\; h_3(t) \; \e \text{,} \label{B7}\\ &P_1 = \rho_0 \; \l( \d\frac{k_z^2 v_A^2}{k^2 + m^2} h_3(t) \; - \; \l( v_A^2 + v_s^2 \r) h(t) \r) \notag\\ & \hspace{4cm} F(x) \; \e \text{.} \label{B8} \end{align} We recall Eq. \eqref{6bis} from Appendix \ref{A1}: \begin{align} &\o_0 V_0 \sin \l( \o_0 t \r) \l( \c \r) \pmb{1}_z + \d\frac{D^2 \pmb{\xi}}{Dt^2} = \notag\\ & \hspace{3cm} \frac{-1}{\rho_0} \nabla P_1 + i k_z v_A^2 \l[- \pmb{1}_z \l( \c \r) + i k_z \pmb{\xi} \r] \text{.} \label{B9} \end{align} Replacing $h_i(t)$ by their definition $G_i(t) g(t)$ for every $i \in \{1,2,3 \}$ (where $g(t) = \exp \l\{ \frac{-i k_z V_0 \sin \l( \o_0 t \r)}{\o_0} \r\}$) and inserting Eqs. \eqref{B5}-\eqref{B8} into the $x$-component, $y$-component and $z$-component of Eq. 
\eqref{B9}, we find that the following equations have to hold: \begin{align} &\d\frac{d^2 G_1(t)}{dt^2} + K^2 \l(v_A^2 + v_s^2 \r) G(t) + k_z^2 v_A^2 \l[ G_1(t) - G_3(t) \r] = 0\text{,} \label{B.G1}\\ &\d\frac{d^2 G_2(t)}{dt^2} + K^2 \l(v_A^2 + v_s^2 \r) G(t) + k_z^2 v_A^2 \l[ G_2(t) - G_3(t) \r] = 0\text{,} \label{B.G2}\\ &\d\frac{d^2 G_3(t)}{dt^2} + K^2 \l(v_s^2 + \d\frac{i \o_0 V_0}{k_z} \sin \l( \o_0 t \r) \r) G(t) = 0 \text{,} \label{B.G3} \end{align} with $K = \sqrt{k^2 + m^2}$. We see that $G_1$ and $G_2$ obey the same differential equation and thus have the same general solution. They depend on $G_3$, which, by solving Eq. \eqref{B.G3}, can be found to be equal to \begin{equation} \label{B.G3eq} G_3(t) = - \d\frac{K^2}{k_z} \d\iint \l(i \o_0 V_0 \sin \l( \o_0 t \r) + k_z v_s^2 \r) G(t) \; dt \; dt \text{.} \end{equation} We recall that we defined $\t = \o_0 t$. So, since $G(\t) = e^{\mu \t} \sum_{j = -\infty}^{\infty} \varphi_j e^{i j \t}$, we find that \begin{align} G_3(\t) &= -\d\frac{K^2}{2 k_z \o_0^2} e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \d\frac{V_0 \o_0 \l( \varphi_{j+1} - \varphi_{j-1} \r) - 2 k_z v_s^2 \varphi_j}{\l( j + \nu \r)^2} \; e^{i j \t} \notag \\ &= \d\frac{K^2}{k_z^2} e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \l(1 - \d\frac{\l( k_y^2+ m^2 \r) v_s^2}{\l(j + \nu \r)^2 \o_0^2 - K^2 v_A^2} \r) \; \varphi_j \; e^{i j \t} \text{,} \label{B.G3exp} \end{align} where we used Eq. \eqref{recursion} to find the second equality. Now, solving Eq. \eqref{B.G1}, we find that \begin{align} G_1(t) &= \d\frac{1}{k_z v_A} \Bigg\{ \cos \l( k_z v_A t \r) \notag \\ & \qquad \l[ \d \int \sin \l( k_z v_A t \r) \l( K^2 \l( v_A^2 + v_s^2 \r) G(t) - k_z^2 v_A^2 G_3(t) \r) dt \r] \notag\\ & \quad- \sin \l( k_z v_A t \r) \notag \\ & \qquad \l[ \d \int \cos \l( k_z v_A t \r) \l( K^2 \l( v_A^2 + v_s^2 \r) G(t) - k_z^2 v_A^2 G_3(t) \r) dt \r] \Bigg\} \text{.} \label{B.G1eq} \end{align} Having derived expression \eqref{B.G3exp}, we can also work out Eq. \eqref{B.G1eq} and find the following solution for $G_1$: \begin{equation} \label{B.G1exp} G_1(\t) = K^2 v_s^2 \; e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \d\frac{1}{\l(j + \nu \r)^2 \o_0^2 - K^2 v_A^2} \; \varphi_j \; e^{i j \t} \text{.} \end{equation} Since $G_2$ satisfies the same differential equation, Eq. \eqref{B.G2}, its solution takes the same form: \begin{equation} \label{B.G2exp} G_2(\t) = K^2 v_s^2 \; e^{\mu \t} \d\sum_{j = -\infty}^{\infty} \d\frac{1}{\l(j + \nu \r)^2 \o_0^2 - K^2 v_A^2} \; \varphi_j \; e^{i j \t} \text{.} \end{equation} \end{document}
# Pervasive beyond room-temperature ferromagnetism in a doped van der Waals magnet Xiang Chen<EMAIL_ADDRESS>Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Physics Department, University of California, Berkeley, California 94720, USA Yu-Tsun Shao School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853, USA Rui Chen Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Sandhya Susarla Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Tom Hogan Quantum Design, Inc., San Diego, CA 92121, USA Yu He Department of Applied Physics, Yale University, New Haven, Connecticut, 06511, USA Physics Department, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Hongrui Zhang Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Siqi Wang NSF Nanoscale Science and Engineering Center (NSEC), 3112 Etcheverry Hall, University of California, Berkeley, California 94720, USA Jie Yao Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Peter Ercius The Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA David A. Muller School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853, USA Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, New York 14853, USA Ramamoorthy Ramesh Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Physics Department, University of California, Berkeley, California 94720, USA Robert J. Birgeneau<EMAIL_ADDRESS>Physics Department, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Lab, Berkeley, California 94720, USA Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA ###### Abstract The existence of long range magnetic order in low dimensional magnetic systems, such as the quasi-two-dimensional (2D) van der Waals (vdW) magnets, has attracted intensive studies of new physical phenomena. The vdW FeNGeTe2 ($N$ = 3, 4, 5; FGT) family is exceptional owing to its vast tunability of magnetic properties. In particular, a ferromagnetic ordering temperature ($T_{\text{C}}$) above room temperature at $N$ = 5 (F5GT) is observed. Here, our study shows that, by nickel (Ni) substitution of iron (Fe) in F5GT, a record high $T_{\text{C}}$ = 478(6) K is achieved. Importantly, pervasive, beyond-room-temperature ferromagnetism exists in almost the entire doping range of the phase diagram of Ni-F5GT. We argue that this striking observation in Ni-F5GT can be possibly due to several contributing factors, including increased 3D magnetic couplings due to the structural alterations. 
## I Introduction Spontaneous symmetry breaking is forbidden at non-zero temperatures in isotropic spin systems with dimensions $d$ $\leq$ 2 [1, 2]. Long range magnetic order in materials with reduced dimensionality can still be stabilized via both magnetic anisotropy and weak three-dimensional (3D) magnetic couplings [3, 4, 5]. In quasi-two-dimensional (quasi-2D) Heisenberg magnets, such as the van der Waals (vdW) bonded materials, the ordering process typically occurs as follows [6, 7, 8]. At the highest temperature, the system is expected to exhibit 2D, classical isotropic magnetic correlations. As the temperature $T$ is lowered, the correlation length grows exponentially with $1/T$ and at sufficiently large length scales there is inevitably a crossover from 2D Heisenberg to 3D Ising or XY behavior followed by a phase transition to the 3D long range magnetic order. The details of the crossover depend on the strength of the 3D magnetic interactions and the symmetry and strength of the magnetic anisotropy. Van der Waals materials represent exciting realizations of these phenomena plus they contain the broad prospect of important technological applications [9, 10, 11, 12, 13, 14, 15]. Among the prominent bulk vdW materials for studying quasi-2D magnetism, such as Cr2Ge2Te6 [16, 17], CrI3 [18, 19], Fe3GeTe2 (F3GT) [20, 21, 22, 23, 24, 25] and CrTe2 [26, 27], the F3GT system is exceptional owing to the coupling between the electronic and magnetic degrees of freedom and its remarkable tunability. The bulk form of F3GT has a ferromagnetic (FM) transition temperature $T_{\text{C}}$ $\sim$ 230 K, which can be readily enhanced up to room temperature (RT), by either patterning the microstructure or applying ionic gating [28, 29]. By intercalating more iron (Fe) into F3GT, $i.e.$, FeNGeTe2 ($N$=4 for F4GT or 5 for F5GT) [30, 31, 32], the marked effects are the elevated $T_{\text{C}}$ up to RT and enhanced magnetization while only moderately increasing the magnetic moment size [30, 32, 33]. The F5GT compound is particularly interesting because of its advantageous characteristics for potential RT spintronic applications, including a $T_{\text{C}}$ above RT ($\sim$315 K), large magnetization ($\sim$700 kA/m), strong spin lattice coupling [32, 33] and exotic magnetic textures [34, 35]. To achieve practical applications of the quasi-2D magnetic materials, one common theme is to strengthen the FM and enhance the $T_{\text{C}}$ of the vdW magnets. Some proven avenues to dramatically raise the $T_{\text{C}}$ include gating [28, 37], applying pressure or strain [38], ion intercalation or carrier doping [39, 40]. By cobalt (Co) substitution of F5GT, $i.e.$ $({\text{Fe}}_{1-x}{\text{Co}}_{x})_{5+\delta}{\text{GeTe}}_{2}$ (Co-F5GT), the magnetic ordering temperature is further increased to $\sim$ 360 K, along with the evolution of the magnetic ground state [41, 42, 36, Zhang_2022_FCGT_sciadv]. More strikingly, at $x$ = 0.5 of Co-F5GT, a novel wurtzite-type polar magnetic metal was discovered, along with the N$\acute{\text{e}}$el-type skyrmion lattice at RT [36, Zhang_2022_FCGT_sciadv]. The aforementioned discoveries of the F5GT system highlight its immense tunability and capacity for applications in next- generation spintronics. In this letter, we report pervasive, well-beyond-room-temperature FM in the F5GT vdW magnet with nickel (Ni) substitution, $i.e.$, $({\text{Fe}}_{1-x}{\text{Ni}}_{x})_{5+\delta}{\text{GeTe}}_{2}$ (Ni-F5GT). 
Strikingly, a record high $T_{\text{C}}$ = 478(6) K is reached at a Ni doping level of $x=0.36(2)$. In addition, the FM order persists robustly against Ni replacement until $x$ $\sim$ 0.86(1), beyond which only weak paramagnetism exists. Several factors might be relevant for the dramatic enhancement of the FM in Ni-F5GT. Figure 1: (Color online) (a) Illustration of the AA-stacking order of both the Fe-rich and Ni-rich domains in Ni doped Fe5GeTe2 (Ni-F5GT) (Green symbols: Fe; Orange: Ni; Purple: Te; Blue: Ge) [43]. (b) The (0 0 L) type of peaks of Ni-F5GT at select Ni doping levels $x$. (c) Effective layer spacing $d$ as a function of Ni doping (colored symbols) or Co doping (gray symbols, [36]) in Fe5+δGeTe2. Vertical dashed lines indicate different regions in Ni-F5GT. The Ni-F5GT single crystals were grown by the chemical vapor transfer method [32, 33, 43]. The exact Ni doping level $x$ and total cation count per formula unit ($f.u.$) $5+\delta$ were verified by energy-dispersive X-ray spectroscopy (EDX/EDS) (Fig. S1). The lattice and atomic structure of Ni-F5GT were investigated by a combination of techniques, including powder and/or single crystal x-ray diffraction (XRD) and high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM). Magnetization measurements were performed via a commercial Quantum Design MPMS3. Figure 2: (Color online) Nanoscale phase separation of Ni-F5GT at $x$ = 0.36(2). (a) Energy-dispersive X-ray spectroscopy map showing both the Fe-rich (green) and Ni-rich (red) regions. (b)-(d) Atomic resolution HAADF-STEM image demonstrating the two types of domains. The enlarged (c) Fe-rich and (d) Ni- rich domains show flat and rumpled atomic planes, respectively. The Te-Te planes in (b) are color-coded based on the number and the rumpling of the atomic layers. Scale bars in (c)-(d), 5 Å. The unit cell of F5GT is composed of three identical layers (stacks) with the rhombohedral layer stacking (space group R$\bar{3}$m), labelled as ABC- stacking [32]. Upon introducing Ni into F5GT, such as at $x$ = 0.19(1), the crystal structure undergoes a transition from ABC-stacking to AA-stacking (space group P$\bar{3}$m1, illustrated in Fig. 1(a)), as confirmed by single crystal XRD (Fig. S2). The AA-stacking order is also verified by HAADF-STEM images at other doping levels (Fig. 2 and Fig. S5). When viewed along the [1 1 0] direction, the identical layers stack exactly on top of each other along the $c$ direction and are separated by a vdW spatial gap. Within each layer (stack), the center germanium (Ge, blue symbols) atom is surrounded by three different sites of iron (Fe1, Fe2 and Fe3, green) atoms which are further protected by the outer Te (purple) atoms, $i.e.$, forming a Te-Fe1-Fe3-Fe2-Ge- Fe2-Fe3-Fe1-Te plane-like (labelled as TGT-$plane$) structure. With increasing Ni doping in Ni-F5GT, the excess cation count $\delta$ gradually increases from $\delta\sim$ 0 at $x$ = 0 to $\delta$ $\sim$ 0.5 at $x$ $\sim$ 0.3 (labelled as region (I) with 0 $\leq$ $x$ $\leq$ 0.3), and saturates when $x$ $>$ 0.3 (Fig. S1). Interestingly, in the intermediate doping range 0.36(2) $\leq$ $x$ $\leq$ 0.86(1) (region (II)), although the sample still maintains the AA-stacking, two types of domains coexist, as evidenced from the splitting of the (0 0 $L$) peaks from the XRD measurements (Fig. 1(b)), the direct atomic visualization from the HAADF-STEM images (Fig. 2 and Fig. S5) and the STEM-EDX maps (Fig. 2 and Fig. S4). As an example at $x$ = 0.36(2) (Fig. 
2 and illustrated in Fig. 1(a)), the two types of domains correspond to Fe-rich domains (Fig. 2(c)) and Ni-rich domains (Fig. 2(d)), respectively. In the Fe-rich domains, the TGT-$planes$ are almost flat and more extended in space, while in the Ni-rich domains the TGT-$planes$ are rumpled. Because of the contrasting local atomic arrangements of the Fe and Ni atoms within the TGT-$planes$, the effective layer spacing $d$ of either the Fe-rich or Ni-rich domain shows a strong deviation from the value at $x$ = 0 (Fig. 1(c)). This strong alteration may have a dramatic impact on the electronic and/or magnetic properties of Ni-F5GT. While the Ni-F5GT samples remain metallic (Fig. S6), the magnetic properties are strongly influenced by Ni substitution (Fig. 3 and Fig. S7). With moderate Ni replacement, the $T_{C}$ of Ni-F5GT is already considerably enhanced (Figs. 3(a)-3(b)). For instance, at $x$ = 0.19(1), the experimentally determined $T_{C}$ = 395(6) K already approaches 400 K (Fig. 3(a)). Meanwhile, the Ising spin moment switches to the in-plane direction and seems to remain so until $x$ $\sim$ 0.86(1) (Fig. 3(e) and Fig. S7). Upon further increasing the Ni content, a record high $T_{C}$ $=$ 478(6) K is achieved at $x$ = 0.36(2), together with the maximum in-plane ($H_{C}^{ab}$ $\approx$ 500 Oe) and out-of-plane ($H_{C}^{c}$ $\approx$ 1600 Oe) coercive fields (Figs. 3(c), 3(f) and Fig. S7). After reaching the maximum $T_{C}$ in Ni-F5GT, the FM order is only gradually weakened by further Ni doping. Strikingly, even at $x$ = 0.86(1), the as-grown sample of Ni-F5GT maintains an above-room-temperature $T_{C}$ $\approx$ 380 K, which is only lowered and stabilized at 220 K or 150 K, depending on how the sample is thermally cycled above its original $T_{C}$ (Fig. S8). Only upon further Ni replacement beyond this point (0.86(1) $<$ $x$ $\leq$ 1, region (III)) is the FM order of Ni-F5GT completely suppressed, leaving only weak paramagnetism, accompanied by the adoption of a different, layered tetragonal structure (space group I4/mmm), as in Ni5.5GeTe2 [44]. The saturation magnetic moment per $f.u.$ of Ni-F5GT at 2 K decreases approximately linearly with increasing Ni content (Figs. 3(c),(e) and Fig. S7), from $\sim$10 $\mu_{B}$/$f.u.$ at $x$ = 0 to nearly zero (weakly paramagnetic) at $x$ = 1. This implies that the Ni dopants are not magnetic and only dilute the FM in Ni-F5GT. Surprisingly, the magnetic ordering temperature $T_{C}$ does not follow this expected trend (Fig. 4). Instead, pervasive, above-room-temperature FM exists over almost the entire range of the phase diagram (0 $\leq$ $x$ $\leq$ 0.79(1)). In particular, in region (I) of the phase diagram of Ni-F5GT, the $T_{C}$ shows a strong, positive deviation from Vegard’s-law behavior ($i.e.$, a simple dilution of the magnetic component), suggesting that additional factors are at play in the unusual evolution of $T_{C}$. Our study demonstrates the remarkable Ni-enhanced FM in Ni-F5GT, as summarized in Fig. 4, where three different regions are categorized based on the structural and magnetic characterization. Figure 3: (Color online) Magnetization data of Ni-F5GT at selected Ni doping levels $x$. (a)-(b) Temperature-dependent magnetization of Ni-F5GT (external in-plane magnetic field is 50 Oe). For clarity, the magnetization data at $x$ = 0.97(1) and $x$ = 1 are multiplied by 200 and 2000, respectively. (c)-(d) In-plane isothermal magnetization at $T$ = 2 K (c) and $T$ = 400 K (d), respectively. 
(e) Saturation moment per formula unit at 2 K under the magnetic field of 7 T. (f) Coercive fields extracted from the isothermal magnetization at $T$ = 2 K. In (e)-(f), the magnetic field is applied along the $ab$-plane (red symbols) or $c$-direction (green symbols). The unique feature of Ni-F5GT is the immense impact of Ni substitution on the lattice and magnetism, including the radical change of the layer spacing $d$ (Fig. 1(c)) and the local atomic arrangements (Fig. 1(a), Fig. 2, Figs. S2-S5), as compared to F5GT or Co-F5GT [32, 36]. It is evident that the change of the layer spacing $d$ in Ni-F5GT is significantly more pronounced than its evolution in Co-F5GT. Importantly, the layer spacing $d$ of the Fe-rich domains (green symbols in Fig. 1(c)) closely tracks the evolution of $T_{\text{C}}$ in Ni-F5GT at 0 $\leq$ $x$ $\leq$ 0.86(1) (Fig. 4). This is supported by the experimental observation that the Fe-rich domains maintain the AA-stacking order with the straight-and-flat TGT-$planes$ over a broad Ni doping range 0 $<$ $x$ $\leq$ 0.86(1) and thus may be mainly responsible for the robust FM in Ni-F5GT. The Ni dopants are also indispensable for donating electrons to the system and forming the Ni-rich domains in region (II), which help preserve the Fe-rich domains. This phase separation in region (II) naturally explains the domain-pinning-enhanced coercive fields (Fig. 3(f)). Meanwhile, the preserved Fe-rich domains maintain the robust $T_{C}$ of Ni-F5GT in this region. Our work on Ni-F5GT reveals a complicated yet intriguing system, in which the lattice, electronic and magnetic degrees of freedom are closely intertwined. To understand the enhancement of FM in Ni-F5GT, especially in region (I) (Fig. 4), it is necessary to consider the possible effects of both electron doping and structural alteration by Ni replacement. Carrier doping is often detrimental to correlation-induced long-range magnetic order, as recognized in the unconventional superconductors, such as the cuprates and iron pnictides [45, 46]. For itinerant FM, charge doping can, in some cases, help meet the Stoner criterion and actually promote magnetic order [47]. The related F3GT system has both itinerant and localized spin moment contributions to the magnetism [21, 23, 28, 24, 25, 48, 49], a picture that may also apply to F5GT. Hence, the ordering temperature of F3GT can be elevated by electrostatic gating of its thin-layer form [28, 37]. In Ni-F5GT, the electrons provided by the Ni dopants may greatly influence the electronic band structure and the density of states (DOS) near the Fermi level ($E_{\text{F}}$). Alternatively, the magnetic exchange coupling $J_{i,j}$ between spins $\boldsymbol{S}_{i}$ and $\boldsymbol{S}_{j}$ on sites $i$ and $j$ might also be altered via the Ruderman–Kittel–Kasuya–Yosida (RKKY) exchange, since the electronic structure is likely altered substantially by Ni doping [28, 25, 50, 51]. Altogether, the itinerant FM might also be enhanced by electron doping in Ni-F5GT. We now focus on the localized spin moments and consider a simple Heisenberg model with a weak magnetic anisotropy, into which the magnetic contributions from the itinerant FM can also be effectively mapped [52, 28]: $H=\sum_{i<j}J_{i,j}\boldsymbol{S}_{i}\cdot\boldsymbol{S}_{j}-\sum_{i}A(S_{i}^{z})^{2}$ (1) Here, $A$ is the single-ion anisotropy ($A$ $>$ 0 for an Ising spin moment). 
Since $A$ is small and close to zero in Ni-F5GT [33], a mean-field treatment results in the magnetic transition temperature [28, 12]: $T_{C}=\frac{S(S+1)}{3k_{B}}\left(z_{nn}J_{nn}+\dots\right)$ (2) where $z_{nn}$ is the coordination number of the nearest-neighbouring sites, $S$ is the magnetic spin quantum number and $J_{nn}$ the nearest-neighbour exchange coupling. On the mean-field level, a larger $z_{nn}$ or $J_{nn}$ promises a larger $T_{C}$. Perhaps this is why the ordering temperature in Fe$_{N}$GeTe$_{2}$ is quickly increased from $T_{C}$ $\sim$ 230 K at $N$ = 3 to $T_{C}$ $\sim$ 317 K at $N$ = 5 [20, 21, 30, 32, 33]. In Ni-F5GT, the average magnetic moment per Fe is almost unchanged and the in-plane FM indicates a small yet negative $A$ (Fig. 3(e) and Fig. S7), hence neither $S$ nor $A$ is responsible for the enhancement of the FM. However, the structural alterations may affect both $z_{nn}$ and $J_{i,j}$, and therefore are strong candidates for explaining the further enhanced $T_{C}$. Firstly, a small increment of $\delta$ is observed for lightly Ni doped F5GT (region (I) in Fig. 4 and Fig. S1). One direct consequence is a larger occupancy of the Fe1 site, up to $\sim$75$\%$ at $x$ $\sim$ 0.3 as compared to a maximum of $\sim$50$\%$ at $x$ = 0. Considering that the Fe1-Fe3 bond length is the shortest among all of the direct Fe-Fe bonds (Fig. S3), the Fe1 site might be critical for determining the $T_{C}$ in Ni-F5GT. Therefore, a larger $\delta$, which implies a greater $z_{nn}$ of the Fe1 site, might promote a higher $T_{C}$ (Eqn. (2)). Secondly, with more Ni substitution in region (I), the explicit effect is that the TGT-$planes$ of the Fe-rich domains become flatter and more extended in space (Fig. 2(c), Fig. S3, Figs. S5(c),(e)). Microscopically, other than the small spatial re-arrangements of the Fe2 and Fe3 sites, it is the Fe1 site that becomes increasingly distant from the Fe3 site along the $c$ direction while keeping the Fe-Fe bond lengths nearly unchanged (Fig. S3). This explains the enlargement of the layer spacing $d$ (Fig. 1(c)), which positively correlates with the $T_{C}$ in Ni-F5GT. Understandably, the intralayer and interlayer exchange couplings, especially those related to the Fe1 site, might be effectively strengthened, rendering the 3D magnetic interactions stronger and eventually leading to the enhancement of $T_{C}$ in Ni-F5GT. Figure 4: (Color online) $T_{C}$ of Ni-F5GT as a function of Ni doping level $x$. Empty symbols: $T_{C}$ for as-grown samples; solid symbols: $T_{C}$ after thermal cycling. Vertical dashed lines indicate different regions in Ni-F5GT. In summary, our work reveals a new arena for studying vdW magnetic metals with strong room-temperature ferromagnetism. A record-high ferromagnetic ordering temperature $T_{C}$ $=$ 478(6) K is realized in Ni-F5GT at $x$ = 0.36(2). Despite the non-magnetic dilution, several factors are speculated to sustain the pervasive, well-above-room-temperature FM in Ni-F5GT. Candidate contributors include the increased Fe1-site occupancy, magnetic exchange couplings altered by the structural modifications, and the electron doping effect. Clearly, as stated, these ideas are purely speculative and require much more thorough investigation. 
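As a rough illustration of the mean-field scaling of Eq. (2) invoked in the Fe1-site argument above, the short sketch below (Python) shows how $T_{C}$ tracks the product $z_{nn}J_{nn}$. The effective spin and the absolute value of $z_{nn}J_{nn}$ are purely illustrative assumptions, chosen only so that the starting point matches the $\sim$315 K of pristine F5GT; they are not values extracted in this work.

```python
k_B = 1.380649e-23        # Boltzmann constant [J/K]
meV = 1.602176634e-22     # 1 meV in J

def tc_mean_field(S, znn_Jnn_meV):
    """Mean-field T_C of Eq. (2), keeping only the nearest-neighbour term."""
    return S * (S + 1.0) / (3.0 * k_B) * znn_Jnn_meV * meV

# Illustrative (not fitted) inputs: an effective S = 1, with z_nn*J_nn chosen
# so that T_C matches the ~315 K of pristine F5GT.
S = 1.0
znn_Jnn_0 = 315.0 * 3.0 * k_B / (S * (S + 1.0)) / meV     # ~41 meV

for scale in (1.0, 1.1, 1.25, 1.5):
    T_C = tc_mean_field(S, scale * znn_Jnn_0)
    print(f"z_nn*J_nn scaled by {scale:4.2f} -> T_C = {T_C:5.0f} K")
# On this level T_C is simply proportional to z_nn*J_nn, so a larger effective
# Fe1 coordination (higher occupancy) or stiffer exchange raises T_C linearly.
```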
Although further research is needed to understand fully the mechanism of the enhanced magnetism, our study highlights that the Ni-F5GT system is an extremely rare example of strongly enhanced ferromagnetism, in spite of the detrimental factors such as the non-magnetic dilution and electron doping effects introduced by Ni dopants. In addition, Ni-F5GT offers unique or alternative avenues towards enhanced coercivity, varying length scale of phase separation, thermal cycling influenced $T_{C}$ and potential relevance to skyrmionics in F5GT and other related vdW magnets [41, 42, 36, Zhang_2022_FCGT_sciadv]. X.C. wishes to thank Nicholas S. Settineri and Weiwei Xie for some single crystal XRD measurement and acknowledge the Applications Group at Quantum Design for their contribution of high temperature (300-600 K) magnetization measurements to this work. Work at Lawrence Berkeley National Laboratory was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 within the Quantum Materials Program (KC2202). Y.T.S., H.Z. and D.A.M acknowledge financial support from the Department of Defense, Air Force Office of Scientific Research under award FA9550-18-1-0480. R.C. and J.Y. acknowledge the support by Intel Corporation under an award titled Valleytronics center. The electron microscopy studies were performed at the Cornell Center for Materials Research, a National Science Foundation (NSF) Materials Research Science and Engineering Centers program (DMR 1719875). The microscopy work at Cornell was supported by the NSF PARADIM DMR-2039380, with additional support from Cornell University, the Weill Institute and the Kavli Institute at Cornell. S.S. acknowledges the help from Dr. Rohan Dhall and is supported by the Quantum Materials Program under the Basic Energy Sciences, Department of Energy. The microscopy work was performed at Molecular Foundry that is supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The devices for transport measurements were fabricated in the UC Berkeley Marvell Nanofabrication Laboratory. ## References * Mermin and Wagner [1966] N. D. Mermin and H. Wagner, Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Models, Phys. Rev. Lett. 17, 1133 (1966). * Hohenberg [1967] P. C. Hohenberg, Existence of Long-Range Order in One and Two Dimensions, Phys. Rev. 158, 383 (1967). * Onsager [1944] L. Onsager, Crystal Statistics. I. A Two-Dimensional Model with an Order-Disorder Transition, Phys. Rev. 65, 117 (1944). * J L Lado and J Fernández-Rossier [2017] J L Lado and J Fernández-Rossier, On the origin of magnetic anisotropy in two dimensional ${\mathrm{Cr}}{\mathrm{I}}_{3}$, 2D Materials 4, 035002 (2017). * Kim _et al._ [2019] D.-H. Kim, K. Kim, K.-T. Ko, J. Seo, J. S. Kim, T.-H. Jang, Y. Kim, J.-Y. Kim, S.-W. Cheong, and J.-H. Park, Giant Magnetic Anisotropy Induced by Ligand $LS$ Coupling in Layered Cr Compounds, Phys. Rev. Lett. 122, 207201 (2019). * Als-Nielsen _et al._ [1976] J. Als-Nielsen, R. J. Birgeneau, H. J. Guggenheim, and G. Shirane, Critical behaviour of a two-dimensional random antiferromagnet: Rb2Mn${}_{0}.5$Ni${}_{0}.5$F4, Journal of Physics C: Solid State Physics 9, L121 (1976). * Chakravarty _et al._ [1989] S. Chakravarty, B. I. Halperin, and D. R. 
Nelson, Two-dimensional quantum Heisenberg antiferromagnet at low temperatures, Phys. Rev. B 39, 2344 (1989). * Birgeneau _et al._ [1999] R. J. Birgeneau, M. Greven, M. A. Kastner, Y. S. Lee, B. O. Wells, Y. Endoh, K. Yamada, and G. Shirane, Instantaneous spin correlations in ${\mathrm{La}}_{2}{\mathrm{CuO}}_{4}$, Phys. Rev. B 59, 13788 (1999). * Kosterlitz and Thouless [1973] J. M. Kosterlitz and D. J. Thouless, Ordering, metastability and phase transitions in two-dimensional systems, Journal of Physics C: Solid State Physics 6, 1181 (1973). * Park [2016] J.-G. Park, Opportunities and challenges of 2D magnetic van der Waals materials: magnetic graphene?, Journal of Physics: Condensed Matter 28, 301001 (2016). * Burch _et al._ [2018] K. S. Burch, D. Mandrus, and J.-G. Park, Magnetism in two-dimensional van der Waals materials, Nature 563, 47 (2018). * Gibertini _et al._ [2019] M. Gibertini, M. Koperski, A. F. Morpurgo, and K. S. Novoselov, Magnetic 2D materials and heterostructures, Nature Nanotechnology 14, 408 (2019). * Gong and Zhang [2019] C. Gong and X. Zhang, Two-dimensional magnetic crystals and emergent heterostructure devices, Science 363, 10.1126/science.aav4450 (2019). * Wang _et al._ [2020] M.-C. Wang, C.-C. Huang, C.-H. Cheung, C.-Y. Chen, S. G. Tan, T.-W. Huang, Y. Zhao, Y. Zhao, G. Wu, Y.-P. Feng, H.-C. Wu, and C.-R. Chang, Prospects and Opportunities of 2D van der Waals Magnetic Systems, Annalen der Physik 532, 1900452 (2020). * Du _et al._ [2021] L. Du, T. Hasan, A. Castellanos-Gomez, G.-B. Liu, Y. Yao, C. N. Lau, and Z. Sun, Engineering symmetry breaking in 2D layered materials, Nature Reviews Physics 3, 193 (2021). * Carteaux _et al._ [1995] V. Carteaux, D. Brunet, G. Ouvrard, and G. Andre, Crystallographic, magnetic and electronic structures of a new layered ferromagnetic compound ${\mathrm{Cr}}_{2}{\mathrm{Ge}}_{2}{\mathrm{Te}}_{6}$, Journal of Physics: Condensed Matter 7, 69 (1995). * Gong _et al._ [2017] C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, and X. Zhang, Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals, Nature 546, 265 (2017). * McGuire _et al._ [2015] M. A. McGuire, H. Dixit, V. R. Cooper, and B. C. Sales, Coupling of Crystal Structure and Magnetism in the Layered, Ferromagnetic Insulator ${\mathrm{Cr}}{\mathrm{I}}_{3}$, Chem. Mater. 27, 612 (2015). * Huang _et al._ [2017] B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, and X. Xu, Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit, Nature 546, 270 (2017). * Deiseroth _et al._ [2006] H.-J. Deiseroth, K. Aleksandrov, C. Reiner, L. Kienle, and R. K. Kremer, ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ and ${\mathrm{Ni}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ – Two New Layered Transition-Metal Compounds: Crystal Structures, HRTEM Investigations, and Magnetic and Electrical Properties, European Journal of Inorganic Chemistry 2006, 1561 (2006). * Chen _et al._ [2013] B. Chen, J. Yang, H. Wang, M. Imai, H. Ohta, C. Michioka, K. Yoshimura, and M. Fang, Magnetic Properties of Layered Itinerant Electron Ferromagnet ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$, Journal of the Physical Society of Japan 82, 124711 (2013). * Fei _et al._ [2018] Z. Fei, B. Huang, P. Malinowski, W. Wang, T. Song, J. Sanchez, W. Yao, D. Xiao, X. Zhu, A. F. May, W. Wu, D. 
H. Cobden, J.-H. Chu, and X. Xu, Two-dimensional itinerant ferromagnetism in atomically thin ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$, Nature Materials 17, 778 (2018). * May _et al._ [2016] A. F. May, S. Calder, C. Cantoni, H. Cao, and M. A. McGuire, Magnetic structure and phase stability of the van der Waals bonded ferromagnet ${\mathrm{Fe}}_{3-x}{\mathrm{GeTe}}_{2}$, Phys. Rev. B 93, 014411 (2016). * Kim _et al._ [2018] K. Kim, J. Seo, E. Lee, K.-T. Ko, B. S. Kim, B. G. Jang, J. M. Ok, J. Lee, Y. J. Jo, , W. Kang, J. H. Shim, C. Kim, H. W. Yeom, B. Il Min, B.-J. Yang, and J. S. Kim, Large anomalous Hall current induced by topological nodal lines in a ferromagnetic van der Waals semimetal, Nature Materials 17, 794 (2018). * Zhang _et al._ [2018] Y. Zhang, H. Lu, X. Zhu, S. Tan, W. Feng, Q. Liu, W. Zhang, Q. Chen, Y. Liu, X. Luo, D. Xie, L. Luo, Z. Zhang, and X. Lai, Emergence of Kondo lattice behavior in a van der Waals itinerant ferromagnet, ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$, Science Advances 4, 10.1126/sciadv.aao6791 (2018). * Freitas _et al._ [2015] D. C. Freitas, R. Weht, A. Sulpice, G. Remenyi, P. Strobel, F. Gay, J. Marcus, and M. Núñez-Regueiro, Ferromagnetism in layered metastable 1$T$-${\mathrm{Cr}}{\mathrm{Te}}_{2}$ , Journal of Physics: Condensed Matter 27, 176002 (2015). * Sun _et al._ [2020] X. Sun, W. Li, X. Wang, Q. Sui, T. Zhang, Z. Wang, L. Liu, D. Li, S. Feng, S. Zhong, H. Wang, V. Bouchiat, M. Nunez Regueiro, N. Rougemaille, J. Coraux, A. Purbawati, A. Hadj-Azzem, Z. Wang, B. Dong, X. Wu, T. Yang, G. Yu, B. Wang, Z. Han, X. Han, and Z. Zhang, Room temperature ferromagnetism in ultra-thin van der Waals crystals of 1$T$-${\mathrm{Cr}}{\mathrm{Te}}_{2}$ , Nano Research 13, 3358 (2020). * Deng _et al._ [2018] Y. Deng, Y. Yu, Y. Song, J. Zhang, N. Z. Wang, Z. Sun, Y. Yi, Y. Z. Wu, S. Wu, J. Zhu, J. Wang, X. H. Chen, and Y. Zhang, Gate-tunable room-temperature ferromagnetism in two-dimensional ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$, Nature 563, 94 (2018). * Li _et al._ [2018] Q. Li, M. Yang, C. Gong, R. V. Chopdekar, A. T. N’Diaye, J. Turner, G. Chen, A. Scholl, P. Shafer, E. Arenholz, A. K. Schmid, S. Wang, K. Liu, N. Gao, A. S. Admasu, S.-W. Cheong, C. Hwang, J. Li, F. Wang, X. Zhang, and Z. Qiu, Patterning-Induced Ferromagnetism of ${\mathrm{Fe}}_{3}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ van der Waals Materials beyond Room Temperature, Nano Letters 18, 5974 (2018). * Seo _et al._ [2020] J. Seo, D. Y. Kim, E. S. An, K. Kim, G.-Y. Kim, S.-Y. Hwang, D. W. Kim, B. G. Jang, H. Kim, G. Eom, S. Y. Seo, R. Stania, M. Muntwiler, J. Lee, K. Watanabe, T. Taniguchi, Y. J. Jo, J. Lee, B. I. Min, M. H. Jo, H. W. Yeom, S.-Y. Choi, J. H. Shim, and J. S. Kim, Nearly room temperature ferromagnetism in a magnetic metal-rich van der Waals metal, Science Advances 6, 10.1126/sciadv.aay8912 (2020). * Stahl _et al._ [2018] J. Stahl, E. Shlaen, and D. Johrendt, The van der Waals Ferromagnets ${\mathrm{Fe}}_{5-\delta}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ and ${\mathrm{Fe}}_{5-\delta-x}{\mathrm{Ni}}_{x}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ – Crystal Structure, Stacking Faults, and Magnetic Properties, Zeitschrift f${\ddot{\text{u}}}$r anorganische und allgemeine Chemie 644, 1923 (2018). * May _et al._ [2019] A. F. May, D. Ovchinnikov, Q. Zheng, R. Hermann, S. Calder, B. Huang, Z. Fei, Y. Liu, X. Xu, and M. A. McGuire, Ferromagnetism Near Room Temperature in the Cleavable van der Waals Crystal ${\mathrm{Fe}}_{5}{\mathrm{Ge}}{\mathrm{Te}}_{2}$ , ACS Nano 13, 4436 (2019). 
* Zhang _et al._ [2020] H. Zhang, R. Chen, K. Zhai, X. Chen, L. Caretta, X. Huang, R. V. Chopdekar, J. Cao, J. Sun, J. Yao, R. Birgeneau, and R. Ramesh, Itinerant ferromagnetism in van der Waals ${\mathrm{Fe}}_{5-x}{\mathrm{Ge}\mathrm{Te}}_{2}$ crystals above room temperature, Phys. Rev. B 102, 064417 (2020). * Ly _et al._ [2021] T. T. Ly, J. Park, K. Kim, H.-B. Ahn, N. J. Lee, K. Kim, T.-E. Park, G. Duvjir, N. H. Lam, K. Jang, C.-Y. You, Y. Jo, S. K. Kim, C. Lee, S. Kim, and J. Kim, Direct Observation of Fe-Ge Ordering in ${\mathrm{Fe}}_{5-x}{\mathrm{Ge}\mathrm{Te}}_{2}$ Crystals and Resultant Helimagnetism, Advanced Functional Materials 31, 2009758 (2021). * Gao _et al._ [2020] Y. Gao, Q. Yin, Q. Wang, Z. Li, J. Cai, T. Zhao, H. Lei, S. Wang, Y. Zhang, and B. Shen, Spontaneous (Anti)meron Chains in the Domain Walls of van der Waals Ferromagnetic ${\mathrm{Fe}}_{5-x}{\mathrm{Ge}\mathrm{Te}}_{2}$, Advanced Materials 32, 2005228 (2020). * Zhang _et al._ [2021] H. Zhang, Y.-T. Shao, R. Chen, X. Chen, S. Susarla, J. T. Reichanadter, L. Caretta, X. Huang, N. S. Settineri, Z. Chen, J. Zhou, E. Bourret-Courchesne, P. Ercius, J. Yao, J. B. Neaton, D. A. Muller, R. J. Birgeneau, and R. Ramesh, A room temperature polar ferromagnetic metal (2021), arXiv:2106.00833 [cond-mat.mtrl-sci] . * Verzhbitskiy _et al._ [2020] I. A. Verzhbitskiy, H. Kurebayashi, H. Cheng, J. Zhou, S. Khan, Y. P. Feng, and G. Eda, Controlling the magnetic anisotropy in ${\mathrm{Cr}}_{2}{\mathrm{Ge}}_{2}{\mathrm{Te}}_{6}$ by electrostatic gating, Nature Electronics 3, 460 (2020). * Bhoi _et al._ [2021] D. Bhoi, J. Gouchi, N. Hiraoka, Y. Zhang, N. Ogita, T. Hasegawa, K. Kitagawa, H. Takagi, K. H. Kim, and Y. Uwatoko, Nearly room temperature ferromagnetism in pressure-induced correlated metallic state of van der Waals insulator CrGeTe3 (2021), arXiv:2107.10573 [cond-mat.str-el] . * Weber _et al._ [2019] D. Weber, A. H. Trout, D. W. McComb, and J. E. Goldberger, Decomposition-Induced Room-Temperature Magnetism of the Na-Intercalated Layered Ferromagnet $\mathrm{F}{\mathrm{e}}_{3-x}\mathrm{GeT}{\mathrm{e}}_{2}$, Nano Letters 19, 5031 (2019). * Wang _et al._ [2019] N. Wang, H. Tang, M. Shi, H. Zhang, W. Zhuo, D. Liu, F. Meng, L. Ma, J. Ying, L. Zou, Z. Sun, and X. Chen, Transition from Ferromagnetic Semiconductor to Ferromagnetic Metal with Enhanced Curie Temperature in ${\mathrm{Cr}}_{2}{\mathrm{Ge}}_{2}{\mathrm{Te}}_{6}$ via Organic Ion Intercalation, Journal of the American Chemical Society 141, 17166 (2019). * May _et al._ [2020] A. F. May, M.-H. Du, V. R. Cooper, and M. A. McGuire, Tuning magnetic order in the van der Waals metal ${\mathrm{Fe}}_{5}{\mathrm{GeTe}}_{2}$ by cobalt substitution, Phys. Rev. Materials 4, 074008 (2020). * Tian _et al._ [2020] C. Tian, F. Pan, S. Xu, K. Ai, T. Xia, and P. Cheng, Tunable magnetic properties in van der Waals crystals (Fe1-xCox)5GeTe2, Applied Physics Letters 116, 202402 (2020). * [43] See Supplemental Material at ? for additional data of sample synthesis, characterizations, transport measurements, magnetization, etc. * Deiseroth _et al._ [2007] H.-J. Deiseroth, F. Spirovski, C. Reiner, and M. Schlosser, Crystal structures of nickel germanium selenide, Ni5.45GeSe2, and nickel germanium telluride, Ni5.45GeTe2, Zeitschrift f${\ddot{\text{u}}}$r Kristallographie - New Crystal Structures 222, 171 (2007). * Lee _et al._ [2006] P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insulator: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17 (2006). * Dai [2015] P. 
Dai, Antiferromagnetic order and spin dynamics in iron-based superconductors, Rev. Mod. Phys. 87, 855 (2015). * Huang _et al._ [2021] J. Huang, Z. Wang, H. Pang, H. Wu, H. Cao, S.-K. Mo, A. Rustagi, A. F. Kemper, M. Wang, M. Yi, and R. J. Birgeneau, Flat-band-induced itinerant ferromagnetism in ${\mathrm{RbCo}}_{2}{\mathrm{Se}}_{2}$, Phys. Rev. B 103, 165105 (2021). * Calder _et al._ [2019] S. Calder, A. I. Kolesnikov, and A. F. May, Magnetic excitations in the quasi-two-dimensional ferromagnet ${\mathrm{Fe}}_{3-x}{\mathrm{GeTe}}_{2}$ measured with inelastic neutron scattering, Phys. Rev. B 99, 094423 (2019). * Xu _et al._ [2020] X. Xu, Y. W. Li, S. R. Duan, S. L. Zhang, Y. J. Chen, L. Kang, A. J. Liang, C. Chen, W. Xia, Y. Xu, P. Malinowski, X. D. Xu, J.-H. Chu, G. Li, Y. F. Guo, Z. K. Liu, L. X. Yang, and Y. L. Chen, Signature for non-Stoner ferromagnetism in the van der Waals ferromagnet $\mathrm{F}{\mathrm{e}}_{3}\mathrm{GeT}{\mathrm{e}}_{2}$, Phys. Rev. B 101, 201104 (2020). * Seo _et al._ [2021] J. Seo, E. S. An, T. Park, S.-Y. Hwang, G.-Y. Kim, K. Song, W.-s. Noh, J. Y. Kim, G. S. Choi, M. Choi, E. Oh, K. Watanabe, T. Taniguchi, J. H. Park, Y. J. Jo, H. W. Yeom, S.-Y. Choi, J. H. Shim, and J. S. Kim, Tunable high-temperature itinerant antiferromagnetism in a van der Waals magnet, Nature Communications 12, 2844 (2021). * Zhao _et al._ [2021] M. Zhao, B.-B. Chen, Y. Xi, Y. Zhao, H. Xu, H. Zhang, N. Cheng, H. Feng, J. Zhuang, F. Pan, X. Xu, W. Hao, W. Li, S. Zhou, S. X. Dou, and Y. Du, Kondo Holes in the Two-Dimensional Itinerant Ising Ferromagnet Fe3GeTe2, Nano Letters 21, 6117 (2021). * Prange and Korenman [1979] R. E. Prange and V. Korenman, Local-band theory of itinerant ferromagnetism. IV. Equivalent Heisenberg model, Phys. Rev. B 19, 4691 (1979).
# A three-wave mixing kinetic inductance traveling-wave amplifier with near- quantum-limited noise performance M. Malnou<EMAIL_ADDRESS>National Institute of Standards and Technology, Boulder, Colorado 80305, USA Department of Physics, University of Colorado, Boulder, Colorado 80309, USA M. R. Vissers National Institute of Standards and Technology, Boulder, Colorado 80305, USA J. D. Wheeler National Institute of Standards and Technology, Boulder, Colorado 80305, USA J. Aumentado National Institute of Standards and Technology, Boulder, Colorado 80305, USA J. Hubmayr National Institute of Standards and Technology, Boulder, Colorado 80305, USA J. N. Ullom National Institute of Standards and Technology, Boulder, Colorado 80305, USA Department of Physics, University of Colorado, Boulder, Colorado 80309, USA J. Gao National Institute of Standards and Technology, Boulder, Colorado 80305, USA Department of Physics, University of Colorado, Boulder, Colorado 80309, USA ###### Abstract We present a theoretical model and experimental characterization of a microwave kinetic inductance traveling-wave amplifier (KIT), whose noise performance, measured by a shot-noise tunnel junction (SNTJ), approaches the quantum limit. Biased with a dc current, the KIT operates in a three-wave mixing fashion, thereby reducing by several orders of magnitude the power of the microwave pump tone and associated parasitic heating compared to conventional four-wave mixing KIT devices. It consists of a 50 Ω artificial transmission line whose dispersion allows for a controlled amplification bandwidth. We measure $16.5^{+1}_{-1.3}$ dB of gain across a 2 GHz bandwidth with an input 1 dB compression power of -63 dBm, in qualitative agreement with theory. Using a theoretical framework that accounts for the SNTJ-generated noise entering both the signal and idler ports of the KIT, we measure the system-added noise of an amplification chain that integrates the KIT as the first amplifier. This system-added noise, $3.1\pm 0.6$ quanta (equivalent to $0.66\pm 0.15$ K) between 3.5 and 5.5 GHz, is the one that a device replacing the SNTJ in that chain would see. This KIT is therefore suitable to read large arrays of microwave kinetic inductance detectors and promising for multiplexed superconducting qubit readout. ## I INTRODUCTION Is it possible to build a quantum limited microwave amplifier with enough gain, bandwidth and power handling to simultaneously read thousands of frequency-multiplexed superconducting resonators, like those in qubit systems or microwave kinetic inductance detectors (MKIDs)? When designed with resonant structures, Josephson junction-based parametric amplifiers have demonstrated high gain and quantum limited performances Castellanos-Beltran _et al._ (2008); Bergeal _et al._ (2010); Roch _et al._ (2012); Mutus _et al._ (2013); Zhong _et al._ (2013); Lecocq _et al._ (2017); Malnou _et al._ (2018). However, despite efforts to increase the bandwidth up to a few hundred megahertz via impedance engineering Mutus _et al._ (2014); Roy _et al._ (2015), or to increase the power handling up to a few hundred femtowatts via Kerr engineering Frattini _et al._ (2017, 2018); Sivak _et al._ (2019), they still cannot read more than a handful of resonators simultaneously. When designed with nonresonant structures, i.e. 
transmission lines, Josephson traveling-wave parametric amplifiers (JTWPAs) have high gain over gigahertz bandwidth Macklin _et al._ (2015); White _et al._ (2015); Planat _et al._ (2020), but so far still exhibit similar power handling capabilities as their resonant counterparts. Recent studies Sivak _et al._ (2020); Zorin (2016, 2019) suggest that a three-wave mixing (3WM) JTWPA with a finely controlled and canceled Kerr nonlinearity should increase the device’s power handling tenfold. Compelling experiments have yet to prove the feasibility of this approach, for which the JTWPA’s design and fabrication increase in complexity. We propose to tackle this challenge using a different non-linear medium: starting with the intrinsic broadband and high power handling capabilities of a kinetic inductance traveling-wave amplifier (KIT) Eom _et al._ (2012), we build a near-quantum-limited amplifier. The current limitations on microwave amplification affect many scientific endeavors. Although proof-of-principle “quantum supremacy” was demonstrated by a quantum computer containing a few tens of qubits Arute _et al._ (2019), this number has to scale by at least an order of magnitude to run powerful quantum algorithms Shor (1994); Grover (1997). In the hunt for exoplanets, cameras with tens of thousands of MKID pixels are being built Szypryt _et al._ (2017), and proposals to search for very light warm dark matter also necessitate the use of a great number of MKID pixels Hochberg _et al._ (2016a, b). All these applications are either already limited by amplifier noise, or would greatly benefit from wideband, high gain, high power handling, quantum limited amplifiers. The KIT we present in this article is a step toward a practical, quantum- limited amplifier, whose bandwidth and power handling are compatible with high channel-count applications. Operating in a 3WM fashion, and fabricated out of a single layer of NbTiN, it consists of a weakly dispersive artificial transmission line Chaudhuri _et al._ (2017); Zobrist _et al._ (2019), for which we control the phase matched bandwidth with dispersion engineering. This limits spurious parametric conversion processes that otherwise degrade the power handling and noise performance. We measure an average gain of 16.5 dB over a 2 GHz bandwidth, and a typical 1 dB input compression power of -63 dBm within that bandwidth. Using a shot-noise tunnel junction (SNTJ) Spietz _et al._ (2003, 2006) we measure the added noise of a readout chain with the KIT as the first amplifier. To quote the true system-added noise of the chain, i.e. the one that a device replacing the SNTJ in that chain would see, we develop a novel theoretical framework that accounts for the SNTJ-generated noise illuminating both the signal and idler ports of the KIT. Failure to account for the idler port’s noise makes the system-added noise look significantly better than its true value. We demonstrate a true system-added noise of $3.1\pm 0.6$ quanta between 3.5 and 5.5 GHz, and estimate that the KIT alone is near-quantum-limited. It is the first time that the broadband noise properties of a KIT are fully characterized rigorously. ## II THEORY AND DESIGN KITs exploit the nonlinear kinetic inductance of a superconducting line to generate parametric interaction between pump, signal, and idler photons. In 3WM, a single pump photon converts into signal and idler photons, whereas four-wave mixing (4WM) converts two pump photons in this fashion. Operating a KIT with 3WM offers two key advantages over 4WM. 
First, as the pump frequency is far detuned from the amplification band, it is easily filtered, which is often necessary to avoid saturating the following amplifier. Second, it reduces the rf pump power because energy is extracted from dc power to convert pump photons, which avoids undesirable heating effects from the strong rf pump, including those happening in the packaging. More precisely, when biased with a dc current $I_{d}$, the KIT inductance per unit length L is Vissers _et al._ (2016): $L=L_{d}(1+\epsilon I+\xi I^{2}+\mathcal{O}(I^{3})),$ (1) where $I$ is the rf current, $L_{d}$ is the KIT inductance under dc bias, at zero rf current, $\epsilon=2I_{d}/(I_{*}^{2}+I_{d}^{2})$ and $\xi=1/(I_{*}^{2}+I_{d}^{2})$. The current $I_{*}$ sets the scale of the nonlinearity; it is typically $\sim 10^{3}$ higher than that of Josephson devices, thereby conferring KITs $\sim 10^{4}-10^{6}$ higher power handling capabilities than their Josephson equivalents. The term $\epsilon I$ permits 3WM, while $\xi I^{2}$ permits 4WM. Solving the coupled mode equations (CME) for a pump at frequency $\omega_{p}$, signal at $\omega_{s}$, and idler at $\omega_{i}$, such that $\omega_{p}=\omega_{s}+\omega_{i}$ yields the 3WM phase matching condition for exponential gain: $\Delta_{k}=-\frac{\xi I_{p0}^{2}}{8}(k_{p}-2k_{s}-2k_{i}),$ (2) see appendix A.1. Here, $\Delta_{k}=k_{p}-k_{s}-k_{i}$ with $k_{j}$, $j\in\\{p,s,i\\}$ the pump, signal and idler wavenumbers, and $I_{p0}$ is the rf pump amplitude at the KIT’s input. In a non-dispersive transmission line $\Delta_{k}=0$, and thus equation 2 can naturally be fulfilled over a very wide frequency range in KITs, where $I_{p0}\ll I_{*}$. Although desirable within the amplification bandwidth, it is undesirable outside of that bandwidth, where multiple parametric conversion processes take place Vissers _et al._ (2016); Erickson and Pappas (2017). These processes deplete the pump, thereby degrading the amplifier’s power handling, and they also induce multiple conversion mechanisms at each frequency, thereby increasing the amplifier-added noise. Figure 1: Schematic of the KIT artificial transmission line. (a) Three cells in series (not to scale) are arranged in a CPW topology. The highly inductive central line (red) is flanked by IDC fingers, which match it to 50 Ω. Each finger constitutes a low-Q, quarter-wave resonator. (b) In the equivalent electrical circuit, each cell consists of a series inductance $L_{d}$ and two resonators with inductance $L_{f}$ and capacitance to ground $C/2$. While in conventional traveling-wave amplifiers dispersion engineering prevents only pump harmonic generation, we suppress all unwanted parametric conversion processes by designing our KIT as a weakly dispersive artificial transmission line. Originally developed to have the KIT matched to 50 Ω Chaudhuri _et al._ (2017), this line consists of a series of coplanar waveguide (CPW) sections, or cells, each with inductance $L_{d}$, flanked by two interdigitated capacitor (IDC) fingers that form the capacitance to ground $C$, such that $Z=\sqrt{L_{d}/C}=50$ Ω, see Fig. 1a. Each IDC finger then constitutes a low-Q quarter-wave resonator, with capacitance $C_{f}=C/2$ and inductance $L_{f}$ (see Fig. 1b), set by the finger’s length. In practice, we choose $\omega_{f}=1/\sqrt{L_{f}C_{f}}=2\pi\times 36$ GHz, and $Q=1/Z\sqrt{L_{f}/C_{f}}=3.3$, to generate a slight dispersion at low frequencies, where the pump, signal and idler modes lie. The dashed line in Fig. 
2a shows the dispersive part of the wavenumber $k^{*}=k-k_{0}$ with $k_{0}=\omega\sqrt{L_{d}C}$, as a function of frequency. It is calculated by cascading the ABCD matrices of the KIT cells, see appendix B. As it slightly deviates from zero, no triplet $\\{k_{s},k_{i},k_{p}\\}$ can satisfy Eq. 2. Figure 2: Influence of the phase matching on the gain profile. (a) The nonlinear wavenumber $k^{*}$ is calculated as a function of frequency for the transmission line represented in Fig. 1 (dashed), and for a similar line, periodically loaded at $80$ Ω (black). The nonlinear part of four triplets $\\{k_{s},k_{i},k_{p}\\}$ that satisfy the phase matching condition (Eq. 2) are indicated with colored dots: in purple $\omega_{i}-\omega_{s}=4$ GHz, green $\omega_{i}-\omega_{s}=3$ GHz, red $\omega_{i}-\omega_{s}=2$ GHz, and blue $\omega_{i}=\omega_{s}$. Additionally, a gray line indicates a pump frequency for which phase matching is nowhere fulfilled (b) Solving the CME (Eqs. 11), the signal power gain profile is calculated (Eq. 15) for the related pump frequencies (the colors match with panel a). The KIT length is $N_{c}=6.6\times 10^{4}$ cells. To retrieve phase-matching over a desired bandwidth, we engineer another dispersion feature by increasing the line impedance periodically on short sections. This feature creates a resonance in the line’s phase response (and a stopband in the line’s amplitude response), at a frequency $\omega_{l}$ controlled by the loading periodicity Chaudhuri _et al._ (2017); Pozar (2011). Figure 2a shows $k^{*}$ in a line periodically loaded at $80$ Ω, with $\omega_{l}=2\pi\times 8.5$ GHz. Because the nonlinear wavenumber close to resonance varies sharply, there exists values of $k^{*}_{p}$ (above $\omega_{l}$) for which we can find triplets $\\{k_{s},k_{i},k_{p}\\}$ that satisfy Eq. 2 (examples of their nonlinear parts are shown in colored dots). A slight variation of the pump frequency $\omega_{p}$ significantly affects which pair of signal and idler frequencies is phase matched. At these matched frequencies, the 3WM gain grows exponentially with $k_{p}$ (in radians per cell), with the KIT length, and with $\delta_{L}$, the relative inductance modulation amplitude generated by the rf current and scaling with $I_{d}$. More precisely, the phase matched, small signal power gain can be written as: $G_{s}\simeq\cosh^{2}\left(\frac{1}{8}\delta_{L}k_{p}N_{c}\right),$ (3) where $N_{c}$ is the total number of cells, see appendix A.2. Typically, when operating our KIT, $\delta_{L}\sim 7\times 10^{-3}$; thus, with $L_{d}\sim 50$ pH/cell (see Sec. III), we need $N_{c}>5\times 10^{4}$ to get $G_{s}>15$ dB at $\omega_{p}\sim 2\pi\times 9$ GHz. Since maximum gain is achieved with phase-matching, $\omega_{p}$ influences the gain profile. To calculate this profile, we insert the dispersion relation $k(\omega)$ into the CME, and solve them numerically, see appendix B. Figure 2b shows signal power gain profiles, calculated for the pump frequencies represented in Fig. 2a. As expected, the gain is maximal at the signal and idler frequencies for which the triplets $\\{k_{s},k_{i},k_{p}\\}$ satisfy Eq. 2. When these frequencies are far apart, there is a region in between where phase matching is not sufficient and the gain drops. By reducing $\omega_{p}$, we can lower the distance between phase-matched signal and idler frequencies, and therefore obtain a wideband, flat gain region. 
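As a quick numerical check of Eq. 3, the sketch below (Python) evaluates the phase-matched small-signal gain with the approximate line parameters quoted above; the cell count needed for 15 dB of gain follows directly.

```python
import numpy as np

# Approximate parameters quoted in the text
L_d = 50e-12              # inductance per cell [H]
Z = 50.0                  # line impedance [ohm]
C = L_d / Z**2            # capacitance per cell [F], ~20 fF
delta_L = 7e-3            # relative inductance modulation from the rf pump
f_p = 9e9                 # pump frequency [Hz]

v_p = 1.0 / np.sqrt(L_d * C)        # phase velocity, ~1e12 cells/s
k_p = 2 * np.pi * f_p / v_p         # pump wavenumber [rad/cell]

def gain_db(N_c):
    """Phase-matched small-signal power gain of Eq. 3, in dB."""
    return 20 * np.log10(np.cosh(delta_L * k_p * N_c / 8.0))

for N_c in (5e4, 6.6e4):
    print(f"N_c = {N_c:.1e} cells -> G_s ~ {gain_db(N_c):.1f} dB")
# N_c = 5e4 gives ~15.5 dB, consistent with the requirement quoted above; the
# fabricated 6.6e4-cell line gives ~22 dB in this ideal, loss-free limit.
```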
Further reducing $\omega_{p}$, we reach the value where the phase-matched signal and idler frequencies are equal, beyond which phase matching is no longer fulfilled anywhere and the gain drops. Fundamentally, the wideband nature of the gain depends on the convexity of the dispersion relation, and therefore on the fingers’ length and capacitance to ground. As $\omega_{f}$ or $Q$ increases, $k^{*}$ is less convex, and thus closer to a broadband, phase-matched situation, but at the cost of allowing extra parametric processes to arise. ## III EXPERIMENTAL REALIZATION Because the kinetic inductance nonlinearity is weak, in practice a KIT comprises a transmission line tens of centimeters long. To maximize this nonlinearity, and to minimize its length, the line as well as the IDC fingers are made 1 $\upmu$m wide, and each unit cell is 5 $\upmu$m long, see Fig. 3a and b. Fabricated from a 20 nm thick NbTiN layer on 380 $\upmu$m high-resistivity intrinsic silicon via optical lithography, it yields $I_{*}\sim 7$ mA, and a sheet inductance $\sim 10$ $\mathrm{pH}/$square. Thus, $L_{d}\sim 50$ pH, and in order to retrieve $Z=50$ Ω, each finger is made 102 $\upmu$m long. The loading (Fig. 3a) consists of cells with shorter fingers (33.5 $\upmu\mathrm{m}$, $Z=80$ Ω), arranged periodically to generate a resonance at 8.5 GHz, thereby positioning the pump frequency in a way compatible with our filtering capabilities. We lay out the line in a spiral topology on a $2\times 2$ cm chip (Fig. 3c), which contains $N_{c}=6.6\times 10^{4}$ cells, equivalent to 33 cm. To avoid spurious chip and microstrip modes, electrical grounding inside the chip is ensured with spring-loaded pogo pins, descending from the top copper lid of the packaging, and contacting the chip between the line traces, see appendix D. The pogo pins also improve the chip-to-packaging thermal link, which otherwise mostly relies on wire-bonds. Figure 3: Micrographs of the KIT. (a) The transmission line (false color red) is periodically loaded with shorter IDC fingers. (b) Line and fingers are 1 $\upmu$m wide, and each cell is 5 $\upmu$m long. (c) The overall KIT is laid out in a spiral configuration on a $2\times 2$ cm chip, and clamped on a copper packaging. In a first experiment, we measure the gain, bandwidth, and power handling of the KIT when cooled to $\sim 30$ mK. The KIT is mounted as the sole amplifier in the chain, thereby facilitating comparison of its gain profile to the theoretical profiles, and revealing the gain ripples of the isolated KIT, which otherwise also depend on the return loss of external components. Two mandatory bias tees at the KIT input and output ports combine the dc and rf currents. Figure 4a shows KIT gain profiles, acquired at two different pump frequencies. The current amplitudes are $I_{d}=1.5$ mA, and $I_{p0}=160$ $\upmu$A ($-29$ dBm in power, calibrated in situ by comparing dc and rf nonlinear phase shifts, see appendix A.3). For the higher pump frequency, the gain drops in the middle of the amplification bandwidth. For the lower one, the gain profile is flatter, with an average value of $16.5^{+1}_{-1.3}$ dB between 3.5 and 5.5 GHz, where the subscript and superscript denote the amplitude of the gain ripples. Both profiles agree qualitatively with the behaviors explained in Sec. II. The gain ripples have an $8$ MHz characteristic frequency (see Fig. 4c), equivalent to $62.5$ cm in wavelength (the phase velocity being $v_{p}=1/\sqrt{L_{d}C}\sim 1000$ cells per nanosecond), or about twice the KIT length. 
We thus attribute their presence to a mismatch in impedance between KIT and microwave feed structures before and after the KIT. This mismatch results in a pump standing wave pattern, which influences the signal amplification, depending on its frequency. Figure 4b shows the gain at 4.5 GHz (obtained at the lower pump power), as a function of $P_{t}$, the input power of a probe tone. The gain compresses by 1 dB from its small signal value for $P_{\mathrm{-1dB}}=-63$ dBm, about $7$ dB lower than theoretical predictions, see appendix C.2. This discrepancy, suggesting substantial room for improvement, may be due to effects not included in our model, such as standing wave patterns, or defects in the line, which locally lower $I_{*}$. Nonetheless, $P_{\mathrm{-1dB}}$ is already about $30$ dB higher than JTWPAs Macklin _et al._ (2015); White _et al._ (2015); Planat _et al._ (2020), and about $10$ dB less than 4WM KITs with similar gain (see appendix H). This KIT is therefore suitable to read thousands of frequency multiplexed MKIDs, that use drive tone powers typically around $-90$ dBm Zobrist _et al._ (2019); Vissers _et al._ (2016) or even more qubits, whose readout involves powers substantially less than $-90$ dBm. Figure 4: Amplification properties of the KIT. (a) The power gain is measured with a vector network analyzer (VNA), when $\omega_{p}=2\pi\times 8.895$ GHz (blue), and $\omega_{p}=2\pi\times 8.931$ GHz (red). (b) The gain of a probe tone at 4.5 GHz compresses as the tone’s power increases. At $P_{\mathrm{-1dB}}=-63$ dBm, referred to the KIT’s input, the gain has lowered by 1 dB from its small signal value. It is measured for $\omega_{p}=2\pi\times 8.895$ GHz. (c) At the same pump frequency, a close-up on the small signal gain around 4.5 GHz shows ripples with 8 MHz characteristic frequency. As in any phase-insensitive traveling-wave and resonant parametric amplifier, the practical, usable bandwidth, is half of the presented amplification bandwidth. It is the bandwidth in which signals coming from microwave devices can be phase-insensitively amplified. The other half, barring the idler frequencies, contains a copy of signals in the first half. That is why the gain in Fig. 4a is nearly symmetric about the half pump frequency ($\sim 4.5$ GHz). The asymmetry - gain and ripples marginally bigger above the half pump frequency - originates from a frequency dependent saturation power. In fact, higher frequencies possess a higher saturation power, see appendix C.1. We see this effect here because we chose a signal power close to $P_{\mathrm{-1dB}}$ in order to maximize the signal-to-noise ratio in this measurement, where the KIT remains the sole amplifier. At higher dc current bias (bounded by the dc critical current of the transmission line, $\sim 2.4$ mA in our device), lower rf pump power can be used to obtain equivalent small signal gain, at the cost of a reduced 1 dB compression power. Conversely, reducing $I_{d}$ and increasing $I_{p0}$ improves power handling capabilities, but the gain is then limited by a superconductivity breaking phenomenon. We suspect the presence of weak links Bockstiegel _et al._ (2014), and we are currently investigating the line breakdown mechanism. ## IV NOISE PERFORMANCE The combined gain, bandwidth, and power handling performance are promising, provided that the KIT also presents competitive noise performance. 
Measuring this noise is a topic of current interest Eom _et al._ (2012); Ranzani _et al._ (2018); Zobrist _et al._ (2019), and we execute the task using a self- calibrated shot-noise tunnel junction (SNTJ) Spietz _et al._ (2003, 2006). We measure the output spectral density, whose power depends on the chain’s gain and loss. The SNTJ acts as a dynamic variable noise source, allowing for a continuous sweep in input noise temperature by sweeping its bias voltage, and our measurement scans the entire KIT bandwidth. Figure 5: Schematic of the noise measurement setup. Each component is labeled with its gain or transmission efficiency, and with its input-referred added noise. Calibrated noise $N_{\mathrm{in}}^{s}$ ($N_{\mathrm{in}}^{i}$) is generated by the SNTJ at the signal (idler) frequency. It is routed to the KIT with transmission efficiency $\eta_{1}^{s}$ ($\eta_{1}^{i}$), i.e. it undergoes a beamsplitter interaction and part of it is replaced with vacuum noise whose value $N_{f}=0.5$ quanta. At the KIT’s input, the noise is $N_{1}^{j}$, $j\in\\{s,i\\}$. On the signal-to-signal path, the KIT then adds $N_{\mathrm{ex}}^{s}$ quanta of excess noise and has a gain $G$; on the idler- to-signal path, the KIT adds $N_{\mathrm{ex}}^{i}$ quanta of excess noise and has a gain $G-1$. Noise $N_{2}^{s}$ at the KIT’s output is then routed with efficiency $\eta_{2}$ to the HEMT (input noise $N_{3}^{s}$). With gain $G_{H}$ and added noise $N_{H}$, it further directs the noise $N_{4}^{s}$ to room temperature components. Amplification and loss at room temperature are excluded from our schematic but not our analysis. The noise reaching the spectrum analyzer (SA) is $N_{o}^{s}$. The full setup is described in appendix E. Figure 6: System-added noise measurement of a microwave amplification chain with a KIT as first amplifier. (a) The gain’s frequency dependence is measured with a VNA (with a 1 kHz intermediate frequency bandwidth). (b) The output noise $N_{o}^{s}$ is measured with a SA (5 MHz resolution bandwidth, comparable to typical resonant amplifier’s bandwidth), while varying the SNTJ dc voltage bias V. Fitting the whole output noise response, we obtain the frequency dependent system-added noise $N_{\Sigma}$ and the chain’s signal-to- signal gain $G_{c}^{ss}$. We divide $N_{o}^{s}$ by $G_{c}^{ss}$ to refer it to the KIT input, and subtract the zero bias noise value $N_{f}=0.5$, so that $N_{\Sigma}$ (panel c) visually matches the zero bias value of $N_{o}^{s}$ (see appendix F.3). Three colored curves indicate output noises at 4, 5, and 6 GHz, with fits superimposed in black lines. (c) Data from the output noise spectra are compiled to form $N_{\Sigma}$. Uncertainties are indicated by the gray area surrounding the black line. They predominantly come from the fit of the curves in (b). The quantum limit (QL) on amplifier-added noise is indicated by the dashed black line. Because the SNTJ is a wideband noise source, it illuminates the KIT at both its signal and idler frequency inputs; these input noises are then parametrically amplified. We thus model the KIT as a three-port network: two inputs at $\omega_{s}$ and $\omega_{i}$, and one output at $\omega_{s}$. Figure 5 schematizes the entire amplification chain, where we have labeled the gain or transmission efficiency, and input-referred added noise of each component. The KIT power gain on the signal-to-signal path is $G$, while its power gain on the idler-to-signal path is $G-1$ Caves (1982). 
The output noise at the signal frequency (in units of photons) measured on the spectrum analyzer (SA) can then be written as $N_{o}^{s}=G_{c}^{ss}(N_{\mathrm{in}}^{s}+N_{\mathrm{eff}}^{s})+G_{c}^{si}(N_{\mathrm{in}}^{i}+N_{\mathrm{eff}}^{i}),$ (4) where $N_{\mathrm{in}}^{s}$ ($N_{\mathrm{in}}^{i}$) is the SNTJ-generated noise at the signal (idler) frequency, $G_{c}^{ss}$ ($G_{c}^{si}$) is the signal-to-signal (idler-to-signal) gain of the entire chain from SNTJ to SA, and $N_{\mathrm{eff}}^{s}$ ($N_{\mathrm{eff}}^{i}$) is the effective signal- to-signal (idler-to-signal) path KIT-excess noise, see appendix F.1. When varying $N_{\mathrm{in}}^{s}$ and $N_{\mathrm{in}}^{i}$ we retrieve $N_{\mathrm{eff}}^{s}$ and $N_{\mathrm{eff}}^{i}$, equal to zero for a quantum-limited amplifier. The system-added noise that a device replacing the SNTJ would see is therefore $N_{\Sigma}=N_{\mathrm{eff}}^{s}+\frac{G_{c}^{si}}{G_{c}^{ss}}(N_{f}+N_{\mathrm{eff}}^{i}),$ (5) where $N_{f}=0.5$ quanta is the vacuum noise (provided that the idler port of this device is cold); for a high gain, phase-insensitive quantum-limited amplifier $N_{f}$ is the minimum added noise Caves (1982). Failure to account for the additional change in idler noise at the amplifier’s input leads to an underestimate of the system-added noise by about a factor two, thereby making it look significantly better than its true value, given by Eq. 5, see appendix F.2. In practice, we measure $N_{o}^{s}$ while varying simultaneously $N_{\mathrm{in}}^{s}$ and $N_{\mathrm{in}}^{i}$ using the SNTJ voltage bias (Fig. 6b). We then fit to obtain $G_{c}^{ss}$, $G_{c}^{si}$, $N_{\mathrm{eff}}^{s}$ and $N_{\mathrm{eff}}^{i}$ (see appendix F.3), and form $N_{\Sigma}$ with Eq. 5. Figure 6c presents the system-added noise $N_{\Sigma}$, measured over the KIT’s amplification bandwidth. In this experimental configuration, the KIT gain is $G=16.6^{+1.8}_{-3.1}$ dB between 3.5 and 5.5 GHz (Fig. 6a), stable over the acquisition time ($\sim 12$ hrs). In that bandwidth, $N_{\Sigma}=3.1\pm 0.6$ quanta. It is an unprecedented broadband, true system-added noise performance (see appendix H). This performance depends on the intrinsic signal (idler) KIT-excess noise $N_{\mathrm{ex}}^{s}$ ($N_{\mathrm{ex}}^{i}$), but also on the chain’s transmission efficiencies as well as on the HEMT-added noise $N_{H}$. More precisely, when $\\{G,N_{H}\\}\gg 1$, Eq. 5 becomes: $N_{\Sigma}=\frac{N_{\mathrm{ex}}^{s}+N_{\mathrm{ex}}^{i}}{\eta_{1}^{s}}+\frac{2(1-\eta_{1}^{s})N_{f}}{\eta_{1}^{s}}+\frac{N_{H}}{\eta_{2}G\eta_{1}^{s}}+N_{f}.$ (6) From left to right, the first three terms in the right-hand side represent the contribution to $N_{\Sigma}$ from the KIT alone, from the lossy link between the SNTJ and the KIT, and from the HEMT-added noise; the last term is the minimum half quantum of added noise that a quantum-limited amplifier must add Caves (1982). Measuring the individual loss of the chain’s components, we estimate the value of the transmission efficiencies to be $\eta_{1}^{s}=0.57\pm 0.02$ and $\eta_{2}=0.64\pm 0.10$; in addition, by measuring the system-added noise of the amplification chain with the KIT turned off, we estimate $N_{H}=8\pm 1$ quanta, see appendix G. With these additional information, we estimate that the overall KIT-excess noise is $N_{\mathrm{ex}}^{s}+N_{\mathrm{ex}}^{i}=0.77\pm 0.40$ quanta, suggesting that the KIT alone operates near the quantum limit. Several strategies can improve the system-added noise. 
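Before turning to these strategies, the noise budget of Eq. 6 can be checked numerically; the minimal sketch below (Python) uses the central values quoted above and ignores the quoted uncertainties.

```python
# Central values quoted in the text; uncertainties are ignored in this sketch.
N_f = 0.5                     # vacuum noise [quanta]
eta_1s = 0.57                 # SNTJ -> KIT transmission efficiency
eta_2 = 0.64                  # KIT -> HEMT transmission efficiency
N_H = 8.0                     # HEMT-added noise [quanta]
G = 10 ** (16.6 / 10)         # KIT signal-to-signal power gain (16.6 dB)
N_ex_total = 0.77             # N_ex^s + N_ex^i, KIT-excess noise [quanta]

kit_term = N_ex_total / eta_1s                  # KIT contribution
loss_term = 2 * (1 - eta_1s) * N_f / eta_1s     # SNTJ-to-KIT loss contribution
hemt_term = N_H / (eta_2 * G * eta_1s)          # HEMT contribution, ~0.5 quanta

N_sigma = kit_term + loss_term + hemt_term + N_f
print(f"KIT excess  : {kit_term:.2f} quanta")
print(f"input loss  : {loss_term:.2f} quanta")
print(f"HEMT        : {hemt_term:.2f} quanta")
print(f"N_Sigma     : {N_sigma:.2f} quanta")    # ~3.1 quanta, as measured
```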
First, increasing the transmission efficiencies would have a major impact, because all the system-dependent noise contributions are enlarged by $1/\eta_{1}^{s}$. For example, $\eta_{1}^{s}=0.8$ would yield $N_{\Sigma}=1.6$ quanta. To that end, we are currently developing lossless superconducting circuits (bias tees, directional couplers and low-pass filters) that can be integrated on-chip with the KIT. Second, increasing the KIT gain $G$ would reduce the HEMT contribution to $N_{\Sigma}$ (third term in Eq. 6). Here, this contribution is estimated at $0.5\pm 0.3$ quanta. The solutions to achieve higher gain follow directly from Eq. 3: higher pump power (i.e. increase $\delta_{L}$), a longer line (increase $N_{c}$), or a higher inductance per unit cell (increase $k_{p}$). All represent non-trivial challenges, starting with a better understanding of the line breakdown mechanism Bockstiegel _et al._ (2014). If it comes from imperfections in the line (weak links), a higher-resolution fabrication process, such as electron-beam lithography, may improve the performance of the device. Also, running the amplifier at higher gain will require better damping of the gain ripples, whose amplitude grows with gain. Finally, we are investigating the origin of the remaining excess noise $N_{\mathrm{ex}}^{s}+N_{\mathrm{ex}}^{i}$. It may be due to parasitic chip heating or two-level system noise Gao _et al._ (2008).

## V CONCLUSION

It is possible to build a microwave amplifier with broadband and near-quantum-limited performance without sacrificing power handling capability. Engineering the phase-matched bandwidth is key because it suppresses spurious parametric conversion processes. We demonstrate this idea on a KIT, whose combined gain, bandwidth, power handling, and noise performance are fully characterized. In addition, we develop a theoretical framework adapted to noise measurements performed with wideband noise sources. Using an SNTJ, we evaluate the true system-added noise of an amplification chain containing the KIT as the first amplifier. The KIT itself is estimated to be near-quantum-limited; it therefore has the potential to initiate a qualitative shift in the way arrays of superconducting detectors, such as MKIDs, process information, moving them into a quantum-limited readout regime.

## Acknowledgments

We thank Kim Hagen for his help in the design and fabrication of the KIT packaging, and Felix Vietmeyer and Terry Brown for their help in the design and fabrication of room temperature electronics. We acknowledge useful discussions with John Teufel, Gangqiang Liu and Vidul Joshi. Certain commercial materials and equipment are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose. This work was supported by the NIST Innovations in Measurement Science Program, as well as NASA, under grant NNH18ZDA001N-APRA.

## Appendix A COUPLED-MODE THEORY OF A DC-BIASED KIT

### A.1 Coupled-mode equations

The phase matching condition for exponential gain, Eq. 2, is obtained by solving the CME while pumping in a 3WM fashion, i.e. such that $\omega_{p}=\omega_{s}+\omega_{i}$, in the presence of both the 3WM and 4WM terms of Eq. 1.
The CME relate the current amplitudes $I_{j}$, $j\in\\{p,s,i\\}$ at the frequencies $\omega_{j}$, $j\in\\{p,s,i\\}$; they are obtained by injecting equation 1 into the telegrapher’s equations, and by operating the harmonic balance (HB) with only these three frequencies. More precisely, the telegrapher’s equations in a lossless transmission line relate current I and voltage V as: $\displaystyle-\frac{\partial I}{\partial x}$ $\displaystyle=C\frac{\partial V}{\partial t}$ (7) $\displaystyle-\frac{\partial V}{\partial x}$ $\displaystyle=L\frac{\partial I}{\partial t},$ with x a length per unit cell. Injecting Eq. 1 into Eqs. 7, we obtain: $v_{p}^{2}\frac{\partial^{2}I}{\partial x^{2}}-\frac{\partial^{2}I}{\partial t^{2}}=\frac{\partial}{\partial t^{2}}\left(\frac{1}{2}\epsilon I^{2}+\frac{1}{3}\xi I^{3}\right),$ (8) with $v_{p}=1/\sqrt{CL_{d}}$ the phase velocity. To solve Eq. 8 we perform the HB: we assume that the current in the transmission line is a sum of three terms at three different frequencies: $\displaystyle I=\frac{1}{2}\big{(}$ $\displaystyle I_{p}(x)e^{i(k_{p}x-\omega_{p}t)}$ (9) $\displaystyle+$ $\displaystyle I_{s}(x)e^{i(k_{s}x-\omega_{s}t)}$ $\displaystyle+$ $\displaystyle I_{i}(x)e^{i(k_{i}x-\omega_{i}t)}+c.c\big{)},$ and we then keep only the mixing terms from Eq. 8 that emerge at these frequencies. This approach is valid in our case, because the phase matching bandwidth is limited by dispersion engineering (see appendix B), and thus mostly these three frequencies are able to mix together. Under the slow- varying envelope approximation, $\lvert d^{2}I_{j}/dx^{2}\rvert\ll\lvert k_{j}dI_{j}/dx\rvert$ for $j\in\\{p,s,i\\}$, the left hand side of Eq. 8 yields: $v_{p}^{2}\frac{\partial^{2}I}{\partial x^{2}}-\frac{\partial^{2}I}{\partial t^{2}}=v_{p}^{2}\sum_{j=p,s,i}ik_{j}\frac{dI_{j}}{dx}e^{i(k_{j}x-\omega_{j}t)}+c.c.$ (10) Using $\omega_{p}=\omega_{s}+\omega_{i}$, we collect terms at $\omega_{j}$, $j\in\\{p,s,i\\}$ in the right hand side (rhs) and form the CME: $\displaystyle\frac{dI_{p}}{dx}$ $\displaystyle=\frac{ik_{p}\epsilon}{4}I_{s}I_{i}e^{-i\Delta_{k}x}+\frac{ik_{p}\xi}{8}I_{p}(\lvert I_{p}\rvert^{2}+2\lvert I_{s}\rvert^{2}+2\lvert I_{i}\rvert^{2})$ (11) $\displaystyle\frac{dI_{s}}{dx}$ $\displaystyle=\frac{ik_{s}\epsilon}{4}I_{p}I_{i}^{*}e^{i\Delta_{k}x}+\frac{ik_{s}\xi}{8}I_{s}(2\lvert I_{p}\rvert^{2}+\lvert I_{s}\rvert^{2}+2\lvert I_{i}\rvert^{2})$ $\displaystyle\frac{dI_{i}}{dx}$ $\displaystyle=\frac{ik_{i}\epsilon}{4}I_{p}I_{s}^{*}e^{i\Delta_{k}x}+\frac{ik_{i}\xi}{8}I_{i}(2\lvert I_{p}\rvert^{2}+2\lvert I_{s}\rvert^{2}+\lvert I_{i}\rvert^{2}),$ with $\Delta_{k}=k_{p}-k_{s}-k_{i}$. The phase matching condition, Eq. 2, is found for a strong pump, where $\\{\lvert I_{s}\rvert,\lvert I_{i}\rvert\\}\ll\lvert I_{p}\rvert$. Assuming the pump undepleted, $\lvert I_{p}(x)\rvert=I_{p0}$, Eqs. 11 rewrite: $\displaystyle\frac{dI_{p}}{dx}$ $\displaystyle=\frac{ik_{p}\xi}{8}I_{p}I_{p0}^{2}$ (12) $\displaystyle\frac{dI_{s}}{dx}$ $\displaystyle=\frac{ik_{s}\epsilon}{4}I_{p}I_{i}^{*}e^{i\Delta_{k}x}+\frac{ik_{s}\xi}{4}I_{s}I_{p0}^{2}$ $\displaystyle\frac{dI_{i}}{dx}$ $\displaystyle=\frac{ik_{i}\epsilon}{4}I_{p}I_{s}^{*}e^{i\Delta_{k}x}+\frac{ik_{i}\xi}{4}I_{i}I_{p0}^{2},$ which results in $I_{p}(x)=I_{p0}\exp{(i\xi k_{p}I_{p0}^{2}x/8)}$. Signal and idler amplitudes are then searched with the form: $I_{j}(x)=\tilde{I}_{j}(x)\exp{(i\xi I_{p0}^{2}k_{j}x/4)}$, $j\in\\{s,i\\}$. 
Equations 12 then yield: $\displaystyle\frac{d\tilde{I}_{s}}{dx}$ $\displaystyle=\frac{ik_{s}\epsilon}{4}I_{p0}\tilde{I}_{i}^{*}e^{i\Delta_{\beta}x}$ (13) $\displaystyle\frac{d\tilde{I}_{i}}{dx}$ $\displaystyle=\frac{ik_{i}\epsilon}{4}I_{p0}\tilde{I}_{s}^{*}e^{i\Delta_{\beta}x},$ with $\Delta_{\beta}=\Delta_{k}+\frac{\xi I_{p0}^{2}}{8}(k_{p}-2k_{s}-2k_{i})$ and $\Delta_{k}=k_{p}-k_{s}-k_{i}$. The system of equations 13 has known solutions Boyd (2019). In particular, when phase matching is achieved, i.e. $\Delta_{\beta}=0$, we obtain: $\displaystyle\tilde{I}_{s}$ $\displaystyle=\cosh{(g_{3}x)\tilde{I}_{s0}}$ (14) $\displaystyle\tilde{I}_{i}$ $\displaystyle=i\sqrt{\frac{k_{i}}{k_{s}}}\sinh{(g_{3}x)\tilde{I}_{s0}},$ with $g_{3}=\frac{\epsilon I_{p0}}{4}\sqrt{k_{i}k_{s}}$, and with initial conditions $I_{s}(0)=I_{s0}$ and $I_{i}(0)=0$. The signal power gain $G_{s}(x)=\left\lvert\frac{I_{s}(x)}{I_{s0}}\right\rvert^{2}$ (15) is then exponential with $x$: $G_{s}=\cosh^{2}(g_{3}x)$. ### A.2 3WM gain We can re-write $g_{3}$ as a function of more meaningful quantities. In fact, the linear inductance $L$ exposed in Eq. 1 also writes: $\displaystyle L$ $\displaystyle=L_{0}\left(1+\frac{I_{d}^{2}}{I_{*}^{2}}\right)\left(1+\frac{2I_{d}I}{I_{*}^{2}+I_{d}^{2}}+\frac{I^{2}}{I_{*}^{2}+I_{d}^{2}}\right)$ (16) $\displaystyle\simeq L_{d}\left(1+\frac{2I_{d}I}{I_{*}^{2}+I_{d}^{2}}\right),$ when $I\ll I_{d}$, and with $L_{d}=L_{0}(1+I_{d}^{2}/I_{*}^{2})$. Here, $L_{0}$ is the bare linear inductance. It is the one directly derived from the sheet kinetic inductance and the geometry of the line, while $L_{d}$ is the inductance per unit length under dc bias. Because $I_{d}^{2}\ll I_{*}^{2}$, for design purposes $L_{d}\sim L_{0}$. In the strong pump regime $I=I_{p0}$, therefore, $2I_{d}I/(I_{*}^{2}+I_{d}^{2})=\epsilon I_{p0}$; assuming $k_{s}=k_{i}\simeq k_{p}/2$, we therefore get: $G_{s}(x)\simeq\cosh^{2}\left(\frac{1}{8}\delta_{L}k_{p}x\right),$ (17) where $\delta_{L}=\frac{L-L_{d}}{L_{d}}=\frac{2I_{d}I_{p0}}{I_{*}^{2}+I_{d}^{2}}$ (18) is the relative inductance variation due to $I_{p0}$. With $I_{*}=7$ mA, and typical values: $I_{d}=1.5$ mA (limited by the dc critical current, $\sim 2.4$ mA, value specific to our NbTiN film and to the line’s width) and $I_{p0}=I_{*}/60$, we get $\delta_{L}=6.8\times 10^{-3}$. ### A.3 Pump phase shift From the phase matching condition, Eq. 2, it is clear that only the 4WM term $\xi$ creates a dispersive phase shift of pump, signal and idler. In other words, in a pure 3WM case, $\xi=0$ and the phase matching condition becomes $\Delta_{k}=0$, naturally fulfilled in a dispersion-less transmission line. While detrimental for noise properties (see Sec. IV), we can use the continued presence of 4WM to our advantage, because it allows us to calibrate the pump power, down to the KIT input. In fact, in such a situation $I_{d}$ and $I_{p0}$ influence the pump tone phase shift, which we can measure unambiguously (i.e. not $\mod 2\pi$) with a VNA. More precisely, although the phase $\phi=\arg(S_{21})$ read by a VNA is $2\pi$-wrapped, its shift $\delta_{\phi}=\phi-\phi_{0}$ from an initial value $\phi_{0}$ can be continuously monitored when $I_{d}$ and $I_{p0}$ vary, and thus unambiguously determined. This phase shift in turn translates into a wavenumber variation $\delta_{p}=-\delta_{\phi}/N_{c}$. 
If initially at zero dc bias and small rf pump amplitude, then $\delta_{p}=\beta_{p}-k_{p}$, with $\beta_{p}$ the pump wavenumber, dependent on $I_{d}$ and $I_{p0}$, and $k_{p}$ the linear wavenumber. When a single pump tone travels along the line (no input signal), we are by default under the strong pump approximation, and the first equation of Eqs. 12 gives $I_{p}(x)=I_{p0}\exp{(i\xi k_{p}I_{p0}^{2}x/8)}$. In addition, the current I in the line then writes as $I=1/2\\{I_{p}(x)\exp[i(k_{p}x-\omega_{p}t)]+c.c\\}$, and thus the pump wavenumber is $\beta_{p}=\xi k_{p}I_{p0}^{2}/8+k_{p}$, which leads to $\delta_{p}=\xi k_{p}I_{p0}^{2}/8$. Because $k_{p}=\omega_{p}\sqrt{L_{d}C}$, we can rewrite: $\delta_{p}=\frac{1}{8}\frac{I_{p0}^{2}}{I_{*}^{2}}\omega_{p}\sqrt{L_{0}C}\sqrt{1+\frac{I_{d}^{2}}{I_{*}^{2}}},$ (19) therefore $I_{p0}$ and $I_{d}$ induce similar phase shift in the line. Knowing $I_{d}$ (room temperature parameter), we thus get $I_{p0}$ at the KIT input. ## Appendix B ABCD TRANSFER MATRICES The dispersion relations, Fig. 2a are calculated by cascading the ABCD matrices of the KIT cells, a method suitable for any periodic loading pattern. We then compute the KIT $S_{21}$ scattering parameter as $S_{21}=2/(A+B/Z_{0}+CZ_{0}+D)$ Pozar (2011), where $Z_{0}=50$ Ω is the input and output ports impedance, and finally get $k=-\mathrm{unwrap}[\arg(S_{21})]/N_{c}$. In the unloaded case, represented in Fig. 1, the ABCD matrix cell is: $\bm{T_{c}}=\begin{bmatrix}1&iL_{d}\omega\\\ \frac{i2C\omega}{2-L_{f}C\omega^{2}}&1-\frac{2L_{d}C\omega^{2}}{2-L_{f}C\omega^{2}}\end{bmatrix}.$ (20) All the cells being identical, the KIT’s ABCD matrix is simply $\bm{T_{K}}=\bm{T_{c}}^{N_{c}}$. In Fig. 2a we used $N_{c}=6.6\times 10^{4}$ to match the length of our fabricated KIT, and $L_{d}=45.2$ pH, $C=18.8$ fF, and $L_{f}=1.02$ nH, values that match our design (fingers are $102$ $\upmu$m long and $1$ $\upmu$m wide). In the loaded case, some cells have shorter fingers, see Fig. 3a. In these cells, the capacitance to ground is $C_{l}=L_{d}/Z_{l}^{2}$, where $Z_{l}$ is the load impedance, and a finger’s inductance is $L_{l}$. To compute the KIT scattering parameter, we form the ABCD matrix of the repetition pattern comprised with unloaded and loaded cells: $\displaystyle\bm{T_{\mathrm{sc}}}=$ $\displaystyle\begin{bmatrix}1&iL_{d}\omega\\\ \frac{i2C\omega}{2-L_{f}C\omega^{2}}&1-\frac{2L_{d}C\omega^{2}}{2-L_{f}C\omega^{2}}\end{bmatrix}^{N_{u}/2}$ (21) $\displaystyle\times$ $\displaystyle\begin{bmatrix}1&iL_{d}\omega\\\ \frac{i2C_{l}\omega}{2-L_{l}C_{l}\omega^{2}}&1-\frac{2L_{d}C_{l}\omega^{2}}{2-L_{l}C_{l}\omega^{2}}\end{bmatrix}^{N_{l}}$ $\displaystyle\times$ $\displaystyle\begin{bmatrix}1&iL_{d}\omega\\\ \frac{i2C\omega}{2-L_{f}C\omega^{2}}&1-\frac{2L_{d}C\omega^{2}}{2-L_{f}C\omega^{2}}\end{bmatrix}^{N_{u}/2},$ where $N_{u}$ is the number of unloaded cells and $N_{l}$ is the number of loaded cells in the pattern, which we call a supercell. As before, to get the KIT’s ABCD matrix, we simply form $\bm{T_{K}}=\bm{T_{\mathrm{sc}}}^{N_{\mathrm{sc}}}$, where $N_{\mathrm{sc}}=N_{c}/(N_{u}+N_{l})$ is the number of supercells in the KIT. Here, $N_{u}=60$ (equivalent to $300\upmu$m), $N_{l}=6$ (equivalent to $30\upmu$m), $N_{c}=66000$ (equivalent to $33$ cm), therefore $N_{\mathrm{sc}}=1000$. The plain line in Fig. 2a shows the wavenumber $k^{*}$ thus found, with $Z_{l}=80$ Ω, and $L_{l}=335$ pH, as the finger length in a loaded cell is $33.5$ $\upmu$m . To compute the signal power gain, Fig. 
2b, we inject the expression of $k(\omega)$ for a periodically loaded KIT (from $\bm{T_{K}}$) in the CME, Eqs. 11. We solve them numerically for different pump frequencies. For $8.8812$ GHz (blue curve), $8.8992$ GHz (red), $8.9256$ GHz (green) and $8.9736$ GHz (purple), phase matched signal and idler are detuned from the half pump frequency by $0$, $1$, $1.5$ and $2$ GHz respectively. For $8.855$ GHz (gray curve), phase matching is nowhere achieved. We used $N_{c}=6.6\times 10^{4}$, $I_{*}=7$ mA, $I_{d}=1.5$ mA, and the initial conditions $I_{p0}=I_{*}/60$, $I_{s0}=I_{p0}/100$, and $I_{i0}=0$, close to experimental values. ## Appendix C KIT SATURATION ### C.1 Strong signal gain profile asymmetry When the input signal power amounts to a significant fraction of the pump power, parametric amplification depletes the pump. It surprisingly generates asymmetry in the signal gain profile, with respect to the half pump frequency. Figure 7a shows signal gain profiles, calculated when phase matching is achieved at exactly half the pump frequency, i.e. for $\omega_{s}=\omega_{i}$ (corresponding to $\omega_{p}=8.8812$ GHz), at various initial signal powers $P_{s0}$. They are obtained by solving the CME 11, which incorporate pump depletion effects. As $P_{s0}$ increases, the gain diminishes, and the originally flat profile tilts, with higher frequencies presenting higher gain. Figure 7: Theory of KIT saturation, calculated by solving the full CME 11. (a) Distortion of the gain profile, at four different input signals: $I_{s0}=I_{p0}/100$ (blue curve), corresponding to a small signal regime, $I_{s0}=I_{p0}/12$ (red), $I_{s0}=I_{p0}/8$ (green), and $I_{s0}=I_{p0}/6$ (purple). The KIT length, is $N_{c}=6.6\times 10^{4}$ cells. Vertical lines indicate three frequencies: 3.4406 GHz (long dashed), 4.4406 GHz (plain), equal to the half pump frequency, and 5.4406 GHz (short dashed). (b) The gain of a probe tone is calculated at these three frequencies, as a function of the probe tone power. (c) The 1 dB compression power is shown as a function of frequency. Fundamentally, this asymmetry stems from the fact that when solving the CME in the case where $I_{p}$ varies along the KIT transmission line, the initial conditions (ICs) vary as a function of frequency. More precisely, the second and third equations in Eqs. 11, govern the evolution of $I_{s}$ and $I_{i}$ respectively; in a simplified version, they write as $\displaystyle\frac{dI_{s}}{dx}$ $\displaystyle=\frac{ik_{s}\epsilon}{4}I_{p}I_{i}^{*}$ (22) $\displaystyle\frac{dI_{i}}{dx}$ $\displaystyle=\frac{ik_{i}\epsilon}{4}I_{p}I_{s}^{*}.$ We dropped the second terms in the rhs of Eqs. 11, representing the 4WM conversion processes, as the asymmetry still holds when $\xi=0$, and we assumed perfect phase matching in 3WM, $\Delta_{k}=0$, i.e. a dispersionless line. In the undepleted pump regime $I_{p}(x)=I_{p0}$, and we can decouple the equations on $I_{s}$ and $I_{i}$. Deriving with respect to $x$ Eqs. 22, we get $\frac{d^{2}I_{j}}{dx^{2}}=g_{3}^{2}I_{j},$ (23) with $j\in\\{s,i\\}$, and $g_{3}=\frac{\epsilon I_{p0}}{4}\sqrt{k_{i}k_{s}}$, as defined in appendix A.1. Using the ICs $I_{s}(0)=I_{s0}$ and $dI_{s}/dx(0)=0$ (because $I_{i}^{*}(0)=0$), $I_{s}=\cosh(g_{3}x)I_{s0}$, as seen in Eqs. 14. Signal and idler wavenumbers appear as a product in this solution, hence for any $x$ the signal amplitude $I_{s}$ is symmetric with respect to the half pump frequency. In the soft pump regime, where $I_{p}(x)$ is not constant, Eqs. 22 cannot be decoupled. 
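Before proceeding analytically, the following minimal sketch shows how this soft-pump behaviour can be explored numerically by integrating the full CME, Eqs. 11, directly; this is how gain and saturation curves such as those in Fig. 7 can be generated. The sketch assumes the parameter values quoted in appendices A.2 and B and a dispersionless wavenumber, so it only illustrates the phase-matched point at the half pump frequency; it is not our analysis code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (not the analysis code): integrate the full CME, Eqs. 11,
# to obtain the signal power gain including pump depletion and 4WM phase shifts.
# Assumed parameters, taken from appendices A.2 and B: I* = 7 mA, Id = 1.5 mA,
# Ip0 = I*/60, Ld = 45.2 pH and C = 18.8 fF per cell, Nc = 66000 cells, and a
# dispersionless wavenumber k_j = omega_j * sqrt(Ld * C).

I_star, I_d = 7e-3, 1.5e-3
I_p0 = I_star / 60
eps = 2 * I_d / (I_star**2 + I_d**2)   # 3WM coefficient epsilon (1/A)
xi = 1 / (I_star**2 + I_d**2)          # 4WM coefficient xi (1/A^2)
L_d, C = 45.2e-12, 18.8e-15            # per-cell inductance (H) and capacitance (F)
N_c = 66000                            # number of cells
f_p = 8.8812e9                         # pump frequency (Hz)

def k(f):
    """Dispersionless wavenumber per unit cell (rad/cell)."""
    return 2 * np.pi * f * np.sqrt(L_d * C)

def cme(x, I, k_p, k_s, k_i):
    """Right-hand side of Eqs. 11 for the complex amplitudes (Ip, Is, Ii)."""
    Ip, Is, Ii = I
    dk = k_p - k_s - k_i               # zero for a dispersionless line
    p2, s2, i2 = abs(Ip)**2, abs(Is)**2, abs(Ii)**2
    dIp = 1j*k_p*eps/4 * Is*Ii*np.exp(-1j*dk*x) + 1j*k_p*xi/8 * Ip*(p2 + 2*s2 + 2*i2)
    dIs = 1j*k_s*eps/4 * Ip*np.conj(Ii)*np.exp(1j*dk*x) + 1j*k_s*xi/8 * Is*(2*p2 + s2 + 2*i2)
    dIi = 1j*k_i*eps/4 * Ip*np.conj(Is)*np.exp(1j*dk*x) + 1j*k_i*xi/8 * Ii*(2*p2 + 2*s2 + i2)
    return [dIp, dIs, dIi]

def signal_gain_dB(f_s, I_s0):
    """Signal power gain after Nc cells for a signal input amplitude I_s0."""
    f_i = f_p - f_s
    y0 = [I_p0 + 0j, I_s0 + 0j, 0j]    # pump, signal, idler at the KIT input
    sol = solve_ivp(cme, (0, N_c), y0, args=(k(f_p), k(f_s), k(f_i)),
                    rtol=1e-8, atol=1e-12)
    return 10 * np.log10(abs(sol.y[1, -1] / I_s0)**2)

# Small-signal gain at the half pump frequency, and its compression for a
# stronger probe tone (cf. Fig. 7b):
print(signal_gain_dB(f_p / 2, I_p0 / 100))
print(signal_gain_dB(f_p / 2, I_p0 / 6))
```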
We can however write them in a canonical form, deriving with respect to $x$: $\frac{d^{2}I_{j}}{dx^{2}}-\frac{1}{I_{p}}\frac{dI_{p}}{dx}\frac{dI_{j}}{dx}-\frac{k_{s}k_{i}\epsilon^{2}}{16}\lvert I_{p}\rvert^{2}I_{j}=0,$ (24) with $j\in\\{s,i\\}$. Here, the interplay between $I_{s}$ and $I_{i}$ comes from $I_{p}$, which contains the product $I_{s}I_{i}$ (see Eqs. 11). Although signal and idler wavenumbers also appear only as a product in Eqs. 24, these CME lead to an asymmetric signal amplitude profile, because out of the five ICs required to solve them, one changes with frequency: $I_{p}(0)=I_{p0}$, $I_{s}(0)=I_{s0}$, $I_{i}(0)=0$, $dI_{s}/dx(0)=0$, and $dI_{i}/dx(0)=i\epsilon I_{p0}I_{s0}k_{i}/4$. This last IC depends on $k_{i}$, which depends on the signal frequency. In the small signal limit, $dI_{i}/dx(0)\xrightarrow{}0$, and we recover a symmetric gain profile with respect to the half pump frequency. ### C.2 Compression power calculation This asymmetry produces higher power handling capabilities at frequencies above $\omega_{p}/2$, compared to below $\omega_{p}/2$. Thus, considering that only half the bandwidth is usable for resonator readout, it is more advantageous to have these lie above $\omega_{p}/2$. Figure 7b shows the gain as a function of a probe tone power $P_{t}$, calculated from the CME 11 at three frequencies: one at $\omega_{p}/2$, and two at $\omega_{p}/2\pm 1$ GHz. Because phase matching is set to be optimal at $\omega_{s}=\omega_{i}=\omega_{p}/2$, gain is maximal at this frequency. The small signal gain is identical for $\omega_{p}/2\pm 1$ GHz, however it visibly compresses at higher tone power for $\omega_{p}/2+1$ GHz. Figure 7c presents the 1 dB compression power $P_{\mathrm{-1dB}}$, calculated in the interval $[\omega_{p}/2-1,\omega_{p}/2+1]$ gigahertz. As expected, $P_{\mathrm{-1dB}}$ is a few dB higher when $\omega>\omega_{p}/2$. This phenomenon is reminiscent of gain distortion, seen in JPAs Malnou _et al._ (2018). Effects not included in the CME 11, such as standing wave patterns, or defects in the line, which locally lower $I_{*}$, may cause the discrepancy between these theoretical calculations and the measurements (see Sec. III) ## Appendix D KIT PACKAGING There are three main concerns when packaging a KIT: first, the package should be matched to 50 Ω. Any mismatch will result in reflections, creating gain ripples (see Sec. III). Second, the package should ensure good electrical grounding of the transmission line. Otherwise, given the fairly large chip size, spurious chip modes can appear within the frequency range of interest. Third, the package should ensure good thermalization inside the chip. Because the pump power remains high for millikelvin operations, any inhomogeneous rise in temperature may trigger a hot spot near a weak-link, and possibly break superconductivity. We implemented a series of technologies to address these concerns. Figure 8: KIT packaging. (a) The KIT is clamped on a gold plated copper box and wire-bonded to PCBs and to the box itself. Although not very sensitive to magnetic field, the KIT is shielded with aluminum and A4K cryogenic shielding. Two SMA connectors protrude on both side of the packaging. (b) A close-up on the central region of the KIT shows the periodically loaded transmission line, with gold strips deposited between its spiral arms. (c) The top copper lid contains spring-loaded pogo pins inserted half-way into the box. They are arranged to contact the chip between the KIT trace. 
(d) A close-up on the pins shows their top thinner part, which can retract inside the body of the pin. They are fixed on the copper lid with dried silver paint. Figure 8a presents the chip, clamped onto the bottom part of the copper packaging. The chip is wire-bonded on both sides to printed circuit boards (PCBs). They convert the on-chip CPW layout to microstrip, and then the central pin of sub-miniature version A (SMA) connectors are soldered onto the microstrip. We suspect imperfect PCBs, with measured impedance close to 52 Ω, play a role in creating gain ripples. When designing the spiral, we carefully adjusted the radius of the turns, in conjunction with the unit cell length, to have these turns match the straight sections’ inductance and capacitance per length. Electrical grounding inside the chip is ensured with pogo pins inserted in the top lid of the packaging, see Fig. 8c and d. When closing the box, these pins contact the chip between the line traces. If absent, we have measured spurious resonant modes with harmonics at gigahertz frequencies. Pins are $140$ $\upmu$m in diameter, and each applies a 20 g force to the chip. Figure 9: Full schematic of the noise measurement experimental setup. The KIT is represented by a spiral, enclosed in a square. A permanent magnet suppresses the Josephson effect in the SNTJ. A magnetic shield protects other microwave components from the effect of this magnet. The 4-12 GHz isolator is from Quinstar technology, model CWJ1015-K13B. The dashed purple line represents the bypass. These pins also act as thermal links to the packaging. In addition, we deposited gold strips onto the NbTiN layer, inside the spiral, between the KIT line traces, and near the chip edges, see Fig. 8b. These strips contact the pins. Absent the pogo pins, superconductivity breaks down before high gain can be reached. Finally, we gold-bond the chip ground plane (from the deposited gold) to the copper box (instead of using standard aluminum bonding): gold remains a normal metal at millikelvin temperatures, thereby better thermalizing the chip. ## Appendix E NOISE MEASUREMENT EXPERIMENTAL SETUP Figure 9 presents a schematic of the full experimental setup used to measure the system-added noise of a readout chain using a KIT as a pre-amplifier. In total, noise generated by the SNTJ travels through three amplifiers: the KIT, a HEMT at 4K, and a room temperature amplifier, before finally being recorded with a SA. The SNTJ is packaged with a permanent magnet suppressing the Josephson effect, and a magnetic shield protects other elements from this magnetic field. In addition, a bias tee routes SNTJ-generated rf noise to the microwave readout chain, while at the same time allowing for dc bias. In fact, the SNTJ is biased with an arbitrary waveform generator (AWG). It outputs a low frequency (50 Hz) triangular voltage wave on a $10$ kΩ current limiting resistor, thereby creating a current $I_{\mathrm{AWG}}$, varying between $\pm 12$$\upmu$A, which sweeps the SNTJ-generated noise value. An oscilloscope reads the SNTJ voltage in situ while the AWG outputs a known current, allowing the computation of the SNTJ impedance $Z_{\mathrm{SNTJ}}=54\pm 4$ Ω, and with it, the SNTJ voltage bias $V=Z_{\mathrm{SNTJ}}I_{\mathrm{AWG}}$. Noise from the SNTJ is combined with rf tones (pump, and probe from a VNA to measure the gain profile) via a 20 dB directional coupler (DC) connected to the KIT input. 
Additionally, a 7 GHz low-pass filter placed between the SNTJ and the DC prevents the rf pump from leaking back to the SNTJ. Because the KIT requires a fairly high pump power ($-29$ dBm), we only attenuate the pump line by 10 dB at 4K. Then, an 8 GHz high-pass filter at 30 mK rejects noise at frequencies within the KIT amplification band, while allowing the pump to pass. A bias tee at the KIT input port combines rf signals (including noise from the SNTJ) with the KIT dc current bias, and a second bias tee at the KIT output separates dc from rf. The rf signal then passes through a $4-12$ GHz isolator, and a 7 GHz low-pass filter (Pasternack PE87FL1015, $\sim 45$ dB rejection at 9 GHz), preventing the pump tone from saturating the HEMT. The SA is operated in a zero-span mode, its acquisition triggered by the AWG. This measures the output noise at a single frequency, over a 5 MHz resolution bandwidth (RBW), and directly traces out the curves of Fig. 6b. Varying the SA center frequency, we obtain the system-added noise over the full 3-6.5 GHz bandwidth, Fig. 6c. ## Appendix F NOISE THEORY ### F.1 System-added noise When propagating through the experimental setup presented in Fig. 9, noise generated by the SNTJ undergoes loss and amplification. Both processes affect the effective noise at each amplifier input, and therefore also, the noise reaching the SA. While the overall system-added noise $N_{\Sigma}$ encompasses microwave loss, we derive its complete expression as a function of signal ($N_{\mathrm{ex}}^{s}$) and idler ($N_{\mathrm{ex}}^{i}$) amplifier-excess noise, gain, and transmission efficiencies to estimate $N_{\mathrm{ex}}^{s}+N_{\mathrm{ex}}^{i}$. Figure 5 represents the lossy amplification chain, with the transmission efficiencies, gain and added noise associated with each interconnection and amplification stages. We have: $\displaystyle N_{1}^{j}$ $\displaystyle=\eta_{1}^{j}\left[N_{\mathrm{in}}^{j}+\frac{N_{f}(1-\eta_{1}^{j})}{\eta_{1}^{j}}\right]$ (25) $\displaystyle N_{2}^{s}$ $\displaystyle=G(N_{1}^{s}+N_{\mathrm{ex}}^{s})+(G-1)(N_{1}^{i}+N_{\mathrm{ex}}^{i})$ (26) $\displaystyle N_{3}^{s}$ $\displaystyle=\eta_{2}\left[N_{2}^{s}+\frac{N_{f}(1-\eta_{2})}{\eta_{2}}\right]$ (27) $\displaystyle N_{4}^{s}$ $\displaystyle=G_{H}(N_{3}^{s}+N_{H})$ (28) $\displaystyle N_{o}^{s}$ $\displaystyle=G_{r}N_{4}^{s}.$ (29) Here, $N_{1}^{j}$ with $j\in\\{s,i\\}$ is the KIT-input noise at the signal and idler frequency respectively; then, at the signal frequency, $N_{2}^{s}$ is the KIT-output noise, $N_{3}^{s}$ is the HEMT-input noise, $N_{4}^{s}$ is the HEMT-output noise, and $N_{o}^{s}$ is the noise measured by the SA. Note that the beamsplitter interaction between the KIT and the HEMT assumes the loss to be mostly cold, which is the case in our setup where the lossy components before the HEMT are at the 30 mK stage. Refer to table 1 for the other variable definitions. From Eqs. 
25-29 we can derive: $\displaystyle N_{o}^{s}$ $\displaystyle=G_{c}^{ss}(N_{\mathrm{in}}^{s}+N_{\mathrm{eff}}^{s})+G_{c}^{si}(N_{\mathrm{in}}^{i}+N_{\mathrm{eff}}^{i})$ (30) $\displaystyle=G_{c}^{ss}\left[N_{\mathrm{in}}^{s}+N_{\mathrm{eff}}^{s}+\frac{G-1}{G}\frac{\eta_{1}^{i}}{\eta_{1}^{s}}(N_{\mathrm{in}}^{i}+N_{\mathrm{eff}}^{i})\right],$ (31) where $\displaystyle G_{c}^{ss}$ $\displaystyle=G_{r}G_{H}\eta_{2}G\eta_{1}^{s}$ (32) $\displaystyle G_{c}^{si}$ $\displaystyle=G_{r}G_{H}\eta_{2}(G-1)\eta_{1}^{i},$ (33) and where $\displaystyle N_{\mathrm{eff}}^{s}$ $\displaystyle=\frac{N_{\mathrm{ex}}^{s}+(1-\eta_{1}^{s})N_{f}}{\eta_{1}^{s}}+\frac{(1-\eta_{2})N_{f}+N_{H}}{\eta_{2}G\eta_{1}^{s}}$ (34) $\displaystyle N_{\mathrm{eff}}^{i}$ $\displaystyle=\frac{N_{\mathrm{ex}}^{i}+(1-\eta_{1}^{i})N_{f}}{\eta_{1}^{i}}.$ (35) The system-added noise is defined as the part of the output noise in Eq. 31 that is not due to the input $N_{\mathrm{in}}^{s}$, and by assuming a cold input at the idler port, i.e. assuming $N_{f}=N_{\mathrm{in}}^{i}$. We thus find $N_{\Sigma}=N_{\mathrm{eff}}^{s}+\frac{G-1}{G}\frac{\eta_{1}^{i}}{\eta_{1}^{s}}(N_{f}+N_{\mathrm{eff}}^{i}).$ (36) This equation is equivalent to Eq. 5, where we did not simplify the ratio $G_{c}^{si}/G_{c}^{ss}$. Figure 10: System-added noise measurement of a microwave amplification chain with the HEMT as first amplifier. (a) The transmission through the whole noise setup (KIT un-pumped), when the SNTJ has been replaced by a transmission line capacitively coupled to an array of resonators (not used in the present experiment). It is performed in two situations: without (black curve) and with (purple curve) the bypass. (b) Examples of $T_{o}^{s^{\prime}}=N_{o}^{s^{\prime}}\hbar\omega/k_{B}$ are shown for the SA centered at 4, 5 and 6 GHz, with fits superimposed in black lines. The SA RBW is 5 MHz. (c) From the fits, we extract the system-added noise temperature $T_{\Sigma}^{\prime}=N_{\Sigma}^{\prime}\hbar\omega/k_{B}$ as a function of frequency without (black curve) and with (purple curve) the bypass. Uncertainties are indicated by the gray area surrounding the lines. Assuming $\\{G,N_{H}\\}\gg 1$ and inserting Eqs. 34 and 35 into Eq. 36 we get $N_{\Sigma}=\frac{N_{\mathrm{ex}}^{s}+N_{\mathrm{ex}}^{i}}{\eta_{1}^{s}}+\frac{2(1-\eta_{1}^{s})N_{f}}{\eta_{1}^{s}}+\frac{N_{H}}{\eta_{2}G\eta_{1}^{s}}+N_{f}.$ (37) which is Eq. 6 presented in the main text. When the KIT is not pumped, we consider it as a lossless, noiseless, passive element. We therefore have $G=1$, $N_{\mathrm{ex}}^{s}=0$ and $N_{\mathrm{ex}}^{i}=0$. In that situation $N_{o}^{s^{\prime}}=G_{c}^{ss^{\prime}}(N_{\mathrm{in}}^{s}+N_{\Sigma}^{\prime}),$ (38) where $G_{c}^{ss^{\prime}}=G_{r}G_{H}\eta_{2}\eta_{1}^{s}$. From Eq. 36 and 34 we thus get $N_{\Sigma}^{\prime}=\frac{(1-\eta_{2}\eta_{1}^{s})N_{f}+N_{H}}{\eta_{2}\eta_{1}^{s}}$ (39) as the chain’s system-added noise with the HEMT as first amplifier. ### F.2 Discarding the idler port input noise In a simple case where $G-1\simeq G$, and $\eta_{1}^{s}\simeq\eta_{1}^{i}$, Eq. 30 becomes $N_{o}^{s}=G_{c}^{ss}(N_{\mathrm{in}}^{s}+N_{\mathrm{in}}^{i}+N_{\mathrm{eff}}^{s}+N_{\mathrm{eff}}^{i}).$ (40) Thus, varying the SNTJ bias i.e. $N_{\mathrm{in}}^{s}$ and $N_{\mathrm{in}}^{i}$ simultaneously, the y-intercept gives $N_{\mathrm{eff}}^{s}+N_{\mathrm{eff}}^{i}$, equal to zero for a quantum- limited amplifier. Also, in that simple case, the system-added noise is $N_{\Sigma}=N_{f}+N_{\mathrm{eff}}^{s}+N_{\mathrm{eff}}^{i}$, as shown by Eq. 36. 
On the other hand, if the calibrated noise (coming from the SNTJ or any other wideband noise source, such as a hot/cold load, or a variable temperature stage) only illuminates the signal port of the amplifier, or if the noise at the idler port is wrongly discarded from the analysis, we get $N_{\mathrm{in}}^{i}=N_{f}$ and Eq. 30 becomes $N_{o}^{s}=G_{c}^{ss}(N_{\mathrm{in}}^{s}+N_{f}+N_{\mathrm{eff}}^{s}+N_{\mathrm{eff}}^{i}).$ (41) Usually, no distinction is made between $N_{\mathrm{eff}}^{s}$ and $N_{\mathrm{eff}}^{i}$, and the effective excess noise is simply $N_{\mathrm{eff}}=N_{\mathrm{eff}}^{s}+N_{\mathrm{eff}}^{i}$. The sum $N_{f}+N_{\mathrm{eff}}$ is then what is commonly defined as the system-added noise. Varying the SNTJ bias, i.e. $N_{\mathrm{in}}^{s}$, the y-intercept gives $N_{f}+N_{\mathrm{eff}}$, equal to half a quantum for a quantum-limited amplifier. In practice, we fit Eqs. 40 or 41 to find the chain’s gain and the y-intercept. Fitting a situation correctly described by Eq. 40 with Eq. 41 leads to an underestimate of the system-added noise. In fact, assuming $N_{\mathrm{in}}^{s}\simeq N_{\mathrm{in}}^{i}$, Eq. 40 can be rewritten into a form comparable to Eq. 41: $N_{o}^{s}\simeq 2G_{c}^{ss}\left(N_{\mathrm{in}}^{s}+\frac{N_{\mathrm{eff}}}{2}\right).$ (42) The interpretation of the y-intercept is crucial. Here, the fit yields $N_{\mathrm{eff}}/2$ as the y-intercept, and $2G_{c}^{ss}$ as the chain’s gain. The true system-added noise is then twice the y-intercept value plus a half quantum, $N_{\Sigma}=N_{\mathrm{eff}}+N_{f}$, and the true chain’s gain is half of the fitted slope. Conversely, assuming Eq. 41 leads to the conclusion that the y-intercept value is already the system-added noise, and that the slope is the chain’s gain. The true system-added noise is thus more than twice as high, and the chain’s true gain is half as much (i.e. 3 dB lower, a mistake usually hard to detect). ### F.3 Shot-noise fit The SNTJ generates a known noise power Spietz _et al._ (2006). The amount of noise $N_{\mathrm{in}}^{j}$, with $j\in\\{s,i\\}$ delivered to the 50 Ω transmission line can be written as Lecocq _et al._ (2017) $\displaystyle N_{\mathrm{in}}^{j}=\frac{k_{B}T}{2\hbar\omega_{j}}\bigg{[}$ $\displaystyle\frac{eV+\hbar\omega_{j}}{2k_{B}T}\coth\left(\frac{eV+\hbar\omega_{j}}{2k_{B}T}\right)$ (43) $\displaystyle+$ $\displaystyle\frac{eV-\hbar\omega_{j}}{2k_{B}T}\coth\left(\frac{eV-\hbar\omega_{j}}{2k_{B}T}\right)\bigg{]},$ where T is the physical temperature of the SNTJ, and V the SNTJ voltage bias. In practice, the AWG has a slight voltage offset, which we include as a fit parameter: we write $V-V_{\mathrm{off}}$ instead of V in Eq. 43. In a first step we fit the asymptotes of the output noise response, for which $\lvert eV/(2\hbar\omega)\rvert>3$ quanta. In that case, Eq. 43 reduces to $N_{\mathrm{in}}^{j}=eV/(2\hbar\omega_{j})$, and thus Eq. 30 reduces to $\displaystyle N_{o}^{s}=$ $\displaystyle G_{c}^{ss}\left(\frac{eV- V_{\mathrm{off}}}{2\hbar\omega_{s}}+N_{\mathrm{eff}}^{s}\right)$ (44) $\displaystyle+$ $\displaystyle G_{c}^{si}\left(\frac{eV- V_{\mathrm{off}}}{2\hbar\omega_{i}}+N_{\mathrm{eff}}^{i}\right).$ We thus get $V_{\mathrm{off}}$, $G_{c}^{ss}$ and $G_{c}^{si}$. In a second step, we fix $V_{\mathrm{off}}$, $G_{c}^{ss}$ and $G_{c}^{si}$ to their values derived in the first step and fit the central region (where $\lvert eV/(2\hbar\omega)\rvert\leq 3$ quanta) to Eq. 30 to get $N_{\mathrm{eff}}^{s}$, $N_{\mathrm{eff}}^{i}$ and T. 
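As an illustration of this fitting procedure, the following minimal sketch applies the shot-noise model of Eq. 43 to synthetic data using the simple-case output-noise model of Eq. 40; the staged asymptote/central-region fit described above is condensed into a single fit for brevity, and all numerical values are made up. It is not the measurement code.

```python
import numpy as np
from scipy.constants import e, hbar, k as k_B
from scipy.optimize import curve_fit

# Illustrative sketch on synthetic data (not the measurement code): shot-noise
# model of Eq. 43 and a fit of the simple-case output noise, Eq. 40, in which
# the SNTJ illuminates both the signal and the idler inputs of the amplifier.

def n_sntj(V, omega, T):
    """SNTJ-emitted noise in quanta at angular frequency omega (Eq. 43)."""
    a = (e * V + hbar * omega) / (2 * k_B * T)
    b = (e * V - hbar * omega) / (2 * k_B * T)
    return k_B * T / (2 * hbar * omega) * (a / np.tanh(a) + b / np.tanh(b))

omega_s = 2 * np.pi * 4.0e9                # signal angular frequency (assumed)
omega_i = 2 * np.pi * 8.8812e9 - omega_s   # idler angular frequency

def n_out(V, Gc, Neff, T, Voff):
    """Output noise, Eq. 40: both SNTJ inputs amplified, plus excess noise."""
    Vb = V - Voff
    return Gc * (n_sntj(Vb, omega_s, T) + n_sntj(Vb, omega_i, T) + Neff)

# Synthetic SA trace with made-up 'true' values standing in for real data:
V = np.linspace(-650e-6, 650e-6, 601)
true = dict(Gc=1e7, Neff=2.3, T=0.04, Voff=4e-6)
rng = np.random.default_rng(1)
data = n_out(V, **true) * (1 + 1e-3 * rng.standard_normal(V.size))

popt, _ = curve_fit(n_out, V, data, p0=[5e6, 1.0, 0.05, 0.0])
Gc, Neff, T, Voff = popt
print(f"N_Sigma = N_f + N_eff = {0.5 + Neff:.2f} quanta")
# Fitting the same trace with the single-input model of Eq. 41 instead would
# return roughly twice the gain and roughly half the excess noise, thereby
# underestimating the system-added noise, as discussed in appendix F.2.
```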
The spectrum analyzer (SA) records a power spectrum $P_{o}^{s}$ (in Watts), which we convert into a number of photons: $N_{o}^{s}=P_{o}^{s}/(B\hbar\omega)$, where B is the SA resolution bandwidth. Dividing by $G_{c}^{ss}$, we then refer $N_{o}^{s}$ to the chain’s input. Finally, we can subtract $N_{f}$ (in quanta), to visually read $N_{\Sigma}$ at $V=0$ on the SNTJ curves of Fig. 6b. In fact, $\frac{N_{o}^{s}(0)}{G_{c}^{ss}}-N_{\mathrm{in}}^{s}(0)-\frac{G_{c}^{si}}{G_{c}^{ss}}N_{\mathrm{in}}^{i}(0)+N_{f}\frac{G_{c}^{si}}{G_{c}^{ss}}=N_{\Sigma}.$ (45) At $V=0$, $N_{\mathrm{in}}^{j}(0)=\frac{1}{2}\coth{\left(\frac{\hbar\omega_{j}}{k_{B}T}\right)},$ (46) where $j\in\\{s,i\\}$, therefore if $\hbar\omega_{j}\gg k_{B}T$, $N_{\mathrm{in}}^{j}(0)\simeq N_{f}=0.5$. Considering that $G_{c}^{ss}\simeq G_{c}^{si}$, Eq. 45 yields: $\frac{N_{o}^{s}(0)}{G_{c}^{ss}}-N_{f}\simeq N_{\Sigma}.$ (47) ### F.4 List of variables Table 1 lists the variables used throughout Sec. IV and appendix F.1. variable name | definition ---|--- $N_{\mathrm{in}}^{s}$ | SNTJ-generated noise at the signal frequency $N_{\mathrm{in}}^{i}$ | SNTJ-generated noise at the idler frequency $N_{f}$ | vacuum (or thermal) noise, set by the refrigerator temperature $N_{\mathrm{ex}}^{s}$ | signal-to-signal path KIT-excess noise $N_{\mathrm{ex}}^{i}$ | idler-to-signal path KIT-excess noise $N_{\mathrm{eff}}^{s}$ | signal-to-signal path effective KIT-excess noise, Eq. 34 $N_{\mathrm{eff}}^{i}$ | idler-to-signal path effective KIT-excess noise, Eq. 35 $N_{H}$ | HEMT-added noise $N_{o}^{s}$ | noise measured by the SA at the signal frequency, Eq. 30 $N_{o}^{s^{\prime}}$ | noise measured by the SA when the KIT is off, Eq. 38 $N_{\Sigma}$ | system-added noise, Eq. 36 $N_{\Sigma}^{\prime}$ | system-added noise when the KIT is off, Eq. 39 $\eta_{1}^{s}$ | transmission efficiency between SNTJ and KIT at the signal frequency $\eta_{1}^{i}$ | transmission efficiency between SNTJ and KIT at the idler frequency $\eta_{2}$ | transmission efficiency between KIT and HEMT at the signal frequency $G$ | KIT signal power gain $G_{H}$ | HEMT signal power gain $G_{r}$ | room-temperature signal power gain $G_{c}^{ss}$ | signal-to-signal chain’s gain, Eq. 32 $G_{c}^{si}$ | idler-to-signal chain’s gain, Eq. 33 Table 1: List of the variables used in the noise theory. All the variables designating a noise quantity are in units of quanta. All the transmission efficiencies are dimensionless. All the gains are in linear units. ## Appendix G LOSS BUDGET IN THE NOISE MEASUREMENT SETUP To quote the KIT-excess noise terms $N_{\mathrm{ex}}^{s}$ and $N_{\mathrm{ex}}^{i}$, it is necessary to account for the transmission efficiencies (and therefore the loss) in the measurement setup, $\eta_{1}^{s}$, $\eta_{1}^{i}$ and $\eta_{2}$. Note that $\eta_{1}^{i}$ is simply the transmission efficiency at the idler frequency $\omega_{i}=\omega_{p}-\omega_{s}$, symmetric of the signal frequency $\omega_{s}$ with respect to the half-pump frequency $\omega_{p}/2$. Thus, a broadband measurement of $\eta_{1}^{s}$ on both sides of $\omega_{p}/2$ provides both $\eta_{1}^{s}$ and $\eta_{1}^{i}$. We estimate these efficiencies in this appendix and give an overall loss budget per component between the SNTJ and the HEMT. Knowing where the loss comes from also provides guidance on how to improve the amplifier because many lossy components could be optimized, particularly by integrating them on-chip with the KIT. 
### G.1 System-added noise temperature with un-pumped KIT

First, we measured the system-added noise temperature of the amplification chain $T_{\Sigma}^{\prime}=N_{\Sigma}^{\prime}\hbar\omega/k_{B}$ (with $\hbar$ the reduced Planck constant and $k_{B}$ the Boltzmann constant) when the KIT is off, i.e. with the HEMT as the first amplifier. In that case, $N_{\Sigma}^{\prime}$ is given by Eq. 39. We measured $N_{\Sigma}^{\prime}$ in two situations: one for the measurement setup presented in Fig. 9, and one where the KIT, together with its two bias tees, the directional coupler and the low-pass filter next to it, has been bypassed, i.e. replaced by a microwave cable. In the limit where $N_{f}\ll N_{\Sigma}^{\prime}$, the ratio of $N_{\Sigma}^{\prime}$ between these two situations gives direct access to the bypassed components’ insertion loss (IL) $\mathcal{I}_{\mathrm{BP}}$ (the exact transmission efficiency ratio from which we calculate $\mathcal{I}_{\mathrm{BP}}$ is equivalent to finding $\eta_{2}\eta_{1}^{s}$ in Eq. 39). Figure 10c shows $T_{\Sigma}^{\prime}$ as a function of frequency, obtained by fitting curves like those presented in Fig. 10b, with and without the bypass. There are ripples with a 130 MHz characteristic frequency, likely due to reflections in coaxial cables between the SNTJ and the HEMT. Without the bypass $T_{\Sigma}^{\prime}=5.1\pm 1.4$ K between 3.5 and 5.5 GHz, while with the bypass $T_{\Sigma}^{\prime}=2.9\pm 1$ K. Thus, the ratio gives $\mathcal{I}_{\mathrm{BP}}=2.5$ dB.

### G.2 Component loss

Second, we replaced the SNTJ (including the bias tee) with a transmission line, to measure the transmission of a probe tone through the setup (with a VNA). This transmission line is capacitively coupled to an array of resonators, whose resonant frequencies span between $4$ and $5$ GHz, and is part of a future experiment. These resonances are not relevant for the current characterization. We measured the transmission with and without the bypass, shown in Fig. 10a. Once again, the transmission ratio between these two situations gives direct access to the bypassed components’ IL. We find $\mathcal{I}_{\mathrm{BP}}=2.4\pm 0.6$ dB, in agreement with the system-added noise measurements of Fig. 10c.

authors | mixing | pump power [dBm] | gain [dB] | bandwidth [GHz] | saturation [dBm] | noise bandwidth | system noise [K]
---|---|---|---|---|---|---|---
this work | 3WM | $-29$ | $16.5^{+1}_{-1.3}$ | $2$ | $-63$ | $3$-$6.5$ GHz | $0.66\pm 0.15$
Eom _et al._ (2012) | 4WM | $-8$ | $10^{+3}_{-3}$ | $4$ | $-52$ | $1$ Hz | $1.5^{a}$
Bockstiegel _et al._ (2014) | 4WM | $-10$ | $20^{+3}_{-3}$ | $8$ | - | - | -
Vissers _et al._ (2016) | 3WM | $-10$ | $20^{+5}_{-5}$ | $4$ | $-45^{b}$ | - | -
Chaudhuri _et al._ (2017) | 4WM | $-10$ | $15^{+3}_{-3}$ | $3$ | - | - | -
Ranzani _et al._ (2018) | 3WM | $-30$ | $10^{+2}_{-2}$ | $4$ | - | $6$-$10$ GHz | $1.5$-$5^{a}$
Zobrist _et al._ (2019) | 3WM | $-23$ | $15^{c}$ | $5^{c}$ | $-53^{c}$ | $1$ MHz | ${0.58^{+0.2}_{-0.03}}^{a}$

$^{a}$ idler noise input not accounted for. $^{b}$ at $10$ dB gain. $^{c}$ data quoted but not shown in the paper.

Table 2: Comparison of our KIT to other published KIT results. As discussed in appendix F.2, failure to account for the noise at the idler input results in an underestimate of the system-added noise temperature by about a factor of two.
We also measured the individual transmissions of the chain’s components at 4 K: bias tee (BT), low-pass filter (LPF), directional coupler (DC) and isolator (ISO). Figure 11a shows the IL of these four components. Between $3.5$ and $5.5$ GHz, $\mathcal{I}_{\mathrm{BT}}=0.3\pm 0.04$ dB, $\mathcal{I}_{\mathrm{LPF}}=0.2\pm 0.1$ dB, $\mathcal{I}_{\mathrm{DC}}=0.2\pm 0.04$ dB, and $\mathcal{I}_{\mathrm{ISO}}=0.7\pm 0.6$ dB. The isolator’s IL increases below $4$ GHz as we leave its operating band, which degrades the system-added noise performance in the same manner as reducing the KIT gain would, see Eq. 34. The SNTJ packaging loss, including the bias tee, has been previously reported to be $\mathcal{I}_{\mathrm{SNTJ}}=1$ dB (transmission efficiency of $0.8$) Chang _et al._ (2016).

Figure 11: Insertion loss (IL) of the microwave components used in the noise measurement setup (see Fig. 9). (a) The ILs are measured at 4 K: the bias tee, Anritsu K250 (red curve), the low-pass filter, Pasternack PE87FL1015 (green curve), the directional coupler, Pasternack PE2204-20 (blue curve), and the isolator, Quinstar CWJ1015-K13B (purple curve). (b) Combining the IL of the components before and after the KIT, we estimate the transmission efficiencies $\eta_{1}^{s}$ (black curve) and $\eta_{2}$ (purple curve), respectively.

### G.3 HEMT-added noise, transmission efficiencies, KIT-excess noise

We can estimate the HEMT-added noise temperature $T_{H}=N_{H}\hbar\omega/k_{B}$ from Eq. 39, because we measured $T_{\Sigma}^{\prime}$ (see Fig. 10c), and because we can estimate the total IL $\mathcal{I}_{T}$ from the SNTJ to the HEMT. In fact, without the bypass

$\mathcal{I}_{T}=\mathcal{I}_{\mathrm{SNTJ}}+\mathcal{I}_{\mathrm{BP}}+\mathcal{I}_{\mathrm{ISO}}+\mathcal{I}_{\mathrm{LPF}}$ (48)

(in dB), which gives $\mathcal{I}_{T}=4.3\pm 0.6$ dB. Then, $\eta_{2}\eta_{1}^{s}=10^{-\mathcal{I}_{T}/10}$, and we get $T_{H}=1.8\pm 0.2$ K (i.e. $N_{H}=8\pm 1$ quanta) between 3.5 and 5.5 GHz, in agreement with the HEMT data sheet, which gives $T_{H}=1.6\pm 0.3$ K in that frequency range. Subtracting $\mathcal{I}_{\mathrm{LPF}}$, $\mathcal{I}_{\mathrm{DC}}$ and $2\times\mathcal{I}_{\mathrm{BT}}$ from $\mathcal{I}_{\mathrm{BP}}$, the KIT’s packaging (see Fig. 8a) is responsible for about $\mathcal{I}_{\mathrm{KIT}}=1.4\pm 0.6$ dB of IL. This loss may be decreased in future optimizations, for example by coating the PCBs and the SMA pins with superconducting material. Finally, we can separately estimate $\eta_{1}^{s}$ and $\eta_{2}$ by adding (in dB) the IL of the chain’s components, and then use Eqs. 34 and 35 to estimate the KIT-excess noise $N_{\mathrm{ex}}^{s}$ and $N_{\mathrm{ex}}^{i}$. Between the SNTJ and the KIT,

$\mathcal{I}_{\eta_{1}^{s}}=\mathcal{I}_{\mathrm{SNTJ}}+\mathcal{I}_{\mathrm{LPF}}+\mathcal{I}_{\mathrm{DC}}+\mathcal{I}_{\mathrm{BT}}+\frac{\mathcal{I}_{\mathrm{KIT}}}{2},$ (49)

which gives $\mathcal{I}_{\eta_{1}^{s}}=2.4\pm 0.1$ dB, and between the KIT and the HEMT

$\mathcal{I}_{\eta_{2}}=\frac{\mathcal{I}_{\mathrm{KIT}}}{2}+\mathcal{I}_{\mathrm{BT}}+\mathcal{I}_{\mathrm{ISO}}+\mathcal{I}_{\mathrm{LPF}},$ (50)

which gives $\mathcal{I}_{\eta_{2}}=1.9\pm 0.6$ dB. Figure 11b shows $\eta_{1}^{s}=10^{-\mathcal{I}_{\eta_{1}^{s}}/10}$ and $\eta_{2}=10^{-\mathcal{I}_{\eta_{2}}/10}$ thus obtained. Therefore, between 3.5 and 5.5 GHz, $\eta_{1}^{s}=\eta_{1}^{i}=0.57\pm 0.02$ (both equal since the frequency band is nearly symmetric with respect to $\omega_{p}/2=4.4$ GHz), and $\eta_{2}=0.64\pm 0.10$.
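The loss budget above can be summarized in a few lines. The following minimal sketch uses the numbers quoted in this appendix, evaluated at a single frequency of 4.4 GHz for simplicity, to reproduce the transmission efficiencies of Eqs. 49 and 50, the HEMT-added noise from Eq. 39, and the overall KIT-excess noise from Eq. 37. It is an illustrative consistency check rather than the analysis code.

```python
import numpy as np
from scipy.constants import hbar, k as k_B

# Illustrative consistency check of appendix G (not the analysis code): the
# insertion losses quoted in the text, in dB, are combined per Eqs. 48-50 and
# used with Eqs. 37 and 39 to back out N_H and the overall KIT-excess noise.
# A single evaluation frequency of 4.4 GHz is assumed for simplicity.

def eta(loss_dB):
    """Convert an insertion loss in dB into a transmission efficiency."""
    return 10 ** (-loss_dB / 10)

def quanta(T, f):
    """Convert a noise temperature in kelvin into quanta at frequency f."""
    return k_B * T / (hbar * 2 * np.pi * f)

IL = dict(SNTJ=1.0, BT=0.3, LPF=0.2, DC=0.2, ISO=0.7, BP=2.4)        # dB
IL['KIT'] = IL['BP'] - IL['LPF'] - IL['DC'] - 2 * IL['BT']            # ~1.4 dB of packaging

f, N_f = 4.4e9, 0.5

# HEMT-added noise from the KIT-off measurement, T_Sigma' = 5.1 K (Eqs. 48 and 39):
eta_tot = eta(IL['SNTJ'] + IL['BP'] + IL['ISO'] + IL['LPF'])
N_H = quanta(5.1, f) * eta_tot - (1 - eta_tot) * N_f   # close to the quoted 8 +/- 1 quanta

# Transmission efficiencies on either side of the KIT (Eqs. 49 and 50):
eta1s = eta(IL['SNTJ'] + IL['LPF'] + IL['DC'] + IL['BT'] + IL['KIT'] / 2)
eta2 = eta(IL['KIT'] / 2 + IL['BT'] + IL['ISO'] + IL['LPF'])

# KIT-excess noise from the measured N_Sigma = 3.1 quanta and G = 16.6 dB (Eq. 37):
G = 10 ** (16.6 / 10)
N_ex = eta1s * (3.1 - N_f) - 2 * (1 - eta1s) * N_f - N_H / (eta2 * G)
print(f"eta1s = {eta1s:.2f}, eta2 = {eta2:.2f}, "
      f"N_H = {N_H:.1f} quanta, N_ex^s + N_ex^i = {N_ex:.2f} quanta")
# These reproduce the estimates quoted in the text within their uncertainties.
```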
## Appendix H KIT PERFORMANCE COMPARISON Table 2 compares the performance of our KIT with previous results on KITs. The pump power (third column) is the power at the KIT’s input. The gain (fourth column) is the average gain observed over the KIT amplification bandwidth (fifth column). The amplitude of gain ripples over that bandwidth is indicated next to the gain value. The saturation power (sixth column) corresponds to the input $1$ dB compression point. The noise bandwidth (seventh column) is the bandwidth over which a noise measurement was performed. Finally, column eight reports the measured system noise temperature $T_{\Sigma}=N_{\Sigma}\hbar\omega/k_{B}$ in that bandwidth. In our case, $N_{\Sigma}=3.1\pm 0.6$ quanta, which translates into $T_{\Sigma}=0.66\pm 0.15$ K. Note that Eom et.al Eom _et al._ (2012), Ranzani et.al. Ranzani _et al._ (2018) and Zobrist et.al. Zobrist _et al._ (2019) used a wideband noise source (a hot/cold load) but did not account for the effect of the idler port’s input noise. Therefore, their true system-added noise is about twice of what they reported (see appendix F.2). We do not compare the amplifier-added noise or amplifier-excess noise, because it is subject to approximations in the amount of loss present in the noise measurement setup. In addition, the inferred added noise usually does not include the loss of mandatory microwave components used when operating the amplifier, and as such it is an under- estimation of the true, useful amplifier-added noise. ## References * Castellanos-Beltran _et al._ (2008) M. A. Castellanos-Beltran, K. D. Irwin, G. C. Hilton, L. R. Vale, and K. W. Lehnert, “Amplification and squeezing of quantum noise with a tunable Josephson metamaterial,” Nature Physics 4, 929–931 (2008). * Bergeal _et al._ (2010) N. Bergeal, F. Schackert, M. Metcalfe, R. Vijay, V. E. Manucharyan, L. Frunzio, D. E. Prober, R. J. Schoelkopf, S. M. Girvin, and M. H. Devoret, “Phase-preserving amplification near the quantum limit with a josephson ring modulator,” Nature 465, 64–68 (2010). * Roch _et al._ (2012) N. Roch, E. Flurin, F. Nguyen, P. Morfin, P. Campagne-Ibarcq, M. H. Devoret, and B. Huard, “Widely tunable, nondegenerate three-wave mixing microwave device operating near the quantum limit,” Phys. Rev. Lett. 108, 147701 (2012). * Mutus _et al._ (2013) J. Y. Mutus, T. C. White, E. Jeffrey, D. Sank, R. Barends, J. Bochmann, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, J. Kelly, A. Megrant, C. Neill, P. J. J. O’Malley, P. Roushan, A. Vainsencher, J. Wenner, I. Siddiqi, R. Vijay, A. N. Cleland, and John M. Martinis, “Design and characterization of a lumped element single-ended superconducting microwave parametric amplifier with on-chip flux bias line,” Applied Physics Letters 103, 122602 (2013). * Zhong _et al._ (2013) L. Zhong, E. P. Menzel, R. Di Candia, P. Eder, M. Ihmig, A. Baust, M. Haeberlein, E. Hoffmann, K. Inomata, T. Yamamoto, Y. Nakamura, E. Solano, F. Deppe, A. Marx, and R. Gross, “Squeezing with a flux-driven josephson parametric amplifier,” New J. Phys. 15, 125013 (2013). * Lecocq _et al._ (2017) F. Lecocq, L. Ranzani, G. A. Peterson, K. Cicak, R. W. Simmonds, J. D. Teufel, and J. Aumentado, “Nonreciprocal microwave signal processing with a field-programmable josephson amplifier,” Phys. Rev. Applied 7, 024028 (2017). * Malnou _et al._ (2018) M. Malnou, D. A. Palken, Leila R. Vale, Gene C. Hilton, and K. W. Lehnert, “Optimal operation of a josephson parametric amplifier for vacuum squeezing,” Phys. Rev. Applied 9, 044023 (2018). 
* Mutus _et al._ (2014) J. Y. Mutus, T. C. White, R. Barends, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, A. Megrant, C. Neill, P. J. J. O’Malley, P. Roushan, D. Sank, A. Vainsencher, J. Wenner, K. M. Sundqvist, A. N. Cleland, and John M. Martinis, “Strong environmental coupling in a josephson parametric amplifier,” Applied Physics Letters 104, 263513 (2014). * Roy _et al._ (2015) T. Roy, S. Kundu, M. Chand, A. M. Vadiraj, A. Ranadive, N. Nehra, M. P. Patankar, J. Aumentado, A. A. Clerk, and R. Vijay, “Broadband parametric amplification with impedance engineering: Beyond the gain-bandwidth product,” Applied Physics Letters 107, 262601 (2015). * Frattini _et al._ (2017) N. E. Frattini, U. Vool, S. Shankar, A. Narla, K. M. Sliwa, and M. H. Devoret, “3-wave mixing josephson dipole element,” Applied Physics Letters 110, 222603 (2017). * Frattini _et al._ (2018) N. E. Frattini, V. V. Sivak, A. Lingenfelter, S. Shankar, and M. H. Devoret, “Optimizing the nonlinearity and dissipation of a snail parametric amplifier for dynamic range,” Phys. Rev. Applied 10, 054020 (2018). * Sivak _et al._ (2019) V. V. Sivak, N. E. Frattini, V. R. Joshi, A. Lingenfelter, S. Shankar, and M. H. Devoret, “Kerr-free three-wave mixing in superconducting quantum circuits,” Phys. Rev. Applied 11, 054060 (2019). * Macklin _et al._ (2015) C. Macklin, K. O’Brien, D. Hover, M. E. Schwartz, V. Bolkhovsky, X. Zhang, W. D. Oliver, and I. Siddiqi, “A near–quantum-limited josephson traveling-wave parametric amplifier,” Science 350, 307–310 (2015). * White _et al._ (2015) T. C. White, J. Y. Mutus, I.-C. Hoi, R. Barends, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, A. Megrant, C. Neill, P. J. J. O’Malley, P. Roushan, D. Sank, A. Vainsencher, J. Wenner, S. Chaudhuri, J. Gao, and J. M. Martinis, “Traveling wave parametric amplifier with josephson junctions using minimal resonator phase matching,” Applied Physics Letters 106, 242601 (2015). * Planat _et al._ (2020) L. Planat, A. Ranadive, R. Dassonneville, J. Puertas Martínez, S. Léger, C. Naud, O. Buisson, W. Hasch-Guichard, D.M. Basko, and N. Roch, “Photonic-crystal josephson traveling-wave parametric amplifier,” Phys. Rev. X 10, 021021 (2020). * Sivak _et al._ (2020) V. V. Sivak, S. Shankar, G. Liu, J. Aumentado, and M. H. Devoret, “Josephson array-mode parametric amplifier,” Phys. Rev. Applied 13, 024014 (2020). * Zorin (2016) A. B. Zorin, “Josephson traveling-wave parametric amplifier with three-wave mixing,” Phys. Rev. Applied 6, 034006 (2016). * Zorin (2019) A.B. Zorin, “Flux-driven josephson traveling-wave parametric amplifier,” Phys. Rev. Applied 12, 044051 (2019). * Eom _et al._ (2012) B. H. Eom, P. K. Day, H. G. LeDuc, and J. Zmuidzinas, “A wideband, low-noise superconducting amplifier with high dynamic range,” Nature Physics 8, 623–627 (2012). * Arute _et al._ (2019) F. Arute _et al._ , “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505–510 (2019). * Shor (1994) P. W. Shor, “Algorithms for quantum computation: discrete logarithms and factoring,” in _Proceedings 35th Annual Symposium on Foundations of Computer Science_ (IEEE, 1994) pp. 124–134. * Grover (1997) L. K. Grover, “Quantum mechanics helps in searching for a needle in a haystack,” Phys. Rev. Lett. 79, 325–328 (1997). * Szypryt _et al._ (2017) P. Szypryt, S. R. Meeker, G. Coiffard, N. Fruitwala, B. Bumble, G. Ulbricht, A. B. Walter, M. Daal, C. Bockstiegel, G. Collura, N. Zobrist, I. Lipartito, and B. A. 
Mazin, “Large-format platinum silicide microwave kinetic inductance detectors for optical to near-ir astronomy,” Opt. Express 25, 25894–25909 (2017). * Hochberg _et al._ (2016a) Y. Hochberg, Y. Zhao, and K. M. Zurek, “Superconducting detectors for superlight dark matter,” Phys. Rev. Lett. 116, 011301 (2016a). * Hochberg _et al._ (2016b) Y. Hochberg, M. Pyle, Y. Zhao, and K. M. Zurek, “Detecting superlight dark matter with fermi-degenerate materials,” Journal of High Energy Physics 2016, 57 (2016b). * Chaudhuri _et al._ (2017) S. Chaudhuri, D. Li, K. D. Irwin, C. Bockstiegel, J. Hubmayr, J. N. Ullom, M. R. Vissers, and J. Gao, “Broadband parametric amplifiers based on nonlinear kinetic inductance artificial transmission lines,” Applied Physics Letters 110, 152601 (2017). * Zobrist _et al._ (2019) N. Zobrist, B. H. Eom, P. Day, B. A. Mazin, S. R. Meeker, B. Bumble, H. G. LeDuc, G. Coiffard, P. Szypryt, N. Fruitwala, I. Lipartito, and C. Bockstiegel, “Wide-band parametric amplifier readout and resolution of optical microwave kinetic inductance detectors,” Applied Physics Letters 115, 042601 (2019). * Spietz _et al._ (2003) L. Spietz, K. W. Lehnert, I. Siddiqi, and R. J. Schoelkopf, “Primary electronic thermometry using the shot noise of a tunnel junction,” Science 300, 1929–1932 (2003). * Spietz _et al._ (2006) L. Spietz, R. J. Schoelkopf, and P. Pari, “Shot noise thermometry down to 10mk,” Applied Physics Letters 89, 183123 (2006). * Vissers _et al._ (2016) M. R. Vissers, R. P. Erickson, H.-S. Ku, L. Vale, X. Wu, G. C. Hilton, and D. P. Pappas, “Low-noise kinetic inductance traveling-wave amplifier using three-wave mixing,” Applied Physics Letters 108, 012601 (2016). * Erickson and Pappas (2017) R. P. Erickson and D. P. Pappas, “Theory of multiwave mixing within the superconducting kinetic-inductance traveling-wave amplifier,” Phys. Rev. B 95, 104506 (2017). * Pozar (2011) D.M. Pozar, _Microwave Engineering, 4th Edition_ (Wiley, 2011). * Bockstiegel _et al._ (2014) J. Bockstiegel, C.and Gao, M.R. Vissers, M. Sandberg, S. Chaudhuri, A. Sanders, L.R. Vale, K.D. Irwin, and D.P. Pappas, “Development of a broadband nbtin traveling wave parametric amplifier for mkid readout,” Journal of Low Temperature Physics 176, 476–482 (2014). * Ranzani _et al._ (2018) L. Ranzani, M. Bal, Kin Chung Fong, G. Ribeill, X. Wu, J. Long, H.-S. Ku, R. P. Erickson, D. Pappas, and T. A. Ohki, “Kinetic inductance traveling-wave amplifiers for multiplexed qubit readout,” Applied Physics Letters 113, 242602 (2018). * Caves (1982) Carlton M. Caves, “Quantum limits on noise in linear amplifiers,” Phys. Rev. D 26, 1817–1839 (1982). * Gao _et al._ (2008) Jiansong Gao, Miguel Daal, Anastasios Vayonakis, Shwetank Kumar, Jonas Zmuidzinas, Bernard Sadoulet, Benjamin A. Mazin, Peter K. Day, and Henry G. Leduc, “Experimental evidence for a surface distribution of two-level systems in superconducting lithographed microwave resonators,” Applied Physics Letters 92, 152505 (2008). * Boyd (2019) Robert W Boyd, _Nonlinear optics_ (Academic press, 2019). * Chang _et al._ (2016) S.-W. Chang, J. Aumentado, W.-T. Wong, and J.C. Bardin, “Noise measurement of cryogenic low noise amplifiers using a tunnel-junction shot-noise source,” in _2016 IEEE MTT-S International Microwave Symposium (IMS)_ (IEEE, 2016) pp. 1–4.
# Phy-Taylor: Physics-Model-Based Deep Neural Networks Yanbing Mao Engineering Technology Division, Wayne State University, Detroit, MI 48201, USA corresponding author: Yanbing Mao (e-mail<EMAIL_ADDRESS>Lui Sha Department of Computer Science, University of Illinois at Urbana- Champaign, Urbana, IL 61801, USA Huajie Shao Department of Computer Science, College of William & Mary, Williamsburg, VA 23185, USA Yuliang Gu Department of Mechanical Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA Qixin Wang Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR, China Tarek Abdelzaher Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA ###### Abstract Purely data-driven deep neural networks (DNNs) applied to physical engineering systems can infer relations that violate physics laws, thus leading to unexpected consequences. To address this challenge, we propose a physics- model-based DNN framework, called Phy-Taylor, that accelerates learning compliant representations with physical knowledge. The Phy-Taylor framework makes two key contributions; it introduces a new architectural physics- compatible neural network (PhN), and features a novel compliance mechanism, we call Physics-guided Neural Network Editing. The PhN aims to directly capture nonlinearities inspired by physical quantities, such as kinetic energy, potential energy, electrical power, and aerodynamic drag force. To do so, the PhN augments neural network layers with two key components: (i) monomials of Taylor series expansion of nonlinear functions capturing physical knowledge, and (ii) a suppressor for mitigating the influence of noise. The neural- network editing mechanism further modifies network links and activation functions consistently with physical knowledge. As an extension, we also propose a self-correcting Phy-Taylor framework that introduces two additional capabilities: (i) physics-model-based safety relationship learning, and (ii) automatic output correction when violations of safety occur. Through experiments, we show that (by expressing hard-to-learn nonlinearities directly and by constraining dependencies) Phy-Taylor features considerably fewer parameters, and a remarkably accelerated training process, while offering enhanced model robustness and accuracy. ## 1 Introduction The paper proposes a novel physics-model-based deep neural network framework, called Phy-Taylor, that addresses a critical flaw in purely data-driven neural networks, when used to model aspects of physical engineering systems. Namely, it addresses the potential lack of agreement between learned latent neural network representations and prior physical knowledge – a flaw that sometimes leads to catastrophic consequences [1]. As shown in Figure 1, the Phy-Taylor framework introduces two contributions: the deep physics-compatible neural networks and a physics-guided neural network editing mechanism, aiming at ensuring compliance with prior physical knowledge. Figure 1: Architectures of Phy-Taylor and physics-compatible neural network, having neural network (NN) editing including link editing and activation editing. The work contributes to emerging research on physics-enhanced deep neural networks. Current approaches include physics-informed neural networks [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], physics-guided neural-network architectures [17, 18, 19, 20, 21, 22] and physics-inspired neural operators [23, 24]. 
The physics-informed networks and physics-guided architectures use compact partial differential equations (PDEs) for formulating loss functions and/or architectural components. Physics-inspired neural operators, such as the Koopman neural operator [23] and the Fourier neural operator [24], on the other hand, map nonlinear functions into alternative domains, where it is easier to train their parameters from observational data and reason about convergence. These frameworks improve consistency with prior analytical knowledge, but remain problematic in several respects. For example, (i) due to incomplete knowledge, the compact or precise PDEs may not always be available, and (ii) fully-connected neural networks can introduce spurious correlations that deviate from strict compliance with available well-validated physical knowledge. Instead, through the use of a Taylor-series expansion, Phy-Taylor is able to leverage partial knowledge. Moreover, thanks to the neural editing mechanism, the framework removes links and reshapes activation functions that are not consistent with physics-based representations.

The Phy-Taylor framework leverages the intuition that most physical relations live in low-dimensional manifolds, shaped by applicable physical laws; the estimation of key physical variables from high-dimensional system observations, however, is often challenging. By expressing known knowledge as relations between yet-to-be-computed latent variables, we force representation learning to converge to a space where these variables represent desired physical quantities, shaped by the applicable (expressed) physical laws. In effect, by shaping non-linear terms and relations in the latent space, we arrive at a desired physics-compliant latent representation. More specifically, Phy-Taylor offers the following two advantages:

* • Non-linear Physics Term Representation: Classical neural networks can learn arbitrary non-linear relations by unfolding them into layers of linear weighting functions and switch-like activations. This mechanism is akin to constructing nonlinearities by stitching together piecewise linear behaviors. Instead, by directly exploiting non-linear terms of the Taylor series expansion, we offer a set of features that express physical nonlinearities much more succinctly, thereby reducing the number of needed parameters and improving accuracy of representation. Monomials of the Taylor series can capture common nonlinearities present in physics equations, such as kinetic energy, potential energy, rolling resistance and aerodynamic drag force (a toy numerical illustration follows this list). The (controllable) model error of the series drops significantly as the series order increases [25]. The approach constructs input features that represent monomials of the Taylor series and adds a suppressor for mitigating the influence of noise on the augmented inputs.

* • Removing Spurious Correlations: The general topology of neural networks allows for models that capture spurious correlations in training samples (overfitting) [26, 27]. In contrast, we develop a neural network (topology) editing mechanism in the latent space that removes links among certain latent variables when these links contradict their intended physical behaviors, thereby forcing the latent representation to converge to variables with the desired semantic interpretation that obey the desired physical relations.
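To make the first advantage concrete, the following toy illustration (all constants are assumptions of this example, not values from the paper) fits the aerodynamic drag force with a single weight on an explicit $v^{2}$ monomial feature; a piecewise-linear network would need many parameters to approximate the same curve.

```python
import numpy as np

# Assumed toy constants: drag F = 0.5 * rho * C_D * A * v^2.
rho, C_D, A = 1.2, 0.3, 2.2
v = np.linspace(1.0, 30.0, 50)
F = 0.5 * rho * C_D * A * v**2

# Expose v^2 as an explicit monomial feature: one linear weight then
# recovers the physical coefficient exactly via least squares.
w, *_ = np.linalg.lstsq(v[:, None] ** 2, F, rcond=None)
print(w[0], 0.5 * rho * C_D * A)   # both equal 0.396
```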
Through experiments with learning the dynamics of autonomous vehicles and other non-linear physical systems, we show that Phy-Taylor exhibits a considerable reduction in learning parameters, a remarkably accelerated training process, and greatly enhanced model robustness and accuracy (viewed from the perspective of long-horizon prediction of a trajectory). Experiments with safe velocity regulation in autonomous vehicles further demonstrate that the self-correcting Phy-Taylor successfully addresses the dilemma between prediction horizon and computation time that nonlinear model-predictive control and control barrier functions face in safety-critical control.

## 2 Problem Formulation

Table 1: Table of Notation

| $\mathbb{R}^{n}$: set of $n$-dimensional real vectors | $\mathbb{R}_{\geq 0}$: set of non-negative real numbers |
|---|---|
| $\mathbb{N}$: set of natural numbers | $[\mathbf{x}]_{i}$: $i$-th entry of vector $\mathbf{x}$ |
| $[\mathbf{x}]_{i:j}$: sub-vector formed by the $i$-th to $j$-th entries of vector $\mathbf{x}$ | $[\mathbf{W}]_{i,j}$: element at row $i$ and column $j$ of matrix $\mathbf{W}$ |
| $[\mathbf{W}]_{i,:}$: $i$-th row of matrix $\mathbf{W}$ | $[\mathbf{x};\ \mathbf{y}]$: stacked (tall column) vector of vectors $\mathbf{x}$ and $\mathbf{y}$ |
| $\mathbf{0}_{n}$: $n$-dimensional vector of all zeros | $\mathbf{1}_{n}$: $n$-dimensional vector of all ones |
| $\mathbf{O}_{m\times n}$: $m\times n$-dimensional zero matrix | $\mathbf{I}_{n}$: $n\times n$-dimensional identity matrix |
| $||\cdot||$: Euclidean norm of a vector or absolute value of a number | $\odot$: Hadamard product |
| $\bullet$: multiplication operator | $\mathrm{len}(\mathbf{x})$: length of vector $\mathbf{x}$ |
| act: activation function | sus: suppressor function |
| $\top$: matrix or vector transposition | ina: a function that is inactive |
| $\boxplus$: a known model-substructure parameter | $*$: an unknown model-substructure parameter |

Consider the problem of computing some output vectors, $\mathbf{y}$, from a set of observations, $\mathbf{x}$. The relation between $\mathbf{x}$ and $\mathbf{y}$ is partially determined by physical models of known structure (but possibly unknown parameter values) and partially unknown, thus calling for representation learning of the missing substructures using neural network observables. For example, $\mathbf{y}$ might denote the estimated momentum and future position of a target as a function of a vector of observations $\mathbf{x}$ that includes its position, velocity, and type. In this formulation, position and velocity might be directly related to output quantities via known physical relations, but type is represented only indirectly by an image that requires some representation learning in order to translate it into relevant parameters (such as mass and maneuverability) from which the outputs can be computed.
We express the overall input/output relation by the function: $\displaystyle\mathbf{y}=\underbrace{\mathbf{A}}_{\text{weight matrix}}\cdot\underbrace{\mathfrak{m}(\mathbf{x},r)}_{\text{node- representation vector}}+\underbrace{\mathbf{f}(\mathbf{x})}_{\text{model mismatch}}\triangleq\underbrace{\mathbf{g}(\mathbf{x})}_{\text{ground truth model}},$ (1) where $\mathbf{y}$ and $\mathbf{x}$ are the output and input vectors of overall system model, respectively, and the parameter $r\in\mathbb{N}$ controls model size. For convenience, Table 1 summarizes the remaining notations used throughout the paper. Since Equation (1) combines known and unknown model substructures, we distinguish them according to the definition below. ###### Definition 1. For all $i\in\\{1,2,\ldots,\mathrm{len}(\mathbf{y})\\}$, $j\in\\{1,2,\ldots,\mathrm{len}(\mathfrak{m}(\mathbf{x},r))\\}$, element $[\mathbf{A}]_{i,j}$ is said to be a known model-substructure parameter in Equation (1) if and only if $\frac{{\partial[\mathbf{f}(\mathbf{x})]_{i}}}{{\partial{[\mathfrak{m}}(\mathbf{x},r)]_{j}}}\equiv 0$. Otherwise, it is called an unknown model-substructure parameter. Definition 1 indicates that the model-substructure knowledge includes: * • (i) Known parameter values but completely unknown model formula (see e.g., the Example 1). * • (ii) Partially-known model formula. For example, in the (lateral) force balance equation of autonomous vehicle [28]: $\displaystyle m\left({\ddot{y}+\dot{\psi}{V_{x}}}\right)={F_{\text{yf}}}+{F_{\text{fr}}}+{F_{\text{bank}}},$ (2) the force formula due to road bank angle $\phi$, i.e., ${F_{\text{bank}}}=mg\sin(\phi)$, is known while other force formulas are unknown because of complex and unforeseen driving environments. * • (iii) Known model formula but unknown parameter values. ###### Example 1 (Identify Known Model-Substructure Parameters). DNN Design Goal: Use the current position $p(k)$, velocity $v(k)$ and mass $m$ to estimate a vehicle’s next velocity $v(k+1)$, sensed road friction coefficient $r(k+1)$ and safety metric $s(k+1)$ of velocity regulation, in the dynamic driving environments. With the knowledge of vehicle dynamics and control [28], the problem can be mathematically described by $\displaystyle v\left({k+1}\right)$ $\displaystyle={g_{1}}\left({p\left(k\right),\leavevmode\nobreak\ v\left(k\right),\leavevmode\nobreak\ m}\right),$ (3a) $\displaystyle r\left({k+1}\right)$ $\displaystyle={g_{2}}\left({m,\leavevmode\nobreak\ v\left(k\right)}\right),$ (3b) $\displaystyle s\left({k+1}\right)$ $\displaystyle={g_{3}}\left({v\left(k\right)}\right),$ (3c) ​We here assume the formulas of ${g_{1}}(\cdot)$–${g_{3}}(\cdot)$ are unknown. While the Equations (3b) and (3c) indicate that given the inputs, the available knowledge are (i) the road friction coefficient varies with a vehicle’s mass and real-time velocity only, and (ii) the considered safety metric depends on velocity only. We let $\mathbf{x}=[p(k);\leavevmode\nobreak\ v(k);\leavevmode\nobreak\ m]$, $\mathbf{y}=[v(k+1);\leavevmode\nobreak\ r(k+1);\leavevmode\nobreak\ s(k+1)]$, $\mathbf{g}(\mathbf{x})=[{g_{1}}({p(k),v(k),m});\leavevmode\nobreak\ {g_{2}}({m,v(k)});\leavevmode\nobreak\ {g_{3}}({v(k)})]$, and $\mathfrak{m}(\mathbf{x},r)=[1;\leavevmode\nobreak\ p(k);\leavevmode\nobreak\ v(k);\leavevmode\nobreak\ m;\leavevmode\nobreak\ p^{2}(k);\leavevmode\nobreak\ p(k)v(k);\leavevmode\nobreak\ mp(k);\leavevmode\nobreak\ v^{2}(k);\leavevmode\nobreak\ mv(k);\leavevmode\nobreak\ m^{2}]$. 
The ground truth model (3) is then equivalently rewritten in the form of (1):

$\displaystyle\mathbf{y}=\underbrace{\left[{\begin{array}[]{*{20}{c}}*&*&*&*&*&*&*&*&*&*\\ *&0&*&*&0&0&0&*&*&*\\ *&0&*&0&0&0&0&*&0&0\end{array}}\right]}_{\mathbf{A}}\cdot\mathfrak{m}(\mathbf{x},r)+\underbrace{\mathbf{g}(\mathbf{x})-\mathbf{A}\cdot\mathfrak{m}(\mathbf{x},r)}_{\mathbf{f}(\mathbf{x})}=\mathbf{g}(\mathbf{x}),$ (7)

which thus encodes the available knowledge points (i) and (ii) into the known model-substructure parameters (i.e., the zeros) of the system matrix $\mathbf{A}$. Considering this definition, the problem addressed in this paper is formally stated below.

###### Problem 1.

Given a time series of inputs, $\mathbf{x}$, the corresponding outputs, $\mathbf{y}$, and the known model substructures in Equation (1), it is desired to develop an end-to-end neural network that directly estimates $\mathbf{y}$ (denoted by $\widehat{\mathbf{y}}$), given $\mathbf{x}$, consistently with all known model substructures. In other words, the model must satisfy the property that for each known model-substructure parameter, $[\mathbf{A}]_{i,j}$, the end-to-end model must ensure that $\frac{{\partial{{[\widehat{\mathbf{y}}}]_{i}}}}{{\partial{[\mathfrak{m}}\left({\mathbf{x},r}\right)]_{j}}}\equiv[\mathbf{A}]_{i,j}$ for any $\mathfrak{m}(\mathbf{x},r)$.

The above definition allows the system described by Equation (1) to have an end-to-end model that intertwines well-known substructure properties with high-order unmodeled correlations of unknown nonlinear structure. In this respect, our problem differs from past seminal frameworks of physics-enhanced DNNs [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 29, 30, 31, 32, 33, 34, 35], which use compact partial differential equations (PDEs) to formulate PDE-regulated loss functions and/or DNN architectures that account for the degree of consistency with the PDEs. The proposed solution to Problem 1 is the Phy-Taylor framework, which relies on two building blocks: a deep physics-compatible neural network (PhN) and a physics-guided neural network editing mechanism, presented in the next section.

## 3 Phy-Taylor Framework

The proposed Phy-Taylor for addressing Problem 1 is shown in Figure 1, and is built on the conjunction of the deep physics-compatible neural network (PhN) and physics-guided neural network (NN) editing. In other words, implementing NN editing according to Taylor's theorem, so as to embed available physical knowledge into the deep PhN, yields the Phy-Taylor. The PhN is a neural network layer with a key component: a physics-inspired augmentation (called Phy-Augmentation) for generating the monomials in Equation (1) of the Taylor series expansion of nonlinear functions capturing physical knowledge. The physics-guided NN editing – including link editing and activation editing – further modifies network links and activation functions consistently with physical knowledge. Specifically, link editing removes or preserves links according to their consistency with physical knowledge, while activation editing performs physics-knowledge-preserving computation in the output channel of each PhN. Collaboratively, through link and activation editing, the input/output of Phy-Taylor strictly complies with the available physical knowledge, which is the desired solution to Problem 1. Next, we detail the two components.
### 3.1 The Physics-compatible Neural Network (PhN) In order to capture non-linear features of physical functions, we introduce a new type of network layer that is augmented with terms derived from Taylor series expansion. The Taylor’s Theorem offers a series expansion of arbitrary nonlinear functions, as shown below. Taylor’s Theorem (Chapter 2.4 [25]): Let $\mathbf{g}\\!:\leavevmode\nobreak\ \mathbb{R}^{n}\to\mathbb{R}$ be a $r$-times continuously differentiable function at the point $\mathbf{o}\in\mathbb{R}^{n}$. Then there exists $\mathbf{h}_{\alpha}\\!:\leavevmode\nobreak\ \mathbb{R}^{n}\to\mathbb{R}$, where $\left|\alpha\right|=r$, such that $\displaystyle\mathbf{g}(\mathbf{x})=\sum\limits_{\left|\alpha\right|\leq r}{\frac{{{\partial^{\alpha}}\mathbf{g}(\mathbf{o})}}{{\alpha!}}}{\left({\mathbf{x}-\mathbf{o}}\right)^{\alpha}}+\sum\limits_{\left|\alpha\right|=r}{{\mathbf{h}_{\alpha}}(\mathbf{x}){{({\mathbf{x}-\mathbf{o}})^{\alpha}}}},\hskip 5.69046pt\text{and}\leavevmode\nobreak\ \mathop{\lim}\limits_{\mathbf{x}\to\mathbf{o}}{\mathbf{h}_{\alpha}}\left(\mathbf{x}\right)=\mathbf{0},$ (8) where $\alpha=\left[{{\alpha_{1}};\leavevmode\nobreak\ {\alpha_{2}};\leavevmode\nobreak\ \ldots;\leavevmode\nobreak\ {\alpha_{n}}}\right]$, $\left|\alpha\right|=\sum\limits_{i=1}^{n}{{\alpha_{i}}}$, $\alpha!=\prod\limits_{i=1}^{n}{{\alpha_{i}}}!$, ${\mathbf{x}^{\alpha}}=\prod\limits_{i=1}^{n}{\mathbf{x}_{i}^{{\alpha_{i}}}}$, and ${\partial^{\alpha}}\mathbf{g}=\frac{{{\partial^{\left|\alpha\right|}}\mathbf{g}}}{{\partial\mathbf{x}_{1}^{{\alpha_{1}}}\cdot\ldots\cdot\partial\mathbf{x}_{n}^{{\alpha_{n}}}}}$. The Taylor’s theorem has several desirable properties: * • Non-linear Physics Term Representation: The high-order monomials (i.e., the ones included in $\left({\mathbf{x}-\mathbf{o}}\right)^{\alpha}$ with $|\alpha|\geq 2$) of the Taylor series (i.e., $\sum\limits_{\left|\alpha\right|\leq r}{\frac{{{D^{\alpha}}\mathbf{g}(\mathbf{o})}}{{\alpha!}}}{\left({\mathbf{x}-\mathbf{o}}\right)^{\alpha}}$) capture core nonlinearities of physical quantities such as kinetic energy ($\triangleq\frac{1}{2}m{v^{2}}$), potential energy ($\triangleq\frac{1}{2}k{x^{2}}$), electrical power ($\triangleq V\cdot I$) and aerodynamic drag force ($\triangleq\frac{1}{2}\rho{v^{2}}{C_{D}}A$), that drive the state dynamics of physical systems. * • Controllable Model Accuracy: Given ${\mathbf{h}_{\alpha}}(\mathbf{x})$ is finite and $\left\|{\mathbf{x}-\mathbf{o}}\right\|<1$, the error $\sum\limits_{\left|\alpha\right|=r}{{\mathbf{h}_{\alpha}}(\mathbf{x}){{({\mathbf{x}-\mathbf{o}})^{\alpha}}}}$ for approximating the ground truth $\mathbf{g}(\mathbf{x})$ will drop significantly as the order $r=|\alpha|$ increases and $\mathop{\lim}\limits_{|\alpha|=r\to\infty}{\mathbf{h}_{\alpha}}(\mathbf{x}){\left({\mathbf{x}-\mathbf{o}}\right)^{\alpha}}=\mathbf{0}$. This allows for controllable model accuracy via controlling order $r$. * • Knowledge Embedding: The Taylor series can directly project the known model substructure parameters of the ground-truth model (1) into neural network parameters including the weight matrix (${\frac{{{D^{\alpha}}\mathbf{g}(\mathbf{o})}}{{\alpha!}}}$ with $|\alpha|>0$) and bias (${\frac{{{D^{\alpha}}\mathbf{g}(\mathbf{o})}}{{\alpha!}}}$ with $|\alpha|=0$), thus paving the way to embed the available physical knowledge in the form of an appropriately weighted neural network layer. Figure 2: Phy-Augmentation architecture. 
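As a concrete instance of the knowledge-embedding property above (a toy case assumed here for illustration, not taken from the experiments), consider the kinetic energy $g(m,v)=\frac{1}{2}mv^{2}$ expanded at $\mathbf{o}=\mathbf{0}$. All of its Taylor coefficients vanish except the one attached to the monomial $mv^{2}$, namely $\frac{{\partial^{\alpha}}g(\mathbf{0})}{\alpha!}=\frac{1}{1!\,2!}\cdot\frac{\partial^{3}g}{\partial m\,\partial v^{2}}=\frac{1}{2}$ for $\alpha=(1,2)$. Hence a single link weight of $\frac{1}{2}$ on the $mv^{2}$ entry of the Phy-Augmentation vector reproduces $g$ exactly for any order $r\geq 3$, and every remaining weight is a known zero that NN editing can freeze.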
We note that Taylor's theorem relies on the assumption that the ground truth $\mathbf{g}(\mathbf{x})$ is an $r$-times continuously differentiable function at the point $\mathbf{o}$. If the assumption does not hold, the Taylor series instead approximates an $r$-times continuously differentiable proxy of the ground truth. For continuous functions, this is often a sufficient approximation. Next, we describe how PhNs embed the Taylor series expansion into neural network layers. The resulting architecture (of a single PhN layer) is shown in Figure 1. Compared with a classical neural network layer, we introduce the Phy-Augmentation, whose architecture is shown in Figure 2. The Phy-Augmentation has two components: (i) augmented inputs that represent monomials of a Taylor series expansion, and (ii) a suppressor for mitigating the influence of noise on such augmented inputs (i.e., high-order monomials). Next, we detail them.

#### 3.1.1 Phy-Augmentation: Taylor Series Monomials

Input: augmentation order $r$, input $\mathbf{x}$, point $\mathbf{o}$, suppressor mapping $\chi(\cdot)$.
1 Suppress input: $[\mathbf{x}]_{i}\leftarrow\begin{cases}\chi([{\mathbf{x}}]_{i}),&\text{if suppressor is active}\\ [{\mathbf{x}}]_{i},&\text{otherwise}\end{cases}$, $i\in\{1,2,\ldots,\text{len}(\mathbf{x})\}$;
2 Generate index vector of input entries: $\mathbf{i}\leftarrow[1;\ 2;\ \ldots;\ \mathrm{len}({\mathbf{x}})]$;
3 Generate augmentations: ${\mathfrak{m}}({\mathbf{x}},r)\leftarrow{\mathbf{x}}$;
4 for _$\_=2$ to $r$_ do
5  for _$i=1$ to $\mathrm{len}({\mathbf{x}})$_ do
6   Compute temporaries: $\mathbf{t}_{a}\leftarrow[{\mathbf{x}}]_{i}\cdot[{\mathbf{x}}]_{[\mathbf{i}]_{i}:\mathrm{len}({\mathbf{x}})}$;
7   if _$i==1$_ then
8    Generate temporaries: $\mathbf{t}_{b}\leftarrow\mathbf{t}_{a}$;
9   else
10    Generate temporaries: $\mathbf{t}_{b}\leftarrow\left[\mathbf{t}_{b};\ \mathbf{t}_{a}\right]$;
11   end if
12   Update index entry: $[\mathbf{i}]_{i}\leftarrow\mathrm{len}({\mathbf{x}})$;
13   Update augmentations: ${\mathfrak{m}}({\mathbf{x}},r)\leftarrow\left[{\mathfrak{m}}({\mathbf{x}},r);\ {\mathbf{t}_{b}}\right]$;
14
15  end for
16
17 end for
Output vector of augmented monomials: ${\mathfrak{m}}({\mathbf{x}},r)\leftarrow\left[1;\ {\mathfrak{m}}({\mathbf{x}},r)\right]$.
Algorithm 1 Phy-Augmentation Procedure

Figure 3: An example of Algorithm 1 in the TensorFlow framework, where the input $\tilde{\mathbf{x}}\in\mathbb{R}^{3}$ is from Line 1 of Algorithm 1.

The function of the physical-feature augmentation in Figure 2 is to generate the vector of physical features (i.e., node representations) in the form of Taylor series monomials, as formally described by Algorithm 1. The index bookkeeping and nested loops of Algorithm 1 guarantee that the generated node-representation vector embraces all of the non-redundant monomials of the Taylor series, with none missing. The final output step stacks the vector with a one; this means one PhN node is fixed to one, and the bias (corresponding to ${\frac{{{D^{\alpha}}\mathbf{g}(\mathbf{o})}}{{\alpha!}}}$ with $|\alpha|=0$) is thus treated as an ordinary link weight in PhN layers. A minimal NumPy sketch that reproduces the same set of monomials is given below.
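The following sketch reproduces the intent of Algorithm 1: it enumerates the same set of non-redundant monomials of degrees $1$ through $r$ (plus the constant), although it replaces the index bookkeeping of the pseudocode with `itertools.combinations_with_replacement`; the suppressor is passed in as an optional callable, an assumption of this illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def phy_augmentation(x, r, suppressor=None):
    """Sketch of Phy-Augmentation: returns [1, all monomials of degrees 1..r of x]."""
    x = np.asarray(x, dtype=float)
    if suppressor is not None:          # optional noise suppressor chi(.) of Sec. 3.1.2
        x = suppressor(x)
    feats = [1.0]                        # the stacked "1": bias handled as a link weight
    for degree in range(1, r + 1):
        for idx in combinations_with_replacement(range(len(x)), degree):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

# For x = [p, v, m] and r = 2 this returns the 10-dimensional vector
# [1, p, v, m, p^2, pv, pm, v^2, vm, m^2] used in Example 1.
print(phy_augmentation([2.0, 3.0, 5.0], r=2))
```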
As the example in Figure 3 shows, Phy-Augmentation enables the PhN to capture core nonlinearities of physical quantities (e.g., kinetic energy, potential energy, electrical power and aerodynamic drag force) that drive the state dynamics of physical systems, and thus to represent or approximate physical knowledge in the form of a Taylor series. We note that Line 1 of Algorithm 1 indicates the noise suppressor need not be applied to every input element. The critical reason is that the extra mapping the suppressor induces on an input can destroy compliance with physical knowledge whenever the available model-substructure knowledge does not account for that mapping. We next present the function of the suppressor.

#### 3.1.2 Phy-Augmentation: Noise Suppressor

The suppressor in Figure 2 mitigates the influence of noise on the augmented high-order monomials. Before describing its working mechanism, we present a metric pertaining to noise and true data.

###### Definition 2.

Consider the noisy data and define the data-to-noise ratio ($\mathrm{DNR}$):

$\displaystyle[\bar{\mathbf{x}}]_{i}=\underbrace{[\mathbf{h}]_{i}}_{\text{true data}}+\underbrace{[{\mathbf{w}}]_{i}}_{\text{noise}}\in\mathbb{R},\qquad\mathrm{DNR}_{i}\triangleq\frac{[\mathbf{h}]_{i}}{[\mathbf{w}]_{i}}.$ (9)

The auxiliary Theorem 4 presented in Supplementary Information 9.1 implies that the high-order monomials can shrink their DNRs due to the nonlinear mapping. This means the PhN can be vulnerable to noisy inputs, owing to the high-order monomials generated by Phy-Augmentation. Hence, mitigating the influence of noise is vital for enhancing the robustness of PhNs and, consequently, of the Phy-Taylor. As shown in Figure 2, we incorporate a suppressor into the PhN to process the raw input data, such that the high-order monomials produced by Phy-Augmentation enlarge their DNRs. Building on Definition 2, the proposed noise suppressor mapping is

$\displaystyle\chi([\bar{\mathbf{x}}]_{i})=\chi([\mathbf{h}]_{i}+[\mathbf{w}]_{i})=\begin{cases}0,&\text{if}\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}<0\\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i},&\text{if}\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\ \text{and}\ [\mathbf{w}]_{i}<0\\ ([\mathbf{h}]_{i}+[\mathbf{w}]_{i})\cdot\kappa_{i}+\rho_{i},&\text{if}\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\ \text{and}\ [\mathbf{w}]_{i}>0\end{cases},$ (10)

where the parameters $\rho_{i}$ and $\kappa_{i}$ satisfy

$\displaystyle|\rho_{i}|\geq|[\mathbf{h}]_{i}+[\mathbf{w}]_{i}|\cdot|\kappa_{i}|.$ (11)

We next present the suppressor properties in the following theorem, whose proof appears in Supplementary Information 9.2.

###### Theorem 1.

Consider the noisy data $[\bar{\mathbf{x}}]_{i}$ and the suppressor described in Equations (9) and (10), respectively.
Under the condition (11), the suppressor output, denoted by $[\widehat{\mathbf{x}}]_{i}=\chi([\bar{\mathbf{x}}]_{i})$, has the properties: The DNR magnitude of high-order monomial $[\widehat{\mathbf{x}}]_{i}^{p}[\widehat{\mathbf{x}}]_{j}^{q}$ ($p+q\geq 2$) is strictly increasing with respect to DNR magnitudes of $[\widehat{\mathbf{x}}]_{i}$ and $[\widehat{\mathbf{x}}]_{j}$. (12) The true data and the noise of suppressor output $[\widehat{\mathbf{x}}]_{i}$ are $\displaystyle[\widetilde{\mathbf{h}}]_{i}=\begin{cases}\\![\mathbf{h}]_{i}\cdot\kappa_{i}+\rho_{i},\\!\\!&[\mathbf{h}]_{i}\\!+\\![\mathbf{w}]_{i}\\!\geq\\!0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}\\!>\\!0\\\ \\![\mathbf{h}]_{i},\\!\\!&\text{otherwise}\\\ \end{cases},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ [\widetilde{\mathbf{w}}]_{i}=\begin{cases}\\!-[\mathbf{h}]_{i},\\!\\!&[\mathbf{h}]_{i}\\!+\\![\mathbf{w}]_{i}\\!<\\!0\\\ \\![\mathbf{w}]_{i},\\!\\!&[\mathbf{h}]_{i}\\!+\\![\mathbf{w}]_{i}\\!\geq\\!0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}\\!<\\!0\\\ \\![\mathbf{w}]_{i}\cdot\kappa_{i},\\!\\!&[\mathbf{h}]_{i}\\!+\\![\mathbf{w}]_{i}\\!\geq\\!0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}\\!>\\!0\end{cases},$ (13) such that $[\widehat{\mathbf{x}}]_{i}=[\widetilde{\mathbf{h}}]_{i}+[\widetilde{\mathbf{w}}]_{i},\leavevmode\nobreak\ i\in\\{1,2,\dots,\mathrm{len}(\widehat{\mathbf{x}})\\}$. The result (13) implies the parameters $\kappa$ and $\rho$ control the DNRs of suppressed data, consequently the high-order monomials. Furthermore, the result (12) suggests that through designing parameters $\kappa_{i}$, $\rho_{i}$, $\kappa_{j}$ and $\rho_{j}$ for increasing the DNR magnitudes of data $[\widehat{\mathbf{x}}]_{i}$ and $[\widehat{\mathbf{x}}]_{j}$, the DNR of high-order monomial $[\widehat{\mathbf{x}}]_{i}^{p}[\widehat{\mathbf{x}}]_{j}^{q}$ can be enlarged consequently, such that the influence of noise is mitigated. ### 3.2 Physics-guided Neural Network Editing Input: Available knowledge included in system matrix $\mathbf{A}$ of ground- truth model (1), terminal output dimension $\text{len}(\mathbf{y})$, number $p$ of PhNs, activation functions $\text{act}(\cdot)$, $\mathbf{y}_{\left\langle 0\right\rangle}=\mathbf{x}$ and $r_{\left\langle 1\right\rangle}=r$. 
1 for _$t=1$ to $p$_ do
2  if _$t==1$_ then
3   Deactivate noise suppressor;
4   Generate node-representation vector $\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle})$ via Algorithm 1;
5   Generate knowledge matrix $\mathbf{K}_{\left\langle t\right\rangle}$: $[\mathbf{K}_{\left\langle t\right\rangle}]_{i,j}\leftarrow\begin{cases}[\mathbf{A}_{\left\langle t\right\rangle}]_{i,j},&[\mathbf{A}_{\left\langle t\right\rangle}]_{i,j}=\boxplus\\ 0,&\text{otherwise}\end{cases}$;
6   Generate weight-masking matrix $\mathbf{M}_{\left\langle t\right\rangle}$: $[\mathbf{M}_{\left\langle t\right\rangle}]_{i,j}\leftarrow\begin{cases}0,&[\mathbf{A}_{\left\langle t\right\rangle}]_{i,j}=\boxplus\\ 1,&\text{otherwise}\end{cases}$;
7   Generate activation-masking vector $\mathbf{a}_{\left\langle t\right\rangle}$: $[\mathbf{a}_{\left\langle t\right\rangle}]_{i}\leftarrow\begin{cases}0,&[\mathbf{M}_{\left\langle t\right\rangle}]_{i,j}=0,\ \forall j\in\{1,\ldots,\text{len}(\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle}))\}\\ 1,&\text{otherwise}\end{cases}$;
8
9  else
10   Generate node-representation vector $\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle})$ via Algorithm 1;
11   Generate knowledge matrix $\mathbf{K}_{\left\langle t\right\rangle}$:
$\displaystyle\mathbf{K}_{\left\langle t\right\rangle}\leftarrow\left[{\begin{array}[]{ccc}\mathbf{0}_{\text{len}(\mathbf{y})}&\mathbf{I}_{\text{len}(\mathbf{y})}&\mathbf{O}_{\text{len}(\mathbf{y})\times(\text{len}(\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle}))-\text{len}(\mathbf{y})-1)}\\ \multicolumn{3}{c}{\mathbf{O}_{(\text{len}(\mathbf{y}_{\left\langle t\right\rangle})-\text{len}(\mathbf{y}))\times\text{len}(\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle}))}}\end{array}}\right];$ (15)
12   Generate weight-masking matrix $\mathbf{M}_{\left\langle t\right\rangle}$: $[\mathbf{M}_{\left\langle t\right\rangle}]_{i,j}\leftarrow\begin{cases}0,&\frac{{\partial[\mathfrak{m}(\mathbf{y}_{\left\langle t\right\rangle},r_{\left\langle t\right\rangle})]_{j}}}{{\partial[\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})]_{v}}}\neq 0\ \text{and}\ [\mathbf{M}_{\left\langle 1\right\rangle}]_{i,v}=0,\ v\in\{1,2,\dots,\text{len}(\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle}))\}\\ 1,&\text{otherwise}\end{cases}$;
13   Generate activation-masking vector $\mathbf{a}_{\left\langle t\right\rangle}\leftarrow\left[\mathbf{a}_{\left\langle 1\right\rangle};\ \mathbf{1}_{\text{len}(\mathbf{y}_{\left\langle t\right\rangle})-\text{len}(\mathbf{y})}\right]$;
14
15  end if
16  Generate weight matrix: ${\mathbf{W}_{\left\langle t\right\rangle}}$;
17  Generate uncertainty matrix $\mathbf{U}_{\left\langle t\right\rangle}\leftarrow\mathbf{M}_{\left\langle t\right\rangle}\odot{\mathbf{W}_{\left\langle t\right\rangle}}$;
18  Compute output: $\mathbf{y}_{\left\langle t\right\rangle}\leftarrow\mathbf{K}_{\left\langle t\right\rangle}\cdot\mathfrak{m}(\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle})+\mathbf{a}_{\left\langle t\right\rangle}\odot\text{act}\left({\mathbf{U}_{\left\langle t\right\rangle}\cdot\mathfrak{m}\left({\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle}}\right)}\right)$;
19 end for
Output: terminal output $\widehat{\mathbf{y}}\leftarrow\mathbf{y}_{\left\langle p\right\rangle}$.
Algorithm 2 Physics-guided NN Editing

Building on the deep PhN, this section presents the neural network (NN) editing for embedding and preserving the available physical knowledge, through physics-guided link editing and activation editing. Specifically, link editing centers on removing or preserving links according to their consistency with physical knowledge, while activation editing performs physical-knowledge-preserving computation in the output channels of PhNs. Thanks to the concurrent link and activation editing, the input/output of Phy-Taylor can strictly comply with the available physical knowledge. Using the notation ‘$\boxplus$’ defined in Table 1, the procedure of physics-guided NN editing is described in Algorithm 2. For the edited weight matrix $\mathbf{W}_{\left\langle t\right\rangle}$, if the entries in a given row are all known model-substructure parameters, the associated activation should be inactive. Otherwise, the Phy-Taylor cannot strictly preserve the available physical knowledge, due to the extra nonlinear mappings induced by the activation functions. This motivates the physics-knowledge-preserving computation in Line 18 of Algorithm 2. Figure 4 summarizes the flowchart of NN editing in a single PhN layer:

* • (i) Given the node-representation vector from Algorithm 1, the original (fully-connected) weight matrix is edited via link editing to embed the assigned physical knowledge, resulting in $\mathbf{W}_{\left\langle t\right\rangle}$.

* • (ii) The edited weight matrix ${\mathbf{W}_{\left\langle t\right\rangle}}$ is separated into a knowledge matrix $\mathbf{K}_{\left\langle t\right\rangle}$ and an uncertainty matrix $\mathbf{U}_{\left\langle t\right\rangle}$, such that ${\mathbf{W}_{\left\langle t\right\rangle}}=\mathbf{K}_{\left\langle t\right\rangle}+\mathbf{U}_{\left\langle t\right\rangle}$. Specifically, $\mathbf{K}_{\left\langle t\right\rangle}$, generated in Lines 5 and 11, includes all the known model-substructure parameters, while $\mathbf{M}_{\left\langle t\right\rangle}$, generated in Lines 6 and 12, is used to generate the uncertainty matrix $\mathbf{U}_{\left\langle t\right\rangle}$ (see Line 17), which includes all the unknown model-substructure parameters, obtained by freezing the known model-substructure parameters of $\mathbf{W}_{\left\langle t\right\rangle}$ to zero.

* • (iii) $\mathbf{K}_{\left\langle t\right\rangle}$, $\mathbf{M}_{\left\langle t\right\rangle}$ and the activation-masking vector $\mathbf{a}_{\left\langle t\right\rangle}$ (generated in Lines 7 and 13) are used by activation editing for the physical-knowledge-preserving computation of the output in each PhN layer. The function of $\mathbf{a}_{\left\langle t\right\rangle}$ is to avoid the extra mapping (induced by the activation) that prior physical knowledge does not include.

Figure 4: Flowchart of NN editing in a single PhN layer.

The flowchart of NN editing operating in cascade PhN is depicted in Figure 5. A minimal sketch of the first-layer editing steps is given below.
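The following sketch illustrates (under assumed conventions, with `np.nan` standing in for an unknown '*' entry) how the knowledge matrix, weight mask and activation mask of the first PhN layer can be generated and combined into the physics-preserving output computation; it is an illustration, not the authors' released implementation.

```python
import numpy as np

def edit_first_layer(A):
    """Knowledge matrix K, weight mask M and activation mask a from A,
    where np.nan marks an unknown model-substructure parameter ('*')."""
    known = ~np.isnan(A)
    K = np.where(known, np.nan_to_num(A), 0.0)   # keep known values, zero elsewhere
    M = np.where(known, 0.0, 1.0)                # train only the unknown entries
    a = (M.sum(axis=1) > 0).astype(float)        # freeze activation of fully-known rows
    return K, M, a

def phn_output(K, M, W, a, m_x, act=np.tanh):
    """Physics-preserving PhN output: y = K m(x,r) + a ⊙ act((M ⊙ W) m(x,r))."""
    U = M * W                                    # uncertainty matrix
    return K @ m_x + a * act(U @ m_x)

# Dependency pattern of Example 1 (3 outputs, 10 monomials); nan marks '*'.
A = np.full((3, 10), np.nan)
A[1, [1, 4, 5, 6]] = 0.0             # r(k+1) does not depend on p, p^2, pv, pm
A[2, [1, 3, 4, 5, 6, 8, 9]] = 0.0    # s(k+1) depends on v and v^2 only (besides the constant)
K, M, a = edit_first_layer(A)
W = np.zeros((3, 10))                # untrained weights, shown only for the call signature
y = phn_output(K, M, W, a, m_x=np.ones(10))
```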
Lines 5 and 6 of Algorithm 2 imply that $\mathbf{A}=\mathbf{K}_{\left\langle 1\right\rangle}+\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{A}$. Leveraging this identity and the setting $r_{\left\langle 1\right\rangle}=r$, the ground-truth model (1) is rewritten as

$\displaystyle\mathbf{y}=(\mathbf{K}_{\left\langle 1\right\rangle}+\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{A})\cdot\mathfrak{m}(\mathbf{x},r)+\mathbf{f}(\mathbf{x})=\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+(\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{A})\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+\mathbf{f}(\mathbf{x}).$ (16)

Figure 5: Implementing NN editing, i.e., Algorithm 2, on Example 1. (i) Unknown substructures are formed by the grey links; the known substructures are formed by the red and blue links. (ii) The black links are cut to avoid spurious correlations; otherwise they would introduce a dependence of $s\left({k+1}\right)$ on the mass m, contradicting the physical knowledge.

We obtain from Line 18 of Algorithm 2 that the output of the first PhN layer is

$\displaystyle\mathbf{y}_{\left\langle 1\right\rangle}=\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+\mathbf{a}_{\left\langle 1\right\rangle}\odot\text{act}\left({\mathbf{U}_{\left\langle 1\right\rangle}\cdot{\mathfrak{m}}\left({\mathbf{x},r_{\left\langle 1\right\rangle}}\right)}\right).$ (17)

Recalling that $\mathbf{K}_{\left\langle 1\right\rangle}$ includes all the known model-substructure parameters of $\mathbf{A}$ while $\mathbf{U}_{\left\langle 1\right\rangle}$ includes the remaining (unknown) ones, we conclude from (16) and (17) that the available physical knowledge pertaining to the ground-truth model (1) has been embedded into the first PhN layer. As Figure 5 shows, the embedded knowledge is then passed down to the remaining cascade PhNs and preserved therein, such that the end-to-end Phy-Taylor model strictly complies with the prior physical knowledge. This knowledge passing is achieved by the block matrices $\mathbf{K}_{\left\langle t\right\rangle}$, $t\in\{2,\ldots,p\}$, generated in Line 11; consequently, the output of the $t$-th PhN layer satisfies

$\displaystyle[\mathbf{y}_{\left\langle t\right\rangle}]_{1:\text{len}(\mathbf{y})}=\underbrace{\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})}_{\text{knowledge passing}}+\underbrace{[\mathbf{a}_{\left\langle t\right\rangle}\odot\text{act}\left({\mathbf{U}_{\left\langle t\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle t-1\right\rangle},r_{\left\langle t\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}}_{\text{knowledge preserving}},\quad\forall t\in\{2,3,\ldots,p\}.$ (18)

Meanwhile, $\mathbf{U}_{\left\langle t\right\rangle}=\mathbf{M}_{\left\langle t\right\rangle}\odot{\mathbf{W}_{\left\langle t\right\rangle}}$ means that the masking matrix $\mathbf{M}_{\left\langle t\right\rangle}$ generated in Line 12 removes the spurious correlations in the cascade PhN, which is depicted by the link-cutting operation in Figure 5.

### 3.3 The Solution to Problem 1: Phy-Taylor

As described in Figure 1, implementing the physics-guided NN editing in the deep PhN, under the guidance of the Taylor series, yields the Phy-Taylor.
The Phy- Taylor embeds the available physical knowledge into each PhN, such that its input/output strictly complies with the physical knowledge, which is formally stated in the following theorem. The theorem proof appears in Supplementary Information 9.3. ###### Theorem 2. Consider the Phy-Taylor described by Figure 1. The input/output (i.e., ${\mathbf{x}}$/$\widehat{\mathbf{y}}$) of Phy-Taylor strictly complies with the available knowledge pertaining to the physics model (1) of ground truth, i.e., if the $[\mathbf{A}]_{i,j}$ is a known model-substructure parameter, then $\frac{{\partial{{[\widehat{\mathbf{y}}}]_{i}}}}{{\partial{[\mathfrak{m}}\left({\mathbf{x},r}\right)]_{j}}}\equiv\frac{{\partial{[\mathbf{y}]_{i}}}}{{\partial{[\mathfrak{m}}\left({\mathbf{x},r}\right)]_{j}}}\equiv[\mathbf{A}]_{i,j}$ always holds. ## 4 Phy-Taylor Properties Figure 6: (a): Two Fully-connected NN layers. (b): A single PhN layer. Moving forward, this section focuses on the property analysis of Phy-Taylor. ### 4.1 Parameter Quantity Reduction The Figures 2 and 3 show that the Phy-Augmentation, i.e., the Algorithm 1, expands the input without involving weight-matrix multiplication. This trait can be leveraged to significantly reduce the quantity of learning parameters including weights and bias. For the demonstration, we consider the two network models in Figure 6 (a) and (b), where the (a) describes a fully-connected two- layer network, while the (b) describes a single PhN. Observing them, we obtain that given the same dimensions of input and terminal output, the number of learning parameters of Figure 6 (a) is $(m+1)n+(n+1)p$ (including $(m+p)n$ weights and $n+p$ bias), while the number of learning parameters of Figure 6 (b) is $(n+1)p$ (including $n\times p$ weights and $p$ bias). The number difference of learning parameters is thus $\displaystyle(m+1)n+(n+1)p-(n+1)p=(m+1)n.$ (19) We note the quantity of reduced parameters (19) is the lower bound of PhN in Phy-Taylor framework, since it is obtained without considering physics-guided NN editing for removing and freezing links and bias according to the available physical knowledge. ### 4.2 Single PhN v.s. Cascade PhN We next investigate if the space complexity (i.e., the quantity of augmented monomials) of Phy-Augmentation of a single PhN with a large augmentation order can be reduced via cascade PhN with relatively small orders. To simplify the presentation, a single PhN and cascade PhN are respectively represented in the following equations. 
$\displaystyle\widehat{\mathbf{y}}=\mathrm{PhN}({\left.{\mathbf{x}}\in\mathbb{R}^{n}\right|r})\in\mathbb{R}^{m},$ (20) $\displaystyle{\mathbf{x}}\in{{\mathbb{R}}^{n}}\leavevmode\nobreak\ \leavevmode\nobreak\ \longmapsto\leavevmode\nobreak\ \leavevmode\nobreak\ {\mathbf{y}_{\left\langle 1\right\rangle}}=\mathrm{PhN}({\left.{\mathbf{x}}\right|{r_{\left\langle 1\right\rangle}}})\in{{\mathbb{R}}^{{n_{\left\langle 1\right\rangle}}}}\leavevmode\nobreak\ \leavevmode\nobreak\ \longmapsto\leavevmode\nobreak\ \leavevmode\nobreak\ \ldots\leavevmode\nobreak\ \leavevmode\nobreak\ \longmapsto\leavevmode\nobreak\ \leavevmode\nobreak\ {\mathbf{y}_{{\left\langle d-1\right\rangle}}}=\mathrm{PhN}({\left.{{\mathbf{y}_{{\left\langle d-2\right\rangle}}}}\right|{r_{{\left\langle d-1\right\rangle}}}})\in{{\mathbb{R}}^{{n_{{\left\langle d-1\right\rangle}}}}}$ $\displaystyle\hskip 255.79042pt\longmapsto\leavevmode\nobreak\ \leavevmode\nobreak\ \widehat{\mathbf{y}}=\mathrm{PhN}({\left.{{\mathbf{y}_{{\left\langle d-1\right\rangle}}}}\right|{r_{\left\langle d\right\rangle}}})\in\mathbb{R}^{m},$ (21) where the cascade architecture consists of $d$ PhNs. To guarantee the cascade PhN (21) and the single PhN (20) have the same monomials, their augmentation orders are required to satisfy $\displaystyle\prod\limits_{v=1}^{d}{{r_{{\left\langle v\right\rangle}}}}=r,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \forall r_{{\left\langle v\right\rangle}},\leavevmode\nobreak\ \leavevmode\nobreak\ r\in\mathbb{N}.$ (22) The space complexity difference of Phy-Augmentation is formally presented in the following theorem, whose proof is presented in Supplementary Information 9.4. ###### Theorem 3. Under the condition (22), the space complexity difference between single PhN (20) and cascade PhN (21), due to Phy-Augmentation, is $\displaystyle\mathrm{len}(\mathfrak{m}(\mathbf{x},r))-\sum\limits_{p=1}^{d}{\mathrm{len}(\mathfrak{m}(\mathbf{x},r_{{\left\langle p\right\rangle}}))}=\sum\limits_{s={r_{\left\langle 1\right\rangle}}+1}^{r}{\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)!s!}}}-\sum\limits_{v=1}^{d-1}{\sum\limits_{s=1}^{{r_{{\left\langle v+1\right\rangle}}}}{\frac{{\left({{n_{\left\langle v\right\rangle}}+s-1}\right)!}}{{\left({{n_{\left\langle v\right\rangle}}-1}\right)!s!}}}}+1-d.$ (23) The Theorem 3 implies that the output dimensions and the augmentation orders of intermediate PhNs are critical in the significant reduction of space complexity via cascade PhN. However, an intuitive question arises: Does the cascade PhN reduce the complexity at the cost of model accuracy? Without loss of generality, we use the following example to answer the question. ###### Example 2. For simplicity in explanation, we ignore the bias and consider the scenario that both the activation and the suppressor are inactive. For the single PhN (20), we let $\mathbf{x}\in\mathbb{R}^{2}$ and $\widehat{\mathbf{y}}\in\mathbb{R}$ and $r=4$. 
Its output is then computed according to

$\displaystyle\widehat{\mathbf{y}}={w_{1}}{[\mathbf{x}]_{1}}+{w_{2}}{[\mathbf{x}]_{2}}+{w_{3}}[\mathbf{x}]_{1}^{2}+{w_{4}}{[\mathbf{x}]_{1}}{[\mathbf{x}]_{2}}+{w_{5}}[\mathbf{x}]_{2}^{2}+{w_{6}}[\mathbf{x}]_{1}^{3}+{w_{7}}[\mathbf{x}]_{1}^{2}{[\mathbf{x}]_{2}}+{w_{8}}{[\mathbf{x}]_{1}}[\mathbf{x}]_{2}^{2}+{w_{9}}[\mathbf{x}]_{2}^{3}+{w_{10}}[\mathbf{x}]_{1}^{4}+{w_{11}}[\mathbf{x}]_{1}^{3}{[\mathbf{x}]_{2}}+{w_{12}}[\mathbf{x}]_{1}^{2}[\mathbf{x}]_{2}^{2}+{w_{13}}{[\mathbf{x}]_{1}}[\mathbf{x}]_{2}^{3}+{w_{14}}[\mathbf{x}]_{2}^{4},$ (24)

For the corresponding cascade PhN (21), we let $r_{{\left\langle 1\right\rangle}}=r_{{\left\langle 2\right\rangle}}=2$; the output dimension of the first PhN is 1, as is the dimension of the terminal output. We thus have

$\displaystyle\widehat{\mathbf{y}}={{\hat{w}}_{6}}{\mathbf{y}_{\left\langle 1\right\rangle}}+{{\hat{w}}_{7}}\mathbf{y}^{2}_{\left\langle 1\right\rangle}={{\hat{w}}_{6}}({{{\tilde{w}}_{1}}{[\mathbf{x}]_{1}}+{{\tilde{w}}_{2}}{[\mathbf{x}]_{2}}+{{\tilde{w}}_{3}}[\mathbf{x}]_{1}^{2}+{{\tilde{w}}_{4}}{[\mathbf{x}]_{1}}{[\mathbf{x}]_{2}}+{{\tilde{w}}_{5}}[\mathbf{x}]_{2}^{2}})+{{\hat{w}}_{7}}{\left({{{\tilde{w}}_{1}}{[\mathbf{x}]_{1}}+{{\tilde{w}}_{2}}{[\mathbf{x}]_{2}}+{{\tilde{w}}_{3}}[\mathbf{x}]_{1}^{2}+{{\tilde{w}}_{4}}{[\mathbf{x}]_{1}}{[\mathbf{x}]_{2}}+{{\tilde{w}}_{5}}[\mathbf{x}]_{2}^{2}}\right)^{2}}.$ (25)

Observing (24) and (25), we discover that (i) since condition (22) is satisfied, both the single and the cascade architectures have the same monomials, and (ii) the single PhN layer has 14 weight parameters (i.e., ${w_{1}},{w_{2}},\dots,{w_{14}}$ in Equation (24)), while the cascade layers have only 7 weight parameters in total (i.e., ${\tilde{w}_{1}},\ldots,{\tilde{w}_{5}}$ and ${\hat{w}_{6}},{\hat{w}_{7}}$ in Equation (25)). Intuitively, we can conclude that if the removed weights are associated with links that contradict physical knowledge, the cascade PhN can further increase model accuracy; otherwise, it reduces the space complexity at the cost of model accuracy.

## 5 Extension: Self-Correcting Phy-Taylor

Figure 7: Self-correcting Phy-Taylor Architecture: $\mathbf{u}(k)$ denotes the vector of real-time decisions, ${\bf{s}}(u(k))$ denotes the vector of real-time safety metrics, and $\bf{c}$ is the vector of safety bounds.

Recent incidents caused by deployed DNNs overshadow the revolutionizing potential of AI in physical engineering systems [36, 1], especially safety-critical systems, whose unintended behavior can result in death or serious injury to people, or severe damage to equipment or the environment [37, 38]. Safe control and planning is a fundamental means of enhancing the safety assurance of AI-assisted physical systems, which often operate in environments where time and safety are critical, such as airplanes, medical drones and autonomous vehicles. To comply with safety constraints in the face of potential conflicts with control objectives, the framework of control barrier functions (CBFs) has been proposed for the computation of real-time safety-critical control commands [39, 40]. CBFs, however, use only current state information without prediction, so the resulting control policy is greedy and ill-suited to proactive safe control. It is well known that model predictive control (MPC) yields a less greedy safe control policy, since it takes future state information into account [41, 42].
Motivated by these observations, MPC incorporating CBFs, i.e., MPC-CBF, was proposed [43]. Due to the nonlinear dynamics, however, MPC-CBF faces a dilemma between the prediction horizon and the computation time of safe control commands, which induces considerable feedback delays and thus leads to failures in time- and safety-critical operating environments. To address the dilemma, we propose the self-correcting Phy-Taylor, whose architecture is shown in Figure 7. One of its missions is to learn the safety relationship between the real-time decisions and the safety metrics, taking future information into account:

$\displaystyle\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau)=\sum\limits_{t=k}^{k+\tau-1}{\tilde{\mathbf{f}}(\mathbf{x}(t),\mathbf{u}(t))},$ (26)

where $\tilde{\mathbf{f}}(\mathbf{x}(t),\mathbf{u}(t))$ is the predefined vector of safety metrics at time $t$, and $\tau$ denotes the horizon of future safety-metric information. Inside the self-correcting Phy-Taylor, the learned safety relationship approximating (26) is first subject to off-line verification against the available physical knowledge, based on which necessary revisions are made. According to the off-line verified and (if needed) revised safety relationship, the correction of the real-time decision $\mathbf{u}(k)$ is triggered if any safety metric $[\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau)]_{i}$, $i\in\{1,2,\ldots,h\}$, exceeds the preset safety bounds (or leaves the safety envelopes). However, a formula learned directly for (26) is ill-suited (if not unusable) for delivering this procedure, owing to the complicated dependence of $[\mathbf{s}(\mathbf{x}(k),\mathbf{u}(k),\tau)]_{i}$ on both the system state $\mathbf{x}(k)$ and the decision $\mathbf{u}(k)$. To address this problem, as shown in Figure 7, we decouple the real-time decisions from the real-time system states. Specifically,

* • Given the real-time system state $\mathbf{x}(k)$ as the original input, the first Phy-Taylor outputs the real-time decision $\mathbf{u}(k)$. This is motivated by the fact that state-feedback control is the most common choice in physical engineering systems [44]. In other words, the computation of the raw $\mathbf{u}(k)$ directly depends on the real-time system state $\mathbf{x}(k)$.

* • Given the real-time decision $\mathbf{u}(k)$ (i.e., the output of the first Phy-Taylor) as the input of the second Phy-Taylor, the terminal output is the real-time safety metric $\mathbf{s}(\mathbf{u}(k))$. This is motivated by the fact that the decision $\mathbf{u}(k)$ manipulates the system state. In other words, the safety metric $\mathbf{s}(\mathbf{u}(k))$ directly depends on the decision $\mathbf{u}(k)$ and indirectly depends on the system state $\mathbf{x}(k)$.

* • The two Phy-Taylors are trained simultaneously according to the training loss function (a minimal sketch of this joint loss follows the list):

$\displaystyle\mathcal{L}=\alpha\left\|{{\bf{s}}(\mathbf{u}(k))-{\bf{s}}({\bf{x}}(k),{\bf{u}}(k),\tau)}\right\|+\beta\left\|{\breve{\bf{u}}(k)-\bf{u}(k)}\right\|,$ (27)

where ${\bf{s}}({\bf{x}}(k),{\bf{u}}(k),\tau)$ given in Equation (26) is the ground-truth safety-metric vector, $\breve{\bf{u}}(k)$ is the ground-truth decision vector, and $\alpha$ and $\beta$ are hyperparameters. The two cascaded Phy-Taylors thus depend on each other.

* • To render the learned safety relationship ${\bf{s}}(\mathbf{u}(k))$ tractable, the activation and suppressor inside the second Phy-Taylor are inactive, such that ${\bf{s}}(\mathbf{u}(k))$ is expressed in the form of a Taylor series.
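The following is a minimal sketch of the joint loss in Equation (27); the two model callables, their input/output shapes, and the stand-in lambdas in the usage line are assumptions of this illustration rather than part of the framework's implementation.

```python
import numpy as np

def self_correcting_loss(ctrl_model, safety_model, x_k, u_gt, s_gt,
                         alpha=1.0, beta=1.0):
    """Equation (27): alpha*||s(u(k)) - s_gt|| + beta*||u_gt - u(k)||."""
    u_pred = ctrl_model(x_k)          # first Phy-Taylor: state -> decision u(k)
    s_pred = safety_model(u_pred)     # second Phy-Taylor: decision -> safety metrics
    return (alpha * np.linalg.norm(s_pred - s_gt)
            + beta * np.linalg.norm(u_gt - u_pred))

# Toy usage with linear/quadratic stand-ins for the two Phy-Taylors:
loss = self_correcting_loss(lambda x: 0.1 * x[:2], lambda u: u ** 2,
                            x_k=np.ones(7), u_gt=np.zeros(2), s_gt=np.zeros(2))
```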
Given the verified and revised (if needed) relationship, the self-correcting procedure will be triggered (if a safety metric exceeds the safety bound $\mathbf{c}$) for correcting decision according to $\displaystyle\mathbf{u}(k)\leftarrow\mathop{\arg\min}\limits_{\widetilde{\mathbf{u}}(k)\in{\mathbb{R}^{m}}}\left\\{{\left.{\left\|{\widetilde{\mathbf{u}}(k)-\mathbf{u}(k)}\right\|}\right|[\bf{s}(\widetilde{\mathbf{u}}(k)]_{i}<[\mathbf{c}]_{i},\leavevmode\nobreak\ \leavevmode\nobreak\ i\in\\{1,2,\ldots,\text{len}({\bf{s}}(\mathbf{u}(k)))\\}}\right\\}.$ (28) We note the self-correcting mechanism and the safety revision of relationship between ${\bf{s}}(\mathbf{u}(k))$ and $\mathbf{u}(k)$ for delivering (28) vary with safety problems and physical systems. An example in this paper is the safe control of autonomous vehicles: Algorithm 3 in the Experiments section. ## 6 Experiments The demonstrations are performed on three different systems with different degrees of available physical knowledge: autonomous vehicles, coupled pendulums, and a subsystem of US Illinois climate. ### 6.1 Autonomous Vehicles Figure 8: Vehicle’s driving environment. The first experiment performs the demonstration of two functions: (i) the learning of vehicle’s conjunctive lateral and longitudinal dynamics via Phy- Taylor, and (ii) the safe velocity regulation via self-correcting Phy-Taylor. The vehicle’s driving environment is shown in Figure 8, operating in the AutoRally platform [45]. #### 6.1.1 Vehicle Dynamics Learning We first leverage our available physical knowledge to identify the known model-substructure parameters for the physics-guided NN editing. According to the Newton’s second law for motion along longitudinal and lateral axes [28], we have the following governing equations: $\displaystyle\bar{m}\ddot{\mathrm{p}}={F_{\mathrm{p}f}}+{F_{\mathrm{p}r}}-{F_{\text{aero}}}-{R_{\mathrm{p}f}}-{R_{\mathrm{p}r}},\quad\quad\bar{m}(\ddot{\mathrm{y}}+\dot{\psi}{v_{\mathrm{p}}})={F_{\mathrm{y}f}}+{F_{\mathrm{y}r}},$ (29) where $\mathrm{p}$ is the longitudinal position, $\mathrm{y}$ is the lateral position, $\psi$ is the vehicle yaw angle, $\bar{m}$ is the vehicle mass, $v_{\mathrm{p}}\triangleq\dot{\mathrm{p}}$ is the longitudinal velocity, ${F_{\mathrm{p}f}}$ and ${F_{\mathrm{p}r}}$ denote the longitudinal tire force at the front and rear tires, respectively, ${R_{\mathrm{p}f}}$ and ${R_{\mathrm{p}r}}$ denote the rolling resistance at the front and rear tires, respectively, ${F_{\text{aero}}}$ represents the longitudinal aerodynamic drag force, ${F_{\mathrm{y}f}}$ and ${F_{\mathrm{y}r}}$ are the lateral tire forces of the front and rear wheels, respectively. With the notations of lateral velocity $v_{\mathrm{y}}\triangleq\dot{\mathrm{y}}$ and yaw velocity $v_{\psi}\triangleq\dot{\psi}$, the following state space model is derived from the force balance equation (29) in the literature [28]. 
$\displaystyle\frac{\mathrm{d}}{{\mathrm{d}t}}\left[\begin{array}[]{c}\mathrm{p}\\ \mathrm{y}\\ \psi\\ {v_{\mathrm{p}}}\\ {v_{\mathrm{y}}}\\ {v_{\psi}}\end{array}\right]=\left[{\begin{array}[]{*{20}{c}}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&*&0&0\\ 0&0&0&0&*&*\\ 0&0&0&0&*&*\end{array}}\right]\underbrace{\left[\begin{array}[]{c}\mathrm{p}\\ \mathrm{y}\\ \psi\\ {v_{\mathrm{p}}}\\ {v_{\mathrm{y}}}\\ {v_{\psi}}\end{array}\right]}_{\triangleq{\mathbf{x}}}+\left[\begin{array}[]{c}0\\ 0\\ 0\\ *\\ 0\\ 0\end{array}\right]\theta+\left[\begin{array}[]{c}0\\ 0\\ 0\\ 0\\ *\\ *\end{array}\right]\delta,$ (64)

where ‘*’ denotes an entry that is unknown to us and may be a state-dependent function, a time-dependent function, a mixture of the two, or simply a scalar, and $\theta$ and $\delta$ denote throttle and steering, respectively. Given the practical physical knowledge that the throttle computation depends on the longitudinal velocity and position only, while the dependencies of steering are unknown, the state-space model (64) is updated to

$\displaystyle\dot{{\mathbf{x}}}=\left[{\begin{array}[]{*{20}{c}}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ *&0&0&*&0&0\\ *&*&*&*&*&*\\ *&*&*&*&*&*\end{array}}\right]{\mathbf{x}}.$ (71)

Sampling with period $T$ converts the continuous-time state-space model above to the following discrete-time one:

$\displaystyle{\mathbf{x}}\left({k+1}\right)=\left[{\begin{array}[]{*{20}{c}}1&0&0&T&0&0\\ 0&1&0&0&T&0\\ 0&0&1&0&0&T\\ *&0&0&*&0&0\\ *&*&*&*&*&*\\ *&*&*&*&*&*\end{array}}\right]{\mathbf{x}}\left(k\right).$ (78)

Figure 9: (a): Phy-Taylor${}_{\text{large order}}$ has a PhN with large order $r=4$. (b): Phy-Taylor${}_{\text{small order}}$ has cascading PhNs with relatively small orders satisfying $r_{\left\langle 1\right\rangle}\cdot r_{\left\langle 2\right\rangle}=2\cdot 2=4=r$.

We first consider two Phy-Taylor models, named ‘Phy-Taylor${}_{\text{large order}}$’ and ‘Phy-Taylor${}_{\text{small order}}$’, which can embed the available knowledge (i.e., the known parameters included in the system matrix of model (78)). Their architectures are shown in Figure 9 (a) and (b). The Phy-Taylor${}_{\text{large order}}$ has one PhN with a large augmentation order, while the Phy-Taylor${}_{\text{small order}}$ has two cascade PhN layers with relatively small augmentation orders. Meanwhile, the three orders satisfy condition (22), so the two models have the same Taylor-series monomials. We also consider the corresponding models without NN editing (i.e., without physical-knowledge embedding), which degrades Phy-Taylor to the deep PhN (DPhN). The two DPhN models are named ‘DPhN${}_{\text{large order}}$’ and ‘DPhN${}_{\text{small order}}$’. The final model we consider is the seminal Deep Koopman [23], following the same model configurations therein. The configurations of the five models are summarized in Table 2.
Table 2: Model Configurations

| Model ID | Layer 1 #weights | Layer 1 #bias | Layer 2 #weights | Layer 2 #bias | Layer 3 #weights | Layer 3 #bias | #parameter sum | Prediction error $e$ |
|---|---|---|---|---|---|---|---|---|
| DPhN${}_{\text{large order}}$ | $3135$ | $15$ | $90$ | $6$ | $-$ | $-$ | $3246$ | nan |
| DPhN${}_{\text{small order}}$ | $270$ | $10$ | $520$ | $8$ | $48$ | $6$ | $862$ | $45.57277$ |
| Phy-Taylor${}_{\text{large order}}$ | $2313$ | $12$ | $30$ | $3$ | $-$ | $-$ | $2358$ | $0.047758$ |
| Phy-Taylor${}_{\text{small order}}$ | $167$ | $7$ | $265$ | $5$ | $18$ | $3$ | $465$ | $0.003605$ |

| Model ID | Encoder #weights | Encoder #bias | Decoder #weights | Decoder #bias | Auxiliary networks #weights | Auxiliary networks #bias | #parameter sum | Prediction error $e$ |
|---|---|---|---|---|---|---|---|---|
| Deep Koopman | $2040$ | $176$ | $2040$ | $176$ | $19920$ | $486$ | $24838$ | $0.232236$ |

Figure 10: Training and Testing. (a)–(c): The trajectories of averaged training loss (5 random seeds) of the different models described in Table 2. (i)–(vi): Ground truth and predicted trajectories via the trained models.

The trajectories of training loss are presented in Figure 10 (a)–(c). The (training loss, validation loss) of the trained DPhN${}_{\text{large order}}$, DPhN${}_{\text{small order}}$, Phy-Taylor${}_{\text{large order}}$ and Phy-Taylor${}_{\text{small order}}$ are (0.00389, 0.00375), (0.000344, 0.000351), (0.000222, 0.000238) and (0.000915, 0.000916), respectively. To perform the testing, we consider long-horizon prediction of the system trajectories, given the same initial conditions. The prediction error is measured by $e=\frac{1}{\kappa}\sum\limits_{t=k+1}^{k+\kappa}{\frac{1}{6}}\left\|{\underline{\bf{x}}\left(t\right)-{\bf{x}}\left(t\right)}\right\|\ \text{with}\ \underline{\bf{x}}\left(k\right)={\bf{x}}\left(k\right)$, where $\underline{\bf{x}}\left(t\right)$ is the prediction of the ground truth ${\bf{x}}\left(t\right)$ at time $t$. The prediction errors over the horizon $\kappa=300$ and initial time $k=950$ are summarized in Table 2 (a minimal code sketch of this rollout evaluation is given after the list below). The ground-truth trajectories and the trajectories predicted by Deep Koopman and the Phy-Taylor models are presented in Figure 10 (i)–(vi). Observing Table 2 and Figure 10, we can conclude:

* • Figure 10 (i)–(vi) and Figure 10 (a)–(b): the physics-guided NN editing can significantly accelerate model training, reduce training and validation loss, and improve model accuracy (as seen from the long-horizon trajectory predictions).

* • Figure 10 (i)–(vi) and Figure 10 (c): with physics-guided NN editing, the cascaded PhNs with small augmentation orders can further reduce the training loss and increase model accuracy. This can be attributed to the cascade architecture removing additional spurious correlations or NN links that contradict physical laws.

* • Figure 10 (i)–(vi) and Table 2: compared with the fully-connected DPhN models, i.e., DPhN${}_{\text{large order}}$ and DPhN${}_{\text{small order}}$, the seminal Deep Koopman markedly increases model accuracy in terms of long-horizon trajectory prediction. Meanwhile, compared with Deep Koopman, the Phy-Taylor models (both Phy-Taylor${}_{\text{large order}}$ and Phy-Taylor${}_{\text{small order}}$) notably reduce the number of learned parameters (weights and biases) while further increasing model accuracy.
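As referenced above, the following is a minimal sketch (our illustration, not the paper’s evaluation script) of the long-horizon rollout evaluation behind the prediction errors in Table 2: starting from the ground-truth state at $k=950$, a trained one-step predictor is rolled out for $\kappa=300$ steps and the per-dimension error against the ground-truth trajectory is averaged. The placeholder `model` and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def rollout_error(model, x_true, k=950, kappa=300):
    # `model` is assumed to map a 6-dim state x(t) to a prediction of x(t+1).
    x_hat = x_true[k].copy()          # \underline{x}(k) = x(k)
    errs = []
    for t in range(k + 1, k + kappa + 1):
        x_hat = model(x_hat)          # one-step prediction fed back as input
        errs.append(np.linalg.norm(x_hat - x_true[t]) / x_true.shape[1])
    return float(np.mean(errs))

# Usage with a placeholder "model" (identity map) and synthetic data:
if __name__ == "__main__":
    x_true = np.cumsum(np.random.randn(1300, 6) * 1e-3, axis=0)
    print(rollout_error(lambda x: x, x_true))
```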
#### 6.1.2 Self-Correcting Phy-Taylor This experiment demonstrates the effectiveness of the self-correcting Phy-Taylor in guaranteeing the vehicle’s safe driving in the environment shown in Figure 8. The architecture of the self-correcting Phy-Taylor is presented in Figure 11. Its real-time input vector is ${\mathbf{x}}(k)=[w(k);\ \mathrm{p}(k);\ \mathrm{y}(k);\ \psi(k);\ v_{\mathrm{p}}(k);\ v_{\mathrm{y}}(k);\ v_{\psi}(k)]$, where $w(k)$ is the average of the four wheels’ velocities. The intermediate output $\mathbf{u}(k)=\left[\theta(k);\ \gamma(k)\right]$ denotes the vector of control commands, where $\theta(k)\in[-0.156,\ 0.156]$ is the throttle command and $\gamma(k)\in[-0.6,\ 0.6]$ is the steering command. The considered safety-metric vector in Equation (26) is $\displaystyle\mathbf{s}({\mathbf{x}}(k),\mathbf{u}(k),\tau)=\sum\limits_{t=k+1}^{k+\tau}{\left[{{{\left({{v_{\mathrm{p}}}(t)-\mathrm{v}}\right)}^{2}};\ \ {{\left({{v_{\mathrm{p}}}(t)-r\cdot w(k)}\right)}^{2}}}\right]}\in{\mathbb{R}^{2}},$ (79) Figure 11: Self-correcting Phy-Taylor for safe control of autonomous vehicle. where $\mathrm{v}$ and $r$ denote the reference longitudinal velocity and the wheel radius, respectively. The safety metric (79), together with the driving scenario in Figure 8, indicates that the objective of the safe control command is to simultaneously steer the vehicle’s longitudinal velocity to the reference $\mathrm{v}$ and constrain the slip (i.e., $(v_{\mathrm{p}}(t)-r\cdot w(k))^{2}$) to prevent slipping and sliding. The hyperparameters in the training loss function (27) are set to $\alpha=\beta=1$. The testing results of the trained model are presented in Figure 12 (a)–(d) (blue curves), which in conjunction with the ground truth (orange curves) demonstrate the trustworthiness of the trained model. We next output the learned safety relationships for off-line verification and necessary revision: $\displaystyle[{\bf{s}}({\bf{u}}(k))]_{1}$ $\displaystyle=0.00111007+{\left[\\!\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\\!\right]^{\top}}\left[{\begin{array}[]{*{20}{c}}{-0.04581441}&{0.00100625}\\\ {0.00100625}&{0.00342825}\end{array}}\right]\left[\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\right],$ (80g) $\displaystyle[{\bf{s}}({\bf{u}}(k))]_{2}$ $\displaystyle=0.14376973-{\left[\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\right]^{\top}}\left[{\begin{array}[]{*{20}{c}}{6.06750536}&{0.02701398}\\\ {0.02701398}&{0.00601609}\end{array}}\right]\left[\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\right].$ (80n) The safety metrics (79) of the ground truth are always non-negative. We thus need to verify whether, given the ranges of the control commands (i.e., $\theta(k)\in[-0.156,0.156]$, $\gamma(k)\in[-0.6,0.6]$, $\forall k\in\mathbb{N}$), both $[{\bf{s}}({\bf{u}}(k))]_{1}$ and $[{\bf{s}}({\bf{u}}(k))]_{2}$ in Equation (80) are always non-negative. If a violation occurs, we make revisions to the relationships. We can verify from (80) that the non-negativity constraint does not always hold, e.g., at $\theta(k)=0.156$ and $\gamma(k)=0$. Therefore, the revision of the safety relationships is needed before working on the self-correcting procedure. The regulated safety relationships (revisions are highlighted in red) are presented below.
$\displaystyle[{\bf{s}}({\bf{u}}(k))]_{1}$ $\displaystyle=\underbrace{0.00\textcolor{red}{02}1007}_{{[\mathbf{b}]_{1}}}+{\left[\begin{array}[]{l}\theta\left(k\right)\\\ \gamma\left(k\right)\end{array}\right]^{\top}}\underbrace{\left[{\begin{array}[]{*{20}{c}}{\textcolor{red}{0.0018}1441}&{0.00100625}\\\ {0.00100625}&{0.00342825}\end{array}}\right]}_{{\triangleq{\bf{P}}_{1}}}\left[\begin{array}[]{l}\theta\left(k\right)\\\ \gamma\left(k\right)\end{array}\right],$ (81g) $\displaystyle[{\bf{s}}({\bf{u}}(k))]_{2}$ $\displaystyle=\underbrace{0.14376973}_{{[\mathbf{b}]_{2}}}-{\left[\begin{array}[]{l}\theta\left(k\right)\\\ \gamma\left(k\right)\end{array}\right]^{\top}}\underbrace{\left[{\begin{array}[]{*{20}{c}}{\textcolor{red}{5.90769724}}&{\textcolor{red}{0.01201398}}\\\ {\textcolor{red}{0.01201398}}&{0.00601609}\end{array}}\right]}_{\triangleq{\bf{P}}_{2}}\left[\begin{array}[]{l}\theta\left(k\right)\\\ \gamma\left(k\right)\end{array}\right],$ (81n) which satisfy $\left[{\bf{s}}({\bf{u}}(k))\right]_{1}\geq 0$ and $\left[{\bf{s}}({\bf{u}}(k))\right]_{2}\geq 0$ for any $\theta(k)\in[-0.156,0.156]$ and $\gamma(k)\in[-0.6,0.6]$, as demonstrated by the green curves in Figure 12 (c) and (d).

Input: Real-time control-command vector ${\bf{u}}(k)=[\theta(k);\ \gamma(k)]$, safety bounds $[\mathbf{c}]_{1}$ and $[\mathbf{c}]_{2}$, and learned matrices $\mathbf{P}_{1}$ and $\mathbf{P}_{2}$ and biases $[\mathbf{b}]_{1}$ and $[\mathbf{b}]_{2}$ defined in Equation (81).

1 Update the original safety relationship with the off-line verified and revised one: ${\bf{s}}({\bf{u}}(k))\leftarrow$ (81);
2 if _$[{\bf{s}}({\bf{u}}(k))]_{1}>[\mathbf{c}]_{1}$ or $[{\bf{s}}({\bf{u}}(k))]_{2}>[\mathbf{c}]_{2}$_ then
3 if _$[{\bf{s}}({\bf{u}}(k))]_{i}\geq[\mathbf{c}]_{i}$, $i\in\\{1,2\\}$_ then
4 Update safety metric: $[\widehat{\mathbf{c}}]_{i}\leftarrow[\mathbf{c}]_{i},i\in\\{1,2\\}$;
5 else
6 Maintain safety metric: $[\widehat{\mathbf{c}}]_{i}\leftarrow[{\bf{s}}({\bf{u}}(k))]_{i},i\in\\{1,2\\}$;
7 end if
8 Compute the orthogonal matrix $\mathbf{Q}_{1}$ and the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ according to (88);
9 Compute matrix: $\mathbf{S}\leftarrow\mathbf{Q}_{1}\cdot\mathbf{P}_{2}\cdot\mathbf{Q}_{1}$;
10 Compute $\widehat{\theta}(k)$ and $\widehat{\gamma}(k)$ according to (99);
11 Correct the real-time control commands: $\displaystyle\theta(k)\leftarrow\mathop{\arg\min}\limits_{\left\\{{\widehat{\theta}(k),-\widehat{\theta}(k)}\right\\}}\left\\{{|{\theta(k)-\widehat{\theta}(k)}|,|{\theta(k)+\widehat{\theta}(k)}|}\right\\},\ \ \gamma(k)\leftarrow\mathop{\arg\min}\limits_{\left\\{{\widehat{\gamma}(k),-\widehat{\gamma}(k)}\right\\}}\left\\{{|{\gamma(k)-\widehat{\gamma}(k)}|,|{\gamma(k)+\widehat{\gamma}(k)}|}\right\\}.$
12 else
13 Maintain the real-time control commands: $\theta(k)\leftarrow\theta(k)$ and $\gamma(k)\leftarrow\gamma(k)$.
14 end if

Algorithm 3 Self-Correcting Procedure for Safe Control Commands

Figure 12: (a)–(b): Ground truth and inference of control commands.
(c)–(d): Ground truth, original inference and inference (according to the revised safety relationships) of safety metrics (tracking error ${({{v_{\mathrm{p}}}(t)-\mathrm{v}})}^{2}$ and wheel slip $(v_{\mathrm{p}}(t)-r\cdot w(k))^{2}$). (e): Safety metric: tracking error. (f): Safety metric: wheel slip. (g): Average of wheel velocities. (h): Vehicle’s driving curve (phase plot). With the revised safety relationships (81) at hand, we are ready to develop the self-correcting procedure. Considering the two matrices ${\bf{P}}_{1}$ and ${\bf{P}}_{2}$ defined in Equation (81) are symmetric, we have $\displaystyle{\bf{P}}_{1}$ $\displaystyle=\underbrace{\left[{\begin{array}[]{*{20}{c}}{-0.934}&{-0.3572}\\\ {-0.3572}&{0.934}\end{array}}\right]}_{\triangleq{\bf{Q}}_{1}}\left[{\begin{array}[]{*{20}{c}}{\underbrace{0.0008}_{\triangleq{\lambda}_{1}}}&0\\\ 0&{\underbrace{0.0038}_{\triangleq{\lambda}_{2}}}\end{array}}\right]\underbrace{\left[{\begin{array}[]{*{20}{c}}{-0.934}&{-0.3572}\\\ {-0.3572}&{0.934}\end{array}}\right]}_{={\bf{Q}}^{\top}_{1}},$ (88) $\displaystyle{\bf{P}}_{2}$ $\displaystyle={\bf{Q}}_{1}\cdot{\bf{Q}}_{1}\cdot{\bf{P}}_{2}\cdot{\bf{Q}}_{1}\cdot{\bf{Q}}_{1},$ (89) based on which, we further define: $\displaystyle\widehat{\mathbf{u}}(k)\triangleq{\bf{Q}}_{1}\left[\begin{array}[]{l}\theta\left(k\right)\\\ \gamma\left(k\right)\end{array}\right],\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \bf{S}\triangleq{\bf{Q}}_{1}\cdot{\bf{P}}_{2}\cdot{\bf{Q}}_{1}=\left[{\begin{array}[]{*{20}{c}}{{s_{11}}}&{{s_{12}}}\\\ {{s_{12}}}&{{s_{22}}}\end{array}}\right].$ (94) We let $[\widehat{\mathbf{c}}]_{1}$ and $[\widehat{\mathbf{c}}]_{2}$ denote the two assigned safety metrics. 
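The following small numerical sketch (ours) illustrates the change of coordinates in (88)–(94): diagonalizing $\mathbf{P}_{1}$, rotating the command vector into its eigenbasis, and forming $\mathbf{S}$. The matrices are the revised values reported in (81); the eigenvector signs and the eigenvalues printed here may differ slightly from the rounded values stated in (88), and the test command is a hypothetical value.

```python
import numpy as np

# Revised safety-relationship matrices from (81).
P1 = np.array([[0.00181441, 0.00100625],
               [0.00100625, 0.00342825]])
P2 = np.array([[5.90769724, 0.01201398],
               [0.01201398, 0.00601609]])

lams, Q1 = np.linalg.eigh(P1)      # P1 = Q1 @ diag(lams) @ Q1.T, ascending eigenvalues
lam1, lam2 = lams
S = Q1.T @ P2 @ Q1                 # counterpart of S = Q1 P2 Q1 (the paper's Q1 is symmetric)

u = np.array([0.10, 0.30])         # hypothetical command [theta(k); gamma(k)]
u_hat = Q1.T @ u                   # rotated command, the paper's \hat{u}(k)

# Sanity check: the first quadratic form is diagonalized by the rotation.
lhs = u @ P1 @ u
rhs = lam1 * u_hat[0] ** 2 + lam2 * u_hat[1] ** 2
print(np.isclose(lhs, rhs), np.round(lams, 5))
```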
According to the derivations appearing in Supplementary Information 9.5, the control commands included in the safety formulas $[{\bf{s}}({\bf{u}}(k))]_{1}=[\widehat{\mathbf{c}}]_{1}$ and $\left[{\bf{s}}({\bf{u}}(k))\right]_{2}=[\widehat{\mathbf{c}}]_{2}$ are obtained as $\displaystyle\left[\begin{array}[]{l}\pm\widehat{\theta}(k)\\\ \pm\widehat{\gamma}(k)\end{array}\right]\triangleq{\mathbf{Q}_{1}}\left[\begin{array}[]{l}\pm\sqrt{\frac{{{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}}{{{\lambda_{1}}}}-\frac{{{\lambda_{2}}}}{{{\lambda_{1}}}}\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}}\\\ \pm\sqrt{\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}}\end{array}\right],$ (99) where $\displaystyle{\varpi_{1}}$ $\displaystyle\triangleq{\left({\frac{{{\lambda_{2}}}}{{{\lambda_{1}}}}}\right)^{2}}s_{11}^{2}+s_{22}^{2}+\frac{{\left({4s_{12}^{2}-2{s_{11}}{s_{22}}}\right){\lambda_{2}}}}{{{\lambda_{1}}}},$ (100) $\displaystyle{\varpi_{2}}$ $\displaystyle\triangleq\frac{{2({{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}){s_{11}}{s_{22}}-4({{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}})s_{12}^{2}+2{\lambda_{2}}({{[\mathbf{b}]_{2}}-{[\widehat{\mathbf{c}}]_{2}}}){s_{11}}}}{{{\lambda_{1}}}}$ $\displaystyle\hskip 213.39566pt-\frac{{2({{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}){\lambda_{2}}s_{11}^{2}}}{{\lambda_{1}^{2}}}-2({{[\mathbf{b}]_{2}}-{[\widehat{\mathbf{c}}]_{2}}}){s_{22}},$ (101) $\displaystyle{\varpi_{3}}$ $\displaystyle\triangleq{\left({{[\mathbf{b}]_{2}}-{[\widehat{\mathbf{c}}]_{2}}}\right)^{2}}+{\left({\frac{{{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}}{{{\lambda_{1}}}}}\right)^{2}}s_{11}^{2}-\frac{{2\left({{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}\right)\left({{[\mathbf{b}]_{2}}-{[\widehat{\mathbf{c}}]_{2}}}\right){s_{11}}}}{{{\lambda_{1}}}}.$ (102) The solution (99) has paved the way to delivering the self-correcting procedure: Algorithm 3. The algorithm can be summarized as if the real-time safety metric $[{\bf{s}}({\bf{u}}(k))]_{1}$ or $[{\bf{s}}({\bf{u}}(k))]_{2}$ is larger than the corresponding safety bound $[\mathbf{c}]_{1}$ or $[\mathbf{c}]_{2}$, the real-time safety metric will be updated with the corresponding safety bound (indicated by Line 3 of Algorithm 3). The corrected control commands are then computed according to (99) (see Lines 3-3). The solutions however are not unique. To address the problem, the Line 11 of Algorithm 3 picks up the control commands that are most close to current ones. Under the control of Phy-Taylor, with and without the self-correcting procedure, the system performances are presented in Figure 12 (e)–(h). We can observe from the figure that the self-correcting procedure can significantly enhance the safe assurance of velocity regulation. The demonstration video of implementing the self-correcting Phy-Taylor in AutoRally is available at https://ymao578.github.io/pubs/taylorcontrol2.mp4. ### 6.2 Coupled Pendulums Figure 13: Mechanical analog of three coupled pendulums. The second experiment demonstrates the proposed approach to learn the dynamics of three coupled pendulums, whose mechanical analog is shown in Figure 13. Each pendulum is characterized by its phase angle $\theta_{i}$ and velocity $\dot{\theta}_{i}$. The physical topology is described by $a_{12}=a_{21}=a_{23}=a_{32}=1$ and $a_{13}=a_{31}=0$. We let ${\upsilon_{i}}\triangleq{\dot{\theta}_{i}}$. Referring to Figure 13, the available physical knowledge for NN editing are summarized in Table 3. 
Table 3: Models with Different Degrees of Embedded Physical Knowledge (the first four columns indicate which pieces of physical knowledge are available and embedded)

| Model ID | Physics law: $\upsilon=\dot{\theta}$ | Sampling period: $T$ | Coupling topology: $1\leftrightsquigarrow 2\leftrightsquigarrow 3$ | Force dependency | Training loss | Out-of-distribution prediction error $e$ |
|---|---|---|---|---|---|---|
| Phy-Taylor-1 | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $1.75146\cdot 10^{-6}$ | $0.06486147$ |
| Phy-Taylor-2 | $\surd$ | $\times$ | $\surd$ | $\times$ | $2.42426\cdot 10^{-6}$ | $1.46647886$ |
| FDNN | $\times$ | $\times$ | $\times$ | $\times$ | $6.63564\cdot 10^{-7}$ | $3.65772883$ |

Figure 14: (i): Phy-Taylor architecture. (ii): FDNN architecture. (a)–(f): Ground truth and predicted trajectories.

We consider the dynamics learning via Phy-Taylor and a fully-connected DNN (FDNN), whose architectures are shown in Figure 14 (i) and (ii). The inputs of both networks are identical: $\mathbf{x}(k)$ $=$ [${\theta_{1}}({k})$; ${\theta_{2}}({k})$; ${\theta_{3}}({k})$; ${\upsilon_{1}}({k})$; ${\upsilon_{2}}({k})$; ${\upsilon_{3}}({k})$]. For a fair comparison, we let each layer of the FDNN have the same number of nodes as the Phy-Taylor (after augmentation), which yields the network configuration in Figure 14 (ii). We aim to demonstrate the robustness of Phy-Taylor owing to NN editing. To achieve this, we consider testing data that is out-of-distribution. Specifically, the initial conditions of the phase angles for generating training data are ${\theta_{1}}(0),{\theta_{2}}(0),{\theta_{3}}(0)\in[-1,1]$, while the initial conditions for generating testing data lie outside this range: ${\theta_{1}}(0),{\theta_{2}}(0),{\theta_{3}}(0)\in[-1.5,-1)$. The training loss (mean square error) of the considered models with different degrees of embedded physical knowledge is summarized in Table 3. To assess the robustness of the trained models, we consider long-horizon trajectory prediction, given the same initial input. The prediction error is measured by $e=\frac{1}{6}\sum\limits_{i=1}^{3}{\frac{1}{\tau}\left({\sum\limits_{t=k+1}^{k+\tau}{\left({{{\widehat{\theta}}_{i}}(t)-{\theta_{i}}(t)}\right)^{2}}+\sum\limits_{t=k+1}^{k+\tau}{\left({{{\widehat{\upsilon}}_{i}}(t)-{\upsilon_{i}}(t)}\right)^{2}}}\right)}$, where ${{\widehat{\theta}}_{i}}(t)$ and ${{\widehat{\upsilon}}_{i}}(t)$ denote the predicted angle and angular speed of the $i$-th pendulum at time $t$, respectively. The prediction errors are summarized in Table 3, which, in conjunction with the ground-truth and predicted trajectories in Figure 14 (a)–(f), demonstrate that (i) the NN editing can significantly enhance model robustness, and (ii) embedding more physical knowledge leads to stronger robustness, viewed from the perspective of long-horizon (out-of-distribution) trajectory predictions.

### 6.3 US Illinois Climate

In the final example, we consider a climate system without any available physical knowledge for NN editing, which degrades the Phy-Taylor to a deep PhN. The dataset is the hourly climate normals in Illinois state, including five stations (NOAA: https://www.ncdc.noaa.gov/cdo-web/datasets/NORMAL_HLY/locations/FIPS:17/detail). The period of record is 01/01/2010–12/31/2010. The locations of the considered stations are shown in Figure 15, where $S_{1}$–$S_{4}$ denote stations GHCND:USW00094846, GHCND:USW00094822, GHCND:USW00014842 and GHCND:USW00093822, respectively.
The input of the deep PhN is $\displaystyle\mathbf{x}(k)=\left[{x_{1}}(k);\ {x_{2}}(k);\ {x_{3}}(k);\ {x_{4}}(k);\ {{\dot{x}}_{1}}(k);\ {{\dot{x}}_{2}}(k);\ {{\dot{x}}_{3}}(k);\ {{\dot{x}}_{4}}(k);\ {{\ddot{x}}_{1}}(k);\ {{\ddot{x}}_{2}}(k);\ {{\ddot{x}}_{3}}(k);\ {{\ddot{x}}_{4}}(k)\right],$ Figure 15: Station locations. where ${{\dot{x}}_{i}}\left(k\right)={{x}_{i}}\left(k\right)-{{x}_{i}}\left(k-1\right)$ and ${{\ddot{x}}_{i}}\left(k\right)={\dot{x}_{i}}\left(k\right)-{\dot{x}_{i}}\left(k-1\right)=({{x}_{i}}\left(k\right)-{{x}_{i}}\left(k-1\right))-({{x}_{i}}\left(k-1\right)-{{x}_{i}}\left(k-2\right))$, $i=1,2,3,4$, and $x_{1}$, $x_{2}$, $x_{3}$ and $x_{4}$ denote the dew point mean, the heat index mean, the wind chill mean and the average wind speed, respectively. The training loss function is $\displaystyle\mathcal{L}={\left\|{{[\widehat{\mathbf{y}}]_{1}}-{x_{2}}\left({k+1}\right)}\right\|}+{\left\|{{[\widehat{\mathbf{y}}]_{2}}-{x_{4}}\left({k+1}\right)}\right\|},$ which indicates that the network models predict the heat index mean and the average wind speed for the next hour.

#### 6.3.1 Phy-Augmentation

Figure 16: Architectures of the deep PhN and the classical DNN.

We use the data of station $S_{1}$ to study the influence of the augmentation order $r$ from the perspective of training loss. The considered network model is shown in Figure 16 (a). The trajectories of training loss under two different augmentation orders are presented in Figure 17 (a). It is straightforward to observe from the figure that the input augmentation significantly speeds up the training process and reduces the training loss. An intuitive explanation is that the input augmentation enlarges the number of nodes. To validate this statement, we perform comparisons with the classical DNN, whose structure is given in Figure 16 (b). To guarantee that the comparisons are carried out in fair settings, we let the input dimension (i.e., $n$) of the final layer of the DNN in Figure 16 (b) equal the output dimension of the input augmentation of the PhN in Figure 16 (a). According to (112), we let $n={\rm{len}}(\mathfrak{m}(\mathbf{x},r=5))=253$. The comparison results are presented in Figure 17 (b), which indicates that, given the same number of nodes, the deep PhN still converges much faster and achieves a smaller mean training loss. The phenomenon can be explained by the fact that Phy-Augmentation captures physical features well.

Figure 17: Deep PhN vs. classical DNN: log of training loss (5 random seeds).

#### 6.3.2 Large Noise

To test the robustness of the deep PhN faced with large noise, we consider the network structure in Figure 16 (c). To introduce noisy datasets for training, we use the inputs $\mathbf{x}(k)$ from stations $S_{2}$–$S_{4}$, while the output data (the targets for $[\widehat{\mathbf{y}}]_{1}$ and $[\widehat{\mathbf{y}}]_{2}$) are from station $S_{1}$. This setting means we use station $S_{1}$’s neighbors’ inputs to predict its heat index mean and average wind speed. For the suppressor of the deep PhN in (11), we let $\beta=90$ and $\alpha=-1$. The trajectories of training loss in Figure 18, together with the station locations in Figure 15, show that the training loss decreases as the distance from station $S_{1}$ increases. Considering that the noise of the training (input) datasets can increase as the distance from the (ground-truth) station $S_{1}$ increases, the result demonstrates the superiority of the suppressor in mitigating the influence of large noise in the augmented input features.
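A sketch (ours, with hypothetical array names and synthetic data) of how the inputs for this noisy-input experiment could be assembled: for each neighbor station $S_{2}$–$S_{4}$, the raw hourly series are stacked with their first and second differences exactly as in the definition of $\mathbf{x}(k)$ above, and the next-hour heat index mean and average wind speed of $S_{1}$ serve as targets.

```python
import numpy as np

def station_features(raw):
    """raw: array of shape (T, 4) holding [dew point, heat index, wind chill, wind speed]."""
    d1 = raw[1:] - raw[:-1]            # first differences  \dot{x}_i(k)
    d2 = d1[1:] - d1[:-1]              # second differences \ddot{x}_i(k)
    # Align raw values and both difference blocks at time index k >= 2.
    return np.concatenate([raw[2:], d1[1:], d2], axis=1)   # shape (T-2, 12)

# Hypothetical hourly records for the neighbor stations S2-S4 and the target S1.
T = 500
s1, s2, s3, s4 = (np.random.randn(T, 4) for _ in range(4))

X_all = np.concatenate([station_features(s) for s in (s2, s3, s4)], axis=1)
X = X_all[:-1]                                  # inputs x(k) for k = 2 .. T-2
y = np.stack([s1[3:, 1], s1[3:, 3]], axis=1)    # targets x2(k+1), x4(k+1) from S1
print(X.shape, y.shape)                         # (497, 36) and (497, 2)
```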
Figure 18: Training loss with and without the suppressor function (5 random seeds).

## 7 Discussion

In this article, we have proposed a physics-model-based deep neural network framework, called Phy-Taylor, that addresses a challenge purely data-driven DNNs face: the violation of inferred relations with physics laws. The Phy-Taylor framework introduces two contributions: the deep physics-compatible neural networks (PhNs) and a physics-guided neural network (NN) editing mechanism, aimed at ensuring strict compliance with prior physical knowledge. The PhN aims to directly capture nonlinearities inspired by physical quantities, such as kinetic energy and aerodynamic drag force. The NN editing mechanism further modifies network links and activation functions consistently with physical knowledge to avoid spurious correlations. As an extension, we have also proposed a self-correcting Phy-Taylor framework that introduces a core capability of automatic output correction when a safety violation occurs. The self-correcting Phy-Taylor addresses the dilemma of prediction horizon versus computation time that nonlinear model-predictive control and control barrier functions face in safety- and time-critical systems.

The experiments show that, through physics-guided NN editing and direct capture of hard-to-learn nonlinearities, the Phy-Taylor exhibits a considerable reduction in learning parameters, a remarkably accelerated training process, and greatly enhanced model robustness and accuracy. This suggests that, building on the Phy-Taylor framework, DNNs that are concurrently energy-efficient, robust, and highly accurate are promising for energy-constrained physical engineering systems (e.g., internet-of-things devices and battery-powered drones), which constitutes a part of our future research. The current Phy-Taylor, however, suffers from the curse of dimensionality, so it can hardly be applied to high-dimensional data such as images and text. Overcoming the dimensionality curse will thus make Phy-Taylor scalable. The Tucker decomposition is a potential approach to this problem, since it can decompose the higher-order derivatives in Taylor expansions parameterized by DNNs into small core tensors and a set of factor matrices.

## 8 Methods

### 8.1 Datasets

All the datasets used in the experiments are publicly available at https://waynestateprod-my.sharepoint.com/:f:/g/personal/hm9062_wayne_edu/EoXO99WN8zJEidtGRj-dISIBQPBWCnL_Ji6QOZ1uJACjug. The datasets in Experiment 6.1 are collected from the AutoRally platform [45]. The vehicle therein is set to run 6 times, each run lasting 30 minutes, which generates around 1100 samples of elliptical trajectories. The datasets in Experiment 6.2 are generated by solving the differential equations of the coupled pendulums in MATLAB using the ode45 solver, given 1000 initial conditions. The climate datasets in Experiment 6.3 are from the Climate Data Online service of the National Oceanic and Atmospheric Administration (NOAA: https://www.ncdc.noaa.gov/cdo-web/datasets/NORMAL_HLY/locations/FIPS:17/detail), whose period of record is 01/01/2010–12/31/2010.
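For completeness, the following is a sketch of how coupled-pendulum trajectories analogous to the MATLAB/ode45 data of Experiment 6.2 could be generated in Python. The spring-coupled dynamics and the constants $g$, $l$, $k_{c}$ below are stand-in assumptions for illustration — the exact equations and parameters live in the released data-generation code — but the coupling topology ($a_{12}=a_{23}=1$, $a_{13}=0$) and the training range of initial angles follow Section 6.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, l, kc = 9.81, 1.0, 0.5                              # assumed constants
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # coupling topology a_ij

def rhs(t, s):
    theta, vel = s[:3], s[3:]
    coupling = A @ theta - A.sum(axis=1) * theta        # sum_j a_ij (theta_j - theta_i)
    acc = -(g / l) * np.sin(theta) + kc * coupling
    return np.concatenate([vel, acc])

def make_trajectory(rng, T=10.0, dt=0.01):
    theta0 = rng.uniform(-1.0, 1.0, size=3)             # training range of Section 6.2
    s0 = np.concatenate([theta0, np.zeros(3)])
    t_eval = np.arange(0.0, T, dt)
    sol = solve_ivp(rhs, (0.0, T), s0, t_eval=t_eval, rtol=1e-8, atol=1e-8)
    return sol.y.T                                       # shape (len(t_eval), 6)

rng = np.random.default_rng(0)
dataset = [make_trajectory(rng) for _ in range(10)]      # e.g., 1000 initial conditions in the paper
print(dataset[0].shape)
```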
The datasets in Experiments 6.1 and 6.2 are evenly distributed to 6 files: two files are preserved for testing and validating, the remaining 4 files are used for training. For the Experiment 6.3, the data over the final 100 hours is preserved for testing, the data over another 864 hours are used for validating, the remaining data are used for training. ### 8.2 Code For the code, we use the Python API for the TensorFlow framework [46] and the Adam optimizer [47] for training. The Python version is 2.7.12. The TensorFlow version is 1.4.0. Our source code is publicly available at GitHub: https://github.com/ymao578/Phy-Taylor. The source code of implementing self- correcting Phy-Taylor in the AutoRally platform is publicly available at https://waynestateprod- my.sharepoint.com/:f:/g/personal/hm9062_wayne_edu/EnDmRmmbKlJFmlQfC3qLMHwBf4KG9FRGVVo3FFVk1TrZeg?e=TQZSgr. ### 8.3 Training We set the batch-size to 200 for the Experiments 6.1 and 6.2, while the batch- size of Experiment 6.3 is 150. The learning rates of Experiments 6.1–6.3 are set to 0.0005, 0.00005 and 0.00005, respectively. In all the experiments, each weight matrix is initialized randomly from a (truncated) normal distribution with zero mean and standard deviation, discarding and re-drawing any samples that are more than two standard deviations from the mean. We initialize each bias according to the normal distribution with zero mean and standard deviation. ## References * [1] Zachary, A. & Helen, T. AI Accidents: An emerging threat. _Center for Security and Emerging Technology_ DOI: https://doi.org/10.51593/20200072 (2021). * [2] Wang, R. & Yu, R. Physics-guided deep learning for dynamical systems: A survey. _arXiv preprint_ https://arxiv.org/abs/2107.01272. * [3] Willard, J., Jia, X., Xu, S., Steinbach, M. & Kumar, V. Integrating scientific knowledge with machine learning for engineering and environmental systems. _ACM Computing Surveys_ (2021). * [4] Jia, X. _et al._ Physics-guided machine learning for scientific discovery: An application in simulating lake temperature profiles. _ACM/IMS Transactions on Data Science_ 2, 1–26 (2021). * [5] Jia, X. _et al._ Physics-guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles. In _Proceedings of the 2019 SIAM International Conference on Data Mining_ , 558–566 (2019). * [6] Wang, S. & Perdikaris, P. Deep learning of free boundary and stefan problems. _Journal of Computational Physics_ 428, 109914 (2021). * [7] Lu, L. _et al._ Physics-informed neural networks with hard constraints for inverse design. _SIAM Journal on Scientific Computing_ 43, B1105–B1132 (2021). * [8] Chen, Y. _et al._ Theory-guided hard constraint projection (HCP): A knowledge-based data-driven scientific machine learning method. _Journal of Computational Physics_ 445, 110624 (2021). * [9] Wang, N., Zhang, D., Chang, H. & Li, H. Deep learning of subsurface flow via theory-guided neural network. _Journal of Hydrology_ 584, 124700 (2020). * [10] Xu, K. & Darve, E. Physics constrained learning for data-driven inverse modeling from sparse observations. _Journal of Computational Physics_ 110938 (2022). * [11] Karniadakis, G. E. _et al._ Physics-informed machine learning. _Nature Reviews Physics_ 3, 422–440 (2021). * [12] Wang, R., Kashinath, K., Mustafa, M., Albert, A. & Yu, R. Towards physics-informed deep learning for turbulent flow prediction. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 1457–1466 (2020). 
* [13] Daw, A., Karpatne, A., Watkins, W., Read, J. & Kumar, V. Physics-guided neural networks (PGNN): An application in lake temperature modeling. _arXiv preprint_ https://arxiv.org/abs/1710.11431. * [14] Cranmer, M. _et al._ Lagrangian neural networks. In _ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations_ (2020). * [15] Finzi, M., Wang, K. A. & Wilson, A. G. Simplifying Hamiltonian and Lagrangian neural networks via explicit constraints. _Advances in Neural Information Processing Systems_ 33, 13880–13889 (2020). * [16] Greydanus, S., Dzamba, M. & Yosinski, J. Hamiltonian neural networks. _Advances in Neural Information Processing Systems_ 32 (2019). * [17] Muralidhar, N. _et al._ Phynet: Physics guided neural networks for particle drag force prediction in assembly. In _Proceedings of the 2020 SIAM International Conference on Data Mining_ , 559–567 (2020). * [18] Masci, J., Boscaini, D., Bronstein, M. & Vandergheynst, P. Geodesic convolutional neural networks on riemannian manifolds. In _Proceedings of the IEEE international conference on computer vision workshops_ , 37–45 (2015). * [19] Monti, F. _et al._ Geometric deep learning on graphs and manifolds using mixture model cnns. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 5115–5124 (2017). * [20] Horie, M., Morita, N., Hishinuma, T., Ihara, Y. & Mitsume, N. Isometric transformation invariant and equivariant graph convolutional networks. _arXiv preprint_ https://arxiv.org/abs/2005.06316. * [21] Wang, R. Incorporating symmetry into deep dynamics models for improved generalization. In _International Conference on Learning Representations_ (2021). * [22] Li, Y., He, H., Wu, J., Katabi, D. & Torralba, A. Learning compositional Koopman operators for model-based control. _arXiv preprint_ https://arxiv.org/abs/1910.08264. * [23] Lusch, B., Kutz, J. N. & Brunton, S. L. Deep learning for universal linear embeddings of nonlinear dynamics. _Nature communications_ 9, 1–10 (2018). * [24] Li, Z. _et al._ Fourier neural operator for parametric partial differential equations. _arXiv:2010.08895_ (2020). * [25] Königsberger, K. _Analysis 2_ (Springer-Verlag, 2013). * [26] Yang, Y.-Y. & Chaudhuri, K. Understanding rare spurious correlations in neural networks. _arXiv preprint_ https://arxiv.org/abs/2202.05189. * [27] Sagawa, S., Raghunathan, A., Koh, P. W. & Liang, P. An investigation of why overparameterization exacerbates spurious correlations. In _International Conference on Machine Learning_ , 8346–8356 (2020). * [28] Rajamani, R. _Vehicle dynamics and control_ (Springer Science & Business Media, 2011). * [29] Kani, J. N. & Elsheikh, A. H. DR-RNN: A deep residual recurrent neural network for model reduction. _arXiv preprint_ https://arxiv.org/abs/1709.00939. * [30] Belbute-Peres, F. D. A., Economon, T. & Kolter, Z. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. In _International Conference on Machine Learning_ , 2402–2411 (2020). * [31] Wu, D. _et al._ DeepGLEAM: A hybrid mechanistic and deep learning model for COVID-19 forecasting. _arXiv preprint_ https://arxiv.org/abs/2102.06684 (2021). * [32] Guen, V. L. & Thome, N. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 11474–11484 (2020). * [33] Garcia Satorras, V., Akata, Z. & Welling, M. Combining generative and discriminative models for hybrid inference. 
_Advances in Neural Information Processing Systems_ 32 (2019). * [34] Long, Y., She, X. & Mukhopadhyay, S. HybridNet: Integrating model-based and data-driven learning to predict evolution of dynamical systems. In _Conference on Robot Learning_ , 551–560 (2018). * [35] Yin, Y. _et al._ Augmenting physical models with deep networks for complex dynamics forecasting. _Journal of Statistical Mechanics: Theory and Experiment_ 2021, 124012 (2021). * [36] AI incident database. _AID_ DOI: https://incidentdatabase.ai/summaries/incidents. * [37] TOM, K. & STEFANIE, D. Felony charges are 1st in a fatal crash involving autopilot. _AP News_ DOI: https://apnews.com/article/tesla-autopilot-fatal-crash-charges-91b4a0341e07244f3f03051b5c2462ae (2022). * [38] Hawkins, A. J. Tesla didn’t fix an autopilot problem for three years, and now another person is dead. _The Verge_ DOI: https://www.theverge.com/2019/5/17/18629214/tesla-autopilot-crash-death-josh-brown-jeremy-banner (2019). * [39] Ames, A. D. _et al._ Control barrier functions: Theory and applications. In _2019 18th European Control Conference_ , 3420–3431 (2019). * [40] Singletary, A., Swann, A., Chen, Y. & Ames, A. Onboard safety guarantees for racing drones: High-speed geofencing with control barrier functions. _IEEE Robotics and Automation Letters_ (2022). * [41] Williams, G., Drews, P., Goldfain, B., Rehg, J. M. & Theodorou, E. A. Information-theoretic model predictive control: Theory and applications to autonomous driving. _IEEE Transactions on Robotics_ 34, 1603–1622 (2018). * [42] Falcone, P., Borrelli, F., Asgari, J., Tseng, H. E. & Hrovat, D. Predictive active steering control for autonomous vehicle systems. _IEEE Transactions on Control Systems Technology_ 15, 566–580 (2007). * [43] Zeng, J., Zhang, B. & Sreenath, K. Safety-critical model predictive control with discrete-time control barrier function. In _2021 American Control Conference_ , 3882–3889 (2021). * [44] Doyle, J. C., Francis, B. A. & Tannenbaum, A. R. _Feedback control theory_ (Courier Corporation, 2013). * [45] Goldfain, B. _et al._ AutoRally: An open platform for aggressive autonomous driving. _IEEE Control Systems Magazine_ 39, 26–55 (2019). * [46] Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. _arXiv preprint_ https://arxiv.org/abs/1412.6980. * [47] Abadi, M. _et al._ Tensorflow: Large-scale machine learning on heterogeneous distributed systems. _arXiv preprint_ https://arxiv.org/abs/1603.04467. * [48] Stanley, R. P. What is enumerative combinatorics? In _Enumerative combinatorics_ , 1–63 (Springer, 1986). ## 9 Supplementary Information ### 9.1 Auxiliary Theorems ###### Theorem 4. The DNR magnitude of high-order monomial $[\bar{\mathbf{x}}]_{i}^{p}[\bar{\mathbf{x}}]_{j}^{q}$, $p$, $q\in\mathbb{N}$, is strictly increasing with respect to $|\mathrm{DNR}_{i}|$ and $|\mathrm{DNR}_{j}|$, if $\displaystyle\mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in(-\infty,-1]\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \text{or}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in[-\frac{1}{2},0)\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \text{or}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in(0,\infty).$ (103) ###### Proof. 
In view of definition 2, the true data can be equivalently expressed as $[{\mathbf{h}}]_{i}=\mathrm{DNR}_{i}\cdot[{\mathbf{w}}]_{i}$, according to which we have $[\bar{\mathbf{x}}]_{i}=(1+\mathrm{DNR}_{i}){[{\mathbf{w}}]_{i}}$ such that $\displaystyle[\bar{\mathbf{x}}]_{i}^{p}[\bar{\mathbf{x}}]_{j}^{q}={({1+\mathrm{DNR}_{i}})^{p}}{({1+\mathrm{DNR}_{j}})^{q}}[\mathbf{w}]_{i}^{p}[\mathbf{w}]_{j}^{q},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ [\mathbf{h}]_{i}^{p}[\mathbf{h}]_{j}^{q}=\mathrm{DNR}_{i}^{p}\cdot\mathrm{DNR}_{j}^{q}\cdot[\mathbf{w}]_{i}^{p}[\mathbf{w}]_{j}^{q}.$ (104) We note the true data of high-order monomial $[\bar{\mathbf{x}}]_{i}^{p}[\bar{\mathbf{x}}]_{j}^{q}$ is $[{\mathbf{h}}]_{i}^{p}[{\mathbf{h}}]_{j}^{q}$, the corresponding noise can thus be derived from the formula (104) as $\displaystyle[\bar{\mathbf{x}}]_{i}^{p}[\bar{\mathbf{x}}]_{j}^{q}-[\mathbf{h}]_{i}^{p}[\mathbf{h}]_{j}^{q}=\left[{({1+\mathrm{DNR}_{i}})^{p}}{({1+\mathrm{DNR}_{j}})^{q}}-\mathrm{DNR}_{i}^{p}\cdot\mathrm{DNR}_{j}^{q}\right][\mathbf{w}]_{i}^{p}[\mathbf{w}]_{j}^{q},$ which, in conjunction with the second formula in Equation (104), lead to $\displaystyle\left|\mathrm{DNR}^{p+q}_{ij}\right|\triangleq\left|\frac{{[\mathbf{h}]_{i}^{p}[\mathbf{h}]_{j}^{q}}}{{[\bar{\mathbf{x}}]_{i}^{p}[\bar{\mathbf{x}}]_{j}^{q}-[\mathbf{h}]_{i}^{p}[\mathbf{h}]_{j}^{q}}}\right|=\left|\frac{1}{{{{\left({1+\frac{1}{{\mathrm{DNR}_{i}}}}\right)}^{p}}{{\left({1+\frac{1}{{\mathrm{DNR}_{j}}}}\right)}^{q}}-1}}\right|,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ p,\leavevmode\nobreak\ q\in\mathbb{N}.$ (105) We can straightforwardly verify from formula (105) that if $\mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in(0,\infty)$, we have $\displaystyle\left|\mathrm{DNR}^{p+q}_{ij}\right|=\frac{1}{{{{\left({1+\frac{1}{{|\mathrm{DNR}_{i}|}}}\right)}^{p}}{{\left({1+\frac{1}{{|\mathrm{DNR}_{j}|}}}\right)}^{q}}-1}},$ (106) which implies $|\mathrm{DNR}^{p+q}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_{i}|$ and $|\mathrm{DNR}_{j}|$ under this condition. 
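As an added numerical illustration of the monotonicity in (106) (an inserted example, not part of the original proof), take $p=q=1$: $\displaystyle\left|\mathrm{DNR}^{2}_{ij}\right|\Big{|}_{\mathrm{DNR}_{i}=\mathrm{DNR}_{j}=2}=\frac{1}{\left({1+\frac{1}{2}}\right)\left({1+\frac{1}{2}}\right)-1}=0.8<\frac{1}{\left({1+\frac{1}{4}}\right)\left({1+\frac{1}{4}}\right)-1}\approx 1.78=\left|\mathrm{DNR}^{2}_{ij}\right|\Big{|}_{\mathrm{DNR}_{i}=\mathrm{DNR}_{j}=4},$ so enlarging both $|\mathrm{DNR}_{i}|$ and $|\mathrm{DNR}_{j}|$ indeed enlarges the DNR of the product monomial in this regime.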
The condition $\mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in(-\infty,-1]$ means that $\displaystyle\frac{{1}}{{\mathrm{DNR}_{j}}}\in[-1,0),\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \frac{{1}}{{\mathrm{DNR}_{j}}}\in[-1,0),\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ 1+\frac{{1}}{{\mathrm{DNR}_{i}}}\in[0,1),\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ 1+\frac{{1}}{{\mathrm{DNR}_{j}}}\in[0,1),$ (107) considering which, the formula (105) equivalently transforms to $\displaystyle\left|\mathrm{DNR}^{p+q}_{ij}\right|=\frac{1}{{{{1-\left({1+\frac{1}{{\mathrm{DNR}_{i}}}}\right)}^{p}}{{\left({1+\frac{1}{{\mathrm{DNR}_{j}}}}\right)}^{q}}}}=\frac{1}{{{{1-\left({1-\frac{1}{{|\mathrm{DNR}_{i}|}}}\right)}^{p}}{{\left({1-\frac{1}{{|\mathrm{DNR}_{j}|}}}\right)}^{q}}}},$ (108) which reveals that $|\mathrm{DNR}^{p+q}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_{i}|$ and $|\mathrm{DNR}_{j}|$. The condition $\mathrm{DNR}_{i},\leavevmode\nobreak\ \mathrm{DNR}_{j}\in[-\frac{1}{2},0)$ means $\displaystyle\frac{{1}}{{\mathrm{DNR}_{j}}}\in(-\infty,-2],\leavevmode\nobreak\ \leavevmode\nobreak\ \frac{{1}}{{\mathrm{DNR}_{j}}}\in(-\infty,-2],\leavevmode\nobreak\ \leavevmode\nobreak\ 1+\frac{{1}}{{\mathrm{DNR}_{i}}}\in(-\infty,-1],\leavevmode\nobreak\ \leavevmode\nobreak\ 1+\frac{{1}}{{\mathrm{DNR}_{j}}}\in(-\infty,-1],$ (109) in light of which, the formula (105) can equivalently express as * • if $p+q$ is even, $\displaystyle\left|\mathrm{DNR}^{m+n}_{ij}\right|=\frac{1}{{{{\left|{\frac{1}{{|\mathrm{DNR}_{i}|}}-1}\right|}^{p}}{{\left|{\frac{1}{{|\mathrm{DNR}_{j}|}}-1}\right|}^{q}}}-1},$ (110) * • if $p+q$ is odd, $\displaystyle\left|\mathrm{DNR}^{p+q}_{ij}\right|=\frac{1}{{{{1+\left|{\frac{1}{{|\mathrm{DNR}_{i}|}}-1}\right|}^{p}}{{\left|{\frac{1}{{|\mathrm{DNR}_{j}|}}-1}\right|}^{q}}}}.$ (111) We note both the functions (110) and (111) imply $|\mathrm{DNR}^{m+n}_{ij}|$ is strictly increasing with respect to $|\mathrm{DNR}_{i}|$ and $|\mathrm{DNR}_{j}|$, which completes the proof. ∎ ###### Theorem 5. [48] For any pair of positive integers $n$ and $k$, the number of $n$-tuples of non-negative integers whose sum is $r$ is equal to the number of multisets of cardinality $n-1$ taken from a set of size $n+r-1$, i.e., $\left(\begin{array}[]{l}n+r-1\\\ n-1\end{array}\right)=\frac{{\left({n+r-1}\right)!}}{{\left({n-1}\right)!r!}}$. ###### Theorem 6. The space complexity of Phy-Augmentation, i.e., the dimension of terminal output generated by Algorithm 1, in PhN layer (20) is $\displaystyle\mathrm{len}(\mathfrak{m}({\mathbf{x}},r))=\sum\limits_{s=1}^{r}{\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)!s!}}}+1.$ (112) ###### Proof. We denote the output from Line 1 of Algorithm 1 by $\overline{\mathbf{x}}$. Let us first consider the case $r=1$. In this case, the Algorithm 1 skips the Lines 1–1 and arrives at $\mathfrak{m}(\mathbf{x},r)=[1;\leavevmode\nobreak\ \overline{\mathbf{x}}]$ in Line 1. Noticing from Line 1 that $\overline{\mathbf{x}}\in\mathbb{R}^{n}$, we obtain $\text{len}(\mathfrak{m}(\mathbf{x},r))=n+1$, which verifies the correctness of (112) with $r=1$. We next consider the case $r\geq 2$. 
Given the input dimension $n$ and an order $s\in\\{2,\ldots,r-1,r\\}$, the Lines 1–1 of Algorithm 1 are to generate all the non-missing and non-redundant monomials included in ${\left({\sum\limits_{i=1}^{n}{{[\overline{\mathbf{x}}]_{i}}}}\right)^{s}}$. The problem of the number of generated monomials via Algorithm 1 is equivalent to the problem that for any pair of positive integers $n$ and $s$, the number of $n$-tuples of non-negative integers (whose sum is $s$) is equal to the number of multisets of cardinality $n-1$ taken from a set of size $n+s-1$. Additionally, we note that the vector generated in Line 1 of Algorithm 1, denoted by $\widetilde{\mathfrak{m}}(\mathbf{x},s)$, stacks all the generated monomials. According to the auxiliary Theorem 5 in Supplementary Information 9.1, we then have $\text{len}(\widetilde{\mathfrak{m}}(\mathbf{x},s))=\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)!s!}}$. Finally, we note that the Lines 1, and 1 of Algorithm 1 imply that the generated vector $\mathfrak{m}(\mathbf{x},r)$ stack the $1$ with $\widetilde{\mathfrak{m}}(\mathbf{x},s)$ over $s\in\\{1,\ldots,r-1,r\\}$, respectively. We thus can obtain (112). ∎ ### 9.2 Proof of Theorem 1 We note the $[\widetilde{\mathbf{h}}]_{i}$ given in Equation (13) can be written as $\displaystyle[\widetilde{\mathbf{h}}]_{i}=\begin{cases}[\mathbf{h}]_{i},&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}<0\\\ [\mathbf{h}]_{i},&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}<0\\\ [\mathbf{h}]_{i}\cdot\kappa_{i}+\rho_{i},&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}>0\end{cases},$ (113) subtracting $[{\mathbf{h}}]_{i}$ from which yields $\displaystyle\left|[\widetilde{\mathbf{h}}]_{i}-[{\mathbf{h}}]_{i}\right|$ $\displaystyle=\begin{cases}0,&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}<0\\\ 0,&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}<0\\\ \left|{[\mathbf{h}]_{i}\cdot\left({\kappa_{i}-1}\right)+\rho_{i}}\right|,&\text{if}\leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ [\mathbf{w}]_{i}>0\\\ \end{cases}.$ (114) Referring to the output $\chi([\bar{\bf{x}}]_{i})$ of suppressor in Equation (10), we can conclude the $[\widetilde{\mathbf{h}}]_{i}$ given in Equation (113) is the true data of suppressor output. Subtracting the $[\widetilde{\mathbf{h}}]_{i}$ from the $\chi([\bar{\bf{x}}]_{i})$ results in the noise $[\widetilde{\mathbf{w}}]_{i}$ of suppressor output, given in Equation (13). To prove the property (12), we consider the following three cases: * • If $[\mathbf{h}]_{i}+[\mathbf{w}]_{i}<0$, we obtain from the the first item of $[\widetilde{\mathbf{h}}]_{i}$ in Equation (113) and $[\widetilde{\mathbf{w}}]_{i}$ in Equation (13) that $\mathrm{DNR}_{i}=\frac{[\widetilde{\mathbf{h}}]_{i}}{[\widetilde{\mathbf{w}}]_{i}}=-1$. * • If $[\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0$ and $[\mathbf{w}]_{i}<0$, we have $[\mathbf{h}]_{i}>0$ and $[\mathbf{h}]_{i}>-[\mathbf{w}]_{i}|>0$. We then obtain from the second item of $[\widetilde{\mathbf{h}}]_{i}$ in Equation (113) and $[\widetilde{\mathbf{w}}]_{i}$ in Equation (13) that $\mathrm{DNR}_{i}=\frac{[\widetilde{\mathbf{h}}]_{i}}{[\widetilde{\mathbf{w}}]_{i}}=\frac{[{\mathbf{h}}]_{i}}{[{\mathbf{w}}]_{i}}<-1$. 
* • If $[\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0$ and $[\mathbf{w}]_{i}>0$, we obtain from the third item of $[\widetilde{\mathbf{h}}]_{i}$ in Equation (113) and $[\widetilde{\mathbf{w}}]_{i}$ in Equation (13) that $\mathrm{DNR}_{i}=\frac{[\widetilde{\mathbf{h}}]_{i}}{[\widetilde{\mathbf{w}}]_{i}}=\frac{{[\mathbf{h}]_{i}\cdot\kappa_{i}+\rho_{i}}}{{[\mathbf{w}]_{i}\cdot\kappa_{i}}}$. Recalling $[\widetilde{\mathbf{w}}]_{i}>0$, if $\kappa_{i}>0$, the $\frac{{[\mathbf{h}]_{i}\cdot\kappa_{i}+\rho_{i}}}{{[\mathbf{w}]_{i}\cdot\kappa_{i}}}\leq-1$ is equivalent to $\displaystyle\rho_{i}\leq-([\mathbf{h}]_{i}+[\mathbf{w}]_{i})\kappa_{i}<0,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \text{with}\leavevmode\nobreak\ \leavevmode\nobreak\ \kappa_{i}>0,\leavevmode\nobreak\ \leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0.$ (115) If $\kappa_{i}<0$, the $\frac{{[\mathbf{h}]_{i}\cdot\kappa_{i}+\rho_{i}}}{{[\mathbf{w}]_{i}\cdot\kappa_{i}}}\leq-1$ is equivalent to $\displaystyle\rho_{i}\geq-\left({[\mathbf{w}]_{i}+[\mathbf{h}]_{i}}\right)\kappa_{i}\geq 0,\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \text{with}\leavevmode\nobreak\ \kappa_{i}<0,\leavevmode\nobreak\ \leavevmode\nobreak\ [\mathbf{h}]_{i}+[\mathbf{w}]_{i}\geq 0.$ (116) We finally conclude from Equations (115) and (116) that $\mathrm{DNR}_{i}=\frac{[\widetilde{\mathbf{h}}]_{i}}{[\widetilde{\mathbf{w}}]_{i}}\in(-\infty,-1]$ under the condition (11). Then, according to Theorem 4, we arrive in the property (12), which completes the proof. ### 9.3 Proof of Theorem 2 Let us first consider the first PhN layer, i.e., the case $t=1$. The Line 2 of Algorithm 2 means that the knowledge matrix $\mathbf{K}_{\left\langle 1\right\rangle}$ includes all the known model-substructure parameters, whose corresponding entries in the masking matrix $\mathbf{M}_{\left\langle 1\right\rangle}$ (generated in the Line 2 of Algorithm 2) are frozen to be zeros. Consequently, both $\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{A}$ and $\mathbf{U}_{\left\langle 1\right\rangle}=\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{W}_{\left\langle 1\right\rangle}$ excludes all the known model-substructure parameters (included in $\mathbf{K}_{\left\langle 1\right\rangle}$). With the consideration of Definition 1, we thus conclude that $\mathbf{M}_{\left\langle 1\right\rangle}\odot\mathbf{A}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+\mathbf{f}(\mathbf{x})$ in the ground-truth model (16) and $\mathbf{a}_{\left\langle 1\right\rangle}\odot\text{act}\left(\mathbf{U}_{\left\langle 1\right\rangle}\cdot{\mathfrak{m}}\left(\mathbf{x},r_{\left\langle 1\right\rangle}\right)\right)$ in the output computation (17) are independent of the term $\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})$. Moreover, the activation-masking vector (generated in Line 2 of Algorithm 2) indicates that the activation function corresponding to the output’s $i$-th entry is inactive, if the all the entries in the $i$-th row of masking matrix are zeros (implying all the entries in the $i$-th row of weight matrix are known model-substructure parameters). 
Finally, we arrive in the conclusion that the input/output (i.e., $\mathbf{x}$/$\mathbf{y}_{\left\langle 1\right\rangle}$) of the first PhN layer strictly complies with the available physical knowledge pertaining to the ground truth (16), i.e., if the $[\mathbf{A}]_{i,j}$ is a known model-substructure parameter, the $\frac{{\partial{[\mathbf{y}_{\left\langle 1\right\rangle}]_{i}}}}{{\partial[{\mathfrak{m}}\left({\mathbf{x},r}\right)]_{j}}}\equiv\frac{{\partial{[\mathbf{y}]_{i}}}}{{\partial{[\mathfrak{m}}({\mathbf{x},r})]_{j}}}\equiv{[\mathbf{A}]_{i,j}}$ always holds. We next consider the remaining PhN layers. Considering the Line 2 Algorithm 2, we have $\displaystyle[\mathbf{y}_{\left\langle p\right\rangle}]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=[\mathbf{K}_{\left\langle p\right\rangle}\cdot\mathfrak{m}(\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle})]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=\mbox{\small I}_{\text{len}(\mathbf{y})}\cdot[\mathfrak{m}(\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle})]_{2:(\text{len}(\mathbf{y})+1)}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ (117a) $\displaystyle=\mbox{\small I}_{\text{len}(\mathbf{y})}\cdot[\mathbf{y}_{\left\langle p-1\right\rangle}]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ (117b) $\displaystyle=[\mathbf{y}_{\left\langle p-1\right\rangle}]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=[\mathbf{K}_{\left\langle p-1\right\rangle}\cdot\mathfrak{m}(\mathbf{y}_{\left\langle p-2\right\rangle},r_{\left\langle p-1\right\rangle})]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=\mbox{\small I}_{\text{len}(\mathbf{y})}\cdot[\mathfrak{m}(\mathbf{y}_{\left\langle p-2\right\rangle},r_{\left\langle p-1\right\rangle})]_{2:(\text{len}(\mathbf{y})+1)}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=\mbox{\small I}_{\text{len}(\mathbf{y})}\cdot[\mathbf{y}_{\left\langle p-2\right\rangle}]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ 
$\displaystyle=[\mathbf{y}_{\left\langle p-2\right\rangle}]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot\breve{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=\ldots$ $\displaystyle=[\mathbf{y}_{\left\langle 1\right\rangle}]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=[\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})]_{1:\text{len}(\mathbf{y})}+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})},$ $\displaystyle=\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})},$ (117c) where (117a) and (117b) are obtained from their previous steps via considering the structure of block matrix $\mathbf{K}_{\left\langle t\right\rangle}$ (generated in Line 11 of Algorithm 2) and the formula of augmented monomials: $\mathfrak{m}(\mathbf{x},r)=\left[1;\leavevmode\nobreak\ \mathbf{x};\leavevmode\nobreak\ [\mathfrak{m}({\mathbf{x},r})]_{(\text{len}(\mathbf{x})+2):\text{len}(\mathfrak{m}(\mathbf{x},r))}\right]$ (generated via Algorithm 1). The remaining iterative steps follow the same path. The training loss function is to push the terminal output of Algorithm 2 (i.e., $\widehat{\mathbf{y}}=\mathbf{y}_{\left\langle p\right\rangle}$) to approximate the real output $\mathbf{y}$, which in light of (117c) yields $\displaystyle\widehat{\mathbf{y}}$ $\displaystyle=\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+[\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)]_{1:\text{len}(\mathbf{y})}$ $\displaystyle=\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})+\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right),$ (118) where (118) from its previous step is obtained via considering the fact $\text{len}(\widehat{\mathbf{y}})=\text{len}(\mathbf{y})=\text{len}(\mathbf{y}_{\left\langle p\right\rangle})$. Meanwhile, the condition of generating weight-masking matrix in Line 2 of Algorithm 2 removes all the node-representations’ connections with the known model-substructure parameters included in $\mathbf{K}_{\left\langle 1\right\rangle}$. 
Therefore, we can conclude that in the terminal output computation (118), the term $\mathbf{a}_{\left\langle p\right\rangle}\odot\text{act}\\!\left({\mathbf{U}_{\left\langle p\right\rangle}\cdot{\mathfrak{m}}({\mathbf{y}_{\left\langle p-1\right\rangle},r_{\left\langle p\right\rangle}})}\right)$ does not have influence on the computing of knowledge term $\mathbf{K}_{\left\langle 1\right\rangle}\cdot\mathfrak{m}(\mathbf{x},r_{\left\langle 1\right\rangle})$. Thus, the Algorithm 2 strictly embeds and preserves the available knowledge pertaining to the physics model of ground truth (1), or equivalently the (16). ### 9.4 Proof of Theorem 3 Due to Theorem 6 in Supplementary Information 9.1, the number of augmented monomials of $d$ cascading PhNs (21) is obtained as $\displaystyle\sum\limits_{p=1}^{d}{\text{len}(\mathfrak{m}({\mathbf{x}},r_{{\left\langle p\right\rangle}}))}$ $\displaystyle=\underbrace{\sum\limits_{s=1}^{{r_{\left\langle 1\right\rangle}}}{\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)s!}}}+1}_{\text{the first PhN}}+\underbrace{\sum\limits_{v=1}^{d-1}{\sum\limits_{s=1}^{{r_{{\left\langle v+1\right\rangle}}}}{\frac{{\left({{n_{\left\langle v\right\rangle}}+s-1}\right)!}}{{\left({{n_{\left\langle v\right\rangle}}-1}\right)!s!}}}}+d-1}_{\text{the remaining PhNs}}.$ (119) The condition (22) implies that $r>r_{{\left\langle 1\right\rangle}}$, which in conjunction with (112), lead to $\displaystyle\text{len}(\mathfrak{m}({\mathbf{x}},r))=\sum\limits_{s=1}^{{r_{\left\langle 1\right\rangle}}}{\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)!s!}}}+\sum\limits_{s={r_{\left\langle 1\right\rangle}}+1}^{r}{\frac{{\left({n+s-1}\right)!}}{{\left({n-1}\right)!s!}}}+1.$ (120) Subtracting (119) from (120) yields (23). ### 9.5 Derivations of Solution (99) With the consideration of (81)–(94), the safety formulas: $\left[{\bf{s}}({\bf{u}}(k))\right]_{1}=[\widehat{\mathbf{c}}]_{1}$ and $\left[{\bf{s}}({\bf{u}}(k))\right]_{2}=[\widehat{\mathbf{c}}]_{2}$ can be rewritten as $\displaystyle{\lambda_{1}}[\widehat{\mathbf{u}}(k)]_{1}^{2}+{\lambda_{2}}[\widehat{\mathbf{u}}(k)]_{2}^{2}$ $\displaystyle={[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}},$ (121) $\displaystyle{\left[\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\right]^{\top}}{\mathbf{P}}_{2}\left[\begin{array}[]{l}\theta(k)\\\ \gamma(k)\end{array}\right]$ $\displaystyle={s_{11}}[\widehat{\mathbf{u}}(k)]_{1}^{2}+2{s_{12}}{[\widehat{\mathbf{u}}}(k)]_{1}{[\widehat{\mathbf{u}}}(k)]_{2}+{s_{22}}[\widehat{\mathbf{u}}(k)]_{2}^{2}=[\mathbf{b}]_{2}-{[\widehat{\mathbf{c}}]_{2}},$ (126) We now define: $\displaystyle\overline{\mu}_{1}\triangleq\frac{{{[\widehat{\mathbf{c}}]_{1}}-{[\mathbf{b}]_{1}}}}{{{\lambda_{1}}}},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \overline{\lambda}\triangleq\frac{{{\lambda_{2}}}}{{{\lambda_{1}}}},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \bar{b}\triangleq{[\mathbf{b}]_{2}}-{[\widehat{\mathbf{c}}]_{2}}.$ (127) leveraging which, the (121) is rewritten as $\displaystyle[\widehat{\mathbf{u}}(k)]_{1}^{2}=\overline{\mu}_{1}-\overline{\lambda}[\widehat{\mathbf{u}}(k)]_{2}^{2},$ (128) and we can obtain from (126) that $\displaystyle 
4s_{12}^{2}[\widehat{\mathbf{u}}(k)]_{2}^{2}[\widehat{\mathbf{u}}(k)]_{1}^{2}={{\overline{b}}^{2}}+s_{11}^{2}[\widehat{\mathbf{u}}(k)]_{1}^{4}+s_{22}^{2}[\widehat{\mathbf{u}}(k)]_{2}^{4}-2\overline{b}{s_{11}}[\widehat{\mathbf{u}}(k)]_{1}^{2}-2\overline{b}{s_{22}}[\widehat{\mathbf{u}}(k)]_{2}^{2}+2{s_{11}}{s_{22}}[\widehat{\mathbf{u}}(k)]_{1}^{2}[\widehat{\mathbf{u}}(k)]_{2}^{2}.$ Substituting (128) into this equation yields $\displaystyle{\varpi_{1}}[\widehat{\mathbf{u}}]_{2}^{4}\left(k\right)+{\varpi_{2}}[\widehat{\mathbf{u}}]_{2}^{2}\left(k\right)+{\varpi_{3}}=0,$ (129) where $\displaystyle{\varpi_{1}}$ $\displaystyle\triangleq s_{11}^{2}{\overline{\lambda}^{2}}+s_{22}^{2}-2{s_{11}}{s_{22}}\overline{\lambda}+4s_{12}^{2}\overline{\lambda},$ (130) $\displaystyle{\varpi_{2}}$ $\displaystyle\triangleq 2\bar{b}{s_{11}}\overline{\lambda}-2s_{11}^{2}{\overline{\mu}_{1}}\overline{\lambda}-2\bar{b}{s_{22}}+2{s_{11}}{s_{22}}{\overline{\mu}_{1}}-4s_{12}^{2}{\overline{\mu}_{1}},$ (131) $\displaystyle{\varpi_{3}}$ $\displaystyle\triangleq{\bar{b}^{2}}+s_{11}^{2}\overline{\mu}_{1}^{2}-2\bar{b}{s_{11}}{\overline{\mu}_{1}}.$ (132) Since $[\widehat{\mathbf{u}}(k)]_{2}^{2}\geq 0$, the admissible solution of (129) is $\displaystyle[\widehat{\mathbf{u}}]_{2}^{2}(k)=\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}},$ (133) and substituting (133) into (128) yields $\displaystyle[\widehat{\mathbf{u}}]_{1}^{2}(k)={{\overline{\mu}}_{1}}-\overline{\lambda}\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}.$ (134) The vector $\widehat{\mathbf{u}}(k)$ then follows directly from (133) and (134): $\displaystyle\widehat{\mathbf{u}}(k)=\left[{\pm\sqrt{{{\overline{\mu}}_{1}}-\overline{\lambda}\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}};\pm\sqrt{\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}}}\right],$ which, in conjunction with (94) and $Q^{-1}=Q=Q^{\top}$, leads to $\displaystyle\left[\begin{array}[]{l}\theta\left(k\right)\\ \gamma\left(k\right)\end{array}\right]={\mathbf{Q}_{1}}\left[\begin{array}[]{l}\pm\sqrt{{{\overline{\mu}}_{1}}-\overline{\lambda}\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}}\\ \pm\sqrt{\frac{{\sqrt{\varpi_{2}^{2}-4{\varpi_{1}}{\varpi_{3}}}-{\varpi_{2}}}}{{2{\varpi_{1}}}}}\end{array}\right].$ (139) Substituting the notations defined in (127) into (139) and (130)–(132) results in (99) and (100)–(102), respectively. ## Acknowledgements This work was supported in part by the National Science Foundation (award number CPS-1932529), the Hong Kong Research Grants Council (RGC) Theme-based Research Scheme T22-505/19-N (P0031331, RBCR, P0031259, RBCP), and the RGC Germany/HK Joint Research Scheme under Grant G-PolyU503/16. ## Author Contributions Y.M., L.S., and H.S. designed the research; Y.M. performed the research; Y.M. led the experiments, H.S. conducted the deep Koopman experiments, and Y.L. collected vehicle data and implemented the self-correcting Phy-Taylor in AutoRally; Y.M., L.S., Q.W., and T.A. participated in research discussion and data analysis; Y.M. and S.H. wrote the manuscript; T.A. edited the manuscript; L.S. and T.A. led the research team.
approach. For my needs this is more than enough, since in any case I eventually restrict attention to a small interval around $0$. Corollary 4.11 can be formulated in a slightly more general fashion. Using the notation $\delta(W):=\limsup_{r\rightarrow\infty}\frac{\log\big(b_{W}(r)\big)}{r}$ for a general subset $W$ in a general metric space $X$, the following version holds: ###### Corollary 4.13. Let $(X,x_{0})$ be a pointed metric space, $Y,Z\subset X$ two subsets, and $u(r)\preceq_{\infty}\varepsilon r$ for some $\varepsilon<\frac{1}{2}$. Assume that $b_{Z}^{u}(r)=b_{Z}(r)$. If $Z\subset\mathcal{N}_{u}(Y)$, then $\delta(Y)\geq(1-4\varepsilon)\cdot\delta(Z)$. Moreover, if $u$ is sublinear, then $\delta(Z)=\delta(Y)$. In particular, Corollary 4.11 holds even when the group $G$ does not have property $(T)$. Corollary 4.11 follows from Corollary 4.13 because the fact that $\Gamma$ is a group of isometries implies $b_{\Gamma}^{u}(r)=b_{\Gamma}(r)$. ###### Proof of Corollary 4.13. Consider the closest point projection $p_{Y}:Z\rightarrow Y$, denote $y_{z}:=p_{Y}(z)$ and observe: 1. $|y_{z}|\leq|z|+u(|z|)$. 2. $d\big(y_{z},y_{z^{\prime}}\big)\geq d(z,z^{\prime})-2u(\max\{|z|,|z^{\prime}|\})$. The first item follows from the fact that $y_{z}\in\overline{B}\big(z,u(|z|)\big)$ together with the triangle inequality. The second item follows from the quadrilateral inequality, i.e., using the triangle inequality twice along the quadrilateral $[z,z^{\prime},y_{z^{\prime}},y_{z}]$. The above properties allow me to use Proposition 4.10 with constant $L=1$ and function $u^{\prime}=2u$ to get $b_{Y}(r)\geq b_{Z}\big(r-u^{\prime}(r)\big)/b_{Z}^{u}\Big(u^{\prime}\big(r-u^{\prime}(r)\big)\Big).$ Since I assume $b_{Z}^{u}=b_{Z}$, I can omit the superscript $u$ in the last expression. Recalling the definition $\delta(W)=\limsup_{r\rightarrow\infty}\frac{\log\big(b_{W}(r)\big)}{r}$, it remains to prove: $\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)/b_{Z}\Big(u^{\prime}\big(r-u^{\prime}(r)\big)\Big)\bigg)\geq(1-4\varepsilon)\cdot\delta(Z).$ The proof of this inequality involves nothing more than $\log$ rules and arithmetic of limits: $\begin{split}&\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)/b_{Z}\Big(u^{\prime}\big(r-u^{\prime}(r)\big)\Big)\bigg)\\ &=\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\Bigg(\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)\bigg)-\log\bigg(b_{Z}\Big(u^{\prime}\big(r-u^{\prime}(r)\big)\Big)\bigg)\Bigg)\\ &\geq\limsup_{r\rightarrow\infty}\Bigg[\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)\bigg)-\limsup_{s\rightarrow\infty}\bigg[\frac{1}{s}\log\bigg(b_{Z}\Big(u^{\prime}\big(s-u^{\prime}(s)\big)\Big)\bigg)\bigg]\Bigg]\\ &=\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)\bigg)-\limsup_{s\rightarrow\infty}\frac{1}{s}\log\bigg(b_{Z}\Big(u^{\prime}\big(s-u^{\prime}(s)\big)\Big)\bigg)\\ &\geq\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-2\varepsilon r\big)\bigg)-\limsup_{s\rightarrow\infty}\frac{1}{s}\log\bigg(b_{Z}\Big(2\varepsilon s\Big)\bigg)\\ &=(1-2\varepsilon)\delta(Z)-2\varepsilon\delta(Z)=(1-4\varepsilon)\delta(Z).\end{split}$ (3) Below I justify the steps in the above inequalities: 1. The first equality is by the rules of $\log$. 2.
The second and third inequalities are by arithmetic of limits: let $(a_{n})_{n},(b_{n})_{n}$ be two sequences of positive numbers, and $A=\limsup_{n}a_{n},\ B=\limsup_{n}b_{n}$; then $\limsup_{n}(a_{n}-b_{n})\geq\limsup_{n}(a_{n}-B)=A-B$. 3. The fourth inequality holds since $u^{\prime}(r)<2\varepsilon r$ for all large enough $r$. 4. The fifth equality is the definition of $\delta$. This completes the proof in the general case, which is what is needed for the proof of Theorem 4.3. For the more refined statement in the case $u$ is sublinear, one has to show a bit more. From inequality (3) (specifically from its fourth line) it is clearly enough to prove: 1. $\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)\bigg)=\delta(Z)$. 2. $\limsup_{s\rightarrow\infty}\frac{1}{s}\log\bigg(b_{Z}\Big(u^{\prime}\big(s-u^{\prime}(s)\big)\Big)\bigg)=0$. Starting with the second item, it holds that $\frac{1}{s}\log\bigg(b_{Z}\Big(u^{\prime}\big(s-u^{\prime}(s)\big)\Big)\bigg)=\frac{u^{\prime}\big(s-u^{\prime}(s)\big)}{s}\cdot\frac{\log\bigg(b_{Z}\Big(u^{\prime}\big(s-u^{\prime}(s)\big)\Big)\bigg)}{u^{\prime}\big(s-u^{\prime}(s)\big)}.$ Clearly the $\limsup$ of the right factor in the above product is bounded by $\delta(Z)$, and in particular it is uniformly bounded. On the other hand, sublinearity of $u^{\prime}$ implies that the left factor tends to $0$. I conclude that this product tends to $0$ as $s$ tends to $\infty$. It remains to prove $\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\bigg(b_{Z}\big(r-u^{\prime}(r)\big)\bigg)=\delta(Z)$. In a similar fashion, $\frac{1}{r}\log\Big(b_{Z}\big(r-u^{\prime}(r)\big)\Big)=\frac{r-u^{\prime}(r)}{r}\cdot\frac{\log\Big(b_{Z}\big(r-u^{\prime}(r)\big)\Big)}{r-u^{\prime}(r)}.$ The left factor tends to $1$ by sublinearity of $u^{\prime}$. The right factor is nearly the expression in the definition of $\delta(Z)$, and I want to prove that taking its $\limsup$ indeed gives $\delta(Z)$. _A priori_ $\{r-u^{\prime}(r)\}_{r\in\mathbb{R}_{>0}}$ is just a subset of $\mathbb{R}_{>0}$, so changing variable and writing $t:=r-u^{\prime}(r)$ requires a justification. But there is no harm in assuming that $u^{\prime}$ is a non-decreasing continuous function, hence $\mathbb{R}_{\geq R}\subset\{r-u^{\prime}(r)\}_{r\in\mathbb{R}_{>0}}$ for some $R\in\mathbb{R}_{>0}$. Therefore for any sequence $r_{n}\rightarrow\infty$ there is a sequence $r_{n}^{\prime}$ with $r_{n}=r_{n}^{\prime}-u^{\prime}(r_{n}^{\prime})$ for all large enough $n$ (note that in particular $r_{n}^{\prime}\rightarrow\infty$). In the other direction, for every sequence $r_{n}^{\prime}\rightarrow\infty$ there is clearly a sequence $r_{n}\rightarrow\infty$ for which $r_{n}=r_{n}^{\prime}-u^{\prime}(r_{n}^{\prime})$. I conclude $\limsup_{r\rightarrow\infty}\frac{\log\Big(b_{Z}\big(r-u^{\prime}(r)\big)\Big)}{r-u^{\prime}(r)}=\limsup_{r\rightarrow\infty}\frac{1}{r}\cdot\log\big(b_{Z}(r)\big)=\delta(Z).$ This completes the proof. ∎ ###### Proof of Theorem 4.3. Define $\varepsilon(G)=\frac{c^{*}(G)}{4\cdot 2\|\rho\|}$, and assume $u(r)\preceq_{\infty}\varepsilon(G)\cdot r$. Notice that $\varepsilon(G)<\frac{1}{2}$, and since $\delta(\Gamma)=2\|\rho\|$ Corollary 4.11 gives $\delta(\Lambda)\geq\big(1-4\varepsilon(G)\big)\cdot 2\|\rho\|=2\|\rho\|-4\varepsilon(G)\cdot 2\|\rho\|\geq 2\|\rho\|-c^{*}(G).$ By Theorem 4.6, $\Lambda$ is a lattice. ∎ ###### Remark 4.14.
The existence of interesting groups that coarsely cover a lattice is a key question that arises naturally from this paper. The first question that comes to mind is whether there exist groups that are not commensurable to a lattice but that sublinearly, or even $\varepsilon$-linearly, cover one. Perhaps the growth rate point of view could be used to rule out groups that cover a lattice $\varepsilon$-linearly but not sublinearly. ## 5 Uniform Lattices In this section I prove: ###### Theorem 5.1. Let $G$ be a finite-centre semisimple Lie group without compact factors. Let $\Gamma\leq G$ be a lattice, $\Lambda\leq G$ a discrete subgroup such that $\Gamma\subset\mathcal{N}_{u}(\Lambda)$ for some sublinear function $u$. If $\Gamma$ is uniform, then $\Lambda$ is a uniform lattice. As in the case of lattices with property (T), uniform lattices admit the stronger version of $\varepsilon$-linear rigidity, for any $\varepsilon<1$: ###### Theorem 5.2. In the setting of Theorem 5.1, the conclusion holds also under the relaxed assumption that $u(r)\preceq_{\infty}\varepsilon r$ for any $0<\varepsilon<1$. Clearly Theorem 5.2 implies Theorem 5.1. Throughout this section, the standing assumptions are those of Theorem 5.2. ##### Lattice Criterion. A discrete subgroup is a uniform lattice if and only if it admits a relatively compact fundamental domain. The criterion I use is the immediate consequence that if $\Gamma$ is uniform and $u$ is bounded (i.e. $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$), then $\Lambda$ is a uniform lattice. ##### Line of Proof and Use of $\varepsilon$-Linearity. The goal is to show that the $\varepsilon$-linearity of $u$ forces $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$, i.e. that $\Gamma$ actually lies inside a bounded neighbourhood of $\Lambda$. The proof is by way of contradiction. If there is no such $D>0$, then there are arbitrarily large balls that do not intersect the orbit $\Lambda\cdot x_{0}$. The proof goes by finding such large $\Lambda$-free balls that are all tangent to some fixed arbitrary point $x\in X$ (see Figure 1). The $\varepsilon$-linearity then gives rise to concentric $\Gamma$-free balls that are arbitrarily large, contradicting the fact that $\Gamma$ is a uniform lattice. ###### Remark 5.3. The main difference from the non-uniform case is that for a non-uniform lattice $\Gamma$, the space $X$ does admit arbitrarily large $\Gamma$-free balls. This situation requires different lattice criteria and much extra work. Still, the proof for the uniform case, though essentially no more than a few lines, lays the foundations for and presents the logic of the much more involved case of $\mathbb{Q}$-rank $1$ lattices. ### 5.1 Notations and Terminology The following definitions will be used repeatedly both in this section and in Section 6. They mainly fix the terminology and notation of the geometric situation illustrated in Figure 1. ###### Definition 5.4. Let $H\leq G=\mathrm{Isom}(X)^{\circ}$. A set $U\subset X$ is called _$H$-free_ if $H\cdot x_{0}\cap\mathrm{Int}(U)=\emptyset$, where $\mathrm{Int}(U)$ is the topological interior of $U$. That is, $U$ is called $H$-free if its interior does not intersect the $H$-orbit $H\cdot x_{0}$. ###### Definition 5.5. Denote $P_{\Lambda}(\gamma x_{0})=P_{\Lambda}(\gamma)=\lambda_{\gamma}x_{0}$. 1. $d_{\gamma}:=d(\gamma x_{0},\lambda_{\gamma}x_{0})$. 2. $B_{\gamma}:=B(\gamma x_{0},d_{\gamma})$. It is a $\Lambda$-free ball centred at $\gamma x_{0}$ and tangent to $\lambda_{\gamma}x_{0}$. 3.
$x_{\gamma}^{\prime}:=\lambda_{\gamma}^{-1}\gamma x_{0}$. Notice $|x_{\gamma}^{\prime}|=d_{\gamma}$. 4. $B_{\gamma}^{\prime}:=\lambda_{\gamma}^{-1}B_{\gamma}=B(x_{\gamma}^{\prime},d_{\gamma})$. It is $\Lambda$-free as a $\Lambda$-translate of the $\Lambda$-free ball $B_{\gamma}$, and is tangent to $x_{0}$. 5. For $s\in\mathbb{R}_{>0}$ and a ball $B=B(x,r)$, denote $sB:=B(x,sr)$, the rescaled ball with the same centre and radius $sr$. 6. For a sequence $\gamma_{n}$, denote by $\lambda_{n},d_{n},B_{n},B_{n}^{\prime},x_{n}^{\prime}$ the respective $\lambda_{\gamma_{n}},d_{\gamma_{n}}$, etc. Figure 1: Basic Setting and Lemma 5.6. A $\Lambda$-free ball about $\gamma x_{0}$ of radius $d_{\gamma}$, translated by $\lambda_{\gamma}^{-1}$ to a ball tangent to $x_{0}$. The linear ratio between $|x_{\gamma}^{\prime}|=d_{\gamma}$ and the $\Lambda$-free radius forces the red ball to be $\Gamma$-free. ### 5.2 Proof of Theorem 5.2 ###### Lemma 5.6. Let $x\in X$. There exists $S=S(x,u)\in(0,1)$ such that for every $s\in(0,S)$ there is $R=R(s,S)$ such that if $r>R$ and $B=B(y,r)$ is a $\Lambda$-free ball tangent to $x$, then $sB$ is $\Gamma$-free. In particular, the existence of arbitrarily large $\Lambda$-free balls that are all tangent to a fixed point $x\in X$ implies the existence of arbitrarily large $\Gamma$-free balls. There is a slightly stronger version of Lemma 5.6 if $u$ is sublinear: ###### Lemma 5.7. Let $G,\Gamma,\Lambda$ and $u$ be as in Theorem 5.1 (in particular, $u$ is a sublinear function and $\Gamma\subset\mathcal{N}_{u}(\Lambda)$). For every $x\in X$ and every $s\in(0,1)$ there exists $R=R(x,s)>0$ such that for every $r>R$, if $B=B(y,r)$ is a $\Lambda$-free ball tangent to $x$ then $sB$ is $\Gamma$-free. I omit the proof of Lemma 5.7, which is a slightly simpler version of the proof of Lemma 5.6. ###### Proof of Lemma 5.6. The proof is more easily read if one assumes $x=x_{0}$ and $u(r)=\varepsilon r$, so I begin with this case. Assume $B=B(y,r)$ is $\Lambda$-free for some $y\in X$, $r>0$. The assumption $x=x_{0}$ gives $|y|=d(y,x_{0})=r$. For a fixed $s\in(0,1)$, assume $sB\cap\Gamma\cdot x_{0}\neq\emptyset$. This gives rise to an element $\gamma\in\Gamma$ such that: 1. $|\gamma|=d(\gamma x_{0},x_{0})\leq d(y,x_{0})+d(\gamma x_{0},y)\leq(1+s)r$. In particular, $d(\gamma x_{0},\Lambda\cdot x_{0})\leq\varepsilon(1+s)r$. 2. $B\big(\gamma x_{0},(1-s)r\big)\subset B(y,r)$, so it is $\Lambda$-free. I conclude that for such $s$ one has the inequality $(1-s)r\leq\varepsilon(1+s)r$, i.e. $\frac{1-s}{1+s}\leq\varepsilon$. The constant $\varepsilon$ is fixed and smaller than $1$, while $\frac{1-s}{1+s}$ tends to $1$ monotonically from below as $s>0$ tends to $0$. I conclude that there is a segment $(0,S)\subset(0,1)$ (explicitly, one may take $S=\frac{1-\varepsilon}{1+\varepsilon}$) such that for all $s\in(0,S)$, $sB$ is $\Gamma$-free (independently of $r$). Assume now that $x\neq x_{0}$ and $u(r)\preceq_{\infty}\varepsilon r$. As above, if $\gamma x_{0}\in B(y,sr)$ then it is the centre of a $\Lambda$-free ball of radius $(1-s)r$, and so $(1-s)r\leq u(|\gamma|)$. I wish to use the $\varepsilon$-linear bound on $u$ as I did before, only this time $u$ is only asymptotically smaller than $\varepsilon r$. To circumvent this I just need to show that $|\gamma|$ is large enough.
Indeed since $B\big{(}\gamma x_{0},(1-s)r\big{)}$ is $\Lambda$-free it does not contain $x_{0}\in\Lambda\cdot x_{0}$ and in particular $(1-s)r\leq d(x_{0},\gamma x_{0})=|\gamma|$. For some $R_{1}(s)=R_{1}(s,u)$ one therefore has for all $r>R_{1}(s)$ $(1-s)r\leq u(|\gamma|)\leq\varepsilon|\gamma|$ On the other hand $|y|\leq d(x,y)+d(x,x_{0})=r+|x|$. Consequently $|\gamma|\leq(1+s)r+|x|$ and, for $r>R_{1}(s)$, $(1-s)r\leq u(|\gamma|)\leq\varepsilon|\gamma|\leq\varepsilon\big{(}(1+s)r+|x|\big{)}$ This means that $s$ for which $\Gamma\cdot x_{0}\cap B(y,sr)\neq\emptyset$ must admit, for all $r>R_{1}(s)$, $\frac{1-s}{1+s+\frac{|x|}{r}}=\frac{(1-s)r}{(1+s)r+|x|}\leq\varepsilon<1$ (4) The rest of the proof is just Calculus 1, and concerns with finding $S=S(x,\varepsilon,u)\in(0,1)$ so that for any $s\in(0,S)$ there is $R(s)$ such that all $r>R(s)$ satisfy $\varepsilon<\frac{1-S}{1+S+\frac{|x|}{r}}\leq\frac{1-s}{1+s+\frac{|x|}{r}}$ (5) The lemma readily follows from inequalities 4,5. Explicitly, fix $\varepsilon^{\prime}>\varepsilon$. As before, monotonic approach of $\frac{1-s}{1+s}$ to $1$ allows to fix $S\in(0,1)$ for which $\varepsilon<\varepsilon^{\prime}<\frac{1-s}{1+s}$ for all $s\in(0,S)$. Next note that for any fixed $s\in(0,S)$, $\lim_{r\rightarrow\infty}\frac{1-s}{1+s+\frac{|x|}{r}}=\frac{1-s}{1+s}$, and that the approach in monotonically increasing with $r$. Since $\varepsilon<\varepsilon^{\prime}$, this limit implies that for some $R_{2}>R_{1}(S)$, all $r>R_{2}$ admit $\varepsilon<\frac{1-S}{1+S+\frac{|x|}{r}}$. Finally notice that for any fixed $r$ the function $\frac{1-s}{1+s+\frac{|x|}{r}}$ is again monotonically increasing as $s$ tends to $0$ from above. Therefore inequality 5 holds for every $s\in(0,S)$ and all $r>R_{2}(S)$ (capital $S$ is intentional and important). To conclude the proof, notice that if moreover $r>R_{1}(s)$ (again lowercase $s$ is intentional and important) then inequalities 4,5 both hold. This means that for any $s\in(0,S)$ there is $R(s):=\max\\{R_{1}(s),R_{2}(S)\\}$ such that $r>R(s)\Rightarrow B(y,sr)$ is $\Gamma$-free. The constants $R_{1}(s),R_{2}(S)$ have the desired dependencies, hence so does $R(s)$, proving the lemma. ∎ ###### Corollary 5.8 (Theorem 5.2). There is a uniform bound on $\\{d_{\gamma}\\}_{\gamma\in\Gamma}$, i.e., $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$. In particular, $\Lambda$ is a uniform lattice. ## 6 $\mathbb{Q}$-rank 1 Lattices In this section I prove: ###### Theorem 6.1. Let $G$ be a real finite-centre semisimple Lie group without compact factors, $\Gamma\leq G$ an irreducible non-uniform $\mathbb{Q}$-rank $1$ lattice, $\Lambda\leq G$ a discrete irreducible subgroup. If $\Gamma\subset\mathcal{N}_{u}(\Lambda)$ for some sublinear function $u$, then $\Lambda$ is a lattice. Moreover, if $\Gamma\not\subset\mathcal{N}_{D}(\Lambda)$ for any $D>0$, then $\Lambda$ is also of $\mathbb{Q}$-rank $1$. If $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$, then $\Lambda$ could be a uniform lattice. An obvious obstacle for that is if $\Lambda\subset\mathcal{N}_{u^{\prime}}(\Gamma)$ for some sublinear function $u^{\prime}$. This condition turns out to be sufficient for commensurability. ###### Proposition 6.2 (Proposition 6.27). 
Let $G$ be a real finite-centre semisimple Lie group without compact factors, $\Gamma\leq G$ an irreducible $\mathbb{Q}$-rank $1$ lattice, $\Lambda\leq G$ a discrete subgroup such that $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$, and $\Lambda\subset\mathcal{N}_{u}(\Gamma)$ for some sublinear function $u$. Then $\Lambda\subset\mathcal{N}_{D^{\prime}}(\Gamma)$ for some $D^{\prime}$. From Eskin’s and Schwartz’s results on groups at finite Hausdorff distance (see Section 6.3), I conclude: ###### Corollary 6.3. In the setting of Proposition 6.2 and unless $G$ is locally isomorphic to $\mathrm{SL}_{2}(\mathbb{R})$, $\Lambda$ is commensurable to $\Gamma$. ###### Proof of Theorem 1.8. The theorem is an immediate result of Theorem 6.1 and Corollary 6.3 ∎ ### 6.1 Strategy ##### Lattice Criteria. I use three different lattice criteria, depending on the $\mathbb{R}$-rank of $G$ and on whether or not $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$. My proof is motivated by a conjecture of Margulis, that can be viewed as an algebraic converse to the geometric structure of the compact core described in Theorem 2.21. Over the past $30$ years this conjecture was resolved in many major cases by Oh, Benoist and Miquel, and was recently proved in full generality by Benoist-Miquel (see Section $1.2$ in [5] for an overview of the milestones in solving this conjecture). ###### Theorem 6.4 (Theorem $2.16$ in [5]). Let $G$ be a semisimple real algebraic Lie group of real rank at least $2$ and $U$ be a non-trivial horospherical subgroup of $G$. Let $\Delta$ be a discrete Zariski dense subgroup of $G$ that contains an indecomposable lattice $\Delta_{U}$ of $U$. Then $\Delta$ is a non-uniform irreducible arithmetic lattice of $G$. See Definition 6.44 for the precise meaning of an _indecomposable_ horospherical lattice. For $\mathbb{R}$-rank$\ 1$ groups, one has the following theorem by Kapovich and Liu, stating that a group is geometrically finite so long as ‘most’ of its limit points are conical. Recall $\mathcal{L}(\Delta)$ is the limit set of $\Delta\leq\mathrm{Isom}(X)$, and $\mathcal{L}_{\mathrm{con}}(\Delta)$ is the set of its conical limit points. ###### Theorem 6.5 (Theorem $1.5$ in [29]). Let $X$ be a $\mathbb{R}$-rank$\ 1$ symmetric space. A discrete subgroup $\Delta\leq\mathrm{Isom}(X)$ is geometrically infinite if and only if the set $\mathcal{L}(\Delta)\setminus\mathcal{L}_{\mathrm{con}}(\Delta)$ of non- conical limit points has the cardinality of the continuum. As a direct corollary I obtain the following criterion: ###### Corollary 6.6. Let $X$ be a $\mathbb{R}$-rank$\ 1$ symmetric space, $\Gamma\leq G=\mathrm{Isom}(X)$ a non-uniform lattice and $\Lambda\leq G$ a discrete subgroup. If $\mathcal{L}(\Lambda)=X(\infty)$ and $\mathcal{L}_{\mathrm{con}}(\Gamma)\subset\mathcal{L}_{\mathrm{con}}(\Lambda)$, then $\Lambda$ is a lattice. ###### Proof. Since $\Gamma$ is a lattice, $\mathcal{L}(\Gamma)=X(\infty)$ and it is geometrically finite. Theorem 6.5 implies the cardinality of $X(\infty)\setminus\mathcal{L}_{\mathrm{con}}(\Gamma)$ is strictly smaller than the continuum. The assumption $\mathcal{L}_{\mathrm{con}}(\Gamma)\subset\mathcal{L}_{\mathrm{con}}(\Lambda)$ implies the same holds for $\Lambda$, and in particular that $\Lambda$ is geometrically finite. The assumption that $\mathcal{L}(\Lambda)=X(\infty)$ implies that $\Lambda$ is geometrically finite if and only if it is a lattice. ∎ ###### Corollary 6.7. 
Let $X$ be a $\mathbb{R}$-rank$\ 1$ symmetric space, $\Gamma\leq G=\mathrm{Isom}(X)$ a non-uniform lattice and $\Lambda\leq G$ a discrete subgroup. If $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$, then $\Lambda$ is a lattice. ###### Proof. By definition of the limit set and of conical limit points, it is clear that every $\Gamma$-limit point is a $\Lambda$-limit point, and every $\Gamma$-conical limit point is also $\Lambda$-conical limit point. I conclude from Corollary 6.6 that $\Lambda$ is a lattice. ∎ Also in higher rank the inclusion $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ implies that $\Lambda$ is a lattice. This result is due to Eskin. ###### Theorem 6.8 (Eskin, see Remark 6.10 below). Let $G$ be a real finite-centre semisimple Lie group without compact factors and of higher rank, $\Gamma\leq G$ an irreducible non-uniform lattice, $\Lambda\leq G$ a discrete subgroup such that $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$. Then $\Lambda$ is a lattice. If moreover $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ then $\Lambda$ and $\Gamma$ are commensurable. Theorem 6.8 was used in the proof of quasi-isometric rigidity for higher rank non-uniform lattices in [15] and [21]. In the (earlier) $\mathbb{R}$-rank$\ 1$ case, Schwartz [53] used an analogous statement that requires one extra assumption. ###### Theorem 6.9 (Schwartz, see Section $10.4$ in [53] and Remark 6.10 below). Let $G$ be a real finite-centre simple Lie group of $\mathbb{R}$-rank$\ 1$, $\Gamma\leq G$ an irreducible non-uniform lattice, $\Lambda\leq G$ a discrete subgroup such that both $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ and $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ for some $D>0$. Then $\Lambda$ is a lattice. If moreover $G$ is not locally isomorphic to $\mathrm{SL}_{2}(\mathbb{R})$, then $\Lambda$ and $\Gamma$ are commensurable. ###### Remark 6.10. Theorem 1.6 should be viewed as a generalization of the bounded case depicted in Theorems 6.8 and 6.9, which were known to experts in the field in the late 1990’s. Complete proofs for these statements were never given in print, and I take the opportunity to include them here. See Section 6.3, where I also prove Proposition 6.2. I thank Rich Schwartz and Alex Eskin for supplying me with their arguments and allowing me to include them in this paper. I also thank my thesis examiner Emmanuel Breuillard for encouraging me to find and make these proofs public. ##### Line of Proof and Use of Sublinearity. Lattices of $\mathbb{Q}$-rank $1$ admit a concrete geometric structure (see Section 2.2). This structure is manifested in the geometry of an orbit of such a lattice in the corresponding symmetric space $X=G/K$. One important geometric property is the existence of a set of horoballs which the orbit of the lattice intersects only in the bounding horospheres, and in each such horosphere the orbit forms a (metric) cocompact lattice. Corollary 6.7 and Theorem 6.8 reduce the proof to the case where $\Gamma\not\subset\mathcal{N}_{D}(\Lambda)$ for any $D>0$. In that case, the essence lies in proving the existence of horospheres in $X$ which a $\Lambda$-orbit intersects in a cocompact lattice. This is proved purely geometrically, using the geometric structure of $\mathbb{Q}$-rank $1$ lattices and the sublinear constraint. Together with some control on the location of these horospheres, I prove two strong properties: 1. 1. $\Lambda\cdot x_{0}$ intersects a horosphere $\mathcal{H}\subset X$ in a cocompact lattice (Proposition 6.11). 2. 2. 
Every $\Gamma$-conical limit point is also a $\Lambda$-conical limit point (Corollary 6.26). The $\mathbb{R}$-rank$\ 1$ case of Theorem 6.1 follows directly from Corollary 6.6 using the second item above. The higher rank case requires a bit more, namely to deduce from the above items that $\Lambda$ meets the hypotheses of the Benoist-Miquel Theorem 6.4. To that end I use a well-known geometric criterion (Lemma 6.47) in order to show that $\Lambda$ is Zariski dense, and a lemma of Mostow (Lemma 6.32) to show that $\Lambda$ intersects a horospherical subgroup in a cocompact lattice. ##### Outline for Section 6. Section 6.2 is the core of the original mathematics of this paper. It is devoted to proving that $\Lambda\cdot x_{0}$ intersects some horospheres in a cocompact lattice. The proof is quite delicate and somewhat involved, and I include a few figures and a detailed informal overview of the proof. The figures are detailed and may take a few moments to comprehend, but I believe they are worth the effort. Section 6.3 deals with the case where $\Gamma\subset\mathcal{N}_{D}(\Lambda)$, and elaborates on Eskin’s and Schwartz’s proofs of Theorems 6.8 and 6.9. Section 6.4 is devoted to the translation of the geometric results of Section 6.2 into the algebraic language used in Theorem 6.4. Though the work is indeed mainly one of translation, some of it is non-trivial. Finally, in Section 6.5 I put everything together for a complete proof of Theorem 6.1. I highly recommend that the reader have a look at the uniform case in Section 5 before reading this one. ### 6.2 A $\Lambda$-Cocompact Horosphere Recall that $d_{\gamma}:=d(\gamma x_{0},\lambda_{\gamma}x_{0})$. In this section I prove: ###### Proposition 6.11 (Proposition 1.9). If $\{d_{\gamma}\}_{\gamma\in\Gamma}$ is unbounded, then there exists a horosphere $\mathcal{H}$ based at $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ such that $\big(\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H})\big)\cdot x_{0}$ intersects $\mathcal{H}$ in a cocompact metric lattice. Moreover, the horoball $\mathcal{HB}$ bounded by $\mathcal{H}$ is $\Lambda$-free. Throughout Section 6.2 the standing assumptions are that $\{d_{\gamma}\}_{\gamma\in\Gamma}$ is unbounded, and that $\Gamma$ is an irreducible $\mathbb{Q}$-rank $1$ lattice. ##### The Argument. The proof is by chasing down the geometric implications of unbounded $d_{\gamma}$. These implications are delicate, but similar in spirit to the straightforward proof for uniform lattices. The proof consists of the following steps: 1. Unbounded $d_{\gamma}$ results in $\Lambda$-free horoballs $\mathcal{HB}^{\Lambda}$ tangent to $\Lambda$-orbit points. Each such horoball is based at $\mathcal{W}_{\mathbb{Q}}(\Gamma)$, giving rise to corresponding horoballs of $\Gamma$, denoted $\mathcal{HB}^{\Gamma}$. 2. If $d_{\gamma}$ is large, then $\gamma x_{0}$ must lie deep inside a unique $\Lambda$-free horoball tangent to $\lambda_{\gamma}x_{0}$. I use: (a) A bound on the distance $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})$. (b) A bound on the angle $\angle_{\lambda_{\gamma}x_{0}}([\lambda_{\gamma}x_{0},\gamma x_{0}],[\lambda_{\gamma}x_{0},\xi))$, where $\xi\in X(\infty)$ is the base point of a suitable $\Lambda$-free horoball tangent to $\lambda_{\gamma}x_{0}$. 3. There exist horospheres of $\Gamma$, say $\mathcal{H}^{\Gamma}$, such that if $\gamma x_{0}\in\mathcal{H}^{\Gamma}$ then large $d_{\gamma}$ implies large $\Lambda$-free areas along the bounding horosphere of some $\mathcal{HB}^{\Lambda}$. 4.
If $\mathcal{HB}^{\Lambda}$ is (uniformly) boundedly close to some $\Lambda$-orbit point, then I show that $\mathcal{H}^{\Lambda}$ is _almost $\Lambda$-cocompact_, that is $\mathcal{H}^{\Lambda}\subset\mathcal{N}_{D}(\Lambda\cdot x_{0})$ for some universal $D=D(\Lambda)$. Together with the previous step, this yields a uniform bound on $d_{\gamma}$ along certain horospheres of $\Gamma$. 5. 5. Finally I elevate the almost cocompactness to actual cocompactness and show $\mathcal{H}^{\Lambda}\subset\mathcal{N}_{D}(\Lambda\cdot x_{0}\cap\mathcal{H}^{\Lambda})$ for some $D>0$. This immediately elevates to $\mathcal{H}^{\Lambda}\subset(\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H}^{\Lambda}))\cdot x_{0}$, proving the proposition. ##### The Properties of $\Gamma$. The geometric properties of $\Gamma$ that are used in the proof are: 1. 1. In higher rank, the characterization of $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ using conical / non-horospherical limit points (Corollary 2.31). In $\mathbb{R}$-rank$\ 1$, the dichotomy of limit points being either non- horospherical or conical (Theorem 2.30). 2. 2. $\Gamma$-cocompactness along the horospheres of $\Gamma$. 3. 3. For every point $x\in X$ and $C>0$ there is a bound $K(C)$ on the number of horospheres of $\Gamma$ that intersect $B(x,C)$ (Corollary 2.23). #### 6.2.1 $\Lambda$-Free Horoballs I retain the notations and objects defined in Section 5.1. ###### Lemma 6.12. There exists a $\Lambda$-free horoball tangent to $x_{0}$. ###### Proof. Since $\\{d_{\gamma}\\}_{\gamma\in\Gamma}$ is unbounded, there are $\gamma_{n}\in\Gamma$ with $d_{n}=d_{\gamma_{n}}=d(\gamma_{n},\lambda_{n})\rightarrow\infty$ monotonically, where $\lambda_{n}\in\Lambda$ is a $\Lambda$-orbit point closest to $\gamma_{n}$. Denote $x_{n}^{\prime}=\lambda_{n}^{-1}\gamma_{n}x_{0}$, $\eta_{n}:=[x_{0},x_{n}^{\prime}]$, and $v_{n}\in S_{x_{0}}X$ the initial velocity vectors $v_{n}:=\dot{\eta_{n}}(0)$. The tangent space $S_{x_{0}}X$ is compact, so up to a subsequence, $v_{n}$ converge monotonically in angle to a direction $v\in S_{x_{0}}X$. Let $\eta$ be the unit speed geodesic ray emanating from $x_{0}$ with initial velocity $\dot{\eta}(0)=v$. Denote $\xi:=\eta(\infty)$ the limit point of $\eta$ in $X(\infty)$. I claim that the horoball $\mathcal{HB}:=\cup_{t>0}B\big{(}\eta(t),t\big{)}$, based at $\xi$ and tangent to $x_{0}$, is $\Lambda$-free. Let $t>0$ and consider $\eta(t)$. For every $\varepsilon>0$, there is some angle $\alpha=\alpha(t,\varepsilon)$ such that any geodesic $\eta^{\prime}$ with $\angle_{x_{0}}(\eta,\eta^{\prime})<\alpha$ admits $d\big{(}\eta(t),\eta^{\prime}(t)\big{)}<\varepsilon/2$. The convergence $v_{n}\rightarrow v$ implies $d\big{(}\eta(t),\eta_{n}(t)\big{)}<\varepsilon/2$ for all but finitely many $n\in\mathbb{N}$. In particular, $B\big{(}\eta(t),t\big{)}\subset\mathcal{N}_{\varepsilon}\Big{(}B\big{(}\eta_{n}(t),t\big{)}\Big{)}$ for all such $n\in\mathbb{N}$. For a fixed $t\leq d_{n}$, it is clear from the definitions that $B\big{(}\eta_{n}(t),t\big{)}\subset B_{n}^{\prime}=B(x_{n}^{\prime},d_{n})$. One has $d_{n}\rightarrow\infty$, and so for a fixed $t>0$ it holds that $t<d_{n}$ for all but finitely many $n\in\mathbb{N}$. 
I conclude that for any fixed $t>0$ there is $n$ large enough such that $B\big(\eta(t),t\big)\subset\mathcal{N}_{\varepsilon}\Big(B\big(\eta_{n}(t),t\big)\Big)\subset\mathcal{N}_{\varepsilon}(B_{n}^{\prime}).$ Hence for every $\varepsilon>0$, $\mathcal{HB}\subset\bigcup_{n}\mathcal{N}_{\varepsilon}(B_{n}^{\prime})=\mathcal{N}_{\varepsilon}\big(\bigcup_{n}B_{n}^{\prime}\big)$. This implies that any point in the interior of $\mathcal{HB}$ is contained in the interior of one of the $\Lambda$-free balls $B_{n}^{\prime}$, proving $\mathcal{HB}$ is $\Lambda$-free. ∎ ###### Lemma 6.13. Suppose $\mathcal{HB}$ is a $\Lambda$-free horoball, based at some point $\xi\in X(\infty)$. Then $\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. ###### Proof. For any geodesic $\eta$ with limit $\xi$, the size $d(x_{0},\gamma x_{0})$ of the $\Gamma$-orbit points $\gamma x_{0}$ that lie boundedly close to $\eta$ grows linearly in the distance to any fixed horosphere based at $\xi$, and in particular to $\mathcal{H}=\partial\mathcal{HB}$. The sublinear constraint $d(\gamma x_{0},\lambda_{\gamma}x_{0})\leq u(|\gamma|)$ together with the fact that $\mathcal{HB}$ is $\Lambda$-free imply that the size of such $\gamma$ is bounded. In $\mathbb{R}$-rank$\ 1$ every limit point is either conical or in $\mathcal{W}_{\mathbb{Q}}(\Gamma)$, proving the lemma in this case. For higher rank, the above argument actually shows more: it shows that a point $\xi^{\prime}\in\mathcal{N}_{\frac{\pi}{2}}(\xi)$ is not conical, because every geodesic with limit $\xi^{\prime}\in\mathcal{N}_{\frac{\pi}{2}}(\xi)$ enters $\mathcal{HB}$ at a linear rate (Lemma 2.32). Hattori’s characterization of $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ (Corollary 2.31) implies $\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. ∎ ###### Definition 6.14. Given a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$, Lemma 6.13 gives rise to a horoball of $\Gamma$ based at the same point of $X(\infty)$. Call this the _horoball corresponding to $\mathcal{HB}^{\Lambda}$_, and denote it by $\mathcal{HB}^{\Gamma}$. The corresponding horosphere is denoted $\mathcal{H}^{\Gamma}$. ###### Remark 6.15. In the course of my work I had a few conversations with Omri Solan regarding the penetration of geodesics into $\Lambda$-free horoballs. Assuming $\Lambda\subset\mathcal{N}_{u}(\Gamma)$ implies that $\Lambda$ preserves $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ (see Lemma 6.29). This is the case in the motivating setting where $\Lambda$ is an abstract finitely generated group that is SBE to $\Gamma$, see Claim 3.26 in Chapter 3. In the case of $SL_{2}(\mathbb{R})$ Omri suggested using the action of $\Lambda$ on the Bruhat-Tits tree of $SL_{2}(\mathbb{Q}_{p})$ (for all primes $p$) and the classification of its elements into elliptic and hyperbolic elements (separately for each $p$) in order to deduce that $\Lambda$ actually lies in $SL_{2}(\mathbb{Z})$. We did not pursue that path nor its possible generalization to the $SL_{n}$ case and general Bruhat-Tits buildings. #### 6.2.2 A $\Gamma$-Orbit Point Lying Deep Inside a $\Lambda$-Free Horoball I established the existence of $\Lambda$-free horoballs. It may seem odd that the first step in proving $\Lambda\cdot x_{0}$ is ‘almost everywhere’ is proving the existence of $\Lambda$-free regions. But this fits perfectly well with the algebraic statement that non-uniform lattices must admit unipotent elements (see Proposition $5.3.1$ in [38]).
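To make the role of sublinearity concrete, here is a small illustrative computation (with a hypothetical choice of $u$, not one appearing elsewhere in the paper). The arguments below repeatedly reduce to an inequality of the form $t-a\leq u(t+b)$ with $a,b\geq 0$ fixed and $u$ sublinear, and conclude that $t$ is bounded; see, e.g., the proofs of Lemma 6.17 and Lemma 6.25. For instance, if $u(r)=\sqrt{r}$ and $a=b=1$, then $t-1\leq\sqrt{t+1}$ forces $(t-1)^{2}\leq t+1$ for $t\geq 1$, i.e. $t(t-3)\leq 0$, so $t\leq 3$; in general, $\frac{u(t+b)}{t}\rightarrow 0$ guarantees that $t-a\leq u(t+b)$ fails for all sufficiently large $t$.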
The goal of this section is to obtain some control on the location of the $\Lambda$-free horoballs, in order to conclude that some $\gamma x_{0}$ lies deep inside $\mathcal{HB}^{\Lambda}$. This results in yet more $\Lambda$-free regions, found on the bounding horosphere. I need one property of sublinear functions. I thank Panos Papazoglou for noticing a mistake in the original formulation. ###### Lemma 6.16. Let $u$ be a sublinear function, $f,g:\mathbb{R}_{\geq 0}\longrightarrow\mathbb{R}_{>0}$ two positive monotone functions with $\lim_{x\rightarrow\infty}f(x)+g(x)=\infty$. If for all large enough $x$ it holds that $f(x)\leq u\big{(}f(x)+g(x)\big{)}$, then for every $1<s$ and all large enough $x$ it holds that $f(x)\leq u\big{(}s\cdot g(x)\big{)}$. In particular $f(x)\leq u^{\prime}\big{(}g(x)\big{)}$ for some sublinear function $u^{\prime}$. ###### Proof. Assume as one may that $u$ is non-decreasing. By definition of sublinearity $\lim_{x\rightarrow\infty}\frac{u\big{(}f(x)+g(x)\big{)}}{f(x)+g(x)}=0$, so by hypothesis $\lim_{x\rightarrow\infty}\frac{f(x)}{f(x)+g(x)}=0$. This means that for every $\varepsilon>0$ one has $f(x)\leq\varepsilon\cdot g(x)$ for all large enough $x$, resulting in $f(x)\leq u\big{(}f(x)+g(x)\big{)}\leq u\big{(}(1+\varepsilon)\cdot g(x)\big{)}$ Notice that for a fixed $s>0$, the function $u^{\prime}(x)=u(sx)$ is sublinear, as $\lim_{x\rightarrow\infty}\frac{u(sx)}{x}=\lim_{x\rightarrow\infty}s\cdot\frac{u(sx)}{sx}=0$ ∎ ###### Lemma 6.17. Let $C>0$. There is $L=L(C)$ such that if $\mathcal{HB}^{\Lambda}$ is any $\Lambda$-free horoball tangent to a point $x\in B(x_{0},C)$ then $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})\leq L$. Moreover, there is a sublinear function $u^{\prime}$ such that: $L(C)\leq\begin{cases}u^{\prime}(C)&\text{if }\mathcal{HB}^{\Gamma}\subset\mathcal{HB}^{\Lambda}\\\ C&\text{if }\mathcal{HB}^{\Lambda}\subset\mathcal{HB}^{\Gamma}\end{cases}$ ###### Proof. If $\mathcal{HB}^{\Lambda}\subset\mathcal{HB}^{\Gamma}$, then clearly $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})\leq C$, simply because $\mathcal{HB}^{\Gamma}$ is $\Gamma$-free and in particular cannot contain $x_{0}$. Therefore $\mathcal{H}^{\Gamma}$ must separate $\mathcal{H}^{\Lambda}$ from $x_{0}$ and in particular $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})\leq C$. Assume that $\mathcal{HB}^{\Gamma}\subset\mathcal{HB}^{\Lambda}$, and denote $l=d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})$. The horoball $\mathcal{HB}^{\Gamma}$ is a horoball of $\Gamma$, hence $\Gamma\cdot x_{0}$ is $D_{\Gamma}$-cocompact along $\mathcal{H}^{\Gamma}$ and there is an element $\gamma\in\Gamma$ with $|\gamma|\leq C+l+D_{\Gamma}$ and $\gamma x_{0}\in\mathcal{H}^{\Gamma}$. Since $\mathcal{HB}^{\Lambda}$ is $\Lambda$-free one has $l\leq d(\gamma x_{0},\lambda_{\gamma}x_{0})\leq u(|\gamma|)\leq u(C+l+D_{\Gamma})$ $C,D_{\Gamma}$ are fixed, so this inequality can only occur for boundedly small $l$, say $l<L^{\prime}(C)$ ($D_{\Gamma}$ is a universal constant and may be ignored). Consult figure 2 for a geometric visualization of this situation. It remains to show that $L^{\prime}(C)$ is indeed sublinear in $C$. First define $L(C)$ to be the minimal $L$ that bounds the distance $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})$ for all possible $\mathcal{HB}^{\Lambda}$ tangent to a point $x\in B(x_{0},C)$. This is indeed a minimum, since by Corollary 2.23 there are only finitely many horoballs of $\Gamma$ intersecting $B\big{(}x_{0},C+L^{\prime}(C)\big{)}$. 
For every $C$ there is thus a horoball $\mathcal{HB}^{\Gamma}_{C}$ and an element $\gamma\in\Gamma$ such that $\gamma x_{0}\in\mathcal{H}^{\Gamma}_{C}$, $d(\mathcal{H}^{\Lambda}_{C},\mathcal{H}^{\Gamma}_{C})=L(C)$ and $|\gamma|\leq C+L(C)+D_{\Gamma}$. The fact that $\mathcal{HB}^{\Lambda}_{C}$ is $\Lambda$-free implies $L(C)=d(\mathcal{H}^{\Lambda}_{C},\mathcal{H}^{\Gamma}_{C})\leq u(|\gamma|)\leq u\big(C+L(C)+D_{\Gamma}\big).$ Lemma 6.16 implies that $L(C)\leq u^{\prime}(C)$ for some sublinear function $u^{\prime}$. ∎ Figure 2: Lemma 6.17. A $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ intersects a ball of radius $C$ about $x_{0}$. The associated $\Gamma$-free horoball $\mathcal{HB}^{\Gamma}$ is boundedly close, essentially due to the uniform cocompactness of $\Gamma\cdot x_{0}$ along the $\Gamma$ horospheres. The following is an immediate corollary, apparent already in the above proof. ###### Corollary 6.18. For every $C>0$ there is a bound $K=K(C)$ and a fixed set $\xi_{1},\xi_{2},\dots,\xi_{K}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)\subset X(\infty)$ so that every $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ which is tangent to some point $x\in B(x_{0},C)$ is based at $\xi_{i}$ for some $i\in\{1,\dots,K\}$. In particular, for any specific point $x\in\mathcal{N}_{C}(\Lambda\cdot x_{0})$ there are at most $K$ $\Lambda$-free horoballs tangent to $x$. ###### Proof. Let $\mathcal{HB}^{\Lambda}$ be a $\Lambda$-free horoball tangent to a point $x\in B(x_{0},C)$. Lemma 6.17 bounds $d(\mathcal{HB}^{\Lambda},\mathcal{HB}^{\Gamma})$ by $L(C)$, hence $\mathcal{HB}^{\Gamma}$ is tangent to a point $x^{\prime}\in B\big(x_{0},C+L(C)\big)$. By Corollary 2.23 there are only finitely many possibilities for such $\mathcal{HB}^{\Gamma}$. In particular there are finitely many base points for these horoballs, say $\xi_{1},\xi_{2},\dots,\xi_{K(C)}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. Finally, recall that a horoball is determined by a base point and a point $x\in X$ tangent to it, so the last statement of the corollary holds for any $x\in B(x_{0},C)$. But the property in question is $\Lambda$-invariant, so the same holds for any point $x\in\Lambda\cdot B(x_{0},C)=\mathcal{N}_{C}(\Lambda\cdot x_{0})$. ∎ The bound on $d(\mathcal{HB}^{\Lambda},\mathcal{HB}^{\Gamma})$ given by Lemma 6.17 further strengthens the relation between $\mathcal{HB}^{\Lambda}$ and $\mathcal{HB}^{\Gamma}$. The ultimate goal is to show that the $\mathcal{HB}^{\Lambda}$-s play the role of the $\Gamma$-horoballs in the geometric structure of $\mathbb{Q}$-rank $1$ lattices, namely to show that $\Lambda\cdot x_{0}$ is cocompact on the $\mathcal{H}^{\Lambda}$-s. This requires actually finding $\Lambda$-orbit points somewhere in $X$, and not just $\Lambda$-free regions as was done up to now. As one might suspect, these points arise as $\lambda_{\gamma}x_{0}$ corresponding to points $\gamma x_{0}\in\mathcal{H}^{\Gamma}$, which exist in abundance since $\Gamma\cdot x_{0}\cap\mathcal{H}^{\Gamma}$ is a cocompact lattice in $\mathcal{H}^{\Gamma}$. The hope is that a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ tangent to $\lambda_{\gamma}x_{0}$ would correspond to a horoball of $\Gamma$ tangent to $\gamma x_{0}$.
This would have forced all the $\lambda_{\gamma}x_{0}$ to actually lie on the same bounding horosphere, and $\{\lambda_{\gamma}x_{0}\mid\gamma x_{0}\in\mathcal{H}^{\Gamma}\}$ would then be a cocompact lattice in $\mathcal{H}^{\Lambda}$. This hope turns out to be more or less true, but it requires some work. The goal of the rest of this section is to establish a relation between a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ tangent to $\lambda_{\gamma}x_{0}$ and $\gamma x_{0}$. I start with some notation. ###### Definition 6.19. In light of Corollary 6.18, there is a finite number $N$ of $\Lambda$-free horoballs tangent to $x_{0}$. Denote: 1. $\{\mathcal{HB}^{\Lambda}_{i}\}_{i=1}^{N}$ are the $\Lambda$-free horoballs tangent to $x_{0}$. 2. $\xi^{i}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ is the base point of $\mathcal{HB}^{\Lambda}_{i}$. 3. $v^{i}\in S_{x_{0}}X$ is the unit tangent vector in the direction $\xi^{i}$. 4. $\eta^{i}:=[x_{0},\xi^{i})$ is the unit speed geodesic ray emanating from $x_{0}$ with limit $\xi^{i}$. In particular $v^{i}=\frac{d}{dt}\eta^{i}(0)$. 5. $\mathcal{HB}^{\Gamma}_{i}$ is the horoball of $\Gamma$ that corresponds to $\mathcal{HB}^{\Lambda}_{i}$, based at $\xi^{i}$. 6. $\mathcal{HB}^{\Lambda}_{\lambda,i},\xi^{i}_{\lambda},\eta^{i}_{\lambda}$ are the respective $\lambda$-translates of the objects above. For example, $\mathcal{HB}^{\Lambda}_{\lambda,i}:=\lambda\cdot\mathcal{HB}^{\Lambda}_{i}$. 7. $\mathcal{H}$ decorated by the proper indices denotes the horosphere bounding $\mathcal{HB}$, the horoball with the respective indices, e.g. $\mathcal{H}^{\Lambda}_{i}:=\partial\mathcal{HB}^{\Lambda}_{i}$. 8. For an angle $\alpha>0$ and a tangent vector $v_{0}\in S_{x}X$, define (a) The _$\alpha$-sector of $v_{0}$ in $S_{x}X$_ is the set $\{v\in S_{x}X\mid v\in\mathcal{N}_{\alpha}(v_{0})\}$. Recall that the metric on $S_{x}X$ is the angular metric. (b) The _$\alpha$-sector of $v_{0}$ in $X$_ is the set of all points $y\in X$ for which the tangent vector at $0$ of the unit speed geodesic $[x,y]$ lies in the $\alpha$-sector of $v_{0}$ in $S_{x}X$. ###### Lemma 6.20. For every angle $\alpha\in(0,\frac{\pi}{2})$ there exists $D=D(\alpha)$ such that if $d_{\gamma}>D$ then for some $i\in\{1,\dots,N\}$, $\gamma x_{0}$ lies inside the $\alpha$-sector of $v^{i}_{\lambda_{\gamma}}$ at $\lambda_{\gamma}x_{0}$. Furthermore, whenever $\alpha$ is uniformly small enough, there is a unique such $i=i(\gamma)$, independent of $\alpha$. ###### Proof. Translation by the isometry $\lambda_{\gamma}^{-1}$ preserves angles and distances, so it is enough to prove that there is an $i$ for which $x^{\prime}_{\gamma}:=\lambda_{\gamma}^{-1}\gamma x_{0}$ lies inside the $\alpha$-sector of $v^{i}$, and that this $i$ is unique if $\alpha$ is uniformly small. Assume towards contradiction that there is $\alpha\in(0,\frac{\pi}{2})$ and a sequence $\gamma_{n}\in\Gamma$, $\lambda_{n}:=\lambda_{\gamma_{n}}\in\Lambda$ with $d_{\gamma_{n}}$ unbounded, and $x^{\prime}_{n}:=\lambda_{n}^{-1}\gamma_{n}x_{0}$ not lying in the union of the $\alpha$-sectors of the $v^{i}$. By perhaps taking smaller $\alpha$ I may assume all the $\alpha$-sectors of the $v^{i}$ in $S_{x_{0}}X$ are pairwise disjoint. This can be done because there are only finitely many $v^{i}$. Compactness of $S_{x_{0}}X$ allows me to take a converging subsequence $v^{\prime}_{n}:=\dot{[x_{0},x^{\prime}_{n}]}$, with limit direction $v^{\prime}$. Denote by $\eta^{\prime}$ the geodesic ray emanating from $x_{0}$ with initial velocity $v^{\prime}$.
The exact same argument of Lemma 6.12 proves that $\eta^{\prime}(\infty)$ is the base point of a $\Lambda$-free horoball tangent to $x_{0}$. But this means $v^{\prime}=v^{i}$ for some $i\in\\{1,\dots,N\\}$, contradicting the fact that all $v^{\prime}_{n}$ lie outside the $\alpha$-sectors of the $v^{i}$. This proves that there is a bound $D=D(\alpha)$ such that if $d_{\gamma}>D$ then $x_{\gamma}^{\prime}$ lies within the $\alpha$-sector of some $v^{i}$. The proof clearly shows that whenever $\alpha$ is small enough so that the $\alpha$-sectors of the $v^{i}$ are disjoint, $x_{\gamma}^{\prime}$ lies in the $\alpha$-sector of a unique $v^{i}$ as soon as $d_{\gamma}>D(\alpha)$. ∎ ###### Remark 6.21. In the proof of Lemma 6.12 I used compactness of $S_{x_{0}}X$ to induce a converging subsequence of directions. Lemma 6.20 actually shows that the fact there are finitely many $\Lambda$-free horoballs tangent to $x_{0}$ implies _a posteriori_ that there was not much choice in the process - all directions $[x_{0},x_{\gamma}^{\prime}]$ must fall into one of the finitely many directions $v^{i}$. Next, I want to control the actual location of certain points with respect to the horoballs of interest, and not just the angles. This turns out to be a more difficult of a task than one might suspect, since control on angles does not immediately give control on distances. Recall that large $\Lambda$-free balls near $x_{0}$ imply large concentric $\Gamma$-free balls. The precise quantities and bounds are given by Lemma 5.6 (one can use Lemma 5.7 to obtain a slightly cleaner statement). ###### Proposition 6.22. Let $S\in(0,1)$ be the constant given by Lemma 5.6, and let $s\in(0,S)$. There is a bound $D=D(s)$ such that $d_{\gamma}>D$ implies that $\gamma x_{0}$ lies $sd_{\gamma}$ deep in $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma}}$. ###### Proof. The proof is a bit delicate but very similar to that of Lemma 5.6. In essence, I use the $\Gamma$-free balls near $x_{0}$ to produce a $\Gamma$-free cylinder, which would force a certain geodesic not to cross a horosphere of $\Gamma$, i.e. force it to stay inside a $\Gamma$-free horoball. As in Lemma 6.20 it is only required to show that $x^{\prime}=\lambda_{\gamma}^{-1}\gamma x_{0}$ is $sd_{\gamma}$ deep inside $\mathcal{HB}^{\Lambda}_{i(\gamma)}$. I start with proving that $x^{\prime}\in\mathcal{HB}^{\Gamma}_{i(\gamma)}$. I learned the hard way that even this is not a triviality. Recall the notation $B_{\gamma}=B(\gamma x_{0},d_{\gamma})$. The ball $B^{\prime}_{\gamma}=\lambda_{\gamma}^{-1}B_{\gamma}$ is a $\Lambda$-free ball of radius $d_{\gamma}$ about $x^{\prime}=\lambda_{\gamma}^{-1}\gamma x_{0}$. Denote by $x^{\prime}_{t}$ the point at time $t$ along the unit speed geodesic $\eta^{\prime}:=[x_{0},x^{\prime}]$. It holds that $|x_{t}^{\prime}|=t$ and, for $t\leq d_{\gamma}$, $x_{t}^{\prime}$ is the centre of a $\Lambda$-free ball of radius $t$ tangent to $x_{0}$. The constant $s$ is fixed and by Lemma 5.6 there is $T^{\prime}=T^{\prime}(s)$ such that if $t>T^{\prime}$, the ball $sB\big{(}x^{\prime}_{t},t\big{)}$ is $\Gamma$-free. The next goal is to show that $x_{T}^{\prime}\in\mathcal{HB}^{\Gamma}$ for some adequate $T$. For any time $T>0$, let $\alpha=\alpha(\varepsilon,T)$ be the angle for which $d\big{(}\eta(T),\eta^{i(\gamma)}(T)\big{)}<\varepsilon$ for every $\eta$ in the $\alpha$-sector of $v^{i(\gamma)}$. By perhaps taking smaller $\alpha$ I may assume that $\alpha$ is uniformly small as stated in Lemma 6.20. 
Let $D(\alpha)$ be the bound given by Lemma 6.20 guaranteeing $D(\alpha)<d_{\gamma}\Rightarrow d\big(x^{\prime}_{T},\eta^{i(\gamma)}(T)\big)<\varepsilon.$ For my needs here $\varepsilon$ may as well be chosen to be $1$. I now choose a specific time $T$ for which I want $x_{T}^{\prime}$ and $\eta^{i(\gamma)}(T)$ to be close. There are only finitely many $\Lambda$-free horoballs $\{\mathcal{HB}^{\Lambda}_{i}\}_{i\in\{1,\dots,N\}}$ tangent to $x_{0}$, giving rise to a uniform bound $L=\max_{i\in\{1,\dots,N\}}\{d(\mathcal{H}^{\Lambda}_{i},\mathcal{H}^{\Gamma}_{i})\}$ on the distance $d(\mathcal{H}^{\Lambda}_{i(\gamma)},\mathcal{H}^{\Gamma}_{i(\gamma)})$. Fix $T$ to be any time in the open interval $(T^{\prime}+L+\varepsilon,d_{\gamma})$. The fact that $L+\varepsilon<T$ implies that $\eta^{i(\gamma)}(T)$ lies at least $\varepsilon$-deep inside $\mathcal{HB}^{\Gamma}_{i(\gamma)}$, and therefore $\eta^{\prime}(T)\in\mathcal{HB}^{\Gamma}_{i(\gamma)}$. Recall that any point on $\mathcal{H}^{\Gamma}$ is $D_{\Gamma}$-close to a point $\gamma x_{0}\in\mathcal{H}^{\Gamma}$. By perhaps enlarging $T$ and shrinking $\alpha$ if necessary, I may assume that $D_{\Gamma}<sT$. Thus for all $T<t\leq d_{\gamma}$, $x_{t}^{\prime}$ is the centre of a $\Gamma$-free ball of radius $st>sT>D_{\Gamma}$, hence $\{x^{\prime}_{t}\}_{T\leq t\leq d_{\gamma}}$ does not cross a horosphere of $\Gamma$. Since $x_{T}^{\prime}\in\mathcal{HB}^{\Gamma}_{i(\gamma)}$, this implies that $x_{t}^{\prime}$ stays in $\mathcal{HB}^{\Gamma}_{i(\gamma)}$ for all $T<t\leq d_{\gamma}$. In particular $x^{\prime}_{d_{\gamma}}=x^{\prime}\in\mathcal{HB}^{\Gamma}_{i(\gamma)}$. To get the result of the proposition, recall that $sB_{\gamma}^{\prime}=B(x^{\prime},sd_{\gamma})$ is $\Gamma$-free, so $x^{\prime}$ must be at distance at least $sd_{\gamma}-D_{\Gamma}$ from any horosphere of $\Gamma$, and in particular from $\mathcal{H}^{\Gamma}_{i(\gamma)}$. In terms of Busemann functions, this means that $b_{\eta^{i(\gamma)}}(x^{\prime})\leq-sd_{\gamma}+D_{\Gamma}$ whenever one can find such $T^{\prime}+L+\varepsilon<T<d_{\gamma}$. Since $\mathcal{HB}^{\Lambda}_{i(\gamma)}$ is tangent to $x_{0}$, the corresponding horoball $\mathcal{HB}^{\Gamma}_{i(\gamma)}$ lies inside it, and so $x^{\prime}$ lies $(sd_{\gamma}-D_{\Gamma})$-deep inside $\mathcal{HB}^{\Lambda}_{i(\gamma)}$. A close look at the argument yields the desired bound $D=D(s)$ such that the above holds whenever $d_{\gamma}>D(s)$. To help the reader take this closer look, I reiterate the choice of constants and their dependencies as they appear in the proof: 1. Fix $\varepsilon=1$. 2. Let $T^{\prime}=T^{\prime}(s)$ be the constant from Lemma 5.6 and $L=\max_{i\in\{1,\dots,N\}}\{d(\mathcal{HB}^{\Lambda}_{i},\mathcal{HB}^{\Gamma}_{i})\}$. 3. Fix $T>T^{\prime}+L+1$. 4. Fix $\alpha=\alpha(1,T)$. 5. Fix $D(s)=\max\{D(\alpha),T+1\}$. I remark, for the reader worried about the $D_{\Gamma}$ which appears in the final bound but not in the statement, that (a) $D_{\Gamma}$ is a fixed universal constant and may as well be ignored, and (b) the discrepancy can be formally corrected by taking a slightly larger $s<s^{\prime}$ to begin with and as a result perhaps enlarging the bound $D$ for $d_{\gamma}$. Also note that $L=L(\Lambda)$ is a universal constant.
∎ #### 6.2.3 Intersection of $\Lambda$-Free Regions and the Existence of a $\Lambda$-Cocompact Horosphere In this section I find $\Lambda$-orbit points that lie close to the bounding horosphere of a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$. In order to find such points I need to make sure $\mathcal{HB}^{\Lambda}$ is not contained inside a much larger $\Lambda$-free horoball. I introduce the following definition. ###### Definition 6.23. A $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ is called _maximal_ if it is tangent to a point $x=\lambda x_{0}\in\Lambda\cdot x_{0}$. It is called _$\varepsilon$-almost maximal_ if $d(\Lambda\cdot x_{0},\mathcal{H}^{\Lambda})<\varepsilon$. ###### Remark 6.24. It may happen that a discrete group admits free but not _maximally free_ horoballs - see the discussion in Section $4$ of [56]. In any case it is clear that any $\Lambda$-free horoball can be ‘blown up’ to an $\varepsilon$-almost maximal $\Lambda$-free horoball, for every $\varepsilon>0$. Moreover, every two $\varepsilon$-almost maximal horoballs based at the same point $\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ lie at distance at most $\varepsilon$ from one another. For my needs any fixed $\varepsilon$ would suffice, and I fix $\varepsilon=1$. ###### Lemma 6.25. There is $D_{\Lambda}>0$ such that if $\mathcal{HB}^{\Lambda}$ is a $1$-almost maximal $\Lambda$-free horoball then $\mathcal{H}^{\Lambda}\subset\mathcal{N}_{D_{\Lambda}}(\Lambda\cdot x_{0})$, i.e. $d(x,\Lambda\cdot x_{0})\leq D_{\Lambda}$ for all $x\in\mathcal{H}^{\Lambda}$. Notice that Lemma 6.25 does not state that $\Lambda\cdot x_{0}$ even intersects $\mathcal{H}^{\Lambda}$. ###### Proof. I start with a short sketch of the proof. Consider a $1$-almost maximal horoball and a point $x$ on its bounding horosphere with $d(x,\Lambda\cdot x_{0})=D$. One may translate this situation to $x_{0}$, which results in a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}$ intersecting the (closed) $D$-ball about $x_{0}$ at a point $w$ with $B(w,D)$ $\Lambda$-free. The proof differs depending on whether $\mathcal{HB}^{\Gamma}\subset\mathcal{HB}^{\Lambda}$ or the other way round, since I use the bounds from Lemma 6.17: 1. If $\mathcal{HB}^{\Gamma}\subset\mathcal{HB}^{\Lambda}$, there is a sublinear bound on $d(\mathcal{HB}^{\Lambda},\mathcal{HB}^{\Gamma})$, which readily yields a bound on $D$. 2. If $\mathcal{HB}^{\Lambda}\subset\mathcal{HB}^{\Gamma}$ there is a bound on $d(x_{0},\mathcal{HB}^{\Gamma})$ that is independent of $D$. So there are only finitely many possibilities for $\mathcal{HB}^{\Gamma}$, independent of $D$. Hence there are only finitely many possible base points for $\mathcal{HB}^{\Gamma}$. These in turn correspond to possible base points for such $\mathcal{HB}^{\Lambda}$, and this finiteness yields a bound on the distance $d(\mathcal{HB}^{\Gamma},\mathcal{HB}^{\Lambda})<L$ that is independent of $D$. The rest of the proof is quite routine. Let $\mathcal{HB}^{\Lambda}$ be a $1$-almost maximal $\Lambda$-free horoball. By definition there is $\lambda\in\Lambda$ and $z\in\mathcal{H}^{\Lambda}$ such that $d(\lambda x_{0},z)<1$. I show that if there is some $z^{\prime}\in\mathcal{H}^{\Lambda}$ for which $d(z^{\prime},\Lambda\cdot x_{0})\geq D$, then $D$ must be uniformly small; exactly how small will be set in the course of the proof. Fix $D>1$ and assume that there is $z^{\prime}\in\mathcal{H}^{\Lambda}$ with $d(z^{\prime},\Lambda\cdot x_{0})\geq D$.
Up to sliding $z^{\prime}$ along $\mathcal{H}^{\Lambda}$, the continuity of the function $x\mapsto d(x,\Lambda\cdot x_{0})$ together with Intermediate Value Theorem allows to assume that $d(z^{\prime},\Lambda\cdot x_{0})=D$. Let $\lambda^{\prime}\in\Lambda$ be the element for which $d(z^{\prime},\lambda^{\prime}x_{0})=D$. Translating by $\lambda^{\prime-1}$ yields 1. 1. A $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}_{0}:=\lambda^{\prime-1}\mathcal{HB}^{\Lambda}$. 2. 2. A point $w:=\lambda^{\prime-1}z^{\prime}\in\mathcal{H}^{\Lambda}_{0}$ for which $|w|=d(w,x_{0})=d(w,\Lambda\cdot x_{0})=D$. Assume first that $\mathcal{HB}^{\Gamma}_{0}\subset\mathcal{HB}^{\Lambda}_{0}$. By Lemma 6.17 there is a sublinear function $u^{\prime}$ such that $d(\mathcal{H}^{\Gamma}_{0},\mathcal{H}^{\Lambda}_{0})\leq u^{\prime}(D)$. This yields a point $\gamma x_{0}\in\mathcal{H}^{\Gamma}_{0}$ for which $d(w,\gamma x_{0})\leq u^{\prime}(D)+D_{\Gamma}$. Thus $|\gamma x_{0}|\leq D+u^{\prime}(D)+D_{\Gamma}$ and the reverse triangle inequality gives $D-\big{(}u^{\prime}(D)+D_{\Gamma}\big{)}\leq d(w,\lambda_{\gamma}x_{0})-d(w,\gamma x_{0})<d(\gamma x_{0},\lambda_{\gamma}x_{0})$ Together with the bound $d(\gamma x_{0},\lambda_{\gamma}x_{0})\leq u(|\gamma x_{0}|)$ and rearranging, one obtains $D\leq u\big{(}D+u^{\prime}(D)+D_{\Gamma}\big{)}+u^{\prime}(D)+D_{\Gamma}$ The right hand side is clearly a sublinear function in $D$, hence this inequality may hold only for boundedly small $D$, say $D<D_{1}$. I conclude that $\mathcal{HB}^{\Gamma}_{0}\subset\mathcal{HB}^{\Lambda}_{0}$ may occur only when $D<D_{1}$. Notice that $D_{1}$ depends only on $u$ and $u^{\prime}$, and not on $\mathcal{HB}^{\Lambda}$. Assume next that $\mathcal{HB}^{\Lambda}_{0}\subset\mathcal{HB}^{\Gamma}_{0}$, and that the containment is strict. Since $x_{0}\in\Gamma\cdot x_{0}$, the geodesic $\tau:=[x_{0},w]$ is of length $D$ and intersects $\mathcal{H}^{\Gamma}_{0}$. Denote by $t_{0}\in[0,D)$ the time in which $\tau$ intersects $\mathcal{H}^{\Gamma}_{0}$, and let $w^{\prime}:=\tau(t_{0})\in\mathcal{H}^{\Gamma}_{0}$ be the intersection point. In particular $|w^{\prime}|=t_{0}$. It is clear that $B(w^{\prime},t_{0})$ is $\Lambda$-free, as a subset of the ball $B(w,D)$. Again there is $\gamma x_{0}\in B(w^{\prime},D_{\Gamma})\cap\mathcal{H}^{\Gamma}_{0}$ and so $|\gamma x_{0}|\leq t_{0}+D_{\Gamma}$. By reverse triangle inequality $t_{0}-D_{\Gamma}\leq d(w^{\prime},\lambda_{\gamma}x_{0})-d(w^{\prime},\gamma x_{0})\leq d(\gamma x_{0},\lambda_{\gamma}x_{0})$ and the sublinear constraint gives $t_{0}-D_{\Gamma}\leq u(t_{0}+D_{\Gamma})$. This can only happen for boundedly small $t_{0}$, say $t_{0}<T$. I conclude that if $\mathcal{HB}^{\Lambda}_{0}\subset\mathcal{HB}^{\Gamma}_{0}$, then $\mathcal{HB}^{\Gamma}_{0}$ is a horoball of $\Gamma$ tangent to some point $y\in B(x_{0},T)$. By Corollary 2.23 there are finitely many horoballs of $\Gamma$ tangent to points in $B(x_{0},T)$. In particular there is a finite set $\\{\xi_{1}^{\prime},\dots,\xi_{K}^{\prime}\\}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ of possible base points for $\mathcal{HB}^{\Gamma}_{0}$. This set depends only on $T$, and since the choice of $T$ was completely independent of $D$, the set of possible base points is independent of $D$ as well. Let $\widetilde{\mathcal{HB}^{\Gamma}_{i}}$ be the horoball of $\Gamma$ based at $\xi_{i}^{\prime}$. I can now bound the distance $d(\mathcal{HB}^{\Gamma}_{0},\mathcal{HB}^{\Lambda}_{0})$. 
Let $1\leq i\leq K$ be an index for which there is a $\Lambda$-free horoball based at $\xi_{i}^{\prime}$ that is contained in $\widetilde{\mathcal{HB}^{\Gamma}_{i}}$. There is thus some $1$-almost maximal $\Lambda$-free horoball based at $\xi^{\prime}_{i}$. Fix an arbitrary such $1$-almost maximal $\Lambda$-free horoball $\widetilde{\mathcal{HB}^{\Lambda}_{i}}$ for each such $i$, and let $L_{i}:=d(\widetilde{\mathcal{HB}^{\Lambda}_{i}},\widetilde{\mathcal{HB}^{\Gamma}_{i}})$. Finally, define $L:=\max_{i}\\{L_{i}\\}+1$, the maximum taken over such $i$. As stated in Remark 6.24, $d(\mathcal{HB}^{\Lambda}_{0},\widetilde{\mathcal{HB}^{\Lambda}_{i}})\leq 1$ for some $i$, therefore $d(\mathcal{HB}^{\Gamma}_{0},\mathcal{HB}^{\Lambda}_{0})\leq L$. Recall that $|w|=D$ and $B(w,D)$ is $\Lambda$-free. It holds that $d(w,\mathcal{H}^{\Gamma}_{0})\leq L$, and so there is $\gamma x_{0}\in\mathcal{H}^{\Gamma}_{0}$ for which $d(w,\gamma x_{0})\leq L+D_{\Gamma}$. In particular $|\gamma x_{0}|\leq D+L+D_{\Gamma}$ (in fact it is clear that $|\gamma x_{0}|\leq T+D_{\Gamma}$, but this won’t be necessary). The reverse triangle inequality gives $D-(L+D_{\Gamma})\leq d(w,\lambda_{\gamma}x_{0})-d(w,\gamma x_{0})\leq d(\gamma x_{0},\lambda_{\gamma}x_{0})$ and from the sublinear constraint I conclude $D-(L+D_{\Gamma})\leq u(D+L+D_{\Gamma})$. Since $L,D_{\Gamma}$ are fixed constants independent of $D$, this can only hold for boundedly small $D$, say $D<D_{2}$. In particular, one gets a uniform bound $D_{\Lambda}:=\max\\{D_{1},D_{2}\\}$ such that $x\in\mathcal{H}^{\Lambda}\Rightarrow d(x,\Lambda\cdot x_{0})<D_{\Lambda}$. ∎ Figure 3: Lemma 6.25, case $\mathcal{HB}^{\Lambda}\subset\mathcal{HB}^{\Gamma}$. The red horosphere of $\Gamma$ is trapped between $x_{0}$ and $\mathcal{H}^{\Lambda}$, and is at distance $t_{0}$ from $x_{0}$. A $\Gamma$-orbit point on the red horosphere close to $x_{0}$ allows one to use sublinearity to get a bound on $t_{0}$. ###### Corollary 6.26. Every $\Gamma$-conical limit point is a $\Lambda$-conical limit point. ###### Proof. Let $\xi\in X(\infty)$ be a $\Gamma$-conical limit point. Let $\eta:\mathbb{R}_{\geq 0}\rightarrow X$ be a geodesic with $\eta(\infty)=\xi$. By definition there is a bound $D>0$ and sequences $t_{n}\rightarrow\infty,\gamma_{n}\in\Gamma$ such that $d\big{(}\gamma_{n}x_{0},\eta(t_{n})\big{)}<D$. Consider the corresponding $\lambda_{n}:=\lambda_{\gamma_{n}}$ and $\lambda_{n}x_{0}$, and write $d_{n}:=d_{\gamma_{n}}$. If $d_{n}$ is uniformly bounded, then $\xi$ is $\Lambda$-conical by definition. Otherwise, after passing to a subsequence, assume $d_{n}$ is monotonically increasing to $\infty$. For some fixed $s\in(0,1)$ it holds that for all but finitely many $n\in\mathbb{N}$, $\gamma_{n}x_{0}$ is $sd_{n}$ deep inside $\mathcal{HB}^{\Lambda}_{n}:=\mathcal{HB}^{\Lambda}_{\lambda_{n},i(\gamma_{n})}$. I assume $d_{n}$ is large enough so that $sd_{n}>D$, and in particular $\eta(t_{n})\in\mathcal{HB}^{\Lambda}_{n}$. Let $\xi_{n}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ be the respective base points of $\mathcal{HB}^{\Lambda}_{n}$. The point $\xi$ is $\Gamma$-conical, and by Theorem 2.29 $\frac{\pi}{2}\leq d(\xi,\mathcal{W}_{\mathbb{Q}}(\Gamma))\leq d(\xi,\xi_{n})$. The proof differs depending on whether the above inequality is strict or not for any $n\in\mathbb{N}$. Assume first that for some $m\in\mathbb{N}$, $d(\xi,\xi_{m})=\frac{\pi}{2}$.
By item $2$ of Lemma 2.32, $d\big{(}\mathcal{H}^{\Lambda}_{m},\eta(t)\big{)}$ is uniformly bounded, i.e., there is $C>0$ such that for every $t>0$ there is $x_{t}\in\mathcal{H}^{\Lambda}_{m}$ for which $d\big{(}x_{t},\eta(t)\big{)}<C$. By Lemma 6.25, $d(x_{t},\Lambda\cdot x_{0})<D_{\Lambda}$, hence $d\big{(}\eta(t_{n}),x_{t_{n}}\big{)}\leq C+D_{\Lambda}$. This means that $\xi$ is $\Lambda$-conical. Otherwise, for all $n\in\mathbb{N}$ it holds that $\frac{\pi}{2}<d(\xi,\xi_{n})$. The fact that $\eta(t_{n})\in\mathcal{HB}^{\Lambda}_{n}$ together with Lemma 2.32 implies that at some later time the geodesic ray $\eta$ leaves $\mathcal{HB}^{\Lambda}_{n}$. Thus there is $s_{n}>t_{n}$ for which $\eta(s_{n})\in\mathcal{H}^{\Lambda}_{n}$. Since $\mathcal{HB}^{\Lambda}_{n}$ are maximal $\Lambda$-free horoballs, Lemma 6.25 gives rise to points $\lambda_{n}x_{0}$ such that $d\big{(}\lambda_{n}x_{0},\eta(s_{n})\big{)}\leq D_{\Lambda}$. This renders $\xi$ as a $\Lambda$-conical limit point, as wanted. ∎ ###### Proof of Proposition 6.11. The strategy is as follows. For $\mathcal{HB}^{\Lambda}=\mathcal{HB}^{\Lambda}_{{\lambda_{\gamma}},i(\gamma)}$, one uses Proposition 6.22 to get that $\mathcal{HB}^{\Gamma}\subset\mathcal{HB}^{\Lambda}$ and that the distance $d(\mathcal{H}^{\Lambda},\mathcal{H}^{\Gamma})$ is large with $d_{\gamma}$. The horosphere $\mathcal{H}^{\Gamma}$ admits a $\Gamma$-cocompact metric lattice, and so the projections of these metric lattice points onto $\mathcal{H}^{\Lambda}$ form a cocompact metric lattice in $\mathcal{H}^{\Lambda}$. It remains to show that for each $\gamma^{\prime}x_{0}\in\Gamma\cdot x_{0}\cap\mathcal{H}^{\Gamma}$, the corresponding $\lambda^{\prime}=\lambda_{\gamma^{\prime}}$ indeed lies on the same $\mathcal{H}^{\Lambda}$ and boundedly close to the projection $P_{\mathcal{H}^{\Lambda}}(\gamma^{\prime}x_{0})$. This is done by putting together all the geometric facts obtained up to this point, specifically Lemma 6.25. One delicate fact that will be of use is that two maximal $\Lambda$-free horoballs that are based at the same point must be equal, because none of them can contain a $\Lambda$-orbit point while on the other hand both bounding horospheres intersect $\Lambda\cdot x_{0}$. Fix $s>0$ for which Proposition 6.22 yields a corresponding bound $D(s)$, and let $\gamma\in\Gamma$ such that $sd_{\gamma}>2\cdot\big{(}D_{\Lambda}+D(s)\big{)}$. Consider the (maximal) $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ based at $\xi^{i(\gamma)}_{\lambda}$. I show that $\Lambda\cdot x_{0}\cap\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ is a cocompact metric lattice in $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. I keep the subscript notation because the proof is a game between $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ and another $\Lambda$-free horoball. Let $\mathcal{HB}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}$ be the $\Gamma$-horoball corresponding to $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. I can conclude that $\mathcal{HB}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}\subset\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$, because the choice of $d_{\gamma}>D(s)$ guarantees $\gamma x_{0}$ is $sd_{\gamma}$ deep inside $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. In particular $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ is not $\Gamma$-free. Moreover, it holds that $L:=d(\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)},\mathcal{H}^{\Gamma}_{\lambda_{\gamma},i(\gamma)})\geq sd_{\gamma}$. 
Let $\gamma^{\prime}\in\Gamma$ be any element with $\gamma^{\prime}x_{0}$ in the cocompact metric lattice $\Gamma\cdot x_{0}\cap\mathcal{H}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}$, and consider two associated points: (a) $\lambda^{\prime}x_{0}=\lambda_{\gamma^{\prime}}x_{0}$ and (b) the projection of $\gamma^{\prime}x_{0}$ on $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$, denoted $p_{\gamma}^{\prime}:=P_{\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}}(\gamma^{\prime}x_{0})\in\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. The horoball $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ is a maximal $\Lambda$-free horoball so it is also $1$-almost maximal, hence $d(p_{\gamma}^{\prime},\Lambda\cdot x_{0})\leq D_{\Lambda}$ and the following holds: $sd_{\gamma}\leq L\leq d_{\gamma^{\prime}}\leq d(\gamma^{\prime}x_{0},p_{\gamma}^{\prime})+d(p_{\gamma}^{\prime},\Lambda\cdot x_{0})\leq L+D_{\Lambda}$ (6) Consider $\xi^{i(\gamma^{\prime})}_{\lambda^{\prime}}$, and assume towards contradiction that $\xi^{i(\gamma^{\prime})}_{\lambda_{\gamma^{\prime}}}\neq\xi^{i(\gamma)}_{\lambda_{\gamma}}$. Both points lie in $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ and therefore must be at Tits distance $\pi$ from each other. Therefore the fact that $\gamma^{\prime}x_{0}$ lies in $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ implies that the geodesic $[\gamma^{\prime}x_{0},\xi^{i(\gamma^{\prime})}]$ leaves $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ at some point $z\in\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. The fact that $D(s)\leq sd_{\gamma}\leq d_{\gamma^{\prime}}$ implies that $\gamma^{\prime}x_{0}$ lies $s^{2}d_{\gamma}$ deep inside $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma^{\prime}},i(\gamma^{\prime})}$. Therefore the point $z$ also lies at least $s^{2}d_{\gamma}$ deep inside $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma^{\prime}},i(\gamma^{\prime})}$, and therefore $z$ is the centre of a $\Lambda$-free ball of radius at least $s^{2}d_{\gamma}$. By choice of $d_{\gamma}$ the point $z$ therefore admits a $2D_{\Lambda}$ neighbourhood that is $\Lambda$-free. But $z$ lies on $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$, a maximal horosphere of $\Lambda$, contradicting Lemma 6.25. I conclude that $\xi^{i(\gamma^{\prime})}_{\lambda_{\gamma^{\prime}}}=\xi^{i(\gamma)}_{\lambda_{\gamma}}$, so $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ and $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma^{\prime}},i(\gamma^{\prime})}$ are two $\Lambda$-free horoballs that are tangent to a point of $\Lambda\cdot x_{0}$ and based at the same point at $\infty$. This implies $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}=\mathcal{HB}^{\Lambda}_{\lambda_{\gamma^{\prime}},i(\gamma^{\prime})}$, and in particular $\lambda_{\gamma^{\prime}}x_{0}\in\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. Finally, it is clearly seen from Inequality (6) that $d(\lambda^{\prime}x_{0},p_{\gamma}^{\prime})\leq d(\lambda^{\prime}x_{0},\gamma^{\prime}x_{0})+d(\gamma^{\prime}x_{0},p_{\gamma}^{\prime})\leq d_{\gamma^{\prime}}+L\leq(L+D_{\Lambda})+L=2L+D_{\Lambda}$ The element $\gamma^{\prime}x_{0}\in\Gamma\cdot x_{0}\cap\mathcal{H}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}$ was an arbitrary element, and the above argument shows that the corresponding $\Lambda$-orbit points satisfy: 1. 1. The points $\lambda^{\prime}x_{0}$ all lie on $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. 2. 2. Each $p_{\gamma}^{\prime}$ is $(2L+D_{\Lambda})$-close to the point $\lambda^{\prime}x_{0}$. This shows that the cocompact metric lattice $\\{p_{\gamma}^{\prime}\mid\gamma^{\prime}x_{0}\in\mathcal{H}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}\\}$ lies in a bounded neighbourhood of the set of points $\Lambda\cdot x_{0}\cap\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$, proving that $\Lambda\cdot x_{0}\cap\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ is a cocompact metric lattice in $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$. Lemma 2.25 elevates this to $\big{(}\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)})\big{)}\cdot x_{0}\cap\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ being a cocompact metric lattice in $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$, completing the proof. ∎
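For the reader checking Inequality (6), let me unwind the two middle estimates; nothing here goes beyond what was already established in the proof. The horoballs $\mathcal{HB}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}$ and $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ are based at the same point, so every point of $\mathcal{H}^{\Gamma}_{\lambda_{\gamma},i(\gamma)}$ lies at distance exactly $L$ from $\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$; in particular $d(\gamma^{\prime}x_{0},p_{\gamma}^{\prime})=L$. Since $\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ is $\Lambda$-free, every $\Lambda$-orbit point lies outside it, hence (recalling that $d_{\gamma^{\prime}}$ denotes $d(\gamma^{\prime}x_{0},\Lambda\cdot x_{0})$) one has $d_{\gamma^{\prime}}\geq d(\gamma^{\prime}x_{0},\mathcal{H}^{\Lambda}_{\lambda_{\gamma},i(\gamma)})=L$, which is the middle inequality; the right-hand inequality combines $d(\gamma^{\prime}x_{0},p_{\gamma}^{\prime})=L$ with the bound $d(p_{\gamma}^{\prime},\Lambda\cdot x_{0})\leq D_{\Lambda}$ coming from Lemma 6.25.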
Figure 4: Proposition 6.11. Assuming towards contradiction that $\xi^{i(\gamma)}_{\lambda_{\gamma}}\neq\xi^{i(\gamma^{\prime})}_{\lambda_{\gamma^{\prime}}}$ results in a point $z\in\mathcal{HB}^{\Lambda}_{\lambda_{\gamma},i(\gamma)}$ (blue coloured and bold faced in the bottom part of the figure) admitting a large $\Lambda$-free neighbourhood, contradicting almost cocompactness. ### 6.3 The Bounded Case Proposition 6.11 is enough to prove Theorem 6.1 in the case that $\Gamma$ does not lie in a bounded neighbourhood of $\Lambda$. The case where $\Gamma$ and $\Lambda$ lie at bounded Hausdorff distance, i.e. where $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ and $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ for $D>0$, arose naturally in the context of quasi-isometric completeness of non-uniform lattices. The precise statements are given in Theorems 6.8 and 6.9 above. I present their proofs in Section 6.3.2 below. The notable difference between Theorem 6.8 and Theorem 6.9 is that for higher rank groups, the inclusion $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ is required only in order to prove commensurability. In view of Corollary 6.7, this allows me to omit that assumption from Theorem 1.6. Notice also that for groups with property (T) the result easily follows from the (much more recent) result by Leuzinger in Theorem 4.7. In the context of commensurability in the sublinear setting, I can only prove a limited result, namely that $\Lambda$ is commensurable to $\Gamma$ if $\Gamma$ is an irreducible $\mathbb{Q}$-rank $1$ lattice and both $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ and $\Lambda\subset\mathcal{N}_{u}(\Gamma)$ for some constant $D>0$ and a sublinear function $u$. This is done via a reduction to the bounded case. ###### Proposition 6.27. Let $G$ be a real finite-centre semisimple Lie group without compact factors, $\Gamma\leq G$ an irreducible lattice of $\mathbb{Q}$-rank $1$, $\Lambda\leq G$ a discrete subgroup, and $u$ a sublinear function.
If $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ for some $D>0$ and $\Lambda\subset\mathcal{N}_{u}(\Gamma)$, then actually $\Lambda\subset\mathcal{N}_{D^{\prime}}(\Gamma)$ for some $D^{\prime}>0$. Moreover, if $G$ is of $\mathbb{R}$-rank$\ 1$, the conclusion holds under the relaxed assumption that $u(r)\preceq_{\infty}\varepsilon r$ for some $\varepsilon<1$. ###### Remark 6.28. While the setting of Proposition 6.2 is indeed rather limited, the situation that both $\Gamma\subset\mathcal{N}_{u}(\Lambda)$ and $\Lambda\subset\mathcal{N}_{u}(\Gamma)$ arises naturally from the motivating example of SBE-rigidity in Theorem 3.7. Notice however that Theorem 3.7 is not known for groups $G$ that admit $\mathbb{R}$-rank$\ 1$ factors, which is the only setting for which I can prove Proposition 6.2. #### 6.3.1 A Reduction I start with the proof of Proposition 6.27. The first step is to establish the fact that $\Lambda$ must preserve $\mathcal{W}_{\mathbb{Q}}(\Gamma)$. ###### Lemma 6.29. Let $G$ be a real finite-centre semisimple Lie group without compact factors, $\Gamma\leq G$ an irreducible non-uniform lattice of $\mathbb{Q}$-rank $1$, $\Lambda\leq G$ a discrete subgroup. Assume that $\Gamma\subset\mathcal{N}_{u}(\Lambda)$ and that $\Lambda\subset\mathcal{N}_{u^{\prime}}(\Gamma)$ for sublinear functions $u,u^{\prime}$. Then $\Lambda\cdot\mathcal{W}_{\mathbb{Q}}(\Gamma)\subset\mathcal{W}_{\mathbb{Q}}(\Gamma)$. Moreover, if $G$ is of $\mathbb{R}$-rank$\ 1$, the conclusion holds under the relaxed assumption that $u^{\prime}(r)\preceq_{\infty}\varepsilon r$ for some $\varepsilon<1$. ###### Proof. The proof is similar to the argument of Lemma 6.13, and uses the linear penetration rate of a geodesic into a horoball. Let $\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$, and let $\mathcal{H}^{\Gamma}$ be a horosphere bounding a $\Gamma$-free horoball $\mathcal{HB}^{\Gamma}$ with $\mathcal{H}^{\Gamma}\cap\Gamma\cdot x_{0}$ a metric lattice in $\mathcal{H}^{\Gamma}$. Assume first that $u^{\prime}$ is sublinear. Since $\mathcal{HB}^{\Gamma}$ is $\Gamma$-free and $\Lambda\subset\mathcal{N}_{u^{\prime}}(\Gamma)$, I can conclude that $\Lambda\cdot x_{0}\cap\mathcal{HB}^{\Gamma}\subset\mathcal{N}_{u^{\prime}}(\mathcal{H}^{\Gamma})$. Recall (Lemma 2.32) that every geodesic ray $\eta$ with limit point $\xi^{\prime}\in\mathcal{N}_{\frac{\pi}{2}}(\xi)$ penetrates $\mathcal{HB}^{\Gamma}$ at linear rate. Therefore for every such geodesic ray $\eta$ and every sublinear function $v$ there is $R=R(\eta,v)>0$ for which $\mathcal{N}_{v}(\eta_{\restriction_{r>R}})$ is $\Lambda$-free. On the other hand, let $\lambda\in\Lambda$, and assume towards contradiction that $\lambda\xi\notin\mathcal{W}_{\mathbb{Q}}(\Gamma)$. Then by Proposition 2.31 there is a $\Gamma$-conical limit point $\xi^{\prime}\in\mathcal{N}_{\frac{\pi}{2}}(\lambda\xi)$. The hypothesis that $\Gamma\subset\mathcal{N}_{u}(\Lambda)$ then implies that for every $R>0$, $\mathcal{N}_{u}(\eta_{\restriction{r>R}})\cap\Lambda\cdot x_{0}\neq\emptyset$. Translating by $\lambda^{-1}$ yields a contradiction to the previous paragraph. I conclude that $\lambda\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. I now modify the argument to include $u^{\prime}(r)\preceq_{\infty}\varepsilon r$ when $G$ is of $\mathbb{R}$-rank$\ 1$. In this case, the only point $\xi^{\prime}\in\mathcal{N}_{\frac{\pi}{2}}(\lambda\xi)$ is $\lambda\xi$ itself. 
Therefore by the same argument as above, the assumption that $\lambda\xi\notin\mathcal{W}_{\mathbb{Q}}(\Gamma)$ implies that the $u$-sublinear neighbourhood of every geodesic ray with limit point $\xi$ intersects $\Lambda\cdot x_{0}$. I.e., for every $\eta$ with limit point $\xi$ and every $R>0$ it holds that $\mathcal{N}_{u}(\eta_{\restriction_{r>R}})\cap\Lambda\cdot x_{0}\neq\emptyset$. On the other hand, every such geodesic penetrates $\mathcal{HB}^{\Gamma}$ at $1$-linear rate. This amounts to the following fact: if $v^{\prime}(r)=\varepsilon r$ for some $\varepsilon\in(0,1)$, then for some $R>0$, the set $\mathcal{N}_{u}(\eta_{\restriction_{r>R}})\cap\mathcal{N}_{v^{\prime}}(\mathcal{H}^{\Gamma})=\emptyset$. This is a contradiction to $\Lambda\subset\mathcal{N}_{u^{\prime}}(\Gamma)$. ∎ ###### Proof of Proposition 6.27. Assume towards contradiction that there is a sequence $\lambda_{n}$ such that $d(\lambda_{n}x_{0},\Gamma\cdot x_{0})>n$. Recall that $\Gamma\cdot x_{0}$ is a cocompact metric lattice in the compact core of $\Gamma$. This implies that there is a number $D^{\prime}>0$ such that any $\lambda\in\Lambda$ for which $\lambda x_{0}\notin\mathcal{N}_{D^{\prime}}(\Gamma\cdot x_{0})$ must lie at least $\frac{1}{2}D^{\prime}$-deep inside a horoball of $\Gamma$. I can assume that for all $n\in\mathbb{N}$ there are corresponding horoballs of $\Gamma$, which I denote $\mathcal{HB}^{\Gamma}_{n}$, for which $\lambda_{n}\cdot x_{0}\in\mathcal{HB}^{\Gamma}_{n}$. The fact that $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ then implies that $\mathcal{N}_{D}(\Lambda\cdot x_{0})$ covers a cocompact metric lattice in $\mathcal{H}^{\Gamma}_{n}$, namely the metric lattice $\Gamma\cdot x_{0}\cap\mathcal{H}^{\Gamma}_{n}$. In the terminology of Section 6.2, $\mathcal{H}^{\Gamma}_{n}$ is almost $\Lambda$-cocompact, or $D$-almost $\Lambda$-cocompact. I first prove that every horoball of $\Gamma$ contains a $\Lambda$-free horoball (this is of course immediate if $\Lambda\subset\mathcal{N}_{C}(\Gamma)$ for some $C>0$). Assume towards contradiction that there is a horoball $\mathcal{HB}^{\Gamma}$ of $\Gamma$ that does not contain a $\Lambda$-free horoball. Denote $\mathcal{H}^{\Gamma}:=\partial\mathcal{HB}^{\Gamma}$. In the notations of the previous paragraph, I can assume without loss of generality that $\mathcal{HB}^{\Gamma}=\mathcal{HB}^{\Gamma}_{n}$ for all $n\in\mathbb{N}$. Denote by $\xi$ the base point of $\mathcal{HB}^{\Gamma}$, fix some arbitrary $x\in\mathcal{H}^{\Gamma}$ and consider the geodesic ray $\eta:=[x,\xi)$. The constraint that $\Lambda\subset\mathcal{N}_{u}(\Gamma)$ implies that for every $R>0$ there is some $L>0$ for which the ball $B\big{(}\eta(L+t),R\big{)}$ is $\Lambda$-free, for all $t\geq 0$. In particular, for all large enough $n\in\mathbb{N}$ (depending on $R$), the horosphere $\mathcal{H}(\xi,\lambda_{n}x_{0})$ that is parallel to $\mathcal{H}^{\Gamma}$ and that passes through $\lambda_{n}x_{0}$ contains a point that is the centre of $\Lambda$-free ball of radius $R$. This property is $\Lambda$-invariant, as well as the fact that $\mathcal{HB}^{\Gamma}$ is based at $\mathcal{W}_{\mathbb{Q}}(\Gamma)$. In particular, these two properties hold for the horoballs $\mathcal{HB}_{n}:=\lambda_{n}^{-1}\cdot\mathcal{HB}^{\Gamma}$, whose respective base points I denote $\xi_{n}:=\lambda_{n}^{-1}\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. 
Fix $R=D+2D_{\Gamma}$ (recall that $D_{\Gamma}$ is such that every horosphere $\mathcal{H}$ of $\Gamma$ admits $\mathcal{H}\subset\mathcal{N}_{D_{\Gamma}}(\Gamma\cdot x_{0}\cap\mathcal{H})$). Let $L=L(D+2D_{\Gamma})$ be the corresponding bound from the previous paragraph. For every $n>L$ the horoball $\mathcal{HB}_{n}$ has bounding horosphere $\mathcal{H}_{n}$ that admits a point $z_{n}\in\mathcal{H}_{n}$ for which $B(z_{n},D+2D_{\Gamma})$ is $\Lambda$-free. Moreover, the same is true for every horosphere that is parallel to $\mathcal{H}_{n}$ which lies inside $\mathcal{HB}_{n}$. Since $\Gamma\subset\mathcal{N}_{D}(\Lambda)$, this means that every horosphere that lies inside $\mathcal{HB}_{n}$ admits a point that is the centre of a $\Gamma$-free ball of radius $2D_{\Gamma}$. I conclude that none of those horospheres could be the horosphere of $\Gamma$ corresponding to the parabolic limit point $\xi_{n}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$. Since $x_{0}\in\mathcal{H}_{n}$ it must therefore be that $\mathcal{H}_{n}$ is a horosphere of $\Gamma$. But this contradicts the fact that $z_{n}\in\mathcal{H}_{n}$ and $B(z_{n},2D_{\Gamma})$ is $\Gamma$-free. This shows that no horoball of $\Gamma$ contains a sequence of $\Lambda$-orbit points that lie deeper and deeper in that horoball. Put differently, it shows that every horoball of $\Gamma$ contains a $\Lambda$-free horoball. I remark that the above argument shows something a bit stronger, which I will not use but which I find illuminating. It proves that as soon as $d(\lambda x_{0},\Gamma\cdot x_{0})$ is uniformly large enough, say more than $M$, then $\lambda x_{0}$ must lie on a $(D+2D_{\Gamma}$)-almost $\Lambda$-cocompact horosphere parallel to $\mathcal{H}^{\Gamma}$, where $\mathcal{H}^{\Gamma}$ is the bounding horosphere of any horoball of $\Gamma$ in which $\lambda x_{0}$ lies (recall that it must lie in at least one such horoball). On the other hand if $d(\lambda x_{0},\Gamma\cdot x_{0})<M$, then since every point in the $\Gamma$-orbit lies on a horosphere of $\Gamma$ one concludes that $\lambda x_{0}$ lies on a horosphere $\mathcal{H}$ based at $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ that is $(M+D)$-almost $\Lambda$-cocompact. I can now assume that every $\mathcal{HB}^{\Gamma}_{n}$ contains a $\Lambda$-free horoball. In particular it contains a $1$-almost maximal $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}_{n}$ (see Definition 6.23). By definition there is a point $\lambda_{n}^{\prime}x_{0}$ that is at distance at most $1$ from $\mathcal{H}^{\Lambda}_{n}=\partial\mathcal{HB}^{\Lambda}_{n}$. Up to enlarging $d(\lambda_{n}x_{0},\Gamma\cdot x_{0})$ or decreasing it by at most $1$, I can assume $\lambda_{n}=\lambda_{n}^{\prime}$ to begin with. Consider $\mathcal{HB}_{n}:=\lambda_{n}^{-1}\cdot\mathcal{HB}^{\Lambda}_{n}$ with $\mathcal{H}_{n}=\partial\mathcal{HB}_{n}$. This is a sequence of horoballs, each of which contains a $\Lambda$-free horoball at depth at most $1$, based at corresponding parabolic limit points $\xi_{n}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$, and tangent to points that are at distance at most $1$ from $x_{0}$, i.e., $\mathcal{H}_{n}\cap B(x_{0},1)\neq\emptyset$. Since $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ I conclude that each of the $\mathcal{HB}_{n}$ contain a horoball of depth at most $D+1$ that is $\Gamma$-free. Therefore the horoball of $\Gamma$ that is based at $\xi_{n}$ must have its bounding horosphere intersecting $B(x_{0},D+2)$. By Corollary 2.23 there are only finitely such horoballs. 
I conclude that there are finitely many points $\xi^{\prime}_{1},\dots,\xi^{\prime}_{K}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ such that for every $n\in\mathbb{N}$ there is $i(n)\in\\{1,\dots,K\\}$ with $\xi_{n}=\xi^{\prime}_{i(n)}$. By the Pigeonhole Principle there is some $\xi^{\prime}\in\\{\xi^{\prime}_{1},\dots,\xi^{\prime}_{K}\\}$ for which $\xi_{n}=\xi^{\prime}$ for infinitely many $n\in\mathbb{N}$. Passing to a subsequence I assume that this is the case for all $n\in\mathbb{N}$. To begin with, the $\mathcal{HB}^{\Gamma}_{n}$ are horoballs of $\Gamma$, and therefore as in the first case the bounding horospheres $\mathcal{H}^{\Gamma}_{n}$ are $D+2D_{\Gamma}$-almost $\Lambda$-cocompact. This is a $\Lambda$-invariant property and therefore the same holds for their $\lambda_{n}^{-1}$-translates. These are the horospheres which are based at $\xi^{\prime}$ and lie outside $\mathcal{HB}_{n}$ at distance $d(\lambda_{n}x_{0},\Gamma\cdot x_{0})>n-1$ from $\mathcal{H}_{n}$. They form a sequence of horospheres, lying further and further out, based at the same point of $\mathcal{W}_{\mathbb{Q}}(\Gamma)$, all of which are $D+2D_{\Gamma}$-almost $\Lambda$-cocompact. This is a contradiction, since the union of such horospheres intersects every horoball of $X$, contradicting the existence of $\Lambda$-free horoballs. Formally, take some $\zeta\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ different from $\xi^{\prime}$. Since both $\xi^{\prime}$ and $\zeta$ lie in $\mathcal{W}_{\mathbb{Q}}(\Gamma)$, they admit $d_{T}(\zeta,\xi^{\prime})=\pi$ and there is a geodesic $\eta$ with $\eta(-\infty)=\xi^{\prime}$ and $\eta(\infty)=\zeta$. Let $\mathcal{HB}^{\Gamma}_{\zeta}$ be the horoball of $\Gamma$ that is based at $\zeta$. By the first step of this proof, every such horoball must contain a $\Lambda$-free horoball $\mathcal{HB}^{\Lambda}_{\zeta}$. Therefore there is some $T>0$ such that for all $t>T$ the point $\eta(t)$ lies $2(D+2D_{\Gamma})$ deep in $\mathcal{HB}^{\Lambda}_{\zeta}$. I conclude that for all $t>T$, $B\big{(}\eta(t),2(D+2D_{\Gamma})\big{)}$ is $\Lambda$-free. On the other hand, for arbitrarily large $t$ it holds that the horosphere based at $\xi^{\prime}$ and tangent to $\eta(t)$ is $D+2D_{\Gamma}$-almost $\Lambda$-cocompact, and in particular $d\big{(}\eta(t),\Lambda\cdot x_{0}\big{)}<D+2D_{\Gamma}$, a contradiction. I conclude that $\Lambda\subset\mathcal{N}_{D^{\prime}}(\Gamma)$ for some $D^{\prime}>0$, as claimed. ∎ ###### Proof of Theorem 1.8. The theorem follows immediately from Proposition 6.27 together with Theorems 6.8 and 6.9. ∎
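Let me also spell out the radius bookkeeping used twice in the proof of Proposition 6.27 above; this is only a restatement of steps already taken there. If a point $z$ is the centre of a $\Lambda$-free ball of radius $D+2D_{\Gamma}$ and $\Gamma\cdot x_{0}\subset\mathcal{N}_{D}(\Lambda\cdot x_{0})$, then $B(z,2D_{\Gamma})$ is $\Gamma$-free: a $\Gamma$-orbit point in $B(z,2D_{\Gamma})$ would have a $\Lambda$-orbit point within distance $D$ of it, and that point would lie in $B(z,D+2D_{\Gamma})$. Since every point of a horosphere of $\Gamma$ lies within $D_{\Gamma}$ of $\Gamma\cdot x_{0}$, such a $z$ cannot lie on a horosphere of $\Gamma$.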
#### 6.3.2 The Arguments of Schwartz and Eskin ##### The $\mathbb{R}$-rank$\ 1$ case. The statement of Theorem 6.9 is a slight modification of Schwartz’s original formulation. His framework leads to a discrete subgroup $\Delta\leq G$ such that: 1. 1. Every element of $\Delta$ _quasi-preserves_ the compact core of the lattice $\Gamma$. Namely, each element of $\Delta$ is an isometry of $X$ that preserves $\mathcal{W}_{\mathbb{Q}}(\Gamma)$ and that maps every horosphere of $\Gamma$ to within the $D=D(\Delta)$ neighbourhood of some other horosphere of $\Gamma$. 2. 2. It holds that $\Gamma\subset\mathcal{N}_{D}(\Delta)$. From these two properties Schwartz is able to deduce that $\Delta$ has finite covolume, i.e. that $\Delta$ is a lattice in $G$. Here is a sketch of his argument, which works whenever $\Gamma$ is a $\mathbb{Q}$-rank $1$ lattice. ###### Theorem 6.30. In the setting described above, $\Delta$ is a lattice in $G$. ###### Proof sketch. Consider $X_{0}^{\prime}:=\bigcup_{g\in\Delta}g\cdot X_{0}$, where $X_{0}$ is the compact core of $\Gamma$. This space serves as a ‘compact core’ for $\Delta$: the fact that $\Delta$ quasi-preserves $X_{0}$ implies that $X_{0}^{\prime}\subset\mathcal{N}_{D}(X_{0})$. It is a $\Delta$-invariant space, and therefore one gets an isometric action of $\Delta$ on $X_{0}^{\prime}$. This action is cocompact: the reason is that $\Gamma$ acts cocompactly on $X_{0}$, and $\Gamma\subset\mathcal{N}_{D}(\Delta)$. Formally, every point in $X_{0}^{\prime}$ is $D$-close to a point in $X_{0}$. Every point in $X_{0}$ is $D_{\Gamma}$-close to a point in $\Gamma\cdot x_{0}$. Every point in $\Gamma\cdot x_{0}$ is $D$-close to a point in $\Delta\cdot x_{0}$. Therefore the ball of radius $2D+D_{\Gamma}$ contains a fundamental domain for the action of $\Delta$ on $X_{0}^{\prime}$. It remains to see that the action of $\Delta$ on $X\setminus X_{0}^{\prime}$ admits a finite volume fundamental domain. As a result of the cocompact action of $\Delta$ on $X_{0}^{\prime}$, there is $B:=B(x_{0},R)$ so that $X_{0}^{\prime}\subset\Delta\cdot B$. $X_{0}^{\prime}$ is the complement of a union of horoballs, which one may call _horoballs of $\Delta$_, with bounding _horospheres of $\Delta$_. The fact that $\Gamma$ is of $\mathbb{Q}$-rank $1$ means that the horoballs of $\Gamma$ are disjoint, and therefore those of $\Delta$ are almost disjoint: there is some $C>0$ such that for every horosphere $\mathcal{H}$ of $\Delta$ and every point $x\in\mathcal{H}$, $d(x,X_{0}^{\prime})<C$. Up to enlarging the radius of $B$ by $C$, I can assume that $\mathcal{H}\subset\Delta\cdot B$ for every horosphere $\mathcal{H}$ of $\Delta$. Each horoball of $\Delta$ is based at $\mathcal{W}_{\mathbb{Q}}(\Gamma)$, and each lies uniformly boundedly close to the corresponding horoball of $\Gamma$. From Corollary 2.23 one therefore sees that there are finitely many horoballs of $\Delta$ that intersect $B$. Denote them by $\mathcal{HB}_{1},\dots,\mathcal{HB}_{N}$, their bounding horospheres by $\mathcal{H}_{i}=\partial\mathcal{HB}_{i}$, and their intersection with $B$ by $B_{i}:=B\cap\mathcal{HB}_{i}$. Let also $\xi_{i}\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$ denote the base point of each $\mathcal{HB}_{i}$. Each $B_{i}$ is pre-compact and therefore the projection of each $B_{i}$ on $\mathcal{H}_{i}$ is pre-compact as well (this is a consequence e.g. of the results of Heintze-Im Hof recalled in Remark 2.6). Let $D_{i}\subset\mathcal{H}_{i}$ be a compact set that contains this projection, i.e. $P_{\mathcal{H}_{i}}(B_{i})\subset D_{i}\subset\mathcal{H}_{i}$. In particular $B\cap\mathcal{H}_{i}\subset D_{i}$. Observe now that for every horoball $\mathcal{HB}$ of $\Delta$, with bounding horosphere $\mathcal{H}=\partial\mathcal{HB}$, the $\Delta$-orbit of every point $x\in\mathcal{H}$ intersects some $D_{i}$. First notice that for $x\in\mathcal{H}$ the choice of $B$ implies that the $\Delta$-orbit of $x$ must intersect $B$, say $gx\in B$. In particular $g\mathcal{H}\cap B\neq\emptyset$, and since $g\mathcal{HB}$ is a horoball of $\Delta$, by definition $g\mathcal{H}=\mathcal{H}_{i}$ for some $i\in\\{1,\dots,N\\}$. One concludes that indeed $gx\in D_{i}$. Moreover, let $y\in X$ be any point that lies inside a horoball $\mathcal{HB}$ of $\Delta$, and let $x=P_{\mathcal{H}}(y)$ be its projection on the bounding horosphere $\mathcal{H}=\partial\mathcal{HB}$.
By the previous paragraph there is some $g\in\Delta$ and $i\in\\{1,\dots,N\\}$ for which $gx\in D_{i}$, and therefore it is clear that $gy$ lies on a geodesic emanating from $D_{i}$ to $\xi_{i}$. Finally, define $\mathrm{Cone}(D_{i})$ to be the set of all geodesic rays that emanate from $D_{i}$ and with limit point $\xi_{i}$. The previous paragraph proves that $\bigcup_{i=1}^{N}\mathrm{Cone}(D_{i})$ contains a fundamental domain for the action of $\Delta$ on $X\setminus X_{0}^{\prime}$. Moreover, the fact that $D_{i}\subset\mathcal{H}_{i}$ is compact readily implies that each $\mathrm{Cone}(D_{i})$ has finite volume, and so this fundamental domain is of finite volume. To conclude, $B\cup\big{(}\bigcup_{i=1}^{N}\mathrm{Cone}(D_{i})\big{)}$ is a set of finite volume and it contains a fundamental domain for the $\Delta$-action on $X$, as claimed. The proof of commensurability of $\Delta$ and $\Gamma$ is given in full in [53]. ∎ There is one essential difference between Theorem 6.9 and Theorem 6.30, namely the assumption that $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ rather than that $\Lambda$ quasi-preserves the compact core of $\Gamma$. In Schwartz’s work, the fact that $\Delta\subset\mathcal{N}_{D}(\Gamma)$ is not relevant (even though it easily follows from the construction of his embedding of $\Delta$ in $G$). He only uses the two properties described above, namely the quasi- preservation of $X_{0}$ and $\Gamma\subset\mathcal{N}_{D}(\Delta)$. The assumption that $\Lambda$ quasi-preserves the compact core of $\Gamma$ does not feel appropriate in the context of sublinear rigidity, while the metric condition $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ seems much more natural. It is a stronger condition as I now show. By Lemma 6.29, $\Lambda\cdot\mathcal{W}_{\mathbb{Q}}(\Gamma)\subset\mathcal{W}_{\mathbb{Q}}(\Gamma)$. Let $\mathcal{H}^{\Gamma}_{1}$ be a horosphere of $\Gamma$, based at $\xi\in\mathcal{W}_{\mathbb{Q}}(\Gamma)$, and let $\gamma x_{0}\in\mathcal{H}^{\Gamma}_{1}$ be some point on the metric lattice of $\Gamma\cdot x_{0}$ on $\mathcal{H}^{\Gamma}_{1}$. There is an element $\lambda\in\Lambda$ such that $d(\lambda x_{0},\gamma x_{0})<D$. Moreover, since $\Lambda\subset\mathcal{N}_{D}(\Gamma)$ one knows that the parallel horoball that lies $D$-deep inside $\mathcal{HB}^{\Gamma}_{1}$ is $\Lambda$-free. Let $\lambda^{\prime}\in\Lambda$ be an arbitrary element of $\Lambda$, and consider $\lambda^{\prime}\cdot\mathcal{H}^{\Gamma}_{1}$. The last statement in the previous paragraph is $\Lambda$-invariant, and so the horoball that lies $D$-deep inside $\lambda^{\prime}\cdot\mathcal{HB}^{\Gamma}_{1}$ is $\Lambda$-free. The fact that $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ then implies that the parallel horoball that lies $2D$ deep inside $\lambda^{\prime}\mathcal{HB}^{\Gamma}_{1}$ is $\Gamma$-free. Let $\mathcal{H}^{\Gamma}_{2}$ be the horosphere of $\Gamma$ that is based at $\lambda^{\prime}\xi$. The last statement amounts to saying that $\mathcal{H}^{\Gamma}_{2}$ lies at most $2D$-deep inside $\lambda^{\prime}\mathcal{HB}^{\Gamma}_{1}$. On the other hand, one has $d(\lambda^{\prime}\lambda x_{0},\lambda^{\prime}\mathcal{H}^{\Gamma}_{1})=d(\lambda x_{0},\mathcal{H}^{\Gamma}_{1})\leq D$, so there is a $\Lambda$-orbit point that lies within $D$ of $\lambda^{\prime}\mathcal{H}^{\Gamma}_{1}$. 
The parallel horoball that lies $D$-deep inside $\mathcal{HB}^{\Gamma}_{2}$ must also be $\Lambda$-free, so I conclude that $\mathcal{H}^{\Gamma}_{2}$ must be contained in the parallel horoball to $\lambda^{\prime}\mathcal{HB}^{\Gamma}_{1}$ which contains it and that is at distance $D$ from it. I conclude that $d(\lambda^{\prime}\mathcal{H}^{\Gamma}_{1},\mathcal{H}^{\Gamma}_{2})\leq 2D$, and so that $\Lambda$ quasi-preserves $X_{0}$. ###### Remark 6.31. It is interesting to note that Schwartz’s arguments are similar in spirit to my arguments in Section 6.2. In fact, one could also prove Theorem 6.9 using the same type of arguments that appear repeatedly in Section 6.2, namely by moving $\Lambda$-free horoballs around the space, as in the proof of Proposition 6.27. I do not present this alternative proof here. ##### Higher rank. Eskin’s proof is ergodic, and based on results of Mozes [41] and Shah [55]. I reproduce it here without the necessary preliminaries, which are standard. ###### Proof of Theorem 6.8. Proving that $\Lambda$ is a lattice amounts to finding a finite non-zero $G$-invariant measure on $\Lambda\backslash G$. By Theorem $2$ in [41], if $P\leq G$ is a parabolic subgroup then every $P$-invariant measure on $\Lambda\backslash G$ is automatically $G$-invariant. Fix a minimal parabolic subgroup $P\leq G$ and let $\mu_{0}$ be some fixed probability measure on $\Lambda\backslash G$. Since $P$ is amenable it admits a tempered Følner sequence $F_{n}\subset P$, and one can average $\mu_{0}$ along each $F_{n}$ to get a sequence of probability measures $\mu_{n}$. The weak* compactness of the unit ball in the space of measures on $\Lambda\backslash G$ implies that there exists a weak* limit $\mu$ of the $\mu_{n}$. The measure $\mu$ is automatically a finite $P$-invariant measure. It remains to show that $\mu$ is not the zero measure. To see this it is enough to show that for some compact set $C_{\Lambda}\subset\Lambda\backslash G$ and some $\Lambda g=x\in\Lambda\backslash G$, one has $0<\liminf_{n}\frac{1}{|F_{n}|}\int_{F_{n}}{\mathbbm{1}_{C_{\Lambda}}(xp^{-1})dp}$ (7) Fix some compact neighbourhood $C_{\Gamma}\subset\Gamma\backslash G$ of the trivial coset $\Gamma e$. The hypothesis $\Gamma\subset\mathcal{N}_{D}(\Lambda)$ implies that there is a corresponding compact neighbourhood $C_{\Lambda}\subset\Lambda\backslash G$ of the trivial coset $\Lambda e$ such that for any $p\in P$, it holds that $\Gamma gp^{-1}\in C_{\Gamma}\Rightarrow\Lambda gp^{-1}\in C_{\Lambda}$ (simply take $C_{\Lambda}$ to be the $(D+1)$-blowup of $C_{\Gamma}$). The action of $P$ on $\Gamma\backslash G$ is uniquely ergodic, therefore $0<\mu_{\Gamma}(C_{\Gamma})=\lim_{n}\frac{1}{|F_{n}|}\int_{F_{n}}{\mathbbm{1}_{C_{\Gamma}}(\Gamma p^{-1})dp}$ where $\mu_{\Gamma}$ denotes the natural $G$-invariant measure on $\Gamma\backslash G$. The defining property of $C_{\Lambda}$ ensures that Inequality (7) is satisfied, implying that $\mu$ is a non-zero finite $P$-invariant measure on $\Lambda\backslash G$. I conclude that $\mu$ is also $G$-invariant, and that $\Lambda$ is a lattice. If moreover $\Lambda\subset\mathcal{N}_{D}(\Gamma)$, one may use Shah’s Corollary [55] to conclude that $\Lambda$ is commensurable to $\Gamma$. ∎
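To make the averaging step in the proof above explicit, here is a minimal sketch of the standard computation; I only assume that $(F_{n})$ is a left Følner sequence in $P$ (temperedness is not needed for this part), that $|\cdot|$ denotes a left Haar measure on $P$, and, matching the notation of (7), that $P$ acts on $\Lambda\backslash G$ by $x\cdot p:=xp^{-1}$. Set $\mu_{n}(f):=\frac{1}{|F_{n}|}\int_{F_{n}}\int_{\Lambda\backslash G}f(xp^{-1})\,d\mu_{0}(x)\,dp$ for $f\in C_{c}(\Lambda\backslash G)$. For $q\in P$, the substitution $p\mapsto qp$ and the left invariance of the Haar measure give $\big{|}\mu_{n}\big{(}f(\cdot\,q^{-1})\big{)}-\mu_{n}(f)\big{|}\leq\frac{|qF_{n}\triangle F_{n}|}{|F_{n}|}\cdot\|f\|_{\infty}\rightarrow 0$, so any weak* limit point $\mu$ of $(\mu_{n})$ is $P$-invariant. It is finite since each $\mu_{n}$ is a probability measure, and the only issue, the one addressed in the proof, is that $\mu$ could a priori be the zero measure.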
### 6.4 Translating Geometry into Algebra The goal of this section is to prove that the results of Section 6.2 imply that $\Lambda$ satisfies the hypotheses of the Benoist-Miquel criterion (Theorem 6.4). Namely, that $\Lambda$ is Zariski dense, and that it intersects a horospherical subgroup in a cocompact indecomposable lattice. These are algebraic properties, and the proof that $\Lambda$ satisfies them is in essence just a translation of the geometric results of Section 6.2 into an algebraic language. The geometric data given by Section 6.2 is that for some horosphere $\mathcal{H}$ bounding a $\Lambda$-free horoball, $\big{(}\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H})\big{)}\cdot x_{0}$ intersects $\mathcal{H}$ in a cocompact metric lattice (Proposition 6.11), and that the set of $\Lambda$-conical limit points contains the set of $\Gamma$-conical limit points (Corollary 6.26). Note that since $K$ is compact the former implies that $\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H})$ is a uniform lattice in $\mathrm{Stab}_{G}(\mathcal{H})$. #### 6.4.1 A Horospherical Lattice I assume that $\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H})$ is a lattice in $\mathrm{Stab}_{G}(\mathcal{H})$, and I want to show that $\Lambda$ intersects a horospherical subgroup $U$ of $G$ in a lattice. This step requires quite a bit of algebraic background, which I give below in full. In short, the first goal is to show that $\mathrm{Stab}_{G}(\mathcal{H})$ admits a subgroup $U\leq\mathrm{Stab}_{G}(\mathcal{H})$ that is a horospherical subgroup of $G$. A lemma of Mostow (Lemma 6.32 below) allows me to conclude that $\Lambda$ intersects $U$ in a lattice. ###### Lemma 6.32 (Lemma $3.9$ in [39]). Let $H$ be a Lie group having no compact connected normal semisimple non-trivial Lie subgroups, and let $N$ be the maximal connected nilpotent normal Lie subgroup of $H$. Let $\Gamma\leq H$ be a lattice. Then $N/(N\cap\Gamma)$ is compact. ###### Remark 6.33. In the original statement Mostow uses the term ‘analytic group’, which I replaced here with ‘connected Lie subgroup’. This appears to be Mostow’s definition of an analytic group. See e.g. Section $10$, Chapter $1$ in [32]. In his _Theory of Lie Groups_, Chevalley defines a _Lie group_ as a locally connected topological group whose identity component is an analytic group (Definition $1$, Section $8$, Chapter $4$ in [12]), and proves (Theorem $1$, Section $4$, Chapter $4$ therein) a 1-1 correspondence between analytic subgroups of an analytic group and Lie subalgebras of the corresponding Lie algebra. Lemma 6.32 provides the rationale for the rest of this section. Explicitly, I prove that $\mathrm{Stab}_{G}(\mathcal{H})$ admits a subgroup that is a horospherical subgroup $U$ of $G$ (Corollary 6.36), and that $U$ is a maximal connected nilpotent normal Lie subgroup of $\mathrm{Stab}_{G}(\mathcal{H})$ (Corollary 6.42). In order to use Lemma 6.32, I show that the horospherical subgroup $N_{\xi}$ is a maximal normal nilpotent connected Lie subgroup of $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$, and that $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$ admits no compact normal factors. This requires establishing the structure of $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$. ###### Definition 6.34. In the notation $h_{\xi}^{t}=\exp(tX)$ and $A_{\xi}=\exp\big{(}Z(X)\cap\mathfrak{p}\big{)}$ of Proposition 2.8, define $A_{\xi}^{\perp}$ to be the codimension-$1$ submanifold of $A_{\xi}$ that is orthogonal to $\\{h_{\xi}(t)\\}_{t\in\mathbb{R}}$ (with respect to the Killing form in the Lie algebra). ###### Claim 6.35. Every element $a\in A_{\xi}^{\perp}$ stabilizes $\mathcal{H}=\mathcal{H}(x_{0},\xi)$. ###### Proof.
An element in $A_{\xi}$ is an element that maps $x_{0}$ to a point on a flat $F\subset X$ that contains the geodesic ray $[x_{0},\xi)$. If $a\in A_{\xi}^{\perp}$, then the geodesic $[x_{0},ax_{0}]$ is orthogonal to $[x_{0},\xi)$, and lies in $F$. From Euclidean geometry and the structure of horospheres in Euclidean spaces, it is clear that $ax_{0}\in\mathcal{H}(x_{0},\xi)$. Since $a\in G_{\xi}$, this means $a\mathcal{H}=\mathcal{H}(ax_{0},\xi)=\mathcal{H}(x_{0},\xi)=\mathcal{H}$. ∎ ###### Corollary 6.36. Let $\mathcal{H}$ be a horosphere based at $\xi$. Then $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}=(K_{\xi}A_{\xi}^{\perp})^{\circ}N_{\xi}$, and in particular it contains a horospherical subgroup of $G$. Moreover, $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$ is normal in $\mathrm{Stab}_{G}(\xi)^{\circ}$ and acts transitively on $\mathcal{H}$. ###### Proof. Clearly $(K_{\xi}A_{\xi}^{\perp})^{\circ}N_{\xi}$ is a codimension-$1$ subgroup of $\mathrm{Stab}_{G}(\xi)^{\circ}$. Since $\mathrm{Stab}_{G}(\mathcal{H})\neq\mathrm{Stab}_{G}(\xi)$ (e.g. $h_{\xi}^{t}\notin\mathrm{Stab}_{G}(\mathcal{H})$ for $t\neq 0$), it is enough to show that $(K_{\xi}A_{\xi}^{\perp})^{\circ}N_{\xi}\leq\mathrm{Stab}_{G}(\mathcal{H})$. Let $kan\in(K_{\xi}A_{\xi}^{\perp})^{\circ}N_{\xi}$. It fixes $\xi$, so it is enough to show that $kanx_{0}\in\mathcal{H}$. Since $k\in K_{\xi}$ and $kx_{0}=x_{0}$, it stabilizes $\mathcal{H}$. By Claim 6.35, $a\in\mathrm{Stab}_{G}(\mathcal{H})$. So it remains to check that $N_{\xi}$ stabilizes $\mathcal{H}$, but this is more or less the definition: fixing a base point $x_{0}$, the horospheres based at $\xi$ are parameterized by $\mathbb{R}$. Denote them by $\\{\mathcal{H}_{t}\\}_{t\in\mathbb{R}}$, where $\mathcal{H}=\mathcal{H}_{0}$. In this parameterization, any element $g\in G_{\xi}$ acts on $\\{\mathcal{H}_{t}\\}_{t\in\mathbb{R}}$ by translation. I can thus define for $g\in\mathrm{Stab}_{G}(\xi)$ the real number $l(g)$ to be that number for which $g\mathcal{H}_{t}=\mathcal{H}_{t+l(g)}$. Clearly $l\big{(}h_{\xi}(t)\big{)}=t$. The element $n$ fixes $\xi$, so one has $h_{\xi}^{-t}nh_{\xi}^{t}\mathcal{H}_{0}=h_{\xi}^{-t}\mathcal{H}_{t+l(n)}=\mathcal{H}_{t+l(n)-t}=\mathcal{H}_{l(n)}$ The fact that $n\in Ker(T_{\xi})$, i.e. that $\lim_{t\rightarrow\infty}h_{\xi}^{-t}nh_{\xi}^{t}=e_{G}$, readily implies that necessarily $l(n)=0$. I conclude that $(K_{\xi}A_{\xi}^{\perp})^{\circ}N_{\xi}=\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$, as wanted. Next recall that $\mathrm{Stab}_{G}(\xi)^{\circ}$ acts transitively on $X$. Let $x,y\in\mathcal{H}$, and consider $g\in\mathrm{Stab}_{G}(\xi)^{\circ}$ with $gx=y$. Writing an element $g\in G_{\xi}$ as $ka_{t}a_{\perp}n\in K_{\xi}h_{\xi}^{t}A_{\xi}^{\perp}N_{\xi}$, the argument above shows that $kh_{\xi}^{t}a_{\perp}n\mathcal{H}_{0}=\mathcal{H}_{0}$ if and only if $t=0$, i.e., if and only if $g\in\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$. Finally, let $g\in\mathrm{Stab}_{G}(\xi)$ and $h\in\mathrm{Stab}_{G}(\mathcal{H})$. By the discussion above $h\cdot\mathcal{H}_{t}=\mathcal{H}_{t}$ for all $t\in\mathbb{R}$. Clearly $-l(g)=l(g^{-1})$, and therefore $ghg^{-1}\mathcal{H}_{0}=gh\mathcal{H}_{-l(g)}=g\cdot\mathcal{H}_{-l(g)}=\mathcal{H}_{0}$ Therefore $\mathrm{Stab}_{G}(\mathcal{H})$ is normal in $\mathrm{Stab}_{G}(\xi)$, and the same is true for the respective identity components. ∎
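Let me add a remark on the function $l$ used in the proof; it merely rephrases the last step and introduces nothing new. The relation $g\mathcal{H}_{t}=\mathcal{H}_{t+l(g)}$ shows that $l\colon\mathrm{Stab}_{G}(\xi)\rightarrow\mathbb{R}$ is a homomorphism: $gh\mathcal{H}_{t}=g\mathcal{H}_{t+l(h)}=\mathcal{H}_{t+l(h)+l(g)}$, so $l(gh)=l(g)+l(h)$. An element $g\in\mathrm{Stab}_{G}(\xi)$ stabilizes $\mathcal{H}=\mathcal{H}_{0}$ if and only if $l(g)=0$, so $\mathrm{Stab}_{G}(\mathcal{H})=\ker(l)$ is normal in $\mathrm{Stab}_{G}(\xi)$ as the kernel of a homomorphism - exactly the normality statement just proved - and $l(h_{\xi}^{t})=t$ shows that $l$ is onto $\mathbb{R}$.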
###### Corollary 6.37. $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$ is a connected Lie group with no connected compact normal semisimple non-trivial Lie subgroups. ###### Proof. Every compact subgroup of $G$ fixes a point. Let $H\leq G$ be some closed subgroup. It is standard to note that a normal $N\leq H$ that fixes a point $x\in X$ must fix every point in the orbit $H\cdot x$: $hnh^{-1}hx=hx$. Since $H=\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$ acts transitively on $\mathcal{H}$, and hence (by the normality established in Corollary 6.36, conjugating by $h_{\xi}^{t}$) on every horosphere based at $\xi$, this shows that a normal compact subgroup of $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$ fixes pointwise the horosphere based at $\xi$ passing through any of its fixed points. An isometry fixing a horosphere pointwise while fixing its base point is clearly the identity, proving the claim. ∎ The following fact is well known but I could not find it in the literature. ###### Corollary 6.38. A horosphere in $X$ is not convex. ###### Proof. Let $\mathcal{H}^{\prime}$ be some horosphere in $X$, with base point $\zeta\in X(\infty)$, and assume towards contradiction that it is convex. Fix $x\in\mathcal{H}^{\prime}$ and let $a_{t}^{\prime}$ be the one parameter subgroup for which $\eta^{\prime}(t):=a_{t}^{\prime}x$ is the geodesic with $\eta^{\prime}(\infty)=\zeta$, and denote $\mathcal{H}^{\prime}_{t}=\mathcal{H}(a_{t}^{\prime}x,\zeta)$. Let $e_{G}\neq n\in N_{\zeta}$ ($N_{\zeta}$ defined with respect to $a_{t}^{\prime}$ in a corresponding Langlands decomposition), and consider the curve $\eta_{n}^{\prime}(t):=a_{t}^{\prime}nx$. I claim that this is a geodesic. On the one hand, the fact that $\mathcal{H}^{\prime}$ is convex implies that the geodesic segment $[x,nx]$ is contained in $\mathcal{H}^{\prime}$. Therefore $a_{t}^{\prime}[x,nx]=[a_{t}^{\prime}x,a_{t}^{\prime}nx]\subset\mathcal{H}^{\prime}_{t}$. More generally it is clear that because $a_{t}^{\prime}\mathcal{H}^{\prime}_{s}=\mathcal{H}^{\prime}_{s+t}$ it holds that $\mathcal{H}^{\prime}_{t}$ is convex for every $t$ as soon as it is convex for some $t$. On the other hand, for every point $y\in[x,nx]$, $d(y,\mathcal{H}^{\prime}_{t})=t$, and more generally for any $y\in[a_{s}^{\prime}nx,a_{s}^{\prime}x]$ it holds that $d(y,\mathcal{H}_{t}^{\prime})=|s-t|$. In particular this is true for $\eta_{n}^{\prime}(t)=y_{t}:=a_{t}^{\prime}nx$. I get that $d\big{(}\eta_{n}^{\prime}(t),\eta_{n}^{\prime}(s)\big{)}=|s-t|$. Therefore $\eta_{n}^{\prime}$ is a geodesic (to be pedantic one has to show that $\eta_{n}^{\prime}$ is a continuous curve, which is a result of the fact that $a_{t}^{\prime}$ is a one parameter subgroup of isometries). Clearly $d\big{(}\eta_{n}^{\prime}(t),\eta^{\prime}(t)\big{)}=d(a_{t}^{\prime}nx,a_{t}^{\prime}x)=d(nx,x)$ and therefore $\eta_{n}^{\prime}$ is at uniformly bounded distance from $\eta^{\prime}$. This bounds $d(\eta^{\prime},\eta_{n}^{\prime})$ as bi-infinite geodesics, i.e. for all $t\in\mathbb{R}$, not just as infinite rays. The Flat Strip Theorem (Theorem $2.13$, Chapter $2.2$ in [11]) then implies that the geodesics $\eta^{\prime},\eta_{n}^{\prime}$ bound a flat strip: an isometric copy of $\mathbb{R}\times[0,l]$ (where $l=d(x,nx)$). Up to now I did not use the fact that $n\in N_{\zeta}$, only that the point $nx$ lies on a geodesic that is contained in $\mathcal{H}^{\prime}=\mathcal{H}^{\prime}_{0}$. Therefore the entire bi-infinite geodesic that is determined by $[x,nx]$ lies on a $2$-dimensional flat $F$ that contains $\eta^{\prime}$. The two elements $n,a_{t}^{\prime}$ therefore admit $nx,a_{t}^{\prime}x\in F$. It is a fact that two such elements must commute. I can conclude therefore that $[n,a_{t}^{\prime}]=e_{G}$, which contradicts the fact that $n\in N_{\zeta}=Ker(T_{\zeta})$. ∎ ###### Lemma 6.39 (Theorem $11.13$ in [52]). Let $N$ be a connected real Lie group.
Then $\mathrm{Lie}(N)$ is a nilpotent Lie algebra if and only if $N$ is a nilpotent Lie group. ###### Proposition 6.40 (Proposition $13$, Section $4$, Chapter $1$ in [7]). In the notation of Proposition 2.8, $\mathfrak{n}_{\xi}=\mathrm{Lie}(N_{\xi})$ is a maximal nilpotent ideal in $\mathfrak{g}_{\xi}=\mathrm{Lie}(G_{\xi})$. ###### Remark 6.41. 1. 1. The presentation of $\mathfrak{n}_{\xi}$ in [8] is given by means of the root space decomposition of $\mathrm{Stab}_{G}(\xi)$, that appears in Proposition $2.17.13$ in [19]. 2. 2. There are two main objects in the literature that are referred to as the _nilpotent radical_ or the _nilradical_ of a Lie algebra. These are: (a) the maximal nilpotent ideal of the Lie algebra, and (b) the intersection of the kernels of all irreducible finite-dimensional representations. Proposition $13$ in Section $4$ of Chapter $9$ in [7] shows that in the case of Lie algebras of parabolic Lie groups, these notions coincide. ###### Corollary 6.42. $N_{\xi}$ is a maximal connected nilpotent normal Lie subgroup of the identity connected component $\mathrm{Stab}_{G}(\mathcal{H})^{\circ}$. ###### Proof. Lemma 6.39 implies $N_{\xi}$ is nilpotent. Since $\mathrm{Stab}_{G}{\mathcal{H}}\vartriangleleft\mathrm{Stab}_{G}(\xi)$, every normal subgroup of $\mathrm{Stab}_{G}(\mathcal{H})$ containing $N_{\xi}$ is in fact a normal subgroup of $\mathrm{Stab}_{G}(\xi)$, still containing $N_{\xi}$. It remains to prove maximality of $N_{\xi}$ among all connected nilpotent normal Lie subgroups of $\mathrm{Stab}_{G}(\xi)$. Any such subgroup $N^{\prime}\vartriangleleft\mathrm{Stab}_{G}(\xi)$ gives rise to an ideal $\mathfrak{n}^{\prime}$ of $\mathfrak{g}_{\xi}=\mathrm{Lie}\big{(}\mathrm{Stab}_{G}(\xi)\big{)}$, and by Lemma 6.39 it is a nilpotent ideal. Therefore by Proposition 6.40 it is contained in $\mathfrak{n}_{\xi}=\mathrm{Lie}(N_{\xi})$, implying that $N^{\prime}\leq N_{\xi}$. ∎ ###### Corollary 6.43. A lattice in $\mathrm{Stab}_{G}(\mathcal{H})$ intersects the horospherical subgroup $N_{\xi}$ in a lattice. ###### Proof. Corollaries 6.37 and 6.42 imply that the pair $N_{\xi}\vartriangleleft\mathrm{Stab}_{G}(\mathcal{H})$ satisfy the hypotheses of Mostow’s Lemma 6.32. ∎ #### 6.4.2 Indecomposable Horospherical Lattices It is shown in [5] that if a horospherical lattice is contained in a Zariski dense discrete subgroup, then the indecomposability condition is equivalent to the irreducibility of the ambient group. The latter is imposed on $\Lambda$ as a hypothesis in Theorem 6.1. The precise definitions and statements are as follows. ###### Definition 6.44 (Definition $2.14$ in [5]). For a semisimple real algebraic Lie group $G$ and $U$ a horospherical subgroup of $G$, let $\Delta_{U}$ be a lattice in $U$. 1. 1. $\Delta_{U}$ is _irreducible_ if for any proper normal subgroup $N$ of $G^{\circ}$, one has $\Delta_{U}\cap N=\\{e\\}$. 2. 2. $\Delta_{U}$ is _indecomposable_ if one cannot write $G^{\circ}$ as a product $G^{\circ}=N^{\prime}N^{\prime\prime}$ of two proper normal subgroups $N^{\prime},N^{\prime\prime}\vartriangleleft G$ with finite intersection such that the group $\Delta_{U}^{\prime}:=(\Delta_{U}\cap N^{\prime})(\Delta_{U}\cap N^{\prime\prime})$ has finite index in $\Delta_{U}$. ###### Definition 6.45 (See Section $2.4.1$ in [5]). Let $G$ be a semisimple real algebraic Lie group. A discrete subgroup $\Lambda\leq G$ is said to be _irreducible_ if, for all proper normal subgroups $N\vartriangleleft G$, the intersection $\Lambda\cap N$ is finite. 
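To fix ideas, here is a toy illustration of Definition 6.44; it is not taken from [5] and plays no role in the sequel. Take $G=\mathrm{SL}_{2}(\mathbb{R})\times\mathrm{SL}_{2}(\mathbb{R})$ and let $U\cong\mathbb{R}^{2}$ be the product of the upper triangular unipotent subgroups of the two factors, a horospherical subgroup of $G$. The lattice $\Delta_{U}=\mathbb{Z}\times\mathbb{Z}\leq U$ is neither irreducible nor indecomposable: with $N^{\prime}=\mathrm{SL}_{2}(\mathbb{R})\times\\{e\\}$ and $N^{\prime\prime}=\\{e\\}\times\mathrm{SL}_{2}(\mathbb{R})$ one has $\Delta_{U}\cap N^{\prime}\neq\\{e\\}$ and $(\Delta_{U}\cap N^{\prime})(\Delta_{U}\cap N^{\prime\prime})=\Delta_{U}$. By contrast, the lattice $\Delta_{U}=\\{(u_{t},u_{\sigma(t)})\mid t\in\mathbb{Z}[\sqrt{2}]\\}$, where $u_{t}$ denotes the unipotent upper triangular matrix with upper-right entry $t$ and $\sigma$ is the Galois conjugation, meets every proper normal subgroup of $G$ trivially and is therefore irreducible (and, by Lemma 6.46 below, indecomposable, since it sits inside a Zariski dense discrete subgroup, e.g. the Hilbert modular group $\mathrm{SL}_{2}(\mathbb{Z}[\sqrt{2}])$ embedded in $G$ via $(\mathrm{id},\sigma)$).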
###### Lemma 6.46 (Lemma $4.3$ in [5]). Let $G$ be a semisimple real algebraic Lie group, $U\subset G$ a non-trivial horospherical subgroup, and $\Delta_{U}\leq U$ a lattice of $U$ which is contained in a discrete Zariski dense subgroup $\Delta$ of $G$. Then the following are equivalent: 1. 1. $\Delta$ is irreducible. 2. 2. $\Delta_{U}$ is irreducible. 3. 3. $\Delta_{U}$ is indecomposable. #### 6.4.3 Zariski Density The last requirement is for $\Lambda$ to be Zariski dense. I use a geometric criterion which is well known to experts. ###### Lemma 6.47 (Proposition $2$ in [30]). Let $X$ be a symmetric space of noncompact type, $G=\mathrm{Isom}(X)^{\circ}$. A subgroup $\Delta\leq G$ is Zariski dense if and only if: 1. 1. The group $\Delta$ does not globally fix a point in $X(\infty)$, i.e. $\Delta\not\leq\mathrm{Stab}_{G}(\zeta)$ for any $\zeta\in X(\infty)$. 2. 2. The identity component of the Zariski closure of $\Delta$ does not leave invariant any proper totally geodesic submanifold in $X$. In the proof I use several facts - mostly algebraic, and two geometric. I warmly thank Elyasheev Leibtag for his help and erudition in algebraic groups. The first property I need is very basic. ###### Lemma 6.48. Let $\Delta\leq G$ be a discrete subgroup, and let $H\leq G$ be the Zariski closure of $\Delta$. Then $\Delta\cap H^{\circ}$ is of finite index in $\Delta$. ###### Proof. The subgroup $H^{\circ}$ is normal and of finite index in $H$. ∎ The following fact is probably known to experts. It appears in a recent work by Bader and Leibtag [2]. ###### Lemma 6.49 (Lemma $3.9$ in [2]). Let $k$ be a field, $\mathbf{G}$ a connected $k$ algebraic group, $P\leq G=\mathbf{G}(\mathbb{R})$ a parabolic subgroup. Then the centre of $G$ contains the centre of $P$. Still on the algebraic side, I need a Theorem of Dani, generalizing the Borel Density Theorem. ###### Theorem 6.50 (See [14]). Let $\mathbf{S}$ be a real solvable algebraic group. If $S=\mathbf{S}(\mathbb{R})$ is $\mathbb{R}$-split, then every lattice $\Gamma_{S}\leq S$ is Zariski dense. ###### Remark 6.51. It is a fact (see Theorem $15.4$ and Section $18$ in [6]) that: 1. 1. Every unipotent group over $\mathbb{R}$ is $\mathbb{R}$-split. 2. 2. For a field $k$ of characteristic $0$, a solvable linear algebraic $k$-group is $k$-split if and only if its maximal torus is $k$-split. Finally I need two geometric facts. The first is a characterization determining when does a unipotent element belongs to $N_{\zeta}$ for some $\zeta\in X(\infty)$. ###### Proposition 6.52 (Proposition $4.1.8$ in [19]). Let $X$ be a symmetric space of noncompact type and of higher rank, $n\in G=\mathrm{Isom}(X)^{\circ}$ a unipotent element, and $\zeta\in X(\infty)$. The following are equivalent: 1. 1. For $N_{\zeta}$ as in Proposition 2.8, $n\in N_{\zeta}$. 2. 2. For some geodesic ray $\eta$ with $\eta(\infty)=\zeta$ it holds that $\lim_{t\rightarrow\infty}d\big{(}n\eta(t),\eta(t)\big{)}=0$. 3. 3. For every geodesic ray $\eta$ with $\eta(\infty)=\zeta$ it holds that $\lim_{t\rightarrow\infty}d\big{(}n\eta(t),\eta(t)\big{)}=0$. The last property I need is a characterization of the displacement function for unipotent elements. ###### Proposition 6.53 (See proof of Proposition $3.4$ in [4]). Let $X$ be a symmetric space of noncompact type, $\zeta\in X(\infty)$ some point and $n\in N_{\zeta}$ an element of the unipotent radical of $\mathrm{Stab}_{G}(\zeta)$. 
The displacement function $x\mapsto d(nx,x)$ is constant on horospheres based at $\zeta$, and for every $\varepsilon>0$ there is a horoball $\mathcal{HB}_{\varepsilon}$ based at $\zeta$ such that $d(nx,x)<\varepsilon$ for every $x\in\mathcal{HB}_{\varepsilon}$. ###### Corollary 6.54. Assume that: 1. 1. $\big{(}\Lambda\cap\mathrm{Stab}_{G}(\mathcal{H})\big{)}\cdot x_{0}$ is a cocompact metric lattice in a horosphere $\mathcal{H}\subset X$ bounding a $\Lambda$-free horoball. 2. 2. Every $\Gamma$-conical limit point is a $\Lambda$-conical limit point. Then $\Lambda$ is Zariski dense. ###### Proof. I show the criteria of Lemma 6.47 are met, starting with $\Lambda\not\leq\mathrm{Stab}_{G}(\zeta)$ for any $\zeta\in X(\infty)$. To this end, I first prove that $\Lambda\cdot x_{0}$ is not contained in any bounded neighbourhood of any horosphere $\mathcal{H}^{\prime}$. Let $\xi^{\prime}$ be the base point of $\mathcal{H}^{\prime}$. By Hattori’s Lemma 2.32 (and Remark 2.33), it is enough to find a $\Lambda$-conical limit point $\zeta^{\prime}$ with $d_{T}(\xi^{\prime},\zeta^{\prime})\neq\frac{\pi}{2}$. Take some $\zeta^{\prime\prime}\in X(\infty)$ at Tits distance $\pi$ of $\xi^{\prime}$, i.e. take a flat $F$ on which $\xi^{\prime}$ lies and let $\zeta^{\prime\prime}$ be the antipodal point to $\xi^{\prime}$ in $F$. Fix $\varepsilon=\frac{\pi}{4}$. By Proposition 2.4, there are neighbourhoods of the cone topology $U,V\subset X(\infty)$ of $\xi^{\prime},\zeta^{\prime\prime}$ (respectively) so that every point $\zeta^{\prime}\in V$ admits $d_{T}(\xi^{\prime},\zeta^{\prime})\geq d_{T}(\xi^{\prime},\zeta^{\prime\prime})-\frac{\pi}{4}=\frac{3}{4}\pi$. Recall that the set of $\Gamma$-conical limit points is dense (in the cone topology), so the second hypothesis implies there is indeed a $\Lambda$-conical limit point in $V$ and therefore at Tits distance different (in this case larger) than $\frac{\pi}{2}$ from $\xi^{\prime}$. I conclude that $\Lambda\cdot x_{0}$ is not contained in any bounded metric neighbourhood of any horosphere of $X$. Assume towards contradiction that $\Lambda\leq\mathrm{Stab}_{G}(\zeta)$. I show that this forces $\Lambda\cap N_{\zeta}\neq\emptyset$. By Proposition 6.52 it is enough to find a unipotent element $\lambda\in\Lambda$ and a geodesic $\eta$ with $\eta(\infty)=\zeta$ such that $\lim_{t\rightarrow\infty}d\big{(}\lambda\eta(t),\eta(t)\big{)}=0$. Let $F$ be a maximal flat with $\xi,\zeta\in F(\infty)$, $x\in F$ some point and $X,Y\in\mathfrak{a}\leq\mathfrak{p}$ two vectors such that $\exp(tY)=\eta(t)$ for the unit speed geodesic $\eta=[x,\zeta)$, and $\exp(tX)=\eta^{\prime}(t)$ for the unit speed geodesic $\eta^{\prime}=[x,\xi)$ (where $\mathfrak{a}\leq\mathfrak{p}$ a maximal abelian subalgebra in a suitable Cartan decomposition $\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}$). Let $\mathrm{Stab}_{G}(\xi)=K_{\xi}A_{\xi}N_{\xi}$ be the decomposition described in Proposition 2.8 with respect to $X$ (notice that $N_{\xi}$ does not depend on choice of $X$, see item $3$ of Proposition $2.17.7$ in [19]). The assumption that $\Lambda\leq\mathrm{Stab}_{G}(\zeta)$ implies that for any $\lambda\in\Lambda$ the distance $d\big{(}\lambda\eta(t),\eta(t)\big{)}$ either tends to $0$ as $t\rightarrow\infty$ or is uniformly bounded for $t\in\mathbb{R}$. In the latter case there is some constant $c>0$ for which $d\big{(}\lambda\eta(t),\eta(t)\big{)}=c$ for all $t\in\mathbb{R}$. 
As in the proof of Corollary 6.38, the Flat Strip Theorem (Theorem $2.13$, Chapter $2.2$ in [11]) implies that $\lambda$ and $a_{t}:=\exp(tY)$ commute. From the first hypothesis of the statement and Mostow’s result (Corollary 6.43) I know that $\Lambda\cap N_{\xi}$ is a cocompact lattice in $N_{\xi}$ (attention to subscripts). Therefore $\Lambda\cap N_{\xi}$ is Zariski dense in $N_{\xi}$ (Theorem 6.50). Moreover, since commuting with an element is an algebraic property, an element $g\in G$ that commutes with $\Lambda\cap N_{\xi}$ must also commute with its Zariski closure, namely with $N_{\xi}$. This means that if $a_{t}$ commutes with all $\Lambda\cap N_{\xi}$ then it commutes with $N_{\xi}$, i.e. $a_{t}n=na_{t}$ for all $t\in\mathbb{R}$ and all $n\in N_{\xi}$. I know that $a_{t}\in A_{\xi}$ commutes with both $K_{\xi}$ and $A_{\xi}$ (Proposition 2.8) therefore if $a_{t}$ also commutes with $N_{\xi}$ then $a_{t}$ lies in the centre of $\mathrm{Stab}_{G}(\xi)$. This means that $a_{t}$ is central in $G$ (Lemma 6.49). For a group $G$ with compact centre this cannot happen, so there is indeed some unipotent element $\lambda\in\Lambda\cap N_{\xi}$ for which $\lim_{t\rightarrow\infty}d\big{(}\lambda\eta(t),\eta(t)\big{)}=0$. I conclude from Proposition 6.52 that $\Lambda\cap N_{\zeta}\neq\emptyset$. The first paragraph of the proof implies in particular that $\Lambda\cdot x_{0}$ does not lie in any bounded neighbourhood of a horosphere $\mathcal{H}^{\prime}$ based at $\zeta$. The assumption that $\Lambda\subset\mathrm{Stab}_{G}(\zeta)$ implies that every $\lambda\in\Lambda$ acts by translation on the filtration $\\{\mathcal{H}^{\prime}_{t}\\}_{t\in\mathbb{R}}$ by horospheres based at $\zeta$. Therefore as soon as $\Lambda\cdot x_{0}\not\subset\mathcal{H}_{t}$ for some $t\in\mathbb{R}$ one concludes that $\zeta$ is a horospherical limit point of $\Lambda$, i.e. that every horoball based at $\zeta$ intersects the orbit $\Lambda\cdot x_{0}$. By Proposition 6.53 it holds that for a unipotent element $g\in N_{\zeta}$ the displacement function $x\mapsto d(gx,x)$ depends only on the horosphere $\mathcal{H}^{\prime}_{t}$ in which $x$ lies and that, for $x_{t}\in\mathcal{H}^{\prime}_{t}$ it holds that $\lim_{t\rightarrow\infty}d(gx_{t},x_{t})=0$ (up to reorienting the filtration $t\in\mathbb{R}$ so that $\eta(t)\in\mathcal{H}^{\prime}_{t})$. For a non- trivial element $\lambda_{\zeta}\in\Lambda\cap N_{\zeta}$ the previous paragraph therefore yields a sequence of elements $\lambda_{n}\in\Lambda$ such that $\lim_{n\rightarrow\infty}d(\lambda_{\zeta}\lambda_{n}x_{0},\lambda_{n}x_{0})=0$, contradicting the discreteness of $\Lambda$. I conclude that $\Lambda\not\leq\mathrm{Stab}_{G}(\zeta)$ for every $\zeta\in X(\infty)$. Assume that $H:=\big{(}\overline{\Lambda}^{Z}\big{)}^{\circ}$, the identity connected component of the Zariski closure of $\Lambda$, stabilizes a totally geodesic submanifold $Y\subset X$. By Lemma 6.48, $\Lambda_{0}:=\Lambda\cap H$ is of finite index in $\Lambda$, therefore $\Lambda_{0}\cap\mathrm{Stab}_{G}(\mathcal{H})$ is also a cocompact lattice in $\mathrm{Stab}_{G}(\mathcal{H})$. The fact that $\big{(}\Lambda_{0}\cap\mathrm{Stab}_{G}(\mathcal{H})\big{)}\cdot x_{0}$ is a cocompact metric lattice in $\mathcal{H}$ readily implies that $\big{(}\Lambda_{0}\cap\mathrm{Stab}_{G}(\mathcal{H})\big{)}\cdot y$ is a cocompact metric lattice in $\mathcal{H}_{y}=\mathcal{H}(y,\xi)$. This goes to show that there is no loss of generality in assuming $x_{0}\in\mathcal{H}\cap Y$. 
It follows that $\Lambda_{0}\cap\mathrm{Stab}_{G}(\mathcal{H})\cdot x_{0}\subset Y\cap\mathcal{H}$, and therefore $\mathcal{H}\subset\mathcal{N}_{D}(Y)$ for some $D>0$. A horosphere is a codimension-$1$ submanifold, implying that $Y$ is either all of $X$ or of codimension-$1$. The latter forces $Y=\mathcal{H}$, which is impossible since $\mathcal{H}$ is not totally geodesic ($\mathcal{H}$ is not convex, see Corollary 6.38). I conclude that $H$ does not stabilize any totally geodesic proper submanifold, and hence that $\Lambda$ is Zariski dense. ∎ ### 6.5 Proof of Theorem 6.1 I now complete the proof of the main sublinear rigidity theorem for $\mathbb{Q}$-rank $1$ lattices. ###### Proof of Theorem 6.1. If $\\{d_{\gamma}\\}_{\gamma\in\Gamma}$ is bounded, then $\Lambda$ is a lattice by Corollary 6.6 or Theorem 6.8, depending on the $\mathbb{R}$-rank of $G$. If $\\{d_{\gamma}\\}_{\gamma\in\Gamma}$ is unbounded, then Proposition 6.11 and Corollary 6.26 both hold. In $\mathbb{R}$-rank$\ 1$ the proof again follows immediately from Corollary 6.6 using Lemma 2.17 and Corollary 6.26. In higher rank, the results of Section 6.4 allows one to conclude that $\Lambda$ is an irreducible, discrete, Zariski dense subgroup that contains a horospherical lattice. By Theorem 6.4, this renders $\Lambda$ a lattice. It is a $\mathbb{Q}$-rank $1$ lattice as a result of Theorem 2.21. ∎ ###### Remark 6.55. The sublinear nature of the hypothesis in Theorem 1.6 induces coarse metric constraints. A horospherical lattice on the other hand is a very precise object. It is not clear how to produce unipotent elements in $\Lambda$, or even general elements that preserve some horosphere. The proof above produces a whole lattice of unipotent elements in $\Lambda$ (this is Corollary 6.43); it is also the only proof that I know which produces even a single unipotent element in $\Lambda$. ## References * [1] Paul Albuquerque. Patterson-Sullivan theory in higher rank symmetric spaces. GAFA Geom. Funct. Anal., 9(1):1–28, March 1999. * [2] U. Bader and E. Leibtag. Homomorphic images of algebraic groups. arXiv preprint arXiv:2212.03055, 2022. * [3] W. Ballmann, M. Gromov, and V. Schroeder. Manifolds of Nonpositive Curvature. Birkhäuser Boston, 1985. * [4] Werner Ballmann. Lectures on spaces of nonpositive curvature, volume 25. Springer Science & Business Media, 1995. * [5] Yves Benoist and Sébastien Miquel. Arithmeticity of discrete subgroups containing horospherical lattices. Duke Mathematical Journal, 169(8):1485 – 1539, 2020. * [6] Armand Borel. Linear Algebraic Groups. Springer New York, 1991. * [7] N. Bourbaki. Lie Groups and Lie Algebras: Chapters 1-3. Bourbaki, Nicolas: Elements of mathematics. Springer-Verlag, 1989. * [8] N. Bourbaki. Lie Groups and Lie Algebras: Chapters 7-9. Number 7-9 in Elements of mathematics. Springer Berlin Heidelberg, 2004\. * [9] Brian H. Bowditch. Geometrical finiteness for hyperbolic groups. Journal of Functional Analysis, 113(2):245–317, 1993. * [10] Brian H Bowditch. Geometrical finiteness with variable negative curvature. Duke Mathematical Journal, 77(1):229–274, 1995. * [11] Martin R Bridson and André Haefliger. Metric spaces of non-positive curvature, volume 319. Springer Science & Business Media, 2013. * [12] Claude Chevalley. Theory of Lie Groups. Princeton University Press, 1946. * [13] Yves Cornulier. On sublinear bilipschitz equivalence of groups. Ann. ENS, 52:1201–1242, 2019. * [14] SG Dani. On ergodic quasi-invariant measures of group automorphism. 
Israel Journal of Mathematics, 43(1):62–74, 1982. * [15] Cornelia Druţu. Quasi-isometric classification of non-uniform lattices in semisimple groups of higher rank. Geometric and Functional Analysis - GAFA, 10, 06 2000. * [16] Cornelia Druţu and Michael Kapovich. Geometric group theory. American Mathematical Society, 2017. * [17] Cornelia Druţu and Mark Sapir. Tree-graded spaces and asymptotic cones of groups. Topology, 44(5):959–1058, 2005. * [18] Patrick Eberlein. Lattices in spaces of nonpositive curvature. Annals of Mathematics, 111(3):435–476, 1980. * [19] Patrick Eberlein. Geometry of Nonpositively Curved Manifolds. Chicago Lectures in Mathematics. University of Chicago Press, 1996. * [20] A. Eskin and B. Farb. Quasi-flats and rigidity in higher rank symmetric spaces. Journal of the American Mathematical Society, 10:653–692, 1997\. * [21] Alex Eskin. Quasi-isometric rigidity of nonuniform lattices in higher rank symmetric spaces. Journal of the American Mathematical Society, 11(2):321–361, 1998\. * [22] Benson Farb. The quasi-isometry classification of lattices in semisimple lie groups. Mathematical Research Letters, 4:705–717, 1997. * [23] Mikolaj Fraczyk and Tsachik Gelander. Infinite volume and infinite injectivity radius. Annals of Mathematics, 197(1):389–421, 2023. * [24] Toshiaki Hattori. Geometric Limit Sets of Higher Rank Lattices. Proceedings of the London Mathematical Society, 90(3):689–710, 05 2005. * [25] Ernst Heintze and Hans-Christoph Im Hof. Geometry of horospheres. Journal of Differential Geometry, 12(4):481 – 491, 1977. * [26] Sigurdur Helgason. Differential Geometry, Lie Groups, and Symmetric Spaces. ISSN. Elsevier Science, 1979. * [27] Sigurdur Helgason. Differential geometry and symmetric spaces, volume 341. American Mathematical Soc., 2001. * [28] Lizhen Ji. From symmetric spaces to buildings, curve complexes and outer spaces. Innovations in Incidence Geometry, 10(none):33 – 80, 2009. * [29] M. Kapovich and B. Liu. Geometric finiteness in negatively pinched hadamard manifolds. Annales Academiae Scientiarum Fennicae Mathematica, 2019. * [30] Inkang Kim. Rigidity on symmetric spaces. Topology, 43(2):393–405, 2004. * [31] B. Kleiner and B. Leeb. Rigidity of quasi-isometries for symmetric spaces and euclidean buildings. Publ. Math. IHES, 86:115–197, 1997. * [32] Anthony W. Knapp. Lie Groups Beyond an Introduction. Progress in Mathematics. Birkhäuser Boston, 2002. * [33] Enrico Leuzinger. An exhaustion of locally symmetric spaces by compact submanifolds with corners. Inventiones Mathematicae, 121(1):389–410, December 1995.
# AGN feedback duty cycle in Planck SZ selected clusters using Chandra observations V. Olivares1, Y. Su1, P. Nulsen2,3, R. Kraft2, T. Somboonpanyakul4, F. Andrade-Santos2, C. Jones2, W. Forman2 1 Department of Physics and Astronomy, University of Kentucky, 505 Rose Street, Lexington, KY 40506, USA 2 Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA 3 ICRAR, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia 4 Kavli Institute for Particle Astrophysics & Cosmology, P.O. Box 2450, Stanford University, Stanford, CA 94305, USA (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract We present a systematic study of X-ray cavities using archival Chandra observations of nearby galaxy clusters selected by their Sunyaev-Zel’dovich (SZ) signature in the Planck survey, which provides a nearly unbiased mass-selected sample to explore the entire AGN feedback duty cycle. Based on X-ray image analysis, we report that 30 of the 164 clusters show X-ray cavities, which corresponds to a detection fraction of 18%. After correcting for spatial resolution to match the high-$z$ SPT–SZ sample, the detection fraction decreases to 9%, consistent with the high-$z$ sample, hinting that the AGN feedback has not evolved across almost 8 Gyrs. Our finding agrees with the lack of evolution of the cool-core cluster fraction. We calculate the cavity power, $P_{\rm cav}$, and find that most systems in our sample have enough AGN heating to offset the radiative losses of the intracluster medium. ###### keywords: galaxies: clusters: general – intergalactic medium – X-rays: galaxies ††pubyear: 2022 ## 1 Introduction Feedback from active galactic nuclei (AGN) jets has been proposed to solve the cooling flow problem (see Fabian 1994; McNamara et al. 2005, for a review), although the details of how AGN feedback counteracts the radiative losses of the intracluster medium (ICM) in clusters are still not fully understood. Early observations with the X-ray satellite ROSAT revealed surface brightness deficits that appear to be spatially aligned with regions of radio emission in the ICM of a few galaxy clusters (Boehringer et al., 1993; Carilli et al., 1994). Nowadays, with the superb resolution of the X-ray Chandra observatory, it has become clear that the central AGN located in the Brightest Cluster Galaxy (hereafter BCG) continuously interacts with the surrounding ICM, producing not only the X-ray surface brightness depressions known as X-ray cavities (or bubbles), but also shocks and ripples (e.g., Fabian et al., 2006). In addition, high-resolution radio observations by the Jansky Very Large Array (JVLA) have shown that extended radio lobes inflated by the central AGN may excavate these X-ray cavities by pushing aside the surrounding hot gas. Accordingly, they are expected to be filled with radio emission (e.g., Bîrzan et al., 2020). There have also been detections of the so-called “ghost cavities” at low radio frequency, which are believed to trace a past AGN outburst, for which the radio emission has faded away. More importantly, the X-ray cavities and bubbles may provide a direct measurement of the work done by the radio-mode feedback on the ICM (e.g., Gitti et al., 2010).
The X-ray cavities are not only supposed to carry enough energy to balance the cooling losses of the X-ray emitting plasma (Bîrzan et al., 2008), but also play a key role in the formation of extended multiphase filaments observed in cooling flow clusters (e.g., Olivares et al., 2019, 2022; Russell et al., 2019). Therefore, investigating the physical properties of X-ray cavities can improve our understanding of AGN feedback and its impact on galaxy formation and evolution. Currently, most X-ray cavity studies of clusters, groups and elliptical galaxies are based on Chandra X-ray observations for both individual systems and dedicated surveys (see Bîrzan et al. 2004, 2008; Rafferty et al. 2006; Nulsen et al. 2009; Dong et al. 2010; O’Sullivan et al. 2011; Hlavacek-Larrondo et al. 2013; Shin et al. 2016; Panagoulia et al. 2014; Bîrzan et al. 2017). One of the main limitations of existing studies based on X-ray selection methods is that they are often biased towards bright cool-core systems and, consequently, against X-ray faint clusters. A complete unbiased sample of galaxy clusters is desirable to obtain heating and cooling balance constraints (Gitti et al., 2010), and to understand the duty cycle of AGN feedback, which is estimated as the fraction of systems displaying bubbles inflated by the central AGN. Millimeter-wave surveys utilizing the SZ effect have the advantage of providing nearly mass-limited samples, as the impact of the SZ effect on the CMB (cosmic microwave background) brightness temperature is independent of redshift. This allows us to explore the entire AGN feedback duty cycle. Examples of instruments used to undertake SZ surveys include the Planck satellite (Planck Collaboration et al., 2011), the South Pole Telescope (SPT) (Bleem et al., 2015, 2020), and the Atacama Cosmology Telescope (Hincks et al., 2010; Hilton et al., 2018). For the high-$z$ Universe, Hlavacek-Larrondo et al. (2015, hereafter HL15) performed a Chandra study of 83 clusters selected from the SPT-SZ survey, and found X-ray surface brightness depressions in 6 clusters consistent with radio jets inflating X-ray cavities in their ICM. Here, we present a study of X-ray cavities in the Planck SZ survey, which provides a unique and unbiased view of AGN feedback in the nearby ($z<0.35$) Universe and anchors the evolution of AGN feedback over the past 8 Gyrs. This paper examines Chandra observations of 164 Planck SZ clusters with the aim of identifying X-ray bubbles. In Section 2 we describe the Planck SZ sample. Section 3 presents the X-ray Chandra observations and describes the methods used to identify X-ray surface brightness depressions. Section 4 is devoted to the results and their implications. Section 5 presents the limitations of the present study. Finally, Section 6 summarizes our findings. Throughout this paper, we adopted a standard cosmology with $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m}$=0.3. ## 2 Sample The Chandra-Planck Legacy Program for Massive Clusters of Galaxies (Jones, 2012) is a deep X-ray survey of massive Planck clusters with redshift $\leq 0.35$ detected over almost the full sky ($|b|>15\deg$) through the Sunyaev-Zel’dovich effect in the first Planck data release of early 2011 (Planck Collaboration et al., 2011). The observations are constructed by combining the Chandra XVP (PI: Jones) and HRC Guaranteed Time Observations (PI: Murray).
At least 10,000 source counts have been collected for each cluster to derive its gas properties out to $R_{\rm 500}$ (Andrade-Santos et al., 2017, hereafter AS17). The Chandra-Planck sample is a nearly unbiased, mass-selected sample, covering the mass range $7\times 10^{13}{\rm M}_{\odot}\leq M_{500}\leq 2\times 10^{15}{\rm M}_{\odot}$. The sample consists of 164 clusters, of which a small fraction (35) contain pronounced substructures (subclusters), visually identified in the X-ray images (AS17). Central density is the best known proxy for the central cooling time (e.g., Su et al., 2020), and has been widely used to classify CC and NCC clusters (e.g., Ruppin et al., 2021; Andrade-Santos et al., 2017). Based on the central density classification of $n_{\rm core}=1.5\times 10^{-2}$ cm$^{-3}$, as presented in AS17, 63 clusters are classified as CC and 101 as NCC clusters. Deprojected temperature and density profiles of clusters in this sample are taken from AS17, to which we refer the reader for a detailed description. ## 3 Observation and Analysis For each cluster, we used all available Chandra observations, including both the ACIS-I and ACIS-S CCDs. The data reduction and calibration of Chandra observations were carried out using the Chandra Interactive Analysis of Observations (CIAO) software 4.12 and the Chandra Calibration Database (CALDB) 4.9.2.1. The observations were reprocessed using the chandra_repro tool of CIAO. Standard blank sky background files and readout artifacts were subtracted. Point sources were detected in the 0.5-8.0 keV energy band, then masked before performing the spectral and imaging analysis of the clusters. Exposure corrected images were produced in the 0.5–2.0 keV energy band, and used for the X-ray cavity analysis. Unsharp-masked images were produced to help identify X-ray cavities using the CIAO tool aconvolve. The original image was smoothed twice, using a small- and a large-scale Gaussian kernel. The highly smoothed image was then subtracted from the less smoothed image, enhancing the inhomogeneities in the residual image. We tried different smoothing lengths for the more heavily smoothed images based on the large scale of the cluster emission, ranging from 10 up to 60 kpc. For the less smoothed images, we tried smoothing lengths comparable to the physical size of a cavity, from 1 up to 20 kpc (e.g., Rafferty et al., 2006). We also examined the residual image after subtracting an elliptical double beta model, which was obtained by fitting a slightly smoothed 0.5–2.0 keV image. The second beta model accounts for excess emission from the cool core. We classified each identified cavity as Certain (C) or Potential (P). The first two co-authors independently looked for X-ray cavities and then classified them based on the significance of each cavity. Cavities were classified as certain if they appear as a clearly visible depression in the original image as well as in the unsharp-masked or double $\beta$-model subtracted image. In Figure 1 we present an example of the methods employed to identify cavities for clusters with certain (C), potential (P), and no cavities. A cavity was classified as potential if there was only a hint of an X-ray depression in the original X-ray image, but it was visible in the unsharp-masked or double $\beta$-model subtracted image. The number of counts in the central region ($<$20 kpc) of the clusters with potential cavities is too low for the cavities to be certain (see also Section 5). Clusters without depressions were classified as lacking cavities.
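The unsharp-masking step just described can be sketched compactly. The snippet below is a minimal illustration rather than the exact pipeline: it assumes the exposure-corrected 0.5–2.0 keV image is already loaded as a 2D NumPy array with point sources masked, and it uses scipy Gaussian filtering as a stand-in for the CIAO aconvolve tool; the kernel widths are placeholders within the ranges quoted above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, small_kpc, large_kpc, kpc_per_pixel):
    """Subtract a heavily smoothed image from a lightly smoothed one so that
    surface-brightness depressions (candidate cavities) stand out."""
    lightly_smoothed = gaussian_filter(image, sigma=small_kpc / kpc_per_pixel)
    heavily_smoothed = gaussian_filter(image, sigma=large_kpc / kpc_per_pixel)
    return lightly_smoothed - heavily_smoothed

# Example usage with illustrative smoothing lengths (1-20 kpc and 10-60 kpc in the text):
# image = ...  # exposure-corrected 0.5-2.0 keV image as a NumPy array
# residual = unsharp_mask(image, small_kpc=5.0, large_kpc=30.0, kpc_per_pixel=1.0)
```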
We also consider clusters with dark annuli or rings created by bright excesses and asymmetries of the cluster distribution as lacking cavities, as such surface brightness depressions are not consistent with bubbles inflated by radio jets. Figure 1: Example of the methods employed to look for X-ray cavities for a cluster with Certain (top), Potential (middle) detected cavities, and without (bottom) cavities. From left to right: 0.5-2.0 keV original image, double $\beta$-subtracted image, and unsharp image. Detected cavities are displayed with green ellipses. Figure 2: Left panel: Detection fraction of cavities as a function of redshift. The detection fraction for the entire sample is shown with a green square. The detection fraction corrected by resolution to match the SPT-SZ sample is shown with a green circle. The SPT-SZ detection fraction is displayed with an orange circle (HL15). The grey arrows correspond to the detection fraction when only “certain” cavities are taken into account. Right panel: Cavity size versus redshift color-coded by the number of counts within the central 20 kpc. We have also included cavity measurements from the high-$z$ SPT–SZ sample (HL15), shown with black symbols. Certain (C) cavities are displayed with circles, whereas potential (P) cavities with square symbols. The dashed black line corresponds to two times the size of the Chandra PSF as a function of redshift. In both panels, the upper axis gives the lookback times in Gyr. ## 4 Results and Discussion ### 4.1 Detection fraction of cavities and evolution Overall, we detected 67 X-ray cavities in 30 clusters, of which 32 are classified as certain (C) and 35 as potential (P) cavities. From the CC cluster sub-sample, we find 29 clusters with cavities, of which 12 clusters reveal mostly certain cavities and 17 potential cavities. The remaining 34 CC clusters lack X-ray depressions. We also find in one NCC cluster, G269.51+26.42, two potential cavities located in opposite sides of the cluster center. The rest of the NCC clusters show no hint of X-ray depressions. We find that most of the detected cavities come in pairs, and they are usually located on opposite sides of the cluster core, as expected, considering that X-ray cavities are believed to be inflated by radio jets. It is worth mentioning that 29/30 of clusters with cavities also have radio emission associated with the central source (Olivares et al. in prep). Some clusters show multiple X-ray cavities, likely due to either multiple AGN outbursts or the disintegration of large cavities, while five clusters have single cavities (e.g., Morsony et al., 2010; Canning et al., 2013). In total, 18% of all clusters in our sample, including both CC and NCC clusters contain X-ray cavities (see Fig 2, left panel), a few times smaller than the fractions found by previous studies of nearby clusters and about twice as high as that of the high-$z$ SPT–SZ sample (7%–9%; HL15). We have included uncertainties associated with the fraction of clusters with cavities using the Wilson interval method (Brown et al., 2001). We note that previous studies of nearby clusters tend to be biased towards X-ray bright clusters. Furthermore, our findings suggest a slightly lower duty cycle of $\sim$46%, as 28 of the 63 CC clusters ($n_{\rm core}$>1.5$\times$10-2 cm-3) have detected cavities (see Fig. 2, left panel), compared to previous studies which predict AGN feedback duty cycle to be high (60–90%, Bîrzan et al. 2012; Fabian 2012; Panagoulia et al. 2014). 
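The uncertainty on the quoted detection fractions can be reproduced directly from the Wilson interval of Brown et al. (2001). The sketch below is a minimal illustration for the 30 detections out of 164 clusters; the 95% confidence level is an assumption made here for concreteness.

```python
from math import sqrt
from scipy.stats import norm

def wilson_interval(successes, total, confidence=0.95):
    """Wilson score interval for a binomial proportion (Brown et al. 2001)."""
    z = norm.ppf(0.5 + confidence / 2.0)
    p_hat = successes / total
    denom = 1.0 + z**2 / total
    centre = (p_hat + z**2 / (2.0 * total)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1.0 - p_hat) / total + z**2 / (4.0 * total**2))
    return centre - half_width, centre + half_width

low, high = wilson_interval(30, 164)   # 30 of 164 clusters show cavities
print(f"detection fraction = {30/164:.2f}, 95% interval = [{low:.2f}, {high:.2f}]")
```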
We stress, however, that different definitions have been used to classify cool core clusters. At high-$z$, HL15 found a lower limit of $\sim$11% on the duty cycle for the SPT-SZ sample, as only 6 of the 52 clusters with signs of cooling reveal cavities. To explore the evolution of the detection fraction of cavities, we compare our results with those found in the high-$z$ SPT-SZ sample (HL15). Bear in mind that the SPT–SZ sample is limited by resolution (see Fig 2, right panel), with the smallest cavities detected in this sample having sizes of $\sim$10 kpc. The latter is due to a combination of the larger Chandra PSF at high $z$ and the lower number of counts compared to low-$z$ clusters. To account for that limitation, we compute the detection fraction taking only clusters with cavity sizes $\gtrsim$10 kpc to match the observing bias of the SPT–SZ sample. That yields a detection fraction of 9%, which is in good agreement with the SPT–SZ sample (HL15). In the same vein, if we consider only clusters with “certain” (C) cavities and sizes $\gtrsim$10 kpc, the detection fraction of the Planck sample drops to 3%, close to the 2% obtained in the high-$z$ SPT–SZ sample when only clearly detected cavities are taken into account. These findings suggest that the AGN feedback duty cycle has remained constant over almost 8 Gyrs. This trend strongly agrees with the lack of evolution in the fraction of cool-core clusters ($\sim$40–60% across the same redshift range; Ruppin et al. 2021), which is linked to the ICM cooling. An absence of evolution in the detection fraction of cavities could imply that the mechanical feedback in CC clusters has been in place and maintained across almost 8 Gyrs. All the above is quite intriguing given that the AGN-hosting BCG fraction in the SPT-SZ cluster sample, selected from infrared WISE observations, appears to be strongly evolving with redshift (see Somboonpanyakul et al. 2022; Bîrzan et al. 2017; Hlavacek-Larrondo et al. 2013; also Silverman et al. 2009; Haggard et al. 2010 for related studies). The authors argue that nearby clusters may grow by dry mergers without increasing the AGN activity, whereas high-$z$ clusters may accrete cold gas from gas-rich mergers and satellite interactions, which could drive a massive inflow of cold gas towards the central region, increasing the AGN activity. With more fuel available at high-$z$, the accretion rate is more likely to reach the Eddington limit, leading to a transition from a mechanical feedback state to a radiative feedback mode (e.g., Churazov et al., 2005; Dubois et al., 2012; Russell et al., 2013). Therefore, the lack of evolution in the cavity fraction at high-$z$ may be due to the dominance of BCGs with radiatively efficient AGNs (see also Hlavacek-Larrondo et al. 2013). ### 4.2 Cooling luminosity versus Cavity power Figure 3: Comparison between the mechanical power being injected by the AGN in the BCG ($P_{\rm cav}$) and the cooling luminosity ($L_{\rm cool}$) of the cluster at 7.7 Gyrs. The dotted lines, from bottom to top, correspond to pV, 4pV, and 16pV per cavity, respectively. For Planck-selected clusters, certain (C) cavities are shown with filled green circles, whereas potential (P) cavities are shown with open green circles. We included X-ray cavities from the high-$z$ SPT–SZ sample (HL15) with filled and open orange circles for the certain and potential cavities, respectively. We have also added nearby clusters with X-ray cavities from the Rafferty et al. (2006) sample, shown with purple circles.
One goal of this work is to test whether the AGN is able to compensate for the cooling losses of the ICM by the heating caused by radio jets. We used the cavity power ($P_{\rm cav}$) as a proxy for the mechanical power released by the AGN. The $P_{\rm cav}$ was estimated by dividing the total enthalpy of each cavity ($E_{\rm cav}=4pV$) by its age. Here $p$ is the thermal pressure of the ICM at the projected location of the cavity, defined as the center of each ellipse, and $V$ is the cavity volume. We assumed that the cavities have a prolate shape. The age of the cavity can be given by the buoyant rise time, the refill time, or the sound crossing time (McNamara et al., 2000; Bîrzan et al., 2004; McNamara et al., 2005). For the purpose of this work, we used the buoyant rise time ($t_{\rm buoy}$) as the age of the cavity, as done in previous studies. The $t_{\rm buoy}$ corresponds to the time for the cavity to rise buoyantly at its terminal velocity, and is defined as $t_{\rm buoy}=R/v_{\rm t}=R\sqrt{SC/(2gV)}$, where $S$ is the cross-section of the bubble ($S=\pi r_{\rm b}^{2}$) and $C$ (= 0.75) is the drag coefficient (Churazov et al., 2001). Lastly, the local gravitational acceleration, $g$, was derived assuming hydrostatic equilibrium. We drew an ellipse model on top of each identified X-ray cavity, as done in previous works (e.g., Dong et al., 2010; Shin et al., 2016). Accordingly, the volume of the cavities is $V=4\pi r_{\rm b}^{2}r_{\rm a}/3$, where $r_{\rm a}$ is the semi-major axis and $r_{\rm b}$ is the semi-minor axis of each X-ray cavity (see Table 1). We also include in Table 1 the significance of the detection for each cavity. The significance was calculated as the ratio of the surface brightness of the surrounding background, measured within the same aperture size as the cavity, to that of the cavity. Certain cavities have an average significance of 2.2, while potential cavities have an average significance of 1.5.

Table 1: Cavity properties

| Cluster name | Class | $r_{\rm a}$ (kpc) | $r_{\rm b}$ (kpc) | R (kpc) | PA (deg.) | $t_{\rm buoy}$ ($10^{7}$ yr) | $P_{\rm cav}$ ($10^{44}$ erg s$^{-1}$) | $L_{\rm cool}$ ($10^{44}$ erg s$^{-1}$) | cav. significance |
|---|---|---|---|---|---|---|---|---|---|
| G021.09+33.25 | C | 4.1 | 2.6 | 6 | 140 | 0.9${}^{+0.6}_{-3.3}$ | 0.9${}^{+1.6}_{-0.2}$ | 10.1 | 2.2 |
| G021.09+33.25 | C | 5.4 | 3.6 | 7 | 70 | 1.6${}^{+2.2}_{-1.3}$ | 1.2${}^{+1.1}_{-1.0}$ | 10.1 | 2.7 |

(This table is available in its entirety in machine-readable form.) Motivated by previous studies (e.g., Rafferty et al., 2006), we calculate the X-ray cooling luminosity, $L_{\rm cool}$, within a volume where the deprojected (isobaric) cooling time, $t_{\rm cool}$, is 7.7 Gyrs (see Table 1). It is representative of the epoch of the last major merger; since then, clusters have been relaxed and a cooling flow could develop (Rafferty et al., 2006). For the cooling luminosity, we used $L_{\rm cool}=\int n_{\rm e}n_{\rm H}\Lambda(T,Z)dV$, where $\Lambda(T,Z)$ is the cooling function, which depends on the temperature, $T$, and metallicity, $Z$, of the hot gas. We use the cooling functions from Gnat & Sternberg (2007), assuming a metallicity of $Z=1~{}Z_{\odot}$, since typical CC clusters have solar or nearly solar metallicity within their cores (e.g., Molendi & Pizzolato, 2001; Mernier et al., 2016). In Figure 3 we compare the mechanical power released by the AGN located in the central BCG and the cooling luminosity ($L_{\rm cool}$) of each cluster.
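The quantities defined above can be combined into a single estimate per cavity. The sketch below is a minimal illustration in Python, not the actual pipeline: the input geometry, pressure, and gravitational acceleration are placeholder values rather than entries from Table 1, and in practice $p$ and $g$ come from the deprojected profiles of each cluster.

```python
import numpy as np

KPC = 3.086e21   # cm per kpc
MYR = 3.156e13   # s per Myr

def cavity_power(r_a_kpc, r_b_kpc, R_kpc, p_cgs, g_cgs, drag_c=0.75):
    """P_cav = 4pV / t_buoy for a prolate cavity, with t_buoy = R * sqrt(S*C / (2*g*V))."""
    r_a, r_b, R = r_a_kpc * KPC, r_b_kpc * KPC, R_kpc * KPC
    V = 4.0 * np.pi * r_b**2 * r_a / 3.0      # prolate cavity volume
    S = np.pi * r_b**2                        # bubble cross-section
    t_buoy = R * np.sqrt(S * drag_c / (2.0 * g_cgs * V))
    return 4.0 * p_cgs * V / t_buoy, t_buoy / MYR   # [erg/s], [Myr]

# Placeholder inputs (illustrative only): a 5 x 3 kpc cavity, 7 kpc from the core,
# ambient pressure ~3e-10 erg/cm^3 and local gravity ~5e-8 cm/s^2.
P_cav, t_myr = cavity_power(5.0, 3.0, 7.0, p_cgs=3e-10, g_cgs=5e-8)
print(f"P_cav ~ {P_cav:.1e} erg/s, t_buoy ~ {t_myr:.0f} Myr")
```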
As a matter of comparison, we included galaxy groups and elliptical galaxies from Rafferty et al. (2006), as well as high-$z$ clusters from the SPT–SZ sample (HL15). Notably, our Planck sample has systematically lower mechanical powers than the high-$z$ SPT–SZ sample, indicating our ability to detect smaller cavities. The smallest X-ray cavities detected in the SPT–SZ sample have sizes on the order of $\sim$10 kpc, whereas the resolution for the Planck clusters, as they are at lower $z$, is on the order of $\sim$2.5 kpc. The scatter in the $L_{\rm cool}$ versus $P_{\rm cav}$ relation is $\sim$20% smaller in our sample than in the high-$z$ SPT–SZ sample. When making a fair comparison with the high-$z$ SPT-SZ sample by taking only cavities with sizes $\gtrsim$10 kpc, we find that the scatter is 65% smaller than in the SPT-SZ sample. The higher scatter in this relation for high-$z$ clusters is consistent with their being fueled mainly through wet mergers (Somboonpanyakul et al., 2022). As shown in Fig 3, $L_{\rm cool}$ and the $P_{\rm cav}$ released by the jets are positively correlated, indicating that the energy released by the AGN is sufficient to balance the radiative losses of the ICM within the cooling radius for most of the sources in the sample. Some of those objects may require additional heating from another mechanism, such as thermal conduction and shocks (e.g., Pope et al., 2005). However, we expect that some of these clusters may be in a cooling phase, removing the need for an additional heating source. The nature of the AGN feedback is cyclic and does not always require a balance between the cooling luminosity and the AGN heating (McNamara & Nulsen, 2007). This is likely reflected in the scatter of the $L_{\rm cool}$ versus $P_{\rm cav}$ relation. In that sense, the AGN power is variable, and the objects change their $P_{\rm cav}$ depending on what phase of the AGN feedback cycle they are observed in. Another source of scatter comes from dynamically disturbed clusters due to either sloshing motions or mergers. These mechanisms move the hot gas out of the central BCG to large distances, producing a lower $L_{\rm cool}$ for a given $P_{\rm cav}$ value, as found for clusters with higher centroid shifts indicative of dynamically disturbed atmospheres (Olivares et al. in prep). ## 5 Limitations One of this work’s limitations is the fact that we are probably missing cavities due to shallow X-ray observations, in particular in high-$z$ clusters (e.g., Diehl et al., 2008; Bîrzan et al., 2012). As pointed out by several studies, X-ray cavities are more easily detected in clusters with stronger cool cores, as the contrast between the depression and surroundings is sharper, and it is more difficult to find bubbles in high-redshift clusters due to the lack of counts. The detectability of cavities also decreases with their radius (Enßlin & Heinz, 2002). More importantly, cavities that have sizes below the resolution (e.g., cavity size $\leq 2$ kpc for clusters at $z=0.1$) will be undetectable in such an analysis. As shown in Figure 2, this effect will increase at high-$z$. For example, for clusters at $z>0.5$ only cavities with sizes $\geq 6$ kpc can be detected. We also note that sources with more than 2000 counts in the central region tend to have “certain” detected X-ray cavities (circle symbols). Therefore, we stress that deeper Chandra follow-up observations are required to confirm the presence of any potential X-ray cavity, especially in high-$z$ clusters.
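To make the resolution limit above concrete, the following minimal sketch converts an assumed Chandra on-axis PSF of roughly 0.5 arcsec into the smallest resolvable cavity size, taken here as twice the PSF (as in the dashed line of Fig 2), using the cosmology adopted in this paper; the PSF value and the factor of two are illustrative assumptions, not values from the analysis.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this paper
PSF_ARCSEC = 0.5                        # assumed on-axis Chandra PSF

def min_detectable_cavity_kpc(z, psf_arcsec=PSF_ARCSEC):
    """Smallest cavity size resolvable at redshift z, taken as twice the PSF."""
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (2.0 * psf_arcsec * u.arcsec * kpc_per_arcsec).value

for z in (0.1, 0.35, 0.5):
    print(f"z = {z}: ~{min_detectable_cavity_kpc(z):.1f} kpc")
# Gives roughly 2 kpc at z = 0.1 and 6 kpc at z = 0.5, in line with the text.
```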
Aside from the data quality, other effects may also be interfering with the cavity detectability, such as orientation, location, and angular size (see Enßlin & Heinz 2002; Diehl et al. 2008; Brüggen et al. 2009 for more details). As pointed out by Enßlin & Heinz (2002), the detectability of cavities decreases with distance to the center, and for a cavity moving in the plane of the sky, the contrast decreases slowly. Cavities that lie off the plane of the sky have reduced contrast and therefore are harder to detect. To quantify this projection effect, we assume a random distribution of the angle of the cavity relative to the plane of the sky, a typical cavity size of 10 kpc, and a typical beta profile for the ICM distribution ($r_{\rm c}=20$, $\beta=3/4$). At an average projected distance of 30 kpc, 20%–30% of the cavities would have a contrast below our detection limit and would have been missed in our study. It should be noted that the $P_{\rm cav}-L_{\rm cool}$ relation is also affected by projection effects, likely introducing scatter. As pointed out by Diehl et al. (2008), all the physical quantities involved in the cavity power, $P_{\rm cav}$, such as density, temperature, and pressure, are measured at the projected distance from the cavity to the center rather than the true distance. The former corresponds to a lower limit. The pressure increases towards the center, leading to an overestimation of the ambient pressure at the cavity position. On the other hand, the cavity ages will be underestimated as they are proportional to the cavity distance. Both mentioned effects will bias the cavity power upwards. ## 6 Conclusions We have investigated the mechanical AGN feedback mechanism in central cluster galaxies using archival X-ray Chandra observations of 164 Planck-selected clusters to search for X-ray cavities. (i) Using several techniques to look for X-ray cavities, including inspection of the original image, a model-subtracted image, and an unsharp-masked image, we find 65 X-ray cavities in 29 systems out of 63 CC clusters. Among them, 12 systems have clearly detected cavities, whereas 17 have only potential depressions. Two potential cavities were also found in one NCC cluster. (ii) We measured a total detection fraction of X-ray cavities of $\sim$18%, twice the detection rate of the high-$z$ SPT–SZ sample, indicating that clusters have radio-mode feedback only 18% of the time. Nevertheless, our detection fraction of 9% is close to the high-$z$ SPT–SZ sample when taking only cavities with sizes $\gtrsim$10 kpc to match the resolution of the SPT-SZ sample. We interpreted this as a lack of evolution of the AGN feedback cycle across cosmic time. (iii) We find that the AGN heating traced by the power of the X-ray cavities alone is able to balance the radiative losses of the ICM in our sample. Our sources have slightly lower cavity power per cavity than high-$z$ massive clusters from the SPT-SZ sample due to smaller cavities being detected in our sample. Future high-resolution X-ray observations from the Chandra satellite and the upcoming Advanced X-ray Imaging Satellite (AXIS) telescope will be needed to find more cavities in the faintest clusters and confirm the discussed findings in high-$z$ clusters. ## Acknowledgments This research has made use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. V.O. and Y.S. were supported by NSF grant 2107711, Chandra X-ray Observatory grant GO1-22126X, and NASA grant 80NSSC21K0714.
## Data Availability The Chandra raw data used in this paper are available to download at the HEASARC Data Archive website111https://heasarc.gsfc.nasa.gov/docs/archive.htm. ## References * Andrade-Santos et al. (2017) Andrade-Santos F., et al., 2017, ApJ, 843, 76 * Bîrzan et al. (2004) Bîrzan L., Rafferty D. A., McNamara B. R., Wise M. W., Nulsen P. E. J., 2004, ApJ, 607, 800 * Bîrzan et al. (2008) Bîrzan L., McNamara B. R., Nulsen P. E. J., Carilli C. L., Wise M. W., 2008, ApJ, 686, 859 * Bîrzan et al. (2012) Bîrzan L., Rafferty D. A., Nulsen P. E. J., McNamara B. R., Röttgering H. J. A., Wise M. W., Mittal R., 2012, MNRAS, 427, 3468 * Bîrzan et al. (2017) Bîrzan L., Rafferty D. A., Brüggen M., Intema H. T., 2017, MNRAS, 471, 1766 * Bîrzan et al. (2020) Bîrzan L., et al., 2020, MNRAS, 496, 2613 * Bleem et al. (2015) Bleem L. E., et al., 2015, ApJS, 216, 27 * Bleem et al. (2020) Bleem L. E., et al., 2020, ApJS, 247, 25 * Boehringer et al. (1993) Boehringer H., Voges W., Fabian A. C., Edge A. C., Neumann D. M., 1993, MNRAS, 264, L25 * Brown et al. (2001) Brown L. D., Cai T., DasGupta A., 2001, Statistical Stadistical science, 16, 101 * Brüggen et al. (2009) Brüggen M., Scannapieco E., Heinz S., 2009, MNRAS, 395, 2210 * Canning et al. (2013) Canning R. E. A., et al., 2013, MNRAS, 435, 1108 * Carilli et al. (1994) Carilli C. L., Perley R. A., Harris D. E., 1994, MNRAS, 270, 173 * Churazov et al. (2001) Churazov E., Brüggen M., Kaiser C. R., Böhringer H., Forman W., 2001, ApJ, 554, 261 * Churazov et al. (2005) Churazov E., Sazonov S., Sunyaev R., Forman W., Jones C., Bohringer H., 2005, MNRAS: Letters, 363, L91 * Diehl et al. (2008) Diehl S., Li H., Fryer C. L., Rafferty D., 2008, ApJ, 687, 173 * Dong et al. (2010) Dong R., Rasmussen J., Mulchaey J. S., 2010, The Astrophysical Journal, 712, 883 * Dubois et al. (2012) Dubois Y., Devriendt J., Slyz A., Teyssier R., 2012, MNRAS, 420, 2662 * Enßlin & Heinz (2002) Enßlin T. A., Heinz S., 2002, A&A, 384, L27 * Fabian (1994) Fabian A. C., 1994, ARA&A, 32, 277 * Fabian (2012) Fabian A., 2012, ARA&A, 50, 455 * Fabian et al. (2006) Fabian A. C., Sanders J. S., Taylor G. B., Allen S. W., Crawford C. S., Johnstone R. M., Iwasawa K., 2006, MNRAS, 366, 417 * Gitti et al. (2010) Gitti M., O’Sullivan E., Giacintucci S., David L. P., Vrtilek J., Raychaudhury S., Nulsen P. E. J., 2010, ApJ, 714, 758 * Gnat & Sternberg (2007) Gnat O., Sternberg A., 2007, ApJS, 168, 213 * Haggard et al. (2010) Haggard D., Green P. J., Anderson S. F., Constantin A., Aldcroft T. L., Kim D.-W., Barkhouse W. A., 2010, ApJ, 723, 1447 * Hilton et al. (2018) Hilton M., et al., 2018, ApJS, 235, 20 * Hincks et al. (2010) Hincks A. D., et al., 2010, ApJS, 191, 423 * Hlavacek-Larrondo et al. (2013) Hlavacek-Larrondo J., Fabian A. C., Edge A. C., Ebeling H., Allen S. W., Sanders J. S., Taylor G. B., 2013, MNRAS, 431, 1638 * Hlavacek-Larrondo et al. (2015) Hlavacek-Larrondo J., et al., 2015, ApJ, 805, 35 * Jones (2012) Jones C., 2012, A Chandra-Planck Legacy Program for Massive Clusters of Galaxies, Chandra Proposal * McNamara & Nulsen (2007) McNamara B., Nulsen P., 2007, ARA&A, 45, 117 * McNamara et al. (2000) McNamara B. R., et al., 2000, ApJ, 534, L135 * McNamara et al. (2005) McNamara B. R., Nulsen P. E. J., Wise M. W., Rafferty D. A., Carilli C., Sarazin C. L., Blanton E. L., 2005, Nature, 433, 45 * Mernier et al. (2016) Mernier F., de Plaa J., Pinto C., Kaastra J. S., Kosec P., Zhang Y. 
Y., Mao J., Werner N., 2016, A&A, 592, A157 * Molendi & Pizzolato (2001) Molendi S., Pizzolato F., 2001, ApJ, 560, 194 * Morsony et al. (2010) Morsony B. J., Heinz S., Brüggen M., Ruszkowski M., 2010, MNRAS, 407, 1277 * Nulsen et al. (2009) Nulsen P., Jones C., Forman W., Churazov E., McNamara B., David L., Murray S., 2009, in Heinz S., Wilcots E., eds, American Institute of Physics Conference Series Vol. 1201, The Monster’s Fiery Breath: Feedback in Galaxies, Groups, and Clusters. pp 198–201 (arXiv:0909.1809), doi:10.1063/1.3293033 * O’Sullivan et al. (2011) O’Sullivan E., Giacintucci S., David L. P., Gitti M., Vrtilek J. M., Raychaudhury S., Ponman T. J., 2011, ApJ, 735, 11 * Olivares et al. (2019) Olivares V., et al., 2019, A&A, 631, A22 * Olivares et al. (2022) Olivares V., et al., 2022, arXiv e-prints, p. arXiv:2201.07838 * Panagoulia et al. (2014) Panagoulia E. K., Fabian A. C., Sanders J. S., Hlavacek-Larrondo J., 2014, MNRAS, 444, 1236 * Planck Collaboration et al. (2011) Planck Collaboration et al., 2011, A&A, 536, A1 * Pope et al. (2005) Pope E. C. D., Pavlovski G., Kaiser C. R., Fangohr H., 2005, MNRAS, 364, 13 * Rafferty et al. (2006) Rafferty D. A., McNamara B. R., Nulsen P. E. J., Wise M. W., 2006, ApJ, 652, 216 * Ruppin et al. (2021) Ruppin F., McDonald M., Bleem L. E., Allen S. W., Benson B. A., Calzadilla M., Khullar G., Floyd B., 2021, ApJ, 918, 43 * Russell et al. (2013) Russell H. R., McNamara B. R., Edge A. C., Hogan M. T., Main R. A., Vantyghem A. N., 2013, MNRAS, 432, 530 * Russell et al. (2019) Russell H. R., et al., 2019, MNRAS, 490, 3025 * Shin et al. (2016) Shin J., Woo J.-H., Mulchaey J. S., 2016, ApJS, 227, 31 * Silverman et al. (2009) Silverman J. D., et al., 2009, ApJ, 695, 171 * Somboonpanyakul et al. (2022) Somboonpanyakul T., et al., 2022, arXiv e-prints, p. arXiv:2201.08398 * Su et al. (2020) Su Y., et al., 2020, MNRAS, 498, 5620
# Interactive Region-of-Interest Discovery using Exploratory Feedback Behrooz Omidvar-Tehrani Grenoble AI Institute ###### Abstract In this paper, we propose a geospatial data management framework called IRIDEF which captures and analyzes the user’s exploratory feedback for an enriched guidance mechanism in the context of interactive analysis. We argue that exploratory feedback can be a proxy for decision-making feedback when the latter is scarce or unavailable. IRIDEF identifies regions of interest (ROIs) via exploratory feedback, and highlights a few interesting and out-of-sight POIs in each ROI. These highlights enable the user to shape his/her future interactions with the system. We detail the components of our proposed framework in the form of a data analysis pipeline, and present the aspects of efficiency and effectiveness for each component. We also discuss evaluation plans and future directions for IRIDEF. ## 1 Introduction Background. Nowadays, geospatial data are ubiquitous in various fields of science, such as transportation, smart city management [1, 2], travel planning [3], bike sharing [4], localized advertising [5], and regional health-care [6]. A recent solution for improved geospatial data management is to provide means for interactive analysis, where users in the loop are guided towards interesting subsets of data in an exploratory iterative manner [7, 8]. Typically, the guidance is performed by learning the user’s preferences from decision-making feedback received from the user in each iteration, e.g., picking (clicking on) a favorite point of interest (POI). However, it is often the case in geospatial scenarios that users forget or do not feel it necessary to explicitly express feedback on what they find interesting. As a result, the interactive dialog will be broken and no guidance can be delivered. In this paper, we focus on the following question: Is it possible to perform interactive analysis on geospatial data without having access to decision-making interactions? Proposal. In the absence of decision-making interactions, we propose to focus on exploratory feedback, i.e., patterns in signals captured from the user in the background which provide hints about the user’s interests. For instance, users often hover their mouse (or make frequent touch actions on a touch screen, such as scroll, pinch and zoom) over a region of interest to collect information on the map (e.g., touristic places and hidden gems presented in the form of map layers and tooltips) before landing on a decision about picking a POI in that region, such as a home-stay. Hence it is possible to infer the interest towards that region even without decision-making interactions. This inferred knowledge should be leveraged in the guidance mechanism. An instance of such guidance is to highlight a few interesting POIs in the region of interest. We advocate a geospatial data management framework (called IRIDEF) which captures and analyzes the user’s exploratory feedback for an enriched guidance mechanism in the context of interactive analysis. Scenario. Lindsey is a visiting researcher from the US. She wants to rent a home-stay in Paris via the Airbnb website. She likes to discover the city, hence she is open to any type of lodging in any region, with a preference for staying in the center of Paris. Her exploration starts with a query which expresses the preliminary set of her interests. The website returns 1500 different home-stays for her query.
While scanning the very first items, she shows an (exploratory) interest in the region of Trocadero by hovering her mouse around the Eiffel tower and checking the amenities within that region. However, she forgets or doesn’t feel the necessity to click a POI (i.e., a home-stay) in that region. While typical recommendation and exploration systems do not necessarily focus on this implicit interest in the future iterations, our framework ensures that Lindsey receives home-stay recommendations related to the Trocadero region even if she didn’t provide any decision-making feedback. Challenges. Analyzing exploratory feedback is challenging. First, it is not clear how this feedback should be interpreted in terms of the user’s preferences. Exploratory feedback on geospatial data can be enabled via different signals, such as mouse hovering [9], touch actions [10], voice [11], and gaze [12]. Translating such enablers into geospatial semantics is challenging. Second, not all exploratory signals are necessarily useful, and some may introduce false positives. For instance, a small mouse move on a typical screen would yield more than 14,000 points (assuming 1600 DPI) which may turn out to be just a random futile move. Beyond the first two challenges, guiding users towards interesting POIs is also challenging, as it requires an exhaustive scan over the geospatial data against the evolving user preferences. Contributions. We propose a guidance approach for interactive exploration of geospatial data. Our approach identifies regions of interest (ROIs) without the need for any decision-making feedback. Our proposed guidance mechanism is to highlight a few interesting and out-of-sight POIs in each ROI, and let the user investigate those POIs in his/her future interactions with the system. The following list summarizes the contributions and claims discussed in this paper: * • We define the notion of “exploratory user feedback” which enables seamless navigation in the geospatial data. * • We define the notion of “information highlighting”, a mechanism to highlight important spatial information that is out-of-sight. * • We employ an efficient polygon-based approach to discover ROIs. * • We propose an approach to compute highlights on-the-fly in an efficient manner. To the best of our knowledge, our contributions have not been investigated before in the literature. Popular map-based applications such as Google Maps and Bing Maps do not offer interactive functionalities for feedback capturing. In the literature, information highlighting [13, 14, 15] and spatial recommendation approaches [16, 17] often assume that the user’s preferences are static and will never change in time. This limits their applicability to interactive analysis scenarios. The process of feedback capturing is mostly formulated for decision-making interactions [18, 19, 20, 21, 22, 23]. While a few works fuse decision-making and exploratory feedback [24, 25, 26], our approach is not dependent on decision-making feedback and is able to operate purely on exploratory feedback. Needless to say, a straightforward extension of our system is to incorporate decision-making feedback (if available) to improve its effectiveness. Paper outline. The rest of this paper is organized as follows. In Section 2, we elaborate on different instances of decision-making and exploratory feedback in the literature. We discuss the data model and introduce the problem in Section 3.
We present our proposed approach in Section 4, and discuss evaluation plans in Section 5. Last, we conclude and present future directions in Section 6. Figure 1: Examples of decision-making and exploratory feedback in realistic geospatial scenarios [6, 27, 3] ## 2 Decision-making Feedback versus Exploratory Feedback We briefly discuss a few examples in the literature to clarify the distinction between decision-making and exploratory feedback types in realistic geospatial applications. These examples are illustrated in Figure 1. In summary, we argue that different types of decision-making feedback have already been employed, but exploratory feedback is often missing. Medical domain. COVIZ [6] is an interactive web-based application which enables medical experts to form and compare medical cohorts. In Figure 1-A, the expert clicks on the Auvergne-Rhône-Alpes region (as decision-making feedback) to compare the patient cohort in this particular region with the whole of France. In Figure 1-B, the expert adds the air pollution layer to the analysis to examine any potential correlation between the patients’ health status and the abundance of air pollution. While the expert explores the cohort comparisons and pollution correlations, the tool does not collect any exploratory feedback, such as mouse hover and gaze. Aviation domain. DV8 [27] is an interactive aviation data analysis tool. When several flight trajectories are visualized (Figure 1-C), the expert can click on one trajectory to retrieve its information (departure, destination, etc.), and double-click to solely focus on that single trajectory and analyze it further (Figure 1-D). The interaction is always through decision-making feedback (single-click and double-click), and exploratory feedback is not supported. DV8 also supports touch gestures, such as pinch and zoom (Figure 1-E) and brush (Figure 1-F). However, the touch actions are all considered decision-making feedback with an immediate resulting action. Hence there is no support for exploratory feedback. The virtual reality (VR) version of DV8 (Figures 1-G and 1-H) enhances the exploration experience of the aviation expert, particularly for analyzing flights at different altitudes. While the gaze signal is a form of exploratory feedback which can be captured through VR, DV8 employs the signal only for navigating the geospatial data, and not for guidance. Travel domain. Simurgh [3] is an interactive travel package generation tool. The user can ask for a new day plan using a drag-and-drop action over a region of interest (the drag-and-drop in Figure 1-I and the resulting day plan in Figure 1-J). She can also replace a point of interest by clicking on the point (the selection in Figure 1-K and the replacement in Figure 1-L). All the interactions are defined as decision-making feedback. In other words, Simurgh does not detect the regions of interest by following exploratory feedback. ## 3 Data Model and Problem Definition To enable feedback capturing, we consider two different layers on a geographical map: a spatial layer and an interaction layer. The spatial layer contains POIs from a spatial database $\mathcal{P}$. The interaction layer contains exploratory feedback points $\mathcal{M}$. These layers are explained below. Spatial layer. Each POI $p=\langle\mathit{lat},\mathit{lon}\rangle\in\mathcal{P}$ is described using its geographical coordinates. POIs are also associated with a set of domain-specific attributes $\mathcal{A}$.
For instance, in the dataset of a real estate agency, POIs are properties (houses and apartments) and $\mathcal{A}$ contains attributes such as surface, number of rooms and price. The set of all possible values for an attribute $a\in\mathcal{A}$ is denoted as $dom(a)$. We also define the user’s feedback $F$ as a vector over all attribute values (i.e., facets), i.e., $F\in\prod_{a\in\mathcal{A}}dom(a)$. The vector $F$ is initialized with zeros and will be updated to express the user’s preferences. The facet-based schema of $F$ ensures that learned feedback is always transparent and interpretable by the user using the facets, and hence reduces algorithmic anxiety [28]. Interaction layer. We assume that an exploratory signal addresses one specific point $m$ on the screen, e.g., hovering at, gazing at, or providing a voice command about $m$. When an exploratory signal is received, the point $m$ is appended to the set $\mathcal{M}$. Each point is a tuple $m=\langle x,y,t\rangle$, where $x$ and $y$ specify the affected pixel location and $t$ is a timestamp. To conform with geographical standards, we assume $m=\langle 0,0,t\rangle$ sits at the middle of the interaction layer, both horizontally and vertically, for any $t$. Transitioning between the layers. The user is in contact with the interaction layer. To update the feedback vector $F$, we need to translate pixel locations in the interaction layer to latitudes and longitudes in the spatial layer. We employ the equirectangular projection to obtain the best possible approximation of a point $m=\langle x,y,t\rangle\in\mathcal{M}$ in the spatial layer, denoted as $p(m)$. $p(m=\langle x,y,t\rangle)=\langle\mathit{lat}=y+\gamma,\mathit{lon}=\frac{x}{\cos\gamma}+\theta\rangle$ (1) The inverse operation, i.e., transforming a point $p=\langle\mathit{lat},\mathit{lon}\rangle$ from the spatial layer to the interaction layer, is done using Equation 2. $m(p=\langle\mathit{lat},\mathit{lon}\rangle)=\langle x=(\mathit{lon}-\theta)\times\cos\gamma,y=\mathit{lat}-\gamma\rangle$ (2) The reference point for the transformation is the center of both layers. In Equations 1 and 2, we assume that $\gamma$ is the latitude and $\theta$ is the longitude of a point in the spatial layer corresponding to the center of the interaction layer, i.e., $m=\langle 0,0\rangle$. Problem definition. Given the user’s feedback $F$, we are interested in solving two consecutive problems: $(i)$ discover regions of interest in the form of geospatial clusters whose centroids correlate with $F$ (with respect to the POI attributes in which the user is interested), and $(ii)$ for each discovered region, find at most $k$ POIs ($k$ is an input parameter) which are relevant to $F$ and have high exploration quality. We define relevance and exploration quality in Section 4. Figure 2: IRIDEF framework. ## 4 Proposed Approach We propose IRIDEF (Interactive Region-of-Interest Discovery using Exploratory Feedback), a framework for exploiting exploratory feedback to highlight interesting POIs as future analysis directions. As depicted in Figure 2, our approach consists of a pipeline with three main components: CAPTURE, DISCOVER, and HIGHLIGHT. After the user has explored the map for a while, IRIDEF captures exploratory feedback from the exploration (i.e., the CAPTURE component detailed in Section 4.2). Then a set of regions of interest (ROIs) will be discovered using the captured feedback (i.e., the DISCOVER component detailed in Section 4.3).
Finally, some out-of-sight interesting POIs will be highlighted for each discovered ROI (i.e., the HIGHLIGHT component detailed in Section 4.4). In the following, we first discuss the desiderata behind our approach, and then detail each component of the pipeline.

### 4.1 Principles

In order to maximize the usability of IRIDEF, we believe that the framework should be generic and fluid, as discussed below.

Genericness. IRIDEF’s pipeline is applicable to different datasets and different types of exploratory feedback. This enables IRIDEF to cover different exploration scenarios. The minimal requirement is that the input dataset and the feedback signal match our data model (Section 3).

Fluidity. A fluid interactive system does not break the user’s train of thought. Fluidity is ensured by rendering results in an efficient and effective manner. In the CAPTURE component, effectiveness is satisfied by disregarding irrelevant signals. In the DISCOVER and HIGHLIGHT components, effectiveness is interpreted as delivering meaningful and useful regions (ROIs) and highlights (POIs), respectively. In all components, efficiency means returning results instantaneously, often considered to be $\leq 500ms$ [29].

### 4.2 CAPTURE Component

Exploratory feedback can be captured using different latent signals, e.g., time dedicated to item details, touch actions, gaze, mouse moves, scrolling speed, etc. Without loss of generality, we focus on mouse moves as an instance of an exploratory feedback signal. A particular challenge in capturing mouse moves as exploratory feedback is that the user may mindlessly move the mouse everywhere on the map. Obviously, this should not signify that all the locations are equally important to the user. An effective approach should only capture a subset of this feedback which is then useful for discovering ROIs. Also, an efficient approach should capture this feedback without any interruption in the fluidity of the user experience. For effective and efficient feedback capturing, IRIDEF performs the following two actions:

1. First, it records the exploratory signals (by adding the coordinates of the screen points they were applied on to $\mathcal{M}$) only every $\varepsilon$ milliseconds to prevent adding redundant points.

2. After a given period of feedback capturing time, it groups the recorded signals into $g$ different segments, $\mathcal{M}_{1}$ to $\mathcal{M}_{g}$. The first segment starts at time zero (when the system started to operate), and the last segment ends at the current time.

The choice of $\varepsilon$ depends on various parameters such as the application (e.g., tourism, delivery, transportation) and the user’s expertise. For instance, a larger $\varepsilon$ seems more appropriate for novice users, as they might perform many random moves to get acquainted with the data. In conformance with progressive data analytics [29], we set $\varepsilon=100ms$ as the default value to ensure continuity-preserving latency.
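To make these two actions concrete, the following is a minimal Python sketch (our illustration, not the authors’ implementation) of an $\varepsilon$-throttled capture loop with fixed-length segmentation and the pixel-to-coordinate conversion of Equation 1; the class name, the screen-centre coordinates and all parameter values are assumptions. Algorithm 1 below gives the paper’s pseudocode, including the other segmentation strategies discussed next.

```python
import math
import time

# Minimal illustrative sketch (not the paper's implementation) of CAPTURE:
# record one mouse point at most every EPSILON seconds, convert pixels to
# geographic coordinates with the equirectangular projection of Equation 1,
# and split the recorded points into g fixed-length time segments.
# GAMMA/THETA (screen-centre latitude/longitude) and all values are assumed.

EPSILON = 0.100                    # 100 ms default sampling gap
GAMMA, THETA = 48.8566, 2.3522     # assumed centre of the visible map (Paris)

def pixel_to_geo(x, y):
    """Equation 1: interaction layer -> spatial layer."""
    return (y + GAMMA, x / math.cos(math.radians(GAMMA)) + THETA)

class Capture:
    def __init__(self):
        self.points = []           # the set M of <x, y, t> tuples
        self._last = 0.0

    def on_mouse_move(self, x, y):
        now = time.time()
        if now - self._last >= EPSILON:    # throttle redundant points
            self.points.append((x, y, now))
            self._last = now

    def segments(self, g):
        """Fixed-length segmentation of M into M_1 ... M_g."""
        segs = [[] for _ in range(g)]
        if not self.points:
            return segs
        t0, t1 = self.points[0][2], self.points[-1][2]
        span = max(t1 - t0, 1e-9) / g
        for x, y, t in self.points:
            segs[min(int((t - t0) / span), g - 1)].append((x, y, t))
        return segs
```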
Input: Mouse move points $\mathcal{M}$, time gap $\varepsilon$, segmentation strategy $\psi$ Output: Segments $\mathcal{M}_{i}$, $i\in[1,g]$ 1 $\mathit{segment\\_count}\leftarrow 0$ 2 for _$m\in\mathcal{M}$ captured every $\varepsilon$ milliseconds_ do 3 $\mathcal{M}[\mathit{segment\\_count}]\leftarrow\mathcal{M}[\mathit{segment\\_count}]\cup\\{m\\}$ 4 $\mathit{segment\\_change}\leftarrow\mathit{check\\_strategy}(\psi,m,\mathcal{M})$ 5 if _$\mathit{segment\\_change}=\mathit{true}$_ then 6 $\mathit{segment\\_count}\leftarrow\mathit{segment\\_count}+1$ 7 $\mathcal{M}[\mathit{segment\\_count}]\leftarrow\emptyset$ 8 9 end if 10 11 end for 12return _$\mathcal{M}_{i}$ , $i\in[1,g]$ where $g=\mathit{segment\\_count}$_ Algorithm 1 CAPTURE algorithm Moreover, the end of a segment is determined by one of the following approaches: * • $\psi_{1}$: End the current segment after a fixed amount of time (i.e., fixed- length segments). In this case, the value of $g$ is selected based on the spatial density of the dataset under investigation. * • $\psi_{2}$: End the current segment if the mouse location is unchanged for a certain amount of time. * • $\psi_{3}$: End the current segment after a drastic change in the signal, where the drift is captured using signal segmentation approaches. We employ the Wedding Cake technique for the dynamic segmentation of our signals [30, 31]. Algorithm 1 summarizes the CAPTURE process. ### 4.3 DISCOVER Component The objective of this step in the IRIDEF pipeline is to obtain one or several ROIs in which the user has expressed his/her exploratory feedback. We conjecture that a region is more interesting for the user if it is denser, i.e., the user moves the mouse in that region frequently, to collect information from the background map. Hence ROIs can be simply discovered as dense clusters of mouse move points. We denote the set of all ROIs as $\mathcal{R}$ and we refer to the $i$-th ROI as $R_{i}\in\mathcal{R}$. Algorithm 2 summarizes the DISCOVER process. We employ ST-DBSCAN [32], a space-aware variant of DB-SCAN, to cluster points in each segment (line 2 in Algorithm 2). For each subset of mouse move points $\mathcal{M}_{i}$, $i\in[1,g]$, ST-DBSCAN begins with a random point $m_{0}\in\mathcal{M}_{i}$ and collects all density-reachable points from $m_{0}$ using a distance metric. As mouse move points are in the 2-dimensional pixel space (i.e., the screen), we choose euclidean distance as the distance metric. A density-reachable point $m_{i}$ is either directly reachable from $m_{0}$, i.e., the distance between $m_{i}$ and $m_{0}$ is lower than a distance threshold (an input parameter for the ST-DBSCAN algorithm), or reachable via a path $m_{0}\dots m_{j-1},m_{j}\dots m_{i}$ where each point $m_{j}$ in the path is directly reachable from its immediately prior point in the path $m_{j-1}$. If $m_{0}$ turns out to be a core point, a cluster will be generated. A point is core if there exist a certain amount of points in its vicinity, i.e., with a distance lower than the distance threshold. The minimum number of points for a core point is yet another input parameter for ST- DBSCAN. If $m_{0}$ is not a core point, the algorithm picks another random point in $\mathcal{M}_{i}$. The process is repeated until all points have been processed. We denote the set of all resulting clusters for $\mathcal{M}_{i}$ as $\mathcal{C}_{i}=\\{C_{1},C_{2},\dots\\}$. 
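As a hedged illustration of this clustering step, the sketch below runs scikit-learn’s plain DBSCAN over the 2-D pixel coordinates of one segment, used here only as a stand-in for the ST-DBSCAN variant cited above; the function name and parameter values are assumptions, not the paper’s.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_segment(segment, eps=30.0, min_samples=5):
    """Return the clusters C_i = {C_1, C_2, ...} of one segment M_i.

    eps is the distance threshold and min_samples the core-point minimum,
    i.e., the two ST-DBSCAN input parameters mentioned in the text
    (the concrete values here are assumptions).
    """
    if not segment:
        return []
    xy = np.array([(x, y) for x, y, _t in segment])
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="euclidean").fit_predict(xy)
    clusters = {}
    for point, label in zip(xy, labels):
        if label == -1:                 # noise points are discarded
            continue
        clusters.setdefault(label, []).append(tuple(point))
    return list(clusters.values())
```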
Input: Segments $\mathcal{M}_{1}$ to $\mathcal{M}_{g}$, user feedback vector $F$, number of interactions performed so far $T$ Output: Set of discovered ROIs $\mathcal{R}$ $O\leftarrow\emptyset$ // the set of all polygons initialized as empty $\mathcal{R}\leftarrow\emptyset$ // the set of all ROIs initialized as empty 1 for _each segment $\mathcal{M}_{i}$_ do $\mathcal{C}_{i}\leftarrow\mathit{ST\\_DBSCAN}(\mathcal{M}_{i})$ // all clusters inside $\mathcal{M}_{i}$ 2 $\mathcal{C}_{i}\leftarrow\mathit{AklToussaint}(\mathcal{C}_{i})$ $O_{i}\leftarrow\mathit{Graham\\_scan}(\mathcal{C}_{i})$ // all polygons inside $\mathcal{M}_{i}$ $O_{i}.\mathit{expand}(\mathit{confidence}(F,T))$ // Equation 3 3 $O\leftarrow O\cup O_{i}$ 4 end for 5for _each pair of polygons $O_{x}\in O$ and $O_{y}\in O$_ do 6 $S\leftarrow\mathit{intersect}(O_{x},O_{y})$ 7 if _$S.\mathit{size} >0$_ then $\mathcal{R}\leftarrow\mathcal{R}\cup\\{S\\}$ 8 9 end for 10 return _$\mathcal{R}$_ Algorithm 2 DISCOVER algorithm Once the clusters are obtained for all the subsets of $\mathcal{M}$, we find their intersections to locate recurring regions. Note that we don’t aim to directly consider the clusters $\mathcal{C}$ as the ROIs, as they may contain noisy signals. Their intersection counts as a confirmation of user preferences. To obtain intersections, we need to clearly define the spatial boundaries of each cluster. For this aim, we discover the polygons which cover the points inside each cluster. We employ Graham scan algorithm (line 2 in Algorithm 2) which is an efficient method to compute the convex hull for a given set of points in a 2D plane [33]. We reduce the typical complexity of Graham scan (i.e., $\mathcal{O}(|C_{i}|\times\mathit{log}|C_{i}|)$, $|C_{i}|$ being the number of points in the $i$-th cluster) to $\mathcal{O}(|C_{i}|)$ by ordering the cluster members by their spatial coordinates. For more efficiency, we perform Akl-Toussaint heuristics [34] before the polygon computation to prune the points which are unnecessary for shaping the polygons (line 2 in Algorithm 2). The intersections between the polygons constitute the ROIs (lines 2 to 2 in Algorithm 2). Personalizing discovered ROIs. By default, our ROI discovery approach creates strictly tight ROIs, i.e., the area of the polygons is exactly inferred by the points it covers. However in exploratory scenarios, the feedback points do not necessarily reflect the exact interests of the user. The user exposes his/her interests in a gradual manner using exploratory feedback captured in several iterations. We believe that the user’s confidence (interpreted as the richness of the user feedback vector $F$) should impact the way ROIs are computed, hence personalized ROIs. In case the user is less confident (e.g., the user is in early stages of his/her exploration), ROIs should be expanded in their area (up to twice their original size) to let more opportunities arise (line 2 in Algorithm 2). The user confidence is computed as follows. $\mathit{confidence}(F,T)=\mathit{min}(1.0,\frac{||F||_{0}}{\xi\times T})$ (3) In Equation 3, $\xi$ is a feedback frequency, and $T$ is the number of interactions performed so far. For instance, given $|F|=50$, $T=10$, and assuming that a typical user provides $7$ exploratory signals per iteration, the confidence will be equal to $0.71$. The confidence is a coefficient for stretching the ROI area. 
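The sketch below (ours, with assumed names and values) strings these DISCOVER steps together using shapely: convex hulls stand in for the Graham-scan/Akl-Toussaint computation, each polygon is stretched according to the confidence of Equation 3, and pairwise intersections yield the candidate ROIs.

```python
import math
import numpy as np
from shapely.geometry import MultiPoint
from shapely.affinity import scale

XI = 7  # assumed feedback frequency (exploratory signals per iteration)

def confidence(F, T, xi=XI):
    """Equation 3: min(1, ||F||_0 / (xi * T))."""
    return min(1.0, np.count_nonzero(F) / (xi * max(T, 1)))

def discover_rois(clusters, F, T):
    conf = confidence(F, T)
    # Linear scale factor chosen so that the polygon *area* grows by
    # (1 + confidence), i.e., up to twice its original size.
    factor = math.sqrt(1.0 + conf)
    polygons = [scale(MultiPoint(c).convex_hull, xfact=factor, yfact=factor,
                      origin="centroid")
                for c in clusters if len(c) >= 3]
    rois = []
    for i in range(len(polygons)):
        for j in range(i + 1, len(polygons)):
            s = polygons[i].intersection(polygons[j])
            if not s.is_empty and s.area > 0:
                rois.append(s)
    return rois
```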
Let $A_{1}$ denote the area of the ROI $R_{1}$; the confidence-aware area $A^{\prime}_{1}$ is then computed as $A^{\prime}_{1}=A_{1}+A_{1}\times\mathit{confidence}$. This process is shown in line 2 of Algorithm 2.

Example. Figure 3 shows the steps that Lindsey follows to explore home-stays in Paris. For the sake of simplicity, we assume Lindsey’s confidence is $1.0$. Figure 3.A shows the mouse moves of Lindsey in different time stages. In this example, we consider $g=3$ and capture Lindsey’s feedback in three fixed-length time segments, i.e., strategy $\psi_{1}$ (progressing from Figures 3.B to 3.D). It shows that Lindsey started her search around the Eiffel Tower and the Arc de Triomphe (Figure 3.B) and gradually showed interest in areas located south (Figure 3.C) and north (Figure 3.D) as well. All intersections between those clusters are discovered (hatched regions in Figure 3.E), which contribute to the set of interesting regions (Figure 3.F), i.e., ROI1 to ROI4.

Figure 3: An example of discovering ROIs [9].

### 4.4 HIGHLIGHT Component

We define highlights as a subset of POIs in the form of suggestions for the user’s future analysis directions. The highlights are generated by performing the following three steps: matching points, updating feedback, and highlighting POIs. First, we find POIs which fit into the polygons obtained in the DISCOVER component. Then we update the user feedback $F$ according to those POIs. Finally, we highlight a set of POIs based on the updated content of $F$.

Matching points. Being a function of mouse move points, ROIs are discovered in the interaction layer. We then need to find out which POIs in $\mathcal{P}$ fall into ROIs. We employ Equation 2 to transform those POIs from the spatial layer to the interaction layer. Then a simple spatial containment function can verify whether a given POI fits into a given ROI (typically, we use the $\mathit{ST\\_Within}()$ function in PostGIS for the containment verification). To improve efficiency, we employ Quadtrees [35] in a two-step approach: $(i)$ In an offline process, we build a Quadtree index for all POIs in $\mathcal{P}$. We record the membership relations between POIs and Quadtree grid cells in the index. $(ii)$ Once ROIs are discovered, we record which cells in the Quadtree index intersect with the ROIs. For matching POIs, we only check the subset which is inside the cells associated with ROIs and ignore the ones outside, hence a drastic pruning of POIs in $\mathcal{P}$. Given an ROI $R_{i}$, we denote the set of its matching points as $\mathcal{P}_{i}$. We also define the binary vector $\overrightarrow{\mathcal{P}_{i}}$ whose cell $\langle a_{j},v_{w}\rangle$ is $1$ if at least one point in $\mathcal{P}_{i}$ gets the value $v_{w}\in\mathit{dom}(a_{j})$ for the attribute $a_{j}\in\mathcal{A}$, and $0$ otherwise.

Updating feedback. The matching points depict the exploratory preferences of the user. To memorize these preferences, we update the feedback vector $F$ using the attributes of the matching points. We consider an increment value $\delta$ to update $F$. If $p$ is a matching point and gets $v_{w}\in\mathit{dom}(a_{j})$ for attribute $a_{j}\in\mathcal{A}$, we augment the value in $F$’s cell $\langle a_{j},v_{w}\rangle$ by the factor $\delta$. Note that we only consider incremental feedback, i.e., we never decrease a value in $F$. The vector $F$ is re-normalized after each update using a softmax function.
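A minimal sketch of this update step could look as follows (our names; the facet-to-index mapping, the POI representation, and $\delta$ are assumptions of the sketch):

```python
import numpy as np

DELTA = 1.0  # assumed increment value

def update_feedback(F, matching_pois, facet_index, delta=DELTA):
    """Increment F's cells for the facets of matching POIs, then softmax.

    facet_index maps a facet <attribute, value> pair to a position in F;
    each POI is assumed to carry a dict of attribute -> value.
    """
    F = F.copy()
    for poi in matching_pois:
        for attr, value in poi["attributes"].items():
            F[facet_index[(attr, value)]] += delta  # incremental only
    exp = np.exp(F - F.max())                       # numerically stable softmax
    return exp / exp.sum()
```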
The updated feedback vector is fully transparent and the user can easily apprehend what has been learned from his/her previous actions. Our current update model considers the feedback vector to be recency-agnostic. We leave the integration of recency as future work. Highlighting POIs. The updated feedback vector $F$ is the input to the highlighting phase. The objective is to select $k$ POIs out of all POIs inside ROIs whose relevance and exploration quality are maximal. We denote the set of highlights as $\mathcal{H}$. We propose two approaches to achieve our objective, depending on how we define relevance and quality: Input: Discovered ROIs $\mathcal{R}$, user feedback vector $F$, $k$, $\mathit{time\\_limit}$, $\mathit{similarity\\_threshold}$ Output: Highlights $\mathcal{H}$ $\mathcal{H}\leftarrow\emptyset$ // highlights 1 for _each discovered ROI $R_{i}\in\mathcal{R}$_ do 2 $\mathcal{P}_{i}\leftarrow\mathit{match\\_points}(R_{i})$ 3 $F.\mathit{update}(\mathcal{P}_{i})$ 4 $\mathcal{L}_{i}\leftarrow$ sort the POIs in $\mathcal{P}_{i}$ in decreasing order of their similarity with $F$ 5 $p^{*}\leftarrow\mathit{most\\_similar\\_point}(\mathcal{P}_{i},F)$ 6 $k^{\prime}\leftarrow k\times\mathit{peculiarity}(R_{i})$ 7 $\mathcal{H}[R_{i}]\leftarrow\mathit{top}(\mathcal{L}_{i},k^{\prime})$ 8 $p_{\mathit{next}}\leftarrow\mathit{get\\_next}(\mathcal{L}_{i})$ 9 while _$\mathit{time\\_limit}$ not exceeded and $\mathit{similarity}(p_{\mathit{next}},p^{*})\leq\mathit{similarity\\_threshold}$_ do 10 for _$p_{\mathit{current}}\in\mathcal{H}[R_{i}]$_ do 11 if _$\mathit{diversity\\_improved}(\mathcal{H}[R_{i}],p_{\mathit{next}},p_{\mathit{current}})$_ then $\mathcal{H}[R_{i}]\leftarrow\mathcal{H}[R_{i}]\cup\\{p_{\mathit{next}}\\}\setminus\\{p_{\mathit{current}}\\}$ 12 13 end for 14 $p_{\mathit{next}}\leftarrow\mathit{get\\_next}(\mathcal{L}_{i})$ 15 16 end while 17 18 end for return _$\mathcal{H}$_ Algorithm 3 Greedy HIGHLIGHT algorithm Greedy approach. Inspired from [9, 36, 37], we define the relevance as the Cosine similarity between $F$ and the POIs (note that the feedback vector $F$ and the POIs are defined over the same schema), and the quality as the diversity between the POIs. The diversity is computed using Cosine distance between the POI attribute values. We then follow a greedy approach for each ROI to maximize diversity while respecting a lower bound on similarity. Algorithm 3 summarizes this approach. The similarity values are preprocessed and organized in $\mathcal{L}_{i}$ for all POIs in $\mathcal{P}_{i}$ (line 3 in the algorithm). The algorithm starts the greedy process by initializing a list $\mathcal{H}[R_{i}]$ with $k^{\prime}$ POIs at the top of $\mathcal{L}_{i}$, i.e., the most similar POIs in $\mathcal{P}_{i}$ to $F$ (line 3 in the algorithm). While a time limit is not exceeded (time limit is an input parameter which is often set to values $\leq 500ms$ [29]), the algorithm scans $\mathcal{L}_{i}$ sequentially to find appropriate POI replacements in $\mathcal{H}[R_{i}]$ to improve diversity (line 3 of the algorithm). Once the greedy loop is done, the set $\mathcal{H}$ will be returned by the algorithm, containing the highlights for all the discovered ROIs. Fuzzy approach. Inspired from [38, 39, 40, 3], we employ fuzzy clustering to process all ROIs simultaneously. Algorithm 4 summarizes this approach. 
The relevance is defined in the same way as the greedy approach, and the exploration quality is defined using two factors: cohesiveness between POIs of the same ROI (opposite of diversity, hence measured using Cosine similarity), and representativeness, i.e., the sum of euclidean distances between ROI centroids. We use a weighted sum over relevance and quality where the weights are user-defined parameters ($w_{1}$ to $w_{3}$ in line 4 of Algorithm 4). Through several trial-and-error tests and user studies in previous works [40, 39], we found that the most ideal set of weights are $w_{1}=0.5$, $w_{2}=0.25$ and $w_{3}=0.25$. The algorithm refines the centroids of ROIs iteratively until convergence (lines 4 to 4 in Algorithm 4). Then $k^{\prime}$ most probable points (in fuzzy clustering semantics) will be returned as highlights for each centroid (line 4 in Algorithm 4). Which approach to choose? We conjecture that the greedy approach is more appropriate for the bird’s-eye view exploration, which mainly refers to early stages of the exploration where the user is trying to get acquainted with the geospatial data by random explorations. In this case, ROIs do not necessarily need to be related and may represent independent future directions. However, in the case of more focused exploration scenarios, the fuzzy approach would be able to deliver highlights with more coverage over the whole regions of interest. We plan to validate these hypotheses via extensive qualitative evaluations. Peculiar highlighting. Recall the main objective of the highlighting component is to return out-of-sight POIs as future analysis directions. This simply means that the neighborhoods that have been already investigated by the user are less peculiar, and the POIs within those regions may not be as interesting as the ones in unexplored regions. Given an ROI $R_{i}$, we define its peculiarity score as follows. $\mathit{peculiarity}(R_{i})=\mathit{Cosine\\_similarity}(F,\overrightarrow{\mathcal{P}_{i}})$ (4) Input: Discovered ROIs $\mathcal{R}$, user feedback vector $F$, $k$ Output: Highlights $\mathcal{H}$ 1 $\mathcal{P}_{\mathit{all}}\leftarrow\emptyset$ 2 for _each discovered ROI $R_{i}\in\mathcal{R}$_ do 3 $\mathcal{P}_{\mathit{all}}\leftarrow\mathcal{P}_{\mathit{all}}\cup\mathit{match\\_points}(R_{i})$ 4 $\mathit{centroid}_{\mathit{old}}\leftarrow\emptyset$ 5 $\mathit{centroid}_{\mathit{current}}\leftarrow\mathit{get\\_centroid}(R_{i})$ 6 7 end for 8$k^{\prime}\leftarrow k\times\mathit{peculiarity}(R_{i})$ 9 while _$\delta(\mathit{centroid}_{\mathit{old}},\mathit{centroid}_{\mathit{current}})$ is significant_ do 10 $\mathit{centroid}_{\mathit{old}}\leftarrow\mathit{centroid}_{\mathit{current}}$ 11 $\mathit{centroid}_{\mathit{current}}\leftarrow\mathit{argmax}_{k^{\prime}}(w_{1}\times\mathit{relevance}(\mathcal{P}_{\mathit{all}}),$ $w_{2}\times\mathit{cohesiveness}(\mathcal{P}_{\mathit{all}}),$ $w_{3}\times\mathit{representativeness}(\mathcal{P}_{\mathit{all}}))$ 12 13 end while 14 $\mathcal{H}\leftarrow\mathit{fuzzy\\_clusters}(\mathit{centroid}_{\mathit{current}})$ return _$\mathcal{H}$_ Algorithm 4 Fuzzy HIGHLIGHT algorithm We then enrich the traditional $k$ parameter with the peculiarity semantics as follows: $k^{\prime}=\lfloor k\times\mathit{peculiarity}(R_{i})\rfloor$ (line 3 in Algorithm 3 and line 4 in Algorithm 4). Note that $k^{\prime}$ is the peculiarity-aware version of the $k$. This simply means that $k^{\prime}$ is lower for less peculiar ROIs, and hence less POIs will be highlighted in them. 
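For illustration, a small sketch of peculiarity-aware selection follows (ours, not the paper’s code); it applies Equation 4 and the top-$k^{\prime}$ similarity ranking of the greedy approach, omitting the diversity-improving swaps of Algorithm 3 for brevity.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two numpy vectors."""
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / n) if n > 0 else 0.0

def highlight_roi(F, roi_vector, poi_vectors, k):
    """Highlight at most k' POIs of one ROI, k' = floor(k * peculiarity)."""
    peculiarity = cosine(F, roi_vector)          # Equation 4
    k_prime = int(k * peculiarity)               # lower for less peculiar ROIs
    ranked = sorted(range(len(poi_vectors)),
                    key=lambda i: cosine(F, poi_vectors[i]), reverse=True)
    return ranked[:k_prime]
```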
For instance, in case $F$ has already captured feedback about two-bedroom home-stays and an ROI has only amenities with two bedrooms, that ROI will receive a low peculiarity score, and hence very few POIs will be highlighted in it. ## 5 Discussion on Evaluation We plan to perform the following evaluation strategies to validate the usefulness of IRIDEF: Single-shot quantitative analysis. Although our approach is multi-shot, we can consider only one iteration of our approach (CAPTURE $\rightarrow$ DISCOVER $\rightarrow$ HIGHLIGHT) and see how the components behave in this single iteration. The behavior can be captured through execution time and memory consumption, as well as precision. We average over several single-shot runs. The feedback will be captured through crowdsourcing campaigns. Simulation study. We simulate interactive scenarios using virtual agents and measure accumulated quality such as precision, hit ratio, and diversity. User study. We also perform an in-depth lab study and an in-breadth crowdsourcing study to survey real users about their perception on the resulting regions (ROIs) and the highlights (POIs). ## 6 Conclusion and Future Work In this paper, we present IRIDEF, an approach to interactively discover regions of interest (ROIs) using exploratory feedback. The exploratory feedback is captured from mouse moves over the geographical map while analyzing spatial data. We propose a novel polygon-based mining algorithm which returns a few highlights (POIs) in conformance with user’s exploratory preferences. The highlights enable users to have a better understanding of what to focus on in the followup steps in their analysis scenarios. We plan to extend IRIDEF in several directions, such as the incorporation of multi-modal exploratory feedback and the generation of sequential highlights as a mobility-aware guidance. ## Acknowledgment The author thanks Thibaut Thonet, Sruthi Viswanathan, Fabien Guillot, Jean- Michel Renders, and Placido Neto for their constructive comments in the process of writing this paper. ## References * [1] John F. Roddick, Max J. Egenhofer, Erik G. Hoel, Dimitris Papadias, and Betty Salzberg. Spatial, temporal and spatio-temporal databases - hot issues and directions for phd research. SIGMOD Record, 33(2):126–131, 2004. * [2] Zheng Xu, Yunhuai Liu, Neil Yen, Lin Mei, Xiangfeng Luo, Xiao Wei, and Chuanping Hu. Crowdsourcing based description of urban emergency events using social media big data. TCC, 2016. * [3] Sihem Amer-Yahia, Ria M Borromeo, Shady Elbassuoni, Behrooz Omidvar-Tehrani, and Sruthi Viswanathan. Interactive generation and customization of travel packages for individuals and groups. In IUI, 2020. * [4] Hangil Chung, Daniel Freund, and David B Shmoys. Bike angels: An analysis of citi bike’s incentive program. In SIGCAS. ACM, 2018. * [5] Kaiyu Feng, Gao Cong, Sourav S Bhowmick, Wen-Chih Peng, and Chunyan Miao. Towards best region search for data exploration. In SIGMOD, 2016. * [6] Cicero A. L. Pahin, Behrooz Omidvar-Tehrani, Sihem Amer-Yahia, Valerie Siroux, Jean-Louis Pepin, Jean-Christian Botel, and Comba Joao. COVIZ: A system for visual formation and exploration of patient cohorts. In VLDB, 2019. * [7] Ori Bar El, Tova Milo, and Amit Somech. Towards autonomous, hands-free data exploration. In CIDR, 2020. * [8] Arnab Nandi and H. V. Jagadish. Guided interaction: Rethinking the query-result paradigm. Proc. VLDB Endow., 4(12):1466–1469, 2011. * [9] Behrooz Omidvar-Tehrani, Plácido A. Souza Neto, Francisco B. Silva Júnior, and Felipe M. 
Freire Pontes. Exploration of interesting dense regions on spatial data. In Proceedings of the Workshops of the EDBT/ICDT 2020 Joint Conference. CEUR-WS.org, 2020. * [10] Lilong Jiang, Michael Mandel, and Arnab Nandi. Gesturequery: A multitouch database query interface. Proc. VLDB Endow., 6(12):1342–1345, 2013. * [11] Sruthi Viswanathan, Fabien Guillot, and Maria Antonietta Grasso. What is natural?: Challenges and opportunities for conversational recommender systems. In María Inés Torres, Stephan Schlögl, Leigh Clark, and Martin Porcheron, editors, Proceedings of the 2nd Conference on Conversational User Interfaces, CUI 2020, Bilbao, Spain, July 22-24, 2020, pages 40:1–40:4. ACM, 2020. * [12] Georg Buscher, Andreas Dengel, Ralf Biedert, and Ludger V Elst. Attentive documents: Eye tracking as implicit feedback for information retrieval and beyond. ACM Transactions on Interactive Intelligent Systems (TiiS), 1(2):1–30, 2012. * [13] J. Liang and M. L. Huang. Highlighting in information visualization: A survey. In 2010 14th International Conference Information Visualisation, July 2010. * [14] Anthony C. Robinson. Highlighting in geovisualization. Cartography and Geographic Information Science, 38(4):373–383, 2011\. * [15] Kanit Wongsuphasawat, Dominik Moritz, Anushka Anand, Jock Mackinlay, Bill Howe, and Jeffrey Heer. Voyager: Exploratory analysis via faceted browsing of visualization recommendations. TVCG, 22(1), 2016. * [16] Jie Bao, Yu Zheng, David Wilkie, and Mohamed Mokbel. Recommendations in location-based social networks: a survey. GeoInformatica, 19(3):525–565, 2015. * [17] Justin J. Levandoski, Mohamed Sarwat, Ahmed Eldawy, and Mohamed F. Mokbel. Lars: A location-aware recommender system. In ICDE, pages 450–461, 2012. * [18] Mansurul Bhuiyan, Snehasis Mukhopadhyay, and Mohammad Al Hasan. Interactive pattern mining on hidden data: a sampling-based solution. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 95–104. ACM, 2012. * [19] Dong Xin, Xuehua Shen, Qiaozhu Mei, and Jiawei Han. Discovering interesting patterns through user’s interactive feedback. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 773–778. ACM, 2006. * [20] Kyriaki Dimitriadou, Olga Papaemmanouil, and Yanlei Diao. Aide: an active learning-based approach for interactive data exploration. IEEE Transactions on Knowledge and Data Engineering, 28(11):2842–2856, 2016. * [21] Niranjan Kamat, Prasanth Jayachandran, Karthik Tunga, and Arnab Nandi. Distributed and interactive cube exploration. In ICDE, 2014. * [22] Behrooz Omidvar-Tehrani, Sihem Amer-Yahia, and Alexandre Termier. Interactive user group analysis. In CIKM, pages 403–412. ACM, 2015. * [23] Mario Boley, Michael Mampaey, Bo Kang, Pavel Tokmakov, and Stefan Wrobel. One click mining: Interactive local pattern discovery through implicit preference and performance learning. In Proceedings of the ACM SIGKDD Workshop on Interactive Data Exploration and Analytics, pages 27–35. ACM, 2013. * [24] Eoin Mac Aoidh, Michela Bertolotto, and David C. Wilson. Analysis of implicit interest indicators for spatial data. In 15th ACM International Symposium on Geographic Information Systems, ACM-GIS 2007, November 7-9, 2007, Seattle, Washington, USA, Proceedings, page 47, 2007. * [25] Andrea Ballatore and Michela Bertolotto. Semantically enriching vgi in support of implicit feedback analysis. 
In Katsumi Tanaka, Peter Fröhlich, and Kyoung-Sook Kim, editors, Web and Wireless Geographical Information Systems, pages 78–93, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg. * [26] Nathan N. Liu, Evan W. Xiang, Min Zhao, and Qiang Yang. Unifying explicit and implicit feedback for collaborative filtering. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM ’10, pages 1445–1448, New York, NY, USA, 2010. ACM. * [27] Behrooz Omidvar-Tehrani, Arnab Nandi, Nicholas Meyer, Dalton Flanagan, and Seth Young. DV8: interactive analysis of aviation data. In 33rd IEEE International Conference on Data Engineering, ICDE 2017, San Diego, CA, USA, April 19-22, 2017, pages 1411–1412. IEEE Computer Society, 2017. * [28] Shagun Jhaver, Yoni Karpfen, and Judd Antin. Algorithmic anxiety and coping strategies of airbnb hosts. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, page 421. ACM, 2018. * [29] Jean-Daniel Fekete and Romain Primet. Progressive analytics: A computation paradigm for exploratory data analysis. arXiv preprint arXiv:1607.05162, 2016. * [30] John Krumm and Eric Horvitz. Predestination: Inferring destinations from partial trajectories. In UbiComp, 2006. * [31] Sobhan Moosavi, Behrooz Omidvar-Tehrani, R Bruce Craig, Arnab Nandi, and Rajiv Ramnath. Characterizing driving context from driver behavior. In Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 1–4, 2017. * [32] Derya Birant and Alp Kut. ST-DBSCAN: An algorithm for clustering spatial-temporal data. Data Knowl. Eng., 60(1):208–221, January 2007. * [33] Ronald L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Info. Pro. Lett., 1:132–133, 1972. * [34] Luc Devroye and Godfried T Toussaint. A note on linear expected time algorithms for finding convex hulls. Computing, 26(4):361–366, 1981. * [35] Raphael A. Finkel and Jon Louis Bentley. Quad trees a data structure for retrieval on composite keys. Acta informatica, 4(1):1–9, 1974. * [36] Behrooz Omidvar-Tehrani, Plácido A. Souza Neto, Felipe M. Freire Pontes, and Francisco Bento da Silva Júnior. Geoguide: An interactive guidance approach for spatial data. In IEEE Smart Data, pages 1112–1117, 2017. * [37] Behrooz Omidvar-Tehrani, Sruthi Viswanathan, and Jean-Michel Renders. Interactive and explainable point-of-interestrecommendation using look-alike groups. In SIGSPATIAL, 2020. * [38] Vincent Leroy, Sihem Amer-Yahia, Eric Gaussier, and Hamid Mirisaee. Building representative composite items. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1421–1430, 2015. * [39] Manish Singh, Ria Mae Borromeo, Anas Hosami, Sihem Amer-Yahia, and Shady Elbassuoni. Customizing travel packages with interactive composite items. In DSAA, pages 137–145. IEEE, 2017. * [40] Sihem Amer-Yahia, Shady Elbassuoni, Behrooz Omidvar-Tehrani, Ria Borromeo, and Mehrdad Farokhnejad. Grouptravel: Customizing travel packages for groups. In EDBT, 2019.
11institutetext: Department of Computing, Imperial College London, United Kingdom # On Polymorphic Sessions and Functions A Tale of Two (Fully Abstract) Encodings Bernardo Toninho Nobuko Yoshida ###### Abstract This work exploits the logical foundation of session types to determine what kind of type discipline for the $\pi$-calculus can exactly capture, and is captured by, $\lambda$-calculus behaviours. Leveraging the proof theoretic content of the soundness and completeness of sequent calculus and natural deduction presentations of linear logic, we develop the first _mutually inverse_ and _fully abstract_ processes-as-functions and functions-as- processes encodings between a polymorphic session $\pi$-calculus and a linear formulation of System F. We are then able to derive results of the session calculus from the theory of the $\lambda$-calculus: (1) we obtain a characterisation of inductive and coinductive session types via their algebraic representations in System F; and (2) we extend our results to account for _value_ and _process_ passing, entailing strong normalisation. ## 1 Introduction Dating back to Milner's seminal work [29], encodings of $\lambda$-calculus into $\pi$-calculus are seen as essential benchmarks to examine expressiveness of various extensions of the $\pi$-calculus. Milner's original motivation was to demonstrate the power of link mobility by decomposing higher-order computations into pure name passing. Another goal was to analyse functional behaviours in a broad computational universe of concurrency and non- determinism. While _operationally_ correct encodings of many higher-order constructs exist, it is challenging to obtain encodings that are precise wrt behavioural equivalence: the semantic distance between the $\lambda$-calculus and the $\pi$-calculus typically requires either restricting process behaviours [45] (e.g. via typed equivalences [5]) or enriching the $\lambda$-calculus with constants that allow for a suitable characterisation of the term equivalence induced by the behavioural equivalence on processes [43]. Encodings in $\pi$-calculi also gave rise to new typing disciplines: Session types [20, 22], a typing system that is able to ensure deadlock-freedom for communication protocols between two or more parties [23], were originally motivated ``from process encodings of various data structures in an asynchronous version of the $\pi$-calculus'' [21]. Recently, a propositions- as-types correspondence between linear logic and session types [8, 9, 54] has produced several new developments and logically-motivated techniques [49, 54, 7, 26] to augment both the theory and practice of session-based message- passing concurrency. Notably, parametric session polymorphism [7] (in the sense of Reynolds [41]) has been proposed and a corresponding abstraction theorem has been shown. Our work expands upon the proof theoretic consequences of this propositions- as-types correspondence to address the problem of how to exactly match the behaviours induced by session $\pi$-calculus encodings of the $\lambda$-calculus with those of the $\lambda$-calculus. We develop mutually inverse and fully abstract encodings (up to typed observational congruences) between a polymorphic session-typed $\pi$-calculus and the polymorphic $\lambda$-calculus. The encodings arise from the proof theoretic content of the equivalence between sequent calculus (i.e. the session calculus) and natural deduction (i.e. 
the $\lambda$-calculus) for _second-order_ intuitionistic linear logic, greatly generalising [49]. While fully abstract encodings between $\lambda$-calculi and $\pi$-calculi have been proposed (e.g. [5, 43]), our work is the first to consider a two-way, _both_ mutually inverse _and_ fully abstract embedding between the two calculi by crucially exploiting the linear logic-based session discipline. This also sheds some definitive light on the nature of concurrency in the (logical) session calculi, which exhibit ``don't care'' forms of non-determinism (e.g. processes may race on stateless replicated servers) rather than ``don't know'' non-determinism (which requires less harmonious logical features [2]). In the spirit of Gentzen [14], we use our encodings as a tool to study non- trivial properties of the session calculus, deriving them from results in the $\lambda$-calculus: We show the existence of inductive and coinductive sessions in the polymorphic session calculus by considering the representation of initial $F$-algebras and final $F$-coalgebras [28] in the polymorphic $\lambda$-calculus [1, 19] (in a linear setting [6]). By appealing to full abstraction, we are able to derive processes that satisfy the necessary algebraic properties and thus form adequate _uniform_ representations of inductive and coinductive session types. The derived algebraic properties enable us to reason about standard data structure examples, providing a logical justification to typed variations of the representations in [30]. We systematically extend our results to a session calculus with $\lambda$-term and process passing (the latter being the core calculus of [50], inspired by Benton's LNL [4]). By showing that our encodings naturally adapt to this setting, we prove that it is possible to encode higher-order process passing in the first-order session calculus fully abstractly, providing a typed and proof-theoretically justified re-envisioning of Sangiorgi's encodings of higher-order $\pi$-calculus [46]. In addition, the encoding instantly provides a strong normalisation property of the higher-order session calculus. Contributions and the outline of our paper are as follows: § 3.1 develops a functions-as-processes encoding of a linear formulation of System F, Linear-F, using a logically motivated polymorphic session $\pi$-calculus, Poly$\pi$, and shows that the encoding is operationally sound and complete. § 3.2 develops a processes-as-functions encoding of Poly$\pi$ into Linear-F, arising from the completeness of the sequent calculus wrt natural deduction, also operationally sound and complete. § 3.3 studies the relationship between the two encodings, establishing they are _mutually inverse_ and _fully abstract_ wrt typed congruence, the first two- way embedding satisfying _both_ properties. § 4 develops a _faithful_ representation of inductive and coinductive session types in Poly$\pi$ via the encoding of initial and final (co)algebras in the polymorphic $\lambda$-calculus. We demonstrate a use of these algebraic properties via examples. § 4.2,4.3 study term-passing and process-passing session calculi, extending our encodings to provide embeddings into the first-order session calculus. We show full abstraction and mutual inversion results, and derive strong normalisation of the higher-order session calculus from the encoding. In order to introduce our encodings, we first overview Poly$\pi$, its typing system and behavioural equivalence (§ 2). We discuss related work and conclude with future work (§ 5). 
Detailed proofs can be found in [52]. ## 2 Polymorphic Session $\pi$-Calculus This section summarises the polymorphic session $\pi$-calculus [7], dubbed Poly$\pi$, arising as a process assignment to second-order linear logic [15], its typing system and behavioural equivalences. ### 2.1 Processes and Typing Syntax. Given an infinite set $\Lambda$ of names $x,y,z,u,v$, the grammar of processes $P,Q,R$ and session types $A,B,C$ is defined by: $\begin{array}[]{l}\begin{array}[]{lclllllllllllllllll}P,Q,R&::=&x\langle y\rangle.P&\mid&x(y).P&\mid&P\mid Q&\mid&(\mathbf{\nu}y)P&\mid&[x\leftrightarrow y]&\mid&{\bf 0}\\\\[2.84526pt] &\mid&x\langle A\rangle.P&\mid&x(Y).P&\mid&x.\mathsf{inl};P&\mid&x.\mathsf{inr};P&\mid&x.\mathsf{case}(P,Q)&\mid&!x(y).P\\\\[2.84526pt] \end{array}\\\\[2.84526pt] \begin{array}[]{lcl}A,B&::=&\mathbf{1}\mid A\multimap B\mid A\otimes B\mid A\with B\mid A\oplus B\mid\,\,!A\mid\forall X.A\mid\exists X.A\mid X\end{array}\end{array}$ $x\langle y\rangle.P$ denotes the output of channel $y$ on $x$ with continuation process $P$; $x(y).P$ denotes an input along $x$, bound to $y$ in $P$; $P\mid Q$ denotes parallel composition; $(\mathbf{\nu}y)P$ denotes the restriction of name $y$ to the scope of $P$; ${\bf 0}$ denotes the inactive process; $[x\leftrightarrow y]$ denotes the linking of the two channels $x$ and $y$ (implemented as renaming); $x\langle A\rangle.P$ and $x(Y).P$ denote the sending and receiving of a _type_ $A$ along $x$ bound to $Y$ in $P$ of the receiver process; $x.\mathsf{inl};P$ and $x.\mathsf{inr};P$ denote the emission of a selection between the $\mathsf{l}$eft or $\mathsf{r}$ight branch of a receiver $x.\mathsf{case}(P,Q)$ process; $!x(y).P$ denotes an input- guarded replication, that spawns replicas upon receiving an input along $x$. We often abbreviate $(\mathbf{\nu}y)x\langle y\rangle.P$ to $\overline{x}\langle y\rangle.P$ and omit trailing ${\bf 0}$ processes. By convention, we range over linear channels with $x,y,z$ and shared channels with $u,v,w$. The syntax of session types is that of (intuitionistic) linear logic propositions which are assigned to channels according to their usages in processes: $\mathbf{1}$ denotes the type of a channel along which no further behaviour occurs; $A\multimap B$ denotes a session that waits to receive a channel of type $A$ and will then proceed as a session of type $B$; dually, $A\otimes B$ denotes a session that sends a channel of type $A$ and continues as $B$; $A\with B$ denotes a session that offers a choice between proceeding as behaviours $A$ or $B$; $A\oplus B$ denotes a session that internally chooses to continue as either $A$ or $B$, signalling appropriately to the communicating partner; $!A$ denotes a session offering an unbounded (but finite) number of behaviours of type $A$; $\forall X.A$ denotes a polymorphic session that receives a type $B$ and behaves uniformly as $A\\{B/X\\}$; dually, $\exists X.A$ denotes an existentially typed session, which emits a type $B$ and behaves as $A\\{B/X\\}$. Operational Semantics. The operational semantics of our calculus is presented as a standard labelled transition system (Fig. 1) in the style of the _early_ system for the $\pi$-calculus [46]. In the remainder of this work we write $\equiv$ for a standard $\pi$-calculus structural congruence extended with the clause $[x\leftrightarrow y]\equiv[y\leftrightarrow x]$. 
In order to streamline the presentation of observational equivalence [36, 7], we write $\equiv_{!}$ for structural congruence extended with the so-called sharpened replication axioms [46], which capture basic equivalences of replicated processes (and are present in the proof dynamics of the exponential of linear logic). A transition $P\xrightarrow{~{}\alpha~{}}Q$ denotes that $P$ may evolve to $Q$ by performing the action represented by label $\alpha$. An action $\alpha$ ($\overline{\alpha}$) requires a matching $\overline{\alpha}$ ($\alpha$) in the environment to enable progress. Labels include: the silent internal action $\tau$, output and bound output actions ($\overline{x\langle y\rangle}$ and $\overline{(\nu z)x\langle z\rangle}$); input action $x(y)$; the binary choice actions ($x.\mathsf{inl}$, $\overline{x.\mathsf{inl}}$, $x.\mathsf{inr}$, and $\overline{x.\mathsf{inr}}$); and output and input actions of types ($\overline{x\langle A\rangle}$ and $x(A)$). The labelled transition relation is defined by the rules in Fig. 1, subject to the side conditions: in rule $(\mathsf{res})$, we require $y\not\in\mbox{\it fn}(\alpha)$; in rule $(\mathsf{par})$, we require $\mbox{\it bn}(\alpha)\cap\mbox{\it fn}(R)=\emptyset$; in rule $(\mathsf{close})$, we require $y\not\in\mbox{\it fn}(Q)$. We omit the symmetric versions of $(\mathsf{par})$, $(\mathsf{com})$, $(\mathsf{lout})$, $(\mathsf{lin})$, $(\mathsf{close})$ and closure under $\alpha$-conversion. We write $\rho_{1}\rho_{2}$ for the composition of relations $\rho_{1},\rho_{2}$. We write $\xrightarrow{}$ to stand for $\xrightarrow{\tau}\equiv$. Weak transitions are defined as usual: we write $\stackrel{{\scriptstyle}}{{\Longrightarrow}}$ for the reflexive, transitive closure of $\xrightarrow{\tau}$ and $\xrightarrow{}^{+}$ for the transitive closure of $\xrightarrow{\tau}$. Given $\alpha\neq\tau$, notation $\stackrel{{\scriptstyle\alpha}}{{\Longrightarrow}}$ stands for $\stackrel{{\scriptstyle~{}}}{{\Longrightarrow}}\xrightarrow{\alpha}\stackrel{{\scriptstyle~{}}}{{\Longrightarrow}}$ and $\stackrel{{\scriptstyle\tau}}{{\Longrightarrow}}$ stands for $\stackrel{{\scriptstyle}}{{\Longrightarrow}}$. 
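As a small worked illustration (ours, not taken from the original text), consider the polymorphic forwarder $z(X).z(x).[x\leftrightarrow z]$, which can be typed as $z{:}\forall X.(X\multimap X)$ in the system of § 2.1. Using rules $(\mathsf{inT})$ and $(\mathsf{in})$ of Fig. 1 it performs the transitions

$z(X).z(x).[x\leftrightarrow z]\xrightarrow{z(B)}z(x).[x\leftrightarrow z]\xrightarrow{z(y)}[y\leftrightarrow z]$

for any type $B$ and name $y$, while a bound output such as $(\mathbf{\nu}y)x\langle y\rangle.P\xrightarrow{\overline{(\mathbf{\nu}y)x\langle y\rangle}}P$ arises from $(\mathsf{out})$ followed by $(\mathsf{open})$.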
$\begin{array}{c}
(\mathsf{out})\ \ x\langle y\rangle.P\xrightarrow{\overline{x\langle y\rangle}}P
\qquad
(\mathsf{in})\ \ x(y).P\xrightarrow{x(z)}P\\{z/y\\}
\qquad
(\mathsf{outT})\ \ x\langle A\rangle.P\xrightarrow{\overline{x\langle A\rangle}}P
\qquad
(\mathsf{inT})\ \ x(Y).P\xrightarrow{x(B)}P\\{B/Y\\}
\\\\[4pt]
(\mathsf{lout})\ \ x.\mathsf{inl};P\xrightarrow{\overline{x.\mathsf{inl}}}P
\qquad
(\mathsf{lin})\ \ x.\mathsf{case}(P,Q)\xrightarrow{x.\mathsf{inl}}P
\qquad
(\mathsf{rep})\ \ !x(y).P\xrightarrow{x(z)}P\\{z/y\\}\mid\,!x(y).P
\qquad
(\mathsf{id})\ \ (\mathbf{\nu}x)([x\leftrightarrow y]\mid P)\xrightarrow{\tau}P\\{y/x\\}
\\\\[4pt]
(\mathsf{open})\ \ \frac{P\xrightarrow{\overline{x\langle y\rangle}}Q}{(\mathbf{\nu}y)P\xrightarrow{\overline{(\mathbf{\nu}y)x\langle y\rangle}}Q}
\qquad
(\mathsf{close})\ \ \frac{P\xrightarrow{\overline{(\mathbf{\nu}y)x\langle y\rangle}}P^{\prime}\quad Q\xrightarrow{x(y)}Q^{\prime}}{P\mid Q\xrightarrow{\tau}(\mathbf{\nu}y)(P^{\prime}\mid Q^{\prime})}
\qquad
(\mathsf{res})\ \ \frac{P\xrightarrow{\alpha}Q}{(\mathbf{\nu}y)P\xrightarrow{\alpha}(\mathbf{\nu}y)Q}
\\\\[4pt]
(\mathsf{par})\ \ \frac{P\xrightarrow{\alpha}Q}{P\mid R\xrightarrow{\alpha}Q\mid R}
\qquad
(\mathsf{com})\ \ \frac{P\xrightarrow{\overline{\alpha}}P^{\prime}\quad Q\xrightarrow{\alpha}Q^{\prime}}{P\mid Q\xrightarrow{\tau}P^{\prime}\mid Q^{\prime}}
\end{array}$

Figure 1: Labelled Transition System.

Typing System. The typing rules of Poly$\pi$ are given in Fig. 2, following [7]. The rules define the judgment $\Omega;\Gamma;\Delta\vdash P::z{:}A$, denoting that process $P$ offers a session of type $A$ along channel $z$, using the _linear_ sessions in $\Delta$, (potentially) using the unrestricted or _shared_ sessions in $\Gamma$, with polymorphic type variables maintained in $\Omega$. We use a well-formedness judgment $\Omega\vdash A\,\mathsf{type}$ which states that $A$ is well-formed wrt the type variable environment $\Omega$ (i.e. $\mathit{fv}(A)\subseteq\Omega$). We often write $T$ for the right-hand side typing $z{:}A$, $\cdot$ for the empty context and $\Delta,\Delta^{\prime}$ for the union of contexts $\Delta$ and $\Delta^{\prime}$, only defined when $\Delta$ and $\Delta^{\prime}$ are disjoint. We write $\cdot\vdash P::T$ for $\cdot;\cdot;\cdot\vdash P::T$.
$\begin{array}{c}
({{\multimap}\mathsf{R}})\ \ \frac{\Omega;\Gamma;\Delta,x{:}A\vdash P::z{:}B}{\Omega;\Gamma;\Delta\vdash z(x).P::z{:}A\multimap B}
\qquad
({{\otimes}\mathsf{R}})\ \ \frac{\Omega;\Gamma;\Delta_{1}\vdash P::y{:}A\quad\Omega;\Gamma;\Delta_{2}\vdash Q::z{:}B}{\Omega;\Gamma;\Delta_{1},\Delta_{2}\vdash(\mathbf{\nu}y)z\langle y\rangle.(P\mid Q)::z{:}A\otimes B}
\\\\[6pt]
({{\forall}\mathsf{R}})\ \ \frac{\Omega,X;\Gamma;\Delta\vdash P::z{:}A}{\Omega;\Gamma;\Delta\vdash z(X).P::z{:}\forall X.A}
\qquad
({{\forall}\mathsf{L}})\ \ \frac{\Omega\vdash B\,\mathsf{type}\quad\Omega;\Gamma;\Delta,x{:}A\\{B/X\\}\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}\forall X.A\vdash x\langle B\rangle.P::z{:}C}
\\\\[6pt]
({{\exists}\mathsf{R}})\ \ \frac{\Omega\vdash B\,\mathsf{type}\quad\Omega;\Gamma;\Delta\vdash P::z{:}A\\{B/X\\}}{\Omega;\Gamma;\Delta\vdash z\langle B\rangle.P::z{:}\exists X.A}
\qquad
({{\exists}\mathsf{L}})\ \ \frac{\Omega,X;\Gamma;\Delta,x{:}A\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}\exists X.A\vdash x(X).P::z{:}C}
\\\\[6pt]
(\mathsf{id})\ \ \frac{\ }{\Omega;\Gamma;x{:}A\vdash[x\leftrightarrow z]::z{:}A}
\qquad
(\mathsf{cut})\ \ \frac{\Omega;\Gamma;\Delta_{1}\vdash P::x{:}A\quad\Omega;\Gamma;\Delta_{2},x{:}A\vdash Q::z{:}C}{\Omega;\Gamma;\Delta_{1},\Delta_{2}\vdash(\mathbf{\nu}x)(P\mid Q)::z{:}C}
\end{array}$

Figure 2: Typing Rules (Abridged – See [52] for all rules).

As in [8, 9, 36, 54], the typing discipline enforces that channel outputs always have as object a _fresh_ name, in the style of the internal mobility $\pi$-calculus [44]. We clarify a few of the key rules: Rule ${{\forall}\mathsf{R}}$ defines the meaning of (impredicative) universal quantification over session types, stating that a session of type $\forall X.A$ inputs a type and then behaves uniformly as $A$; dually, to use such a session (rule ${{\forall}\mathsf{L}}$), a process must output a type $B$ which then warrants the use of the session as type $A\\{B/X\\}$. Rule ${{\multimap}\mathsf{R}}$ captures session input, where a session of type $A\multimap B$ expects to receive a session of type $A$ which will then be used to produce a session of type $B$. Dually, session output (rule ${{\otimes}\mathsf{R}}$) is achieved by producing a fresh session of type $A$ (that uses a disjoint set of sessions to those of the continuation) and outputting the fresh session along $z$, which is then a session of type $B$. Linear composition is captured by rule $\mathsf{cut}$ which enables a process that offers a session $x{:}A$ (using linear sessions in $\Delta_{1}$) to be composed with a process that _uses_ that session (amongst others in $\Delta_{2}$) to offer $z{:}C$. As shown in [7], typing entails Subject Reduction, Global Progress, and Termination.

Observational Equivalences. We briefly summarise the typed congruence and logical equivalence with polymorphism, giving rise to a suitable notion of relational parametricity in the sense of Reynolds [41], defined as a contextual logical relation on typed processes [7]. The logical relation is reminiscent of a typed bisimulation. However, extra care is needed to ensure well-foundedness due to impredicative type instantiation. As a consequence, the logical relation allows us to reason about process equivalences where type variables are not instantiated with _the same_, but rather _related_ types.

Typed Barbed Congruence ($\cong$). We use the typed contextual congruence from [7], which preserves _observable_ actions, called barbs. Formally, _barbed congruence_, noted $\cong$, is the largest equivalence on well-typed processes that is $\tau$-closed, barb preserving, and contextually closed under typed contexts; see [7] and [52] for the full definition.

Logical Equivalence ($\approx_{\mathtt{L}}$). The definition of logical equivalence is no more than a typed contextual bisimulation with the following intuitive reading: given two open processes $P$ and $Q$ (i.e. processes with non-empty left-hand side typings), we define their equivalence by inductively closing out the context, composing with equivalent processes offering appropriately typed sessions. When processes are closed, we have a single distinguished session channel along which we can perform observations, and proceed inductively on the structure of the offered session type. We can then show that such an equivalence satisfies the necessary fundamental properties (Theorem 2.1). The logical relation is defined using the candidates technique of Girard [16].
In this setting, an _equivalence candidate_ is a relation on typed processes satisfying basic closure conditions: an equivalence candidate must be compatible with barbed congruence and closed under forward and converse reduction. ###### Definition 2.1 (Equivalence Candidate) An _equivalence candidate_ $\mathcal{R}$ at $z{:}A$ and $z{:}B$, noted $\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$, is a binary relation on processes such that, for every $(P,Q)\in\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$ both $\cdot\vdash P::z{:}A$ and $\cdot\vdash Q::z{:}B$ hold, together with the following (we often write $(P,Q)\in\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$ as $P\,\mathcal{R}\,Q::z{:}A\\!\Leftrightarrow\\!B$): 1. 1. If $(P,Q)\in\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$, $\cdot\vdash P\cong P^{\prime}::z{:}A$, and $\cdot\vdash Q\cong Q^{\prime}::z{:}B$ then $(P^{\prime},Q^{\prime})\in\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$. 2. 2. If $(P,Q)\in\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$ then, for all $P_{0}$ such that $\cdot\vdash P_{0}::z{:}A$ and $P_{0}\stackrel{{\scriptstyle}}{{\Longrightarrow}}P$, we have $(P_{0},Q)\in\,\mathcal{R}::z{:}A\\!\Leftrightarrow\\!B$. Symmetrically for $Q$. To define the logical relation we rely on some auxiliary notation, pertaining to the treatment of type variables arising due to impredicative polymorphism. We write $\omega:\Omega$ to denote a mapping $\omega$ that assigns a closed type to the type variables in $\Omega$. We write $\omega(X)$ for the type mapped by $\omega$ to variable $X$. Given two mappings $\omega:\Omega$ and $\omega^{\prime}:\Omega$, we define an equivalence candidate assignment $\eta$ between $\omega$ and $\omega^{\prime}$ as a mapping of equivalence candidate $\eta(X)::{-}{:}\omega(X)\\!\Leftrightarrow\\!\omega^{\prime}(X)$ to the type variables in $\Omega$, where the particular choice of a distinguished right- hand side channel is _delayed_ (i.e. to be instantiated later on). We write $\eta(X)(z)$ for the instantiation of the (delayed) candidate with the name $z$. We write $\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}$ to denote that $\eta$ is a candidate assignment between $\omega$ and $\omega^{\prime}$; and $\hat{\omega}(P)$ to denote the application of mapping $\omega$ to $P$. We define a sequent-indexed family of process relations, that is, a set of pairs of processes $(P,Q)$, written $\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$, satisfying some conditions, typed under $\Omega;\Gamma;\Delta\vdash T$, with $\omega:\Omega$, $\omega^{\prime}:\Omega$ and $\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}$. Logical equivalence is defined inductively on the size of the typing contexts and then on the structure of the right-hand side type. We show only select cases (see [52] for the full definition). 
###### Definition 2.2 (Logical Equivalence)

(Base Case) Given a type $A$ and mappings $\omega,\omega^{\prime},\eta$, we define _logical equivalence_, noted $P\approx_{\mathtt{L}}Q::z{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$, as the smallest symmetric binary relation containing all pairs of processes $(P,Q)$ such that (i) $\cdot\vdash\hat{\omega}(P)::z{:}\hat{\omega}(A)$; (ii) $\cdot\vdash\hat{\omega}^{\prime}(Q)::z{:}\hat{\omega}^{\prime}(A)$; and (iii) the conditions given below are satisfied:

* • $P\approx_{\mathtt{L}}Q::z{:}X[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]\text{ iff }(P,Q)\in\eta(X)(z)$
* • $P\approx_{\mathtt{L}}Q::z{:}A\multimap B[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$ iff $\forall P^{\prime},y.~{}(P\xrightarrow{z(y)}P^{\prime})\Rightarrow\exists Q^{\prime}.Q\stackrel{{\scriptstyle z(y)}}{{\Longrightarrow}}Q^{\prime}$ s.t. $\forall R_{1},R_{2}.R_{1}\approx_{\mathtt{L}}R_{2}::y{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]\Rightarrow(\nu y)(P^{\prime}\,|\,R_{1})\approx_{\mathtt{L}}(\nu y)(Q^{\prime}\,|\,R_{2})::z{:}B[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$
* • $P\approx_{\mathtt{L}}Q::z{:}A\otimes B[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$ iff $\forall P^{\prime},y.~{}~{}(P\xrightarrow{\overline{(\nu y)z\langle y\rangle}}P^{\prime})\Rightarrow\exists Q^{\prime}.Q\stackrel{{\scriptstyle\overline{(\nu y)z\langle y\rangle}}}{{\Longrightarrow}}Q^{\prime}$ s.t. $\exists P_{1},P_{2},Q_{1},Q_{2}.\ P^{\prime}\equiv_{!}P_{1}\mid P_{2}\wedge Q^{\prime}\equiv_{!}Q_{1}\mid Q_{2}\wedge P_{1}\approx_{\mathtt{L}}Q_{1}::y{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]\wedge P_{2}\approx_{\mathtt{L}}Q_{2}::z{:}B[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$
* • $P\approx_{\mathtt{L}}Q::z{:}\forall X.A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$ iff $\forall B_{1},B_{2},P^{\prime},\mathcal{R}::{-}{:}B_{1}\\!\Leftrightarrow\\!B_{2}.~{}~{}(P\xrightarrow{z(B_{1})}P^{\prime})$ implies $\exists Q^{\prime}.Q\stackrel{{\scriptstyle z(B_{2})}}{{\Longrightarrow}}Q^{\prime},~{}P^{\prime}\approx_{\mathtt{L}}Q^{\prime}::z{:}A[\eta[X\mapsto\mathcal{R}]:\omega[X\mapsto B_{1}]\\!\Leftrightarrow\\!\omega^{\prime}[X\mapsto B_{2}]]$

(Inductive Case) Let $\Gamma,\Delta$ be non-empty. Given $\Omega;\Gamma;\Delta\vdash P::T$ and $\Omega;\Gamma;\Delta\vdash Q::T$, the binary relation on processes $\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$ (with $\omega,\omega^{\prime}:\Omega$ and $\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}$) is inductively defined as: $\begin{array}[]{lcl}\Gamma;\Delta,y:A\vdash P\approx_{\mathtt{L}}Q::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]&\mbox{ iff }&\forall R_{1},R_{2}.\mbox{ s.t. }R_{1}\approx_{\mathtt{L}}R_{2}::y{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}],\\\ &&\Gamma;\Delta\vdash(\nu y)(\hat{\omega}(P)\mid\hat{\omega}(R_{1}))\approx_{\mathtt{L}}(\nu y)(\hat{\omega}^{\prime}(Q)\mid\hat{\omega}^{\prime}(R_{2}))::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]\\\\[2.84526pt] \Gamma,u:A;\Delta\vdash P\approx_{\mathtt{L}}Q::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]&\mbox{ iff }&\forall R_{1},R_{2}.\mbox{ s.t. 
}R_{1}\approx_{\mathtt{L}}R_{2}::y{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}],\\\ &&\Gamma;\Delta\vdash(\mathbf{\nu}u)(\hat{\omega}(P)\mid!u(y).\hat{\omega}(R_{1}))\approx_{\mathtt{L}}(\mathbf{\nu}u)(\hat{\omega}^{\prime}(Q)\mid!u(y).\hat{\omega}^{\prime}(R_{2}))::T[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]\end{array}$ For the sake of readability we often omit the $\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}$ portion of $\approx_{\mathtt{L}}$, which is henceforth implicitly universally quantified. Thus, we write $\Omega;\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::z{:}A$ (or $P\approx_{\mathtt{L}}Q$) iff the two given processes are logically equivalent for all consistent instantiations of its type variables. It is instructive to inspect the clause for type input ($\forall X.A$): the two processes must be able to match inputs of any pair of _related_ types (i.e. types related by a candidate), such that the continuations are related at the open type $A$ with the appropriate type variable instantiations, following Girard [16]. The power of this style of logical relation arises from a combination of the extensional flavour of the equivalence and the fact that polymorphic equivalences do not require the same type to be instantiated in both processes, but rather that the types are _related_ (via a suitable equivalence candidate relation). ###### Theorem 2.1 (Properties of Logical Equivalence [7]) Parametricity: If $\Omega;\Gamma;\Delta\vdash P::z{:}A$ then, for all $\omega,\omega^{\prime}:\Omega$ and $\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}$, we have $\Gamma;\Delta\vdash\hat{\omega}(P)\approx_{\mathtt{L}}\hat{\omega^{\prime}}(P)::z{:}A[\eta:\omega\\!\Leftrightarrow\\!\omega^{\prime}]$. Soundness: If $\Omega;\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::z{:}A$ then $\mathcal{C}[P]\cong\mathcal{C}[Q]::z{:}A$, for any closing $\mathcal{C}[-]$. Completeness: If $\Omega;\Gamma;\Delta\vdash P\cong Q::z{:}A$ then $\Omega;\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::z{:}A$. ## 3 To Linear-F and Back We now develop our mutually inverse and fully abstract encodings between Poly$\pi$ and a linear polymorphic $\lambda$-calculus [55] that we dub Linear-F. We first introduce the syntax and typing of the linear $\lambda$-calculus and then proceed to detail our encodings and their properties (we omit typing ascriptions from the existential polymorphism constructs for readability). ###### Definition 3.1 (Linear-F) The syntax of terms $M,N$ and types $A,B$ of Linear-F is given below. 
$\small\begin{array}[]{lcl}M,N&::=&\lambda x{:}A.M\mid M\,N\mid\langle M\otimes N\rangle\mid\mathsf{let}\,x\otimes y=M\,\mathsf{in}\,N\mid\,!M\mid\mathsf{let}\,!u=M\,\mathsf{in}\,N\mid\Lambda X.M\\\\[2.84526pt] &\mid&M[A]\mid\mathsf{pack}\,A\,\mathsf{with}\,M\mid\mathsf{let}\,(X,y)=M\,\mathsf{in}\,N\mid\mathsf{let}\,\mathbf{1}=M\,\mathsf{in}\,N\mid\langle\rangle\mid\mathsf{T}\mid\mathsf{F}\\\\[4.5pt] A,B&::=&A\multimap B\mid A\otimes B\mid\,!A\mid\forall X.A\mid\exists X.A\mid X\mid\mathbf{1}\mid\mathbf{2}\end{array}$

The syntax of types is that of the multiplicative and exponential fragments of second-order intuitionistic linear logic: $\lambda x{:}A.M$ denotes linear $\lambda$-abstractions; $M\,N$ denotes application; $\langle M\otimes N\rangle$ denotes the multiplicative pairing of $M$ and $N$, as reflected in its elimination form $\mathsf{let}\,x\otimes y=M\,\mathsf{in}\,N$ which simultaneously deconstructs the pair $M$, binding its first and second projection to $x$ and $y$ in $N$, respectively; $!M$ denotes a term $M$ that does not use any linear variables and so may be used an arbitrary number of times; $\mathsf{let}\,!u=M\,\mathsf{in}\,N$ binds the underlying exponential term of $M$ as $u$ in $N$; $\Lambda X.M$ is the type abstraction former; $M[A]$ stands for type application; $\mathsf{pack}\,A\,\mathsf{with}\,M$ is the existential type introduction form, where $M$ is a term where the existentially typed variable is instantiated with $A$; $\mathsf{let}\,(X,y)=M\,\mathsf{in}\,N$ unpacks an existential package $M$, binding the representation type to $X$ and the underlying term to $y$ in $N$; the multiplicative unit $\mathbf{1}$ has as introduction form the nullary pair $\langle\rangle$ and is eliminated by the construct $\mathsf{let}\,\mathbf{1}=M\,\mathsf{in}\,N$, where $M$ is a term of type $\mathbf{1}$. Booleans (type $\mathbf{2}$ with values $\mathsf{T}$ and $\mathsf{F}$) are the basic observable.

The typing judgment in Linear-F is given as $\Omega;\Gamma;\Delta\vdash M:A$, following the DILL formulation of linear logic [3], stating that term $M$ has type $A$ in a linear context $\Delta$ (i.e. bindings for linear variables $x{:}B$), intuitionistic context $\Gamma$ (i.e. bindings for intuitionistic variables $u{:}B$) and type variable context $\Omega$. The typing rules are standard [7]. The operational semantics of the calculus are the expected call-by-name semantics with commuting conversions [27]. We write $\Downarrow$ for the evaluation relation. We write $\cong$ for the largest typed congruence that is consistent with the observables of type $\mathbf{2}$ (i.e. a so-called Morris-style equivalence as in [5]).

### 3.1 Encoding Linear-F into Session $\pi$-Calculus

We define a translation from Linear-F to Poly$\pi$ generalising the one from [49], accounting for polymorphism and multiplicative pairs. We translate typing derivations of $\lambda$-terms to those of $\pi$-calculus terms (we omit the full typing derivation for the sake of readability). Proof theoretically, the $\lambda$-calculus corresponds to a proof term assignment for natural deduction presentations of logic, whereas the session $\pi$-calculus from § 2 corresponds to a proof term assignment for sequent calculus. Thus, we obtain a translation from $\lambda$-calculus to the session $\pi$-calculus by considering the proof theoretic content of the constructive proof of soundness of the sequent calculus wrt natural deduction.
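(For later reference, a minimal example of the calculus just introduced: the polymorphic identity is typed as $\cdot;\cdot;\cdot\vdash\Lambda X.\lambda w{:}X.w:\forall X.X\multimap X$ and, on the boolean observables, $(\Lambda X.\lambda w{:}X.w)[\mathbf{2}]\,\mathsf{T}\Downarrow\mathsf{T}$.)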
Following Gentzen [14], the translation from natural deduction to sequent calculus maps introduction rules to the corresponding right rules and elimination rules to a combination of the corresponding left rule, cut and/or identity. Since typing in the session calculus identifies a distinguished channel along which a process offers a session, the translation of $\lambda$-terms is parameterised by a ``result'' channel along which the behaviour of the $\lambda$-term is implemented. Given a $\lambda$-term $M$, the process $\llbracket M\rrbracket_{z}$ encodes the behaviour of $M$ along the session channel $z$. We enforce that the type $\mathbf{2}$ of booleans and its two constructors are consistently translated to their polymorphic Church encodings before applying the translation to Poly$\pi$. Thus, type $\mathbf{2}$ is first translated to $\forall X.!X\\!\multimap\,!X\\!\multimap X$, the value $\mathsf{T}$ to $\Lambda X.\lambda u{:}!X.\lambda v{:}!X.\mathsf{let}\,!x=u\,\mathsf{in}\,\mathsf{let}\,!y=v\,\mathsf{in}\,x$ and the value $\mathsf{F}$ to $\Lambda X.\lambda u{:}!X.\lambda v{:}!X.\mathsf{let}\,!x=u\,\mathsf{in}\,\mathsf{let}\,!y=v\,\mathsf{in}\,y$. Such representations of the booleans are adequate up to parametricity [6] and suitable for our purposes of relating the session calculus (which has no primitive notion of value or result type) with the $\lambda$-calculus precisely due to the tight correspondence between the two calculi. ###### Definition 3.2 (From Linear-F to Poly$\pi$) $\llbracket\Omega\rrbracket;\llbracket\Gamma\rrbracket;\llbracket\Delta\rrbracket\vdash\llbracket M\rrbracket_{z}::z{:}A$ denotes the translation of contexts, types and terms from Linear-F to the polymorphic session calculus. The translations on contexts and types are the identity function. Booleans and their values are first translated to their Church encodings as specified above. 
The translation on $\lambda$-terms is given below: $\begin{array}[]{lcllcl}\llbracket x\rrbracket_{z}&\triangleq&[x\leftrightarrow z]&\llbracket M\,N\rrbracket_{z}&\triangleq&(\mathbf{\nu}x)(\llbracket M\rrbracket_{x}\mid(\mathbf{\nu}y)x\langle y\rangle.(\llbracket N\rrbracket_{y}\mid[x\leftrightarrow z]))\\\ \llbracket u\rrbracket_{z}&\triangleq&(\mathbf{\nu}x)u\langle x\rangle.[x\leftrightarrow z]&\llbracket\mathsf{let}\,!u=M\,\mathsf{in}\,N\rrbracket_{z}&\triangleq&(\mathbf{\nu}x)(\llbracket M\rrbracket_{x}\mid\llbracket N\rrbracket_{z}\\{x/u\\})\\\ \llbracket\lambda x{:}A.M\rrbracket_{z}&\triangleq&z(x).\llbracket M\rrbracket_{z}&\llbracket\langle M\otimes N\rangle\rrbracket_{z}&\triangleq&(\mathbf{\nu}y)z\langle y\rangle.(\llbracket M\rrbracket_{y}\mid\llbracket N\rrbracket_{z})\\\ \llbracket!M\rrbracket_{z}&\triangleq&!z(x).\llbracket M\rrbracket_{x}&\llbracket\mathsf{let}\,x\otimes y=M\,\mathsf{in}\,N\rrbracket_{z}&\triangleq&(\mathbf{\nu}y)(\llbracket M\rrbracket_{y}\mid y(x).\llbracket N\rrbracket_{z})\\\ \llbracket\Lambda X.M\rrbracket_{z}&\triangleq&z(X).\llbracket M\rrbracket_{z}&\llbracket M[A]\rrbracket_{z}&\triangleq&(\mathbf{\nu}x)(\llbracket M\rrbracket_{x}\mid x\langle A\rangle.[x\leftrightarrow z])\\\ \llbracket\mathsf{pack}\,A\,\mathsf{with}\,M\rrbracket_{z}&\triangleq&z\langle A\rangle.\llbracket M\rrbracket_{z}&\llbracket\mathsf{let}\,(X,y)=M\,\mathsf{in}\,N\rrbracket_{z}&\triangleq&(\mathbf{\nu}y)(\llbracket M\rrbracket_{y}\mid y(X).\llbracket N\rrbracket_{z})\\\ \llbracket\langle\rangle\rrbracket_{z}&\triangleq&{\bf 0}&\llbracket\mathsf{let}\,\mathbf{1}=M\,\mathsf{in}\,N\rrbracket_{z}&\triangleq&(\mathbf{\nu}x)(\llbracket M\rrbracket_{x}\mid\llbracket N\rrbracket_{z})\end{array}$

To translate a (linear) $\lambda$-abstraction $\lambda x{:}A.M$, which corresponds to the proof term for the introduction rule for $\multimap$, we map it to the corresponding ${{\multimap}\mathsf{R}}$ rule, thus obtaining a process $z(x).\llbracket M\rrbracket_{z}$ that inputs along the result channel $z$ a channel $x$ which will be used in $\llbracket M\rrbracket_{z}$ to access the function argument. To encode the application $M\,N$, we compose (i.e. $\mathsf{cut}$) $\llbracket M\rrbracket_{x}$, where $x$ is a fresh name, with a process that provides the (encoded) function argument by outputting along $x$ a channel $y$ which offers the behaviour of $\llbracket N\rrbracket_{y}$. After the output is performed, the type of $x$ is now that of the function's codomain and thus we conclude by forwarding (i.e. the $\mathsf{id}$ rule) between $x$ and the result channel $z$. The encoding for polymorphism follows a similar pattern: To encode the abstraction $\Lambda X.M$, we receive along the result channel a type that is bound to $X$ and proceed inductively. To encode type application $M[A]$ we encode the abstraction $M$ in parallel with a process that sends $A$ to it, and forwards accordingly. Finally, the encoding of the existential package $\mathsf{pack}\,A\,\mathsf{with}\,M$ maps to an output of the type $A$ followed by the behaviour $\llbracket M\rrbracket_{z}$, with the encoding of the elimination form $\mathsf{let}\,(X,y)=M\,\mathsf{in}\,N$ composing the translation of the term of existential type $M$ with a process performing the appropriate type input and proceeding as $\llbracket N\rrbracket_{z}$.
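As a minimal warm-up instance of the translation (complementing Example 3.1 below), consider the polymorphic identity $\Lambda X.\lambda w{:}X.w$. Reading off the clauses above, $\llbracket\Lambda X.\lambda w{:}X.w\rrbracket_{z}=z(X).z(w).[w\leftrightarrow z]$ and $\llbracket(\Lambda X.\lambda w{:}X.w)[B]\rrbracket_{z}=(\mathbf{\nu}x)(x(X).x(w).[w\leftrightarrow x]\mid x\langle B\rangle.[x\leftrightarrow z])$; after the type exchange on $x$ and elimination of the forwarder $[x\leftrightarrow z]$, the latter behaves as $z(w).[w\leftrightarrow z]=\llbracket\lambda w{:}B.w\rrbracket_{z}$, matching the source reduction $(\Lambda X.\lambda w{:}X.w)[B]\xrightarrow{}\lambda w{:}B.w$.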
###### Example 3.1 (Encoding of Linear-F) Consider the following $\lambda$-term corresponding to a polymorphic pairing function (recall that we write $\overline{z}\langle w\rangle.P$ for $(\mathbf{\nu}w)z\langle w\rangle.P$): $\small\begin{array}[]{lcllcl}M\triangleq\Lambda X.\Lambda Y.\lambda x{:}X.\lambda y{:}Y.\langle x\otimes y\rangle&\mbox{and}&N\triangleq((M[A][B]\,M_{1})\,M_{2})\end{array}$ Then we have, with $\tilde{x}=x_{1}x_{2}x_{3}x_{4}$: $\small\begin{array}[]{rcll}\llbracket N\rrbracket_{z}&\equiv&(\mathbf{\nu}\tilde{x})(&\llbracket M\rrbracket_{x_{1}}\mid x_{1}\langle A\rangle.[x_{1}\leftrightarrow x_{2}]\mid x_{2}\langle B\rangle.[x_{2}\leftrightarrow x_{3}]\mid\\\ &&&\overline{x_{3}}\langle x\rangle.(\llbracket M_{1}\rrbracket_{x}\mid[x_{3}\leftrightarrow x_{4}])\mid\overline{x_{4}}\langle y\rangle.(\llbracket M_{2}\rrbracket_{y}\mid[x_{4}\leftrightarrow z]))\\\ &\equiv&(\mathbf{\nu}\tilde{x})(&x_{1}(X).x_{1}(Y).x_{1}(x).x_{1}(y).\overline{x_{1}}\langle w\rangle.([x\leftrightarrow w]\mid[y\leftrightarrow x_{1}])\mid x_{1}\langle A\rangle.[x_{1}\leftrightarrow x_{2}]\mid\\\ &&&x_{2}\langle B\rangle.[x_{2}\leftrightarrow x_{3}]\mid\overline{x_{3}}\langle x\rangle.(\llbracket M_{1}\rrbracket_{x}\mid[x_{3}\leftrightarrow x_{4}])\mid\overline{x_{4}}\langle y\rangle.(\llbracket M_{2}\rrbracket_{y}\mid[x_{4}\leftrightarrow z]))\end{array}$ We can observe that $N\xrightarrow{}^{+}(((\lambda x{:}A.\lambda y{:}B.\langle x\otimes y\rangle)\,M_{1})\,M_{2})\xrightarrow{}^{+}\langle M_{1}\otimes M_{2}\rangle$. At the process level, each reduction corresponding to the redex of type application is simulated by two reductions, obtaining: $\small\begin{array}[]{lcll}\llbracket N\rrbracket_{z}&\xrightarrow{}^{+}&(\mathbf{\nu}x_{3},x_{4})(&x_{3}(x).x_{3}(y).\overline{x_{3}}\langle w\rangle.([x\leftrightarrow w]\mid[y\leftrightarrow x_{3}])\mid\\\ &&&\overline{x_{3}}\langle x\rangle.(\llbracket M_{1}\rrbracket_{x}\mid[x_{3}\leftrightarrow x_{4}])\mid\overline{x_{4}}\langle y\rangle.(\llbracket M_{2}\rrbracket_{y}\mid[x_{4}\leftrightarrow z]))=P\end{array}$ The reductions corresponding to the $\beta$-redexes clarify the way in which the encoding represents substitution of terms for variables via fine-grained name passing. Consider $\llbracket\langle M_{1}\otimes M_{2}\rangle\rrbracket_{z}\triangleq\overline{z}\langle w\rangle.(\llbracket M_{1}\rrbracket_{w}\mid\llbracket M_{2}\rrbracket_{z})$ and $\small\begin{array}[]{rcl}P&\xrightarrow{}^{+}&(\mathbf{\nu}x,y)(\llbracket M_{1}\rrbracket_{x}\mid\llbracket M_{2}\rrbracket_{y}\mid\overline{z}\langle w\rangle.([x\leftrightarrow w]\mid[y\leftrightarrow z]))\end{array}$ The encoding of the pairing of $M_{1}$ and $M_{2}$ outputs a fresh name $w$ which will denote the behaviour of (the encoding of) $M_{1}$, and then the behaviour of the encoding of $M_{2}$ is offered on $z$. The reduct of $P$ outputs a fresh name $w$ which is then identified with $x$ and thus denotes the behaviour of $\llbracket M_{1}\rrbracket_{w}$. The channel $z$ is identified with $y$ and thus denotes the behaviour of $\llbracket M_{2}\rrbracket_{z}$, making the two processes listed above equivalent. This informal reasoning exposes the insights that justify the operational correspondence of the encoding. Proof-theoretically, these equivalences simply map to commuting conversions which push the processes $\llbracket M_{1}\rrbracket_{x}$ and $\llbracket M_{2}\rrbracket_{z}$ under the output on $z$. 
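Summarising the example as a compact instance of the correspondence established next: $N\xrightarrow{}^{+}\langle M_{1}\otimes M_{2}\rangle$, while $\llbracket N\rrbracket_{z}\stackrel{{\scriptstyle}}{{\Longrightarrow}}(\mathbf{\nu}x,y)(\llbracket M_{1}\rrbracket_{x}\mid\llbracket M_{2}\rrbracket_{y}\mid\overline{z}\langle w\rangle.([x\leftrightarrow w]\mid[y\leftrightarrow z]))\approx_{\mathtt{L}}\llbracket\langle M_{1}\otimes M_{2}\rangle\rrbracket_{z}$.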
###### Theorem 3.1 (Operational Correspondence)

* • If $\Omega;\Gamma;\Delta\vdash M:A$ and $M\xrightarrow{}N$ then $\llbracket M\rrbracket_{z}\stackrel{{\scriptstyle}}{{\Longrightarrow}}P$ such that $\llbracket N\rrbracket_{z}\approx_{\mathtt{L}}P$
* • If $\llbracket M\rrbracket_{z}\xrightarrow{}P$ then $M\xrightarrow{}^{+}N$ and $\llbracket N\rrbracket_{z}\approx_{\mathtt{L}}P$

### 3.2 Encoding Session $\pi$-calculus to Linear-F

Just as the proof theoretic content of the soundness of sequent calculus wrt natural deduction induces a translation from $\lambda$-terms to session-typed processes, the _completeness_ of the sequent calculus wrt natural deduction induces a translation from the session calculus to the $\lambda$-calculus. This mapping identifies sequent calculus right rules with the introduction rules of natural deduction and left rules with elimination rules combined with (type-preserving) substitution. Crucially, the mapping is defined on _typing derivations_, enabling us to consistently identify when a process uses a session (i.e. left rules) or, dually, when a process offers a session (i.e. right rules).

Figure 3: Translation on Typing Derivations (Excerpt – See [52]). The excerpt maps rule ${{\multimap}\mathsf{R}}$ to the introduction rule $(\multimap I)$ and rule ${{\multimap}\mathsf{L}}$ to a combination of (subst) and $(\multimap E)$.

###### Definition 3.3 (From Poly$\pi$ to Linear-F)

We write $\llparenthesis\Omega\rrparenthesis;\llparenthesis\Gamma\rrparenthesis;\llparenthesis\Delta\rrparenthesis\vdash\llparenthesis P\rrparenthesis:A$ for the translation from typing derivations in Poly$\pi$ to derivations in Linear-F. The translations on types and contexts are the identity function. The translation on processes is given below, where the leftmost column indicates the typing rule at the root of the derivation (see Fig. 3 for an excerpt of the translation on typing derivations, where we write $\llparenthesis P\rrparenthesis_{\Omega;\Gamma;\Delta\vdash z{:}A}$ to denote the translation of $\Omega;\Gamma;\Delta\vdash P::z{:}A$. We omit $\Omega$ and $\Gamma$ when unchanged). $\begin{array}[]{llclllcl}({{\mathbf{1}}\mathsf{R}})&\llparenthesis{\bf 0}\rrparenthesis&\triangleq&\langle\rangle&({{\multimap}\mathsf{L}})&\llparenthesis(\mathbf{\nu}y)x\langle y\rangle.(P\mid Q)\rrparenthesis&\triangleq&\llparenthesis Q\rrparenthesis\\{(x\,\llparenthesis P\rrparenthesis)/x\\}\\\ (\mathsf{id})&\llparenthesis[x\leftrightarrow y]\rrparenthesis&\triangleq&x&({{\multimap}\mathsf{R}})&\llparenthesis z(x).P\rrparenthesis&\triangleq&\lambda x{:}A.\llparenthesis P\rrparenthesis\\\ ({{\mathbf{1}}\mathsf{L}})&\llparenthesis P\rrparenthesis&\triangleq&\mathsf{let}\,\mathbf{1}=x\,\mathsf{in}\,\llparenthesis P\rrparenthesis&({{\otimes}\mathsf{R}})&\llparenthesis(\mathbf{\nu}x)z\langle x\rangle.(P\mid Q)\rrparenthesis&\triangleq&\langle\llparenthesis P\rrparenthesis\otimes\llparenthesis Q\rrparenthesis\rangle\\\ ({{!}\mathsf{R}})&\llparenthesis!z(x).P\rrparenthesis&\triangleq&!\llparenthesis P\rrparenthesis&({{\otimes}\mathsf{L}})&\llparenthesis x(y).P\rrparenthesis&\triangleq&\mathsf{let}\,x\otimes y=x\,\mathsf{in}\,\llparenthesis P\rrparenthesis\\\ ({{!}\mathsf{L}})&\llparenthesis P\\{u/x\\}\rrparenthesis&\triangleq&\mathsf{let}\,!u=x\,\mathsf{in}\,\llparenthesis P\rrparenthesis&(\mathsf{copy})&\llparenthesis(\mathbf{\nu}x)u\langle x\rangle.P\rrparenthesis&\triangleq&\llparenthesis P\rrparenthesis\\{u/x\\}\\\ ({{\forall}\mathsf{R}})&\llparenthesis z(X).P\rrparenthesis&\triangleq&\Lambda X.\llparenthesis P\rrparenthesis&({{\forall}\mathsf{L}})&\llparenthesis x\langle B\rangle.P\rrparenthesis&\triangleq&\llparenthesis P\rrparenthesis\\{(x[B])/x\\}\\\ ({{\exists}\mathsf{R}})&\llparenthesis z\langle B\rangle.P\rrparenthesis&\triangleq&\mathsf{pack}\,B\,\mathsf{with}\,\llparenthesis P\rrparenthesis&({{\exists}\mathsf{L}})&\llparenthesis x(Y).P\rrparenthesis&\triangleq&\mathsf{let}\,(Y,x)=x\,\mathsf{in}\,\llparenthesis P\rrparenthesis\\\ (\mathsf{cut})&\llparenthesis(\mathbf{\nu}x)(P\mid Q)\rrparenthesis&\triangleq&\llparenthesis Q\rrparenthesis\\{\llparenthesis P\rrparenthesis/x\\}&(\mathsf{cut}^{!})&\llparenthesis(\mathbf{\nu}u)(!u(x).P\mid Q)\rrparenthesis&\triangleq&\llparenthesis
Q\rrparenthesis\\{\llparenthesis P\rrparenthesis/u\\}\\\ \end{array}$ For instance, the encoding of a process $z(x).P::z{:}A\multimap B$, typed by rule ${{\multimap}\mathsf{R}}$, results in the corresponding $\multimap\\!I$ introduction rule in the $\lambda$-calculus and thus is $\lambda x{:}A.\llparenthesis P\rrparenthesis$. To encode the process $(\mathbf{\nu}y)x\langle y\rangle.(P\mid Q)$, typed by rule ${{\multimap}\mathsf{L}}$, we make use of substitution: Given that the sub-process $Q$ is typed as $\Omega;\Gamma;\Delta^{\prime},x{:}B\vdash Q::z{:}C$, the encoding of the full process is given by $\llparenthesis Q\rrparenthesis\\{(x\,\llparenthesis P\rrparenthesis)/x\\}$. The term $x\,\llparenthesis P\rrparenthesis$ consists of the application of $x$ (of function type) to the argument $\llparenthesis P\rrparenthesis$, thus ensuring that the term resulting from the substitution is of the appropriate type. We note that, for instance, the encoding of rule ${{\otimes}\mathsf{L}}$ does not need to appeal to substitution – the $\lambda$-calculus $\mathsf{let}$ style rules can be mapped directly. Similarly, rule ${{\forall}\mathsf{R}}$ is mapped to type abstraction, whereas rule ${{\forall}\mathsf{L}}$ which types a process of the form $x\langle B\rangle.P$ maps to a substitution of the type application $x[B]$ for $x$ in $\llparenthesis P\rrparenthesis$. The encoding of existential polymorphism is simpler due to the $\mathsf{let}$-style elimination. We also highlight the encoding of the $\mathsf{cut}$ rule which embodies parallel composition of two processes sharing a linear name, which clarifies the use/offer duality of the intuitionistic calculus – the process that offers $P$ is encoded and substituted into the encoded user $Q$.

###### Theorem 3.2

If $\Omega;\Gamma;\Delta\vdash P::z{:}A$ then $\llparenthesis\Omega\rrparenthesis;\llparenthesis\Gamma\rrparenthesis;\llparenthesis\Delta\rrparenthesis\vdash\llparenthesis P\rrparenthesis:A$.

###### Example 3.2 (Encoding of Poly$\pi$)

Consider the following processes $\small\begin{array}[]{lcllcl}P\triangleq\Lambda X.\Lambda Y.\lambda x{:}X.\lambda y{:}Y.\langle x\otimes y\rangle&\mbox{and}&N\triangleq((M[A][B]\,M_{1})\,M_{2})\end{array}$ $\small\begin{array}[]{l}P\triangleq z(X).z(Y).z(x).z(y).\overline{z}\langle w\rangle.([x\leftrightarrow w]\mid[y\leftrightarrow z])\quad Q\triangleq z\langle\mathbf{1}\rangle.z\langle\mathbf{1}\rangle.\overline{z}\langle x\rangle.\overline{z}\langle y\rangle.z(w).[w\leftrightarrow r]\end{array}$ with $\vdash P::z{:}\forall X.\forall Y.X\multimap Y\multimap X\otimes Y$ and $z{:}\forall X.\forall Y.X\multimap Y\multimap X\otimes Y\vdash Q::r{:}\mathbf{1}$. $\small\begin{array}[]{ll}\mbox{Then:}&\llparenthesis P\rrparenthesis=\Lambda X.\Lambda Y.\lambda x{:}X.\lambda y{:}Y.\langle x\otimes y\rangle\quad\quad\llparenthesis Q\rrparenthesis=\mathsf{let}\,x\otimes y=z[\mathbf{1}][\mathbf{1}]\,\langle\rangle\,\langle\rangle\,\mathsf{in}\,\mathsf{let}\,\mathbf{1}=y\,\mathsf{in}\,x\\\ &\llparenthesis(\mathbf{\nu}z)(P\mid Q)\rrparenthesis=\mathsf{let}\,x\otimes y=(\Lambda X.\Lambda Y.\lambda x{:}X.\lambda y{:}Y.\langle x\otimes y\rangle)[\mathbf{1}][\mathbf{1}]\,\langle\rangle\,\langle\rangle\,\mathsf{in}\,\mathsf{let}\,\mathbf{1}=y\,\mathsf{in}\,x\end{array}$ By the behaviour of $(\mathbf{\nu}z)(P\mid Q)$, which consists of a sequence of cuts, and its encoding, we have that $\llparenthesis(\mathbf{\nu}z)(P\mid Q)\rrparenthesis\xrightarrow{}^{+}\langle\rangle$ and $(\mathbf{\nu}z)(P\mid Q)\xrightarrow{}^{+}{\bf 0}=\llparenthesis\langle\rangle\rrparenthesis$. In general, the translation of Def.
3.3 can introduce some distance between the immediate operational behaviour of a process and its corresponding $\lambda$-term, insofar as the translations of cuts (and left rules to non $\mathsf{let}$-form elimination rules) make use of substitutions that can take place deep within the resulting term. Consider the process at the root of the following typing judgment $\Delta_{1},\Delta_{2},\Delta_{3}\vdash(\mathbf{\nu}x)(x(y).P_{1}\mid(\mathbf{\nu}y)x\langle y\rangle.(P_{2}\mid w(z).{\bf 0}))::w{:}\mathbf{1}\multimap\mathbf{1}$, derivable through a $\mathsf{cut}$ on session $x$ between instances of ${{\multimap}\mathsf{R}}$ and ${{\multimap}\mathsf{L}}$, where the continuation process $w(z).{\bf 0}$ offers a session $w{:}\mathbf{1}\multimap\mathbf{1}$ (and so must use rule ${{\mathbf{1}}\mathsf{L}}$ on $x$). We have that: $(\mathbf{\nu}x)(x(y).P_{1}\mid(\mathbf{\nu}y)x\langle y\rangle.(P_{2}\mid w(z).{\bf 0}))\xrightarrow{}(\mathbf{\nu}x,y)(P_{1}\mid P_{2}\mid w(z).{\bf 0})$. However, the translation of the process above results in the term $\lambda z{:}\mathbf{1}.\mathsf{let}\,\mathbf{1}=((\lambda y{:}A.\llparenthesis P_{1}\rrparenthesis)\,\llparenthesis P_{2}\rrparenthesis)\,\mathsf{in}\,\mathsf{let}\,\mathbf{1}=z\,\mathsf{in}\,\langle\rangle$, where the redex that corresponds to the process reduction is present but hidden under the binder for $z$ (corresponding to the input along $w$). Thus, to establish operational completeness we consider full $\beta$-reduction, denoted by $\xrightarrow{}_{\beta}$, i.e. enabling $\beta$-reductions under binders. ###### Theorem 3.3 (Operational Completeness) Let $\Omega;\Gamma;\Delta\vdash P::z{:}A$. If $P\xrightarrow{}Q$ then $\llparenthesis P\rrparenthesis\xrightarrow{}_{\beta}^{*}\llparenthesis Q\rrparenthesis$. In order to study the soundness direction it is instructive to consider typed process $x{:}\mathbf{1}\multimap\mathbf{1}\vdash\overline{x}\langle y\rangle.(\mathbf{\nu}z)(z(w).{\bf 0}\mid\overline{z}\langle w\rangle.{\bf 0})::v{:}\mathbf{1}$ and its translation: $\small\begin{array}[]{c}\llparenthesis\overline{x}\langle y\rangle.(\mathbf{\nu}z)(z(w).{\bf 0}\mid\overline{z}\langle w\rangle.{\bf 0})\rrparenthesis=\llparenthesis(\mathbf{\nu}z)(z(w).{\bf 0}\mid\overline{z}\langle w\rangle.{\bf 0})\rrparenthesis\\{(x\,\langle\rangle)/x\\}\\\ =\mathsf{let}\,\mathbf{1}=(\lambda w{:}\mathbf{1}.\mathsf{let}\,\mathbf{1}=w\,\mathsf{in}\,\langle\rangle)\,\langle\rangle\,\mathsf{in}\,\mathsf{let}\,\mathbf{1}=x\,\langle\rangle\,\mathsf{in}\,\langle\rangle\end{array}$ The process above cannot reduce due to the output prefix on $x$, which cannot synchronise with a corresponding input action since there is no provider for $x$ (i.e. the channel is in the left-hand side context). However, its encoding can exhibit the $\beta$-redex corresponding to the synchronisation along $z$, hidden by the prefix on $x$. The corresponding reductions hidden under prefixes in the encoding can be _soundly_ exposed in the session calculus by appealing to the commuting conversions of linear logic (e.g. in the process above, the instance of rule ${{\multimap}\mathsf{L}}$ corresponding to the output on $x$ can be commuted with the $\mathsf{cut}$ on $z$). As shown in [36], commuting conversions are sound wrt observational equivalence, and thus we formulate operational soundness through a notion of _extended_ process reduction, which extends process reduction with the reductions that are induced by commuting conversions. 
Such a relation was also used for similar purposes in [5] and in [26], in a classical linear logic setting. For conciseness, we define extended reduction as a relation on _typed_ processes modulo $\equiv$. ###### Definition 3.4 (Extended Reduction [5]) We define $\mapsto$ as the type preserving relations on typed processes modulo $\equiv$ generated by: 1. 1. $\mathcal{C}[(\mathbf{\nu}y)x\langle y\rangle.P]\mid x(y).Q\mapsto\mathcal{C}[(\mathbf{\nu}y)(P\mid Q)]$; 2. 2. $\mathcal{C}[(\mathbf{\nu}y)x\langle y\rangle.P]\mid\,!x(y).Q\mapsto\mathcal{C}[(\mathbf{\nu}y)(P\mid Q)]\mid\,!x(y).Q$; and (3) $(\mathbf{\nu}x)(!x(y).Q)\mapsto\mathbf{0}$ where $\mathcal{C}$ is a (typed) process context which does not capture the bound name $y$. ###### Theorem 3.4 (Operational Soundness) Let $\Omega;\Gamma;\Delta\vdash P::z{:}A$ and $\llparenthesis P\rrparenthesis\xrightarrow{}M$, there exists $Q$ such that $P\mapsto^{*}Q$ and $\llparenthesis Q\rrparenthesis=_{\alpha}M$. ### 3.3 Inversion and Full Abstraction Having established the operational preciseness of the encodings to-and-from Poly$\pi$ and Linear-F, we establish our main results for the encodings. Specifically, we show that the encodings are mutually inverse up-to behavioural equivalence (with fullness as its corollary), which then enables us to establish full abstraction for _both_ encodings. ###### Theorem 3.5 (Inverse) If $\Omega;\Gamma;\Delta\vdash M:A$ then $\Omega;\Gamma;\Delta\vdash\llparenthesis\llbracket M\rrbracket_{z}\rrparenthesis\cong M:A$. Also, if $\Omega;\Gamma;\Delta\vdash P::z{:}A$ then $\Omega;\Gamma;\Delta\vdash\llbracket\llparenthesis P\rrparenthesis\rrbracket_{z}\approx_{\mathtt{L}}P::z{:}A$ ###### Corollary 3.1 (Fullness) Let $\Omega;\Gamma;\Delta\vdash P::z{:}A$. $\exists M$ s.t. $\Omega;\Gamma;\Delta\vdash M:A$ and $\Omega;\Gamma;\Delta\vdash\llbracket M\rrbracket_{z}\approx_{\mathtt{L}}P::z{:}A$ Also, let $\Omega;\Gamma;\Delta\vdash M:A$. $\exists P$ s.t. $\Omega;\Gamma;\Delta\vdash P::z{:}A$ and $\Omega;\Gamma;\Delta\vdash\llparenthesis P\rrparenthesis\cong M:A$ We now state our full abstraction results. Given two Linear-F terms of the same type, equivalence in the image of the $\llbracket{-}\rrbracket_{z}$ translation can be used as a proof technique for contextual equivalence in Linear-F. This is called the _soundness_ direction of full abstraction in the literature [18] and proved by showing the relation generated by $\llbracket M\rrbracket_{z}\approx_{\mathtt{L}}\llbracket N\rrbracket_{z}$ forms $\cong$; we then establish the _completeness_ direction by contradiction, using fullness. ###### Theorem 3.6 (Full Abstraction) $\Omega;\Gamma;\Delta\vdash M\cong N:A$ iff $\Omega;\Gamma;\Delta\vdash\llbracket M\rrbracket_{z}\approx_{\mathtt{L}}\llbracket N\rrbracket_{z}::z{:}A$. We can straightforwardly combine the above full abstraction with Theorem 3.5 to obtain full abstraction of the $\llparenthesis{-}\rrparenthesis$ translation. ###### Theorem 3.7 (Full Abstraction) $\Omega;\Gamma;\Delta\vdash P\approx_{\mathtt{L}}Q::z{:}A$ iff $\Omega;\Gamma;\Delta\vdash\llparenthesis P\rrparenthesis\cong\llparenthesis Q\rrparenthesis:A$. ## 4 Applications of the Encodings In this section we develop applications of the encodings of the previous sections. Taking advantage of full abstraction and mutual inversion, we apply non-trivial properties from the theory of the $\lambda$-calculus to our session-typed process setting. 
In § 4.1 we study inductive and coinductive sessions, arising through encodings of initial $F$-algebras and final $F$-coalgebras in the polymorphic $\lambda$-calculus. In § 4.2 we study encodings for an extension of the core session calculus with term passing, where terms are derived from a simply-typed $\lambda$-calculus. Using the development of § 4.2 as a stepping stone, we generalise the encodings to a _higher-order_ session calculus (§ 4.3), where processes can send, receive and execute other processes. We show full abstraction and mutual inversion theorems for the encodings from higher-order to first-order. As a consequence, we can straightforwardly derive a strong normalisation property for the higher-order process-passing calculus.

### 4.1 Inductive and Coinductive Session Types

The study of polymorphism in the $\lambda$-calculus [1, 19, 40, 6] has shown that parametric polymorphism is expressive enough to encode both inductive and coinductive types in a precise way, through a faithful representation of initial and final (co)algebras [28], without extending the language of terms or the semantics of the calculus, giving a logical justification to the Church encodings of inductive datatypes such as lists and natural numbers. The polymorphic session calculus can express fairly intricate communication behaviours, including generic protocols through both existential and universal polymorphism (i.e. protocols that are parametric in their sub-protocols). Using our fully abstract encodings between the two calculi, we show that session polymorphism is expressive enough to encode inductive and coinductive sessions, ``importing'' the results for the $\lambda$-calculus, which may then be instantiated to provide a session-typed formulation of the encodings of data structures in the $\pi$-calculus of [30].

Inductive and Coinductive Types in System F. Exploring an algebraic interpretation of polymorphism where types are interpreted as functors, it can be shown that given a type $F$ with a free variable $X$ that occurs only positively (i.e. occurrences of $X$ are on the left-hand side of an even number of function arrows), the polymorphic type $\forall X.((F(X)\rightarrow X)\rightarrow X)$ forms an initial $F$-algebra [42, 1] (we write $F(X)$ to denote that $X$ occurs in $F$). This enables the representation of _inductively_ defined structures using an algebraic or categorical justification. For instance, the natural numbers can be seen as the initial $F$-algebra of $F(X)=\mathbf{1}+X$ (where $\mathbf{1}$ is the unit type and $+$ is the coproduct), and are thus _already present_ in System F, in a precise sense, as the type $\forall X.((\mathbf{1}+X)\rightarrow X)\rightarrow X$ (noting that both $\mathbf{1}$ and $+$ can also be encoded in System F). A similar story can be told for _coinductively_ defined structures, which correspond to final $F$-coalgebras and are representable with the polymorphic type $\exists X.(X\rightarrow F(X))\times X$, where $\times$ is a product type. In the remainder of this section we assume the positivity requirement on $F$ mentioned above. While the complete formal development of the representation of inductive and coinductive types in System F would lead us too far astray, we summarise here the key concepts as they apply to the $\lambda$-calculus (the interested reader can refer to [19] for the full categorical details).
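To make the algebraic reading above concrete, the following is a minimal sketch in Haskell (our own illustrative choice of language; the paper itself works in System F and Linear-F, and the sketch ignores linearity). It represents $F(X)=\mathbf{1}+X$ by Maybe and $F(X)=\mathsf{Nat}\otimes X$ by pairs, with Nat playing the role of $\forall X.((F(X)\rightarrow X)\rightarrow X)$ and Stream that of $\exists X.(X\rightarrow F(X))\times X$; all identifiers below are ours, not the paper's.

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Inductive case: F(X) = 1 + X, encoded as Maybe; Nat ~ forall X.((F(X) -> X) -> X).
newtype Nat = Nat { foldNat :: forall x. (Maybe x -> x) -> x }

inNat :: Maybe Nat -> Nat                     -- plays the role of "in" : F(Nat) -> Nat
inNat m = Nat (\alg -> alg (fmap (\n -> foldNat n alg) m))

zero, one :: Nat
zero = inNat Nothing
one  = inNat (Just zero)

toInt :: Nat -> Integer                        -- interpret into the Integer F-algebra
toInt n = foldNat n (maybe 0 (+ 1))

-- Coinductive case: F(X) = Nat * X; Stream ~ exists X.(X -> F(X)) x X.
data Stream = forall s. Stream (s -> (Nat, s)) s

unfoldS :: (s -> (Nat, s)) -> s -> Stream      -- plays the role of "unfold"
unfoldS = Stream

outS :: Stream -> (Nat, Stream)                -- plays the role of "out" : Stream -> F(Stream)
outS (Stream step s) = let (n, s') = step s in (n, Stream step s')

nats :: Stream                                 -- the stream 0, 1, 2, ...
nats = unfoldS (\n -> (n, inNat (Just n))) zero

main :: IO ()
main = print (toInt (fst (outS (snd (outS nats)))))  -- prints 1
```

Here inNat, foldNat, unfoldS and outS correspond, respectively, to the terms $\mathsf{in}$, $\mathsf{fold}$, $\mathsf{unfold}$ and $\mathsf{out}$ developed next.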
Figure 4: Diagrams for Initial $F$-algebras (a) and Final $F$-coalgebras (b).

To show that the polymorphic type $T_{i}\triangleq\forall X.((F(X)\rightarrow X)\rightarrow X)$ is an initial $F$-algebra, one exhibits a pair of $\lambda$-terms, often dubbed $\mathsf{fold}$ and $\mathsf{in}$, such that the diagram in Fig. 4(a) commutes (for any $A$, where $F(f)$, for a $\lambda$-term $f$, denotes the functorial action of $F$ applied to $f$), and, crucially, that $\mathsf{fold}$ is _unique_. When these conditions hold, we are justified in saying that $T_{i}$ is a least fixed point of $F$. Through a fairly simple calculation, it is easy to see that: $\small\begin{array}[]{rcl}\mathsf{fold}&\triangleq&\Lambda X.\lambda x{:}F(X)\rightarrow X.\lambda t{:}T_{i}.t[X](x)\\\ \mathsf{in}&\triangleq&\lambda x{:}F(T_{i}).\Lambda X.\lambda y{:}F(X)\rightarrow X.y\,(F(\mathsf{fold}[X](y))(x))\end{array}$ satisfy the necessary equalities. To show uniqueness one appeals to _parametricity_, which allows us to prove that any function of the appropriate type is equivalent to $\mathsf{fold}$. This property is often dubbed initiality or universality. The construction of final $F$-coalgebras and their justification as _greatest_ fixed points is dual. Assuming products in the calculus and taking $T_{f}\triangleq\exists X.(X\rightarrow F(X))\times X$, we produce the $\lambda$-terms $\small\begin{array}[]{rcl}\mathsf{unfold}&\triangleq&\Lambda X.\lambda f{:}X\rightarrow F(X).\lambda x{:}X.\mathsf{pack}\,X\,\mathsf{with}\,(f,x)\\\ \mathsf{out}&\triangleq&\lambda t:T_{f}.\mathsf{let}\,(X,(f,x))=t\,\mathsf{in}\,F(\mathsf{unfold}[X](f))\,(f(x))\end{array}$ such that the diagram in Fig. 4(b) commutes and $\mathsf{unfold}$ is unique (again, up to parametricity). While the argument above applies to System F, a similar development can be made in Linear-F [6] by considering $T_{i}\triangleq\forall X.!(F(X)\multimap X)\multimap X$ and $T_{f}\triangleq\exists X.!(X\multimap F(X))\otimes X$. Reusing the same names for the sake of conciseness, the associated _linear_ $\lambda$-terms are: $\small\begin{array}[]{rcl}\mathsf{fold}&\triangleq&\Lambda X.\lambda u{:}!(F(X)\multimap X).\lambda y{:}T_{i}.(y[X]\,u):\forall X.!(F(X)\multimap X)\multimap T_{i}\multimap X\\\ \mathsf{in}&\triangleq&\lambda x{:}F(T_{i}).\Lambda X.\lambda y{:}!(F(X)\multimap X).\mathsf{let}\,!u=y\,\mathsf{in}\,u\,(F\,(\mathsf{fold}[X](!u))(x)):F(T_{i})\multimap T_{i}\\\ \mathsf{unfold}&\triangleq&\Lambda X.\lambda u{:}!(X\multimap F(X)).\lambda x{:}X.\mathsf{pack}\,X\,\mathsf{with}\,\langle u\otimes x\rangle:\forall X.!(X\multimap F(X))\multimap X\multimap T_{f}\\\ \mathsf{out}&\triangleq&\lambda t:T_{f}.\mathsf{let}\,(X,(u,x))=t\,\mathsf{in}\,\mathsf{let}\,!f=u\,\mathsf{in}\,F(\mathsf{unfold}[X](!f))\,(f(x)):T_{f}\multimap F(T_{f})\end{array}$

Inductive and Coinductive Sessions for Free. As a consequence of full abstraction we may appeal to the $\llbracket{-}\rrbracket_{z}$ encoding to derive representations of $\mathsf{fold}$ and $\mathsf{unfold}$ that satisfy the necessary algebraic properties.
The derived processes are (recall that we write $\overline{x}\langle y\rangle.P$ for $(\mathbf{\nu}y)x\langle y\rangle.P$): $\small\begin{array}[]{rcl}\llbracket\mathsf{fold}\rrbracket_{z}&\triangleq&z(X).z(u).z(y).(\mathbf{\nu}w)((\mathbf{\nu}x)([y\leftrightarrow x]\mid x\langle X\rangle.[x\leftrightarrow w])\mid\overline{w}\langle v\rangle.([u\leftrightarrow v]\mid[w\leftrightarrow z]))\\\ \llbracket\mathsf{unfold}\rrbracket_{z}&\triangleq&z(X).z(u).z(x).z\langle X\rangle.\overline{z}\langle y\rangle.([u\leftrightarrow y]\mid[x\leftrightarrow z])\\\ \end{array}$ We can then show universality of the two constructions. We write $P_{x,y}$ to single out that $x$ and $y$ are free in $P$ and $P_{z,w}$ to denote the result of employing capture-avoiding substitution on $P$, substituting $x$ and $y$ by $z$ and $w$. Let: $\small\begin{array}[]{rcl}\mathsf{foldP}(A)_{y_{1},y_{2}}&\triangleq&(\mathbf{\nu}x)(\llbracket\mathsf{fold}\rrbracket_{x}\mid x\langle A\rangle.\overline{x}\langle v\rangle.(\overline{u}\langle y\rangle.[y\leftrightarrow v]\mid\overline{x}\langle z\rangle.([z\leftrightarrow y_{1}]\mid[x\leftrightarrow y_{2}])))\\\ \mathsf{unfoldP}(A)_{y_{1},y_{2}}&\triangleq&(\mathbf{\nu}x)(\llbracket\mathsf{unfold}\rrbracket_{x}\mid x\langle A\rangle.\overline{x}\langle v\rangle.(\overline{u}\langle y\rangle.[y\leftrightarrow v]\mid\overline{x}\langle z\rangle.([z\leftrightarrow y_{1}]\mid[x\leftrightarrow y_{2}])))\end{array}$ where $\mathsf{foldP}(A)_{y_{1},y_{2}}$ corresponds to the application of $\mathsf{fold}$ to an $F$-algebra $A$ with the associated morphism $F(A)\multimap A$ available on the shared channel $u$, consuming an ambient session $y_{1}{:}T_{i}$ and offering $y_{2}{:}A$. Similarly, $\mathsf{unfoldP}(A)_{y_{1},y_{2}}$ corresponds to the application of $\mathsf{unfold}$ to an $F$-coalgebra $A$ with the associated morphism $A\multimap F(A)$ available on the shared channel $u$, consuming an ambient session $y_{1}{:}A$ and offering $y_{2}{:}T_{f}$.

###### Theorem 4.1 (Universality of $\mathsf{foldP}$)

$\forall Q$ such that $X;u{:}F(X)\multimap X;y_{1}{:}T_{i}\vdash Q::y_{2}{:}X$ we have $X;u{:}F(X)\multimap X;y_{1}{:}T_{i}\vdash Q\approx_{\mathtt{L}}\mathsf{foldP}(X)_{y_{1},y_{2}}::y_{2}{:}X$

###### Theorem 4.2 (Universality of $\mathsf{unfoldP}$)

$\forall Q$ and $F$-coalgebra $A$ s.t. $\cdot;\cdot;y_{1}{:}A\vdash Q::y_{2}{:}T_{f}$ we have that $\cdot;u{:}A\multimap F(A);y_{1}{:}A\vdash Q\approx_{\mathtt{L}}\mathsf{unfoldP}(A)_{y_{1},y_{2}}::y_{2}{:}T_{f}$.

###### Example 4.1 (Natural Numbers)

We show how to represent the natural numbers as an inductive session type using $F(X)=\mathbf{1}\oplus X$, making use of $\mathsf{in}$: $\small\begin{array}[]{c}\mathsf{zero}_{x}\triangleq(\mathbf{\nu}z)(z.\mathsf{inl};{\bf 0}\mid\llbracket\mathsf{in}(z)\rrbracket_{x})\quad\mathsf{succ}_{y,x}\triangleq(\mathbf{\nu}s)(s.\mathsf{inr};[y\leftrightarrow s]\mid\llbracket\mathsf{in}(s)\rrbracket_{x})\end{array}$ with $\mathsf{Nat}\triangleq\forall X.!((\mathbf{1}\oplus X)\multimap X)\multimap X$ where $\vdash\mathsf{zero}_{x}::x{:}\mathsf{Nat}$ and $y{:}\mathsf{Nat}\vdash\mathsf{succ}_{y,x}::x{:}\mathsf{Nat}$ encode the representation of $0$ and successor, respectively. The natural $1$ would thus be represented by $\mathsf{one}_{x}\triangleq(\mathbf{\nu}y)(\mathsf{zero}_{y}\mid\mathsf{succ}_{y,x})$. The behaviour of type $\mathsf{Nat}$ can be seen as that of a sequence of internal choices of arbitrary (but finite) length. We can then observe that the $\mathsf{foldP}$ process acts as a recursor.
For instance, consider: $\small\begin{array}[]{l}\mathsf{stepDec}_{d}\triangleq d(n).n.\mathsf{case}(\mathsf{zero}_{d},[n\leftrightarrow d])\quad\mathsf{dec}_{x,z}\triangleq(\mathbf{\nu}u)(!u(d).\mathsf{stepDec}_{d}\mid\mathsf{foldP}(\mathsf{Nat})_{x,z})\end{array}$ with $\mathsf{stepDec}_{d}::d{:}(\mathbf{1}\oplus\mathsf{Nat})\multimap\mathsf{Nat}$ and $x{:}\mathsf{Nat}\vdash\mathsf{dec}_{x,z}::z{:}\mathsf{Nat}$, where $\mathsf{dec}$ decrements a given natural number session on channel $x$. We have that: $\small\begin{array}[]{l}(\mathbf{\nu}x)(\mathsf{one}_{x}\mid\mathsf{dec}_{x,z})\equiv(\mathbf{\nu}x,y,u)(\mathsf{zero}_{y}\mid\mathsf{succ}_{y,x}\mid\,!u(d).\mathsf{stepDec}_{d}\mid\mathsf{foldP}(\mathsf{Nat})_{x,z})\approx_{\mathtt{L}}\mathsf{zero}_{z}\end{array}$ We note that the resulting encoding is reminiscent of the encoding of lists of [30] (where $\mathsf{zero}$ is the empty list and $\mathsf{succ}$ the cons cell). The main differences in the encodings arise due to our primitive notions of labels and forwarding, as well as due to the generic nature of $\mathsf{in}$ and $\mathsf{fold}$.

###### Example 4.2 (Streams)

We build on Example 4.1 by representing _streams_ of natural numbers as a coinductive session type. We encode infinite streams of naturals with $F(X)=\mathsf{Nat}\otimes X$. Thus: $\mathsf{NatStream}\triangleq\exists X.!(X\multimap(\mathsf{Nat}\otimes X))\otimes X$. The behaviour of a session of type $\mathsf{NatStream}$ amounts to an infinite sequence of outputs of channels of type $\mathsf{Nat}$. Such an encoding enables us to construct the stream of all naturals $\mathsf{nats}$ (and the stream of all non-zero naturals $\mathsf{oneNats}$): $\small\begin{array}[]{lcl}\mathsf{genHdNext}_{z}&\triangleq&z(n).\overline{z}\langle y\rangle.(\overline{n}\langle n^{\prime}\rangle.[n^{\prime}\leftrightarrow y]\mid\,!z(w).\overline{n}\langle n^{\prime}\rangle.\mathsf{succ}_{n^{\prime},w})\\\ \mathsf{nats}_{y}&\triangleq&(\mathbf{\nu}x,u)(\mathsf{zero}_{x}\mid\,!u(z).\mathsf{genHdNext}_{z}\mid\mathsf{unfoldP}(!\mathsf{Nat})_{x,y})\\\ \mathsf{oneNats}_{y}&\triangleq&(\mathbf{\nu}x,u)(\mathsf{one}_{x}\mid\,!u(z).\mathsf{genHdNext}_{z}\mid\mathsf{unfoldP}(!\mathsf{Nat})_{x,y})\end{array}$ with $\mathsf{genHdNext}_{z}::z{:}!\mathsf{Nat}\multimap\mathsf{Nat}\otimes!\mathsf{Nat}$ and both $\mathsf{nats}_{y}$ and $\mathsf{oneNats}_{y}::y{:}\mathsf{NatStream}$. $\mathsf{genHdNext}_{z}$ consists of a helper that generates the current head of a stream and the next element. As expected, the following process implements a session that ``unrolls'' the stream once, providing the head of the stream and then behaving as the rest of the stream (recall that $\mathsf{out}:T_{f}\multimap F(T_{f})$). $(\mathbf{\nu}x)(\mathsf{nats}_{x}\mid\llbracket\mathsf{out}(x)\rrbracket_{y})::y{:}\mathsf{Nat}\otimes\mathsf{NatStream}$ We note a peculiarity of the interaction of linearity with the stream encoding: a process that begins to deconstruct a stream has no way of ``bottoming out'' and stopping. One cannot, for instance, extract the first element of a stream of naturals and stop unrolling the stream in a well-typed way.
We can, however, easily encode a ``terminating'' stream of all natural numbers via $F(X)=(\mathsf{Nat}\otimes!X)$ by replacing the $\mathsf{genHdNext}_{z}$ with the generator given as: $\small\begin{array}[]{rcl}\mathsf{genHdNextTer}_{z}&\triangleq&z(n).\overline{z}\langle y\rangle.(\overline{n}\langle n^{\prime}\rangle.[n^{\prime}\leftrightarrow y]\mid\,!z(w).!w(w^{\prime}).\overline{n}\langle n^{\prime}\rangle.\mathsf{succ}_{n^{\prime},w^{\prime}})\end{array}$ It is then easy to see that a usage of $\llbracket\mathsf{out}(x)\rrbracket_{y}$ results in a session of type $\mathsf{Nat}\otimes!\mathsf{NatStream}$, enabling us to discard the stream as needed. One can replay this argument with the operator $F(X)=(!\mathsf{Nat}\otimes X)$ to enable discarding of stream elements. Assuming such modifications, we can then show: $(\mathbf{\nu}y)((\mathbf{\nu}x)(\mathsf{nats}_{x}\mid\llbracket\mathsf{out}(x)\rrbracket_{y})\mid y(n).[y\leftrightarrow z])\approx_{\mathtt{L}}\mathsf{oneNats}_{z}::z{:}\mathsf{NatStream}$

### 4.2 Communicating Values – Sess$\pi\lambda$

We now consider a session calculus extended with a data layer obtained from a $\lambda$-calculus (whose terms are ranged over by $M,N$ and types by $\tau,\sigma$). We dub this calculus Sess$\pi\lambda$. $\small\begin{array}[]{ll}\begin{array}[]{lcl}P,Q&::=&\dots\mid x\langle M\rangle.P\mid x(y).P\\\ M,N&::=&\lambda x{:}\tau.M\mid M\,N\mid x\end{array}&\quad\quad\begin{array}[]{lcl}A,B&::=&\dots\mid\tau\wedge A\mid\tau\supset A\\\ \tau,\sigma&::=&\dots\mid\tau\rightarrow\sigma\end{array}\end{array}$ Without loss of generality, we consider the data layer to be simply-typed, with a call-by-name semantics, satisfying the usual type safety properties. The typing judgment for this calculus is $\Psi\vdash M:\tau$. We omit session polymorphism for the sake of conciseness, restricting processes to communication of data and (session) channels. The typing judgment for processes is thus modified to $\Psi;\Gamma;\Delta\vdash P::z{:}A$, where $\Psi$ is an intuitionistic context that accounts for variables in the data layer. The rules for the relevant process constructs are (all other rules simply propagate the $\Psi$ context from conclusion to premises): $\small\begin{array}[]{c}\dfrac{\Psi\vdash M:\tau\quad\Psi;\Gamma;\Delta\vdash P::z{:}A}{\Psi;\Gamma;\Delta\vdash z\langle M\rangle.P::z{:}\tau\wedge A}\quad\dfrac{\Psi,y{:}\tau;\Gamma;\Delta,x{:}A\vdash Q::z{:}C}{\Psi;\Gamma;\Delta,x{:}\tau\wedge A\vdash x(y).Q::z{:}C}\\\\[4.5pt] \dfrac{\Psi,x{:}\tau;\Gamma;\Delta\vdash P::z{:}A}{\Psi;\Gamma;\Delta\vdash z(x).P::z{:}\tau\supset A}\quad\dfrac{\Psi\vdash M:\tau\quad\Psi;\Gamma;\Delta,x{:}A\vdash Q::z{:}C}{\Psi;\Gamma;\Delta,x{:}\tau\supset A\vdash x\langle M\rangle.Q::z{:}C}\end{array}$ The reduction rule for value passing is $x\langle M\rangle.P\mid x(y).Q\xrightarrow{}P\mid Q\\{M/y\\}$ (for simplicity, in this section we define the process semantics through a reduction relation). With a simple extension to our encodings we may eliminate the data layer by encoding the data objects as processes, showing that from an expressiveness point of view, data communication is orthogonal to the framework. We note that the data language we are considering is _not_ linear, and the usage discipline of data in processes is itself also not linear.
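For instance (a small illustrative example, not taken from the paper's own development), the process $z(x).z\langle x\rangle.{\bf 0}$ receives a data value along $z$ and sends it back; by the rules above it is typed as $\cdot;\cdot;\cdot\vdash z(x).z\langle x\rangle.{\bf 0}::z{:}\tau\supset(\tau\wedge\mathbf{1})$, and composed under a restriction on $z$ with a client $z\langle M\rangle.z(y).Q$ (with $\cdot\vdash M:\tau$) it reduces in two steps to $(\mathbf{\nu}z)({\bf 0}\mid Q\\{M/y\\})$.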
#### To First-Order Processes We now introduce our encoding for Sess$\pi\lambda$, defined inductively on session types, processes, types and $\lambda$-terms (we omit the purely inductive cases on session types and processes for conciseness). As before, the encoding on processes is defined on _typing derivations_ , where we indicate the typing rule at the root of the typing derivation. $\small\begin{array}[]{l}\llbracket\tau\wedge A\rrbracket\triangleq!\llbracket\tau\rrbracket\otimes\llbracket A\rrbracket\qquad\llbracket\tau\supset A\rrbracket\triangleq!\llbracket\tau\rrbracket\multimap\llbracket A\rrbracket\qquad\llbracket\tau\rightarrow\sigma\rrbracket\triangleq!\llbracket\tau\rrbracket\multimap\llbracket\sigma\rrbracket\end{array}$ $\small\begin{array}[]{lrcllrcl}({{\wedge}\mathsf{R}})&\llbracket z\langle M\rangle.P\rrbracket&\triangleq&\overline{z}\langle x\rangle.(!x(y).\llbracket M\rrbracket_{y}\mid\llbracket P\rrbracket)&({{\wedge}\mathsf{L}})&\llbracket x(y).P\rrbracket&\triangleq&x(y).\llbracket P\rrbracket\\\ ({{\supset}\mathsf{R}})&\llbracket z(x).P\rrbracket&\triangleq&z(x).\llbracket P\rrbracket&({{\supset}\mathsf{L}})&\llbracket x\langle M\rangle.P\rrbracket&\triangleq&\overline{x}\langle y\rangle.(!y(w).\llbracket M\rrbracket_{w}\mid\llbracket P\rrbracket)\end{array}$ $\small\begin{array}[]{lll}\llbracket x\rrbracket_{z}\triangleq\overline{x}\langle y\rangle.[y\leftrightarrow z]\quad\quad\quad\llbracket\lambda x{:}\tau.M\rrbracket_{z}\triangleq z(x).\llbracket M\rrbracket_{z}\\\ \llbracket M\,N\rrbracket_{z}\triangleq(\mathbf{\nu}y)(\llbracket M\rrbracket_{y}\mid\overline{y}\langle x\rangle.(!x(w).\llbracket N\rrbracket_{w}\mid[y\leftrightarrow z]))\end{array}$ The encoding addresses the non-linear usage of data elements in processes by encoding the types $\tau\wedge A$ and $\tau\supset A$ as $!\llbracket\tau\rrbracket\otimes\llbracket A\rrbracket$ and $!\llbracket\tau\rrbracket\multimap\llbracket A\rrbracket$, respectively. Thus, sending and receiving of data is codified as the sending and receiving of channels of type $!$, which therefore can be used non-linearly. Moreover, since data terms are themselves non-linear, the $\tau\rightarrow\sigma$ type is encoded as $!\llbracket\tau\rrbracket\multimap\llbracket\sigma\rrbracket$, following Girard's embedding of intuitionistic logic in linear logic [15]. At the level of processes, offering a session of type $\tau\wedge A$ (i.e. a process of the form $z\langle M\rangle.P$) is encoded according to the translation of the type: we first send a _fresh_ name $x$ which will be used to access the encoding of the term $M$. Since $M$ can be used an arbitrary number of times by the receiver, we guard the encoding of $M$ with a replicated input, proceeding with the encoding of $P$ accordingly. Using a session of type $\tau\supset A$ follows the same principle. The input cases (and the rest of the process constructs) are completely homomorphic. The encoding of $\lambda$-terms follows Girard's decomposition of the intuitionistic function space [49]. The $\lambda$-abstraction is translated as input. Since variables in a $\lambda$-abstraction may be used non-linearly, the case for variables and application is slightly more intricate: to encode the application $M\,N$ we compose $M$ in parallel with a process that will send the ``reference'' to the function argument $N$ which will be encoded using replication, in order to handle the potential for $0$ or more usages of variables in a function body. 
Respectively, a variable is encoded by performing an output to trigger the replication and forwarding accordingly. Without loss of generality, we assume variable names and their corresponding replicated counterparts match, which can be achieved through $\alpha$-conversion before applying the translation. We exemplify our encoding as follows: $\small\begin{array}[]{c}\llbracket z(x).z\langle x\rangle.z\langle(\lambda y{:}\sigma.x)\rangle.{\bf 0}\rrbracket=z(x).\overline{z}\langle w\rangle.(!w(u).\llbracket x\rrbracket_{u}\mid\overline{z}\langle v\rangle.(!v(i).\llbracket\lambda y{:}\sigma.x\rrbracket_{i}\mid{\bf 0}))\\\ \qquad\qquad\qquad\qquad=z(x).\overline{z}\langle w\rangle.(!w(u).\overline{x}\langle y\rangle.[y\leftrightarrow u]\mid\overline{z}\langle v\rangle.(!v(i).i(y).\overline{x}\langle t\rangle.[t\leftrightarrow i]\mid{\bf 0}))\end{array}$ Properties of the Encoding. We discuss the correctness of our encoding. We can straightforwardly establish that the encoding preserves typing. To show that our encoding is operationally sound and complete, we capture the interaction between substitution on $\lambda$-terms and the encoding into processes through logical equivalence. Consider the following reduction of a process: $\displaystyle(\mathbf{\nu}z)(z(x).z\langle x\rangle.z\langle(\lambda y{:}\sigma.x)\rangle.{\bf 0}\mid z\langle\lambda w{:}\tau_{0}.w\rangle.P)$ $\displaystyle\hskip 85.35826pt\xrightarrow{}(\mathbf{\nu}z)(z\langle\lambda w{:}\tau_{0}.w\rangle.z\langle(\lambda y{:}\sigma.\lambda w{:}\tau_{0}.w)\rangle.{\bf 0}\mid P)$ (1) Given that substitution in the target session $\pi$-calculus amounts to renaming, whereas in the $\lambda$-calculus we replace a variable for a term, the relationship between the encoding of a substitution $M\\{N/x\\}$ and the encodings of $M$ and $N$ corresponds to the composition of the encoding of $M$ with that of $N$, but where the encoding of $N$ is guarded by a replication, codifying a form of explicit non-linear substitution. ###### Lemma 4.1 (Compositionality) Let $\Psi,x{:}\tau\vdash M:\sigma$ and $\Psi\vdash N:\tau$. 
We have that $\llbracket M\\{N/x\\}\rrbracket_{z}\approx_{\mathtt{L}}(\mathbf{\nu}x)(\llbracket M\rrbracket_{z}\mid!x(y).\llbracket N\rrbracket_{y})$ Revisiting the process to the left of the arrow in Equation 1 we have: $\small\begin{array}[]{l}\llbracket(\mathbf{\nu}z)(z(x).z\langle x\rangle.z\langle(\lambda y{:}\sigma.x)\rangle.{\bf 0}\mid z\langle\lambda w{:}\tau_{0}.w\rangle.P)\rrbracket\\\ =(\mathbf{\nu}z)(\llbracket z(x).z\langle x\rangle.z\langle(\lambda y{:}\sigma.x)\rangle.{\bf 0}\rrbracket_{z}\mid\overline{z}\langle x\rangle.(!x(b).\llbracket\lambda w{:}\tau_{0}.w\rrbracket_{b}\mid\llbracket P\rrbracket))\\\ \xrightarrow{}(\mathbf{\nu}z,x)(\overline{z}\langle w\rangle.(!w(u).\overline{x}\langle y\rangle.[y\leftrightarrow u]\mid\overline{z}\langle v\rangle.(!v(i).\llbracket\lambda y{:}\sigma.x\rrbracket_{i}\mid{\bf 0})\mid\,!x(b).\llbracket\lambda w{:}\tau_{0}.w\rrbracket_{b}\mid\llbracket P\rrbracket))\end{array}$ whereas the process to the right of the arrow is encoded as: $\small\begin{array}[]{l}\llbracket(\mathbf{\nu}z)(z\langle\lambda w{:}\tau_{0}.w\rangle.z\langle(\lambda y{:}\sigma.\lambda w{:}\tau_{0}.w)\rangle.{\bf 0}\mid P)\rrbracket\\\ =(\mathbf{\nu}z)(\overline{z}\langle w\rangle.(!w(u).\llbracket\lambda w{:}\tau_{0}.w\rrbracket_{u}\mid\overline{z}\langle v\rangle.(!v(i).\llbracket\lambda y{:}\sigma.\lambda w{:}\tau_{0}.w\rrbracket_{i}\mid\llbracket P\rrbracket)))\end{array}$ While the reduction of the encoded process and the encoding of the reduct differ syntactically, they are observationally equivalent – the latter inlines the replicated process behaviour that is accessible in the former on $x$. Having characterised substitution, we establish operational correspondence for the encoding. ###### Theorem 4.3 (Operational Correspondence) 1. 1. If $\Psi\vdash M:\tau$ and $\llbracket M\rrbracket_{z}\xrightarrow{}Q$ then $M\xrightarrow{}^{+}N$ such that $\llbracket N\rrbracket_{z}\approx_{\mathtt{L}}Q$ 2. 2. If $\Psi;\Gamma;\Delta\vdash P::z{:}A$ and $\llbracket P\rrbracket\xrightarrow{}Q$ then $P\xrightarrow{}^{+}P^{\prime}$ such that $\llbracket P^{\prime}\rrbracket\approx_{\mathtt{L}}Q$ 3. 3. If $\Psi\vdash M:\tau$ and $M\xrightarrow{}N$ then $\llbracket M\rrbracket_{z}\stackrel{{\scriptstyle}}{{\Longrightarrow}}P$ such that $P\approx_{\mathtt{L}}\llbracket N\rrbracket_{z}$ 4. 4. If $\Psi;\Gamma;\Delta\vdash P::z{:}A$ and $P\xrightarrow{}Q$ then $\llbracket P\rrbracket\xrightarrow{}^{+}R$ with $R\approx_{\mathtt{L}}\llbracket Q\rrbracket$ The process equivalence in Theorem 4.3 above need not be extended to account for data (although it would be relatively simple to do so), since the processes in the image of the encoding are fully erased of any data elements. #### Back to $\lambda$-Terms. We extend our encoding of processes to $\lambda$-terms to Sess$\pi\lambda$. Our extended translation maps processes to linear $\lambda$-terms, with the session type $\tau\wedge A$ interpreted as a pair type where the first component is replicated. Dually, $\tau\supset A$ is interpreted as a function type where the domain type is replicated. The remaining session constructs are translated as in § 3.2. 
$\small\begin{array}[]{c}\llparenthesis\tau\wedge A\rrparenthesis\triangleq\ !\llparenthesis\tau\rrparenthesis\otimes\llparenthesis A\rrparenthesis\quad\quad\llparenthesis\tau\supset A\rrparenthesis\triangleq\ !\llparenthesis\tau\rrparenthesis\multimap\llparenthesis A\rrparenthesis\quad\quad\llparenthesis\tau\rightarrow\sigma\rrparenthesis\triangleq\ !\llparenthesis\tau\rrparenthesis\multimap\llparenthesis\sigma\rrparenthesis\end{array}$ $\small\begin{array}[]{llclllcl}({{\wedge}\mathsf{L}})&\llparenthesis x(y).P\rrparenthesis&\triangleq&\mathsf{let}\,y\otimes x=x\,\mathsf{in}\,\mathsf{let}\,!y=y\,\mathsf{in}\,\llparenthesis P\rrparenthesis&({{\wedge}\mathsf{R}})&\llparenthesis z\langle M\rangle.P\rrparenthesis&\triangleq&\langle!\llparenthesis M\rrparenthesis\otimes\llparenthesis P\rrparenthesis\rangle\\\ ({{\supset}\mathsf{R}})&\llparenthesis x(y).P\rrparenthesis&\triangleq&\lambda x{:}!\llparenthesis\tau\rrparenthesis.\mathsf{let}\,!x=x\,\mathsf{in}\,\llparenthesis P\rrparenthesis&({{\supset}\mathsf{L}})&\llparenthesis x\langle M\rangle.P\rrparenthesis&\triangleq&\llparenthesis P\rrparenthesis\\{(x\,!\llparenthesis M\rrparenthesis)/x\\}\end{array}$ $\begin{array}[]{c}\llparenthesis\lambda x{:}\tau.M\rrparenthesis\triangleq\lambda x{:}!\llparenthesis\tau\rrparenthesis.\mathsf{let}\,!x=x\,\mathsf{in}\,\llparenthesis M\rrparenthesis\par\quad\llparenthesis M\,N\rrparenthesis\triangleq\llparenthesis M\rrparenthesis\,!\llparenthesis N\rrparenthesis\quad\llparenthesis x\rrparenthesis\triangleq x\end{array}$ The treatment of non-linear components of processes is identical to our previous encoding: non-linear functions $\tau\rightarrow\sigma$ are translated to linear functions of type $!\tau\multimap\sigma$; a process offering a session of type $\tau\wedge A$ (i.e. a process of the form $z\langle M\rangle.P$, typed by rule ${{\wedge}\mathsf{R}}$) is translated to a pair where the first component is the encoding of $M$ prefixed with $!$ so that it may be used non-linearly, and the second is the encoding of $P$. Non-linear variables are handled at the respective binding sites: a process using a session of type $\tau\wedge A$ is encoded using the elimination form for the pair and the elimination form for the exponential; similarly, a process offering a session of type $\tau\supset A$ is encoded as a $\lambda$-abstraction where the bound variable is of type $!\llparenthesis\tau\rrparenthesis$. Thus, we use the elimination form for the exponential, ensuring that the typing is correct. We illustrate our encoding: $\small\begin{array}[]{rcl}\llparenthesis z(x).z\langle x\rangle.z\langle(\lambda y{:}\sigma.x)\rangle.{\bf 0}\rrparenthesis&=&\lambda x{:}!\llparenthesis\tau\rrparenthesis.\mathsf{let}\,!x=x\,\mathsf{in}\,\langle!x\,\otimes\langle!\llparenthesis\lambda y{:}\sigma.x\rrparenthesis\,\otimes\langle\rangle\rangle\rangle\\\ &=&\lambda x{:}!\llparenthesis\tau\rrparenthesis.\mathsf{let}\,!x=x\,\mathsf{in}\,\langle!x\,\otimes\langle!(\lambda y{:}!\llparenthesis\sigma\rrparenthesis.\mathsf{let}\,!y=y\,\mathsf{in}\,x)\,\otimes\langle\rangle\rangle\rangle\end{array}$ Properties of the Encoding. Unsurprisingly due to the logical correspondence between natural deduction and sequent calculus presentations of logic, our encoding satisfies both type soundness and operational correspondence (c.f. Theorems 3.2, 3.3, and 3.4). The full development can be found in [52]. #### Relating the Two Encodings. 
We prove the two encodings are mutually inverse and preserve the full abstraction properties (we write $=_{\beta}$ and $=_{\beta\eta}$ for $\beta$- and $\beta\eta$-equivalence, respectively).

###### Theorem 4.4 (Inverse)

If $\Psi;\Gamma;\Delta\vdash P::z{:}A$ then $\llbracket\llparenthesis P\rrparenthesis\rrbracket_{z}\approx_{\mathtt{L}}\llbracket P\rrbracket$. Also, if $\Psi\vdash M:\tau$ then $\llparenthesis\llbracket M\rrbracket_{z}\rrparenthesis=_{\beta}\llparenthesis M\rrparenthesis$.

The equivalences above are formulated between the composition of the encodings applied to $P$ (resp. $M$) and the process (resp. $\lambda$-term) _after_ applying the translation embedding the non-linear components into their linear counterparts. This formulation matches more closely that of § 3.3, which applies to linear calculi for which the _target_ languages of this section are a strict subset (and avoids the formalisation of process equivalence with terms). We also note that in this setting, observational equivalence and $\beta\eta$-equivalence coincide [3, 31]. Moreover, the extensional flavour of $\approx_{\mathtt{L}}$ includes $\eta$-like principles at the process level.

###### Theorem 4.5 (Full Abstraction)

Let $\cdot\vdash M:\tau$ and $\cdot\vdash N:\tau$. $\llparenthesis M\rrparenthesis=_{\beta\eta}\llparenthesis N\rrparenthesis$ iff $\llbracket M\rrbracket_{z}\approx_{\mathtt{L}}\llbracket N\rrbracket_{z}$. Also, let $\cdot\vdash P::z{:}A$ and $\cdot\vdash Q::z{:}A$. We have that $\llbracket P\rrbracket\approx_{\mathtt{L}}\llbracket Q\rrbracket$ iff $\llparenthesis P\rrparenthesis=_{\beta\eta}\llparenthesis Q\rrparenthesis$.

We establish full abstraction for the encoding of $\lambda$-terms into processes (Theorem 4.5) in two steps: the completeness direction (i.e. from left to right) follows from operational completeness and strong normalisation of the $\lambda$-calculus, whereas the soundness direction uses operational soundness. The proof of Theorem 4.5 uses the same strategy as that of Theorem 3.7, appealing to the inverse theorems.

### 4.3 Higher-Order Session Processes – Sess$\pi\lambda^{+}$

We extend the value-passing framework of the previous section, accounting for process passing (i.e. higher-order communication) in a session-typed setting. As shown in [50], we achieve this by adding to the data layer a _contextual monad_ that encapsulates (open) session-typed processes as data values, with a corresponding elimination form in the process layer. We dub this calculus Sess$\pi\lambda^{+}$. $\begin{array}[]{rclrcl}P,Q&::=&\dots\mid x\leftarrow M\leftarrow\overline{y_{i}};Q&\quad\quad M,N&::=&\dots\mid\\{x\leftarrow P\leftarrow\overline{y_{i}{:}A_{i}}\\}\\\ \tau,\sigma&::=&\dots\mid\\{\overline{x_{j}{:}A_{j}}\vdash z{:}A\\}\end{array}$ The type $\\{\overline{x_{j}{:}A_{j}}\vdash z{:}A\\}$ is the type of a term which encapsulates an open process that uses the linear channels $\overline{x_{j}{:}A_{j}}$ and offers $A$ along channel $z$. This formulation has the added benefit of formalising the integration of session-typed processes in a functional language and forms the basis for the concurrent programming language SILL [37, 50].
The typing rules for the new constructs are (for simplicity we assume no shared channels in process monads):

$\begin{array}[]{c}(\\{\\}I)\ \dfrac{\Psi;\cdot;\overline{x_{i}{:}A_{i}}\vdash P::z{:}A}{\Psi\vdash\\{z\leftarrow P\leftarrow\overline{x_{i}{:}A_{i}}\\}:\\{\overline{x_{i}{:}A_{i}}\vdash z{:}A\\}}\\\\[6.0pt] (\\{\\}E)\ \dfrac{\Psi\vdash M:\\{\overline{x_{i}{:}A_{i}}\vdash x{:}A\\}\quad\Delta_{1}=\overline{y_{i}{:}A_{i}}\quad\Psi;\Gamma;\Delta_{2},x{:}A\vdash Q::z{:}C}{\Psi;\Gamma;\Delta_{1},\Delta_{2}\vdash x\leftarrow M\leftarrow\overline{y_{i}};Q::z{:}C}\end{array}$

Rule $\\{\\}I$ embeds processes in the term language by essentially quoting an open process that is well-typed according to the type specification in the monadic type. Dually, rule $\\{\\}E$ allows for processes to use monadic values through composition that _consumes_ some of the ambient channels in order to provide the monadic term with the necessary context (according to its type). These constructs are discussed in substantial detail in [50]. The reduction semantics of the process construct is given by (we tacitly assume that the names $\overline{y}$ and $c$ do not occur in $P$ and omit the congruence case): $\begin{array}[]{c}(c\leftarrow\\{z\leftarrow P\leftarrow\overline{x_{i}{:}A_{i}}\\}\leftarrow\overline{y_{i}};Q)\xrightarrow{}(\mathbf{\nu}c)(P\\{\overline{y_{i}}/\overline{x_{i}}\\}\\{c/z\\}\mid Q)\end{array}$ The semantics allows for the underlying monadic term $M$ to evaluate to a (quoted) process $P$. The process $P$ is then executed in parallel with the continuation $Q$, sharing the linear channel $c$ for subsequent interactions. We illustrate the higher-order extension with the following typed process (we write $\\{x\leftarrow P\\}$ when $P$ does not depend on any linear channels and assume $\vdash Q::d{:}\mathsf{Nat}\wedge\mathbf{1}$): $P\triangleq(\mathbf{\nu}c)(c\langle\\{d\leftarrow Q\\}\rangle.c(x).{\bf 0}\mid c(y).d\leftarrow y;d(n).c\langle n\rangle.{\bf 0})$ (2) Process $P$ above gives an abstract view of a communication idiom where a process (the left-hand side of the parallel composition) sends another process $Q$ which potentially encapsulates some complex computation. The receiver then _spawns_ the execution of the received process and inputs from it a result value that is sent back to the original sender. An execution of $P$ is given by: $\small\begin{array}[]{rcl}P\xrightarrow{}(\mathbf{\nu}c)(c(x).{\bf 0}\mid d\leftarrow\\{d\leftarrow Q\\};d(n).c\langle n\rangle.{\bf 0})&\xrightarrow{}&(\mathbf{\nu}c)(c(x).{\bf 0}\mid(\mathbf{\nu}d)(Q\mid d(n).c\langle n\rangle.{\bf 0}))\\\ &\xrightarrow{}^{+}&(\mathbf{\nu}c)(c(x).{\bf 0}\mid c\langle 42\rangle.{\bf 0})\xrightarrow{}{\bf 0}\end{array}$ Given the seminal work of Sangiorgi [46], such a representation naturally begs the question of whether or not we can develop a _typed_ encoding of higher-order processes into the first-order setting. Indeed, we can achieve such an encoding with a fairly simple extension of the encoding of § 4.2 to Sess$\pi\lambda^{+}$ by observing that monadic values are processes that need to be potentially provided with extra sessions in order to be executed correctly. For instance, a term of type $\\{x{:}A\vdash y{:}B\\}$ denotes a process that, given a session $x$ of type $A$, will then offer $y{:}B$. Exploiting this observation, we encode this type as the session $A\multimap B$, ensuring subsequent usages of such a term are consistent with this interpretation.
$\small\begin{array}[]{lcl}\llbracket\\{\overline{x_{j}{:}A_{j}}\vdash z{:}A\\}\rrbracket&\triangleq&\overline{\llbracket A_{j}\rrbracket}\multimap\llbracket A\rrbracket\\\\[4.5pt] \llbracket\\{x\leftarrow P\leftarrow\overline{y_{i}}\\}\rrbracket_{z}&\triangleq&z(y_{0}).\dots.z(y_{n}).\llbracket P\\{z/x\\}\rrbracket\quad(z\not\in\mbox{\it fn}(P))\\\\[4.5pt] \llbracket x\leftarrow M\leftarrow\overline{y_{i}};Q\rrbracket&\triangleq&(\mathbf{\nu}x)(\llbracket M\rrbracket_{x}\mid\overline{x}\langle a_{0}\rangle.([a_{0}\leftrightarrow y_{0}]\mid\dots\mid\overline{x}\langle a_{n}\rangle.([a_{n}\leftrightarrow y_{n}]\mid\llbracket Q\rrbracket)\dots))\end{array}$ To encode the monadic type $\\{\overline{x_{j}{:}A_{j}}\vdash z{:}A\\}$, denoting the type of a process $P$ typed by $\overline{x_{j}{:}A_{j}}\vdash P::z{:}A$, we require that the session in the image of the translation specifies a sequence of channel inputs with behaviours $\overline{A_{j}}$ that make up the linear context. After the contextual aspects of the type are encoded, the session will then offer the (encoded) behaviour of $A$. Thus, the encoding of the monadic type is $\llbracket A_{0}\rrbracket\multimap\dots\multimap\llbracket A_{n}\rrbracket\multimap\llbracket A\rrbracket$, which we write as $\overline{\llbracket A_{j}\rrbracket}\multimap\llbracket A\rrbracket$. The encoding of monadic expressions adheres to this behaviour, first performing the necessary sequence of inputs and then proceeding inductively. Finally, the encoding of the elimination form for monadic expressions behaves dually, composing the encoding of the monadic expression with a sequence of outputs that instantiate the consumed names accordingly (via forwarding). The encoding of process $P$ from Equation 2 is thus: $\small\begin{array}[]{l}\llbracket P\rrbracket=(\mathbf{\nu}c)(\llbracket c\langle\\{d\leftarrow Q\\}\rangle.c(x).{\bf 0}\rrbracket\mid\llbracket c(y).d\leftarrow y;d(n).c\langle n\rangle.{\bf 0}\rrbracket)\\\ =(\mathbf{\nu}c)(\overline{c}\langle w\rangle.(!w(d).\llbracket Q\rrbracket\mid c(x).{\bf 0})\mid c(y).(\mathbf{\nu}d)(\overline{y}\langle b\rangle.[b\leftrightarrow d]\mid d(n).\overline{c}\langle m\rangle.(\overline{n}\langle e\rangle.[e\leftrightarrow m]\mid{\bf 0})))\end{array}$

#### Properties of the Encoding.

As in our previous development, we can show that our encoding for Sess$\pi\lambda^{+}$ is type sound and satisfies operational correspondence. The full development is omitted but can be found in [52]. We encode Sess$\pi\lambda^{+}$ into $\lambda$-terms, extending § 4.2 with: $\small\begin{array}[]{l}\llparenthesis\\{\overline{x_{i}{:}A_{i}}\vdash z{:}A\\}\rrparenthesis\triangleq\overline{\llparenthesis A_{i}\rrparenthesis}\multimap\llparenthesis A\rrparenthesis\\\\[1.42262pt] \llparenthesis x\leftarrow M\leftarrow\overline{y_{i}};Q\rrparenthesis\triangleq\llparenthesis Q\rrparenthesis\\{(\llparenthesis M\rrparenthesis\,\overline{y_{i}})/x\\}\ \quad\llparenthesis\\{x\leftarrow P\leftarrow\overline{w_{i}}\\}\rrparenthesis\triangleq\lambda w_{0}.\dots.\lambda w_{n}.\llparenthesis P\rrparenthesis\end{array}$ The encoding translates the monadic type $\\{\overline{x_{i}{:}A_{i}}\vdash z{:}A\\}$ as a linear function $\overline{\llparenthesis A_{i}\rrparenthesis}\multimap\llparenthesis A\rrparenthesis$, which captures the fact that the underlying value must be provided with terms satisfying the requirements of the linear context.
At the level of terms, the encoding for the monadic term constructor follows its type specification, generating a nesting of $\lambda$-abstractions that closes the term and proceeding inductively. For the process encoding, we translate the monadic application construct analogously to the translation of a linear $\mathsf{cut}$, but applying the appropriate variables to the translated monadic term (which is of function type). We remark the similarity between our encoding and that of the previous section, where monadic terms are translated to a sequence of inputs (here a nesting of $\lambda$-abstractions). Our encoding satisfies type soundness and operational correspondence, as usual. Further showcasing the applications of our development, we obtain a novel strong normalisation result for this higher-order session-calculus ``for free'', through encoding to the $\lambda$-calculus. ###### Theorem 4.6 (Strong Normalisation) Let $\Psi;\Gamma;\Delta\vdash P::z{:}A$. There is no infinite reduction sequence starting from $P$. ###### Theorem 4.7 (Inverse Encodings) If $\Psi;\Gamma;\Delta\vdash P::z{:}A$ then $\llbracket\llparenthesis P\rrparenthesis\rrbracket_{z}\approx_{\mathtt{L}}\llbracket P\rrbracket$. Also, if $\Psi\vdash M:\tau$ then $\llparenthesis\llbracket M\rrbracket_{z}\rrparenthesis=_{\beta}\llparenthesis M\rrparenthesis$. ###### Theorem 4.8 Let $\vdash M:\tau$, $\vdash N:\tau$, $\vdash P::z{:}A$ and $\vdash Q::z{:}A$. $\llparenthesis M\rrparenthesis=_{\beta\eta}\llparenthesis N\rrparenthesis$ iff $\llbracket M\rrbracket_{z}\approx_{\mathtt{L}}\llbracket N\rrbracket_{z}$ and $\llbracket P\rrbracket\approx_{\mathtt{L}}\llbracket Q\rrbracket$ iff $\llparenthesis P\rrparenthesis=_{\beta\eta}\llparenthesis Q\rrparenthesis$. ## 5 Related Work and Concluding Remarks Process Encodings of Functions. Toninho et al. [49] study encodings of the simply-typed $\lambda$-calculus in a logically motivated session $\pi$-calculus, via encodings to the linear $\lambda$-calculus. Our work differs since they do not study polymorphism nor reverse encodings; and we provide deeper insights through applications of the encodings. Full abstraction or inverse properties are not studied. Sangiorgi [43] uses a fully abstract compilation from the higher-order $\pi$-calculus (HO$\pi$) to the $\pi$-calculus to study full abstraction for Milner's encodings of the $\lambda$-calculus. The work shows that Milner's encoding of the lazy $\lambda$-calculus can be recovered by restricting the semantic domain of processes (the so-called _restrictive_ approach) or by enriching the $\lambda$-calculus with suitable constants. This work was later refined in [45], which does not use HO$\pi$ and considers an operational equivalence on $\lambda$-terms called _open applicative bisimulation_ which coincides with Lévy-Longo tree equality. The work [47] studies general conditions under which encodings of the $\lambda$-calculus in the $\pi$-calculus are fully abstract wrt Lévy-Longo and Böhm Trees, which are then applied to several encodings of (call-by-name) $\lambda$-calculus. The works above deal with untyped calculi, and so reverse encodings are unfeasible. In a broader sense, our approach takes the restrictive approach using linear logic-based session typing and the induced observational equivalence. We use a $\lambda$-calculus with booleans as observables and reason with a Morris-style equivalence instead of tree equalities. It would be an interesting future work to apply the conditions in [47] in our typed setting. 
Wadler [54] shows a correspondence between a linear functional language with session types GV and a session-typed process calculus with polymorphism based on classical linear logic CP. Along the lines of this work, Lindley and Morris [26], in an exploration of inductive and coinductive session types through the addition of least and greatest fixed points to CP and GV, develop an encoding from a linear $\lambda$-calculus with session primitives (Concurrent $\mu$GV) to a pure linear $\lambda$-calculus (Functional $\mu$GV) via a CPS transformation. They also develop translations between $\mu$CP and Concurrent $\mu$GV, extending [25]. Mapping to the terminology used in our work [17], their encodings are shown to be operationally complete, but no results are shown for the operational soundness directions and neither full abstraction nor inverse properties are studied. In addition, their operational characterisations do not compose across encodings. For instance, while strong normalisation of Functional $\mu$GV implies the same property for Concurrent $\mu$GV through their operationally complete encoding, the encoding from $\mu$CP to $\mu$GV does not necessarily preserve this property. Types for $\pi$-calculi delineate sequential behaviours by restricting composition and name usages, limiting the contexts in which processes can interact. Therefore typed equivalences offer a coarser semantics than untyped semantics. Berger et al. [5] study an encoding of System F in a polymorphic linear $\pi$-calculus, showing it to be fully abstract based on game semantics techniques. Their typing system and proofs are more complex due to the fine- grained constraints from game semantics. Moreover, they do not study a reverse encoding. Orchard and Yoshida [33] develop embeddings to-and-from PCF with parallel effects and a session-typed $\pi$-calculus, but only develop operational correspondence and semantic soundness results, leaving the full abstraction problem open. Polymorphism and Typed Behavioural Semantics. The work of [7] studies parametric session polymorphism for the intuitionistic setting, developing a behavioural equivalence that captures parametricity, which is used (denoted as $\approx_{\mathtt{L}}$) in our paper. The work [39] introduces a typed bisimilarity for polymorphism in the $\pi$-calculus. Their bisimilarity is of an intensional flavour, whereas the one used in our work follows the extensional style of Reynolds [41]. Their typing discipline (originally from [53], which also develops type-preserving encodings of polymorphic $\lambda$-calculus into polymorphic $\pi$-calculus) differs significantly from the linear logic-based session typing of our work (e.g. theirs does not ensure deadlock-freedom). A key observation in their work is the coarser nature of typed equivalences with polymorphism (in analogy to those for IO-subtyping [38]) and their interaction with channel aliasing, suggesting a use of typed semantics and encodings of the $\pi$-calculus for fine-grained analyses of program behaviour. F-Algebras and Linear-F. The use of initial and final (co)algebras to give a semantics to inductive and coinductive types dates back to Mendler [28], with their strong definability in System F appearing in [1] and [19]. The definability of inductive and coinductive types using parametricity also appears in [40] in the context of a logic for parametric polymorphism and later in [6] in a linear variant of such a logic. 
The work of [55] studies parametricity for the polymorphic linear $\lambda$-calculus of this work, developing encodings of a few inductive types but not the initial (or final) algebraic encodings in their full generality. Inductive and coinductive session types in a logical process setting appear in [51] and [26]. Both works consider a calculus with built-in recursion: the former in an intuitionistic setting, where a process that offers a (co)inductive protocol is composed with another that consumes the (co)inductive protocol, and the latter in a classical framework, where composed recursive session types are dual to each other.

Conclusion and Future Work. This work answers the question of what kind of type discipline of the $\pi$-calculus can exactly capture and is captured by $\lambda$-calculus behaviours. Our answer is given by showing the first mutually inverse and fully abstract encodings between two calculi with polymorphism, one being the Poly$\pi$ session calculus based on intuitionistic linear logic, and the other (a linear) System F. This further demonstrates that the linear logic-based articulation of name-passing interactions originally proposed by [8] (and studied extensively thereafter e.g. [50, 51, 36, 9, 54, 7, 25]) provides a clear and applicable tool for message-passing concurrency. By exploiting the proof theoretic equivalences between natural deduction and sequent calculus we develop mutually inverse and fully abstract encodings, which naturally extend to more intricate settings such as process passing (in the sense of HO$\pi$). Our encodings also enable us to derive properties of the $\pi$-calculi ``for free''. Specifically, we show how to obtain adequate representations of least and greatest fixed points in Poly$\pi$ through the encoding of initial and final (co)algebras in the $\lambda$-calculus. We also straightforwardly derive a strong normalisation result for the higher-order session calculus, which otherwise involves non-trivial proof techniques [13, 12, 36, 7, 5]. Future work includes extensions to the classical linear logic-based framework, including multiparty session types [10, 11]. Encodings of session $\pi$-calculi to the $\lambda$-calculus have been used to implement session primitives in functional languages such as Haskell (see a recent survey [32]), OCaml [34, 35, 24] and Scala [48]. Following this line of work, we plan to develop encoding-based implementations of this work as embedded DSLs. This would potentially enable an exploration of algebraic constructs beyond initial and final co-algebras in a session programming setting. In particular, we wish to further study the meaning of functors, natural transformations and related constructions in a session-typed setting, both from a more fundamental viewpoint and in terms of programming patterns.

Acknowledgements. The authors would like to thank the reviewers for their comments, suggestions and pointers to related works. This work is partially supported by EPSRC EP/K034413/1, EP/K011715/1, EP/L00058X/1, EP/N027833/1 and EP/N028201/1.

## References

* [1] Bainbridge, E.S., Freyd, P.J., Scedrov, A., Scott, P.J.: Functorial polymorphism. Theor. Comput. Sci. 70(1), 35–64 (1990) * [2] Balzer, S., Pfenning, F.: Manifest sharing with session types. In: ICFP (2017) * [3] Barber, A.: Dual intuitionistic linear logic. Tech. Rep. ECS-LFCS-96-347, School of Informatics, University of Edinburgh (1996) * [4] Benton, N.: A mixed linear and non-linear logic: Proofs, terms and models (extended abstract). In: CSL. pp.
121–135 (1994) * [5] Berger, M., Honda, K., Yoshida, N.: Genericity and the pi-calculus. Acta Inf. 42(2-3), 83–141 (2005) * [6] Birkedal, L., Møgelberg, R.E., Petersen, R.L.: Linear Abadi and Plotkin Logic. Logical Methods in Computer Science 2(5) (2006) * [7] Caires, L., Pérez, J.A., Pfenning, F., Toninho, B.: Behavioral polymorphism and parametricity in session-based communication. In: ESOP 2013\. pp. 330–349 (2013) * [8] Caires, L., Pfenning, F.: Session types as intuitionistic linear propositions. In: CONCUR 2010. pp. 222–236 (2010) * [9] Caires, L., Pfenning, F., Toninho, B.: Linear logic propositions as session types. Mathematical Structures in Computer Science 26(3), 367–423 (2016) * [10] Carbone, M., Lindley, S., Montesi, F., Schuermann, C., Wadler, P.: Coherence generalises duality: a logical explanation of multiparty session types. In: CONCUR'16. vol. 59, pp. 33:1–33:15. Sch. Dag. (2016) * [11] Carbone, M., Montesi, F., Schurmann, C., Yoshida, N.: Multiparty session types as coherence proofs. In: CONCUR 2015. vol. 42, pp. 412–426. Sch. Dag. (2015) * [12] Demangeon, R., Hirschkoff, D., Sangiorgi, D.: Mobile processes and termination. In: Semantics and Algebraic Specification. pp. 250–273 (2009) * [13] Demangeon, R., Hirschkoff, D., Sangiorgi, D.: Termination in higher-order concurrent calculi. J. Log. Algebr. Program. 79(7), 550–577 (2010) * [14] Gentzen, G.: Untersuchungen über das logische schließen. Mathematische Zeitschrift 39, 176–210 (1935) * [15] Girard, J.: Linear logic. Theor. Comput. Sci. 50, 1–102 (1987) * [16] Girard, J., Lafont, Y., Taylor, P.: Proofs and Types. C. U. P. (1989) * [17] Gorla, D.: Towards a unified approach to encodability and separation results for process calculi. Inf. Comput. 208(9), 1031–1053 (2010) * [18] Gorla, D., Nestmann, U.: Full abstraction for expressiveness: history, myths and facts. Mathematical Structures in Computer Science 26(4), 639–654 (2016) * [19] Hasegawa, R.: Categorical data types in parametric polymorphism. Mathematical Structures in Computer Science 4(1), 71–109 (1994) * [20] Honda, K.: Types for dyadic interaction. In: CONCUR'93. pp. 509–523 (1993) * [21] Honda, K.: Session types and distributed computing. In: ICALP (2012) * [22] Honda, K., Vasconcelos, V.T., Kubo, M.: Language primitives and type disciplines for structured communication-based programming. In: ESOP'98. pp. 22–138 (1998) * [23] Honda, K., Yoshida, N., Carbone, M.: Multiparty asynchronous session types. In: POPL'08. pp. 273–284 (2008) * [24] Imai, K., Yoshida, N., Yuen, S.: Session-ocaml: a session-based library with polarities and lenses . In: COORDINATION. LNCS, vol. 10319, pp. 99–118 (2017) * [25] Lindley, S., Morris, J.G.: A semantics for propositions as sessions. In: ESOP'15. pp. 560–584 (2015) * [26] Lindley, S., Morris, J.G.: Talking bananas: structural recursion for session types. In: ICFP 2016. pp. 434–447 (2016) * [27] Maraist, J., Odersky, M., Turner, D.N., Wadler, P.: Call-by-name, call-by-value, call-by-need and the linear lambda calculus. T. C. S. 228(1-2), 175–210 (1999) * [28] Mendler, N.P.: Recursive types and type constraints in second-order lambda calculus. In: LICS'87. pp. 30–36 (1987) * [29] Milner, R.: Functions as processes. Math. Struct. in C.S. 2(2), 119–141 (1992) * [30] Milner, R., Parrow, J., Walker, D.: A calculus of mobile processes, I and II. Inf. Comput. 100(1), 1–77 (1992) * [31] Ohta, Y., Hasegawa, M.: A terminating and confluent linear lambda calculus. In: RTA'06. pp. 
166–180 (2006) * [32] Orchard, D., Yoshida, N.: Session types with linearity in Haskell. In: Behavioural Types: from Theory to Tools. River Publishers (2017) * [33] Orchard, D.A., Yoshida, N.: Effects as sessions, sessions as effects. In: POPL 2016. pp. 568–581 (2016) * [34] Padovani, L.: A Simple Library Implementation of Binary Sessions. JFP 27 (2016) * [35] Padovani, L.: Context-Free Session Type Inference. In: ESOP 2017 (2017) * [36] Pérez, J.A., Caires, L., Pfenning, F., Toninho, B.: Linear logical relations for session-based concurrency. In: ESOP. pp. 539–558 (2012) * [37] Pfenning, F., Griffith, D.: Polarized substructural session types. In: FoSSaCS. pp. 3–22 (2015) * [38] Pierce, B.C., Sangiorgi, D.: Typing and subtyping for mobile processes. Mathematical Structures in Computer Science 6(5), 409–453 (1996) * [39] Pierce, B.C., Sangiorgi, D.: Behavioral equivalence in the polymorphic pi-calculus. J. ACM 47(3), 531–584 (2000) * [40] Plotkin, G.D., Abadi, M.: A logic for parametric polymorphism. In: TLCA '93. pp. 361–375 (1993) * [41] Reynolds, J.C.: Types, abstraction and parametric polymorphism. In: IFIP Congress. pp. 513–523 (1983) * [42] Reynolds, J.C., Plotkin, G.D.: On functors expressible in the polymorphic typed lambda calculus. Inf. Comput. 105(1), 1–29 (1993) * [43] Sangiorgi, D.: An investigation into functions as processes. In: MFPS (1993) * [44] Sangiorgi, D.: Pi-calculus, internal mobility, and agent-passing calculi. Theor. Comput. Sci. 167(1&2), 235–274 (1996) * [45] Sangiorgi, D.: Lazy functions and mobile processes. In: Proof, Language, and Interaction, Essays in Honour of Robin Milner. pp. 691–720 (2000) * [46] Sangiorgi, D., Walker, D.: The pi-calculus: A theory of mobile processes (2001) * [47] Sangiorgi, D., Xu, X.: Trees from functions as processes. In: CONCUR (2014) * [48] Scalas, A., Dardha, O., Hu, R., Yoshida, N.: A Linear Decomposition of Multiparty Sessions for Safe Distributed Programming. In: ECOOP'17 (2017) * [49] Toninho, B., Caires, L., Pfenning, F.: Functions as session-typed processes. In: FOSSACS 2012. pp. 346–360 (2012) * [50] Toninho, B., Caires, L., Pfenning, F.: Higher-order processes, functions, and sessions: A monadic integration. In: ESOP. pp. 350–369 (2013) * [51] Toninho, B., Caires, L., Pfenning, F.: Corecursion and non-divergence in session-typed processes. In: TGC 2014. pp. 159–175 (2014) * [52] Toninho, B., Yoshida, N.: On polymorphic sessions and functions: A tale of two (fully abstract) encodings (long version). CoRR abs/1711.00878 (2017) * [53] Turner, D.: The polymorphic pi-calculus: Theory and implementation. Tech. Rep. ECS-LFCS-96-345, School of Informatics, University of Edinburgh (1996) * [54] Wadler, P.: Propositions as sessions. J. Funct. Program. 24(2-3), 384–418 (2014) * [55] Zhao, J., Zhang, Q., Zdancewic, S.: Relational parametricity for a polymorphic linear lambda calculus. In: APLAS. pp. 344–359 (2010) Appendix On Polymorphic Sessions and Functions A Tale of Two (Fully Abstract) Encodings Additional definitions and proofs of the main materials. 
## Appendix 0.A Appendix ### 0.A.1 Additional Definitions for § 2 – Structural Congruence ###### Definition 0.A.1 (Structural congruence) is the least congruence relation generated by the following laws: $P\mid{\bf 0}\equiv P$; $P\equiv_{\alpha}Q\Rightarrow P\equiv Q$; $P\mid Q\equiv Q\mid P$; $P\mid(Q\mid R)\equiv(P\mid Q)\mid R$; $(\mathbf{\nu}x)(\mathbf{\nu}y)P\equiv(\mathbf{\nu}y)(\mathbf{\nu}x)P$; $x\not\in\mbox{\it fn}(P)\Rightarrow P\mid(\mathbf{\nu}x)Q\equiv(\mathbf{\nu}x)(P\mid Q)$; $(\mathbf{\nu}x){\bf 0}\equiv{\bf 0}$; and $[x\leftrightarrow y]\equiv[y\leftrightarrow x]$. ###### Definition 0.A.2 (Extended Structural Congruence) We write $\equiv_{!}$ for the least congruence relation on processes which results from extending structural congruence $\equiv$ (Def. 0.A.1) with the following axioms, dubbed the Sharpened Replication Axioms [46]: 1. 1. $(\mathbf{\nu}u)(!u(z).P\mid(\nu y)(Q\mid R))\equiv_{!}(\nu y)((\nu u)(!u(z).P\mid Q)\mid(\nu u)(!u(z).P\mid R))$ 2. 2. $(\nu u)(!u(y).P\mid(\nu v)(!v(z).Q\mid R))\equiv_{!}(\nu v)((!v(z).(\nu u)(!u(y).P\mid Q))\mid(\nu u)(!u(y).P\mid R))$ 3. 3. $(\nu u)(!u(y).Q\mid P)\equiv_{!}P$ if $u\not\in\mbox{\it fn}(P)$ Axioms (1) and (2) represent principles for the distribution of shared servers among processes, while (3) formalises the garbage collection of shared servers which cannot be invoked by any process. The axioms embody distributivity, contraction and weakening of shared resources and are sound wrt (typed) observational equivalence [36]. ### 0.A.2 Additional Definitions for § 2 – Typing Rules Below we list the typing rules for the calculus of section § 2. We note that the judgment $\Omega\vdash B\,\mathsf{type}$ simply requires that free variables in $B$ be in $\Omega$. Moreover, typing treats processes quotiented by structural congruence – given a well-typed process $\Omega;\Gamma;\Delta\vdash P::T$, subject reduction ensures that for all possible reductions $P\xrightarrow{\tau}P^{\prime}$, there exists a process $Q$ where $P^{\prime}\equiv Q$ such that $\Omega;\Gamma;\Delta\vdash Q::T$. Related properties hold wrt general transitions $P\xrightarrow{\alpha}P^{\prime}$. We refer the reader to [9, 8] for additional details on this matter. 
$\small\begin{array}[]{c}(\mathsf{id})\ \dfrac{}{\Omega;\Gamma;x{:}A\vdash[x\leftrightarrow z]::z{:}A}\qquad({{\mathbf{1}}\mathsf{R}})\ \dfrac{}{\Omega;\Gamma;\cdot\vdash{\bf 0}::z{:}\mathbf{1}}\qquad({{\mathbf{1}}\mathsf{L}})\ \dfrac{\Omega;\Gamma;\Delta\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}\mathbf{1}\vdash P::z{:}C}\\\\[10.0pt] ({{\multimap}\mathsf{R}})\ \dfrac{\Omega;\Gamma;\Delta,x{:}A\vdash P::z{:}B}{\Omega;\Gamma;\Delta\vdash z(x).P::z{:}A\multimap B}\qquad({{\multimap}\mathsf{L}})\ \dfrac{\Omega;\Gamma;\Delta_{1}\vdash P::y{:}A\quad\Omega;\Gamma;\Delta_{2},x{:}B\vdash Q::z{:}C}{\Omega;\Gamma;\Delta_{1},\Delta_{2},x{:}A\multimap B\vdash(\mathbf{\nu}y)x\langle y\rangle.(P\mid Q)::z{:}C}\\\\[10.0pt] ({{\otimes}\mathsf{R}})\ \dfrac{\Omega;\Gamma;\Delta_{1}\vdash P::y{:}A\quad\Omega;\Gamma;\Delta_{2}\vdash Q::z{:}B}{\Omega;\Gamma;\Delta_{1},\Delta_{2}\vdash(\mathbf{\nu}y)z\langle y\rangle.(P\mid Q)::z{:}A\otimes B}\qquad({{\otimes}\mathsf{L}})\ \dfrac{\Omega;\Gamma;\Delta,y{:}A,x{:}B\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}A\otimes B\vdash x(y).P::z{:}C}\\\\[10.0pt] ({{\with}\mathsf{R}})\ \dfrac{\Omega;\Gamma;\Delta\vdash P::z{:}A\quad\Omega;\Gamma;\Delta\vdash Q::z{:}B}{\Omega;\Gamma;\Delta\vdash z.\mathsf{case}(P,Q)::z{:}A\with B}\qquad({{\with}\mathsf{L}}_{1})\ \dfrac{\Omega;\Gamma;\Delta,x{:}A\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}A\with B\vdash x.\mathsf{inl};P::z{:}C}\\\\[10.0pt] ({{\with}\mathsf{L}}_{2})\ \dfrac{\Omega;\Gamma;\Delta,x{:}B\vdash P::z{:}C}{\Omega;\Gamma;\Delta,x{:}A\with B\vdash x.\mathsf{inr};P::z{:}C}\qquad({{\oplus}\mathsf{R}}_{1})\ \dfrac{\Omega;\Gamma;\Delta\vdash P::z{:}A}{\Omega;\Gamma;\Delta\vdash z.\mathsf{inl};P::z{:}A\oplus B}\\\\[10.0pt] ({{\oplus}\mathsf{R}}_{2})\ \dfrac{\Omega;\Gamma;\Delta\vdash P::z{:}B}{\Omega;\Gamma;\Delta\vdash z.\mathsf{inr};P::z{:}A\oplus B}\qquad({{\oplus}\mathsf{L}})\ \dfrac{\Omega;\Gamma;\Delta,x{:}A\vdash P::z{:}C\quad\Omega;\Gamma;\Delta,x{:}B\vdash Q::z{:}C}{\Omega;\Gamma;\Delta,x{:}A\oplus B\vdash x.\mathsf{case}(P,Q)::z{:}C}\end{array}$
# Compression of volume-surface integral equation matrices via Tucker decomposition for magnetic resonance applications Ilias I. Giannakopoulos, , Georgy D. Guryev, José E. C. Serrallés, , Ioannis P. Georgakis, Luca Daniel, , Jacob K. White, , Riccardo Lattanzi This work was supported by NIH R01 EB024536 and by NSF 1453675. It was performed under the rubric of the Center for Advanced Imaging Innovation and Research (CAI2R, www.cai2r.net), a NIBIB Biomedical Technology Resource Center (NIH P41 EB017183).Ilias I. Giannakopoulos, Ioannis P. Georgakis and Riccardo Lattanzi are with Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, NY, USA.Georgy D. Guryev, José E.C. Serrallés, Luca Daniel, and Jacob K. White are with the Research Laboratory of Electronics, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA.Riccardo Lattanzi is also with the Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, NY, USA and the Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, NY, USA. ###### Abstract In this work, we propose a method for the compression of the coupling matrix in volume-surface integral equation (VSIE) formulations. VSIE methods are used for electromagnetic analysis in magnetic resonance imaging (MRI) applications, for which the coupling matrix models the interactions between the coil and the body. We showed that these effects can be represented as independent interactions between remote elements in 3D tensor formats, and subsequently decomposed with the Tucker model. Our method can work in tandem with the adaptive cross approximation technique to provide fast solutions of VSIE problems. We demonstrated that our compression approaches can enable the use of VSIE matrices of prohibitive memory requirements, by allowing the effective use of modern graphical processing units (GPUs) to accelerate the arising matrix-vector products. This is critical to enable numerical MRI simulations at clinical voxel resolutions in a feasible computation time. In this paper, we demonstrate that the VSIE matrix-vector products needed to calculate the electromagnetic field produced by an MRI coil inside a numerical body model with $1$ mm3 voxel resolution, could be performed in $\sim 33$ seconds in a GPU, after compressing the associated coupling matrix from $\sim 80$ TB to $\sim 43$ MB. ###### Index Terms: Cross approximation, Global Maxwell Tomography, graphical processing unit, magnetic resonance imaging, matrix-vector product, Tucker decomposition, volume-surface integral equation. ## I Introduction Magnetic resonance (MR) imaging (MRI) provides high-resolution images of the interior anatomical and physiological structure of the human body, with exquisite soft-tissue contrast. The quality of MR images, as well as the achievable spatial and temporal resolution, depend on the available signal-to- noise ratio (SNR). SNR increases with the main magnetic field strength. This fact motivated the recent development of $7$ Tesla (T) clinical MR scanners and research-only scanners with field strengths as high as $11.7$ T [1]. At ultra-high-field (UHF) MRI ($\geq 7$ T), the radio frequency (RF) wavelength is short. This results in strong interactions between biological tissues and the electromagnetic (EM) field generated by the RF coils [2, 3, 4, 5]. 
Such interactions could compromise image quality and patient safety. To address these issues, EM modeling is often used to predict and manipulate the EM field distribution during RF coil design. Integral equation (IE) methods are suitable options for EM analysis in MRI. First, they do not suffer from grid dispersion errors [6, 7], in contrast with the finite-difference time-domain (FDTD) and finite-element (FEM) methods, because the Green’s functions in the underlying IE formulation act as an exact EM field propagator from a source to an observation point. Second, for the case of single-frequency problems, IE algorithms can be extensively customized with the use of numerical linear algebra techniques for fast and accurate simulations, tailored to specific applications [8, 9, 10, 11]. For example, the MAgnetic-Resonance Integral Equation (MARIE) suite [11, 12] was developed to numerically compute the EM field distribution generated by RF coils in the human body during MRI. MARIE combines surface and volume integral equations (SIE, VIE), employing a triangular tessellation for the RF coils’ conductors and a uniform voxelized grid discretization for the body models. RWG basis functions [13] and polynomial basis functions [12, 14] are used to compute the unknowns of the surface and volume IE, respectively. Matrix-vector products are accelerated using the fast Fourier transform (FFT). The VIE computational engine of MARIE has recently been employed for the forward problem in Global Maxwell Tomography (GMT) [15], a technique that iteratively solves an ill-conditioned inverse problem to extract electrical properties from volumetric MR measurements. In the first experimental demonstration of GMT with a uniform phantom, constant incident fields were used for all iterations [15]. More recently, it was shown in simulation that GMT could accurately reconstruct brain electrical properties at $7$ T using a tailored RF coil array [16]. However, in order to confirm this with in-vivo experiments, the currents on the coil conductors cannot be simply considered constant as in the initial experiment with a uniform phantom. Instead, the incident fields must be updated at each iteration of GMT to account for changes in the sample electrical properties distribution. Therefore, GMT must be implemented with a volume-surface IE (VSIE) framework, in which a coupling matrix represents the coil-body interactions in the IE system of equations [11]. Such an approach requires a large amount of memory, which could prevent using clinically relevant voxel resolutions and fine coil meshes. The aim of this work is to use Tucker decomposition [17] to perform a column-wise compression of the VSIE coupling matrix, in order to limit the associated memory demand and enable the computation of the relevant matrix-vector products on GPUs. Our approach was motivated by previous work [10] on the reduction of the memory footprint of FFT-based VIE Green’s function tensors and the acceleration of VIE matrix-vector products using GPUs. Tucker decomposition belongs to a larger family of tensor decompositions and has been used successfully in the past for matrix compression inside IE-based simulations for EM applications. Examples include EM simulations of realistic body models for UHF MRI [18, 10] and capacitance extraction [19, 20, 21]. Other tensor decompositions could be used [22, 23, 24], but given the intrinsic 3D nature of the problem at hand, Tucker minimizes both the operation count and the memory complexity.
In cases where the coil is placed far from the scatterer, the coupling matrix can be first compressed with a 2D cross approximation method [25, 26, 27] and then further compressed by applying our proposed technique to the resulting matrices. To this end, we developed an algorithm based on the adaptive cross approximation (ACA) [28, 29] to efficiently perform our compression approach within the iterative loop of ACA. The remainder of this paper is organized as follows. In Section II, we summarize the relevant technical background and equations related to the standard VSIE method. We also show, as an example application, the use of the VSIE coupling matrix in the GMT inverse problem formulation. In addition, we outline the Tucker decomposition and the ACA method. In Section III, we introduce the Tucker-based assembly technique for the coupling matrix, along with the novel algorithms for the implementation of the matrix-vector products. Moreover, we present a new algorithm for a memory-friendly assembly of the coupling matrix, based on the combination of ACA with the Tucker decomposition. Section IV describes a series of numerical experiments aimed at investigating the multilinear rank and the memory requirements of the compressed matrix in different scenarios, along with the time footprint of the matrix-vector product for various mesh discretizations. Section V discusses the results. Section VI summarizes the work and provides a few take-home points. TABLE I lists the notation used in this work.

TABLE I: Notation

| Notation | Description |
| --- | --- |
| $a$ | Scalar |
| $\bm{a}$ | Vector in $\mathbb{C}^{3}$ |
| $\mathbf{a}$ | Vector in $\mathbb{C}^{n}$ |
| $A$ | Matrix in $\mathbb{C}^{n_{1}\times n_{2}}$ |
| $A^{T}$ | Transpose of matrix |
| $A^{*}$ | Conjugate transpose of matrix |
| ${\mathcal{A}}$ | Tensor in $\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}$ |
| $\mathscr{A}$ | Struct containing Tucker-compressed tensors |
| ${\cal{A}}$ | Operator acting on vectors in $\mathbb{C}^{3}$ |
| $\times_{i}$ | $n$-mode product |
| ${\mathrm{i}}$ | Imaginary unit, ${\mathrm{i}}^{2}=-1$ |

## II Technical Background

### II-A Volume-Surface Integral Equation

#### II-A1 Coupled Linear System

Let us consider an IE-based solver for the EM wave analysis in MRI applications. The body domain $\Omega$ can be modeled with the current-based VIE (JVIE), while the conducting surfaces (receive and transmit RF coil arrays, coil shields, and gradient coils) can be modeled with the SIE. The IEs are solved with the Method of Moments (MoM) [30]. The resulting system of equations can be written in the following block matrix form: $\begin{bmatrix}Z_{\rm cc}&T^{T}_{\rm bc}\\\ T_{\rm bc}&Z_{\rm bb}\end{bmatrix}\begin{bmatrix}J_{\rm s}\\\ J_{\rm p}\end{bmatrix}=\begin{bmatrix}V\\\ 0\end{bmatrix}.$ (1) Here, $Z_{\rm cc}\in\mathbb{C}^{m\times m}$ is the Galerkin matrix that models interactions between the coil’s discretization elements (triangles’ common edges), with the aid of the free-space Green’s function that appears in the SIE formulation. It associates the equivalent surface currents on the coil ($J_{\rm s}\in\mathbb{C}^{m\times p}$) with the voltage excitation matrix $V\in\mathbb{C}^{m\times p}$. The coil conductors are modeled with triangular meshes, and the unknown surface equivalent currents are approximated with RWG basis functions [13]. $m$ is the number of RWG, or non-boundary, triangle edges that appear on the mesh, whereas $p$ is the number of excitation ports (i.e., number of coil channels) of the conducting surfaces.
The matrix $Z_{\rm bb}\in\mathbb{C}^{qn_{\rm v}\times qn_{\rm v}}$ is another Galerkin matrix, which models the interaction of the body with an external volumetric EM field produced by the coil. Specifically, the matrix relates the polarization currents in $\Omega$ to an external incident EM field produced by the conducting surfaces. $n_{\rm v}$ is the number of voxels and $q$ the number of basis functions per voxel. Unlike $Z_{\rm cc}$, $Z_{\rm bb}$ requires a large amount of memory, even for coarse resolutions. To handle this, $\Omega$ can be discretized over a voxelized uniform grid, giving $Z_{\rm bb}$ a three-level Block-Toeplitz with Toeplitz Blocks (BTTB) structure. As a result, only the defining columns of the BTTB matrix need to be stored and the matrix-vector product can be accelerated using the FFT, as in [31, 32, 33, 34, 35, 36, 37, 12]. The unknown polarization currents ($J_{\rm p}\in\mathbb{C}^{qn_{\rm v}\times p}$) can be discretized with polynomial basis functions with single-voxel support, either piecewise constant [12] (PWC, $3$ unknowns per voxel) or piecewise linear [14] (PWL, $12$ unknowns per voxel). The presence of conductive tissue near the coil conductors perturbs $J_{\rm s}$ from its free-space values. In fact, the voltage excitations at the coil’s ports create incident EM fields that scatter from the dielectric body back to the coil conductors, changing their current distribution. The coupling matrix $T_{\rm bc}\in\mathbb{C}^{qn_{\rm v}\times m}$ is used to account for this effect, by modeling, through the dyadic Green’s function [38], the coupling interactions between the SIE and the VIE formulations. Specifically, in equation (1), $T_{\rm bc}$ maps electric surface equivalent currents to electric fields through the ${\cal{N}}$ Green’s function operator of the VIE: ${\cal{N}}\left(\bm{s}\right)\triangleq\nabla\times\nabla\times\int\limits_{\Omega}g\left(\bm{r}-\bm{r}^{\prime}\right)\bm{s}\left(\bm{r}^{\prime}\right)d^{3}\bm{r}^{\prime}.$ (2) $g$ is the free-space Green’s function, or fundamental Helmholtz solution, and it is equal to $g\left(\bm{r}-\bm{r}^{\prime}\right)=\frac{e^{-{\mathrm{i}}k_{0}\lvert\bm{r}-\bm{r}^{\prime}\rvert}}{4\pi\lvert\bm{r}-\bm{r}^{\prime}\rvert},$ (3) where $k_{0}$ is the free-space wavenumber, $\bm{r}$ the source point, and $\bm{r}^{\prime}$ the observation point. Each element of the VSIE coupling matrix is a 5D integral formed from the inner product between the discretized ${\cal{N}}$ operator applied to a VIE basis function and an RWG basis function.

#### II-A2 VSIE implementation of GMT

GMT estimates tissue electrical properties from MR measurements by solving an inverse problem [15]. In GMT, the cost function compares actual measurements against simulated measurements of the relative $b_{1}^{+}$ fields generated by multiple sources (e.g., multiport transmit coils) inside a sample and iteratively updates the estimate of the sample’s electrical properties. GMT was initially demonstrated using the JVIE formulation for the solution of the forward problem, therefore ignoring the effect of the dielectric sample on the incident fields. However, these interactions must be taken into account for accurate in-vivo experiments with close-fitting RF coils.
In other words, the GMT framework must be ported from a VIE to a VSIE formulation, in which the incident fields are not constant but are calculated at each GMT iteration as $E^{\rm inc}\left(\mathbf{\epsilon_{r}}\right)=T_{\rm bc}J_{\rm s}\left(\mathbf{\epsilon_{r}}\right),\qquad H^{\rm inc}\left(\mathbf{\epsilon_{r}}\right)=Z_{\rm bc}J_{\rm s}\left(\mathbf{\epsilon_{r}}\right),$ (4) where $E^{\rm inc}$ and $H^{\rm inc}$ are the discretized incident electric and magnetic fields, respectively, and $\mathbf{\epsilon_{r}}$ is the complex permittivity. $Z_{\rm bc}$ maps the equivalent surface electric currents to the magnetic fields with the aid of the ${\cal{K}}$ operator: ${\cal{K}}\left(\bm{s}\right)\triangleq\nabla\times\int\limits_{\Omega}g\left(\bm{r}-\bm{r}^{\prime}\right)\bm{s}\left(\bm{r}^{\prime}\right)d^{3}\bm{r}^{\prime}.$ (5) In addition, in the new implementation, the gradient of GMT’s cost function will require solving a Hermitian adjoint system of equations that includes multiplications with the conjugate transpose of $Z_{\rm bc}$. Matrix-vector products involving the coupling matrix are typically performed without storing the full matrix, due to its intractably large size. In the case of iterative inverse problem solutions, such as in GMT, this approach could considerably increase the computation time, because it requires the re-assembly of the full matrix at each iteration. In the next sections, we propose a compression algorithm that reduces the computational burden, by enabling one to assemble the full coupling matrix only once and then perform just the matrix-vector multiplications in each of GMT’s iterations.

### II-B Numerical Linear Algebra Methods

#### II-B1 Tucker Decomposition

A 3D tensor ${\mathcal{A}}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}$ can be decomposed with the Tucker model [17] in the following form: ${\mathcal{A}}={\mathcal{G}}\times_{1}U^{1}\times_{2}U^{2}\times_{3}U^{3},\quad\text{or}\quad{\mathcal{A}}_{ijk}=\sum_{\chi=1}^{r_{1}}\sum_{\psi=1}^{r_{2}}\sum_{\zeta=1}^{r_{3}}{\mathcal{G}}_{\chi\psi\zeta}U^{1}_{i\chi}U^{2}_{j\psi}U^{3}_{k\zeta}.$ (6) Here $U^{\gamma}\in\mathbb{C}^{n_{\gamma}\times r_{\gamma}},\gamma=1,2,3$, are unitary matrices, dubbed Tucker factors, while ${\mathcal{G}}\in\mathbb{C}^{r_{1}\times r_{2}\times r_{3}}$ is the Tucker core. The dimensions of the Tucker core indicate the multilinear (or Tucker) ranks of ${\mathcal{A}}$. The symbols $\times_{\gamma}$ are called $n$-mode products and perform a contraction over the $\gamma$-th mode; for example, the $\times_{1}$ product performs the following operation: ${\mathcal{P}}={\mathcal{G}}\times_{1}U^{1}\Leftrightarrow{\mathcal{P}}_{i\psi\zeta}=\sum\limits_{\chi=1}^{r_{1}}{\mathcal{G}}_{\chi\psi\zeta}U^{1}_{i\chi}.$ (7) Here, ${\mathcal{P}}\in\mathbb{C}^{n_{1}\times r_{2}\times r_{3}}$. The expansion of ${\mathcal{A}}$ in equation (6) can be truncated to a desired tolerance, yielding an approximation of ${\mathcal{A}}$. A visual representation of the Tucker decomposition can be seen in Fig. 1.

Figure 1: Visual representation of Tucker decomposition.

To compute the above-mentioned Tucker components, one has to choose a suitable compression algorithm. The higher order singular value decomposition (HOSVD) is an orthogonal Tucker decomposition, widely used because it has a proven upper error bound [39].
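To make the Tucker model and the HOSVD concrete, the following is a minimal NumPy sketch written under simplifying assumptions: the truncation ranks are fixed rather than chosen from an error tolerance, and the helper names (`unfold`, `nmode_product`, `hosvd`, `tucker_reconstruct`) are ours for illustration only, not part of MARIE or any released package.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nmode_product(T, U, mode):
    """n-mode product T x_mode U, where U has shape (J, T.shape[mode])."""
    rest = [T.shape[i] for i in range(T.ndim) if i != mode]
    out = U @ unfold(T, mode)                        # contract the selected mode
    return np.moveaxis(out.reshape([U.shape[0]] + rest), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: T ~ G x1 U1 x2 U2 x3 U3, mirroring Eq. (6)."""
    factors = []
    for mode, r in enumerate(ranks):
        # Left singular vectors of the mode-n unfolding give the mode-n Tucker factor.
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = nmode_product(core, U.conj().T, mode)  # project onto the factor subspaces
    return core, factors

def tucker_reconstruct(core, factors):
    """Evaluate G x1 U1 x2 U2 x3 U3 to recover the (approximated) tensor."""
    T = core
    for mode, U in enumerate(factors):
        T = nmode_product(T, U, mode)
    return T

# Example: compress a 32x32x32 complex tensor to multilinear rank (8, 8, 8).
A = np.random.randn(32, 32, 32) + 1j * np.random.randn(32, 32, 32)
core, factors = hosvd(A, ranks=(8, 8, 8))
err = np.linalg.norm(tucker_reconstruct(core, factors) - A) / np.linalg.norm(A)
```

For a generic random tensor the truncation error is of course large; the point of the method is that the tensors arising from remote coil-body interactions admit small multilinear ranks, which is what makes the column-wise compression of Section III effective.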
Moreover, HOSVD is based entirely on the singular value decomposition (SVD), which provides a robust and stable approximation of the initial tensor. Note that the SVD requires the assembly of the initial array, which could be challenging for large tensors. In such cases, one could implement the Tucker decomposition using a 3D cross approximation algorithm [40].

#### II-B2 Cross Approximation

A matrix $A\in\mathbb{C}^{n_{1}\times n_{2}}$ can be approximated with the so-called 2D cross approximation method [25, 26] as follows: $A\approx UV^{*}.$ (8) Here, $U\in\mathbb{C}^{n_{1}\times r_{c}}$ and $V\in\mathbb{C}^{n_{2}\times r_{c}}$, and $r_{c}$ represents the rank of the approximation of $A$. Cross approximation algorithms construct the decomposition of $A$ by using only some of its rows and columns, unlike the SVD, which requires the full matrix to be available. Several algorithms have been developed over the previous decades for the implementation of cross approximation, the two most widely used being the ACA [28, 29] and the maximum volume-based cross algorithm [27, 41]. The latter relies on LU, QR and SVD factorizations for an efficient implementation, so its memory demand increases drastically for large tall matrices, such as the coupling matrices $T_{\rm bc}$ and $Z_{\rm bc}$ in the case of fine voxel resolutions. On the other hand, the memory demand of ACA is dictated only by the size of the matrices $U$ and $V$.

## III Tucker-Based Compression Algorithm

In VSIE, the columns of the coupling matrix describe interactions between coil and body basis functions through the Green’s functions of equations (2) and (5); therefore, they represent well-separated geometrical blocks. Due to the 3D nature of the problem, the key idea of our proposed compression algorithm is to reshape these columns as tensors and approximate them with the low multilinear rank Tucker model. This compression strategy enabled us to develop a new method to efficiently perform the matrix-vector product and an extension to ACA, which are described later in this section.

### III-A Matrix Assembly

Each of the $m$ columns of the coupling matrices $Z_{\rm bc}$ and $T_{\rm bc}$ can be seen as the concatenation of $q$ vectors, where each vector represents the component-wise interaction between one RWG element on the coil and the basis functions of all the voxels in the body domain. For PWC, $q=3$, whereas for PWL, $q=12$. Since these vectors model interactions between remote discretization elements, they have low-rank properties [42]. To exploit this low-rank structure, each column of the coupling matrix can be reshaped as $q$ 3D tensors ${\mathcal{Z}}^{kj}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}$, $k=1:q$, $n_{\rm v}=n_{1}\times n_{2}\times n_{3}$, which are compressible with the Tucker decomposition [43]. A graphical description of the algorithm is shown in Fig. 2 for $Z_{\rm bc}$ and PWC basis functions. Figure 2: Visual representation of the Tucker-based algorithm for the compression of the $Z_{\rm bc}$ matrix, in the case of PWC basis functions. Each vector can be reshaped into a 3D tensor that is then compressed via Tucker decomposition. If the coupling matrix is first approximated with the ACA as $UV^{*}$, then our approach can still be used to compress the $r_{c}$ columns of $U$. In fact, cross approximation is a well-conditioned operation, therefore the Tucker ranks of the reshaped columns of $U$ will be similar to those of the reshaped columns of the coupling matrix.
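For completeness, the factorization of equation (8) can be built from individual rows and columns without ever forming the full matrix. The sketch below is a minimal partially pivoted ACA in NumPy; the pivoting and stopping rules are a common simplified variant of [28, 29] (in particular, the Frobenius-norm estimate omits cross terms), and they are not necessarily the exact rules used in this work.

```python
import numpy as np

def aca(get_row, get_col, n_rows, n_cols, tol=1e-3, max_rank=None):
    """Partially pivoted adaptive cross approximation A ~ U V^* (eq. (8)).

    get_row(i) / get_col(j) return one row / column of A on demand,
    so the full matrix is never assembled.
    """
    max_rank = max_rank or min(n_rows, n_cols)
    U, V = [], []            # columns of U and V, built one rank-1 term at a time
    norm_sq = 0.0            # simplified running estimate of ||A_k||_F^2
    i = 0                    # first row pivot
    for _ in range(max_rank):
        r = get_row(i) - sum(uk[i] * vk.conj() for uk, vk in zip(U, V))
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < np.finfo(float).eps:
            break
        v = (r / r[j]).conj()
        c = get_col(j) - sum(vk[j].conj() * uk for uk, vk in zip(U, V))
        U.append(c)
        V.append(v)
        norm_sq += (np.linalg.norm(c) * np.linalg.norm(v)) ** 2
        if np.linalg.norm(c) * np.linalg.norm(v) <= tol * np.sqrt(norm_sq):
            break
        c_abs = np.abs(c)
        c_abs[i] = 0.0
        i = int(np.argmax(c_abs))  # next row pivot
    return np.column_stack(U), np.column_stack(V)

# Example on a synthetic low-rank matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))
U, V = aca(lambda i: A[i, :].copy(), lambda j: A[:, j].copy(), *A.shape)
print(U.shape, np.linalg.norm(A - U @ V.conj().T) / np.linalg.norm(A))
```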
In the ACA factorization of the coupling matrix, the $V$ matrix is usually much smaller than $U$ and does not require any additional compression. TABLE II shows the total memory footprint associated with the assembly of the coupling matrix: full assembly, assembly with ACA, and assembly with our proposed method by compressing either the columns of the coupling matrix (Tucker) or the columns of $U$ (ACA+Tucker). The memory required after compressing the coupling matrix with Tucker is $n_{\rm v}/\left(r^{3}+3nr\right)$ times smaller than the memory required by the full matrix, where $n$ and $r$ refer to the tensor’s linear dimension and Tucker rank, respectively. If our Tucker-based compression method is instead applied after the ACA assembly, then the total compression improves by a factor of $\sim m/r_{c}$, given that $r_{c}$ is small.

TABLE II also shows the computational complexity of the assembly operations. The multiplicative constant factor $c_{1}$, which is present in all cases, represents the cost to compute the elements of the coupling matrix and is usually large. In fact, each element requires a 5D integration, whose computational cost depends on the number of both surface and volume quadrature integration points. As a result, the assembly of the matrix is extremely inefficient and should be implemented in parallel for multiple voxel-RWG basis function interactions. Note that in certain cases, for example when the coil is close to the body, ACA may not achieve a meaningful compression, and it would not be advantageous to combine it with the Tucker decomposition. In such cases, the preferable approach would be to divide the coupling matrix into $q$ blocks of size $n_{\rm v}\times m$, assemble them in parallel, and then compress their tensor components with a Tucker-based method like HOSVD. Alternatively, if the coupling matrix is sufficiently small, one could assemble it in its full form and then apply Tucker directly to compress it.

TABLE II: Complexity for Constructing the Coupling Matrix

| Assembly Method | Operations | Memory |
|---|---|---|
| Full | ${\mathcal{O}}\left(c_{1}qn_{\rm v}m\right)$ | $qn_{\rm v}m$ |
| ACA | ${\mathcal{O}}\left(c_{1}r_{c}^{2}\left(qn_{\rm v}+m\right)\right)$ | $qn_{\rm v}r_{c}+mr_{c}$ |
| Tucker | Full + ${\mathcal{O}}\left(rqn_{\rm v}m\right)$ | $q\left(r^{3}+3nr\right)m$ |
| ACA+Tucker | ACA + ${\mathcal{O}}\left(rqn_{\rm v}r_{c}\right)$ | $q\left(r^{3}+3nr\right)r_{c}+mr_{c}$ |

### III-B Matrix-Vector Product

Decompressing the coupling matrix to compute the matrix-vector product $\mathbf{y}=Z_{\rm bc}\mathbf{x}$, as in equation (4), may not be possible due to computer or GPU memory limitations. To address this, we propose a novel approach to efficiently compute the matrix-vector product without fully decompressing the coupling matrix. We initialize $\mathbf{y}$ as a vector of zeros. Inside a loop that cycles over the RWG basis functions, we decompress the $q$ tensors of a column $j\in[1,m]$, reshape them as vectors, and concatenate them to form the $j$-th column of the original $Z_{\rm bc}$ matrix. The vector-scalar product between the $j$-th column and $\mathbf{x}_{j}$ is then computed, and the result increments the elements of $\mathbf{y}$. The same algorithm can be followed for the matrix-matrix product $Y=Z_{\rm bc}X$. The conjugate transpose matrix-vector product $\mathbf{y}=Z^{*}_{\rm bc}\mathbf{x}$ is required for the computation of the gradient of the cost function in the VSIE-based GMT implementation.
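A minimal NumPy sketch of this per-column strategy, for both the standard and the conjugate transpose products, is given below. The pseudocode that follows (Algorithms 1 and 2) is the authoritative description; the storage layout assumed here, one list of (core, factors) pairs per column, is only an illustrative choice.

```python
import numpy as np

def decompress(core, factors):
    """Rebuild a 3D tensor from its Tucker core and factors (cf. eq. (6))."""
    out = core
    for mode, u in enumerate(factors):
        out = np.moveaxis(np.tensordot(u, out, axes=(1, mode)), 0, mode)
    return out

def coupling_matvec(compressed_cols, x):
    """y = Z_bc x, decompressing one column at a time (cf. Algorithm 1 below)."""
    q = len(compressed_cols[0])
    n_v = int(np.prod([u.shape[0] for u in compressed_cols[0][0][1]]))
    y = np.zeros(q * n_v, dtype=complex)
    for j, col in enumerate(compressed_cols):
        z_j = np.concatenate([decompress(core, factors).ravel()
                              for core, factors in col])
        y += z_j * x[j]
    return y

def coupling_rmatvec(compressed_cols, x):
    """y = Z_bc^* x, one scalar entry of y per column (cf. Algorithm 2 below)."""
    y = np.zeros(len(compressed_cols), dtype=complex)
    for j, col in enumerate(compressed_cols):
        z_j = np.concatenate([decompress(core, factors).ravel()
                              for core, factors in col])
        y[j] = np.vdot(z_j, x)  # conjugated dot product
    return y
```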
The conjugate transpose product is slightly different from the standard matrix-vector product: inside the loop cycling through the RWG functions, a row-column vector product must be computed between the conjugate transpose of the decompressed $j$-th column of $Z_{\rm bc}$ and $\mathbf{x}$, which yields the scalar $\mathbf{y}_{j}$. The algorithm instead remains the same for the conjugate transpose matrix-matrix products. Both algorithms (for $p$-columned matrices $X$ and $Y$) are summarized in the pseudocode below:

Algorithm 1: $\>Y=Z_{\rm bc}X$

1: for k=1:$q$ do
2:  $Y^{k}=0$
3: for j=1:$m$ do
4:  for k=1:$q$ do
5:   $\text{Decompress}\>{\mathcal{Z}}^{kj}$
6:   $Y^{k}+={\mathcal{Z}}^{kj}(:)\mathbf{X_{j:}}$
7: $Y=\begin{bmatrix}Y^{1}\\\ \cdots\\\ Y^{q}\end{bmatrix}$

Algorithm 2: $\>Y=Z^{*}_{\rm bc}X$

1: $Y=0$
2: for j=1:$m$ do
3:  for k=1:$q$ do
4:   $\text{Decompress}\>{\mathcal{Z}}^{kj}$
5:   $Y_{j:}=\begin{bmatrix}{\mathcal{Z}}^{1j}(:)\\\ \cdots\\\ {\mathcal{Z}}^{qj}(:)\end{bmatrix}^{*}X$

In Algorithm 1, $X\in\mathbb{C}^{m\times p}$, $Y\in\mathbb{C}^{qn_{\rm v}\times p}$ (vice-versa for Algorithm 2), and ${\mathcal{Z}}^{kj}(:)$ is the reshaped column vector of the tensor component ${\mathcal{Z}}^{kj}$. The algorithms remain the same if $Z_{\rm bc}$ is compressed with ACA first ($Z_{\rm bc}=UV^{*}$): one has to replace $Z_{\rm bc}$ with $U$, $m$ with $r_{c}$, and assign $X=V^{*}X$ for Algorithm 1, and $Y=VY$ for Algorithm 2.

Both the standard and the conjugate transpose matrix-vector products have the same complexity, shown in TABLE III, for the full, ACA, Tucker, and ACA+Tucker compressed cases. The Tucker-compressed matrix-vector product is slower than the full one by a factor of $(r+p)/p$, which depends on the number of columns of $X$ and $Y$ and on the Tucker rank. ACA can be faster than the full case for small values of $r_{c}$. Although the approach based on the Tucker decomposition is slower, because it requires additional flops compared to the other methods, it is more likely to fit in GPUs, thanks to its small memory footprint.

TABLE III: Matrix-Vector Product Complexity

| Matrix Form | Operations Complexity |
|---|---|
| Full | ${\mathcal{O}}\left(qn_{\rm v}mp\right)$ |
| ACA | ${\mathcal{O}}\left(qn_{\rm v}r_{c}p\right)$ + ${\mathcal{O}}\left(r_{c}mp\right)$ |
| Tucker | ${\mathcal{O}}\left(rqn_{\rm v}m\right)$ + ${\mathcal{O}}\left(qn_{\rm v}mp\right)$ |
| ACA+Tucker | ${\mathcal{O}}\left(rqn_{\rm v}r_{c}\right)$ + ${\mathcal{O}}\left(qn_{\rm v}r_{c}p\right)$ + ${\mathcal{O}}\left(r_{c}mp\right)$ |

### III-C Tucker-based ACA

If the coupling matrix is first compressed with ACA, the previous methods for matrix assembly and matrix-vector product could still be applied to the matrix $U$ of the cross approximation. However, for the case of realistic body models discretized with fine voxel resolutions, the traditional implementation of ACA (a detailed description can be found in [44]) might fail due to random access memory (RAM) overflow because of the size of $U$ (see Section IV-B2). To address this, we propose an extension of ACA in which the matrix $U$ is assembled directly in a compressed form, based on our proposed Tucker decomposition technique. The algorithm is summarized in the pseudocode below: Algorithm 3: ACA of $Z\in\mathbb{C}^{m_{1}\times m_{2}}$.
Assembly with compressed $U$ 1:$\epsilon=\text{tolerance}$, $i=1$, $\mathbf{s}_{1}=0$ 2:$\mathscr{U}=[]$, $V=[]$ 3:for $k=1:\text{min}(m_{1},m_{2})$ do 4: $\mathbf{r}\leftarrow Z_{i:}$ 5: if $k>1$ then 6: $[f_{1},f_{2},f_{3},p]\leftarrow i$ 7: for $l=1:size(\mathscr{U},2)$ do 8: $[{\mathcal{G}},U^{1},U^{2},U^{3}]\leftarrow U_{pl}$ 9: $\mathbf{t}_{l}={\mathcal{G}}\times_{1}U_{f_{1},:}^{1}\times_{2}U_{f_{2},:}^{2}\times_{3}U_{f_{3},:}^{3}$ 10: $\mathbf{r}\mathrel{+}=-\mathbf{t}V^{*}$ 11: $j\leftarrow\text{index of max element of}\>\mathbf{r}$ 12: $\mathbf{y}\leftarrow\left(\mathbf{r}/\mathbf{r}_{j}\right)^{*}$ 13: $\mathbf{x}\leftarrow Z_{:j}$ 14: if $k>1$ then 15: $\mathbf{x}\mathrel{+}=-\textbf{Alg1}(\mathscr{U},V_{j:}^{*})$ 16: $\mathbf{s}_{k+1}\leftarrow\mathbf{s}_{k}+(\left\lVert\mathbf{x}\right\rVert\left\lVert\mathbf{y}\right\rVert)^{2}$ 17: if $k>1$ then 18: $\mathbf{s}_{k+1}\mathrel{+}=2\sum\left[\mathrm{Re}\,\\{\left(\textbf{Alg2}(\mathscr{U},\mathbf{x})\right)\odot\left(V^{*}\mathbf{y}\right)\\}\right]$ 19: Reshape $\mathbf{x}$ to $q$ ${\mathcal{X}}^{q}$ tensors. 20: $\mathscr{U}=[\mathscr{U}\>\text{HOSVD}({\mathcal{X}}^{1},3\epsilon)\>\cdots\>\text{HOSVD}({\mathcal{X}}^{q},3\epsilon)]$ 21: $V=[V\>y]$ 22: if $\left\lVert\mathbf{x}\right\rVert\left\lVert\mathbf{y}\right\rVert\leq\epsilon\sqrt{\mathbf{s}_{k+1}}$ then break 23: $\mathbf{x}=\lvert\mathbf{x}\rvert$, $\mathbf{x}_{i}=0$ 24: $i\leftarrow\text{index of max element of}\>\mathbf{x}$ Here $\mathscr{U}$ is a struct of size $q\times r_{c}$ ($q=3$ for PWC, $q=12$ for PWL, $r_{c}$ is the rank of $Z$) than contains tensors. Each time a new column of $U$ is computed, it is reshaped to $q$ tensors, which are then compressed with a truncated HOSVD of tolerance $3\epsilon$ (line 19,20). The HOSVD tolerance has to be higher than the ACA tolerance since the irrelevant numerical digits ($<1e-3$) appearing in $U$ are incompressible. We found that a $3$ times higher tolerance is a good choice for our numerical examples. To perform matrix- and conjugate transpose matrix-vector products with the compressed $U$ we followed Algorithms 1 (line 15) and 2 (line 18). Finally, when a row of $U$ is requested in Algorithm 3, we first calculate the voxel and basis function component corresponding to that row (line 6) and then decompress, using equation (6), only the required elements from the Tucker compressed components of $U$ (line 8,9). The proposed algorithm avoids RAM overflowing, but it is slower than the traditional ACA due to the multiple tensor decompressions. Nevertheless, it could always be accelerated via GPU, since its memory demand is as low as for one column of the coupling matrix. ## IV Numerical Experiments ### IV-A Tucker Rank Behavior In this section, we study the low-Tucker rank properties of the $Z_{\rm bc}$ coupling matrix. We considered multiple geometrical scenarios and altered the distance between the conductive surface (coil) and the VIE domain, the operating frequency and the conductive surface’s discretization. The tensor components of the columns of the coupling matrix were compressed with the HOSVD algorithm and a tolerance of $1e-8$, which yielded a relative error similar to the tolerance for all cases, due to the robustness of the SVD itself. Such error can be considered negligible, because the tolerance of the iterative solver used in FFT-based VIE systems is usually orders of magnitude higher. #### IV-A1 Tucker Rank vs. 
Distance It is well established that the Green’s function integro-differential operators between well-separated geometries present low-rank properties [42]. Here we studied the relation between the low multilinear rank of the compressed coupling matrix $Z_{\rm bc}$ and the distance between the body’s domain and the coil. We set the frequency to $298.06$ MHz, the operating frequency of $7$ Tesla MRI. We modeled a single perfectly electric conducting (PEC) loop coil of radius $\rho=0.50$ m and discretized it with $125$ triangular elements. The coil was centered in $\left(0,d,0\right)$, where $d$ was varied as $0.55$, $0.6$, $\dots$, $1$ m. The domain was a cuboid with edge length of $1$ m, centered at $(0,0,0)$ and discretized with voxels of $1$ cm isotropic resolution and PWC basis functions (Fig. 3). As a result, the tensor’s size was $101\times 101\times 101$ and the memory required by the fully assembled $Z_{\rm bc}$ was $5848$ MBs. Figure 3: Loop-cubic domain geometry. The loop coil was shifted on the $\hat{y}$ direction, for $10$ discrete distances between $0.55$ to $1$ m from the center of the cube. Fig. 4 illustrates the reduction of the maximum rank (maximum of all Tucker ranks for all components) of the coupling matrix (right axis), and the total memory of the compressed matrix using the algorithm described in Section IV (left axis). It is evident that the greater the distance between the domain and the coil, the lower the multilinear ranks and the required memory. The compression factor varied between $\sim 50$ and $190$, depending on the distance. Figure 4: Memory footprint (left) and maximum rank (rank) of the compressed $Z_{\rm bc}$ matrix, for different distances between the loop and the cubic domain. #### IV-A2 Tucker Rank vs. Frequency The work presented in [42] showed that the rank of the integral operators for 3D problems increases linearly with respect to the operating frequency. This was confirmed in [10], for the BTTB defining tensors of the FFT-based VIE systems (discretized integral operators). These tensors were columns of the corresponding Galerkin MoM matrices and modeled the interactions between one voxel’s basis function and the whole domain via the ${\cal{N}}$ or ${\cal{K}}$ operators. In the present study, the tensors are columns of the coupling matrix and model the interactions between one RWG basis function and the whole body domain via the same operators. Since in both cases the interactions between separated geometry blocks are modeled, one can expect a similar behavior for the Tucker ranks. To confirm this, we performed a frequency sweep ($300$, $600$, $\dots$, $2700$MHz) for the setup in Fig. 3. The coil was discretized with $125$ elements, whereas the voxel’s isotropic resolution was set to $\lambda/20$, with $\lambda$ being the wavelength. We repeated the calculations for three positions of the coil ($d=0.55$, $0.65$, and $0.8$ m). The memory footprint (left) of the compressed matrix, along with the maximum rank (right), are shown in the dual axis chart of Fig. 5. The memory footprint increased linearly with frequency, whereas the maximum rank grew at different rates for the three investigated cases. This is expected because the maximum rank represents the worst-case scenario among all ranks, whereas the memory footprint summarizes the overall effect of all ranks. Figure 5: Memory footprint (left) and maximum rank (rank) of the compressed $Z_{\rm bc}$ matrix, for different operating frequencies. 
Results are shown for three different distances between the loop and the domain. #### IV-A3 Tucker Rank vs. Surface Mesh Let us consider a fixed mesh for the domain and refine only the surface mesh of the coil. As the coil mesh is refined, the edges of the triangular elements become smaller and the number of columns of the coupling matrix increases, with each column representing more remote element interactions between an edge of the triangular mesh and the voxelized domain. As a result, we should expect a drop in the Tucker ranks. To verify this, we used the same domain of the previous sections, and a PEC equilateral triangle with centroid at $(0,0.55\text{m},0)$ and one vertex at $(0,0.55\text{m},0.5\text{m})$. The triangle was discretized with $10$ different meshes, starting from triangular element’s edge of $0.5$m and reducing it by a factor of $\sqrt{2}$, which resulted in $4$, $6$, $11$, $30$, $48$, $102$, $184$, $358$, $727$, and $1480$ elements. Fig. 6 reports the maximum rank as a function of the length of the triangular element’s edge, confirming that the rank is smaller when the PEC triangle’s mesh is finer. Figure 6: Maximum rank of the compressed $Z_{\rm bc}$ matrix, for various PEC triangle’s meshes. The rank drops as we refine the mesh. ### IV-B Application to VSIE-based MRI Simulations Here we aim to validate the performance of the proposed algorithms for the assembly of the coupling matrix $Z_{\rm bc}$, and the matrix-vector implementation for two VSIE-based MRI applications. Both numerical experiments were implemented in Matlab, except for the matrix assembly part which was written in C++. For the GPU computations, we used an NVIDIA Quadro Volta GV100 32GB HBM2 PCIe. For the CPU computations, in the first experiment we used a server with CentOS 6.9 operating system and an Intel(R) Xeon(R) CPU E5-2699 v3 at 2.30GHz, while for the second experiment we used a server with Ubuntu 18.04.5 LTS operating system and an Intel(R) Xeon(R) Gold 6248 CPU at 2.50GHz. We parallelized on $12$ workers where needed. #### IV-B1 Head Coil Experiments We first demonstrated the proposed compression method for an 8-ports close- fitting head coil, previously designed for GMT [16], which we loaded with the “Billie” realistic head model from the virtual family population [45] (Fig. 7). The operating frequency was set to $298$ MHz. Figure 7: Coil-head geometry. The RF coil (discretized with $2380$ triangular element edges) was loaded with the voxelized realistic human head model “Billie” (discretized with voxels of $1$ mm isotropic resolution). The VSIE-based implementation of GMT requires performing operations on the coupling matrix $Z_{\rm bc}$ and its conjugate transpose. We analyzed the memory footprint reduction for the compressed coupling matrix and measured the computation time for both the matrix- and conjugate transpose matrix-vector products using the algorithms presented in section III. The coil was discretized with both a coarse ($516$ RWG) and a fine ($2380$ RWG) mesh resolution. For the VIE domain enclosing the head, we tested three different voxel resolutions, namely $5$, $2$, and $1$ mm3, which resulted in $34\times 38\times 45$, $84\times 96\times 116$, and $168\times 188\times 222$ voxels, respectively. Both PWC ($3$ unknowns per voxel) and PWL ($12$ unknown per voxel) VIE basis functions were considered. We used a tolerance of $1e-8$ for HOSVD, which would ensure accurate estimation of electrical properties in an actual GMT experiment. 
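The "Full" entries of TABLE IV below follow directly from these grid sizes and coil discretizations; a short sketch (assuming complex double-precision entries, 16 bytes each, and GiB units) reproduces them:

```python
def full_coupling_memory_gib(grid, m, q):
    """Memory (GiB) of the dense coupling matrix: (q * n_v) x m complex doubles."""
    n1, n2, n3 = grid
    return q * n1 * n2 * n3 * m * 16 / 2**30

# 1 mm voxels (168 x 188 x 222) with the fine coil mesh (m = 2380 RWG)
print(full_coupling_memory_gib((168, 188, 222), m=2380, q=3))   # ~746 GiB for PWC
print(full_coupling_memory_gib((168, 188, 222), m=2380, q=12))  # ~2984 GiB for PWL
```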
Since the coil closely fits the head, ACA (or SVD-based methods in general) is expected to provide negligible compression with a tolerance of $1e-8$. We confirmed this for the case of PWC basis functions, $5$ mm3 voxel resolution, and fine coil discretization, for which, in fact, we found that $2238$ of the $2380$ singular values would be needed to accurately represent $Z_{\rm bc}$, compressing the matrix only from $6.18$ GB to $6.07$ GB. Consequently, for the head coil experiments we did not use the Tucker-based ACA algorithm, but instead we compressed the columns of the coupling matrix only with the HOSVD-based method.

##### Memory Compression

The memory footprint for the assembly of the coupling matrix $Z_{\rm bc}$ is shown in TABLE IV. The memory required to assemble the full matrix was considerably larger than for the HOSVD-compressed matrix. For example, for PWC basis functions, voxel resolution of $1$ mm3, and fine coil mesh, the required memory in the full matrix case was $>740$ GBs, whereas the compressed matrix required only $2.6$ GBs. Note that in the challenging case of PWL basis functions, $1$ mm3 voxel resolution, and fine coil mesh, it was not feasible to apply our compression method. In fact, the memory requirements even for just one of the $q$ blocks of the matrix (see Section III-A) were prohibitively large for our server. While we could have still implemented the HOSVD compression by further dividing the matrix in smaller blocks, that would have required $\sim 1$ month of computations. An alternative method for such costly cases is mentioned in the discussion and will be pursued in future work.

TABLE IV: Memory Requirements (GBs) of $Z_{\rm bc}$

| Voxel Res. | Assembly | PWC-coarse | PWC-fine | PWL-coarse | PWL-fine |
|---|---|---|---|---|---|
| $5$ mm3 | Full | 1.34 | 6.18 | 5.36 | 24.74 |
| | HOSVD | 0.20 | 0.88 | 0.86 | 3.77 |
| $2$ mm3 | Full | 21.57 | 99.52 | 86.30 | 398.09 |
| | HOSVD | 0.40 | 1.74 | 1.79 | 7.49 |
| $1$ mm3 | Full | 161.73 | 745.99 | 646.95 | 2983.99 |
| | HOSVD | 0.63 | 2.62 | 2.85 | N/A |

Fig. 8 shows that the compression factor, defined as the memory of the full matrix over the memory of the compressed one, decreased as the voxel resolution ($h$) of the VIE domain’s grid became coarser. The behavior of the compression factor was similar for PWC or PWL basis functions, either with fine or coarse mesh discretization. This confirms the excellent stability of our compression method. Figure 8: Compression factor of the compressed matrix $Z_{\rm bc}$. Results are shown for all investigated head and coil discretizations.

Fig. 9 shows the maximum Tucker rank, obtained with HOSVD, for all tensor components of the coupling matrix. The rank decreased more slowly than the compression factor (Fig. 8) for coarser discretizations of the VIE domain. For example, when PWC basis functions and fine coil resolution were used (PWC-fine), the maximum rank decreased by only $1.5$ times (from $42$ to $28$) when the isotropic voxel size increased from $1$ to $5$ mm3, which corresponds to a $5$ times smaller grid in all directions. For all cases, the maximum rank was smaller for finer coil meshes, which is in agreement with the results shown in Section IV-A3. Figure 9: Maximum rank as a function of voxel resolution of the VIE domain. The maximum rank was calculated among all Tucker ranks of the decomposed tensors, which were obtained with HOSVD. Results are shown for all investigated head and coil discretizations. For each voxel resolution, the size of the corresponding VIE discretization grid is indicated.
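For reference, the compression factors plotted in Fig. 8 are simply the ratios between the Full and HOSVD rows of TABLE IV; for the PWC-coarse column, for example:

```python
full  = {"5 mm": 1.34, "2 mm": 21.57, "1 mm": 161.73}   # GB, TABLE IV, PWC-coarse
hosvd = {"5 mm": 0.20, "2 mm": 0.40,  "1 mm": 0.63}
for res in full:
    print(res, round(full[res] / hosvd[res], 1))
# -> about 6.7, 53.9 and 256.7: the factor grows as the voxel grid is refined
```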
##### Computation Time TABLE V reports the computation time for the assembly of the full and the HOSVD-compressed coupling matrix (rounded to the nearest higher second). For PWC, we used one quadrature integration point per triangle and voxel, while for PWL, two for each triangle and eight for each voxel. For the low resolution matrices, the assembly time for the compressed matrix was larger than the one for the full matrix by a factor $\leq q$, since in the HOSVD case the compressed matrix was assembled as $q$ sequential compressed matrix blocks. For the larger matrices, our server could not perform the assembly of the full matrix, due to the prohibitively large memory requirements, but it was able to assemble the compressed matrix using our proposed method. TABLE V: Time Footprint (hh:mm:ss) of $Z_{\rm bc}$ Assembly Voxel Res. | Assembly | PWC-coarse | PWC-fine | PWL-coarse | PWL-fine ---|---|---|---|---|--- $5$ mm3 | Full | 00:00:12 | 00:00:45 | 00:03:31 | 00:14:17 HOSVD | 00:00:34 | 00:02:33 | 00:32:27 | 02:21:21 $2$ mm3 | Full | 00:02:48 | N/A | 01:09:49 | N/A HOSVD | 00:06:16 | 00:33:28 | 07:15:45 | 30:32:42 $1$ mm3 | Full | N/A | N/A | N/A | N/A HOSVD | 00:27:59 | 02:31:52 | 52:17:11 | N/A TABLE VI and VII summarize the computation times for the matrix- and conjugate transpose matrix-vector products. Compared to the full form case (Full-CPU), the compressed matrix-vector product requires additional operations for the decompression of the tensors. While Algorithms 1 and 2 can reduce the memory requirements of the matrix-vector products, the time footprint varies based on how these algorithms are implemented. In particular, the nested loops over the $m$ RWG functions can be either parallelized on a CPU, if the RAM can support multiple tensor decompressions in parallel (HOSVD-CPU), or performed sequentially using a GPU (HOSVD-GPU). In our tests, we multiplied $Z_{\rm bc}$ with $X\in\mathbb{C}^{m\times 8}$ (TABLE VI) and $Z^{*}_{\rm bc}$ with $\Phi\in\mathbb{C}^{qn_{\rm v}\times 8}$ (TABLE VII), where both $X$ and $\Phi$ were random matrices to keep the results general. The eight columns of $X$ could correspond, for example, to the currents associated with the eight channels of the coil in Fig. 7. TABLE VI: Time Footprint (hh:mm:ss) of $Y=Z_{\rm bc}X$ Voxel Res. | Form | PWC-coarse | PWC-fine | PWL-coarse | PWL-fine ---|---|---|---|---|--- $5$ mm3 | Full-CPU | 00:00:01 | 00:00:02 | 00:00:02 | 00:00:06 HOSVD-CPU | 00:00:03 | 00:00:07 | 00:00:07 | 00:00:24 HOSVD-GPU | 00:00:03 | 00:00:11 | 00:00:10 | 00:00:44 $2$ mm3 | Full-CPU | 00:00:06 | N/A | 00:00:25 | N/A HOSVD-CPU | 00:00:25 | 00:01:46 | 00:01:37 | 00:06:24 HOSVD-GPU | 00:00:04 | 00:00:14 | 00:00:13 | 00:00:56 $1$ mm3 | Full-CPU | N/A | N/A | N/A | N/A HOSVD-CPU | 00:02:54 | 00:11:25 | 00:11:30 | N/A HOSVD-GPU | 00:00:13 | 00:00:53 | 00:00:52 | N/A TABLE VII: Time Footprint (hh:mm:ss) of $\Psi=Z^{*}_{\rm bc}\Phi$ Voxel Res. 
| Form | PWC-coarse | PWC-fine | PWL-coarse | PWL-fine ---|---|---|---|---|--- $5$ mm3 | Full-CPU | 00:00:01 | 00:00:02 | 00:00:02 | 00:00:06 HOSVD-CPU | 00:00:02 | 00:00:04 | 00:00:04 | 00:00:26 HOSVD-GPU | 00:00:02 | 00:00:07 | 00:00:05 | 00:00:21 $2$ mm3 | Full-CPU | 00:00:06 | N/A | 00:00:22 | N/A HOSVD-CPU | 00:00:13 | 00:00:45 | 00:00:49 | 00:03:25 HOSVD-GPU | 00:00:03 | 00:00:11 | 00:00:10 | 00:00:42 $1$ mm3 | Full-CPU | N/A | N/A | N/A | N/A HOSVD-CPU | 00:01:20 | 00:05:19 | 00:05:23 | N/A HOSVD-GPU | 00:00:11 | 00:00:45 | 00:00:41 | N/A For $5$ mm3 isotropic voxel resolution, the Full-CPU matrix-vector product was the fastest for all cases, because the coupling matrix is small. For $2$ and $1$ mm3 voxel resolution, the HOSVD-GPU implementation was the fastest. Note that the Full-CPU case could not be performed for high voxel and coil mesh resolutions, due to the excessive memory requirements. The HOSVD-CPU was slower than HOSVD-GPU, except for the $5$ mm3 voxel resolution. #### IV-B2 Body Coil Experiments For the second MRI experiment, we simulated the volume bodycoil of a commercial 3T MRI scanner [46, 47] and we loaded it with “Billie”, from the virtual family population [45] (Fig. 10). The frequency was set to $123$ MHz, corresponding to $3$ Tesla MRI. The coil has $32$ legs, a radius of $35.5$ cm, length of $45$ cm, and is centered at $\left(0,0,0\right)$. We also modeled the system conductive shield, which has a radius of $37.2$ cm, a length of $1.5$ m and is centered at $\left(0,0,0\right)$. The distance between the coil and the cuboid VIE domain enclosing “Billie” was $15.5$ cm and $24.5$ cm in the $x$ and $y$, respectively. In contrast with the previous case where the coil tightly fitted the head, here the coil is remote enough to allow a good compression of the coupling matrix $Z_{\rm bc}$ with ACA. For this experiment, we used PWC and PWL basis functions and three voxel resolutions ($5$, $2$, and $1$ mm3), which corresponded to $81\times 44\times 108$, $205\times 109\times 270$, and $409\times 219\times 541$ voxels for the VIE domain. For the coil and the shield we used $9450$ RWG basis functions. Two quadrature integration points were used for each triangle and eight for each voxel, both for PWC and PWL basis functions. Figure 10: Coil-body geometry. The RF coil and shield (discretized with $9450$ triangular element edges) was loaded with part of the voxelized realistic human body model “Billie” (discretized with voxels of $2$ mm isotropic resolution). ##### Matrix Assembly TABLE VIII, summarizes the memory requirements and the assembly time for the coupling matrix $Z_{\rm bc}$. The ACA tolerance was set to $1e-3$ to achieve good compression. The ACA rank of $Z_{\rm bc}$ was $250$ for the $5$ mm3 cases and $287$ for the $2$ and $1$ mm3 cases. The maximum Tucker rank of $U$ was between $15$ and $18$ for all cases. In the $5$ mm3 case, our results show that ACA could offer an excellent compression of the coupling matrix and the assembly could be rapidly performed in CPU. For $2$ mm3, ACA’s memory requirements were large and ACA was outperformed in speed by our proposed Algorithm 3 (Tucker-based ACA), for which the low memory footprint allowed using a GPU. For $1$ mm3 resolution, the standard ACA algorithm could not be performed even on a server equipped with hundreds of GB’s of RAM, due to overwhelming memory requirements. On the other hand, our proposed ACA extension in Algorithm 3 kept the memory demand small, enabling for fast matrix assembly in GPU. 
Note that the full matrix assembly was only possible for $5$ mm3 voxel resolution and PWC basis functions. TABLE VIII: Memory Demand (GB) and Time Footprint (hh:mm:ss) of $Z_{\rm bc}$ Assembly Voxel Res. | Form | PWC | PWL ---|---|---|--- memory | time | memory | time $5$ mm3 | Full-CPU | 162.60 | 00:15:49 | 650.42 | N/A ACA-CPU | 4.33 | 00:01:53 | 17.24 | 00:06:39 Algorithm 3-GPU | 0.036 | 00:03:26 | 0.036 | 00:09:58 $2$ mm3 | Full-CPU | 2548 | N/A | 10195 | N/A ACA-CPU | 77.44 | 00:37:16 | 309.65 | 04:30:36 Algorithm 3-GPU | 0.041 | 00:26:57 | 0.042 | 01:28:53 $1$ mm3 | Full-CPU | 20471 | N/A | 81884 | N/A ACA-CPU | 621.75 | N/A | 2486 | N/A Algorithm 3-GPU | 0.041 | 03:35:36 | 0.042 | 15:20:41 For the coarser case of $5$ mm3 voxel resolution and PWC basis functions, the time footprint of Algorithm 3 for CPU (not shown in the table) was 00:17:50, which is $\sim 5$ times slower than for the GPU execution. ##### Matrix-Vector Product Performance The time footprints for the matrix-vector product between the compressed coupling matrix $Z_{\rm bc}$ and a random vector $\mathbf{x}\in\mathbb{C}^{m\times 1}$ are shown in TABLE IX. ACA-CPU corresponds to performing the product $U(V^{*}\mathbf{x})$ in CPU. For ACA+HOSVD-GPU, $Z_{\rm bc}$ was compressed with Algorithm 3, and the matrix- vector product was performed with Algorithm 1 in GPU. For $5$ mm3 voxel resolution, the efficiency is similar for both approaches. For $2$ mm3 voxel resolution, ACA+HOSVD-GPU outperformed ACA by a factor of $3$, because, due to its low memory demand, it could be executed on a GPU, whereas ACA could not. TABLE IX: Time Footprint (hh:mm:ss) of $\mathbf{y}=Z_{\rm bc}\mathbf{x}$ Voxel Res. | Form | PWC | PWL ---|---|---|--- $5$ mm3 | Full-CPU | 00:00:11 | N/A ACA-CPU | 00:00:01 | 00:00:01 ACA+HOSVD-GPU | 00:00:01 | 00:00:02 $2$ mm3 | Full-CPU | N/A | N/A ACA-CPU | 00:00:05 | 00:00:18 ACA+HOSVD-GPU | 00:00:02 | 00:00:06 $1$ mm3 | Full-CPU | N/A | N/A ACA-CPU | N/A | N/A ACA+HOSVD-GPU | 00:00:09 | 00:00:33 The relative error of $\mathbf{y}$ obtained with ACA+HOSVD-GPU relative to Full-CPU (ground truth) is shown on the right axis of Fig. 11 for the case of $5$ mm3 voxel resolution and PWC basis functions. The plot shows how the error changes as a function of the tolerance ($1e-3$, $\dots$, $1e-8$) used for ACA. In particular, the relative error remained approximately an order of magnitude higher than ACA’s tolerance. Fig. 11 also shows plots for the ACA rank and the maximum Tucker rank of $Z_{\rm bc}$ (values on the left axis). Both ranks increased as the tolerance of ACA was decreased. We expect similar results for the other cases, but we were unable to assemble the full coupling matrix, due to its vast memory footprint. Figure 11: (left axis) ACA rank and maximum Tucker rank (obtained with HOSVD) of $Z_{\rm bc}$. (right axis) Error in $\mathbf{y}=Z_{\rm bc}\mathbf{x}$ calculated with ACA+HOSVD-GPU relative to Full-CPU. ## V Discussion We showed that our new Tucker-based algorithms could effectively compress the coupling VSIE matrices in the case of fine voxel resolutions. Thanks to the achieved compression factors, the matrix-vector products can be performed in GPU’s, yielding faster execution. The proposed approach will be key for a VSIE-based in vivo implementation of GMT, for which reducing memory requirements and computation time are both critical factors. For cases in which the coil is placed at some distance from the imaging object, the coupling matrix is low-rank, thus can be compressed using ACA as $UV^{*}$. 
For coarse voxel resolutions, such compression strategy alone is effective and allows for a rapid CPU implementation of matrix-vector products. However, as the voxel grid of the VIE domain is refined, the memory demand of $U$ increases until the standard ACA becomes impractical for applications that need high accuracy, such as GMT. In fact, in order to keep using ACA, one would have to relax the tolerance, sacrificing accuracy. To avoid this, in this work we introduced an extension of ACA (Algorithm 3), for which the memory demand remains as a low as the memory required by one column of the coupling matrix for any tolerance. Furthermore, Algorithm 3 can be executed in GPU for rapid computations also in the case of fine voxel resolutions. Other memory-friendly techniques are available for the fast implementation of matrix-vector products in VSIE simulations: the magnetic resonance Green’s function (MRGF), the fast multipole method (FMM), and the precorrected FFT method (pFFT). The MRGF [11] is a model order reduction technique that can considerably accelerate the solutions of the VSIE system. However, the required computational time can be overwhelming when fine voxel resolutions and PWL basis functions are used. The FMM has been extensively used for the compression of the MoM matrix [48, 49, 50]. However, in our case, the FMM could lead to unnecessary overheads, because we are only interested in the compression of an off-diagonal block of the full MoM matrix (i.e., the coupling matrix), since the remaining blocks can be handled efficiently with other methods [13, 12, 10]. Finally, the pFFT method [8] could be used to project the discretized coil’s elements onto an extended VIE domain, where the Green’s function tensors are compressible with the Tucker decomposition (pFFT+Tucker) [10]. However, this approach would be effective only when the coil is close to the scatterer, like for the close-fitting coil in section IV.B.1 [51]. In fact, in such situation the extended VIE domain would be larger than the original one by only a few voxels in each direction, allowing the matrix-vector products to fit in a GPU. As a result, and also because it relies on the FFT, such pFFT+Tucker could be more efficient than our proposed method, although more complex to implement. An important aspect of the approach presented in this manuscript is that it can effectively compress the coupling matrix both when the coil is close to or far from the scatterer. Because of this, our method allows using GPUs to accelerate the matrix-vector products for most applications. For example, if the close-fitting coil geometry in Fig. 7 were integrated with a head gradient insert with a surrounding conductive shield at a certain distance, with our approach the coupling matrix would still be compressible and fit in the memory of a GPU. In fact, the interactions between the shield and the head would have lower Tucker ranks than the ones between the coil’s conductors and the head (see Section IV.A.1). On the other hand, the pFFT+Tucker method would no longer be efficient because it would require extending the VIE domain to fully include the shield. Even though the Green’s function tensors of the extended domain would still be compressible with Tucker decomposition, the unknowns that multiply such tensors element-wise would not be. In fact, their dimensions would have to significantly increase in order to perform the relevant FFT-based matrix-vector products. 
For the previous example, in order to fully exploit the highly parallel architecture of the GPU with a minimal number of matrix-vector products operations, one could use a hybrid method that combines pFFT and Algorithm 3. To do that, the near and far interactions between the VIE domain and the conducting surfaces would need to be separated. Then, the pFFT could be used to model the near interactions between the VIE domain and the coil as in [51], while the arising Green’s function operators could be compressed with the Tucker decomposition as in [10]. Finally, the remaining far interactions between the VIE domain and the coil could be modeled with a coupling matrix, which would be vastly compressed with Algorithm 3. Such hybrid method could enable us to rapidly execute the matrix-vector products in GPU for most cases, including complex coil-shield geometries, fine voxel resolutions, and PWL basis functions. The described hybrid method will be investigated in future work. The main limitation of our proposed method is that it requires a considerable amount of time for the assembly of the coupling matrix, when the matrix is not compressible with ACA. It is especially slow because each element of the coupling matrix is a 5D integral. We showed that this could be addressed by implementing matrix assembly and compression in parallel, but such approach is not always possible due to memory limitations (IV.B.1). For such cases, one could alternatively employ 3D cross-Tucker approximation algorithms [40, 18], which are less efficient than HOSVD for the tensor dimensions in this work, but do not suffer from memory constraints in case of large tensors. In fact, 3D cross-Tucker methods require only a small number of rows, columns, and fibers of the tensor they approximate, and they can be implemented with linear complexity (with respect to tensor’s linear size). In future work, we will explore the execution of multiple 3D cross-Tucker steps in parallel to avoid memory overflows when assembling a compressed coupling matrix in the case of extremely fine resolutions. ## VI Conclusion We presented a memory compression technique for the coupling matrix in VSIE systems. Our method enables one to form and store the coupling matrix even when its full form size is prohibitively large ($\sim 80$ TB). Specifically, in this work we were able to achieve a compression between $\sim 0.5$ (PWC) and $\sim 2$ (PWL) million times when simulating interactions between MRI coils and realistic body models with some distance between them, in the case of fine voxel resolutions of $1$ mm3. The error was around one order of magnitude higher than the tolerance employed for the algorithm. The stored, compressed matrices could be used multiple times without the need to repeat the assembly. For example, this would allow one to rapidly perform EM simulations for the same coil with different body geometries, as far as they are contained in the original computational domain. For most cases, our compression method enables fitting large coupling matrices in GPUs, resulting in rapid execution of the VSIE matrix-vector product (from $1$ to $56$ seconds for the studied scenarios). Finally, the proposed method could facilitate the implementation of VSIE-based GMT for in vivo mapping of tissue electrical properties at clinically-relevant voxel resolutions. ## References * [1] S. W. 
Anderson _et al._ , “Effect of disease progression on liver apparent diffusion coefficient values in a murine model of NASH at 11.7 Tesla MRI,” _Journal of Magnetic Resonance Imaging_ , vol. 33, no. 4, pp. 882–888, 2011. * [2] J. Jin and J. Chen, “On the SAR and field inhomogeneity of birdcage coils loaded with the human head,” _Magnetic resonance in medicine_ , vol. 38, no. 6, pp. 953–963, 1997. * [3] R. Lattanzi _et al._ , “Electrodynamic constraints on homogeneity and radiofrequency power deposition in multiple coil excitations,” _Magnetic resonance in medicine_ , vol. 61, no. 2, pp. 315–334, 2009. * [4] X. Zhang _et al._ , “From complex B1 mapping to local SAR estimation for human brain MR imaging using multi-channel transceiver coil at 7T,” _IEEE transactions on medical imaging_ , vol. 32, no. 6, pp. 1058–1067, 2013\. * [5] M. Cosottini _et al._ , “Short-term side-effects of brain MR examination at 7 T: a single-centre experience,” _European radiology_ , vol. 24, no. 8, pp. 1923–1928, 2014. * [6] A. Taflove and K. R. Umashankar, “Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section,” _Proceedings of the IEEE_ , vol. 77, no. 5, pp. 682–699, 1989. * [7] R. Lee and A. C. Cangellaris, “A study of discretization error in the finite element approximation of wave solutions,” _IEEE Transactions on Antennas and Propagation_ , vol. 40, no. 5, pp. 542–549, 1992. * [8] J. R. Phillips and J. K. White, “A precorrected-FFT method for electrostatic analysis of complicated 3-D structures,” _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ , vol. 16, no. 10, pp. 1059–1072, 1997. * [9] A. A. Tambova _et al._ , “On the generalization of directfn for singular integrals over quadrilateral patches,” _IEEE Transactions on Antennas and Propagation_ , vol. 66, no. 1, pp. 304–314, 2017. * [10] I. I. Giannakopoulos, M. S. Litsarev, and A. G. Polimeridis, “Memory footprint reduction for the fft-based volume integral equation method via tensor decompositions,” _IEEE Transactions on Antennas and Propagation_ , vol. 67, no. 12, pp. 7476–7486, 2019. * [11] J. F. Villena _et al._ , “Fast electromagnetic analysis of MRI transmit RF coils based on accelerated integral equation methods,” _IEEE Transactions on Biomedical Engineering_ , vol. 63, no. 11, pp. 2250–2261, 2016. * [12] A. G. Polimeridis _et al._ , “Stable FFT-JVIE solvers for fast analysis of highly inhomogeneous dielectric objects,” _Journal of Computational Physics_ , vol. 269, pp. 280–296, 2014. * [13] S. Rao, D. Wilton, and A. Glisson, “Electromagnetic scattering by surfaces of arbitrary shape,” _IEEE Transactions on antennas and propagation_ , vol. 30, no. 3, pp. 409–418, 1982. * [14] I. P. Georgakis _et al._ , “A fast volume integral equation solver with linear basis functions for the accurate computation of electromagnetic fields in MRI,” _IEEE Transactions on Antennas and Propagation, Early Access_ , 2020. * [15] J. E. Serrallés _et al._ , “Noninvasive Estimation of Electrical Properties from Magnetic Resonance Measurements via Global Maxwell Tomography and Match Regularization,” _IEEE Transactions on Biomedical Engineering_ , vol. 67, no. 1, pp. 3–15, 2019. * [16] I. Giannakopoulos _et al._ , “Magnetic-resonance-based electrical property mapping using Global Maxwell Tomography with an 8-channel head coil at 7 Tesla: a simulation study,” _IEEE Transactions on Biomedical Engineering_ , vol. 68, no. 1, pp. 236–246, 2021. * [17] L. R. 
Tucker, “Some mathematical notes on three-mode factor analysis,” _Psychometrika_ , vol. 31, no. 3, pp. 279–311, 1966. * [18] I. I. Giannakopoulos, M. S. Litsarev, and A. G. Polimeridis, “3D cross-Tucker approximation in FFT-based volume integral equation methods,” in _2018 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting_. IEEE, 2018, pp. 2507–2508. * [19] J. Zhang, Y. Han, and J. Jiang, “Tucker decomposition-based tensor learning for human action recognition,” _Multimedia Systems_ , vol. 22, no. 3, pp. 343–353, 2016. * [20] M. Wang _et al._ , “VoxCap: FFT-Accelerated and Tucker-Enhanced Capacitance Extraction Simulator for Voxelized Structures,” _arXiv preprint arXiv:2004.02609_ , 2020. * [21] C. Qian and A. C. Yucel, “On the Compression of Translation Operator Tensors in FMM-FFT-Accelerated SIE Simulators via Tensor Decompositions,” _arXiv preprint arXiv:2010.00520_ , 2020. * [22] I. V. Oseledets, “Tensor-train decomposition,” _SIAM Journal on Scientific Computing_ , vol. 33, no. 5, pp. 2295–2317, 2011. * [23] B. N. Khoromskij and I. Oseledets, “Quantics-TT collocation approximation of parameter-dependent and stochastic elliptic PDEs,” _Computational Methods in Applied Mathematics Comput. Methods Appl. Math._ , vol. 10, no. 4, pp. 376–394, 2010. * [24] L. Grasedyck, “Hierarchical singular value decomposition of tensors,” _SIAM Journal on Matrix Analysis and Applications_ , vol. 31, no. 4, pp. 2029–2054, 2010. * [25] E. Tyrtyshnikov, “Mosaic-skeleton approximations,” _Calcolo_ , vol. 33, no. 1-2, pp. 47–57, 1996. * [26] E. Tyrtyshnikov, “Mosaic ranks and skeletons,” in _International Workshop on Numerical Analysis and Its Applications_. Springer, 1996, pp. 505–516. * [27] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin, “A theory of pseudoskeleton approximations,” _Linear algebra and its applications_ , vol. 261, no. 1-3, pp. 1–21, 1997. * [28] S. Kurz, O. Rain, and S. Rjasanow, “The adaptive cross-approximation technique for the 3D boundary-element method,” _IEEE transactions on Magnetics_ , vol. 38, no. 2, pp. 421–424, 2002. * [29] M. Bebendorf and S. Rjasanow, “Adaptive low-rank approximation of collocation matrices,” _Computing_ , vol. 70, no. 1, pp. 1–24, 2003. * [30] R. F. Harrington, _Field computation by moment methods_. Wiley-IEEE Press, 1993. * [31] M. F. Catedra, E. Gago, and L. Nuno, “A numerical scheme to obtain the RCS of three-dimensional bodies of resonant size using the conjugate gradient method and the fast Fourier transform,” _IEEE transactions on antennas and propagation_ , vol. 37, no. 5, pp. 528–537, 1989. * [32] P. Zwamborn and P. M. Van Den Berg, “The three dimensional weak form of the conjugate gradient FFT method for solving scattering problems,” _IEEE Transactions on Microwave Theory and Techniques_ , vol. 40, no. 9, pp. 1757–1766, 1992. * [33] H. Gan and W. C. Chew, “A discrete BCG-FFT algorithm for solving 3D inhomogeneous scatterer problems,” _Journal of Electromagnetic Waves and Applications_ , vol. 9, no. 10, pp. 1339–1357, 1995. * [34] J. Jin _et al._ , “Computation of electromagnetic fields for high-frequency magnetic resonance imaging applications,” _Physics in Medicine & Biology_, vol. 41, no. 12, p. 2719, 1996. * [35] M. Van Beurden and S. Van Eijndhoven, “Well-posedness of domain integral equations for a dielectric object in homogeneous background,” _Journal of Engineering Mathematics_ , vol. 62, no. 3, pp. 289–302, 2008. * [36] J. 
Markkanen _et al._ , “Analysis of volume integral equation formulations for scattering by high-contrast penetrable objects,” _IEEE Transactions on Antennas and Propagation_ , vol. 60, no. 5, pp. 2367–2374, 2012. * [37] P. Yla-Oijala _et al._ , “Surface and volume integral equation methods for time-harmonic solutions of Maxwell’s equations,” _Progress In Electromagnetics Research_ , vol. 149, pp. 15–44, 2014. * [38] C.-T. Tai, _Dyadic Green functions in electromagnetic theory_. Institute of Electrical & Electronics Engineers (IEEE), 1994. * [39] L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” _SIAM journal on Matrix Analysis and Applications_ , vol. 21, no. 4, pp. 1253–1278, 2000. * [40] I. V. Oseledets, D. Savostianov, and E. E. Tyrtyshnikov, “Tucker dimensionality reduction of three-dimensional arrays in linear time,” _SIAM Journal on Matrix Analysis and Applications_ , vol. 30, no. 3, pp. 939–956, 2008. * [41] S. A. Goreinov and E. E. Tyrtyshnikov, “The maximal-volume concept in approximation by low-rank matrices,” _Contemporary Mathematics_ , vol. 280, pp. 47–52, 2001. * [42] W. Chai and D. Jiao, “Theoretical study on the rank of integral operators for broadband electromagnetic modeling from static to electrodynamic frequencies,” _IEEE Transactions on Components, Packaging and Manufacturing Technology_ , vol. 3, no. 12, pp. 2113–2126, 2013. * [43] M. A. Francavilla _et al._ , “Maxwell parallel imaging,” _arXiv preprint arXiv:2008.09042_ , 2020. * [44] K. Zhao, M. N. Vouvakis, and J.-F. Lee, “The adaptive cross approximation algorithm for accelerated method of moments computations of EMC problems,” _IEEE transactions on electromagnetic compatibility_ , vol. 47, no. 4, pp. 763–773, 2005. * [45] A. Christ _et al._ , “The Virtual Family-development of surface-based anatomical models of two adults and two children for dosimetric simulations,” _Physics in Medicine & Biology_, vol. 55, no. 2, p. N23, 2009\. * [46] SIEMENS Healthineers, “Magnetom skyra.” [Online]. Available: https://www.siemens-healthineers.com/magnetic-resonance-imaging/3t-mri-scanner/magnetom-skyra * [47] E. Milshteyn _et al._ , “Individualized sar calculations using computer vision-based mr segmentation and a fast electromagnetic solver,” _Magnetic Resonance in Medicine_ , vol. 85, no. 1, pp. 429–443, 2021. * [48] L. Greengard and V. Rokhlin, “A fast algorithm for particle simulations,” _Journal of computational physics_ , vol. 73, no. 2, pp. 325–348, 1987. * [49] R. Coifman, V. Rokhlin, and S. Wandzura, “The fast multipole method for the wave equation: A pedestrian prescription,” _IEEE Antennas and Propagation Magazine_ , vol. 35, no. 3, pp. 7–12, 1993. * [50] B. Shanker and H. Huang, “Accelerated Cartesian expansions–a fast method for computing of potentials of the form R- $\nu$ for all real $\nu$,” _Journal of Computational Physics_ , vol. 226, no. 1, pp. 732–753, 2007. * [51] G. G. Guryev _et al._ , “Fast field analysis for complex coils and metal implants in MARIE 2.0.” in _Proc. ISMRM_ , 2019, p. 1035.
# Gröbner-Shirshov bases for some Lie algebras111Supported by the NNSF of China (11171118), the Research Fund for the Doctoral Program of Higher Education of China (20114407110007), the NSF of Guangdong Province (S2011010003374) and the Program on International Cooperation and Innovation, Department of Education, Guangdong Province (2012gjhz0007). Yuqun Chen, Yu Li and Qingyan Tang School of Mathematical Sciences, South China Normal University Guangzhou 510631, P. R. China <EMAIL_ADDRESS> <EMAIL_ADDRESS> <EMAIL_ADDRESS> Abstract: We give Gröbner-Shirshov bases for Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ in [27] and Kukin Lie algebra $A_{P}$ in [32], where $P$ is a semigroup. As applications, we show that as $\mathbb{Z}$-module $\textbf{L}_{n}$ is free and a $\mathbb{Z}$-basis of $\textbf{L}_{n}$ is given. We give another proof of Kukin Theorem: if semigroup $P$ has the undecidable word problem then the Lie algebra $A_{P}$ has the same property. Key words: Gröbner-Shirshov basis, Lie algebra, Drinfeld-Kohno Lie algebra, word problem, semigroup. AMS 2000 Subject Classification: 17B01, 16S15, 3P10, 20M05, 03D15 ## 1 Introduction Gröbner bases and Gröbner-Shirshov bases were invented independently by A.I. Shirshov for ideals of free (commutative, anti-commutative) non-associative algebras [40, 41], free Lie algebras [39, 41] and implicitly free associative algebras [39, 41] (see also [2, 4]), by H. Hironaka [29] for ideals of the power series algebras (both formal and convergent), and by B. Buchberger [20] for ideals of the polynomial algebras. Gröbner bases and Gröbner-Shirshov bases theories have been proved to be very useful in different branches of mathematics, including commutative algebra and combinatorial algebra, see, for example, the books [1, 18, 21, 22, 24, 25], the papers [2, 3, 4, 7, 9, 10, 11, 12, 13, 20, 23, 30, 35], and the surveys [5, 6, 14, 15, 16, 17]. A.A. Markov [34], E. Post [37], A. Turing [42], P.S. Novikov [36] and W.W. Boone [19] constructed finitely presented semigroups and groups with the undecidable word problem. This result also follows from the Higman theorem [28] that any recursive presented group is embeddable into finitely presented group. A weak analogy of Higman theorem for Lie algebras was proved in [3] that was enough for existence of a finitely presented Lie algebra with the undecidable word problem. In [32], Kukin constructed the Lie algebra $A_{P}$ for a semigroup $P$ such that if $P$ has the undecidable word problem then $A_{P}$ has the same property. In this paper we find a Gröbner-Shirshov basis for $A_{P}$ and then give another proof of the above result. The Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ appears in [26, 31] as the holonomy Lie algebra of the complement of the union of the diagonals $z_{i}=z_{j},\ i<j$. The universal Knizhnik-Zamolodchikov connection takes values in this Lie algebra. In this paper, we give a Gröbner-Shirshov basis for the Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ over $\mathbb{Z}$ and a $\mathbb{Z}$-basis of $\textbf{L}_{n}$. As an application we get a simple proof that $\textbf{L}_{n}$ is an iterated semidirect product of free Lie algebras. We are very grateful to Professor L.A. Bokut for his guidance and useful discussions. ## 2 Composition-Diamond lemma for Lie algebras over a field For the completeness of the paper, we formulate the Composition-Diamond lemma for Lie algebras over a field in this section, see [6, 33, 38, 41] for details. 
Let $k$ be a filed, $I$ a well ordered index set, $X=\\{x_{i}|i\in I\\}$ a set, $X^{*}$ the free monoid generated by $X$ and $Lie(X)$ the free Lie algebra over $k$ generated by $X$. We order $X=\\{x_{i}|i\in I\\}$ by $x_{i}>x_{t}$ if $i>t$ for any $i,t\in I$. We use two linear orders on $X^{*}$: for any $u,v\in X^{*}$, (i) (lex order) $1\succ t$ if $t\neq 1$ and, by induction, if $u=x_{i}u_{i}$ and $v=x_{j}v_{j}$ then $u\succ v$ if and only if $x_{i}>x_{j}$, or $x_{i}=x_{j}$ and $u_{i}\succ v_{j}$; (ii) (deg-lex order) $u>v$ if and only if $deg(u)>deg(v)$, or $deg(u)=deg(v)$ and $u\succ v$, where $deg(u)$ is the length of $u$. We regard $Lie(X)$ as the Lie subalgebra of the free associative algebra $k\langle X\rangle$, which is generated by $X$ under the Lie bracket $[u,v]=uv-vu$. Given $f\in k\langle X\rangle$, denote $\bar{f}$ the leading word of $f$ with respect to the deg-lex order. $f$ is monic if the coefficient of $\bar{f}$ is 1. ###### Definition 2.1 An associative word $w\in X^{*}\setminus\\{1\\}$ is an associative Lyndon- Shirshov word (ALSW for short) if $(\forall u,v\in X^{*},u,v\neq 1)\ w=uv\Rightarrow w>vu.$ A non-associative word $(u)$ in $X$ is a non-associative Lyndon-Shirshov word (NLSW for short), denoted by $[u]$, if (i) $u$ is an ALSW; (ii) if $[u]=[(u_{1})(u_{2})]$ then both $(u_{1})$ and $(u_{2})$ are NLSW’s; (iii) if $[u]=[[[u_{11}][u_{12}]][u_{2}]]$ then $u_{12}\preceq u_{2}$. We denote the set of all ALSW’s and NLSW’s in $X$ by $ALSW(X)$ and $NLSW(X)$ respectively. For an ALSW $w$, there is a unique bracketing $[w]$ such that $[w]$ is NLSW: $[w]=[[u][v]]$ if $deg(w)>1$, where $v$ is the longest proper associative Lyndon-Shirshov end of $w$. Shirshov Lemma Suppose that $w=aub$, where $w,u\in ALSW(X)$. Then 1. (i) $[w]=[a[uc]d],$ where $b=cd$ and possibly $c=1$. 2. (ii) Represent $c$ in the form $c=c_{1}c_{2}\ldots c_{n},$ where $c_{1},\ldots,c_{n}\in ALSW(X)$ and $c_{1}\leq c_{2}\leq\ldots\leq c_{n}$. Replacing $[uc]$ by $[\ldots[[u][c_{1}]]\ldots[c_{n}]]$ we obtain the word $[w]_{u}=[a[\ldots[[[u][c_{1}]][c_{2}]]\ldots[c_{n}]]d]$ which is called the Shirshov special bracketing of $w$ relative to $u$. 3. (iii) $\overline{[w]}_{u}=w.$ ###### Definition 2.2 Let $S\subset Lie(X)$ with each $s\in S$ monic, $a,b\in{X^{*}}$ and $s\in S$. If $a\bar{s}b$ is an ALSW, then we call $[asb]_{\bar{s}}=[a\bar{s}b]_{\bar{s}}|_{[\bar{s}]\mapsto{s}}$ a special normal $S$-word, where $[a\bar{s}b]_{\bar{s}}$ is defined as in Shirshov Lemma. An $S$-word $(asb)$ is called a normal $S$-word if $\overline{(asb)}=a\overline{s}b$. Suppose that $f,\ g\in S$. Then, there are two kinds of compositions: 1. (i) If $w=\bar{f}=a\bar{g}b$ for some $a,b\in X^{*}$, then the polynomial $(f,g)_{w}=f-[agb]_{\bar{g}}$ is called the composition of inclusion of $f$ and $g$ with respect to $w$. 2. (ii) If $w$ is a word such that $w=\bar{f}b=a\bar{g}$ for some $a,b\in X^{*}$ with $deg(\bar{f})+deg(\bar{g})>deg(w)$, then the polynomial $(f,g)_{w}=[fb]_{\bar{f}}-[ag]_{\bar{g}}$ is called the composition of intersection of $f$ and $g$ with respect to $w$. In (i) and (ii), $w$ is called an ambiguity. Let $h$ be a Lie polynomial and $w\in X^{*}$. We shall say that $h$ is trivial modulo $(S,w)$, denoted by $h\equiv_{Lie}0\ mod(S,w)$, if $h=\sum_{i}\alpha_{i}(a_{i}s_{i}b_{i})$, where each $(a_{i}s_{i}b_{i})$ is a normal $S$-word and $a_{i}\bar{s_{i}}b_{i}<w$. The set $S$ is called a Gröbner-Shirshov basis in $Lie(X)$ if any composition in $S$ is trivial modulo $S$ and corresponding $w$. 
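As a small illustrative example (not taken from the sources above), let $X=\{x,y\}$ with $x>y$. The word $xxy$ is an ALSW, since its proper cyclic shifts satisfy $xxy\succ xyx$ and $xxy\succ yxx$, whereas $xyx$ is not an ALSW, because $xyx\prec xxy$. The longest proper associative Lyndon-Shirshov end of $xxy$ is $xy$, so its NLSW bracketing is $[xxy]=[x[xy]]$, that is, the Lie monomial $[x,[x,y]]$, in accordance with the bracketing rule recalled before Shirshov Lemma.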
###### Theorem 2.3 (Composition-Diamond lemma for Lie algebras over a field) Let $S\subset{Lie(X)}\subset{k\langle X\rangle}$ be nonempty set of monic Lie polynomials. Let $Id(S)$ be the ideal of $Lie(X)$ generated by $S$. Then the following statements are equivalent. 1. (i) $S$ is a Gröbner-Shirshov basis in $Lie(X)$. 2. (ii) $f\in{Id(S)}\Rightarrow{\bar{f}=a\bar{s}b}$ for some $s\in{S}$ and $a,b\in{X^{*}}$. 3. (iii) $Irr(S)=\\{[u]\in NLSW(X)\ |\ u\neq{a\bar{s}b},\ s\in{S},\ a,b\in{X^{*}}\\}$ is a linear basis for $Lie(X|S)=Lie(X)/Id(S)$. Remark: The above Composition-Diamond lemma is also valid if we replace the base field $k$ by an arbitrary commutative ring $K$ with identity. If this is the case, as $K$-module, $Lie(X|S)$ is free with a $K$-basis $Irr(S)$. ## 3 Kukin’s construction of a Lie algebra with unsolvable word problem Let $P=sgp\langle x,y|u_{i}=v_{i},\ i\in I\rangle$ be a semigroup. Consider the Lie algebra $A_{P}=Lie(x,\hat{x},y,\hat{y},z|S),$ where $S$ consists of the following relations: 1. (1) $[\hat{x}x]=0,\ [\hat{x}y]=0,\ [\hat{y}x]=0,\ [\hat{y}y]=0$, 2. (2) $[\hat{x}z]=-[zx],\ [\hat{y}z]=-[zy]$, 3. (3) $\lfloor zu_{i}\rfloor=\lfloor zv_{i}\rfloor,\ i\in I$. Here, $\lfloor zu\rfloor$ means the left normed bracketing. In this section, we give a Gröbner-Shirshov basis for Lie algebra $A_{P}$ and by using this result we give another proof for Kukin’s theorem, see Corollary 3.2. Let the order $\hat{x}>\hat{y}>z>x>y$ and $>$ the deg-lex order on $\\{\hat{x},\hat{y},x,y,z\\}^{*}$. Let $\rho$ be the congruence on $\\{x,y\\}^{*}$ generated by $\\{(u_{i},v_{i}),\ i\in I\\}$. Let 1. $(3)^{\prime}$ $\lfloor zu\rfloor=\lfloor zv\rfloor,\ (u,v)\in\rho$ with $u>v$. ###### Theorem 3.1 With the above notation, the set $S_{1}=\\{(1),(2),(3)^{\prime}\\}$ is a Gröbner-Shirshov basis in $Lie(\hat{x},\hat{y},x,y,z)$. Proof: For any $u\in\\{x,y\\}^{*}$, by induction on $|u|$, $\overline{\lfloor zu\rfloor}=zu$. All possible compositions in $S_{1}$ are intersection of (2) and $(3)^{\prime}$, and inclusion of $(3)^{\prime}$ and $(3)^{\prime}$. For $(2)\wedge(3)^{\prime},\ w=\hat{x}zu,\ (u,v)\in\rho,\ u>v,\ f=[\hat{x}z]+[zx],\ g=\lfloor zu\rfloor-\lfloor zv\rfloor$. We have $\displaystyle([\hat{x}z]+[zx],\lfloor zu\rfloor-\lfloor zv\rfloor)_{w}=[fu]_{\bar{f}}-[\hat{x}g]_{\bar{g}}$ $\displaystyle\equiv$ $\displaystyle\lfloor([\hat{x}z]+[zx])u\rfloor-[\hat{x}(\lfloor zu\rfloor-\lfloor zv\rfloor)]$ $\displaystyle\equiv$ $\displaystyle\lfloor zxu\rfloor+\lfloor\hat{x}zv\rfloor\equiv\lfloor zxu\rfloor-\lfloor zxv\rfloor\equiv 0\ \ mod(S_{1},w).$ For $(3)^{\prime}\wedge(3)^{\prime},\ w=zu_{1}=zu_{2}e,\ e\in\\{x,y\\}^{*},\ (u_{i},v_{i})\in\rho,\ u_{i}>v_{i},\ i=1,2$. We have $\displaystyle(\lfloor zu_{1}\rfloor-\lfloor zv_{1}\rfloor,\lfloor zu_{2}\rfloor-\lfloor zv_{2}\rfloor)_{w}\equiv(\lfloor zu_{1}\rfloor-\lfloor zv_{1}\rfloor)-\lfloor(\lfloor zu_{2}\rfloor-\lfloor zv_{2}\rfloor)e\rfloor$ $\displaystyle\equiv$ $\displaystyle\lfloor\lfloor zv_{2}\rfloor e\rfloor-\lfloor zv_{1}\rfloor\equiv\lfloor zv_{2}e\rfloor-\lfloor zv_{1}\rfloor\equiv 0\ \ mod(S_{1},w).$ Thus, the set $S_{1}=\\{(1),(2),(3)^{\prime}\\}$ is a Gröbner-Shirshov basis in $Lie(\hat{x},\hat{y},x,y,z)$. $\blacksquare$ ###### Corollary 3.2 (Kukin [32]) Let $u,v\in\\{x,y\\}^{*}$. Then $u=v\ \mbox{ in the semigroup}\ P\Leftrightarrow\lfloor zu\rfloor=\lfloor zv\rfloor\ \mbox{ in the Lie algebra }\ A_{P}.$ Proof: Suppose that $u=v\ \mbox{ in the semigroup}\ P$. 
Without loss of generality, we may assume that $u=au_{1}b,\ v=av_{1}b$ for some $a,b\in\\{x,y\\}^{*}$ and $(u_{1},v_{1})\in\rho$. For any $r\in\\{x,y\\}$, by the relations (1), we have $[\hat{x}r]=0$ and so $\lfloor zxc\rfloor=\lfloor[z\hat{x}]c\rfloor=[\lfloor zc\rfloor\hat{x}],\ \lfloor zyc\rfloor=[\lfloor zc\rfloor\hat{y}]$ for any $c\in\\{x,y\\}^{*}$. From this it follows that in $A_{P}$, $\lfloor zu\rfloor=\lfloor zau_{1}b\rfloor=\lfloor\lfloor zau_{1}\rfloor b\rfloor=\lfloor\lfloor zu_{1}\widehat{\overleftarrow{a}}\rfloor b\rfloor=\lfloor zu_{1}\widehat{\overleftarrow{a}}b\rfloor=\lfloor zv_{1}\widehat{\overleftarrow{a}}b\rfloor=\lfloor zav_{1}b\rfloor=\lfloor zv\rfloor$, where $\overleftarrow{x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}}=x_{i_{n}}x_{i_{n-1}}\cdots x_{i_{1}}$ and $\widehat{x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}}=\widehat{x_{i_{1}}}\widehat{x_{i_{2}}}\cdots\widehat{x_{i_{n}}}$, $x_{i_{j}}\in\\{x,y\\}$. Moreover, $(3)^{\prime}$ holds in $A_{P}$. Suppose that $\lfloor zu\rfloor=\lfloor zv\rfloor\ \mbox{ in the Lie algebra }\ A_{P}$. Then both $\lfloor zu\rfloor$ and $\lfloor zv\rfloor$ have the same normal form in $A_{P}$. Since $S_{1}$ is a Gröbner-Shirshov basis in $A_{P}$ by Theorem 3.1, both $\lfloor zu\rfloor$ and $\lfloor zv\rfloor$ can be reduced to the same normal form of the form $\lfloor zc\rfloor$ for some $c\in\\{x,y\\}^{*}$ only by the relations $(3)^{\prime}$. This implies that in $P$, $u=c=v$. $\blacksquare$ By the above corollary, if the semigroup $P$ has the undecidable word problem then so does the Lie algebra $A_{P}$. ## 4 Gröbner-Shirshov basis for the Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ In this section we give a Gröbner-Shirshov basis for the Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$. ###### Definition 4.1 ([27]) Let $n>2$ be an integer. The Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ over $\mathbb{Z}$ is defined by generators $t_{ij}=t_{ji}$ for distinct indices $1\leq i,\ j\leq n-1$, and relations $\displaystyle t_{ij}t_{kl}=0,$ $\displaystyle t_{ij}(t_{ik}+t_{jk})=0,$ where $i,\ j,\ k,\ l$ are distinct. Clearly, $\textbf{L}_{n}$ has a presentation $Lie_{\mathbb{Z}}(T|S)$, where $T=\\{t_{ij}|\ 1\leq i<j\leq n-1\\}$ and $S$ consists of the following relations $\displaystyle t_{ij}t_{kl}=\ 0,\ \ \ \ k<i<j,\ k<l,\ l\neq\ i,\ j$ (1) $\displaystyle t_{jk}t_{ij}+t_{ik}t_{ij}=\ 0,\ \ \ \ i<j<k$ (2) $\displaystyle t_{jk}t_{ik}-t_{ik}t_{ij}=\ 0,\ \ \ i<j<k$ (3) Now we order $T$: $t_{ij}<t_{kl}$ if either $i<k$ or $i=k$ and $j<l$. Let $<$ be the deg-lex order on $T^{*}$. ###### Theorem 4.2 Let $S=\\{(\ref{a}),(\ref{b}),(\ref{c})\\}$ be as before, $<$ the deg-lex order on $T^{*}$. Then $S$ is a Gröbner-Shirshov basis for $\textbf{L}_{n}$. Proof. We list all the possible ambiguities. Denote $(i)\wedge(j)$ the composition of the type $(i)$ and type $(j)$. 
For $(1)\wedge(n)$, $1\leq n\leq 3$, the possible ambiguities $w$’s are: $\displaystyle(1)\wedge(1)$ $\displaystyle t_{ij}t_{kl}t_{mr},\ (k<i<j,\ k<l,\ l\neq\ i,j,\ m<k<l,\ m<r,\ r\neq\ k,l),$ $\displaystyle(1)\wedge(2)$ $\displaystyle t_{ij}t_{kl}t_{mk},\ (\ k<i<j,\ k<l,\ l\neq\ i,j,\ m<k<l),$ $\displaystyle(1)\wedge(3)$ $\displaystyle t_{ij}t_{kl}t_{ml},\ (k<i<j,\ k<l,\ l\neq\ i,j,\ m<k<l).$ For $(2)\wedge(n)$, $1\leq n\leq 3$, the possible ambiguities $w$’s are: $\displaystyle(2)\wedge(1)$ $\displaystyle t_{jk}t_{ij}t_{mr},\ (\ m<i<j<k,\ m<r,\ r\neq\ i,j),$ $\displaystyle(2)\wedge(2)$ $\displaystyle t_{jk}t_{ij}t_{mi},\ (\ m<i<j<k),$ $\displaystyle(2)\wedge(3)$ $\displaystyle t_{jk}t_{ij}t_{mj},\ (\ m<i<j<k).$ For $(3)\wedge(n)$, $1\leq n\leq 3$, the possible ambiguities $w$’s are: $\displaystyle(3)\wedge(1)$ $\displaystyle t_{jk}t_{ik}t_{mr},\ (\ m<i<j<k,\ m<r,\ r\neq\ i,k),$ $\displaystyle(3)\wedge(2)$ $\displaystyle t_{jk}t_{ik}t_{mi},\ (\ m<i<j<k),$ $\displaystyle(3)\wedge(3)$ $\displaystyle t_{jk}t_{ik}t_{mk},\ (\ m<i<j<k).$ We claim that all compositions are trivial relative to $S$. Here, we only prove cases $(1)\wedge(1)$, $(1)\wedge(2)$, $(2)\wedge(1)$, $(2)\wedge(2)$, and the other cases can be proved similarly. For $(1)\wedge(1)$, let $f=t_{ij}t_{kl},\ g=t_{kl}t_{mr},\ k<i<j,\ k<l,\ l\neq\ i,j,\ m<k<l,\ m<r,\ r\neq\ k,l.$ Then $w=t_{ij}t_{kl}t_{mr}$ and $\displaystyle(f,g)_{w}$ $\displaystyle=$ $\displaystyle(t_{ij}t_{kl})t_{mr}-t_{ij}(t_{kl}t_{mr})$ $\displaystyle=$ $\displaystyle\ (t_{ij}t_{mr})t_{kl}\ mod(S,w).$ There are three subcases to consider: $r\neq\ i,j$, $r=i$, $r=j$. Subcase 1. If $r\neq\ i,j$, then $\displaystyle(t_{ij}t_{mr})t_{kl}\equiv\ 0\ mod(S,w).$ Subcase 2. If $r=i$, then $\displaystyle(t_{ij}t_{mr})t_{kl}$ $\displaystyle=$ $\displaystyle(t_{ij}t_{mi})t_{kl}$ $\displaystyle\equiv$ $\displaystyle\ t_{kl}(t_{mj}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ (t_{kl}t_{mj})t_{mi}+t_{mj}(t_{kl}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ 0\ mod(S,w).$ Subcase 3. If $r=j$, then $\displaystyle(t_{ij}t_{mr})t_{kl}$ $\displaystyle=$ $\displaystyle(t_{ij}t_{mj})t_{kl}$ $\displaystyle\equiv$ $\displaystyle\ -t_{kl}(t_{mj}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ -(t_{kl}t_{mj})t_{mi}-t_{mj}(t_{kl}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ 0\ mod(S,w).$ For $(1)\wedge(2)$, let $f=t_{ij}t_{kl},\ g=t_{kl}t_{mk}+t_{ml}t_{mk},\ k<i<j,\ k<l,\ l\neq\ i,j,\ m<k<l.\ $Then $w=t_{ij}t_{kl}t_{mk}$ and $\displaystyle(f,g)_{w}$ $\displaystyle=$ $\displaystyle(t_{ij}t_{kl})t_{mk}-t_{ij}(t_{kl}t_{mk}+t_{ml}t_{mk})$ $\displaystyle=$ $\displaystyle\ (t_{ij}t_{mk})t_{kl}-t_{ij}(t_{ml}t_{mk})$ $\displaystyle\equiv$ $\displaystyle\ -t_{ij}(t_{ml}t_{mk})$ $\displaystyle\equiv$ $\displaystyle\ -(t_{ij}t_{ml})t_{mk}-t_{ml}(t_{ij}t_{mk})$ $\displaystyle\equiv$ $\displaystyle\ 0\ mod(S,w).$ For $(2)\wedge(1)$, let $f=t_{jk}t_{ij}+t_{ik}t_{ij},\ g=t_{ij}t_{mr},\ m<i<j<k,\ \ m<r,\ r\neq\ i,j.\ $Then $w=t_{jk}t_{ij}t_{mr}$ and $\displaystyle(f,g)_{w}$ $\displaystyle=$ $\displaystyle(t_{jk}t_{ij}+t_{ik}t_{ij})t_{mr}-t_{jk}(t_{ij}t_{mr})$ $\displaystyle\equiv$ $\displaystyle\ (t_{jk}t_{mr})t_{ij}+(t_{ik}t_{mr})t_{ij}\ mod(S,w).$ There are two subcases to consider: $r\neq\ k$, $r=k$. Subcase 1. If $r\neq\ k$, then $\displaystyle(t_{jk}t_{mr})t_{ij}+(t_{ik}t_{mr})t_{ij}\equiv\ 0\ mod(S,w).$ Subcase 2. 
If $r=k$, then $\displaystyle(t_{jk}t_{mr})t_{ij}+(t_{ik}t_{mr})t_{ij}$ $\displaystyle=$ $\displaystyle(t_{jk}t_{mk})t_{ij}+(t_{ik}t_{mk})t_{ij}$ $\displaystyle\equiv$ $\displaystyle\ -t_{ij}(t_{mk}t_{mj})-t_{ij}(t_{mk}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ (t_{ij}t_{mj})t_{mk}+(t_{ij}t_{mi})t_{mk}$ $\displaystyle\equiv$ $\displaystyle\ -t_{mk}(t_{mj}t_{mi})+t_{mk}(t_{mj}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ 0\ mod(S,w).$ For $(2)\wedge(2)$, let $f=t_{jk}t_{ij}+t_{ik}t_{ij},\ g=t_{ij}t_{mi}+t_{mj}t_{mi},\ m<i<j<k.\ $Then $w=t_{jk}t_{ij}t_{mi}$ and $\displaystyle(f,g)_{w}$ $\displaystyle=$ $\displaystyle(t_{jk}t_{ij}+t_{ik}t_{ij})t_{mi}-t_{jk}(t_{ij}t_{mi}+t_{mj}t_{mi})$ $\displaystyle=$ $\displaystyle\ (t_{jk}t_{mi})t_{ij}+(t_{ik}t_{mi})t_{ij}+t_{ik}(t_{ij}t_{mi})-t_{jk}(t_{mj}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ t_{ij}(t_{mk}t_{mi})-t_{ik}(t_{mj}t_{mi})-t_{jk}(t_{mj}t_{mi})$ $\displaystyle\equiv$ $\displaystyle\ -(t_{ij}t_{mi})t_{mk}+(t_{ik}t_{mi})t_{mj}-(t_{jk}t_{mj})t_{mi}$ $\displaystyle\equiv$ $\displaystyle\ -t_{mk}(t_{mj}t_{mi})-(t_{mk}t_{mi})t_{mj}+(t_{mk}t_{mj})t_{mi}$ $\displaystyle\equiv$ $\displaystyle\ 0\ mod(S,w).$ So $S$ is a Gröbner-Shirshov basis for $\textbf{L}_{n}$. $\blacksquare$ Let $L$ be a Lie algebra over a commutative ring $K$, $L_{1}$ an ideal of $L$ and $L_{2}$ a subalgebra of $L$. We call $L$ a semidirect product of $L_{1}$ and $L_{2}$ if $L=L_{1}\oplus L_{2}$ as $K$-modules. By Theorems 2.3 and 4.2, we have immediately the following corollaries. ###### Corollary 4.3 The Drinfeld-Kohno Lie algebra $\textbf{L}_{n}$ is a free $\mathbb{Z}$-module with a $\mathbb{Z}$-basis $Irr(S)=\\{[t_{ik_{1}}t_{ik_{2}}\cdots\ t_{ik_{m}}]\ |\ t_{ik_{1}}t_{ik_{2}}\cdots\ t_{ik_{m}}\ is\ an\mbox{ ALSW }in\ T^{*},\ m\in\mathbb{N}\\}.$ ###### Corollary 4.4 ([27]) $\textbf{L}_{n}$ is an iterated semidirect product of free Lie algebras. Proof. Let $A_{i}$ be the free Lie algebra generated by $\\{t_{ij}\ |\ i<j\leq\ n-1\\}$. Clearly, $\textbf{L}_{n}=A_{1}\oplus\ A_{2}\oplus\ \cdots\ \oplus\ A_{n-2}$ as $\mathbb{Z}$-modules, and from the relations $(1),(2),(3)$, we have $A_{i}\triangleleft\ A_{i}+A_{i+1}+\ \cdots\ +A_{n-2}.$ $\blacksquare$ Remark: In this section, if we replace the base ring $\mathbb{Z}$ by an arbitrary commutative ring $K$ with identity, then all results hold. ## References * [1] W.W. Adams, P. Loustaunau, An introduction to Gröbner bases, Graduate Studies in Mathematics, Vol. 3, American Mathematical Society (AMS), 1994. * [2] G.M. Bergman, The diamond lemma for ring theory, Adv. Math. 29 (1978) 178-218. * [3] L.A. Bokut, Insolvability of the word problem for Lie algebras and subalgebras of finitely presented Lie algebras, Izvestija AN USSR (mathem.) 36 (1972) 1173-1219. * [4] L.A. Bokut, Imbeddings into simple associative algebras, Algebra i Logika 15 (1976) 117-142. * [5] L.A. Bokut, Y.Q. Chen, Gröbner-Shirshov bases: Some new results, Proceedings of the Second International Congress in Algebra and Combinatorics, World Scientific, (2008) 35-56. * [6] L.A. Bokut, Y.Q. Chen, Gröbner-Shirshov bases and their calculation, arxiv.org/abs/1303.5366. * [7] L.A. Bokut, Y.Q. Chen, Y.S. Chen, Composition-Diamond lemma for tensor product of free algebras, _Journal of Algebra_ 323 (2010) 2520-2537. * [8] L.A. Bokut, Y.Q. Chen, Y.S. Chen, Groebner-Shirshov bases for Lie algebras over a commutative algebra, _Journal of Algebra_ 337 (2011) 82-102. * [9] L.A. Bokut, Y.Q. Chen, X.M. Deng, Gröbner-Shirshov bases for Rota-Baxter algebras, _Siberian Math. 
J._ 51 (6) (2010) 978-988. * [10] L.A. Bokut, Y.Q. Chen, Y. Li, Lyndon-Shirshov basis and anti-commutative algebras, _Journal of Algebra_ 378 (2013) 173-183. * [11] L.A. Bokut, Y.Q. Chen, C.H. Liu, Gröbner-Shirshov bases for dialgebras,_International Journal of Algebra and Computation_ 20 (3) (2010) 391-415. * [12] L.A. Bokut, Y.Q. Chen, Q.H. Mo, Gröbner-Shirshov bases and embeddings of algebras, _International Journal of Algebra and Computation_ 20 (2010) 875-900. * [13] L.A. Bokut, Y.Q. Chen, Q.H. Mo, Gröbner-Shirshov bases for semirings, _Journal of Algebra_ 385 (2013) 47-63. * [14] L.A. Bokut, Y.Q. Chen, K.P. Shum, Some new results on Groebner-Shirshov bases, in: Proceedings of International Conference on Algebra 2010, Advances in Algebraic Structures, (2012) 53-102. * [15] L.A. Bokut, Y. Fong, W.-F. Ke, P.S. Kolesnikov, Gröbner and Gröbner-Shirshov bases in algebra and conformal algebras, Fundamental and Applied Mathematics 6 (3) (2000) 669-706. * [16] L.A. Bokut, P.S. Kolesnikov, Gröbner-Shirshov bases: from their incipiency to the present, J. Math. Sci. 116 (1) (2003) 2894-2916. * [17] L.A. Bokut, P.S. Kolesnikov, Gröbner-Shirshov bases, conformal algebras and pseudo-algebras, J. Math. Sci. 131 (5) (2005) 5962-6003. * [18] L.A. Bokut, G. Kukin, Algorithmic and Combinatorial algebra, Kluwer Academic Publ., Dordrecht, 1994 * [19] W.W. Boone, The word problem, Ann. Math. 70 (1959) 207-265. * [20] B. Buchberger, An algorithmical criteria for the solvability of algebraic systems of equations, Aequationes Math. 4 (1970) 374-383. * [21] B. Buchberger, G.E. Collins, R. Loos, R. Albrecht, Computer algebra, symbolic and algebraic computation, Computing Supplementum, Vol.4, New York: Springer-Verlag, 1982. * [22] B. Buchberger, Franz Winkler, Gröbner bases and applications, London Mathematical Society Lecture Note Series, Vol.251, Cambridge: Cambridge University Press, 1998. * [23] Y.S. Chen, Y.Q. Chen, Groebner-Shirshov bases for matabelian Lie algebras, _Journal of Algebra_ 358 (2012) 143-161. * [24] D.A. Cox, J. Little, D. O’Shea, Ideals, varieties and algorithms: An introduction to computational algebraic geometry and commutative algebra, Undergraduate Texts in Mathematics, New York: Springer-Verlag, 1992. * [25] D. Eisenbud, Commutative algebra with a view toward algebraic geometry, Graduate Texts in Math., Vol.150, Berlin and New York: Springer-Verlag, 1995. * [26] V.G. Drinfeld, Quasi-Hopf algebras, Algebra i Analiz 1 (1989) 114-148. * [27] P. Etingof, A. Henriques, J. Kamnitzer, E.M. Rains, The cohomology ring of the real locus of the moduli space of stable curves of genus 0 with marked points, Ann. Math. 171 (2010) 731-777. * [28] G. Higman, Subgroups of finitely presented groups. Proc. Royal Soc. London (Series A) 262 (1961) 455-475. * [29] H. Hironaka, Resolution of singularities of an algebraic variety over a field if characteristic zero, I, II, Ann. Math. 79 (1964) 109-203, 205-326. * [30] S.-J. Kang, K.-H. Lee, Gröbner-Shirshov bases for irreducible $sl_{n+1}$-modules, Journal of Algebra 232 (2000) 1-20. * [31] T. Kohno, Serie de Poincare Koszul associee aux groupes de tresses pures, Invent. Math. 82 (1985) 57-75. * [32] G.P. Kukin, On the word problem for Lie algebras, Sibirsk. Mat. Zh. 18 (1977) 1194-1197. * [33] Lyndon, R.C.: On Burnside’s problem I, Trans. Amer. Math. Soc. 77 (1954) 202-215. * [34] A.A. Markov, Impossibility of some algorithms in the theory of some associative system, Dokl. Akad. Nauk SSSR 55 (1947) 587-590. * [35] A.A. Mikhalev, A.A. 
Zolotykh, Standard Gröbner-Shirshov bases of free algebras over rings, I. Free associative algebras, _International Journal of Algebra and Computation_ 8 (6) (1998) 689-726. * [36] P.S. Novikov, On algorithmic undecidability of the word problem in the theory of groups, Trudy MKat. Inst. Steklov. 44 (1955) 1-144. * [37] E. Post, A variant of a recursively unsolvable problem, Bull. Amer. Math. Soc. 52 (1946) 264-268. * [38] A.I. Shirshov, On free Lie rings, Mat. Sb. 45 (2) (1958) 113-122. * [39] A.I. Shirshov, Some algorithmic problem for Lie algebras, Sibirsk. Mat. Zh. 3 (2) (1962) 292-296; English translation in SIGSAM Bull. 33 (1999) 3-6. * [40] A.I. Shirshov, Some algorithmic problem for $\varepsilon$-algebras, Sibirsk. Mat. Z. 3 (1962) 132-137. * [41] Selected works of A.I. Shirshov, Eds L.A. Bokut, V. Latyshev, I. Shestakov, E. Zelmanov, Trs M. Bremner, M. Kochetov, Birkhäuser, Basel, Boston, Berlin, 2009. * [42] A.M. Turing, The word problem in semi-groups with cancellation, Ann. Math. 52 (1950) 191-505.
# On the Multiway Principal Component Analysis∗

Jialin Ouyang and Ming Yuan

Department of Statistics, Columbia University

∗ This research was supported in part by NSF Grants DMS-2015285 and DMS-2052955.

###### Abstract

Multiway data are becoming more and more common. While there are many approaches to extending principal component analysis (PCA) from usual data matrices to multiway arrays, their conceptual differences from the usual PCA, and the methodological implications of such differences, remain largely unknown. This work aims to specifically address these questions. In particular, we clarify the subtle difference between PCA and singular value decomposition (SVD) for multiway data, and show that multiway principal components (PCs) can be estimated reliably in the absence of the eigengaps required by the usual PCA, and in general much more efficiently than the usual PCs. Furthermore, the sample multiway PCs are asymptotically independent and hence allow for separate and more accurate inferences about the population PCs. The practical merits of multiway PCA are further demonstrated through numerical examples, with both simulated and real data.

## 1 Introduction

More and more often in practice, we need to deal with data of rich and complex structures that are more appropriately organized as multiway arrays rather than the usual data matrices. Examples of such multiway data are ubiquitous in many fields such as chemometrics, economics, psychometrics, and signal processing among others (see, e.g., Kroonenberg, 2008; Anandkumar et al., 2014; Zhang and Xia, 2018; Chen et al., 2020a, b; Han et al., 2020; Xia et al., 2020; Bi et al., 2021; Chen et al., 2021; Han et al., 2022). In this paper, we investigate the methodological implications and statistical properties of principal component analysis (PCA) for this type of data and pinpoint the benefits and challenges of doing so.

PCA is among the most popular statistical methods for multivariate data analysis when data are organized as matrices. See, e.g., Anderson (1984); Jolliffe (2002). With each column vector of a data matrix as an observation, PCA seeks orthogonal linear transformations of these vectors into a new coordinate system so that the variance of each coordinate is maximized successively. It allows us to represent most of the variation in the data by a small number of coordinates and therefore can guide us in reducing the dimensionality. As such, PCA often serves as a critical first step to capture the essential features in a dataset for many downstream analyses and is widely used in many scientific and engineering fields.

Moving beyond matrices, for multiway data, each observation itself forms a matrix or more generally a multiway array. For example, when repeated measurements are made across different combinations of location and time, each observation can be more naturally organized as a matrix with each row corresponding to a certain location and each column a time point. To apply PCA to this type of data, it is tempting to neglect the multiway nature of the observations and treat each observation as a vector nonetheless, a practice often referred to as _stringing_. However, as observed in numerous practical applications, appropriately accounting for the additional structure when applying PCA can greatly enhance interpretability and improve efficiency. See, e.g., Kroonenberg (2008).
There is a long and illustrious history of developing suitable methods for such a purpose and it can be traced back at least to the pioneering work of Tucker, Harshman, and Carroll in the 1960s. Since then, numerous approaches have also been developed. Examples include Kroonenberg and De Leeuw (1980); De Lathauwer et al. (2000); Vasilescu and Terzopoulos (2002); Yang et al. (2004); Kong et al. (2005); Zhang and Zhou (2005); Lu et al. (2006, 2008); Li et al. (2010); Liu et al. (2017); Taguchi (2018) among many others. See, e.g., Lu et al. (2011); Cichocki et al. (2015) for recent surveys of existing techniques. Most of these developments are outside the mainstream statistics literature and often with a strong algorithmic flavor and exploratory data analysis focus. These approaches are intuitive and often yield more interpretable insights than naively applying PCA after stringing. However, their statistical underpinnings are largely unknown. The main goal of this article is to fill in this void. Indeed, as we shall demonstrate, a careful and rigorous statistical treatment allows for a better understanding of the operating characteristics of multiway PCA, leads to improved methodology, and reveals new opportunities and challenges in analyzing multiway data. More specifically, we focus on a simple and natural approach to multiway PCA: when seeking linear transformations that maximize the variance, we impose the additional constraint that they conform to the multiway structure of the data. Doing so not only allows for enhanced interpretability but also inherits many nice properties of the usual PCA. Just as the usual principal components (PCs) are the eigenvectors of the covariance matrix, the multiway PCs can be identified with certain eigenvectors of the covariance operator. To better understand the impact of multiway structure on our ability to recover and make inferences about the multiway PCs, we also investigate the properties of multiway PCA under a spiked covariance model. Statistical properties of the usual PCA are well understood in the classical setting where the sample size is large whereas the number of variables is small (see, e.g., Anderson, 1984). More and more often in today’s applications, however, the dimensionality can also be large. There are abundant theoretical results concerning the usual PCA in such a high- dimensional setting as well, especially in the context of the spiked covariance model. For example, Johnstone and Lu (2009) first demonstrated the critical role of dimensionality in PCA by showing that, with fixed signal strength, the sample PCA is consistent if and only if the number of variables is of a smaller order than the sample size. In another influential paper, Paul (2007) established the asymptotic distribution of sample PCs. Other related treatments include Baik and Silverstein (2006); Nadler (2008); Johnstone and Lu (2009); Jung and Marron (2009); Bai and Silverstein (2010); Lee et al. (2010); Benaych-Georges and Nadakuditi (2011); Bai and Yao (2012); Shen et al. (2013); Koltchinskii and Lounici (2014); Koltchinskii et al. (2017); Wang and Fan (2017); Koltchinskii et al. (2020) among numerous others. In a sense, our results naturally extend these earlier works to multiway PCA. However, the need to work with higher-order covariance operators rather than covariance matrices creates new and fundamental challenges and requires us to develop a different proof strategy and several new technical tools. 
More importantly, our analysis also reveals fundamental differences in behavior between the usual PCA and multiway PCA and inspires new methodological development for the latter. Firstly, we establish the rates of convergence for the sample multiway PCs under mild regularity conditions. These rates explain why it is essential that we account for the inherent data structure when applying PCA to multiway data, and why naively applying PCA after stringing could be problematic. Intuitively, multiway PCA uses fewer parameters than the usual PCA and therefore is easier to estimate. Our results make this precise: the estimation error of multiway PCs is determined by the dimension of each mode of the data array rather than the total number of entries, and therefore multiway PCs can be estimated accurately even if the latter far exceeds the sample size. But a more important observation is that how well a multiway PC can be estimated is determined by the corresponding eigenvalue of the covariance operator, and not by eigengaps as in the usual PCA. This somewhat surprising finding has far-reaching implications. In particular, it means that for multiway data the PCs can be estimated well even if their corresponding eigenvalues are not simple. Moreover, to facilitate making statistical inferences about multiway PCs, we derive asymptotic distributions of the sample multiway PCs. Our results again reveal unexpected but important distinctions between multiway PCA and the usual PCA. For example, the estimated multiway PCs are asymptotically independent of each other, and their asymptotic distribution is determined by their corresponding eigenvalues instead of eigengaps. Furthermore, we show that bias correction is important for the sample multiway PCs. Similar to the usual PCA, the sample multiway PCs can exhibit significant bias when the dimension (of each mode) is high. But there is also another source of bias that may arise due to the inherent ambiguity in ordering the PCs in the absence of eigengaps. Nonetheless, we show that both types of bias can be eliminated, enabling us to make inferences about and construct confidence intervals for the multiway PCs.

The rest of the paper is organized as follows. In Section 2, we introduce the notion of multiway PCA at the population level and describe how it works on a finite sample. Section 3 investigates the rates of convergence for the sample multiway PCs. In Section 4, we turn our attention to the asymptotic distribution of multiway PCA and show how to make valid inferences about the multiway PCs. The merits of multiway PCA and our proposed approaches are further demonstrated through numerical experiments, both simulated and real, in Section 5. We conclude with a summary in Section 6. Due to space limitations, all proofs are relegated to the supplementary material.

## 2 Multiway PCA

Multiway PCA can be viewed through the lens of the usual PCA with the additional multiway structure imposed on the PCs. Let $\mathscr{X}\in\mathbb{R}^{d_{1}\times\cdots\times d_{p}}$ be an order-$p$ random array. To simplify, we shall assume in what follows that $\mathscr{X}$ is centered, i.e., $\mathbb{E}\mathscr{X}=0$, unless otherwise indicated.
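As a purely illustrative convention (ours, not from the paper), a sample of $n$ such order-$p$ arrays can be stored as a single numpy array of shape $(n, d_1, \ldots, d_p)$, and centering amounts to subtracting the entrywise sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, d3 = 100, 6, 5, 4            # an order-3 example (p = 3)
X = rng.normal(size=(n, d1, d2, d3))    # X[i] is the i-th observation

X_centered = X - X.mean(axis=0)         # enforce the E[X] = 0 convention
print(np.allclose(X_centered.mean(axis=0), 0))   # True
```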
The idea behind PCA is to look for a linear transformation of $\mathscr{X}$ that maximizes the variance: $\max_{\mathscr{W}\in\mathbb{R}^{d_{1}\times\cdots\times d_{p}}:\|\mathscr{W}\|_{\rm F}=1}{\rm var}(\langle\mathscr{X},\mathscr{W}\rangle).$ (1) Here $\|\mathscr{W}\|_{\rm F}=\langle\mathscr{W},\mathscr{W}\rangle^{1/2}$ and $\langle\mathscr{X},\mathscr{W}\rangle=\sum_{j_{1}=1}^{d_{1}}\cdots\sum_{j_{p}=1}^{d_{p}}x_{j_{1},\ldots,j_{p}}w_{j_{1},\ldots,j_{p}}.$ Denote by $\mathscr{U}_{1}$ the solution to (1). The basic premise of multiway PCA is that $\mathscr{U}_{1}$ conforms to the multiway structure underlying $\mathscr{X}$ in that it is a rank-one tensor and can be expressed as $\mathscr{U}_{1}=\mathbf{u}_{1}^{(1)}\otimes\mathbf{u}_{1}^{(2)}\otimes\cdots\otimes\mathbf{u}_{1}^{(p)},$ (2) where $\mathbf{u}_{1}^{(q)}$ is a unit length vector in $\mathbb{R}^{d_{q}}$ and $\otimes$ stands for the outer product, i.e., the $(i_{1},\ldots,i_{p})$ entry of $\mathscr{U}_{1}$ is given by $\left[\mathscr{U}_{1}\right]_{i_{1},\ldots,i_{p}}=u_{1i_{1}}^{(1)}u_{1i_{2}}^{(2)}\cdots u_{1i_{p}}^{(p)}.$ In other words, $\mathscr{U}_{1}$ is also the solution to $\max_{\mathscr{W}\in\Theta}{\rm var}(\langle\mathscr{X},\mathscr{W}\rangle),$ (3) where $\Theta$ is the collection of all unit length rank-one tensors of conformable dimensions, i.e., $\Theta=\\{\mathscr{W}=\mathbf{w}^{(1)}\otimes\mathbf{w}^{(2)}\otimes\cdots\otimes\mathbf{w}^{(p)}:\mathbf{w}^{(q)}\in\mathbb{R}^{d_{q}},\|\mathbf{w}^{(q)}\|=1,\forall q=1,\ldots,p\\}.$ Even if the solution to (1) is not strictly rank-one as described by (2), imposing such a constraint when seeking a variance-maximizing transformation can nonetheless be desirable because of the enhanced interpretability: the additional rank-one constraint allows us to separate the effect along each mode, and helps address questions such as “who does what to whom and when” which are often central to multiway data analysis. See, e.g., Kroonenberg (2008) for further discussion and numerous motivating examples. Subsequent PCs can be defined successively: $\max_{\begin{subarray}{c}\mathscr{W}\in\mathbb{R}^{d_{1}\times\cdots\times d_{p}}:\|\mathscr{W}\|_{\rm F}=1\\\ \mathscr{W}\perp\mathscr{U}_{l},l=1,\ldots,k-1\end{subarray}}{\rm var}(\langle\mathscr{X},\mathscr{W}\rangle).$ (4) As before, we shall consider the case when the solution has rank one. A key requirement in defining PCs is that the $k$th PC is orthogonal to all other PCs, i.e., $\mathscr{W}\perp\mathscr{U}_{l}$. In the vector case, i.e., $p=1$, this simply means that $\langle\mathscr{W},\mathscr{U}_{l}\rangle=0$. In the multiway case, however, there are many different notions of orthogonality. See, e.g., Kolda (2001) for a detailed discussion on this subject. Each notion has its own subtleties and caveats that may have different statistical implications. In this work we shall focus on the notion of _complete orthogonality_: two rank-one tensors $\mathscr{W}_{1}=\mathbf{w}_{1}^{(1)}\otimes\mathbf{w}_{1}^{(2)}\otimes\cdots\otimes\mathbf{w}_{1}^{(p)}$ and $\mathscr{W}_{2}=\mathbf{w}_{2}^{(1)}\otimes\mathbf{w}_{2}^{(2)}\otimes\cdots\otimes\mathbf{w}_{2}^{(p)}$ are completely orthogonal if and only if $\langle\mathbf{w}_{1}^{(q)},\mathbf{w}_{2}^{(q)}\rangle=(\mathbf{w}_{1}^{(q)})^{\top}\mathbf{w}_{2}^{(q)}=0$ for all $q=1,\ldots,p$.
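The sketch below (an illustration we add, not the authors' code) builds a unit-norm rank-one tensor in $\Theta$ for $p=2$, evaluates the sample variance of $\langle\mathscr{X},\mathscr{W}\rangle$ over a simulated sample, and checks complete orthogonality mode by mode.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 5, 4, 200

def unit(v):
    return v / np.linalg.norm(v)

def rank_one(*vecs):
    """Outer product w^(1) ⊗ ... ⊗ w^(p) as a dense array."""
    out = vecs[0]
    for v in vecs[1:]:
        out = np.multiply.outer(out, v)
    return out

u1, u2 = unit(rng.normal(size=d1)), unit(rng.normal(size=d2))
W = rank_one(u1, u2)                       # a unit-norm rank-one tensor in Theta

X = rng.normal(size=(n, d1, d2))           # n centered order-2 observations
scores = np.tensordot(X, W, axes=([1, 2], [0, 1]))   # <X_i, W> for each i
print("sample variance of <X, W>:", scores.var())

# complete orthogonality: component-wise orthogonality in *every* mode
v1 = unit(rng.normal(size=d1)); v1 = unit(v1 - (v1 @ u1) * u1)   # force <v1, u1> = 0
v2 = unit(rng.normal(size=d2)); v2 = unit(v2 - (v2 @ u2) * u2)   # force <v2, u2> = 0
V = rank_one(v1, v2)
print("completely orthogonal:", np.isclose(v1 @ u1, 0) and np.isclose(v2 @ u2, 0))
```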
More specifically, the $k$th multiway PC, denoted by $\mathscr{U}_{k}$, solves $\max_{\mathscr{W}\in\Theta:\mathscr{W}\perp_{c}\mathscr{U}_{l},\forall l<k}{\rm var}(\langle\mathscr{X},\mathscr{W}\rangle),$ (5) where $\perp_{c}$ stands for complete orthogonality. As in the case of the usual PCA, multiway PCs can also be equivalently defined using the covariance matrix of ${\rm vec}(\mathscr{X})$. In fact, it is more convenient to think of a covariance operator when it comes to multiway data. More specifically, we shall view $\Sigma:={\rm cov}(\mathscr{X})=\mathbb{E}(\mathscr{X}\otimes\mathscr{X})$ as a $d_{1}\times d_{2}\times\cdots\times d_{p}\times d_{1}\times\cdots\times d_{p}$ array. Then for any $\mathscr{W}\in\Theta$, ${\rm var}(\langle\mathscr{X},\mathscr{W}\rangle)=\langle\Sigma,\mathscr{W}\otimes\mathscr{W}\rangle=\langle\Sigma,\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(p)}\otimes\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(p)}\rangle.$ Write $\lambda_{k}={\rm var}(\langle\mathscr{X},\mathscr{U}_{k}\rangle).$ Because of the symmetry of $\Sigma$, $\mathscr{U}_{1}\otimes\mathscr{U}_{1}=\operatorname{argmax}_{\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(2p)}:\|\mathbf{w}^{(q)}\|=1}\langle\Sigma,\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(2p)}\rangle$ so that $\lambda_{1}\mathscr{U}_{1}\otimes\mathscr{U}_{1}$ is also the best rank-one approximation to $\Sigma$ (see, e.g., Friedland, 2013). Similarly, $\mathscr{U}_{k}\otimes\mathscr{U}_{k}=\operatorname{argmax}_{\begin{subarray}{c}\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(2p)}:\|\mathbf{w}^{(q)}\|=1\\\ \mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(2p)}\perp_{c}\mathscr{U}_{l}\otimes\mathscr{U}_{l},\quad l<k\end{subarray}}\langle\Sigma,\mathbf{w}^{(1)}\otimes\cdots\otimes\mathbf{w}^{(2p)}\rangle$ In vector case, e.g. $p=1$, $\\{(\lambda_{k},\mathscr{U}_{k}):k\geq 1\\}$ are the eigenpairs of the covariance matrix $\Sigma$ and $\Sigma_{r}:=\sum_{k=1}^{r}\lambda_{k}\mathscr{U}_{k}\otimes\mathscr{U}_{k}$ is the best rank-$r$ approximation to $\Sigma$, i.e., $\Sigma_{r}=\operatorname{argmin}_{A\in\mathbb{R}^{d_{1}\times d_{1}}:{\rm rank}(A)\leq r}\|A-\Sigma\|.$ When $p>1$, this characterization becomes tenuous because the notion of best low-rank approximation becomes precarious. For matrices, best low-rank approximations can be identified with singular value decomposition thanks to the Eckart-Young theorem. Low-rank approximation to tensors is much more subtle and the best low-rank approximation may not exist in general. See, e.g., Hackbusch (2012). Nonetheless, by construction, $\Sigma_{r}$ is the so- called best rank-$r$ _greedy orthogonal approximation_ to $\Sigma$. See, e.g., Kolda (2001). In particular, when the multiway structure does manifest itself in a way such that the usual PCs are rank-one tensors, for example, the solution to (1) and (4) has rank one, then $\Sigma_{r}$ is the best low-rank approximation to $\Sigma$. Sample multiway PCs can also be defined in a similar fashion. Specifically, given a sample $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ of independent copies of $\mathscr{X}$, $\mathscr{U}_{k}$s can be estimated by maximizing the sample variances: $\widehat{\mathscr{U}}_{k}:=\operatorname{argmax}_{\mathscr{W}\in\Theta:\mathscr{W}\perp_{c}\mathscr{U}_{l},\forall l<k}\frac{1}{n}\sum_{i=1}^{n}\langle\mathscr{X}_{i},\mathscr{W}\rangle^{2}.$ (6) Let $\widehat{\Sigma}=\frac{1}{n}\sum_{i=1}^{n}\mathscr{X}_{i}\otimes\mathscr{X}_{i}$ be the sample covariance operator. 
Then $\widehat{\mathscr{U}}_{1}$ can be defined via the best rank-one approximation to $\widehat{\Sigma}$ $\widehat{\mathscr{U}}_{1}=\operatorname{argmax}_{\mathscr{W}\in\Theta}\langle\widehat{\Sigma},\mathscr{W}\otimes\mathscr{W}\rangle.$ And other PCs can also be equivalently defined as $\widehat{\mathscr{U}}_{k}=\operatorname{argmax}_{\mathscr{W}\in\Theta:\mathscr{W}\perp_{c}\widehat{\mathscr{U}}_{l},\forall l<k}\langle\widehat{\Sigma},\mathscr{W}\otimes\mathscr{W}\rangle.$ Note that $\widehat{\mathscr{U}}_{k}$ can also be identified with the best rank-one approximation to a deflated covariance operator: $\widehat{\mathscr{U}}_{k}=\operatorname{argmax}_{\mathscr{W}\in\Theta}\langle\check{\Sigma}_{k},\mathscr{W}\otimes\mathscr{W}\rangle,$ where $\check{\Sigma}_{k}=\widehat{\Sigma}\times_{1}\widehat{{\cal P}}_{k}^{(1)}\times_{2}\cdots\times_{p}\widehat{{\cal P}}_{k}^{(p)}\times_{p+1}\widehat{{\cal P}}_{k}^{(1)}\times_{p+2}\cdots\times_{2p}\widehat{{\cal P}}_{k}^{(p)}$ and $\widehat{{\cal P}}_{k}^{(1)}$ is the projection matrix of the linear subspace spanned by $\\{\widehat{\mathbf{u}}_{l}^{(q)}:1\leq l<k\\}$. Hereafter $\times_{q}$ represents the mode $q$ product between a tensor $\mathscr{T}\in\mathbb{R}^{d_{1}\times d_{2}\times\dots\times d_{k}}$ and a matrix $A\in\mathbb{R}^{m\times d_{q}}$ so that $\mathscr{T}\times_{q}A\in\mathbb{R}^{d_{1}\times\dots d_{q-1}\times m\times d_{q+1}\dots\times d_{k}}$ with elements $[\mathscr{T}\times_{q}A]_{i_{1}\dots i_{q-1}ji_{q+1}\dots i_{k}}=\sum_{i_{q}=1}^{d_{q}}\mathscr{T}_{i_{1}\dots i_{q}\dots i_{k}}A_{ji_{q}}.$ Computing the best rank-one approximation to a tensor is a classical problem in numerical linear algebra, and casting the sample multiway PCA as such allows us to take advantage of the many existing algorithms for doing so. In this work, we focus on the statistical properties of multiway PCA. Readers interested in further discussions about the computational aspect are referred to, e.g., Zhang and Golub (2001); Hackbusch (2012); Janzamin et al. (2019) and references therein. Similar to the usual PCs, multiway PCs can be used to construct low-rank approximations of the original data. However, there are also fundamental, albeit sometimes subtle, differences between the two types of PCA. The usual sample PCs coincide with the leading singular vectors of the data matrix after appropriate centering and therefore can be computed via singular value decomposition (SVD). In contrast, multiway PCA is, while closely related to, not equivalent to the best low-rank approximations of the original data array in general. More specifically, consider stacking the observations into a higher-order tensor $\mathbf{X}\in\mathbb{R}^{n\times d_{1}\times\cdots\times d_{p}}$ whose $i$th frontal slice is $\mathscr{X}_{i}$. In the case when $\mathscr{X}$ is a vector, i.e., $p=1$, $\mathbf{X}$ is a matrix and the sample PC, $\widehat{\mathscr{U}}_{k}$ as defined above, is its $k$th right singular vector. It is therefore tempting to do the same and estimate $\mathscr{U}_{k}$s by seeking the best orthogonal low-rank approximation to $\mathbf{X}$ directly: $\min_{\begin{subarray}{c}\mathscr{W}_{1},\ldots,\mathscr{W}_{r}\in\Theta,\mathbf{a}_{1},\ldots,\mathbf{a}_{r}\in\mathbb{R}^{n}\\\ \mathscr{W}_{l}\perp_{c}\mathscr{W}_{k},\mathbf{a}_{l}\perp\mathbf{a}_{k},\forall l\neq k\end{subarray}}\left\|\mathbf{X}-\left(\mathbf{a}_{1}\otimes\mathscr{W}_{1}+\cdots+\mathbf{a}_{r}\otimes\mathscr{W}_{r}\right)\right\|_{\rm F}$ (7) See, e.g., Harshman and Lundy (1984). 
This problem, often known as the tensor SVD problem, has attracted a lot of attention in recent years. See, e.g., Richard and Montanari (2014); Hopkins et al. (2015); Liu et al. (2017); Zhang and Xia (2018); Auddy and Yuan (2020). However, the sample multiway PCs are generally not the solution to (7). First of all, the difference between the best orthogonal rank-$r$ and rank-$(r-1)$ approximations to $\mathbf{X}$ is generally not a rank-one tensor and therefore cannot be associated with a multiway PC. See, e.g., Hackbusch (2012). To overcome this challenge, one may consider solving (7) in a greedy fashion, i.e, optimizing (7) over $\mathscr{W}_{k}$ and $\mathbf{a}_{k}$ only while fixing the other ones. In general, however, this still results in a different set of PCs because of the extra orthogonality constraint on $\mathbf{a}_{k}$s imposed by (7). As we shall see, this subtle distinction between multiway PCA and low-rank approximations to a data tensor not only means that a treatment different from that for the tensor SVD is needed for multiway PCA but also leads to different statistical behavior between the two. ## 3 Rates of Convergence A natural question one first asks is how well $\mathscr{U}_{k}$ and its components $\mathbf{u}_{k}^{(q)}$s can be estimated by their sample counterparts. We shall now turn our attention to this question and study the rate of convergence for the sample multiway PCs. On the one hand, we provide further justification for the superiority of multiway PCA to the usual PCA with stringing, in addition to enhanced interpretability. On the other hand, our investigation also leads to new insights into the operating characteristics of sample multiway PCA and its intriguing distinction from the usual PCA. To fix ideas, we shall consider the so-called spiked covariance model as a working model for our theoretical development. Suppose that a random array $\mathscr{X}\in\mathbb{R}^{d_{1}\times\cdots\times d_{p}}$ follows a linear factor model: $\mathscr{X}=\sum_{k=1}^{r}\sigma_{k}\theta_{k}\mathscr{U}_{k}+\sigma_{0}\mathscr{E},$ (8) where $(\theta_{1},\ldots,\theta_{r})^{\top}\sim N(0,I_{r})$ are the random factor loadings, $\mathscr{U}_{k}$s ($\in\Theta$) are unit length rank-one principal components such that $\mathscr{U}_{k}\perp_{c}\mathscr{U}_{l}$ for any $k\neq l$, and $\mathscr{E}$ is a noise tensor with independent $N(0,1)$ entries. It is worth pointing out that our results and arguments can be extended beyond normality and applied to general subgaussian distributions. We opt for the normality assumption for ease of presentation. Without loss of generality, we shall also assume that eigenvalues of the signal are nontrivial and sorted in non-increasing order, i.e., $\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}>0$. Note that we do not require $\sigma_{k}$s to be distinct. It is not hard to see that the covariance operator of the aforementioned $\mathscr{X}$ is given by $\Sigma=\sum_{k=1}^{r}\sigma_{k}^{2}\mathscr{U}_{k}\otimes\mathscr{U}_{k}+\sigma_{0}^{2}\mathscr{I},$ where $\mathscr{I}$ is the identity tensor, i.e., $\mathscr{I}_{j_{1}\ldots j_{p}j_{1}^{\prime}\ldots j_{p}^{\prime}}=1$ if $j_{q}=j_{q}^{\prime}$ for all $q=1,\ldots,p$ and $0$ otherwise. The spiked covariance model such as (8) is widely used as a working model to study PCA in the case of vector observations, i.e., $p=1$. See, e.g., Johnstone (2001) and Paul (2007). In this section, we shall establish the rates of convergence of the sample multiway PCs. 
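As an illustration (our sketch, not the paper's code), the following simulates data from model (8) with $p=2$ and $r=1$, and computes the leading sample multiway PC by alternating maximization of the sample variance of $\langle\mathscr{X}_i,\mathbf{a}\otimes\mathbf{b}\rangle$: with $\mathbf{b}$ fixed, the optimal $\mathbf{a}$ is the top eigenvector of $n^{-1}\sum_i(\mathscr{X}_i\mathbf{b})(\mathscr{X}_i\mathbf{b})^{\top}$, and symmetrically for $\mathbf{b}$. This alternating scheme is one standard way of computing a best rank-one direction; it is shown here only to fix ideas, not as the definitive algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2 = 500, 20, 15
sigma1, sigma0 = 3.0, 1.0

def unit(v):
    return v / np.linalg.norm(v)

u1, u2 = unit(rng.normal(size=d1)), unit(rng.normal(size=d2))   # true u^(1), u^(2)
theta = rng.normal(size=n)                                      # factor loadings
X = sigma1 * theta[:, None, None] * np.multiply.outer(u1, u2)[None] \
    + sigma0 * rng.normal(size=(n, d1, d2))                     # spiked model (8), r = 1

def leading_multiway_pc(X, n_iter=100, seed=0):
    """Alternating updates for argmax over unit a, b of (1/n) sum_i <X_i, a ⊗ b>^2."""
    r = np.random.default_rng(seed)
    a, b = unit(r.normal(size=X.shape[1])), unit(r.normal(size=X.shape[2]))
    for _ in range(n_iter):
        Xb = np.einsum('nij,j->ni', X, b)                       # X_i b, shape (n, d1)
        a = unit(np.linalg.eigh(Xb.T @ Xb / len(X))[1][:, -1])
        Xa = np.einsum('nij,i->nj', X, a)                       # X_i^T a, shape (n, d2)
        b = unit(np.linalg.eigh(Xa.T @ Xa / len(X))[1][:, -1])
    return a, b

a_hat, b_hat = leading_multiway_pc(X)
sin_angle = lambda u, v: np.sqrt(max(0.0, 1.0 - float(u @ v) ** 2))
print("sin angle, mode 1:", sin_angle(a_hat, u1))
print("sin angle, mode 2:", sin_angle(b_hat, u2))
```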
To this end, denote by $\angle(\mathbf{w}_{1},\mathbf{w}_{2})$ the angle between two vectors $\mathbf{w}_{1}$ and $\mathbf{w}_{2}$ taking values in $[0,\pi/2]$, and similarly for two arrays $\mathscr{W}_{1}$ and $\mathscr{W}_{2}$, $\angle(\mathscr{W}_{1},\mathscr{W}_{2})$ denotes the angle between their vectorizations ${\rm vec}(\mathscr{W}_{1})$ and ${\rm vec}(\mathscr{W}_{2})$. It is instructive to begin with the classical setting where the dimensionality $d_{1},\ldots,d_{p}$ as well as all other parameters, e.g. $\sigma_{0}$, $\sigma_{k}$s and $r$, are held fixed as the sample size $n$ diverges. Our first result shows that the sample PC $\widehat{\mathscr{U}}_{k}$ and its components $\widehat{\mathbf{u}}_{k}^{(q)}$s are root-$n$ consistent in this regime.

###### Theorem 3.1.

Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Assume that all parameters are fixed as the sample size $n$ increases. Let $\widehat{\mathscr{U}}_{k}=\widehat{\mathbf{u}}_{k}^{(1)}\otimes\cdots\otimes\widehat{\mathbf{u}}_{k}^{(p)}$ be the sample multiway PC as defined by (6). Then there exists a permutation $\pi$ over $[r]:=\\{1,\ldots,r\\}$ such that $\max_{1\leq q\leq p}\sin\angle(\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{u}_{\pi(k)}^{(q)})=O_{p}(n^{-1/2}),$ (9) for all $k\in[r]$, and hence $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})=O_{p}(n^{-1/2}),\qquad k=1,\ldots,r$ as $n\to\infty$.

The most notable difference between the above result and those for the usual PCA (e.g., Anderson, 1984) is the fact that the root-$n$ consistency of the sample multiway PCs does not require that the eigenvalues ($(\sigma_{k}^{2}+\sigma_{0}^{2})$s or equivalently $\sigma_{k}$s) of the covariance matrix be simple, i.e., $\sigma_{k}\neq\sigma_{k+1}$. Note that, without the multiway structural constraint, the usual PCs are uniquely defined, and hence can possibly be estimated, only if their corresponding eigenvalues are simple. As Theorem 3.1 indicates, such a restriction is not necessary for multiway PCA. For multiway PCA, each sample PC is root-$n$ consistent regardless of the other eigenvalues. It is also worth noting that, since we do not require the $\sigma_{k}$s to be distinct, there is no guarantee that $\widehat{\mathscr{U}}_{k}$ estimates $\mathscr{U}_{k}$. This is not a deficiency of multiway PCA, but rather a necessity due to the possible indeterminacy of the $k$th largest eigenvalue. In fact, if $\sigma_{k+1}<\sigma_{k}<\sigma_{k-1}$, then we can choose $\pi(k)=k$ in Theorem 3.1. In general, Theorem 3.1 shows that each of the sample PCs is necessarily a root-$n$ consistent estimate of one of the multiway PCs. To further understand the operating characteristics and merits of multiway PCA, we now consider the more general case and further highlight the role of dimensionality and signal-to-noise ratio. For brevity, in what follows, we shall assume that $\mathscr{X}$ is “nearly cubic” in that there exist constants $0<c_{1},c_{2}<\infty$ such that $c_{1}d\leq d_{1},\ldots,d_{p}\leq c_{2}d$ for some natural number $d$ which may diverge with $n$. General cases can be treated similarly but incur considerably more cumbersome notation and tedious derivation.

###### Theorem 3.2.
Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$. Let $\widehat{\mathscr{U}}_{k}=\widehat{\mathbf{u}}_{k}^{(1)}\otimes\cdots\otimes\widehat{\mathbf{u}}_{k}^{(p)}$ be the sample multiway PC as defined by (6). Suppose that $r\log r\leq c_{0}\min\\{n,d\\}\quad{\rm and}\quad\left(\frac{\sigma_{0}}{\sigma_{r}}+\frac{\sigma_{0}^{2}}{\sigma_{r}^{2}}\right)\cdot\max\left\\{\sqrt{\frac{d}{n}},{\frac{d}{n}}\right\\}\leq\frac{c_{0}}{\sqrt{r}},$ (10) for a sufficiently small constant $c_{0}>0$. Then there exist a constant $C>0$ and a permutation $\pi$ over $[r]$ such that $\max_{1\leq q\leq p}\sin\angle(\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{u}_{\pi(k)}^{(q)})\leq C\left(\frac{\sigma_{0}}{\sigma_{\pi(k)}}+\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}\right)\cdot\max\left\\{\sqrt{\frac{d}{n}},{\frac{d}{n}}\right\\},$ (11) for all $k\in[r]$, and hence $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})\leq C\left(\frac{\sigma_{0}}{\sigma_{\pi(k)}}+\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}\right)\cdot\max\left\\{\sqrt{\frac{d}{n}},{\frac{d}{n}}\right\\},\qquad k=1,\ldots,r,$ with probability tending to one as $n$ diverges.

Theorem 3.2 can be viewed as a generalization of Theorem 3.1. Its proof is rather involved and we shall briefly discuss some of the challenges and the main ideas for resolving them. The proof proceeds by induction over $k$. Special attention is needed to deal with the case when an eigenvalue is not simple or the eigengap is small. This creates difficulty in identifying which multiway PC a sample multiway PC estimates, or equivalently the permutation $\pi$. To this end, we shall define $\pi(1)=\operatorname{argmax}_{1\leq l\leq r}\left\\{\sigma_{l}^{2}\left|\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{1}^{(q)}\rangle\right|\right\\},$ and for $k>1$, $\pi(k):=\operatorname{argmax}_{l\notin\pi([k-1])}\left\\{\sigma_{l}^{2}\left|\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right|\right\\}.$ To remove the influence of eigengaps altogether, we need to carefully quantify the impact of the estimation errors of $\widehat{\mathscr{U}}_{1},\ldots,\widehat{\mathscr{U}}_{k-1}$ on the $k$th sample multiway PC. To this end, we shall derive bounds for both $\max_{1\leq q\leq p}\sin\angle(\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}),$ and $\max_{1\leq q\leq p}\max_{l\notin\pi([k])}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle,$ and leverage the fact that the latter can be much smaller than the former. When $d=O(n)$, the convergence rate given in Theorem 3.2 is $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})\leq C\left(\frac{\sigma_{0}}{\sigma_{\pi(k)}}+\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}\right)\cdot\sqrt{\frac{d}{n}};$ and when $d\gg n$, we have $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})\leq C\left(\frac{\sigma_{0}}{\sigma_{\pi(k)}}+\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}\right)\cdot\frac{d}{n}.$ In particular, $\widehat{\mathscr{U}}_{k}$ is consistent, i.e., $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})=o_{p}(1),$ whenever $\sigma_{\pi(k)}/\sigma_{0}\gg\max\\{d/n,(d/n)^{1/4}\\}$. Of particular interest here is the role of dimensionality. The rates of convergence given by Theorem 3.2 depend on the dimensionality through $d$ rather than the ambient dimension $D:=d_{1}d_{2}\cdots d_{p}$.
This is because multiway PCA restricts PCs to be rank-one tensors and therefore has fewer parameters. Such dimensionality reduction is especially important for multiway data. Consider, for example, the case when $\sigma_{0}$ and the $\sigma_{k}$s are fixed. Then, by virtue of the results from Johnstone and Lu (2009), direct application of the usual PCA after stringing necessarily leads to an inconsistent estimate of $\mathscr{U}_{k}$ whenever $D\gg n$. Yet, our result indicates that multiway PCA is consistent whenever $d\ll n$. To draw further comparisons with the usual PCA, we now focus on the case when $d\ll n$ and $r$, $\sigma_{0},\sigma_{1},\ldots,\sigma_{r}$ are fixed. As shown by Birnbaum et al. (2013), in this regime, the usual PCA (i.e., $p=1$) satisfies $\sin\angle(\widehat{\mathscr{U}}_{k},\mathscr{U}_{\pi(k)})\asymp\left(\frac{\sigma_{0}}{\sigma_{\pi(k)}}+\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}\right)\cdot\sqrt{\frac{d}{n}}+{1\over\sqrt{n}}\left(\sum_{k^{\prime}\neq k}{(\sigma_{0}+\sigma_{\pi(k)})(\sigma_{0}+\sigma_{\pi(k^{\prime})})\over\sigma_{\pi(k)}^{2}-\sigma_{\pi(k^{\prime})}^{2}}\right)$ with probability tending to one. Comparing the above rate with that from Theorem 3.2, it is clear that the difference between the two lies in the second term on the right-hand side. Its presence for the usual PCA dictates that there should be no ties among the $\sigma_{k}$s. Even if the $\sigma_{k}$s are all distinct, how well we can estimate a PC crucially depends on the gap between its corresponding eigenvalue and the other eigenvalues when $p=1$. In contrast, the bounds given by Theorem 3.2 are determined by $\sigma_{\pi(k)}$ alone and not the eigengap $\min\\{\sigma_{\pi(k)-1}^{2}-\sigma_{\pi(k)}^{2},\sigma_{\pi(k)}^{2}-\sigma_{\pi(k)+1}^{2}\\}$ as in the usual PCA case. It is also instructive to compare the convergence rate for multiway PCA from Theorem 3.2 with those for tensor SVD. Recall that $\mathbf{X}=\sum_{k=1}^{r}\sigma_{k}\Theta_{k}\otimes\mathscr{U}_{k}+\sigma_{0}\mathbf{E},$ where $\Theta_{k}=(\theta_{1k},\ldots,\theta_{nk})^{\top}$ is a vector containing the $n$ realizations of $\theta_{k}$ and $\mathbf{E}$ is an $n\times d_{1}\times\cdots\times d_{p}$ tensor whose $i$th frontal slice is $\mathscr{E}_{i}$. In contrast, the $\Theta_{k}$s are deterministic in a tensor SVD model. If the $\Theta_{k}$s are orthogonal to each other, then $\mathscr{U}_{k}$ can be estimated at the rate of $\sigma_{0}\|\mathbf{E}\|/(\sigma_{k}\|\Theta_{k}\|)$ which is of the order $(\sigma_{0}/\sigma_{k})\max\\{\sqrt{d/n},1\\}$. This is a direct consequence of the perturbation bounds from Auddy and Yuan (2020), and a similar bound was also derived by Richard and Montanari (2014) in the rank-one case, i.e., $r=1$. In our case, however, $\Theta_{k}$ and $\Theta_{l}$ are random and in general not orthogonal to each other. As a result, the rates we obtained are different in their dependence on the signal-to-noise ratio $\sigma_{k}/\sigma_{0}$. A similar phenomenon has also been observed for the usual PCA (see, e.g., Birnbaum et al., 2013).

## 4 Asymptotic Normality and Bias Correction

We now turn to the distributional properties of multiway PCA. This requires us to further delineate the role of bias in the sample PCs. It is known that the usual PCA is biased when the dimension ($D$) is large compared with the sample size. See, e.g., Koltchinskii and Lounici (2014); Koltchinskii et al. (2020) and the references therein.
The same phenomenon is observed for the sample multiway PCs and a non-negligible bias arises when the dimension of each mode ($d$) is large compared with the sample size. In addition, there is a more subtle source of bias for the sample multiway PCs due to the ambiguity in ordering the multiway PCs in the absence of eigengaps. As noted before, the lack of an eigengap means that the $k$th PC may not necessarily be estimated by the $k$th sample multiway PC. As a more concrete example, consider the case when $r=2$ and $\lambda_{1}=\lambda_{2}$. Then $\mathscr{U}_{1}$ can be estimated by either $\widehat{\mathscr{U}}_{1}$ or $\widehat{\mathscr{U}}_{2}$, and as Theorem 3.2 shows, the rate of convergence remains the same in both cases. But the asymptotic distribution may differ between the two scenarios: $\widehat{\mathscr{U}}_{2}$ is required to be orthogonal to $\widehat{\mathscr{U}}_{1}$ and estimating $\mathscr{U}_{1}$ by $\widehat{\mathscr{U}}_{2}$ may incur extra bias. In this section, we shall introduce ways to correct for both types of bias and establish the asymptotic normality of the bias-corrected sample PCs. As is customary in the literature, we shall assume that $r$ and $\sigma_{1},\ldots,\sigma_{r}$ are fixed for brevity. In light of the results from the previous section, the sample PCs are consistent if $d\ll n$ in this setting. We shall therefore focus on this regime in the current section.

### 4.1 When $d=o(\sqrt{n})$

When $d$ is not too large, the bias is solely due to the possibility of repeated eigenvalues and thus the ambiguity in the ordering of the PCs. Indeed, if the $\sigma_{k}$s are distinct, then there is no need for bias correction when $d=o(\sqrt{n})$ and all of our results in this subsection will hold for the sample multiway PCs. But in practice, we may not know or want to assume that the eigenvalues are simple. Fortunately, we can remove any possible bias fairly easily by a simple one-step update of the sample PCs. More specifically, we shall consider estimating $\mathbf{u}_{\pi(k)}^{(q)}$ by $\tilde{\mathbf{u}}_{k}^{(q)}$, the leading eigenvector of $\widehat{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)}).$ The additional step frees up the orthogonality constraints imposed on the $k$th sample multiway PC and therefore allows us to suppress any adverse influence of $\widehat{\mathscr{U}}_{1},\ldots,\widehat{\mathscr{U}}_{k-1}$. We now consider the asymptotic distribution of the bias-corrected sample PCs. We again start with the classical regime when all parameters are fixed as $n$ increases.

###### Theorem 4.1.

Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Assume that all parameters are fixed as the sample size $n$ increases. Let $\tilde{\mathbf{u}}_{1}^{(q)},\ldots,\tilde{\mathbf{u}}_{r}^{(q)}$ be defined as above.
Then there exists a permutation $\pi:[r]\to[r]$ such that $\displaystyle\sqrt{n}\left[{\rm vec}(\tilde{\mathbf{U}}^{(q)})-{\rm vec}(\mathbf{U}^{(q)}_{\pi})\right]$ $\displaystyle\overset{d}{\to}N\left(0,{\rm diag}\left(\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(1)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(1)}^{4}}\right){\cal P}_{\mathbf{u}_{\pi(1)}^{(q)}}^{\perp},\ldots,\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(r)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(r)}^{4}}\right){\cal P}_{\mathbf{u}_{\pi(r)}^{(q)}}^{\perp}\right)\right),$ as $n\to\infty$, where $\tilde{\mathbf{U}}^{(q)}=[\tilde{\mathbf{u}}_{1}^{(q)},\ldots,\tilde{\mathbf{u}}_{r}^{(q)}]$, $\mathbf{U}_{\pi}^{(q)}=[\mathbf{u}_{\pi(1)}^{(q)},\ldots,\mathbf{u}_{\pi(r)}^{(q)}]$ and ${\cal P}_{\mathbf{u}_{k}^{(q)}}^{\perp}=I_{d_{q}}-\mathbf{u}_{k}^{(q)}\otimes\mathbf{u}_{k}^{(q)}$. Theorem 4.1 indicates that $n\cdot{\rm var}\left(\tilde{\mathbf{u}}_{k}^{(q)}\right)\to{\sigma_{0}^{2}(\sigma_{\pi(k)}^{2}+\sigma_{0}^{2})\over\sigma_{\pi(k)}^{4}}\left(I-\mathbf{u}_{\pi(k)}^{(q)}\otimes\mathbf{u}_{\pi(k)}^{(q)}\right),$ and $n\cdot{\rm cov}\left(\tilde{\mathbf{u}}_{k}^{(q)},\tilde{\mathbf{u}}_{l}^{(q)}\right)\to 0.$ Namely, all estimates of the multiway PCs are asymptotically normal and independent of each other. Note also that the asymptotic distribution of $\tilde{\mathbf{u}}_{k}^{(q)}$ does not depend on other eigenvalues or PCs. In other words, it can be estimated to the same precision as if all other components $\mathscr{U}_{l}$, $l\neq k$ are known! This is to be contrasted with the usual PCA where the asymptotic distribution of $\mathbf{u}_{k}^{(q)}$ depends on all other eigenvectors and eigenvalues. More specifically, it is well known that in vector case, i.e., when $p=1$, under the additional assumption that $\sigma_{1}^{2},\ldots,\sigma_{r}^{2}$ are distinct, the sample PCs satisfy $\displaystyle n\cdot{\rm var}\left(\widehat{\mathbf{u}}_{k}^{(1)}\right)$ $\displaystyle\to\sum_{1\leq l\leq r,l\neq k}{(\sigma_{k}^{2}+\sigma_{0}^{2})(\sigma_{l}^{2}+\sigma_{0}^{2})\over(\sigma_{k}^{2}-\sigma_{l}^{2})^{2}}\mathbf{u}_{l}^{(1)}\otimes\mathbf{u}_{l}^{(1)}+{(\sigma_{k}^{2}+\sigma_{0}^{2})\sigma_{0}^{2}\over\sigma_{k}^{4}}\left(I-\sum_{1\leq l\leq r}\mathbf{u}_{l}^{(1)}\otimes\mathbf{u}_{l}^{(1)}\right)$ and for any $1\leq l\leq r$ and $l\neq k$, $n\cdot{\rm cov}\left(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{l}^{(1)}\right)\to-{(\sigma_{k}^{2}+\sigma_{0}^{2})(\sigma_{l}^{2}+\sigma_{0}^{2})\over(\sigma_{k}^{2}-\sigma_{l}^{2})^{2}}\cdot\mathbf{u}_{k}^{(1)}\otimes\mathbf{u}_{l}^{(1)}.$ See, e.g., Anderson (1984). It is clear that the sample PCs are always correlated with each other. Moreover, note that ${\sigma_{0}^{2}(\sigma_{k}^{2}+\sigma_{0}^{2})\over\sigma_{k}^{4}}\leq{(\sigma_{k}^{2}+\sigma_{0}^{2})(\sigma_{l}^{2}+\sigma_{0}^{2})\over(\sigma_{k}^{2}-\sigma_{l}^{2})^{2}}.$ and the strict inequality holds for any $k\neq l\leq r$. This suggests that the estimated multiway PCs have smaller variations than the usual PCs with the same set of eigenvalues. We now turn our attention to the more general case when the dimensionality and other parameters are allowed to diverge with $n$. Because the PCs now may have different dimensions for different sample sizes, it is more natural to consider their linear forms, e.g. $\langle\mathbf{u}^{(q)}_{k},\mathbf{v}\rangle$, for some fixed vector $\mathbf{v}\in\mathbb{R}^{d_{q}}$. 
If the dimensions are fixed, Theorem 4.1 immediately suggests that $\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle$ estimates $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$, and $\sqrt{n}\left(\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\to_{d}N\left(0,\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)\|{\cal P}_{\mathbf{u}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\|^{2}\right).$ The following result shows that this continues to hold as long as $d=o(\sqrt{n})$.

###### Theorem 4.2.

Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Assume that (10) holds and $d=o(\sqrt{n})$. Then there exists a permutation $\pi:[r]\to[r]$ such that $\sqrt{n}\left(\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\to_{d}N\left(0,\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)\|{\cal P}_{\mathbf{u}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\|^{2}\right)$ as $n\to\infty$. Theorem 4.2 shows that the same asymptotic behavior of $\tilde{\mathbf{u}}_{k}^{(q)}$ as in the fixed dimension case can be expected whenever $d=o(\sqrt{n})$.

### 4.2 When $d=o(n)$

For higher dimensions, the simple bias-correction described above is no longer sufficient, and a close inspection reveals that $\tilde{\mathbf{u}}_{k}^{(q)}$ still incurs a non-negligible bias when $d\gg\sqrt{n}$. Thankfully, both types of bias can be corrected with a sample-splitting approach similar in spirit to the scheme developed by Koltchinskii and Lounici (2014) for the usual PCA. Without loss of generality, assume that $n$ is an even number and we randomly split the $n$ observations into two halves: $(\mathscr{X}_{1},\ldots,\mathscr{X}_{n/2})$ and $(\mathscr{X}_{n/2+1},\ldots,\mathscr{X}_{n})$. Denote by $\widehat{\Sigma}^{[1]}$ and $\widehat{\Sigma}^{[2]}$ the sample covariance operators based on the two halves of the data, respectively. Similarly, we shall write $\widehat{\mathscr{U}}_{k}^{[h]}=\widehat{\mathbf{u}}_{k}^{(1),[h]}\otimes\cdots\otimes\widehat{\mathbf{u}}_{k}^{(p),[h]}$ for the $k$th sample PC based on the $h$th half ($h=1$ or $2$) of the data. However, as noted before, $\widehat{\mathscr{U}}_{k}^{[1]}$ and $\widehat{\mathscr{U}}_{k}^{[2]}$ may not estimate the same PC. To this end, we shall reorder the $\widehat{\mathscr{U}}_{k}^{[1]}$s and $\widehat{\mathscr{U}}_{k}^{[2]}$s with the $\widehat{\mathscr{U}}_{k}$s (i.e., the estimators derived from the entire dataset) as reference points. Specifically, without loss of generality, we assume that $\widehat{\mathscr{U}}_{k}^{[1]}=\operatorname{argmin}_{k\leq l\leq r}\sin\angle\left(\widehat{\mathscr{U}}_{l}^{[1]},\widehat{\mathscr{U}}_{k}\right).$ The same procedure is applied to relabel the $\widehat{\mathscr{U}}_{k}^{[2]}$s. Note also that the sign of a PC is irrelevant in that $\mathscr{U}_{k}$ and $-\mathscr{U}_{k}$ represent the same transformation. We shall therefore also assume hereafter, without loss of generality, that $\langle\widehat{\mathbf{u}}_{k}^{(q),[1]},\widehat{\mathbf{u}}_{k}^{(q),[2]}\rangle\geq 0$.
Recall that $\displaystyle\sigma_{k}^{2}\mathbf{u}_{k}^{(q)}{\mathbf{u}_{k}^{(q)}}^{\top}+\sigma_{0}^{2}I_{d_{q}}=\Sigma(\mathbf{u}_{k}^{(1)},\ldots,\mathbf{u}_{k}^{(q-1)},\cdot,\mathbf{u}_{k}^{(q+1)},\ldots,\mathbf{u}_{k}^{(p)},$ $\displaystyle\mathbf{u}_{k}^{(1)},\ldots,\mathbf{u}_{k}^{(q-1)},\cdot,\mathbf{u}_{k}^{(q+1)},\ldots,\mathbf{u}_{k}^{(p)}).$ We shall then update the sample PC using the above identity with $\Sigma$ and $\mathbf{u}_{k}^{(q)}$s estimated from separate halves. Denote by $\check{\mathbf{u}}_{k}^{(q),[1]}$ the leading eigenvector of $\displaystyle\widehat{\Sigma}^{[1]}$ $\displaystyle(\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]},$ $\displaystyle\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]}),$ and similarly $\check{\mathbf{u}}_{k}^{(q),[2]}$ the leading eigenvector of $\displaystyle\widehat{\Sigma}^{[2]}$ $\displaystyle(\widehat{\mathbf{u}}_{k}^{(1),[1]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[1]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[1]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[1]},$ $\displaystyle\widehat{\mathbf{u}}_{k}^{(1),[1]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[1]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[1]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[1]}).$ To avoid losing efficiency due to sample splitting, we consider a new estimate $\check{\mathscr{U}}_{k}=\check{\mathbf{u}}_{k}^{(1)}\otimes\cdots\otimes\check{\mathbf{u}}_{k}^{(p)}$ where $\check{\mathbf{u}}_{k}^{(q)}={\check{\mathbf{u}}_{k}^{(q),[1]}+\check{\mathbf{u}}_{k}^{(q),[2]}\over\left\|\check{\mathbf{u}}_{k}^{(q),[1]}+\check{\mathbf{u}}_{k}^{(q),[2]}\right\|}.$ The following theorem shows that we can construct an unbiased estimate of $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$ by appropriately rescaling $\langle\check{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle$, as long as $d=o(n^{2/3})$. ###### Theorem 4.3. Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Let $\check{\mathscr{U}}_{k}=\check{\mathbf{u}}_{k}^{(1)}\otimes\cdots\otimes\check{\mathbf{u}}_{k}^{(p)}$ be the estimated PC as defined above. Assume $r$ and $\sigma_{1},\dots,\sigma_{r}$ are fixed, and $d=o(n^{2/3})$. Then there exists a permutation $\pi:[r]\to[r]$ such that $\sqrt{n}\left((1+b_{k}^{(q)})\langle\check{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\to_{d}N\left(0,\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)\|{\cal P}_{\mathbf{u}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\|^{2}\right)$ as $n\to\infty$ where $b_{k}^{(q)}=\sqrt{1+\frac{d_{q}}{n}\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)}-1.$ (12) It is worth pointing out that when $d=o(n^{1/2})$, the bias correction factor described by (12) obeys $b_{k}^{(q)}=o(n^{-1/2})$ and therefore can be neglected. This agrees with our earlier observation and of course also suggests that sample-splitting is unnecessary if $d\ll n^{1/2}$. When $d\gg n^{1/2}$, bias correction becomes essential. In particular, Theorem 4.3 suggests that, as long as $d=o(n^{2/3})$, an explicit bias correction factor can be applied. 
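To make the preceding formulas concrete, the following is a minimal numerical sketch (in Python/NumPy, for the matrix case $p=2$) of the two basic operations involved: contracting the sample covariance along one mode so that the updated PC is obtained as a leading eigenvector, and evaluating the explicit correction factor $b_{k}^{(q)}$ of (12). This is a schematic illustration only; in particular, the values of $\sigma_{0}^{2}$ and $\sigma_{\pi(k)}^{2}$ are assumed to be supplied by the user, e.g., via the plug-in estimates discussed in Section 4.3 below.

```python
import numpy as np

def contracted_covariance(X, v, mode):
    """For matrix-valued data X of shape (n, d1, d2) and a unit vector v along the
    other mode, form the d_mode x d_mode matrix (1/n) sum_i (X_i v)(X_i v)^T, i.e.,
    the sample covariance contracted along the remaining mode."""
    if mode == 0:                            # v has length d2, result is d1 x d1
        Z = X @ v
    else:                                    # v has length d1, result is d2 x d2
        Z = np.einsum('nij,i->nj', X, v)
    return Z.T @ Z / X.shape[0]

def one_step_update(X, u_other, mode):
    """Leading eigenvector of the contracted sample covariance: the one-step
    update of a multiway PC along `mode`, given the current estimate `u_other`
    of the same PC along the other mode."""
    M = contracted_covariance(X, u_other, mode)
    _, V = np.linalg.eigh(M)                 # eigenvalues in ascending order
    return V[:, -1]

def bias_factor(sigma0_sq, sigmak_sq, d_q, n):
    """Explicit bias-correction factor b_k^(q) from (12)."""
    ratio = sigma0_sq / sigmak_sq + sigma0_sq**2 / sigmak_sq**2
    return np.sqrt(1.0 + d_q / n * ratio) - 1.0
```

With these pieces, the rescaled linear form $(1+b_{k}^{(q)})\langle\check{\mathbf{u}}_{k}^{(q)},\mathbf{v}\rangle$ appearing in Theorem 4.3 can be evaluated directly.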
For higher dimensions, it is unclear if a similar explicit expression exists for the debiasing factor. Nonetheless, we can derive a suitable bias correction factor for all $d\ll n$ via additional sample splitting. More specifically, we first randomly split the observations into two halves. The first half of the data is then further split into two equal-sized groups to compute the sample covariance operators $\widehat{\Sigma}^{[1][1]}$ and $\widehat{\Sigma}^{[1][2]}$; we then compute $\widehat{\mathbf{u}}_{k}^{(q),[1][1]}$ and $\widehat{\mathbf{u}}_{k}^{(q),[1][2]}$ as the leading eigenvectors of $\displaystyle\widehat{\Sigma}^{[1][1]}$ $\displaystyle(\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]},$ $\displaystyle\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]}),$ and $\displaystyle\widehat{\Sigma}^{[1][2]}$ $\displaystyle(\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]},$ $\displaystyle\widehat{\mathbf{u}}_{k}^{(1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(q-1),[2]},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1),[2]},\dots,\widehat{\mathbf{u}}_{k}^{(p),[2]}),$ respectively. Similarly, we use the second half of the data to compute the $\widehat{\mathbf{u}}_{k}^{(q),[2][1]}$s and $\widehat{\mathbf{u}}_{k}^{(q),[2][2]}$s. As before, we shall sort these estimates so that they are compatible in order and sign. Let $\widehat{b}_{k}^{(q)}=\frac{\left\|\check{\mathbf{u}}_{k}^{(q),[1]}+\check{\mathbf{u}}_{k}^{(q),[2]}\right\|}{\sqrt{\left\langle\widehat{\mathbf{u}}_{k}^{(q),[1][1]},\widehat{\mathbf{u}}_{k}^{(q),[1][2]}\right\rangle}+\sqrt{\left\langle\widehat{\mathbf{u}}_{k}^{(q),[2][1]},\widehat{\mathbf{u}}_{k}^{(q),[2][2]}\right\rangle}}-1.$ (13)

###### Theorem 4.4.

Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Let $\check{\mathscr{U}}_{k}=\check{\mathbf{u}}_{k}^{(1)}\otimes\cdots\otimes\check{\mathbf{u}}_{k}^{(p)}$ be the estimated PC as defined above. Assume $r$ and $\sigma_{1},\dots,\sigma_{r}$ are fixed, and $d=o(n)$. Then there exists a permutation $\pi:[r]\to[r]$ such that $\sqrt{n}\left((1+\widehat{b}_{k}^{(q)})\langle\check{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\to_{d}N\left(0,\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)\|{\cal P}_{\mathbf{u}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\|^{2}\right),$ as $n\to\infty$, where $\widehat{b}_{k}^{(q)}$ is given by (13). Moreover, $\displaystyle\widehat{b}_{k}^{(q)}=\sqrt{1+\frac{d_{q}}{n}\left(\frac{\sigma_{0}^{2}}{\sigma_{\pi(k)}^{2}}+\frac{\sigma_{0}^{4}}{\sigma_{\pi(k)}^{4}}\right)}-1+O_{p}\left(\frac{d^{3/2}}{n^{3/2}}\right)+o_{p}\left(\frac{1}{\sqrt{n}}\right).$ In light of Theorem 4.4, the double sample splitting approach can be employed to derive confidence intervals for linear forms of the multiway PCs as long as $d=o(n)$. This robustness, however, comes at the expense of increased computational cost and could incur a loss of efficiency in finite samples.
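As an illustration, once the four sub-sample estimates have been matched in order and sign, the data-driven factor $\widehat{b}_{k}^{(q)}$ of (13) reduces to a few lines. The sketch below (Python/NumPy, continuing the conventions of the previous sketch) takes the aligned unit vectors as given inputs rather than recomputing them from the data.

```python
import numpy as np

def data_driven_bias_factor(u_half1, u_half2, u_11, u_12, u_21, u_22):
    """Data-driven correction factor b-hat_k^(q) of (13).

    u_half1, u_half2 : the half-sample updates u-check_k^{(q),[1]} and u-check_k^{(q),[2]}
    u_11, u_12       : estimates from the two quarters of the first half
    u_21, u_22       : estimates from the two quarters of the second half
    All inputs are assumed to be unit vectors already matched in order and sign,
    so that the inner products below are nonnegative."""
    numer = np.linalg.norm(u_half1 + u_half2)
    denom = np.sqrt(np.dot(u_11, u_12)) + np.sqrt(np.dot(u_21, u_22))
    return numer / denom - 1.0
```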
In practice, one may still prefer the explicit bias correction as described by Theorem 4.3 if $d$ is not very large, or the one-step update if $d$ is small.

### 4.3 Inference about multiway PCs

The asymptotic normality we showed earlier in the section forms the basis for making inferences about linear forms $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$. In particular, one of the most interesting and also simplest examples of a linear form of a PC is a coordinate, obtained by taking $\mathbf{v}$ to be a column vector of the identity matrix. To derive confidence intervals for, or test hypotheses about, $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$, however, we also need to estimate its variance. Specifically, its asymptotic distribution depends only on $\sigma_{0}$, $\sigma_{\pi(k)}$, and $\mathbf{u}_{\pi(k)}^{(q)}$, all of which can be consistently estimated by their sample counterparts. Let $\widehat{\sigma}_{0}^{2}={1\over\prod_{q=1}^{p}(d_{q}-r)}\sum_{1\leq i_{q}\leq d_{q},1\leq q\leq p}[\check{\Sigma}_{r+1}]_{i_{1}\cdots i_{p}i_{1}\cdots i_{p}}$ and $\widehat{\sigma}_{\pi(k)}^{2}=\langle\widehat{\Sigma},\widehat{\mathscr{U}}_{k}\otimes\widehat{\mathscr{U}}_{k}\rangle-\widehat{\sigma}_{0}^{2}.$ The following theorem suggests that the asymptotic normality remains valid if we replace the variance of the linear form $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$ with these estimates:

###### Theorem 4.5.

Let $\mathscr{X}_{1},\ldots,\mathscr{X}_{n}$ be independent observations following the spiked covariance model (8) with $p>1$ such that $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\cdots\otimes\mathbf{u}_{k}^{(p)}$ and $\sigma_{k}>0$. Assume $r$ and $\sigma_{1},\dots,\sigma_{r}$ are fixed. There exists a permutation $\pi:[r]\to[r]$ such that

* (a) If $d=o(\sqrt{n})$, then ${\sqrt{n}\left(\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\over\sqrt{\frac{\widehat{\sigma}_{0}^{2}}{\widehat{\sigma}_{\pi(k)}^{2}}+\frac{\widehat{\sigma}_{0}^{4}}{\widehat{\sigma}_{\pi(k)}^{4}}}\ \big{\|}{\cal P}_{\tilde{\mathbf{u}}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\big{\|}}\to_{d}N\left(0,1\right),$
* (b) If $d=o(n^{2/3})$, then ${\sqrt{n}\left((1+b_{k}^{(q)})\langle\check{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\over\sqrt{\frac{\widehat{\sigma}_{0}^{2}}{\widehat{\sigma}_{\pi(k)}^{2}}+\frac{\widehat{\sigma}_{0}^{4}}{\widehat{\sigma}_{\pi(k)}^{4}}}\ \big{\|}{\cal P}_{\check{\mathbf{u}}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\big{\|}}\to_{d}N\left(0,1\right),$ where $b_{k}^{(q)}$ is given by (12).
* (c) If $d=o(n)$, then ${\sqrt{n}\left((1+\widehat{b}_{k}^{(q)})\langle\check{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle-\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\right)\over\sqrt{\frac{\widehat{\sigma}_{0}^{2}}{\widehat{\sigma}_{\pi(k)}^{2}}+\frac{\widehat{\sigma}_{0}^{4}}{\widehat{\sigma}_{\pi(k)}^{4}}}\ \big{\|}{\cal P}_{\check{\mathbf{u}}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\big{\|}}\to_{d}N\left(0,1\right),$ where $\widehat{b}_{k}^{(q)}$ is given by (13).

Theorem 4.5 is an immediate consequence of Slutsky’s Theorem and Theorems 4.2-4.4. It allows us to make inferences about, or construct confidence intervals for, $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$. Consider, for example, testing the hypothesis $H_{0}:\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle=0\qquad\text{vs.}\qquad H_{a}:\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle\neq 0,$ when $d=o(\sqrt{n})$.
We can proceed to reject $H_{0}$ if and only if $\left|\sqrt{n}\,\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle\right|\geq z_{\alpha/2}\sqrt{\frac{\widehat{\sigma}_{0}^{2}}{\widehat{\sigma}_{\pi(k)}^{2}}+\frac{\widehat{\sigma}_{0}^{4}}{\widehat{\sigma}_{\pi(k)}^{4}}}\big{\|}{\cal P}_{\tilde{\mathbf{u}}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\big{\|},$ where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution; note that the centering term $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$ vanishes under $H_{0}$. Theorem 4.5 guarantees that this is an asymptotically level-$\alpha$ test. Similarly, we can also construct a $(1-\alpha)$ confidence interval for $\langle\mathbf{u}^{(q)}_{\pi(k)},\mathbf{v}\rangle$: $\left(\langle\tilde{\mathbf{u}}^{(q)}_{k},\mathbf{v}\rangle\pm\frac{z_{\alpha/2}}{\sqrt{n}}\sqrt{\frac{\widehat{\sigma}_{0}^{2}}{\widehat{\sigma}_{\pi(k)}^{2}}+\frac{\widehat{\sigma}_{0}^{4}}{\widehat{\sigma}_{\pi(k)}^{4}}}\big{\|}{\cal P}_{\tilde{\mathbf{u}}_{\pi(k)}^{(q)}}^{\perp}\mathbf{v}\big{\|}\right).$ In particular, by taking $\mathbf{v}\in\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{d_{q}}\\}$, we can use the above formula to derive confidence intervals for the coordinates of $\mathbf{u}_{\pi(k)}^{(q)}$. Situations with larger $d$ can also be treated accordingly.

## 5 Numerical Experiments

To complement our theoretical analyses and further demonstrate the merits of multiway PCA, we conducted several sets of numerical experiments.

### 5.1 Simulation Studies

We first present a set of simulation studies to illustrate the finite-sample behavior of the sample PCs. These experiments are specifically designed to assess the role of bias correction and the robustness to deviations from the normal distribution. Throughout this subsection, unless otherwise noted, samples were generated according to the spiked covariance model (8) with $p=2$, i.e., each $\mathscr{X}_{i}$ is a matrix. Since the two modes are exchangeable, we only focus on the first mode $q=1$ for brevity. We also fixed the number of spikes at $r=2$. In each case, we set the singular values $\sigma_{1}=\sigma_{2}$. In other words, for each of our examples, the usual PCA (with stringing) will not be able to identify the PCs because of the multiplicity. As mentioned before, without loss of generality and for the sake of brevity, we reordered $\check{\mathbf{u}}_{1}^{(1)}$ and $\check{\mathbf{u}}_{2}^{(1)}$ such that $\sin\angle(\check{\mathbf{u}}_{1}^{(1)},\mathbf{u}_{1}^{(1)})\leq\sin\angle(\check{\mathbf{u}}_{2}^{(1)},\mathbf{u}_{1}^{(1)})$. In addition, we replaced $\check{\mathbf{u}}_{k}^{(1)}$ with $-\check{\mathbf{u}}_{k}^{(1)}$ whenever $\langle\check{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{k}^{(1)}\rangle<0$. For the low-dimensional setup, $\tilde{\mathbf{u}}_{1}^{(1)}$ and $\tilde{\mathbf{u}}_{2}^{(1)}$ are treated similarly. In the first set of experiments, we considered a low-dimensional setup with $d_{1}=d_{2}=10$, $n=200$, $\sigma_{1}=\sigma_{2}=2$, and the true PCs were given by $\displaystyle\mathbf{u}_{1}^{(1)}=(\sqrt{3}/2,1/2,0,\dots,0)^{\top},\quad\mathbf{u}_{1}^{(2)}=(1,0,\dots,0)^{\top},$ $\displaystyle\mathbf{u}_{2}^{(1)}=(-1/2,\sqrt{3}/2,0,\dots,0)^{\top},\quad\mathbf{u}_{2}^{(2)}=(0,1,0,\dots,0)^{\top}.$ (14) Figure 1(a) reports the histograms of the first two (nonzero) entries of $\tilde{\mathbf{u}}_{1}^{(1)}$ based on 300 simulation runs. The histograms are overlaid with the asymptotic distributions derived in Theorem 4.1. The agreement between the two confirms the accuracy of the asymptotic distribution when the dimensionality is low.
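For concreteness, data from this first setting can be generated along the following lines (a Python/NumPy sketch). It assumes the representation $\mathscr{X}_{i}=\sum_{k}\sigma_{k}\theta_{ik}\,\mathbf{u}_{k}^{(1)}\otimes\mathbf{u}_{k}^{(2)}+\mathscr{E}_{i}$ of the spiked model with Gaussian $\theta_{ik}$ and noise; the noise level $\sigma_{0}=1$ and the random seed are illustrative choices rather than details reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spiked_matrices(n, d1, d2, sigmas, U1, U2, sigma0=1.0):
    """Draw n matrices X_i = sum_k sigma_k * theta_ik * u_k^(1) (u_k^(2))^T + E_i,
    with theta_ik ~ N(0, 1) and E_i having i.i.d. N(0, sigma0^2) entries."""
    r = len(sigmas)
    theta = rng.standard_normal((n, r))
    E = sigma0 * rng.standard_normal((n, d1, d2))
    signal = np.einsum('nk,k,ik,jk->nij', theta, np.asarray(sigmas), U1, U2)
    return signal + E

# the setting of (14): r = 2 equal spikes, d1 = d2 = 10, n = 200, sigma1 = sigma2 = 2
d1 = d2 = 10
U1 = np.zeros((d1, 2))
U2 = np.zeros((d2, 2))
U1[:2, 0] = [np.sqrt(3) / 2, 1 / 2]
U1[:2, 1] = [-1 / 2, np.sqrt(3) / 2]
U2[0, 0] = 1.0
U2[1, 1] = 1.0
X = simulate_spiked_matrices(n=200, d1=d1, d2=d2, sigmas=[2.0, 2.0], U1=U1, U2=U2)
```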
Figure 1: Multiway PCA for data generated from the normal distribution; (a) $d_{1}=d_{2}=10$, (b) $d_{1}=d_{2}=50$.

To demonstrate the need and effectiveness of bias correction, we increased the dimension to $d_{1}=d_{2}=50$. Correspondingly, we set $n=400$ and $\sigma_{1}=\sigma_{2}=3$. We repeated the experiment another 300 times and, as before, Figure 1(b) reports the histograms of the first two entries of $\check{\mathbf{u}}_{1}^{(1)}$ along with the asymptotic distribution derived in Theorem 4.3, plotted in red lines. The dashed black line overlaid with the histogram of the first entry corresponds to the asymptotic distribution without bias correction as given by Theorem 4.1. It is clear that in this setting, debiasing is necessary and the bias correction of Theorem 4.3 indeed leads to a more precise approximation of the finite-sample distribution. Our next set of simulations aims to explore the robustness of our approach to deviations from normality. To this end, $\\{\theta_{k},k\in[r]\\}$ and the entries of $\mathscr{E}$ were simulated independently from ${\rm Poisson}(1)-1$ (so that they still have mean $0$ and variance $1$). Again we set $n=400$ and $\sigma_{1}=\sigma_{2}=3$. Figures 2(a) and 2(b) summarize results based on 300 runs, for dimensions $d_{1}=d_{2}=10$ and $d_{1}=d_{2}=50$, respectively. We overlay them with the theoretical asymptotic distributions given by Theorems 4.1 and 4.3. The results are qualitatively similar to those from the previous setting.

Figure 2: Multiway PCA for data generated from the Poisson distribution; (a) $d_{1}=d_{2}=10$, (b) $d_{1}=d_{2}=50$.

### 5.2 World Bank Data

We now consider a real-world data example – the open-source global development data from the World Bank (https://data.worldbank.org/). The World Bank offers access to annual country-level data on a number of development indicators. In particular, we shall focus on the following nine most common and important economic and demographic indicators: * `GDP`: gross domestic product (GDP) based on purchasing power parity; * `Import`: import volume index (year 2000=100); * `Export`: export volume index (year 2000=100); * `CO2`: total CO2 emissions, in kilo-ton; * `CPI`: consumer price index (year 2010=100); * `Life Span`: life expectancy at birth; * `Urban Population`: urban population, percentage of total population; * `Tourism`: number of international inbound tourists; * `Birth Rate`: birth rate, crude (per 1,000 people). Yearly data for these indicators have been recorded, and we focus on data from year 2000 through 2018, as considerable data are missing outside this range. We also discarded countries with more than 5% missing data, resulting in a total of 160 countries under consideration. These indicators are all positive but of vastly different magnitudes. To account for this, a log transformation was first applied. Each log-transformed indicator was then standardized so that it has mean $0$ and mean absolute deviation $1$ across all countries. The use of mean absolute deviation, instead of variance, for standardization allows for a more robust analysis in the presence of outlying observations. Denote by $\mathbf{X}_{k,t,i}$ the resulting indicator $i$ for country $k$ at time $t$. There remain a handful of missing values and, for convenience, they are replaced with $0$ in our analysis. The resulting data tensor $\mathbf{X}$ has dimensions $160\times 19\times 9$.
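A schematic of this preprocessing pipeline is sketched below (Python/NumPy). The array name, the pooling of each indicator over all countries and years when standardizing, and the handling of missing values via NaNs reflect our reading of the description above and should be taken as assumptions rather than an exact record of the procedure.

```python
import numpy as np

def preprocess_indicator_tensor(X_raw):
    """X_raw: array of shape (countries, years, indicators) with positive entries and
    NaN for missing values. Apply a log transform, then standardize each indicator to
    mean 0 and mean absolute deviation 1, and finally set remaining NaNs to 0."""
    X = np.log(X_raw)
    for j in range(X.shape[2]):
        col = X[:, :, j]
        mu = np.nanmean(col)
        mad = np.nanmean(np.abs(col - mu))
        X[:, :, j] = (col - mu) / mad
    return np.nan_to_num(X, nan=0.0)
```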
Each frontal slice $\mathscr{X}_{k}=\mathbf{X}_{k,\cdot,\cdot}$ corresponds to a country and is a $19\times 9$ matrix. Note that its ambient dimension is $19\times 9=171$, greater than the number of countries, so it is problematic to apply the usual PCA with stringing. Accounting for the multiway structure, we can consider the multiway PCs of the form $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\mathbf{u}_{k}^{(2)}\in\mathbb{R}^{19\times 9}.$ These PCs carry a clear meaning: each $\mathscr{U}_{k}$ represents a shared _development pattern_, where $\mathbf{u}_{k}^{(1)}$ is the corresponding shared _temporal trend_, and $\mathbf{u}_{k}^{(2)}$ is the corresponding _comovement pattern_. Figure 3 plots the estimated leading PC along both modes, namely $\check{\mathbf{u}}_{1}^{(1)}$ and $\check{\mathbf{u}}_{1}^{(2)}$, together with the $95\%$ confidence intervals for each of their coordinates. It is by far the most significant component, explaining 57.6% of the total variation. It is also evident from the temporal component that the first PC describes a roughly constant growth trend. The only year with a decrease is 2008, when the Global Financial Crisis took place. Correspondingly, except for the entry corresponding to birth rate, all other entries of $\check{\mathbf{u}}_{1}^{(2)}$ are positive. This suggests a general economic development during this period, with the birth rate in decline.

Figure 3: The first PC, general economic development: $\mathbf{u}_{1}^{(1)}$ and $\mathbf{u}_{1}^{(2)}$ plotted with 95% confidence intervals.

Similarly, Figure 4 shows the second multiway PC in the two modes along with their $95\%$ confidence bands. This PC captures a change of developmental direction around the year 2008. In particular, CPI, life span, urban population, and tourism steadily decreased prior to 2008 but reversed course after the financial crisis. In contrast, GDP, import, export, CO2 emissions and birth rate followed the opposite pattern. There are many plausible explanations for this pattern. It is possible that the quantitative easing policies applied by most major economies since 2008 led to growth in the domestic market, thus enhancing the life-quality indicators. It is also possible that the growing inequality after 2008, also caused by quantitative easing among other factors (see, e.g., Montecino and Epstein, 2015), has in turn caused the increase in life quality among the upper and the upper middle class. The tourism indicator is the number of international inbound tourists, which most likely is driven by the upper middle class and beyond. The continuous increase in life expectancy in the USA is also reported to be driven primarily by the well-off (see, e.g., Chetty et al., 2016).

Figure 4: The second PC, life quality: $\mathbf{u}_{2}^{(1)}$ and $\mathbf{u}_{2}^{(2)}$ plotted with 95% confidence intervals.

Finally, Figure 5 shows the third multiway PC. We begin to see much wider confidence intervals as the signal becomes weaker. In fact, only the temporal entries around 2008 are significantly different from zero, and likewise, the entries corresponding to life span, urbanization, and tourism are statistically insignificant. This indicates that this pattern likely captures the impact of the 2008 financial crisis: it caused an immediate economic downturn, but the economy recovered not long after.

Figure 5: The third PC, international trade: $\mathbf{u}_{3}^{(1)}$ and $\mathbf{u}_{3}^{(2)}$ plotted with 95% confidence intervals.
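The coordinate-wise confidence intervals displayed in Figures 3–5 (and in the next subsection) are instances of the construction in Section 4.3 with $\mathbf{v}$ a canonical basis vector. The following sketch (Python, using SciPy only for the normal quantile) illustrates the plug-in interval in the form given by Theorem 4.5(a); for larger dimensions the same template applies with the estimates and correction factors of parts (b) or (c). The variance estimates $\widehat{\sigma}_{0}^{2}$ and $\widehat{\sigma}_{\pi(k)}^{2}$ are assumed to be computed separately, as described in Section 4.3.

```python
import numpy as np
from scipy.stats import norm

def coordinate_ci(u_hat, j, n, sigma0_sq_hat, sigmak_sq_hat, alpha=0.05):
    """(1 - alpha) confidence interval for the j-th coordinate of a multiway PC,
    based on a (bias-corrected) unit-vector estimate u_hat and plug-in variance
    estimates, following Theorem 4.5."""
    v = np.zeros_like(u_hat, dtype=float)
    v[j] = 1.0
    # norm of the projection of v onto the orthogonal complement of u_hat
    proj_perp_norm = np.linalg.norm(v - np.dot(u_hat, v) * u_hat)
    var_factor = sigma0_sq_hat / sigmak_sq_hat + sigma0_sq_hat**2 / sigmak_sq_hat**2
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(var_factor) * proj_perp_norm / np.sqrt(n)
    return u_hat[j] - half_width, u_hat[j] + half_width
```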
### 5.3 NYC Bike Rental Data

Another data example we considered is the Citibike trip data (https://ride.citibikenyc.com/system-data). In particular, all the Citibike trips from January 1, 2018 to December 31, 2019, on weekdays (522 days in total) that started in Manhattan and lasted for at least 60 seconds were used in our analysis. During this period, there were 35 zip codes in Manhattan with at least one Citibike station. There are a total of 29,515,527 such trips, and we form a data tensor $\mathbf{Y}$ of dimension $522\times 24\times 35$, where $Y_{kij}$ denotes the number of trips starting during the $i$th hour of the $k$th day from the $j$th zip code. The counts at different zip codes are of drastically different magnitudes, and the total counts during the two years also display a clear seasonal trend. To facilitate our analysis, we standardized the counts from each zip code $j$ on each day $k$ so that they have mean $0$ and mean absolute deviation $1$. As in the previous example, a direct application of the usual PCA can be misleading as the ambient dimension of the daily observation is $24\times 35=840$, greater than the number of days. Nonetheless, it is helpful to consider multiway PCs of the form $\mathscr{U}_{k}=\mathbf{u}_{k}^{(1)}\otimes\mathbf{u}_{k}^{(2)}\in\mathbb{R}^{24\times 35},$ where $\mathbf{u}_{k}^{(1)}$ captures the time-of-the-day effect of bike rental, and $\mathbf{u}_{k}^{(2)}$ the location pattern. Figure 6 plots the first multiway PC. The spatial pattern clearly indicates that this represents an overall pattern across Manhattan, with all 35 entries of $\mathbf{u}_{1}^{(2)}$ being estimated as positive. The temporal pattern indicates that bike rental strongly coincides with the rush hours, with peaks during both the morning and the afternoon rush hours. The blank area downtown is zip code 10006, the big blank rectangle is Central Park, the small blank area underneath it is zip code 10020, and the blank area to the north of Central Park has zip codes 10030 and 10031. During the recorded period, no Citibike station existed in these areas.

Figure 6: The first PC: overall pattern.

The second PC, as shown in Figure 7, reveals differences in rental patterns across neighborhoods. While the first PC suggests increased rental activities both in the morning and afternoon rush hours, the second PC captures the difference between morning and evening rental patterns, as indicated by the positive peak during the evening rush hours and the negative peak during the morning rush hours. As such, a neighborhood with positive loadings may see more evening rentals than morning rentals. These neighborhoods are the downtown Financial District, Lower Manhattan, and Midtown, largely corresponding to the business area of Manhattan. On the other hand, zip codes corresponding to negative loadings represent mostly residential areas of Manhattan, including the East Village, Upper West Side, and Upper East Side.

Figure 7: The second PC: rush hour differences.

Figure 8 depicts the third PC. The temporal pattern has a narrow and tall peak during the afternoon rush hours, suggesting that this PC captures the subtle spatial differences during this time of day. In particular, the zip codes with large positive values (purple color) are the area around Wall Street (the small purple block in Lower Manhattan), the area around Grand Central Terminal, and an area in the Upper East Side. The negative zip codes in this pattern include the areas around SoHo, Greenwich Village, and Harlem.
Figure 8: The third PC: afternoon rush hour details. ## 6 Summary In this paper, we study PCA under the settings that each observation is a matrix or more generally a multiway array. We investigate how to extract multiway PCs and study their statistical properties. In addition to the obvious advantages of increased efficiency and enhanced interpretability, our analysis provides a number of new insights into the operating characteristics of multiway PCA and their methodological implications. First, we show that multiway PCs can be estimated without the eigengap requirement. Specifically, under a spike covariance model, we establish rates of convergence for the sample multiway PCs. In particular, they are consistent whenever the signal-to-noise ratio $\sigma_{k}/\sigma_{0}\gg\max\\{d/n,(d/n)^{1/4}\\}$ where $d$ is the dimension of one mode. Perhaps more interestingly, we prove that the sample multiway PCs are asymptotically independent of each other, at least when the dimension $d=o(\sqrt{n})$. In higher dimensions, the sample PCs can be biased and the bias can be corrected via sample-splitting to lead to asymptotically normal estimates of the multiway PCs, which enables us to construct confidence intervals or conduct hypothesis testing for linear forms of the PCs. Our theoretical developments are complemented by numerical experiments, both simulated and real. In particular, meaningful findings can be inferred when applying our methods to two real-world datasets, further demonstrating the merits of our methodology. ## References * Anandkumar et al. (2014) Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. _Journal of machine learning research_ , 15:2773–2832, 2014\. * Anderson (1984) T. W. Anderson. _An Introduction to Multivariate Statistical Analysis_. Wiley, New York, NY, second edition, 1984. * Auddy and Yuan (2020) Arnab Auddy and Ming Yuan. Perturbation bounds for (nearly) orthogonally decomposable tensors. _arXiv preprint arXiv:2007.09024_ , 2020. * Bai and Silverstein (2010) Zhidong Bai and Jack W Silverstein. _Spectral analysis of large dimensional random matrices_ , volume 20. Springer, 2010. * Bai and Yao (2012) Zhidong Bai and Jianfeng Yao. On sample eigenvalues in a generalized spiked population model. _Journal of Multivariate Analysis_ , 106:167–177, 2012\. * Baik and Silverstein (2006) Jinho Baik and Jack W Silverstein. Eigenvalues of large sample covariance matrices of spiked population models. _Journal of multivariate analysis_ , 97(6):1382–1408, 2006. * Benaych-Georges and Nadakuditi (2011) Florent Benaych-Georges and Raj Rao Nadakuditi. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. _Advances in Mathematics_ , 227(1):494–521, 2011\. * Bi et al. (2021) Xuan Bi, Xiwei Tang, Yubai Yuan, Yanqing Zhang, and Annie Qu. Tensors in statistics. _Annual review of statistics and its application_ , 8:345–368, 2021. * Birnbaum et al. (2013) Aharon Birnbaum, Iain M Johnstone, Boaz Nadler, and Debashis Paul. Minimax bounds for sparse pca with noisy high-dimensional data. _Annals of statistics_ , 41(3):1055, 2013. * Chen et al. (2020a) Elynn Y Chen, Jianqing Fan, and Ellen Li. Statistical inference for high-dimensional matrix-variate factor model. _arXiv preprint arXiv:2001.01890_ , 2020a. * Chen et al. (2020b) Elynn Y Chen, Dong Xia, Chencheng Cai, and Jianqing Fan. Semiparametric tensor factor analysis by iteratively projected svd. 
_arXiv preprint arXiv:2007.02404_ , 2020b. * Chen et al. (2021) Rong Chen, Dan Yang, and Cun-Hui Zhang. Factor models for high-dimensional tensor time series. _Journal of the American Statistical Association_ , pages 1–23, 2021\. * Chetty et al. (2016) Raj Chetty, Michael Stepner, Sarah Abraham, Shelby Lin, Benjamin Scuderi, Nicholas Turner, Augustin Bergeron, and David Cutler. The association between income and life expectancy in the united states, 2001-2014. _Jama_ , 315(16):1750–1766, 2016. * Cichocki et al. (2015) Andrzej Cichocki, Danilo Mandic, Lieven De Lathauwer, Guoxu Zhou, Qibin Zhao, Cesar Caiafa, and Huy Anh Phan. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. _IEEE signal processing magazine_ , 32(2):145–163, 2015. * De Lathauwer et al. (2000) Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. A multilinear singular value decomposition. _SIAM journal on Matrix Analysis and Applications_ , 21(4):1253–1278, 2000. * Friedland (2013) Shmuel Friedland. Best rank one approximation of real symmetric tensors can be chosen symmetric. _Frontiers of Mathematics in China_ , 8(1):19–40, 2013. * Hackbusch (2012) Wolfgang Hackbusch. _Tensor spaces and numerical tensor calculus_ , volume 42. Springer, 2012. * Han et al. (2022) Rungang Han, Rebecca Willett, and Anru R Zhang. An optimal statistical and computational framework for generalized tensor estimation. _The Annals of Statistics_ , 50(1):1–29, 2022\. * Han et al. (2020) Yuefeng Han, Rong Chen, Dan Yang, and Cun-Hui Zhang. Tensor factor model estimation by iterative projection. _arXiv preprint arXiv:2006.02611_ , 2020. * Harshman and Lundy (1984) Richard A Harshman and Margaret E Lundy. The parafac model for three-way factor analysis and multidimensional scaling. _Research methods for multimode data analysis_ , 46:122–215, 1984. * Hopkins et al. (2015) Samuel B Hopkins, Jonathan Shi, and David Steurer. Tensor principal component analysis via sum-of-square proofs. In _Conference on Learning Theory_ , pages 956–1006, 2015. * Janzamin et al. (2019) Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar, et al. Spectral learning on matrices and tensors. _Foundations and Trends® in Machine Learning_ , 12(5-6):393–536, 2019. * Johnstone (2001) Iain M Johnstone. On the distribution of the largest eigenvalue in principal components analysis. _Annals of statistics_ , pages 295–327, 2001. * Johnstone and Lu (2009) Iain M Johnstone and Arthur Yu Lu. On consistency and sparsity for principal components analysis in high dimensions. _Journal of the American Statistical Association_ , 104(486):682–693, 2009. * Jolliffe (2002) I. Jolliffe. _Principal Component Analysis_. Springer, 2002. * Jung and Marron (2009) Sungkyu Jung and J Stephen Marron. Pca consistency in high dimension, low sample size context. _The Annals of Statistics_ , 37(6B):4104–4130, 2009. * Kolda (2001) Tamara G Kolda. Orthogonal tensor decompositions. _SIAM Journal on Matrix Analysis and Applications_ , 23(1):243–255, 2001. * Koltchinskii and Lounici (2014) Vladimir Koltchinskii and Karim Lounici. Asymptotics and concentration bounds for spectral projectors of sample covariance. _arXiv preprint arXiv:1408.4643_ , 2014. * Koltchinskii et al. (2017) Vladimir Koltchinskii, Karim Lounici, et al. Concentration inequalities and moment bounds for sample covariance operators. _Bernoulli_ , 23(1):110–133, 2017. * Koltchinskii et al. (2020) Vladimir Koltchinskii, Matthias Löffler, and Richard Nickl. 
Efficient estimation of linear functionals of principal components. _The Annals of Statistics_ , 48(1):464–490, 2020\. * Kong et al. (2005) Hui Kong, Lei Wang, Eam Khwang Teoh, Xuchun Li, Jian-Gang Wang, and Ronda Venkateswarlu. Generalized 2d principal component analysis for face image representation and recognition. _Neural Networks_ , 18(5-6):585–594, 2005. * Kroonenberg (2008) Pieter M Kroonenberg. _Applied multiway data analysis_. John Wiley & Sons, 2008. * Kroonenberg and De Leeuw (1980) Pieter M Kroonenberg and Jan De Leeuw. Principal component analysis of three-mode data by means of alternating least squares algorithms. _Psychometrika_ , 45(1):69–97, 1980. * Lee et al. (2010) Seunggeun Lee, Fei Zou, and Fred A Wright. Convergence and prediction of principal component scores in high-dimensional settings. _Annals of statistics_ , 38(6):3605, 2010. * Li et al. (2010) Xuelong Li, Yanwei Pang, and Yuan Yuan. L1-norm-based 2dpca. _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_ , 40(4):1170–1175, 2010. * Liu et al. (2017) Tianqi Liu, Ming Yuan, and Hongyu Zhao. Characterizing spatiotemporal transcriptome of human brain via low rank tensor decomposition. _arXiv preprint arXiv:1702.07449_ , 2017. * Lu et al. (2006) Haiping Lu, Konstantinos N Plataniotis, and Anastasios N Venetsanopoulos. Multilinear principal component analysis of tensor objects for recognition. In _18th International Conference on Pattern Recognition (ICPR’06)_ , volume 2, pages 776–779. IEEE, 2006. * Lu et al. (2008) Haiping Lu, Konstantinos N Plataniotis, and Anastasios N Venetsanopoulos. Mpca: Multilinear principal component analysis of tensor objects. _IEEE transactions on Neural Networks_ , 19(1):18–39, 2008. * Lu et al. (2011) Haiping Lu, Konstantinos N Plataniotis, and Anastasios N Venetsanopoulos. A survey of multilinear subspace learning for tensor data. _Pattern Recognition_ , 44(7):1540–1551, 2011\. * Montecino and Epstein (2015) Juan Montecino and Gerald Epstein. Did quantitative easing increase income inequality? _Institute for New Economic Thinking working paper series_ , (28), 2015. * Nadler (2008) Boaz Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation approach. _The Annals of Statistics_ , 36(6):2791–2817, 2008. * Paul (2007) Debashis Paul. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. _Statistica Sinica_ , pages 1617–1642, 2007. * Richard and Montanari (2014) Emile Richard and Andrea Montanari. A statistical model for tensor pca. In _Advances in Neural Information Processing Systems_ , pages 2897–2905, 2014. * Shen et al. (2013) Dan Shen, Haipeng Shen, Hongtu Zhu, and JS Marron. Surprising asymptotic conical structure in critical sample eigen-directions. _arXiv preprint arXiv:1303.6171_ , 2013. * Taguchi (2018) Y-H Taguchi. Tensor decomposition-based and principal-component-analysis-based unsupervised feature extraction applied to the gene expression and methylation profiles in the brains of social insects with multiple castes. _BMC bioinformatics_ , 19(4):99, 2018. * Vasilescu and Terzopoulos (2002) M Alex O Vasilescu and Demetri Terzopoulos. Multilinear analysis of image ensembles: Tensorfaces. In _European conference on computer vision_ , pages 447–460. Springer, 2002. * Vershynin (2010) Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. _arXiv preprint arXiv:1011.3027_ , 2010. * Wang and Fan (2017) Weichen Wang and Jianqing Fan. 
Asymptotics of empirical eigenstructure for high dimensional spiked covariance. _Annals of statistics_ , 45(3):1342, 2017. * Xia et al. (2020) Dong Xia, Anru R Zhang, and Yuchen Zhou. Inference for low-rank tensors–no need to debias. _arXiv preprint arXiv:2012.14844_ , 2020. * Yang et al. (2004) Jian Yang, David Zhang, Alejandro F Frangi, and Jing-yu Yang. Two-dimensional pca: a new approach to appearance-based face representation and recognition. _IEEE transactions on pattern analysis and machine intelligence_ , 26(1):131–137, 2004. * Zhang and Xia (2018) Anru Zhang and Dong Xia. Tensor svd: Statistical and computational limits. _IEEE Transactions on Information Theory_ , 64(11):7311–7338, 2018. * Zhang and Zhou (2005) Daoqiang Zhang and Zhi-Hua Zhou. (2d) 2pca: Two-directional two-dimensional pca for efficient face representation and recognition. _Neurocomputing_ , 69(1-3):224–231, 2005. * Zhang and Golub (2001) Tong Zhang and Gene H Golub. Rank-one approximation to high order tensors. _SIAM Journal on Matrix Analysis and Applications_ , 23(2):534–550, 2001. ## Appendix A Notations and Preliminary Bounds Write $a\vee b:=\max\\{a,b\\}$ and $a\wedge b:=\min\\{a,b\\}$. For a positive integer $n$, let $[n]:=\\{1,2,\dots,n\\}$. For a vector $\mathbf{x}\in\mathbb{R}^{d}$, denote $\|\mathbf{x}\|$ to be its $\ell_{2}$-norm, $\|\mathbf{x}\|_{1}$ to be its $\ell_{1}$-norm, and $\|\mathbf{x}\|_{\infty}=\max_{i}|x_{i}|$ to be its $\ell_{\infty}$-norm. For two sequences of real numbers $\\{a_{n}\\}$ and $\\{b_{n}\\}$, write $a_{n}=O(b_{n})$ if $\exists C,\exists M$, such that $\forall n>M$, $|a_{n}|\leq C|b_{n}|$. Write $a_{n}=o(b_{n})$ if $\lim_{n\to\infty}a_{n}/b_{n}=0$. For two sequences of real-valued random variables $X_{n}$ and $Y_{n}$, write $X_{n}=O_{p}(Y_{n})$ if $X_{n}=R_{n}Y_{n}$ and $R_{n}$ is uniformly tight. Write $X_{n}=o_{p}(Y_{n})$ if $X_{n}=R_{n}Y_{n}$ and $R_{n}\overset{p}{\to}0$. For linear subspace $U$ of $\mathbb{R}^{d}$, denote $P_{U}$ and $P_{U}^{\perp}$ to be the orthogonal projection onto $U$ and its orthogonal complement $U^{\perp}$, respectively. For a non-zero vector $u\in\mathbb{R}^{d}$, denote $P_{u}:=P_{{\rm span}\\{u\\}}$ and $P_{u}^{\perp}:=P_{{\rm span}\\{u\\}}^{\perp}$. For an order-$k$ tensor $\mathscr{T}\in\mathbb{R}^{d_{1}\times d_{2}\times\dots\times d_{k}}$, define its _tensor operator norm_ as: $\displaystyle\|\mathscr{T}\|:=\sup_{\mathbf{u}_{j}\in\mathbb{R}^{d_{j}},\|\mathbf{u}_{j}\|=1}\mathscr{T}(\mathbf{u}_{1},\mathbf{u}_{2},\dots,\mathbf{u}_{k}).$ (15) Specifically, when $k=2$ so that $\mathscr{T}\in\mathbb{R}^{d_{1}\times d_{2}}$ is a matrix, $\|\mathscr{T}\|$ is the matrix spectral norm of $\mathscr{T}$. For tensor $\mathscr{T}\in\mathbb{R}^{d_{1}\times d_{2}\times\dots\times d_{k}}$, write $\|\mathscr{T}\|_{\max}=\max_{i_{1},\dots,i_{k}}\left|\mathscr{T}_{i_{1},\dots,i_{k}}\right|$ to be its $\ell_{\infty}$-norm. 
With a slight abuse of notation, the mode $q$ product of $\mathscr{T}\in\mathbb{R}^{d_{1}\times d_{2}\times\dots\times d_{k}}$ with a vector $\mathbf{a}\in\mathbb{R}^{d_{q}}$, denoted by $\mathscr{T}\times_{q}\mathbf{a}$, is defined as an order-$(k-1)$ tensor of size $d_{1}\times\dots\times d_{q-1}\times d_{q+1}\times\dots\times d_{k}$, with elements $[\mathscr{T}\times_{q}\mathbf{a}]_{i_{1}\dots i_{q-1}i_{q+1}\dots i_{k}}=\sum_{i_{q}=1}^{d_{q}}\mathscr{T}_{i_{1}\dots i_{q}\dots i_{k}}\mathbf{a}_{i_{q}}.$ Write $\widehat{\Sigma}_{\theta}={1\over n}\sum_{i=1}^{n}\theta_{i}\otimes\theta_{i},\qquad\widehat{\Sigma}_{\mathscr{E}}={1\over n}\sum_{i=1}^{n}\mathscr{E}_{i}\otimes\mathscr{E}_{i},$ and $\widehat{\Sigma}_{\theta,\mathscr{E}}={1\over n}\sum_{i=1}^{n}\theta_{i}\otimes\mathscr{E}_{i},$ the sample covariance matrices of $\theta$ and $\mathscr{E}$, and the sample covariance between them, respectively. Correspondingly, denote by $\Sigma_{\theta}$, $\Sigma_{\mathscr{E}}$ and $\Sigma_{\theta,\mathscr{E}}$ their population counterparts. It is clear that $\Sigma_{\theta,\mathscr{E}}=0$. Recall also that $\widehat{\Sigma}={1\over n}\sum_{i=1}^{n}\mathscr{X}_{i}\otimes\mathscr{X}_{i}$ and $\Sigma=\sum_{l=1}^{r}\sigma_{l}^{2}\mathbf{u}_{l}^{(1)}\otimes\cdots\otimes\mathbf{u}_{l}^{(p)}\otimes\mathbf{u}_{l}^{(1)}\otimes\cdots\otimes\mathbf{u}_{l}^{(p)}+\sigma_{0}^{2}\mathscr{I}$ are the sample and population covariance matrices of $\mathscr{X}$. The proof relies on the following technical lemmas.

###### Lemma 1.

There exists a numerical constant $C>0$ such that for any $t\geq 1$, $\|\widehat{\Sigma}-\Sigma\|\leq C(\sigma_{1}^{2}+\sigma_{0}^{2})\max\left\\{\sqrt{d\over n},{d\over n},\sqrt{t\over n},{t\over n}\right\\},$ $\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\leq C\sigma_{0}\max\left\\{\sqrt{d\over n},{d\over n},\sqrt{t\over n},{t\over n}\right\\},$ $\|\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}}\|\leq C\sigma_{0}^{2}\max\left\\{\sqrt{d\over n},{d\over n},\sqrt{t\over n},{t\over n}\right\\},$ and $\|\widehat{\Sigma}_{\theta}-\Sigma_{\theta}\|\leq C\max\left\\{\sqrt{r\over n},{r\over n},\sqrt{t\over n},{t\over n}\right\\},$ with probability at least $1-e^{-t}$. Note that we shall use $C$ to denote a constant that may take different values at each appearance. We shall also make use of the following bounds:

###### Lemma 2.

There exists a numerical constant $C>0$ such that for any $t>0$, $\|\widehat{\Sigma}_{\theta}-\Sigma_{\theta}\|_{\max}\leq C\max\left\\{\sqrt{\log r\over n},{\log r\over n},\sqrt{t\over n},{t\over n}\right\\},$ $\max_{1\leq l_{1},l_{2}\leq r}\left|\widehat{\Sigma}_{\mathscr{E},\theta}(\mathbf{u}_{l_{1}}^{(1)},\mathbf{u}_{l_{2}}^{(2)},\ldots,\mathbf{u}_{l_{2}}^{(p)},{\bf e}_{l_{2}})\right|\leq C\sigma_{0}\max\left\\{\sqrt{\log r\over n},{\log r\over n},\sqrt{t\over n},{t\over n}\right\\}$ where ${\bf e}_{l_{2}}$ is the $l_{2}$th canonical basis vector of ${\mathbb{R}}^{r}$, and $\max_{1\leq l\leq r}\left|(\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}})(\mathbf{u}_{l}^{(1)},\ldots,\mathbf{u}_{l}^{(p)},\mathbf{u}_{l}^{(1)},\ldots,\mathbf{u}_{l}^{(p)})\right|\leq C\sigma_{0}^{2}\max\left\\{\sqrt{\log r\over n},{\log r\over n},\sqrt{t\over n},{t\over n}\right\\},$ with probability at least $1-e^{-t}$. Both lemmas are well known and follow immediately from an application of union bounds and $\chi^{2}$ tail bounds. See, e.g., Vershynin (2010).

## Appendix B Proof of Theorems 3.1 and 3.2

Theorem 3.1 follows immediately from Theorem 3.2, and it suffices to prove the latter.
For brevity, we shall focus on the case when $d\leq n$ and $r$ diverges with $n$. Denote by ${\cal E}$ the event that $\|\widehat{\Sigma}_{\theta}-\Sigma_{\theta}\|\leq C\sqrt{r\over n},\quad{\rm and}\quad\sigma_{0}^{-2}\|\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}}\|,\sigma_{0}^{-1}\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\leq C\sqrt{d\over n}$ and $\|\widehat{\Sigma}_{\theta}-\Sigma_{\theta}\|_{\max},\ \sigma_{0}^{-1}\max_{1\leq l_{1},l_{2}\leq r}\left|\widehat{\Sigma}_{\mathscr{E},\theta}(\mathbf{u}_{l_{1}}^{(1)},\mathbf{u}_{l_{2}}^{(2)},\ldots,\mathbf{u}_{l_{2}}^{(p)},{\bf e}_{l_{2}})\right|\leq C\sqrt{\log r\over n}.$ By Lemmas 1 and 2, ${\cal E}$ holds with probability tending to one. It suffices to proceed conditional on the event ${\cal E}$. As noted, the $k$th sample PC may not correspond to the $k$th population PC because we do not assume the existence of an eigengap and the $\sigma_{k}$s may not even be distinct. Nonetheless, we can match the sample PCs with the population PCs as follows. Define $\pi(1)=\operatorname{argmax}_{1\leq l\leq r}\left\\{\sigma_{l}^{2}\left|\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{1}^{(q)}\rangle\right|\right\\},$ and for $k>1$, $\pi(k):=\operatorname{argmax}_{l\notin\pi([k-1])}\left\\{\sigma_{l}^{2}\left|\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right|\right\\}.$ The goal is to show that with high probability, $\eta_{k}:=\max_{1\leq q\leq p}\sin\angle(\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)})\leq C\left({\sigma_{0}\over\sigma_{\pi(k)}}+{\sigma_{0}^{2}\over\sigma_{\pi(k)}^{2}}\right)\max\left\\{\sqrt{d\over n},{d\over n}\right\\}=:\delta_{k},$ (16) for $k=1,\ldots,r$. Our proof proceeds by induction over $k$. To facilitate the induction, we shall also prove that $\displaystyle\tilde{\eta}_{k}$ $\displaystyle:=$ $\displaystyle\max_{1\leq q\leq p}\max_{l\notin\pi([k])}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle$ (17) $\displaystyle\leq$ $\displaystyle C\left({\sigma_{0}\over\sigma_{\pi(k)}}+{\sigma_{0}^{2}\over\sigma_{\pi(k)}^{2}}\right)^{2}\max\left\\{{d\over n},{d^{2}\over n^{2}}\right\\}+C\left({\sigma_{0}\over\sigma_{\pi(k)}}+{\sigma_{0}^{2}\over\sigma_{\pi(k)}^{2}}\right)\sqrt{\log r\over n}$ $\displaystyle=:$ $\displaystyle\tilde{\delta}_{k}.$ In addition to (16) and (17), we shall also prove that $\sigma_{\pi(k)}^{2}\geq\max_{l\notin\pi([k])}\sigma_{l}^{2}(1-C\delta_{k}^{2}).$ (18) This immediately implies that $\sum_{l=1}^{k}\tilde{\delta}_{l}^{2}\leq C\delta_{k}^{2}\quad{\rm and}\quad\max_{1\leq l\leq k}\\{\sigma_{\pi(l)}\delta_{l}\\}\leq C\sigma_{\pi(k)}\delta_{k},$ by taking the constant $c_{0}$ in (10) small enough. We shall make use of these bounds repeatedly. As noted, we shall proceed by induction over $k$. In particular, we shall set $\delta_{0}=\tilde{\delta}_{0}=0$ so that the base case holds trivially when $k=0$. Now assume the induction hypotheses (16) and (17) hold for $1,\ldots,k-1$. We want to show that they continue to hold for $k$. The general architecture of the argument is similar to that for the base case, but additional challenges arise from the need to control the impact of the estimation error of the $\widehat{\mathbf{u}}_{l}^{(q)}$s for $1\leq l<k$. Denote by $\widehat{{\cal P}}^{(q)}$ the projection matrix onto the linear space spanned by $\\{\widehat{\mathbf{u}}_{1}^{(q)},\dots,\widehat{\mathbf{u}}_{k-1}^{(q)}\\}$, for $q\in[p]$. Note that in the case when $k=1$, $\widehat{{\cal P}}^{(q)}={\bf 0}_{d\times d}$.
Then $\displaystyle(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})$ $\displaystyle=$ $\displaystyle\operatorname{argmax}_{\begin{subarray}{c}\|\mathbf{w}^{(q)}\|\leq 1,\langle\mathbf{w}^{(q)},\widehat{\mathbf{u}}_{l}^{(q)}\rangle=0,\\\ \forall l\leq k-1,q\in[p]\end{subarray}}\widehat{\Sigma}(\mathbf{w}^{(1)},\dots,\mathbf{w}^{(p)},\mathbf{w}^{(1)},\dots,\mathbf{w}^{(p)})$ $\displaystyle=$ $\displaystyle\operatorname{argmax}_{\begin{subarray}{c}\|\mathbf{w}^{(q)}\|=1,\forall q\in[p]\end{subarray}}\widehat{\Sigma}(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)})$ where $\widehat{{\cal P}}^{(q)}_{\perp}=I-\widehat{{\cal P}}^{(q)}$. Observe that $\widehat{{\cal P}}_{\perp}^{(q)}\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\cdot,\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\propto\widehat{\mathbf{u}}_{k}^{(q)},$ where $\tilde{\Sigma}=\widehat{\Sigma}-\sigma_{0}^{2}\mathscr{I}$. This implies that $\langle\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{w}\rangle={\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\over\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})}.$ In particular, for $l\neq k$, $\sin\angle(\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{u}_{l}^{(q)})={\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\over\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})},$ and $\sin\angle(\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{u}_{k}^{(q)})={\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(q-1)},\widehat{{\cal P}}_{\perp}^{(q)}(\widehat{\mathbf{u}}_{k}^{(q)}-\langle\widehat{\mathbf{u}}_{k}^{(q)},\mathbf{u}_{\pi(k)}^{(q)}\rangle\mathbf{u}_{\pi(k)}^{(q)}),\widehat{\mathbf{u}}_{k}^{(q+1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\over\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})}.$ We shall derive lower bounds for the numerators and an upper bound for the denominator. It suffices to consider the case $q=1$. Other indices can be treated in an identical fashion.

### B.1 Lower Bound for $\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})$

Denote by $\tilde{\Sigma}=\Sigma-\sigma_{0}^{2}\mathscr{I}$.
Observe that $\displaystyle\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})$ (19) $\displaystyle\geq$ $\displaystyle\max_{l\notin\pi([k-1])}\tilde{\Sigma}(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)})$ $\displaystyle\geq$ $\displaystyle\max_{l\notin\pi([k-1])}(\Sigma-\sigma_{0}^{2}\mathscr{I})\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)})$ $\displaystyle-\sup_{\|\mathbf{w}^{(q)}\|\leq 1,1\leq q\leq p}\left|(\widehat{\Sigma}-\Sigma)(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})\right|.$ Next we bound the two terms on the rightmost hand side. Starting with the first term, note that for any $l\notin\pi([k-1])$, $\displaystyle(\Sigma-\sigma_{0}^{2}\mathscr{I})(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)})$ $\displaystyle=$ $\displaystyle\sum_{1\leq l^{\prime}\leq r}\left[\sigma_{l^{\prime}}^{2}\prod_{q=1}^{p}\langle\widehat{{\cal P}}^{(q)}_{\perp}\mathbf{u}_{l}^{(q)},\mathbf{u}_{l^{\prime}}^{(q)}\rangle^{2}\right]$ $\displaystyle\geq$ $\displaystyle\sigma_{l}^{2}\prod_{q=1}^{p}\langle\widehat{{\cal P}}^{(q)}_{\perp}\mathbf{u}_{l}^{(q)},\mathbf{u}_{l}^{(q)}\rangle^{2}$ $\displaystyle=$ $\displaystyle\sigma_{l}^{2}\prod_{q=1}^{p}\|\widehat{{\cal P}}^{(q)}_{\perp}\mathbf{u}_{l}^{(q)}\|^{4}.$ By the induction hypothesis (17), $\|\widehat{{\cal P}}^{(q)}\mathbf{u}_{l}^{(q)}\|^{2}=\sum_{1\leq l^{\prime}<k}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{l^{\prime}}^{(q)}\rangle^{2}\leq\sum_{1\leq l^{\prime}<k}\tilde{\delta}^{2}_{l^{\prime}}\leq C\delta_{k}^{2}.$ By taking the constant $c_{0}$ of (10) small enough, we can ensure that $\max_{l\notin\pi([k-1])}(\Sigma-\sigma_{0}^{2}\mathscr{I})(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{u}_{l}^{(1)},\ldots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{u}_{l}^{(p)})\geq(1-C\delta_{k}^{4})\tau^{2}.$ (20) where $\tau^{2}=\max_{l\notin\pi([k-1])}\sigma_{l}^{2}.$ Next we derive a bound for $\sup_{\|\mathbf{w}^{(q)}\|\leq 1,1\leq q\leq p}(\Sigma-\widehat{\Sigma})(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)}).$ Note that $\displaystyle(\Sigma-\widehat{\Sigma})(\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)},\widehat{{\cal P}}^{(1)}_{\perp}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}^{(p)}_{\perp}\mathbf{w}^{(p)})$ (21) $\displaystyle=$ 
$\displaystyle\sum_{l_{1}=1}^{r}\sum_{l_{2}=1}^{r}\sigma_{l_{1}}\sigma_{l_{2}}\left(\widehat{\Sigma}_{\theta,l_{1}l_{2}}-\Sigma_{\theta,l_{1}l_{2}}\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)$ $\displaystyle+\frac{2}{n}\sum_{i=1}^{n}\sum_{l=1}^{r}\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})$ $\displaystyle+\left(\frac{1}{n}\sum_{i=1}^{n}[\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})]^{2}-\sigma_{0}^{2}\prod_{q=1}^{p}\|\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\|^{2}\right).$ We bound the three terms on the right hand side separately. The first term can be bounded by $\displaystyle\biggl{|}\sum_{l_{1}=1}^{r}\sum_{l_{2}=1}^{r}\sigma_{l_{1}}\sigma_{l_{2}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il_{1}}\theta_{il_{2}}\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)$ $\displaystyle\qquad-\sum_{l=1}^{r}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}\biggr{|}$ $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta}-I_{r}\|\left[\sum_{l=1}^{r}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}\right]$ $\displaystyle=$ $\displaystyle\|\widehat{\Sigma}_{\theta}-I_{r}\|\left[\sum_{l\in\pi([k-1])}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}+\sum_{l\notin\pi([k-1])}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}\right],$ Recall that for any $l\in\pi([k-1])$, $|\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle|=|\langle\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{u}_{l}^{(q)},\mathbf{w}^{(q)}\rangle|\leq\|\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{u}_{l}^{(q)}\|\leq\delta_{l}.$ Therefore, $\displaystyle\sum_{l\in\pi([k-1])}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}$ $\displaystyle\leq$ $\displaystyle\max_{l\in\pi([k-1])}\sigma_{l}^{2}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}$ $\displaystyle\leq$ $\displaystyle\max_{1\leq l<k}\\{\sigma_{\pi(l)}^{2}\delta_{l}^{2p-2}\\}\leq\max_{1\leq l<k}\\{\sigma_{\pi(l)}^{2}\delta_{l}^{2}\\}\leq\tau^{2},$ by taking $c_{0}$ of (10) small enough. 
On the other hand, $\sum_{l\notin\pi([k-1])}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}\leq\max_{l\notin\pi([k-1])}\sigma_{l}^{2}=\tau^{2}.$ This implies that $\left|\sum_{l_{1}=1}^{r}\sum_{l_{2}=1}^{r}\sigma_{l_{1}}\sigma_{l_{2}}\left(\widehat{\Sigma}_{\theta,l_{1}l_{2}}-\Sigma_{\theta,l_{1}l_{2}}\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\right|\leq C\tau^{2}\sqrt{r\over n}.$ (22) Similarly, the second term can be bounded by $\displaystyle\left|\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{r}\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})\right|$ (23) $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\left[\sum_{l=1}^{r}\sigma_{l}^{2}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\rangle\right)^{2}\right]^{1/2}$ $\displaystyle\leq$ $\displaystyle C\tau\sigma_{0}\sqrt{d\over n}.$ Finally, the third term can be bounded by $\left|\frac{1}{n}\sum_{i=1}^{n}[\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})]^{2}-\sigma_{0}^{2}\prod_{q=1}^{p}\|\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{w}^{(q)}\|^{2}\right|\leq\|\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}}\|\leq C\sigma_{0}^{2}\sqrt{d\over n}.$ (24) Combining (21)-(24), we get $\displaystyle\sup_{\|\mathbf{w}^{(q)}\|\leq 1,1\leq q\leq p}\left|(\widehat{\Sigma}-\Sigma)(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}^{(1)},\dots,\widehat{{\cal P}}_{\perp}^{(p)}\mathbf{w}^{(p)})\right|$ $\displaystyle\leq$ $\displaystyle C\tau^{2}\sqrt{r\over n}+C(\sigma_{0}\tau+\sigma_{0}^{2})\sqrt{d\over n}.$ Together with (19), this implies $\tilde{\Sigma}(\widehat{\mathbf{u}}_{k}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\geq\tau^{2}\left(1-C\delta_{k}^{4}-C\sqrt{r\over n}\right)-C(\sigma_{0}\tau+\sigma_{0}^{2})\sqrt{d\over n}-C\sigma_{0}^{2}\delta_{k}^{2},$ (25) by taking $c_{0}$ of (10) small enough.
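The covariance bounds $\|\widehat{\Sigma}_{\theta}-I_{r}\|\lesssim\sqrt{r/n}$, $\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\lesssim\sigma_{0}\sqrt{d/n}$ and $\|\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}}\|\lesssim\sigma_{0}^{2}\sqrt{d/n}$ invoked in (22)-(24) are standard concentration estimates. As a purely illustrative aside (not part of the argument), the following minimal numerical sketch checks the first of these scalings, under the simplifying assumption that the scores $\theta_{il}$ are i.i.d. standard Gaussian; this distributional assumption is made only for the purpose of the illustration, since the model is not restated in this appendix.

```python
# Monte Carlo sanity check (illustrative only, not part of the proof).
# Assumption: the scores theta_{il} are i.i.d. N(0,1), so that
# hat(Sigma)_theta = (1/n) * sum_i theta_i theta_i^T has mean I_r.
import numpy as np

rng = np.random.default_rng(0)

def cov_deviation(n, r, reps=20):
    """Average spectral norm of hat(Sigma)_theta - I_r over `reps` draws."""
    devs = []
    for _ in range(reps):
        theta = rng.standard_normal((n, r))   # rows are the score vectors theta_i
        sigma_hat = theta.T @ theta / n       # empirical covariance hat(Sigma)_theta
        devs.append(np.linalg.norm(sigma_hat - np.eye(r), ord=2))
    return float(np.mean(devs))

for n in (500, 2000, 8000):
    for r in (5, 20):
        ratio = cov_deviation(n, r) / np.sqrt(r / n)
        print(f"n={n:5d}, r={r:2d}: ||Sigma_hat - I_r|| / sqrt(r/n) = {ratio:.2f}")
# The printed ratios remain of order one as n and r vary, consistent with the
# C * sqrt(r/n) rate used for the first term in (22).
```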
### B.2 Upper Bounds for $\tilde{\Sigma}(\widehat{{\cal P}}_{\perp}^{(1)}(\widehat{\mathbf{u}}_{k}^{(1)}-\langle\widehat{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{\pi(k)}^{(1)}\rangle\mathbf{u}_{\pi(k)}^{(1)}),\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})$ Observe that for any $\mathbf{w}$ orthogonal to $\mathbf{u}_{\pi(k)}^{(1)}$, we have $\displaystyle\tilde{\Sigma}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})$ (26) $\displaystyle=$ $\displaystyle\sigma_{\pi(k)}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}^{2}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle$ $\displaystyle+\sum_{l\neq\pi(k)}\sigma_{\pi(k)}\sigma_{l}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}\theta_{il}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle$ $\displaystyle+\sum_{l\neq\pi(k)}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle$ $\displaystyle+\sum_{l_{1},l_{2}\neq\pi(k)}\sigma_{l_{1}}\sigma_{l_{2}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il_{1}}\theta_{il_{2}}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l_{1}}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle$ $\displaystyle+\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l=1}^{r}\sigma_{l}\theta_{il}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right]$ $\displaystyle+\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l=1}^{r}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ $\displaystyle+\frac{1}{n}\sum_{i=1}^{n}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})-\sigma_{0}^{2}\langle\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(p)}\rangle.$ Again each term on the right hand side needs to be bounded carefully. 
The first term on the right hand side of (26) can be bounded by $\displaystyle\left|\sigma_{\pi(k)}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}^{2}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right|$ $\displaystyle\leq\tau^{2}\|\widehat{\Sigma}_{\theta}\|_{\max}|\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}\mathbf{w}\rangle|.$ In particular, when $\mathbf{w}=\widehat{\mathbf{u}}_{k}^{(1)}-\langle\widehat{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{\pi(k)}^{(1)}\rangle\mathbf{u}_{\pi(k)}^{(1)}$, we have $\displaystyle|\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle|$ $\displaystyle=$ $\displaystyle\left|\langle\widehat{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{\pi(k)}^{(1)}\rangle(1-\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{\pi(k)}^{(1)}\rangle)\right|$ $\displaystyle\leq$ $\displaystyle 1-\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{\pi(k)}^{(1)}\rangle$ $\displaystyle=$ $\displaystyle\|\widehat{{\cal P}}^{(1)}\mathbf{u}_{\pi(k)}^{(1)}\|^{2}\leq\sum_{1\leq l<k}\tilde{\delta}_{l}^{2}\leq C\delta_{k}^{2},$ by taking $c_{0}$ small enough. Thus, $\left|\sigma_{\pi(k)}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}^{2}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right|\leq C\tau^{2}\delta_{k}^{2}.$ (27) The second term on the right hand side of (26) can be bounded as follows: $\displaystyle\left|\sum_{l\neq\pi(k)}\sigma_{\pi(k)}\sigma_{l}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}\theta_{il}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right|$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\left(\sum_{l\neq\pi(k)}\langle\mathbf{u}_{l}^{(1)},\widehat{\mathbf{u}}_{k}^{(1)}\rangle^{2}\right)^{1/2}\max_{l\neq\pi(k)}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}$ $\displaystyle=$ $\displaystyle\tau\delta_{k}\|\widehat{\Sigma}_{\theta}-I\|\max_{l\neq\pi(k)}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}$ $\displaystyle\leq$ $\displaystyle\tau\delta_{k}\|\widehat{\Sigma}_{\theta}-I\|\max\left\\{\max_{l\in\pi([k-1])}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\},\max_{l\notin\pi([k])}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}\right\\}.$ Note that 
$\max_{l\in\pi([k-1])}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}\leq\max_{l\in\pi([k-1])}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\widehat{{\cal P}}_{\perp}^{(q)}\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}\leq\max_{1\leq l<k}\\{\sigma_{\pi(l)}\delta_{l}^{p-1}\\}\leq 2\tau\delta_{k}^{p-1},$ and $\max_{l\notin\pi([k])}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle|\right\\}\leq\tau\delta_{k}^{p-1}.$ We get $\left|\sum_{l\neq\pi(k)}\sigma_{\pi(k)}\sigma_{l}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}\theta_{il}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right|\leq C\tau^{2}\delta_{k}^{p}\sqrt{r\over n}.$ (28) And similarly, when $\mathbf{w}=\widehat{\mathbf{u}}_{k}^{(1)}-\langle\widehat{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{\pi(k)}^{(1)}\rangle\mathbf{u}_{\pi(k)}^{(1)}$, we bound the third term on the right hand side of (26) by $\displaystyle\left|\sum_{l\neq\pi(k)}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right|$ (29) $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\left(\sum_{l\neq\pi(k)}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle^{2}\right)^{1/2}\max_{l\neq\pi(k)}\left\\{\sigma_{l}\prod_{q=2}^{p}|\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right\\}$ $\displaystyle\leq$ $\displaystyle C\tau^{2}\delta_{k}^{p}\sqrt{r\over n},$ where in the last inequality we used the fact that $\sum_{l\neq\pi(k)}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle^{2}\leq\|\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\|^{2}\leq\|\mathbf{w}\|^{2}\leq\delta_{k}^{2};$ the fourth term by $\displaystyle\left|\sum_{l_{1},l_{2}\neq\pi(k)}\sigma_{l_{1}}\sigma_{l_{2}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il_{1}}\theta_{il_{2}}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l_{1}}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right|$ (30) $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta}\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle C\tau^{2}\delta_{k}^{2(p-1)};$ the fifth term by 
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l\neq\pi(k)}\sigma_{l}\theta_{il}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\right]$ (31) $\displaystyle=$ $\displaystyle\sum_{l\neq\pi(k)}\left\\{\sigma_{l}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle\left[{1\over n}\sum_{i=1}^{n}\theta_{il}\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\right]\right\\}$ $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\cdot\left[\sum_{l\neq\pi(k)}\left(\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)\right]^{1/2}$ $\displaystyle\leq$ $\displaystyle C\sigma_{0}\tau\delta_{k}^{p-1}\sqrt{d\over n};$ and the sixth term by $\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l=1}^{r}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ (32) $\displaystyle\leq$ $\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{\pi(k)}\theta_{i\pi(k)}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ $\displaystyle+\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l\neq\pi(k)}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|+\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle C\sigma_{0}\tau\sqrt{d\over n}.$ Finally the last term can be bounded by $\left|\frac{1}{n}\sum_{i=1}^{n}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})-\sigma_{0}^{2}\langle\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{w},\widehat{\mathbf{u}}_{k}^{(1)}\rangle\right|\leq\|\widehat{\Sigma}_{\mathscr{E}}-\Sigma_{\mathscr{E}}\|\leq C\sigma_{0}^{2}\sqrt{d\over n}.$ Together with (27)-(32), we get $\displaystyle\tilde{\Sigma}(\widehat{{\cal P}}_{\perp}^{(1)}(\widehat{\mathbf{u}}_{k}^{(1)}-\langle\widehat{\mathbf{u}}_{k}^{(1)},\mathbf{u}_{\pi(k)}^{(1)}\rangle\mathbf{u}_{\pi(k)}^{(1)}),\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})\leq C\tau^{2}\delta_{k}^{2}+C(\sigma_{0}^{2}+\sigma_{0}\tau)\sqrt{d\over n}.$ ### B.3 Upper Bounds for $\max_{l\notin\pi([k])}\tilde{\Sigma}(\widehat{{\cal 
P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)})$ To derive the helper bound (17), we also need an upper bound for $\max_{m\notin\pi([k])}\tilde{\Sigma}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)},\widehat{\mathbf{u}}_{k}^{(1)},\ldots,\widehat{\mathbf{u}}_{k}^{(p)}).$ We shall follow similar steps, bounding each term on the right hand side of (26), but now with $\mathbf{w}=\mathbf{u}_{m}^{(1)}$ ($m\notin\pi([k])$). Specifically, the first term can be bounded by $\displaystyle\left|\sigma_{\pi(k)}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}^{2}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq\tau^{2}\|\widehat{\Sigma}_{\theta}\|_{\max}|\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle|.$ Note that $\langle\mathbf{u}_{\pi(k)}^{(1)},\mathbf{u}_{m}^{(1)}\rangle=\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}^{(1)}\mathbf{u}_{m}^{(1)}\rangle+\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle=0.$ We get $|\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle|=|\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}^{(1)}\mathbf{u}_{m}^{(1)}\rangle|\leq\|\widehat{{\cal P}}^{(1)}\mathbf{u}_{\pi(k)}^{(1)}\|\|\widehat{{\cal P}}^{(1)}\mathbf{u}_{m}^{(1)}\|\leq\sum_{1\leq l<k}\tilde{\delta}_{l}^{2}\leq C\delta_{k}^{2},$ by the Cauchy-Schwarz inequality. This implies that $\left|\sigma_{\pi(k)}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}^{2}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|\leq C\tau^{2}\delta_{k}^{2},$ by taking $c_{0}$ small enough.
The second term can also be bounded by $\displaystyle\left|\sum_{l\neq\pi(k)}\sigma_{\pi(k)}\sigma_{l}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{i\pi(k)}\theta_{il}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\||\langle\mathbf{u}_{\pi(k)}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle\tau^{2}\delta_{k}^{p+1}\sqrt{r\over n}.$ We bound the third term by $\displaystyle\left|\sum_{l\neq\pi(k)}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq$ $\displaystyle\left|\sigma_{m}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{im}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{m}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{m}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle+\left|\sum_{l\notin\\{\pi(k),m\\}}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ The first term on the right hand side can be further bounded by $\tau^{2}\|\widehat{\Sigma}_{\theta}-\Sigma_{\theta}\|_{\max}\prod_{q=2}^{p}\left|\langle\mathbf{u}_{m}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right|\leq C\tau^{2}\delta_{k}^{p-1}\sqrt{\log r\over n}.$ Now consider the second term: $\displaystyle\left|\sum_{l\notin\\{\pi(k),m\\}}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\left(\sum_{l\notin\\{\pi(k),m\\}}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle\tau\|\widehat{\Sigma}_{\theta}-I\|\Biggl{(}\sum_{l\in\pi([k-1])}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}$ $\displaystyle\hskip 
100.0pt+\sum_{l\notin\pi([k])\cup\\{m\\}}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\Biggr{)}^{1/2}.$ The first term in the bracket on the rightmost hand side can be bounded by $\displaystyle\sum_{l\in\pi([k-1])}\sigma_{l}^{2}\|\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{l}^{(1)}\|^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}$ $\displaystyle\leq$ $\displaystyle\left(\sum_{l\in\pi([k-1])}\langle\mathbf{u}_{l}^{(2)},\widehat{\mathbf{u}}_{k}^{(2)}\rangle^{2}\right)\left(\max_{l\in\pi([k-1])}\sigma_{l}^{2}\|\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{l}^{(1)}\|^{2}\prod_{q=3}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)$ $\displaystyle\leq$ $\displaystyle\delta_{k}^{2}\left(\max_{l\in\pi([k-1])}\sigma_{l}^{2}\delta_{l}^{2}\delta_{k}^{2(p-2)}\right)\leq C\tau^{2}\delta_{k}^{2p};$ the second term by $\|\widehat{{\cal P}}^{(1)}\mathbf{u}_{m}^{(1)}\|^{2}\left(\max_{l\notin\pi([k])\cup\\{m\\}}\sigma_{l}^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)\leq C\tau^{2}\delta_{k}^{2p},$ so that $\displaystyle\left|\sum_{l\neq\pi(k)}\sigma_{l}\sigma_{\pi(k)}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il}\theta_{i\pi(k)}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq$ $\displaystyle C\left(\tau^{2}\delta_{k}^{p-1}\sqrt{\log r\over n}+\tau^{2}\delta_{k}^{p}\sqrt{r\over n}\right).$ Similar to before, the fourth term on the right hand side of (26) can be bounded by $\displaystyle\left|\sum_{l_{1},l_{2}\neq\pi(k)}\sigma_{l_{1}}\sigma_{l_{2}}\left(\frac{1}{n}\sum_{i=1}^{n}\theta_{il_{1}}\theta_{il_{2}}\right)\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l_{1}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l_{2}}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l_{1}}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right|$ $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta}\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle C\tau^{2}\delta_{k}^{2(p-1)};$ and the fifth term by $\displaystyle\left|\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l\neq\pi(k)}\sigma_{l}\theta_{il}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\right]\right|$ $\displaystyle=$ $\displaystyle\left|\sum_{l\neq\pi(k)}\left\\{\sigma_{l}\left(\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle\left[{1\over 
n}\sum_{i=1}^{n}\theta_{il}\mathscr{E}_{i}(\widehat{\mathbf{u}}_{k}^{(1)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\right]\right\\}\right|$ $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\cdot\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\langle\mathbf{u}_{l}^{(1)},\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)}\rangle^{2}\prod_{q=2}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle C\sigma_{0}\tau\delta_{k}^{p-1}\sqrt{d\over n}.$ We now turn to the sixth term on the right hand side of (26). Write $\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l=1}^{r}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l\neq\pi(k)}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]$ $\displaystyle+\frac{1}{n}\sum_{i=1}^{n}\left[\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{\pi(k)}\theta_{i\pi(k)}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right].$ The first term again can be bounded by $\displaystyle\left|\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{l\neq\pi(k)}\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{l}\theta_{il}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]\right|$ $\displaystyle\leq$ $\displaystyle\|\widehat{\Sigma}_{\theta,\mathscr{E}}\|\left(\sum_{l\neq\pi(k)}\sigma_{l}^{2}\prod_{q=1}^{p}\langle\mathbf{u}_{l}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle^{2}\right)^{1/2}\leq C\sigma_{0}\tau\delta_{k}^{p-1}\sqrt{d\over n}.$ For the second term, note that $\displaystyle\left|\frac{1}{n}\sum_{i=1}^{n}\left[\mathscr{E}_{i}(\widehat{{\cal P}}_{\perp}^{(1)}\mathbf{u}_{m}^{(1)},\widehat{\mathbf{u}}_{k}^{(2)},\dots,\widehat{\mathbf{u}}_{k}^{(p)})\sigma_{\pi(k)}\theta_{i\pi(k)}\left(\prod_{q=1}^{p}\langle\mathbf{u}_{\pi(k)}^{(q)},\widehat{\mathbf{u}}_{k}^{(q)}\rangle\right)\right]\right|$
# $k$-reduced groups and Milnor invariants Benjamin Audoux Aix Marseille Univ, CNRS, Centrale Marseille, I2M, Marseille, France<EMAIL_ADDRESS>, Jean-Baptiste Meilhan Univ. Grenoble Alpes, CNRS, Institut Fourier, F-38000 Grenoble, France jean-<EMAIL_ADDRESS>and Akira Yasuhara Faculty of Commerce, Waseda University, 1-6-1 Nishi-Waseda, Shinjuku-ku, Tokyo 169-8050, Japan<EMAIL_ADDRESS> ###### Abstract. We characterize, in an algebraic and in a diagrammatic way, Milnor string link invariants indexed by sequences where any index appears at most $k$ times, for any fixed $k\geq 1$. The algebraic characterization is given in terms of an Artin-like action on the so-called $k$–reduced free groups; the diagrammatic characterization uses the language of welded knot theory. The link case is also addressed. ## Introduction In his seminal works [20, 21], J. Milnor introduced a family of concordance invariants for $n$-component links, which can be seen as a wide generalization of the linking number. Indexed by sequences $I$ of possibly repeating indices in $\\{1,\ldots,n\\}$, these _Milnor invariants_ $\mu(I)$ are integers extracted from longitudes within the fundamental group of the link complement, well-defined only modulo a subtle indeterminacy. The geometric meaning of this indeterminacy was clarified by the work of N. Habegger and X.S. Lin [12], who showed that Milnor invariants are actually well-defined integer-valued invariants of _string links_, which are pure tangles without closed components. Recall that two (string) links are _link-homotopic_ if they are related by a sequence of isotopies and crossing changes involving two strands of the same component. Milnor proved in [21] that, if a sequence $I$ is without repetition, then the Milnor invariant $\overline{\mu}(I)$ is in fact a link-homotopy invariant. He further showed how these non-repeated invariants can be extracted from the _reduced_ fundamental group of the link complement, which is the ‘maximal’ quotient where each meridian commutes with any of its conjugates. Using an Artin-like action of string links on the reduced free group, Habegger and Lin actually showed that two string links have the same Milnor invariants indexed by sequences without repetition if and only if they are link-homotopic [12]. The purpose of this paper is to give a higher-order version of this classification result of Habegger and Lin. Namely, we characterize the information contained in Milnor invariants $\mu(I)$ of string links with $r(I)\leq k$, where $r(I)$ denotes the maximum number of times that any index appears in $I$. (In particular, Milnor invariants $\mu(I)$ with $r(I)=1$ are precisely Milnor link-homotopy invariants.) Our main result is the following; explanations of the terminology and notation shall follow. ###### Main Theorem. Let $L$ and $L^{\prime}$ be two $n$-component string links. The following are equivalent. 1. (1) $\mu_{L}(I)=\mu_{L^{\prime}}(I)$ for any sequence $I$ with $r(I)\leq k$. 2. (2) $L$ and $L^{\prime}$ induce the same $k$–reduced free action: $\varphi_{L}^{(k)}=\varphi_{L^{\prime}}^{(k)}\in\operatorname{Aut}_{c}(R_{k}F_{n})$. 3. (3) $L$ and $L^{\prime}$ are self $w_{k}$-concordant. Let us first explain assertion (2). Let $G(L)$ be the fundamental group of the complement of $L$. This group is normally generated by $n$ meridians, and for each $i$, we denote by $N_{i}$ the normal subgroup of $G(L)$ generated by the $i$th meridian.
The _$k$–reduced quotient_ of $G(L)$ is defined by $\textnormal{R}_{k}G(L):=G(L)\big{/}\Gamma_{k+1}N_{1}\cdots\Gamma_{k+1}N_{n},$ where $\Gamma_{q}N$ denotes the $q$th term of the lower central series of a group $N$. This generalizes Milnor’s above-mentioned notion of reduced group [20], which coincides with $R_{1}G(L)$. In particular, if $F_{n}$ is the free group on $n$ generators, then $\textnormal{R}_{k}F_{n}$ is called the _$k$–reduced free group_. Now, given an $n$-component string link $L$, we show in Section 2.2 that, for each $k\geq 2$, there is an associated _$k$–reduced free action_ $\varphi^{(k)}_{L}\in\operatorname{Aut}_{c}(R_{k}F_{n}),$ where $\operatorname{Aut}_{c}(R_{k}F_{n})$ denotes the set of automorphisms of $R_{k}F_{n}$ that act by conjugation on each generator. This can be seen as a generalization of Habegger-Lin’s representation for the group of link-homotopy classes of string links [12, Thm. 1.7] in the case $k=1$. We stress, however, that our proof is very different in nature from [12]. Let us now clarify assertion (3). We use the language of _welded knot theory_, which is a diagrammatic generalization of knot theory. We stress, firstly, that the set of classical (string) links injects into the larger set $w\textnormal{S}L(n)$ of welded (string) links and, secondly, that Milnor invariants extend naturally to welded objects, so that they coincide with the classical invariants on classical objects. In [19] a diagrammatic calculus for welded knotted objects was developed, called _arrow calculus_, which can be seen as a generalization of the theory of Gauss diagrams. A $w$-tree for a welded diagram $D$ is an oriented, unitrivalent tree whose univalent vertices lie disjointly on $D$. Given such a $w$-tree $T$ for $D$, there is a ‘surgery’ procedure that yields a new welded diagram $D_{T}$, which is roughly obtained by inserting an ‘iterated commutator of crossings’ in a neighborhood of the head of $T$. Define the degree of a $w$-tree to be half the number of its vertices. The _self $w_{k}$-equivalence_ is the equivalence relation on welded string links generated by surgeries on degree $\geq k$ $w$-trees whose univalent vertices all lie on the _same_ string link component. For $k=1$, this notion turns out to coincide with the usual link-homotopy relation for string links [1]. On the other hand, there is a combinatorial equivalence relation of _welded concordance_ for welded (string) links, which is generated by birth, death and saddle moves [5, 10] and which naturally encompasses the topological concordance for classical (string) links. The _self $w_{k}$-concordance_ is the equivalence relation obtained by combining the above two relations, and our main result states that this characterizes combinatorially the information contained in Milnor invariants $\mu(I)$ with $r(I)\leq k$. In fact, the $k$-reduced free action involved in assertion (2) is more generally defined for $w\textnormal{S}L(n)$, and our main theorem follows from a more general characterization for welded string links; see Theorems 2.8 and 3.19.
We also show that the map sending a welded string link $L$ to its associated $k$-reduced free action $\varphi^{(k)}$ descends to a group isomorphism $w\textnormal{S}L(n)\big{/}\textrm{self $w_{k}$-concordance}\stackrel{\simeq}{\longrightarrow}\operatorname{Aut}_{c}(R_{k}F_{n}),$ for any $k\geq 1$; see Proposition 3.21. The case $k=1$ was proved in [1], and is a generalization of [12, Thm. 1.7]. This isomorphism suggests that welded theory provides a sensible diagrammatic counterpart of the algebraic constructions underlying Milnor invariants. Our main theorem also refines a recent result of B. Colombari [7], who gave a diagrammatic characterization of string links having the same Milnor invariants of length $\leq q$. We note that the geometric properties of Milnor link invariants $\overline{\mu}(I)$ with $r(I)\leq k$ were previously investigated in [9, 27, 26], using clasper theory. We address the (welded) link case in the final section of this paper. As developed there, building on the proof of our main theorem for string links, and using straightforward adaptations of the above-mentioned work of Colombari [7], we obtain in particular the following for classical links. ###### Theorem. A link has vanishing Milnor invariants $\overline{\mu}(I)$ with $r(I)\leq k$ if and only if it is self $w_{k}$-equivalent to the trivial link. This follows from a more general result (Theorem 4.3) which characterizes the so-called $k$–reduced peripheral system of _welded_ links, that is, the $k$-reduced link group endowed with peripheral elements; see Section 4. ###### Acknowledgments. The authors thank Jacques Darné for bringing the result [8, Lem. 2.37] to their attention. The first author is partially supported by the project SyTriQ (ANR-20-CE40-0004) of the ANR, and thanks the IRL PIMS-Europe for its hospitality during the period in which part of the work on this paper was done. The second author is partially supported by the project AlMaRe (ANR-19-CE40-0001-01) of the ANR. The third author is supported by the JSPS KAKENHI grant 21K03237, and by the Waseda University grant for Special Research Projects 2021C-120. ## 1\. Basic algebraic preliminaries This section reviews algebraic tools that will be used in this paper, together with basic and well-known results. Some notation used throughout the paper will also be set in this section. Let $n$ be a positive integer. We denote by $F_{n}$ the free group on $n$ generators $\alpha_{1},\cdots,\alpha_{n}$. For each $i\in\\{1,\cdots,n\\}$, denote by $N_{i}$ the normal subgroup of $F_{n}$ generated by $\alpha_{i}$. ### 1.1. Commutators and the lower central series We use the following convention for commutators and conjugates ($x,y\in F_{n}$): $[x,y]:=x\overline{y}\,\overline{x}y\quad\textrm{ and }\quad x^{y}:=\overline{y}xy,$ where we write $\overline{a}$ for the inverse of an element $a$. This somewhat nonstandard convention for commutators will be justified by the diagrammatic counterpart of the theory, reviewed in Section 3.1. We note that with our convention, for elements $a,b,c$ of $F_{n}$, we have the following basic properties: (C$0$) $\overline{[a,b]}=[\overline{b},\overline{a}];$ (C$1$) $[a,b]=\overline{b}^{\overline{a}}b=a\overline{a}^{b};$ (C$2$) $[a,b]^{c}=[a^{c},b^{c}];$ Moreover, recall the following commutator identities (compare with [18, Thm. 5.1]).
(C$3$) $[a,bc]=[a,c][a,b]^{c}\,\,\,\textrm{ and }\,\,\,[ab,c]=[b,c]^{\overline{a}}[a,c];$ (C$4$) $\big{[}[a,b],c\big{]}=\big{[}\overline{a},[\overline{c},\overline{b}]\big{]}^{b\overline{a}}\big{[}\overline{b},[\overline{a},\overline{c}]\big{]}^{c\overline{a}}.$ The lower central series of $F_{n}$ is the family of nested subgroups $\\{\Gamma_{k}F_{n}\\}_{k}$ defined inductively by $\Gamma_{1}F_{n}=F_{n}$ and $\Gamma_{k+1}F_{n}=[F_{n},\Gamma_{k}F_{n}]$. The following, less standard, notion will also be useful. ###### Definition 1.1. A length $k$ _linear commutator_ is an element of $\Gamma_{k}F_{n}$ of the form $\left[x_{1},\left[x_{2},\left[x_{3},\cdots[x_{k-1},x_{k}]\cdots\right]\right]\right]$ for some elements $x_{i}\in F_{n}$. We shall need the following basic results. ###### Lemma 1.2. Let $C\in F_{n}$ be a length $l$ commutator, with $k$ entries in $N_{i}$ for some $i$ ($k\leq l$). Then $C$ is a product of length $\geq l$ commutators where each entry is an element of $\\{\alpha_{j},\overline{\alpha}_{j}\\}_{j}$, and with at least $k$ entries that are either $\alpha_{i}$ or its inverse. ###### Proof. Combining (C$3$) with (C$0$) and (C$1$) gives the identities (C$5$) $[a,bc]=[a,c][a,b]\big{[}[\overline{b},\overline{a}],c\big{]}\,\,\,\textrm{ and }\,\,\,[ab,c]=\big{[}a,[\overline{c},\overline{b}]\big{]}[b,c][a,c].$ By assumption, $k$ entries of $C$ are products of conjugates of $\alpha_{i}$ or its inverse. Using (C$5$) on these $k$ entries, $C$ can be written as a product of length $\geq l$ commutators, each having $\geq k$ entries that are a single conjugate of $\alpha_{i}$ or its inverse. Since $a^{b}=a[\overline{a},b]$ by (C$1$), we have by using (C$5$) that $C$ can be written as a product of length $\geq l$ commutators, each having at least $k$ entries that are either $\alpha_{i}$ or its inverse. Now consider one such length $\geq l$ commutator $C^{\prime}$. One can apply (C$5$) iteratively on all entries of $C^{\prime}$, until it is written as a product of iterated commutators with entries in $\big{\\{}\alpha_{j},\overline{\alpha}_{j}\big{\\}}_{j}$, and clearly each of these commutators necessarily contains at least $k$ entries that are either $\alpha_{i}$ or its inverse, since $C^{\prime}$ does. ∎ ###### Lemma 1.3. Let $C\in F_{n}$ be a length $l$ commutator, with $k$ entries in $N_{i}$ for some $i$ ($k\leq l$). Then $C\in\Gamma_{k}N_{i}$. More precisely, $C$ is a product of length $k$ linear commutators whose entries are all conjugates of $\alpha_{i}$ or its inverse. ###### Proof. By assumption, $C$ is a length $l$ commutator with at least $k$ entries that are products of conjugates of $\alpha_{i}$ or its inverse. Using (C$1$) repeatedly, one can write $C$ as a length $k$ commutator, whose entries are all products of conjugates of $\alpha_{i}$ or its inverse. This shows that $C\in\Gamma_{k}N_{i}$. Now, using recursively (C$3$) and (C$2$), each such commutator can be expressed as a product of length $k$ commutators, whose entries are a single conjugate of $\alpha_{i}$ or its inverse. By (C$4$), combined with (C$2$), a length $k$ commutator can be expressed as a product of linear ones, and in our context all $k$ entries will remain a single conjugate of $\alpha_{i}$ or its inverse. ∎ For the next two results, let $G$ be a group which is normally generated by $n$ elements $\alpha_{1},\ldots,\alpha_{n}$. For each $i\in\\{1,\cdots,n\\}$, denote by $N_{i}$ the normal subgroup of $G$ generated by the $i$th generator.
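As a brief aside before these results, the conventions and identities above lend themselves to a mechanical verification. The following minimal sketch (plain Python, purely illustrative; the word-based encoding of free-group elements is specific to the sketch and is not used in the paper) checks (C$0$)–(C$5$) on the generators of $F_{3}$: with the conventions $[x,y]=x\overline{y}\,\overline{x}y$ and $x^{y}=\overline{y}xy$ hard-coded, both sides of each identity freely reduce to the same word.

```python
# Mechanical check of the commutator conventions of Section 1.1 (illustrative
# only).  Elements of the free group F_3 are encoded as freely reduced words of
# signed generators (index, +1/-1); this encoding is not part of the paper.

def freely_reduce(word):
    """Cancel adjacent pairs g g^{-1} until the word is freely reduced."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def mul(*words):
    return freely_reduce([ge for w in words for ge in w])

def inv(word):
    return tuple((g, -e) for g, e in reversed(word))

def conj(x, y):   # x^y := y^{-1} x y, as in Section 1.1
    return mul(inv(y), x, y)

def comm(x, y):   # [x, y] := x y^{-1} x^{-1} y, as in Section 1.1
    return mul(x, inv(y), inv(x), y)

a, b, c = ((0, 1),), ((1, 1),), ((2, 1),)

assert inv(comm(a, b)) == comm(inv(b), inv(a))                                 # (C0)
assert comm(a, b) == mul(conj(inv(b), inv(a)), b) == mul(a, conj(inv(a), b))   # (C1)
assert conj(comm(a, b), c) == comm(conj(a, c), conj(b, c))                     # (C2)
assert comm(a, mul(b, c)) == mul(comm(a, c), conj(comm(a, b), c))              # (C3)
assert comm(mul(a, b), c) == mul(conj(comm(b, c), inv(a)), comm(a, c))         # (C3)
assert comm(comm(a, b), c) == mul(                                             # (C4)
    conj(comm(inv(a), comm(inv(c), inv(b))), mul(b, inv(a))),
    conj(comm(inv(b), comm(inv(a), inv(c))), mul(c, inv(a))))
assert comm(a, mul(b, c)) == mul(comm(a, c), comm(a, b),                       # (C5)
                                 comm(comm(inv(b), inv(a)), c))
assert comm(mul(a, b), c) == mul(comm(a, comm(inv(c), inv(b))),                # (C5)
                                 comm(b, c), comm(a, c))
print("identities (C0)-(C5) hold on the generators")
```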
###### Lemma 1.4. For any $k\in{\mathds{N}}$, $\Gamma_{kn+1}G\subset\Gamma_{k+1}N_{1}\cdots\Gamma_{k+1}N_{n}.$ ###### Proof. Using (C$5$), any element of $\Gamma_{kn+1}G$ can be expressed as a product of length $\geq kn+1$ commutators with entries in $\big{\\{}\alpha_{j}^{g},\overline{\alpha}_{j}^{g}\,;\,g\in G\big{\\}}_{j}$. For any such commutator $C$, there exists some $i$ such that at least $k+1$ entries of $C$ are elements of $N_{i}$, and Lemma 1.3 implies that $C\in\Gamma_{k+1}N_{i}$. ∎ Define a _conjugating_ endomorphism of $G$ as an endomorphism which sends each generator to a conjugate of itself. The following is a simple adaptation of [8, Lem. 2.37] to our setting. ###### Lemma 1.5. For all $k\geq 2$, any conjugating endomorphism $\varphi$ of $G$ induces an automorphism of $G/\Gamma_{k}G$. In particular, if $G$ is nilpotent, then $\varphi$ is itself an automorphism of $G$. ###### Proof. As a conjugating endomorphism, $\varphi$ induces the identity on $G/\Gamma_{2}G$, and more generally on $\Gamma_{k-1}G/\Gamma_{k}G$ for all $k\geq 2$. An induction on $k$, using the Five Lemma on the natural exact sequence $0\longrightarrow\Gamma_{k-1}G/\Gamma_{k}G\longrightarrow G/\Gamma_{k}G\longrightarrow G/\Gamma_{k-1}G\longrightarrow 0,$ then shows that $\varphi$ induces an automorphism of $G/\Gamma_{k}G$, for any $k$. In particular, if $G$ is nilpotent of order $N$, then $\varphi$ induces an automorphism of $G/\Gamma_{N}G\cong G$. ∎ ### 1.2. Basic commutators We now recall the notion of basic commutators in a free group, which seems to have first appeared in [14]. ###### Definition 1.6. A _set of basic commutators_ in the set $\\{\alpha_{1},\cdots,\alpha_{n}\\}$ is an infinite ordered set of commutators $\big{\\{}C_{i}\big{\\}}_{i}$, defined inductively as follows. Basic commutators of length $1$ are the $C_{i}=\alpha_{i}$ for $i=1,\cdots,n$. A basic commutator of length $m>1$ has the form $[C_{i},C_{j}]$, for some basic commutators $C_{i},C_{j}$ such that * • length$(C_{i})+$length$(C_{j})=m$; * • $C_{i}<C_{j}$, and $C_{j}=[C_{k},C_{l}]$ further implies that $C_{k}\leq C_{i}$. Basic commutators of length $m$ follow those of length $<m$, and are ordered in an arbitrary fixed way with respect to each other. What makes this notion significant is the following fundamental result from [13, Thm. 11.2.4] (see also [18, Thm. 5.13.A]). ###### Theorem 1.7.
If $\big{\\{}C_{i}\big{\\}}_{i}$ is a set of basic commutators, and $k\geq 0$ is an integer, then any element $g$ in the free group $F_{n}$ can be written in a unique way as a product $g=C_{1}^{e_{1}}\cdots C_{N(k)}^{e_{N(k)}}h,$ with $e_{j}\in\mathbb{Z}$ and $h\in\Gamma_{k+1}F_{n}$, where $N(k)$ is the number of basic commutators of length $\leq k$. ###### Remark 1.8. In [13, 18], the convention $[a,b]=\overline{a}\overline{b}ab$ is used for commutators. Although Definition 1.6 thus gives _different_ sets of basic commutators from the ones used in [13, 18], it does share the same fundamental properties, and in particular Theorem 1.7, as well as the forthcoming Lemma 1.11. ### 1.3. The Magnus expansion Denote by $\mathbb{Z}\langle\langle X_{1},\cdots,X_{n}\rangle\rangle$ the ring of formal power series in the noncommuting variables $X_{1},\cdots,X_{n}$. ###### Definition 1.9. The _Magnus expansion_ is the group homomorphism $E:F_{n}\longrightarrow\mathbb{Z}\langle\langle X_{1},\cdots,X_{n}\rangle\rangle$ defined by sending $\alpha_{i}$ to $1+X_{i}$. It is well-known that $E$ is in fact injective [18, Thm. 5.6], and that it is well-behaved with respect to the lower central series, in the sense of the following fundamental result [17, 25]. ###### Theorem 1.10. For all $k\geq 1$, we have $f\in\Gamma_{k}F_{n}$ if and only if $E(f)=1+\textrm{(terms of degree $\geq k$)}$. Basic commutators are well-behaved with respect to the Magnus expansion, in the sense of the following result, which is outlined by Levine in [16, p. 365]. ###### Lemma 1.11. Let $C$ be a basic commutator of length $k$, such that $\alpha_{i}$ occurs $r_{i}$ times in $C$ for each $i$. We have $E(C)=1+P+\textrm{(terms of degree $\geq k+1$)},$ where $P\neq 0$ is a sum of degree $k$ terms, each involving $r_{i}$ times the variable $X_{i}$ ($i=1,\cdots,n$). In the above, the sum $P$ of lowest degree (nontrivial) terms in $E(C)$ is called the _principal part_ of $E(C)$. ###### Proof. We proceed by induction on the length $k$, where the case $k=1$ is trivial. Let $C$ be a basic commutator of length $k$ for some $k>1$. There exist basic commutators $C_{1}$ and $C_{2}$, of respective length $k_{1}$ and $k_{2}$, such that $C=[C_{1},C_{2}]$ and $k_{1}+k_{2}=k$. By the induction hypothesis, for $i=1,2$ we have that $E(C_{i})=1+P_{i}+\textrm{(degree $>k_{i}$ terms)}$, with $P_{i}$ a sum of degree $k_{i}$ terms with the appropriate occurrences of each variable. Thus $E(C)=1+P_{1}P_{2}-P_{2}P_{1}+\textrm{(degree $>k$ terms)}$, and the conclusion follows from the fact that $P_{1}P_{2}-P_{2}P_{1}\neq 0$. In order to see that $P_{1}P_{2}-P_{2}P_{1}$ is indeed nonzero, observe that length $k$ basic commutators form a basis for $\Gamma_{k}F_{n}\big{/}\Gamma_{k+1}F_{n}$, as a consequence of Theorem 1.7. This in particular tells us that $C\notin\Gamma_{k+1}F_{n}$, hence by Theorem 1.10 we have $P_{1}P_{2}-P_{2}P_{1}\neq 0$. ∎ ## 2\. Milnor invariants and $k$–reduced free action The main diagrammatic object of this paper will be the following. ###### Definition 2.1. Consider the $2$-disk $[0,1]\times[0,1]$, equipped with fixed points $p_{i}\times\\{\varepsilon\\}$ for $i\in\\{1,\cdots,n\\}$ and $\varepsilon\in\\{0,1\\}$.
An $n$-component _welded string link_ is the welded equivalence class of an immersion of $n$ disjoint copies of the unit interval into $[0,1]\times[0,1]$, such that the $i$th interval runs from $p_{i}\times\\{0\\}$ to $p_{i}\times\\{1\\}$, and whose double points are decorated either as a classical crossing (as in usual knot diagrams) or as a virtual crossing (drawn as a transverse double point); see Figure 1. Here, the _welded equivalence_ is generated by the three usual Reidemeister moves involving classical crossings, together with the OC move shown in Figure 1 and the _Detour move_, which replaces an arc with only virtual crossings (possibly none) by another arc with the same endpoints and only virtual crossings (possibly none). $\vbox{\hbox{\includegraphics[height=56.9055pt]{exsl.pdf}}}\hskip 71.13188pt\vbox{\hbox{\includegraphics[height=49.79231pt]{moves_3.pdf}}}\xleftrightarrow{\textrm{OC}}\vbox{\hbox{\includegraphics[height=49.79231pt]{moves_4.pdf}}}$ Figure 1. A $3$-component welded string link (left), and the OC move (right) In particular, a welded string link without virtual crossings is merely a diagram of a classical string link. A key point is that classical string links inject in this way into welded string links: two diagrams without virtual crossings that are welded equivalent represent isotopic objects; see [11, Thm 1.B]. We denote by $\mathbf{1}$ the trivial string link diagram $\cup_{i}p_{i}\times[0,1]$. ### 2.1. Brief review of welded Milnor invariants Milnor invariants are classical link invariants defined by Milnor in the fifties [20, 21], and extended to (classical) string links by Habegger and Lin [12]. We review here the welded extension of Milnor invariants. It was given in [1, Sec. 6] using a topological approach, and was later reformulated in [22] in a purely diagrammatic way. Given a welded string link $L$, there is an associated group $G(L)$, which coincides with the fundamental group of the complement when $L$ is classical; see [15]. As in Wirtinger’s algorithm, a generator of $G(L)$ is associated with each arc in a diagram of $L$, where now an arc is allowed to contain virtual crossings, and each classical crossing gives a conjugacy relation: $\vbox{\hbox{\includegraphics[height=42.67912pt]{Wir1.pdf}}}\ \leadsto\ \overline{a}\overline{b}a^{\prime}b\hskip 28.45274pt\vbox{\hbox{\includegraphics[height=42.67912pt]{Wir2.pdf}}}\ \leadsto\ \overline{a}ba^{\prime}\overline{b}.\\\ $ It is well-known that this is invariant under welded equivalence. Denote by $L_{1},\ldots,L_{n}$ the components of $L$. For each $i$, label by $a_{i,j}$ ($1\leq j\leq m_{i}+1$) the successive arcs met along $L_{i}$ when following the orientation, where $m_{i}+1$ denotes the number of arcs of $L_{i}$, and by $b_{i,j}$ the generator $a_{k,l}^{\pm 1}$ associated with the overpassing arc that is met when running from $a_{i,j}$ to $a_{i,j+1}$, the sign being given by the local orientation: $\vbox{\hbox{\includegraphics[height=49.79231pt]{Longitude.pdf}}}.$ Then the group of $L$ has a presentation of the form (2.1) $G(L)=\big{\langle}a_{i,j},\ 1\leq i\leq n,\ 1\leq j\leq m_{i}\ \big{|}\ \overline{a}_{i,j+1}\overline{b}_{i,j}a_{i,j}b_{i,j},\ 1\leq i\leq n,\ 1\leq j\leq m_{i}-1\big{\rangle}.$ ###### Definition 2.2. For $1\leq i\leq n$, the _preferred $i$th longitude_ $\lambda_{i}(L)$ is given by $\overline{a}_{i,1}^{f_{i}}\ b_{i,1}\ b_{i,2}\ \ldots\ b_{i,m_{i}-1}$, where $f_{i}$ is the sum of the signs of all classical crossings involving only arcs of the $i$th component. Let us fix some integer $q\geq 2$.
It is well-known that the images $\alpha_{1},\ldots,\alpha_{n}$ of the generators $a_{i,1}$ in the quotient $G(L)\big{/}\Gamma_{q}G(L)$ do generate this group [6]. In particular, the image of the preferred $i$th longitude in $G(L)\big{/}\Gamma_{q}G(L)$ can be expressed as a word, which we still denote by $\lambda_{i}(L)$, in the variables $\alpha_{1},\ldots,\alpha_{n}$. ###### Definition 2.3. For each sequence $I=j_{1}j_{2}\ldots j_{l}i$ of integers in $\\{1,\cdots,n\\}$ ($l<q$), the coefficient $\mu_{L}(I)$ of $X_{j_{1}}\cdots X_{j_{l}}$ in the Magnus expansion $E\big{(}\lambda_{i}(L)\big{)}$ is an invariant of the welded string link $L$, called a _welded Milnor invariant_. ###### Remarks 2.4. 1. (1) These welded Milnor invariants naturally coincide with the classical construction in the case of classical string links. 2. (2) The fact that $\lambda_{i}(L)$ represents the _preferred_ $i$th longitude implies that $E\big{(}\lambda_{i}(L)\big{)}$ does not contain any term of the form $X_{i}^{s}$, for $s\geq 1$, hence that $\mu_{L}(I)=0$ for any sequence of the form $I=ii\cdots i$. Recall from the introduction that, given a sequence $I$, we denote by $r(I)$ the maximum number of times that any index appears in $I$. For classical (string) links, Milnor invariants $\mu(I)$ with $r(I)=1$, that is, non-repeated Milnor invariants, are known to be link-homotopy invariants [21, 12]. Habegger and Lin actually showed that non-repeated Milnor invariants classify string links up to link-homotopy [12], a result that was later extended to the welded setting in [1], where non-repeated Milnor invariants are shown to classify welded string links up to self-virtualization. Here, _self-virtualization_ is the equivalence relation generated by the local replacement of a classical self-crossing by a virtual one – a relation which turns out to coincide with link-homotopy for classical string links (this was implicit in [1] and formally stated in [2, Thm. 4.3]). ### 2.2. The $k$–reduced free action As above, let $L$ be an $n$-component welded string link, with associated group $G(L)$. By definition, $G(L)$ is normally generated by the initial arcs $\alpha_{i}:=a_{i,1}$ of each component. As in the introduction, we can thus consider the _$k$–reduced quotient_ of $G(L)$ $\textnormal{R}_{k}G(L):=G(L)\big{/}\Gamma_{k+1}N_{1}\cdots\Gamma_{k+1}N_{n},$ where $N_{i}$ denotes the normal subgroup generated by $\alpha_{i}$. As a consequence of Lemma 1.4, we have that the group $\textnormal{R}_{k}G(L)$ is nilpotent of order at most $kn+1$. Let $F_{n}$ be the free group generated by $\alpha_{1},\cdots,\alpha_{n}$. We have the following. ###### Lemma 2.5. For each $k\geq 1$, we have an isomorphism $\textnormal{R}_{k}G(L)\simeq\textnormal{R}_{k}F_{n}=F_{n}\big{/}\Gamma_{k+1}N_{1}\cdots\Gamma_{k+1}N_{n}.$ ###### Proof.
As shown in [1, §5] (see also [19, §6.3]), for each $q\geq 1$ we have an isomorphism $G(L)\big{/}\Gamma_{q}G(L)\simeq F_{n}\big{/}\Gamma_{q}F_{n}.$ Hence, by Lemma 1.4, we have $\textnormal{R}_{k}G(L)=\textnormal{R}_{k}\left(G(L)\big{/}\Gamma_{kn+1}G(L)\right)\simeq\textnormal{R}_{k}\left(F_{n}\big{/}\Gamma_{kn+1}F_{n}\right)=\textnormal{R}_{k}F_{n}.\qed$ For each $i$, consider the $i$th preferred longitude of $L$ from Definition 2.2. This defines an element $\lambda^{k}_{i}(L)$ in $\textnormal{R}_{k}G(L)\simeq\textnormal{R}_{k}F_{n}$, called the _$k$–reduced $i$th longitude_ of $L$. Denote by $\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$ the set of conjugating automorphisms of $\textnormal{R}_{k}F_{n}$, which are automorphisms sending each generator to a conjugate of itself. Since $\textnormal{R}_{k}F_{n}$ is nilpotent, by Lemma 1.5, any conjugating endomorphism of $R_{k}F_{n}$ is in $\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$, for all $k\geq 1$. ###### Definition 2.6. The _$k$–reduced free action_ associated with $L$, $\varphi^{(k)}_{L}\in\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n}),$ is defined by sending each generator $\alpha_{i}$ to its conjugate by $\lambda^{k}_{i}(L)$. This action can be seen as a generalization of Habegger-Lin’s representation for the group of link-homotopy classes of string links [12, Thm. 1.7], in the sense that the case $k=1$ recovers their construction. ### 2.3. Milnor invariants and $k$–reduced free action The main purpose of this section is Theorem 2.8, which implies the equivalence (1)$\Leftrightarrow$(2) in our main theorem. Throughout the rest of this paper, we shall make use of the following. ###### Notation 2.7. Recall that $N_{i}$ denotes the normal subgroup of $F_{n}$ generated by the $i$th generator $\alpha_{i}$ ($i\in\\{1,\cdots,n\\}$). For $k\geq 1$, we set $J^{k}:=\Gamma_{k+1}N_{1}\cdots\Gamma_{k+1}N_{n}\,\,\,\textrm{ and }\,\,\,J_{i}^{k}:=\Gamma_{k+1}N_{1}\cdots\Gamma_{k}N_{i}\cdots\Gamma_{k+1}N_{n}.$ Denote also by $R^{k}$ the two-sided ideal of $\mathbb{Z}\langle\langle X_{1},\cdots,X_{n}\rangle\rangle$ generated by all terms having at least $k+1$ occurrences of some variable, and by $R^{k}_{i}$ the ideal generated by terms having either at least $k+1$ occurrences of some variable, or $k$ occurrences of the variable $X_{i}$. ###### Theorem 2.8. Let $L$ and $L^{\prime}$ be two $n$-component welded string links. The following are equivalent: 1. (i) $\mu_{L}(I)=\mu_{L^{\prime}}(I)$ for any sequence $I$ with $r(I)\leq k$. 2. (ii) the $k$–reduced $i$th longitudes $\lambda^{k}_{i}(L)$ and $\lambda^{k}_{i}(L^{\prime})$ are congruent modulo $J_{i}^{k}$, for all $i$. 3. (iii) $L$ and $L^{\prime}$ induce the same $k$–reduced free action $\varphi_{L}^{(k)}=\varphi_{L^{\prime}}^{(k)}\in\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$. The rest of this section is devoted to the proof of Theorem 2.8, which is done in three steps. Firstly, we prove that (ii)$\Rightarrow$(iii).
Suppose that the $k$–reduced $i$th longitudes $\lambda^{k}_{i}(L)$ and $\lambda^{k}_{i}(L^{\prime})$ are congruent modulo $J_{i}^{k}$, for all $i$. We have $\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}=Xg_{i}$, for some $g_{i}\in\Gamma_{k}N_{i}$ and some $X\in\prod_{j\neq i}\Gamma_{k+1}N_{j}$. By (C$3$) we thus have that $\big{[}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})},\alpha_{i}\big{]}=[g_{i},\alpha_{i}]^{\overline{X}}[X,\alpha_{i}]\in J^{k}.$ This readily implies that $\big{[}\alpha_{i},\lambda^{k}_{i}(L)\big{]}\equiv\big{[}\alpha_{i},\lambda^{k}_{i}(L^{\prime})\big{]}$ mod $J^{k}$, which is equivalent to saying that $L$ and $L^{\prime}$ induce the same $k$–reduced free action: $\varphi_{L}^{(k)}=\varphi_{L^{\prime}}^{(k)}\in\operatorname{Aut}_{c}(R_{k}F_{n})$. Secondly, we prove that (iii)$\Rightarrow$(i). If $\varphi_{L}^{(k)}=\varphi_{L^{\prime}}^{(k)}\in\operatorname{Aut}_{c}(R_{k}F_{n})$, then for each $i$ we have $\alpha_{i}^{\lambda^{k}_{i}(L)}\equiv\alpha_{i}^{\lambda^{k}_{i}(L^{\prime})}$ mod $J^{k}$, which is equivalent to $\alpha_{i}\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}\overline{\alpha}_{i}\equiv\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\textrm{ mod $J^{k}$.}$ Taking the Magnus expansion then gives (2.2) $X_{i}E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}-E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}\big{(}X_{i}-X_{i}^{2}+\cdots\big{)}-X_{i}E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}\big{(}X_{i}-X_{i}^{2}+\cdots\big{)}\in R^{k}.$ If $E\big{(}\lambda^{k}_{i}(L)\big{)}-E\big{(}\lambda^{k}_{i}(L^{\prime})\big{)}$ lives in $R^{k}_{i}$, then $L$ and $L^{\prime}$ have same Milnor invariants $\mu(I)$ with $r(I)\leq k$. Suppose by contradiction that $E\big{(}\lambda^{k}_{i}(L)\big{)}-E\big{(}\lambda^{k}_{i}(L^{\prime})\big{)}\equiv S_{q}+\left(\textrm{terms of degree $>q$}\right)\textrm{ mod $R^{k}_{i}$},$ for some $q$ and a sum $S_{q}$ of degree $q$ terms which are _not_ in $R^{k}_{i}$. Multiplying the above by $E\big{(}\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}$, which by definition has constant term equal to $1$, we have $E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}\equiv 1+S_{q}+\left(\textrm{terms of degree $>q$}\right)\textrm{ mod $R^{k}_{i}$}.$ Equation (2.2) then gives $X_{i}E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}-E\big{(}\lambda^{k}_{i}(L)\overline{\lambda^{k}_{i}(L^{\prime})}\big{)}X_{i}\equiv X_{i}S_{q}-S_{q}X_{i}+\left(\textrm{terms of degree $>q+1$}\right)\textrm{ mod $R^{k}_{i}$}.$ Observe that $X_{i}S_{q}$ and $S_{q}X_{i}$ are not in $R^{k}$, since $S_{q}\notin R_{i}^{k}$. Hence, if $X_{i}S_{q}\neq S_{q}X_{i}$, we obtain a contradiction with (2.2). But if $X_{i}S_{q}=S_{q}X_{i}$, a simple combinatorial argument shows that $S_{q}$ can be nothing else than the monomial $X_{i}^{q}$. This tells us that $E\big{(}\lambda^{k}_{i}(L)\big{)}-E\big{(}\lambda^{k}_{i}(L^{\prime})\big{)}$ contains the term $X_{i}^{q}$, which is in contradiction with the fact that $\lambda^{k}_{i}(L)$ and $\lambda^{k}_{i}(L^{\prime})$ are images of _preferred_ $i$th longitudes, see Remark 2.4 (ii). Thirdly and lastly, let us prove that (i)$\Rightarrow$(ii). For this purpose, we prove the following, which should be thought of as a ‘$k$–reduced’ version of the injective property of the Magnus expansion (Theorem 1.10). ###### Proposition 2.9. Let $g$ be an element of $F_{n}$. 
Then $g\in J_{i}^{k}$ if and only if $E(g)\in 1+R_{i}^{k}$. Assuming this result momentarily, we immediately deduce the desired implication (i)$\Rightarrow$(ii) of Theorem 2.8. Indeed, consider two welded string links $L$ and $L^{\prime}$ with same Milnor invariants $\mu(I)$ with $r(I)\leq k$. This precisely means that, for each $i$, their respective $i$th preferred longitudes $\lambda^{k}_{i}(L)$ and $\lambda^{k}_{i}(L^{\prime})$ satisfy $E\big{(}\lambda^{k}_{i}(L^{\prime})\big{)}\equiv E\big{(}\lambda^{k}_{i}(L)\big{)}\textrm{ mod $R_{i}^{k}$}.$ This rewrites as $E\big{(}\overline{\lambda^{k}_{i}(L)}\lambda^{k}_{i}(L^{\prime})\big{)}=1+R_{i}^{k}$, which by Proposition 2.9 gives that $\overline{\lambda^{k}_{i}(L)}\lambda^{k}_{i}(L^{\prime})\in J_{i}^{k}$, as desired. ###### Proof of Proposition 2.9. The ‘only if’ part of the statement follows easily from well-known properties of the Magnus expansion [18], so we prove here the other implication. By Theorem 1.7, there is a set of basic commutators $\\{C_{i}\\}_{i}$ such that $g=C_{1}^{e_{1}}\cdots C_{N(nk)}^{e_{N(nk)}}h,$ for some unique integers $e_{1},\cdots,e_{N(nk)}\in\mathbb{Z}$ and a unique element $h\in\Gamma_{nk+1}F_{n}$. Set $g_{j}:=\prod_{\textrm{length$(C_{j_{m}})=j$}}C_{j_{m}}^{e_{j_{m}}}$, so that $g=g_{1}\cdots g_{kn}h$. Since $h\in J_{i}^{k}$, it remains to show that $g_{j}\in J_{i}^{k}$ for all $j$. Suppose by contradiction that this is not the case, and let $p$ be the smallest integer such that $g_{p}=\prod_{m=1}^{N}C_{p_{m}}^{e_{p_{m}}}$ is not in $J_{i}^{k}$ ($N\geq 1$). This means that there is a nonempty subset $S$ of $\\{1,\cdots,N\\}$ such that $e_{p_{l}}\neq 0$ and $C_{p_{l}}\notin J_{i}^{k}$, for all $l\in S$. Lemma 1.11 tells us that, for each $m$, the principal part of $E(C_{p_{m}})$ is either in $R_{i}^{k}$, or is in the $\mathbb{Z}$-module generated by monomials with $<(k-1)$ copies of $X_{i}$ and $<k$ copies of any other variable. This implies in particular that the sum of the principal parts of $E\big{(}\prod_{l\in S}C_{p_{l}}^{e_{p_{l}}}\big{)}$ is not in $R_{i}^{k}\setminus\\{0\\}$. But as a property of basic commutators, we know that these principal parts are linearly independent [18]; it follows that $E(g_{p})-1$ is non trivial and not in $R_{i}^{k}$. By minimality of $p$, this implies that $E(g)-1$ is not in $R_{i}^{k}$. This is a contradiction, which concludes the proof. ∎ ###### Remark 2.10. The case $k=1$ of Proposition 2.9 was previously established in [28, Prop. 7.10]. One can actually further generalize this result as follows. Let $\mathbf{r}=(r_{1},\cdots,r_{n})$ be an $n$-tuple of positive integers. Consider the subgroup $J^{\mathbf{r}}:=\Gamma_{r_{1}}N_{1}\cdots\Gamma_{r_{n}}N_{n}$ of $F_{n}$, and the ideal $R^{\mathbf{r}}$ of $\mathbb{Z}\langle\langle X_{1},\cdots,X_{n}\rangle\rangle$ generated by all terms where the variable $X_{i}$ appears at least $r_{i}$ times, for $i=1,\cdots,n$. Then the above proof generalizes in a straightforward way to show that an element $g\in F_{n}$ is in $J^{\mathbf{r}}$ if and only if $E(g)$ is in $1+R^{\mathbf{r}}$. ## 3\. Arrow calculus and self $w_{k}$-concordance ### 3.1. Arrow calculus for welded objects Let us briefly review arrow calculus, which is diagrammatic calculus developped in [19] for the study of welded objects. In Section 3.2, we explain how this diagrammatic tool is intimately related to the commutator calculus reviewed in Section 1. Let $L$ be an $n$-component welded string link. ###### Definition 3.1. 
A _$w$ -tree_ for $L$ is a planar immersion of an oriented, connected uni- trivalent tree $T$, such that * • the set of all vertices of $T$ is embedded in the interior of $[0,1]\times[0,1]$, such that trivalent vertices are disjoint from $L$ and univalent vertices are in $L\setminus\\{\textrm{crossings of $L$}\\}$; * • at each trivalent vertex, the three incident edges are cyclically ordered, and there are two ingoing and one outgoing edge; * • edges of $T$ may cross virtually $L$ or $T$ itself; * • edges are possibly decorated by some _twists_ $\bullet$, which are disjoint from all crossings and vertices, and which satisfy the rule . A univalent vertex of $T$ is a _head_ if $T$ is locally oriented towards $L$, and it is a _tail_ otherwise. For a union of $w$-trees for $L$, we allow virtual crossings among edges, and we require all vertices to be pairwise disjoint. See Figure 2 for an example. We note that a $w$-tree contains a single head, due to the orientation convention at trivalent vertices. Figure 2. An example of $w$-tree for the trivial $3$–component string link ###### Definition 3.2. A $w$-tree is _linear_ , if it has the following shape as an abstract tree with possibly a number of $\bullet$ on its edges. ###### Definition 3.3. The _degree_ of $T$ is its number of tails or, equivalently, half the total number of vertices. A degree $k$ $w$-tree is called a _$w_{k}$ -tree_, and a $w_{1}$-tree is also called a $w$-arrow. Given a union of $w$-arrows $A$ for $L$, there is a notion of _surgery along $A$_, which produces a new welded string link $L_{A}$ as follows: . In general, if $A$ contains some virtual crossing, this likewise introduces pairs of virtual crossings in $L_{A}$. Now, given a union of $w_{k}$-trees $P$ for $L$, one can define the notion of surgery along $P$ by first replacing $P$ by a union of $w$-arrows, called the _expansion_ of $P$ and denoted by $E(P)$, defined recursively by the local rule illustrated below, then performing surgery on $E(P)$. The result will be denoted by $L_{P}$. . ###### Remark 3.4. The above figure suggests that the union $E(P)$ of $w$-arrows obtained from $P$ by applying (E) recursively, has the shape of an ‘iterated commutator’ of $w$-arrows. This observation is made rigourous and further discussed in Section 3.2. ###### Definition 3.5. A _$w$ -tree presentation_ $(\mathbf{1},P)$ for $L$ is a union $P$ of $w$-trees for the trivial diagram $\mathbf{1}$, such that $L=\mathbf{1}_{P}$. Two arrow presentations $(\mathbf{1},P)$ and $(\mathbf{1},P^{\prime})$ representing equivalent welded string links are called _equivalent_. The main point of this notion is that any welded string link admits a $w$-tree presentation [19, Prop. 4.2]. Moreover, in [19, Thm. 5.21], a set of moves on $w$-trees is provided, which suffice to deform any $w$-tree presentation of $L$ into any other. These moves imply further operations, hence a full diagrammatic calculus called _arrow calculus_ , which can be used to study welded objects and their invariants. We do not reproduce all these moves here, but only provide below those that will be needed in this paper : We refer the reader to [19] for more details on arrow calculus. We will also use the following, which refines the notion of equivalence given in Definition 3.5. ###### Definition 3.6. Let $(\mathbf{1},P)$ be a $w$-tree presentation for $L$, and let $S$ be a subset of $P$. 
Denote by $\sigma\subset\mathbf{1}$ a neighborhood of the endpoints of $S$, which identifies with a trivial diagram, such that $\sigma$ is disjoint from $P\setminus S$. Let $S^{\prime}$ be a union of $w$-trees for $\sigma$. We say that $S$ and $S^{\prime}$ are _locally equivalent_ whenever $\sigma_{S^{\prime}}$ is welded equivalent to $\sigma_{S}$. Note that, in this setting, $(\mathbf{1},P)$ and $\big{(}\mathbf{1},(P\setminus S)\cup S^{\prime}\big{)}$ are equivalent $w$-tree presentations. ### 3.2. Algebraic formalism for $w$-trees Let $(\mathbf{1},P)$ be a $w$-tree presentation for a welded diagram. Let $A$ be a union of $w$-arrows in $P$ that are _adjacent_ , in the sense that _all_ heads in $A$ are met consecutively on a portion $h_{A}$ of $\mathbf{1}$, called the _support_ of $A$, without meeting any crossing or endpoint.111We shall also say in this situation that the heads of $A$ are _adjacent_. The _complement_ of $A$ is then defined as the $w$-tree presentation $(\mathbf{1},P\setminus A)$ obtained from $(\mathbf{1},P)$ by removing all $w$-arrows in $A$. The support $h_{A}$ and the arrow heads of $P\setminus A$ then cut the strands of $\mathbf{1}$ into intervals that we label by letters $x_{1},\ldots,x_{p}$; we set $F_{(P;A)}:=\langle x_{1},\ldots,x_{p}\rangle$, the free group generated by these letters. $\vbox{\hbox{\includegraphics[height=85.35826pt]{Cut1.pdf}}}\ \leadsto\ \vbox{\hbox{\includegraphics[height=85.35826pt]{Cut2.pdf}}}\ \leadsto\ \ w(A)=\overline{x}_{3}x_{5}x_{4}\overline{x}_{6}$ Figure 3. An example of word associated to a set of adjacent $w$-arrows Assuming that all arrow heads are met to the right side when traveling along $h_{A}$ following its orientation (this is always possible thanks to the Head reversal move), each head of an arrow in $A$ can be labeled222It should be noted that replacing arrows by labels corresponds actually to the _cut- diagram_ point of view on welded objects, introduced in [4]. by the letter $x_{i}$ or $\overline{x}_{i}$, depending on whether the arrow contains an even or an odd number of twists, where $x_{i}$ is the label of the interval on which the tail lies. We can then define an element $w(A)\in F_{(P;A)}$ by reading the labels in order when running along $h_{A}$ according to its orientation; see Figure 3 for an example. Notice that this _word_ $w(A)$ remains unchanged under a Tails exchange move and that, conversely, it determines the union of $w$-arrows $A$ up to Tails exchange moves. More generally, we have the following. ###### Lemma 3.7. Suppose that two $w$-tree presentations $(\mathbf{1},P)$ and $(\mathbf{1},P^{\prime})$ only differ by replacing a union of adjacent arrows $A$ by another union of adjacent arrows $A^{\prime}$ with same support. Then $F_{(P;A)}=F_{(P^{\prime};A^{\prime})}$, and $w(A)=w(A^{\prime})$ if and only if $A$ and $A^{\prime}$ differ by a sequence of Inverse moves and Tails exchange moves. ###### Proof. First note that $A$ and $A^{\prime}$ have same complement, hence we indeed have that $F_{(P;A)}=F_{(P^{\prime};A^{\prime})}$. Since this is a free group, we have $w(A)=w(A^{\prime})$ if and only if these two words differ by inserting or deleting copies of $x_{j}\overline{x}_{j}$ or $\overline{x}_{j}x_{j}$ for any $j$. But this is equivalent to saying that $A$ and $A^{\prime}$ differ by a sequence of Inverse moves (with pairs of $w$-arrows) and Tails exchange moves. 
∎ Lemma 3.7 provides a one-to-one correspondence between sets of adjacent arrows with a fixed support and complement, up to equivalence, and elements in the associated free group. Under this correspondence, our conventions for the commutator notation $[x,y]$ and the conjugation notation $x^{y}$, given in Section 1.1, have natural diagrammatic counterparts. It is easily observed that a commutator corresponds to the expansion of a $w$–tree: $x\overline{y}\overline{x}y=[x,y]\quad\leftrightarrow\quad\vbox{\hbox{\includegraphics[height=42.67912pt]{com1.pdf}}}\,=\,\vbox{\hbox{\includegraphics[height=42.67912pt]{com0.pdf}}}.$ In general, to a $w$-tree $T$ which is part of a $w$-tree presentation $(\mathbf{1},P)$, corresponds a word $w(T)\in F_{\left(P;E(T)\right)}$, which is defined as the word corresponding to its expansion; in other words, we set $w(T)=w\big{(}E(T)\big{)}$. From the definition, the word $w(T)$ can be directly read from $T$ using the following procedure. Label each edge of $T$ containing a tail by the generator $x_{i}$ at the tail, then label the remaining edges of $T$ by recursively applying the local rules of Figure 4; the label at the edge containing the head is $w(T)$; see Example 3.9 (i). Figure 4. Procedure to compute $w(T)$ from a $w$-tree Note that, under this correspondence, the word associated to a linear $w$-tree (Definition 3.2), is a linear commutator with entries in $\\{x_{i},\overline{x}_{i}\\}_{i}$, in the sense of Definition 1.1. The conjugation notation, on the other hand, corresponds to a situation where the Slide move can be performed: $\overline{y}xy=x^{y}\quad\leftrightarrow\quad\vbox{\hbox{\includegraphics[height=42.67912pt]{com2.pdf}}}\,=\,\vbox{\hbox{\includegraphics[height=42.67912pt]{com3.pdf}}},$ Since two $w$-tree presentations that differ by a Slide move are locally equivalent, the above rightmost picture can be seen as the diagrammatic counterpart of notation $x^{y}$ in our correspondence. #### 3.2.1. Conjugated trees Combining these two observations, we can thus extend our correspondence to a wider range of $w$-tree presentations: ###### Definition 3.8. A $w$-tree $T$ for $\mathbf{1}$ is called _conjugated_ if there exists a union $U$ of _pairs of conjugating $w$-arrows_ for $T$. Here, a pair of conjugating $w$-arrows for $T$ is a union of two parallel $w$-arrows that only differ by a twist, and whose heads are on the same component of $\mathbf{1}$ and only separated by a tail of $T$, and possibly other nested pairs of conjugating $w$-arrows. See Example 3.9. Let $T$ be a conjugated $w$-tree $T$ with union $U$ of conjugating $w$-arrows, in a $w$-tree presentation $(\mathbf{1},P)$. One can define a corresponding word $w^{U}(T)\in F_{(P\setminus U;E(T))}$, which is the word associated with the union of adjacent $w$-arrows obtained by taking the expansion of $T$ and applying Inverse moves and Slide moves333Since each tail of $T$ yields a _union_ of $w$-arrows after expansion, one first need to use the Inverse move several times between these tails to be able to perform the Slide moves. to $w$-arrows in $U$. This word $w^{U}(T)$ can be directly read off $T\cup U$, by substituting each letter $a$ in $w(T)\in F_{(P\setminus U;E(T))}$ by its conjugate $a^{w(X)}$ whenever the corresponding tail of $T$ admits a union of conjugating $w$-arrows $X\cup\overline{X}\subset U$, where $X$ denotes the union of $w$-arrows met before the tail according to the orientation (and $\overline{X}$ are the $w$-arrows met after the tail). 
Let us illustrate this concretely on an example: ###### Example 3.9. The $w_{4}$-tree $T$ shown below is a conjugated tree (here, the labels $a$, $b$, $c$, $d$ and $e$ are not necessarily mutually distinct). (i) Ignoring all conjugating $w$-arrow, we have that $w(T)=\big{[}[a,b],[d,e]\big{]}$. (ii) Denoting by $U$ the union of conjugating $w$-arrows for $T$, we have $w^{U}(T)=\big{[}[a^{b},b^{\overline{c}}],[d^{ec},e]\big{]}.$ ###### Remark 3.10. We stress that, for a $w_{k}$–tree $T$, $w^{U}(T)$ is a length $k$ commutator whose entries are conjugates of $\big{\\{}x_{i},\overline{x}_{i}\big{\\}}_{i}$. Conversely, it follows from the above that for any length $k$ commutator $w\in F_{n}$ whose entries are conjugates of $\\{x_{i},\overline{x}_{i}\\}_{i}$, there exists a $w_{k}$-tree $T$ and a union $U$ of conjugating arrows such that $w=w^{U}(T)$. This applies more generaly to _products_ of such commutators, which then correspond to _adjacent_ conjugated trees, _i.e._ conjugated trees with adjacent heads. ###### Remark 3.11. Deleting a conjugated tree $T$, in the notation of Definition 3.8, yields the union $U$ of conjugating $w$-arrows, which can in turn all be deleted by using the Inverse move. For this paper, we will need the following, which is a direct translation of Lemma 1.3 via the above correspondence: ###### Lemma 3.12. Let $T$ be a $w_{l}$-tree in some union of $w$-trees for $\mathbf{1}$, such that the $i$th component of $\mathbf{1}$ contains $k$ tails of $T$ ($k\leq l$). Then $T$ is locally equivalent to a union of adjacent conjugated linear $w_{k}$-trees, with all tails on the $i$th component. #### 3.2.2. Relation to the welded group Finally, let us recall how, given a $w$-tree presentation $(\mathbf{1},P)$ for $L$, one can associate a presentation for the group $G(L)$ defined in Section 2.1, using our algebraic formalism. Again, by the Head reversal move, one can freely assume that all heads of $P$ are attached to the right side of $\mathbf{1}$ according to the orientation. The heads of $P$ split $\mathbf{1}$ into a union of arcs, each of which yields a generator $x_{i}$, and we denote by $F_{P}$ the free group generated by $\big{\\{}x_{i}\big{\\}}_{i}$. Now, let $T$ be a single $w_{k}$-tree in $P$, and denote by $a$ and $b$ the two generators of $F_{P}$ associated with the arcs to the left and right of its head, respectively. Following the above, a word $w(T)$ is defined by expanding $T$ into a union $E(T)$ of adjacent $w$-arrows, and writing the associated word. This $w(T)$ naturally lives in $F_{P}$, since the complement of $h_{E(T)}$ is obtained by just deleting a neighborhood of the head of $T$. Figure 5 then illustrates how $T$ yields a conjugation relation $R_{T}$:$\leavevmode\nobreak\ b=\overline{w(T)}aw(T)$ among the two generators $a,b$ separated by its head. Figure 5. Conjugating relation $R_{T}$ associated with $T$ In fact, we obtain in this way the following presentation (see [19, § 6.1.1]): $G(L)=\langle\\{x_{i}\\}_{i}\,|\,\\{R_{T}\\}_{T\in P}\rangle.$ In particular, the $i$th preferred longitude of $L$ from Definition 2.2, can be written as $\lambda_{i}(L)=\alpha_{i}^{s_{i}}\prod_{T\in P_{i}}w(T)$, for some $s_{i}\in\mathbb{Z}$, where $P_{i}$ is the subset of $w$-trees in $P$ whose heads are on the $i$th component of $\mathbf{1}$, ordered according to their occurence on the $i$th component when following the orientation. ###### Remark 3.13. 
Observe that, in the notation of Figure 5, if the head of $T$ is on the $i$th component then $a,b\in N_{i}$, and if moreover $w(T)\in\Gamma_{k}N_{i}$ for some $k$, then we have: $b=\overline{w(T)}aw(T)=a\big{[}\overline{a},w(T)\big{]}\equiv a\textrm{ mod $\Gamma_{k+1}N_{i}$}$. #### 3.2.3. Self $w_{k}$-equivalence and $w^{(k+1)}$-equivalence This section is concerned with the following families of equivalence relations for welded objects. ###### Definition 3.14. Let $k\geq 1$. The _$w_{k}$ -equivalence_, resp. _self $w_{k}$-equivalence_, is the equivalence relation generated by welded equivalence and surgery along $w$-trees, resp. self $w$-trees, of degree $\geq k$. Here, a _self $w$-tree_ is a $w$-tree whose endpoints all lie on a same component. The _$w^{(k+1)}$ -equivalence_ is the equivalence relation generated by welded equivalence and surgery along $w$-trees having at least $k+1$ endpoints on a same component. ###### Remark 3.15. Given a $w_{kn+1}$-tree $T$ for some $n$-component string link, there is necessarily some index $i$ such that $T$ has at least $k+1$ ends on the $i$th component. This elementary observation shows that the $w_{kn+1}$-equivalence implies the $w^{(k+1)}$-equivalence. The following will play a central role in proving our main theorem, but might also be of independent interest in the study of arrow calculus. ###### Theorem 3.16. Two welded string links are self $w_{k}$-equivalent, if and only if they are $w^{(k+1)}$-equivalent. ###### Proof. Since a self $w$-tree of degree $\geq k$ has at least $k+1$ endpoints, which are all attached to a same component, we clearly have the ‘only if’ part of the statement. In order to prove the ‘if’ part, consider a $w$-tree $T^{\prime}$ for an $n$-component welded string link $L$, with $k+1$ endpoints on the $i$th component $L_{i}$ of $L$. We distinguish two cases. If the head of $T$ is on $L_{i}$, then by Lemma 3.12, it is locally equivalent to a union of conjugated self $w_{k}$-trees on the $i$th component. Hence in this case, $L_{T}$ is clearly self $w_{k}$-equivalent to $L$ by Remark 3.11. Now, if the head of $T$ is on the $j$th component $L_{j}$ of $L$ for some $j\neq i$, then Lemma 3.12 more precisely tells us that $T$ is locally equivalent to a union of conjugated linear $w_{k+1}$-trees, with all tails on the $i$th component and with head on the $j$th component. Consider one such linear $w$-tree, as on the left-hand side of the figure below: This figure shows how applying the expansion move (E) to such a tree, followed by the Slide move, yields a union of two $w$-arrows $A\cup A^{\prime}$ and two self $w_{k}$-trees $B\cup B^{\prime}$ on the $i$th component. Deleting $B\cup B^{\prime}$ up to self $w_{k}$-equivalence, then deleting $A\cup A^{\prime}$ by the Inverse move, yields the empty diagram. This observation, together with Remark 3.11, shows that $L_{T}$ is self $w_{k}$-equivalent to $L$. ∎ ###### Remark 3.17. The argument of this proof can be used to show that the (self) $w_{k}$-equivalence is generated by welded equivalence and surgery along (self) $w_{k}$-trees, rather than (self) $w$-trees of degree $\geq k$. ### 3.3. Self $w_{k}$-concordance and $w^{(k+1)}$-concordance The main purpose of this section is Theorem 3.19, which implies the equivalence (1)$\Leftrightarrow$(3) in our main theorem. First, we introduce the self $w_{k}$-concordance relation, along with the $w^{(k+1)}$-concordance equivalence. The notion of concordance for welded links was introduced and studied in [5, 10]. 
Two $n$-component welded string links $L$ and $L^{\prime}$ are _welded concordant_ if one can be obtained from the other by a sequence of welded equivalence and the birth/death and saddle moves of Figure 6, such that, for each $i\in\\{1,\cdots,n\\}$, the number of birth/death moves used to deform the $i$th component of $L$ into the $i$th component of $L^{\prime}$ is equal to the number of saddle moves. Figure 6. The birth/death and saddle moves ###### Definition 3.18. Let $k\geq 1$. The _$w_{k}$ -concordance_, resp. _self $w_{k}$-concordance_, is the equivalence relation generated by welded concordance and $w_{k}$-equivalence, resp. self $w_{k}$-equivalence. The _$w^{(k+1)}$ -concordance_ is the equivalence relation generated by $w^{(k+1)}$-equivalence and welded concordance. The following is the main result of this section. ###### Theorem 3.19. Let $L$ and $L^{\prime}$ be two $n$-component welded string links. The following are equivalent: 1. (i) $\mu_{L}(I)=\mu_{L^{\prime}}(I)$ for any sequence $I$ with $r(I)\leq k$. 2. (ii) $L$ and $L^{\prime}$ are $w^{(k+1)}$-concordant. 3. (iii) $L$ and $L^{\prime}$ are self $w_{k}$-concordant. The rest of this section is devoted to the proof of Theorem 3.19. We already have directly that (ii) and (iii) are equivalent, by Theorem 3.16. In the rest of the proof, we use the fact from Theorem 2.8 that (i) is equivalent to saying that the $k$–reduced $i$th longitudes $\lambda^{k}_{i}(L)$ and $\lambda^{k}_{i}(L^{\prime})$ are congruent modulo $J_{i}^{k}=\Gamma_{k+1}N_{1}\cdots\Gamma_{k}N_{i}\cdots\Gamma_{k+1}N_{n}$ for all $i$. Let us prove that (iii) implies (i). It is shown in [4] that welded Milnor invariants are invariant under welded concordance. So it suffices to show that, if $L^{\prime}$ is obtained from $L$ by surgery along a self $w_{k}$-tree $T$, we have $\lambda^{k}_{i}(L^{\prime})\equiv\lambda^{k}_{i}(L)\mod J_{i}^{k}$ for all $i$. Suppose that all endpoints of $T$ are on the $i_{0}$th component of $L$ for some $i_{0}$. Then $w(T)\in\Gamma_{k}N_{i_{0}}$, and by Remark 3.13 we have directly that $\lambda^{k}_{i}(L^{\prime})\equiv\lambda^{k}_{i}(L)\mod\Gamma_{k+1}N_{i_{0}}(\subset J_{i}^{k})$ for all $i\neq i_{0}$. Furthermore, there exist words $a,b$ in $F_{n}$ such that $\lambda^{k}_{i_{0}}(L)=ab$ and, again by Remark 3.13, $\lambda^{k}_{i_{0}}(L^{\prime})\equiv aw(T)b\mod\Gamma_{k+1}N_{i_{0}}$. This implies that $\lambda^{k}_{i_{0}}(L^{\prime})\equiv\lambda^{k}_{i_{0}}(L)\mod J_{i_{0}}^{k}$, as desired. Finally, we prove that (i) implies (ii). By assumption, and using Lemmas 2.5 and 1.2, the $k$–reduced $i$th preferred longitudes $\lambda^{k}_{i}(L^{\prime})$ and $\lambda^{k}_{i}(L)$ differ by a finite sequence of the following operations: * (a) inserting or deleting copies of $\alpha_{j}\overline{\alpha}_{j}$ or $\overline{\alpha}_{j}\alpha_{j}$ for any $j$; * (b) inserting or deleting length $\geq k+1$ commutators with entries in $\big{\\{}\alpha_{l};\overline{\alpha}_{l}\big{\\}}_{l}$, and with at least $k+1$ entries with some index $j\neq i$; * (c) inserting or deleting length $\geq k$ commutators with entries in $\big{\\{}\alpha_{l};\overline{\alpha}_{l}\big{\\}}_{l}$, and with at least $k$ entries with index $i$. We aim to show that, under this assumption, $L$ and $L^{\prime}$ are $w^{(k+1)}$-equivalent. As a first step, let us assume that both $L$ and $L^{\prime}$ are given by a so-called sorted $w$-tree presentation: ###### Definition 3.20. 
A $w$-tree presentation for an $n$-component welded string link is _sorted_ 444This notion was first defined in [1] under the term _ascending_ , in the context of Gauss diagrams. if, when running along each component following the orientation, all tails are met before all heads. Given a sorted presentation $(\mathbf{1},P)$, the tails of all $w_{k}$-trees are contained in the initial arcs of $\mathbf{1}$, hence the words associated to these $w_{k}$-trees are length $k$ commutator with entries in $\big{\\{}\alpha_{l};\overline{\alpha}_{l}\big{\\}}_{l}$. Denote by $P_{i}$ the subset of $w$-trees in $P$ whose heads are on the $i$th component. According to Section 3.2.2, there exists some $s_{i}\in\mathbb{Z}$ such that the $i$th preferred longitude is equal to $\alpha_{i}^{s_{i}}\prod_{T\in P_{i}}w(T)$. Using the Isolated move, we can introduce a union $A_{i}$ of $w$-arrows whose endpoints are all on the $i$th component, such that $(\mathbf{1},P\cup A_{i})$ is sorted and equivalent to $(\mathbf{1},P)$, and such that the $i$th preferred longitude is equal to $\prod_{T\in A_{i}\cup P_{i}}w(T)$. By this observation, we can freely assume that the words associated with our sorted presentations, are precisely the preferred longitudes of our string links $L$ and $L^{\prime}$. Hence by Lemma 3.7, if $\lambda_{i}(L)$ and $\lambda_{i}(L^{\prime})$ are equal as words in $\alpha_{1},\cdots,\alpha_{n}$, then applying (E) recursively to these sorted presentations, yields the _same_ union of $w$-arrows. It thus remains to realize the above three operations (a), (b) and (c) by a $w^{(k+1)}$-equivalence among sorted tree presentations. This is done as follows: * • Operation (a) is simply achieved by Inverse moves, which insert a pair of parallel $w$-arrows, one having a $\bullet$, running from the bottom of the $j$th component and ending on the $i$th component. * • Operation (b), resp. (c), is achieved by inserting or deleting sorted $w$-trees, with head on the $i$th component, and with at least $k+1$ tails on the $j$th component, resp. with at least $k$ tails on the $i$th component. This shows that $L$ and $L^{\prime}$ are $w^{(k+1)}$-equivalent, and we are done in the sorted case. In order to conclude the proof, it is now enough to show that any welded string link is $w^{(k+1)}$-concordant to a string link with a sorted presentation, since Milnor invariants $\mu(I)$ with $r(I)\leq k$ are invariants of $w^{(k+1)}$-concordance. This can be seen as a direct corollary of [7, Cor. 2.5], which tells us that any $w$-tree presentation can be deformed into a sorted one by a sequence of welded concordance and surgeries along $w_{nk+1}$-trees. The desired result then follows by Remark 3.15. ### 3.4. Artin-like isomorphisms The stacking product endows the set $w\textnormal{S}L(n)$ of $n$-component welded string links with a structure of monoid, with unit the trivial diagram $\mathbf{1}$. As seen in subsection 2.2, a conjugating automorphism $\varphi^{(k)}_{L}\in\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$ is associated to any welded string link $L$, which sends each generator $\alpha_{i}$ of $R_{k}F_{n}$ to its conjugate by the $k$-reduced longitude $\lambda^{k}_{i}(L)$. This yields, for each $k\geq 1$, a monoid homomorphism $\varphi^{(k)}:w\textnormal{S}L(n)\longrightarrow\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n}).$ ###### Proposition 3.21. 
For each $k\geq 1$, the map $\varphi^{(k)}$ descends to a group isomorphism $w\textnormal{S}L(n)\big{/}\textrm{self $w_{k}$-concordance}\stackrel{{\scriptstyle\simeq}}{{\longrightarrow}}\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n}).$ ###### Proof. Consider the quotient map $w\textnormal{S}L(n)\big{/}\textrm{self $w_{k}$-concordance}\longrightarrow\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$, which we shall still denote by $\varphi^{(k)}$. The fact that this map is injective is a consequence of Theorems 3.19 and 2.8. To prove surjectivity, it is sufficient to observe that an element of $\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$ is specified by an $n$-tuple of conjugating elements in $\textnormal{R}_{k}F_{n}$, and that these conjugating elements are easily realized as the preferred longitudes of a sorted welded string link; see the paragraph following Def. 3.20. Since $\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$ is a group, it follows that $w\textnormal{S}L(n)\big{/}\textrm{self $w_{k}$-concordance}$ is a group too (this fact was already known, as any welded string link is invertible up to concordance [10, Prop. 6]), and that the isomorphism is a group isomorphism. ∎ ###### Remark 3.22. Using the result of Colombari [7] mentioned in the introduction, the proof of Proposition 3.21 can be adapted in a straightforward way to show that, for all $k\geq 1$, we have an isomorphism $w\textnormal{S}L(n)\big{/}\textrm{$w_{k}$-concordance}\stackrel{{\scriptstyle\simeq}}{{\longrightarrow}}\operatorname{Aut}_{c}\\!\left(F_{n}\big{/}\Gamma_{k+1}F_{n}\right),$ where $\operatorname{Aut}_{c}\\!\left(F_{n}\big{/}\Gamma_{k+1}F_{n}\right)$ denotes the group of conjugating automorphisms of $F_{n}\big{/}\Gamma_{k+1}F_{n}$. We conclude this section with a (long) remark addressing the 4-dimensional counterpart of this study, which can be safely skipped by the reader who is not interested in this matter. ###### Remark 3.23. Welded theory is intimately connected to the topology of _ribbon knotted surfaces_ in $4$–space, via the so-called _Tube map_ defined by Satoh in [23]. In our context, the Tube map is a surjective monoid homomorphism $\textrm{Tube}:w\textnormal{S}L(n)\longrightarrow rT(n),$ where $rT(n)$ denotes the monoid of _ribbon tubes_ introduced in [1]. The question of the injectivity of this Tube map is however still open, but it is worth noting that our present work implies that any element in the kernel of the Tube map is self $w_{k}$-concordant to $\mathbf{1}$ for all $k$. 
Indeed, there is a $4$–dimensional analogue of arrow calculus, which was developped (prior to [19]) by Watanabe in [24]. There, a notion of $RC_{k}$-equivalence is defined for ribbon knotted surfaces, in terms of surgery along degree $k$ oriented claspers, which are embedded surfaces that realize topologically an oriented uni-trivalent tree with $2k$ vertices. A refined notion of _self $RC_{k}$-equivalence_ is easily defined on $rT(n)$ by further requesting that such degree $k$ oriented claspers only intersect a single ribbon tube component. Combining furthermore this self $RC_{k}$-equivalence with the topological concordance of ribbon tubes, one defines a notion of _self $RC_{k}$-concordance_ for these objects. It is known [5, Prop. 4.8] that the Tube map sends concordant welded links to concordant ribbon tori, and it is easily verified that it sends self $w_{k}$-concordant welded string links to self $RC_{k}$-concordant ribbon tubes. Hence, for each $k\geq 1$, the Tube map induces a surjective homomorphism $\textrm{Tube}_{k}:\hbox{\leavevmode\kern 1.00006pt\raise 1.07639pt\hbox{\sevenrm$w\textnormal{S}L(n)$}\kern-1.00006pt}\big{/}{\hbox{\kern-1.49994pt\lower 2.15277pt\hbox{\sevenrm$\textrm{self $w_{k}$-concordance}$}}}\longrightarrow\hbox{\leavevmode\kern 1.00006pt\raise 1.07639pt\hbox{\sevenrm$rT(n)$}\kern-1.00006pt}\big{/}{\hbox{\kern-1.49994pt\lower 2.15277pt\hbox{\sevenrm$\textrm{self $RC_{k}$-concordance}$}}}.$ These maps are actually isomorphisms, and the proof goes as follows. Following [1, Sec. 2.2.4], which deals with the $k=1$ case, one can use Stallings theorem to associate an action $\varphi^{(k)}_{T}\in\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n})$ for any ribbon tube $T$, which actually conjugates, in $\textnormal{R}_{k}\pi_{1}(B^{4}\setminus T)\cong\textnormal{R}_{k}F_{n}$, the $i$th meridians of $T$ by its $i$th preferred longitude. By another use of Stallings theorem in dimension one more, this action is invariant under concordance, and it can be directly seen that $RC_{k}$-equivalent ribbon tubes have $k$-reduced $i$th longitudes which are congruent modulo $J_{i}^{k}$ (borrowing notation from Thm. 2.8). As the Tube map is known to preserve the associated groups and longitudes, this leads to a map $\varphi^{(k)}_{r}:\hbox{\leavevmode\kern 1.00006pt\raise 1.07639pt\hbox{\sevenrm$rT(n)$}\kern-1.00006pt}\big{/}{\hbox{\kern-1.49994pt\lower 2.15277pt\hbox{\sevenrm$\textrm{self $RC_{k}$-concordance}$}}}\longrightarrow\operatorname{Aut}_{c}(\textnormal{R}_{k}F_{n}),$ which satisfies $\varphi^{(k)}=\varphi^{(k)}_{r}\circ\textrm{Tube}_{k}$. It follows then from the injectivity of $\varphi^{(k)}$ that $\textrm{Tube}_{k}$ is injective, hence an isomorphism. This implies that an element in the kernel of the Tube map is trivial up to self $w_{k}$-concordance. ## 4\. The link case By Theorems 2.8 and 3.19, welded string links are classified up to self $w_{k}$–equivalence by their $k$–reduced longitudes. Similar phenomena occur in the classifications up to self-virtualization [1] and $w_{k}$–concordance [7], and these results were extended to the case of welded links in [3] and [7] respectively, in terms of (adaptations of) the peripheral system. In this final section, we outline how a similar extension can be derived for links up to self $w_{k}$–concordance. _Welded links_ are defined in the very same way as welded string links, by replacing the copies of the unit interval with copies of the circle $S^{1}$. Let $L$ be a welded link. 
Using the same Wirtinger-like procedure as in Section 2.1, a group $G(L)$ can be associated to any welded link $L$, which agrees with the fundamental group of the complement if $L$ is a classical link. A choice of one generic basepoint on each component determines a set of meridians which normally generates $G(L)$. Up to Detour moves, these basepoints also allow to cut open $L$ into a welded string links $S_{L}$, and $G(L)$ is isomorphic to the quotient of $G(S_{L})$ which identifies, for each strand, the meridians associated with its two endpoints. An _$i$ th longitude_ for $L$ can then be defined as the image of the $i$th longitude for $S_{L}$ in this quotient. Of course, the result depends on the choice of the basepoints: moving the $i$th basepoint actually results in conjugating simultaneously the associated $i$th meridian and longitude.666Note that, since all the meridians defined in this way are conjugated, all these normally generating sets of meridians lead to the same notion of $k$–reduced quotient for $G(L)$. ###### Definition 4.1. Let $k$ be a positive integer. The _$k$ –reduced peripheral system_ of $L$ is defined as $\big{(}\textnormal{R}_{k}G(L),\\{x_{i}\\}_{i},\\{\lambda^{k}_{i}\Gamma_{k}N_{i}\\}_{i}\big{)}$ where, for a given choice of basepoints, $(x_{i},\lambda^{k}_{i})$ are the images in $\textnormal{R}_{k}G(L)$ of the associated meridians and preferred $i$th longitudes, and $\lambda^{k}_{i}.\Gamma_{k}N_{i}$ is the image in $\textnormal{R}_{k}G(L)$ of the coset of $\lambda^{k}_{i}$ modulo $\Gamma_{k}N_{i}$, with $N_{i}$ the normal subgroup of $\textnormal{R}_{k}G(L)$ generated by $x_{i}$. It is well-defined up to, for each $i$, simultaneous conjugation of $x_{i}$ and $\lambda^{k}_{i}$ by an element of $\textnormal{R}_{k}G(L)$. ###### Remark 4.2. The $1$–reduced peripheral system coincides with the _reduced peripheral system_ introduced by Milnor in his foundational paper [21] on links up to link-homotopy, see also [3]. The arrow calculus reviewed in Section 3.1 applies in the exact same way to welded links, and leads to the notions of _self $w_{k}$-equivalence_ and _self $w_{k}$-concordance_ for these objects, as in Definitions 3.14 and 3.18. Theorem 3.16 also holds for welded links, since the proof is purely local. The main result of this section is the following. ###### Theorem 4.3. Two (welded) links have equivalent $k$–reduced peripheral systems if and only if they are self $w_{k}$–concordant. The rest of this section is devoted to the proof of Theorem 4.3. We stress that all ingredients of the proof are mostly corollaries of their string link counterparts, and straightforward adaptations of techniques of [3, 7]. Hence we will only sketch the proof of Theorem 4.3 below, only hinting to the main arguments and outlining the specificities of the link case. The fact that the $k$–reduced peripheral system is invariant under self $w_{k}$–concordance is an easy consequence of the string link case addressed in the previous sections. Indeed, a notion of $k$–reduced peripheral system can be defined for welded string links as in Definition 4.1, except that, in this case, there is a natural choice of meridians, and it follows from Theorems 3.19 and 2.8 that this is invariant under self $w_{k}$–concordance. By fixing a set of basepoints on a welded link $L$, and considering its $k$–reduced peripheral system as a quotient of the $k$–reduced peripheral system of the associated welded string link $S_{L}$, we obtain the desired invariance property. 
The proof of ‘only if’ part of Theorem 4.3 goes roughly along the same lines as the proof of (i)$\Rightarrow$(ii) of Theorem 3.19, given in Section 3.3. The analogue of Definition 3.20 in the link case is the following, see [3]: a $w$–tree presentation for a welded link is _sorted_ if, on each component, all the heads are adjacent. (A $w$–tree presentation for a welded link is a union of $w$-trees for the trivial diagram of the unlink.) Since the $w_{kn+1}$–equivalence implies the self $w_{k}$–equivalence by Remark 3.15 and Theorem 3.16, we have the following as a direct corollary of [7, Prop. 3.5]. ###### Proposition 4.4. Any welded link is self $w_{k}$–equivalent to a welded link admitting a sorted presentation. From this proposition, and the invariance of the $k$–reduced group under self $w_{k}$–equivalence, the exact same argument as in [3, Lem. 1.18] proves the following. ###### Proposition 4.5. For every welded link $L$, its $k$–reduced group admits the following presentation $\textnormal{R}_{k}G(L)\cong\big{\langle}x_{1},\ldots,x_{n}\ |\ \Gamma_{k+1}N_{i}\textrm{ and }[x_{i},\lambda^{k}_{i}]\textrm{ for all $i$}\big{\rangle},$ where, for each $i$, $x_{i}$ is a meridian on the $i$th component, $N_{i}$ is the normal subgroup generated by $x_{i}$, and $\lambda^{k}_{i}$ is a representative word for the corresponding longitude. The rest of the proof then follows the exact same lines as [7, Prop. 3.7]. We start with two welded links which have equivalent $k$–reduced peripheral systems, and consider sorted presentations using Proposition 4.4. The main difference with the string link case of Section 3.3 is that a fourth operation is involved on the $i$th preferred longitudes: by Proposition 4.5, these longitudes might differ by the insertion or deletion of a commutator $[x_{j},\lambda^{k}_{j}]$ for some $j$. This extra operation can be achieved by a $w_{k}$-concordance on sorted presentations, with the same trick as illustrated in the first figure of [7, Proof of Prop. 3.7]. This concludes the proof of Theorem 4.3. Notice that, in the above argument, the welded concordance is only needed for the fourth operation that inserts/deletes a commutator $[x_{j},\lambda^{k}_{j}]$ for some $j$. This is because such relators appear in the presentation of $\textnormal{R}_{k}G(L)$ given in Proposition 4.4. In the special case of a link $L$ with vanishing Milnor invariants $\overline{\mu}_{L}(I)$ with $r(I)\leq k$, we have that $\textnormal{R}_{k}G(L)\cong RF_{n}$, meaning that this extra operation involving welded concordance is not needed for the proof. This implies that a (welded) link has vanishing Milnor invariants $\overline{\mu}(I)$ with $r(I)\leq k$, if and only if it is self $w_{k}$-equivalent to the unlink, as stated in the introduction. This is shown by applying verbatim the same argument as in [7, §3.2].777More precisely, the exact same arguments showing that [7, Thm. 3.8] implies [7, Cor. 3.11], shows that Theorem 4.3 implies the theorem stated at the end of the introduction. ## References * [1] B. Audoux, P. Bellingeri, J.-B. Meilhan, and E. Wagner. Homotopy classification of ribbon tubes and welded string links. Ann. Sc. Norm. Super. Pisa Cl. Sci., 17(1):713–761, 2017. * [2] B. Audoux, P. Bellingeri, J.-B. Meilhan, and E. Wagner. On usual, virtual and welded knotted objects up to homotopy. J. Math. Soc. Japan, 69(3):1079–1097, 2017. * [3] B. Audoux and J.-B. Meilhan. Characterization of the reduced peripheral system of links. arXiv:1904.04763, 2019. * [4] B. Audoux, J.-B. Meilhan, and A. 
Yasuhara. Milnor concordance invariant for knotted surfaces and beyond. arXiv:2109.14578, 2021. * [5] H. U. Boden and M. Chrisman. Virtual concordance and the generalized alexander polynomial. J. Knot Theory Ramifications, 30(5):2150030, 2021. * [6] K.-T. Chen. Commutator calculus and link invariants. Proc. Amer. Math. Soc., 3(4):44–55, 1952. * [7] B. Colombari. A diagrammatic characterization of milnor invariants. arXiv:2201.01499 , 2021. * [8] J. Darné. On the stable Andreadakis problem. J. Pure Appl. Algebra, 223(12):5484–5525, 2019. * [9] T. Fleming and A. Yasuhara. Milnor’s invariants and self $C_{k}$-equivalence. Proc. Amer. Math. Soc., 137(2):761–770, 2009. * [10] R. Gaudreau. Classification of virtual string links up to cobordism. Ars Math. Contemp., 19(1):37–49, 2020. * [11] M. Goussarov, M. Polyak, and O. Viro. Finite-type invariants of classical and virtual knots. Topology, 39(5):1045–1068, 2000. * [12] N. Habegger and X.-S. Lin. The classification of links up to link-homotopy. J. Amer. Math. Soc., 3:389–419, 1990. * [13] M. Hall. The Theory of Groups. AMS Chelsea Publishing, 1959. * [14] P. Hall. A contribution to the theory of groups of prime-power order. Proc. Lond. Math. Soc. (2), 36:29–95, 1933. * [15] L. H. Kauffman. Virtual knot theory. European J. Combin., 20(7):663–690, 1999. * [16] J. P. Levine. An approach to homotopy classification of links. Trans. Am. Math. Soc., 306(1):361–387, 1988. * [17] W. Magnus. Über Beziehungen zwischen höheren Kommutatoren. J. Reine Angew. Math., 177:105–115, 1937. * [18] W. Magnus, A. Karrass, and D. Solitar. Combinatorial group theory : presentations of groups in terms of generators and relations. Dover books on advanced mathematics. Dover, New York, 1976. * [19] J.-B. Meilhan and A. Yasuhara. Arrow calculus for welded and classical links. Alg. Geom. Topol., 19(1):397–456, 2019. * [20] J. Milnor. Link groups. Ann. of Math. (2), 59:177–195, 1954. * [21] J. Milnor. Isotopy of links. Algebraic geometry and topology. In A symposium in honor of S. Lefschetz, pages 280–306. Princeton University Press, Princeton, N. J., 1957. * [22] H. A. Miyazawa, K. Wada, and A. Yasuhara. Combinatorial Approach to Milnor Invariants of Welded Links. Michigan Mathematical Journal, pages 1 – 30, 2021. * [23] S. Satoh. Virtual knot presentation of ribbon torus-knots. J. Knot Theory Ramifications, 9(4):531–542, 2000. * [24] T. Watanabe. Clasper-moves among ribbon 2-knots characterizing their finite type invariants. J. Knot Theory Ramifications, 15(9):1163–1199, 2006. * [25] E. Witt. Treue Darstellung Liescher Ringe. J. Reine Angew. Math., 177:152–160, 1937. * [26] A. Yasuhara. Classification of string links up to self delta-moves and concordance. Algebr. Geom. Topol., 9(1):265–275, 2009. * [27] A. Yasuhara. Self delta-equivalence for links whose Milnor’s isotopy invariants vanish. Trans. Amer. Math. Soc., 361(9):4721–4749, 2009. * [28] E. Yurasovskaya. Homotopy string links over surfaces. PhD Thesis, The University of British Columbia, 2008.
# HesScale: Scalable Computation of Hessian Diagonals Mohamed Elsayed & A. Rupam Mahmood Department of Computing Science, Alberta Machine Intelligence Institute (Amii) University of Alberta Edmonton, Alberta, Canada <EMAIL_ADDRESS> CIFAR AI Chair ###### Abstract Second-order optimization uses curvature information about the objective function, which can help in faster convergence. However, such methods typically require expensive computation of the Hessian matrix, preventing their usage in a scalable way. The absence of efficient ways of computation drove the most widely used methods to focus on first-order approximations that do not capture the curvature information. In this paper, we develop HesScale, a scalable approach to approximating the diagonal of the Hessian matrix, to incorporate second-order information in a computationally efficient manner. We show that HesScale has the same computational complexity as backpropagation. Our results on supervised classification show that HesScale achieves high approximation accuracy, allowing for scalable and efficient second-order optimization.111Code is available at https://github.com/mohmdelsayed/HesScale. ## 1 Introduction First-order optimization offers a cheap and efficient way of performing local progress in optimization problems by using gradient information. However, their performance suffers from instability or slow progress when used in ill- conditioned landscapes. Such a problem is present because first-order methods do not capture curvature information which causes two interrelated issues. First, the updates in first-order have incorrect units (Duchi et al. 2011), which creates a scaling issue. Second, first-order methods lack parameterization invariance (Martens 2020) in contrast to second-order methods such as natural gradient (Amari 1998) or Newton-Raphson methods. Therefore, some first-order normalization methods were developed to address the invariance problem (Ba et al. 2016, Ioffe & Szegedy 2015, Salimans & Kingma 2016). On the other hand, some recent adaptive step-size methods try to alleviate the scaling issue by using gradient information for first-order curvature approximation (Luo et al. 2019, Duchi et al. 2011, Zeiler 2012, Reddi et al. 2018, Kingma & Ba 2015, Tran & Phong 2019, Tieleman et al. 2012). Specifically, such methods use the empirical Fisher diagonals heuristic by maintaining a moving average of the squared gradients to approximate the diagonal of the Fisher information matrix. Despite the huge adoption of such methods due to their scalability, they use inaccurate approximations. Kunstner et al. (2019) showed that the empirical Fisher does not generally capture curvature information and might have undesirable effects. They argued that the empirical Fisher approximates the Fisher or the Hessian matrices only under strong assumptions that are unlikely to be met in practice. Moreover, Wilson et al. (2017) presented a counterexample where the adaptive step-size methods are unable to reduce the error compared to non-adaptive counterparts such as stochastic gradient descent. Although second-order optimization can speed up the training process by using the geometry of the landscape, its adoption is minimal compared to first-order methods. The exact natural gradient or Newton-Raphson methods require the computation, storage, and inversion of the Fisher information or the Hessian matrices, making them computationally prohibitive in large-scale tasks. 
Accordingly, many popular second-order methods attempt to approximate less expensively. For example, a type of truncated-Newton method called Hessian- free methods (Martens 2010) exploits the fact that the Hessian-vector product is cheap (Bekas et al. 2007) and uses the iterative conjugate gradient method to perform an update. However, such methods might require many iterations per update or some tricks to achieve stability, adding computational overhead (Martens & Sutskever 2011). Some variations try to approximate only the diagonals of the Hessian matrix using stochastic estimation with matrix-free computations (Chapelle & Erhan 2011, Martens et al. 2012, Yao et al. 2021). Other methods impose probabilistic modeling assumptions and estimate a block diagonal Fisher information matrix (Martens & Grosse 2015, Botev et al. 2017). Such methods are invariant to reparametrization but are computationally expensive since they need to perform matrix inversion for each block. Deterministic diagonal approximations to the Hessian (LeCun et al. 1990, Becker & Lecun 1989) provide some curvature information and are efficient to compute. Specifically, they can be implemented to be as efficient as first- order methods. We view this category of approximation as scalable second-order methods. In neural networks, curvature backpropagation (Becker & Lecun 1989, Mizutani & Dreyfus 2008) can be used to backpropagate the curvature vector. Although these methods show a promising direction for scalable second-order optimization, the approximation quality is sometimes poor with objectives such as cross-entropy (Martens et al. 2012). A scalable second-order method with high quality approximation is still needed. In this paper, we present HesScale, a high-quality approximation method for the Hessian diagonals. Our method is also scalable and has little memory requirement with linear computational complexity while maintaining high approximation accuracy. ## 2 Background In this section, we describe the Hessian matrix for neural networks and some existing methods for estimating it. Generally, Hessian matrices can be computed for any scalar-valued function that are twice differentiable. If $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is such a function, then for its argument ${\bm{\psi}}\in\mathbb{R}^{n}$, the Hessian matrix ${\bm{H}}\in\mathbb{R}^{n\times n}$ of $f$ with respect to ${\bm{\psi}}$ is given by $H_{i,j}=\nicefrac{{\partial^{2}f({\bm{\psi}})}}{{\partial\psi_{i}\partial\psi_{j}}}$. Here, the $i$th element of a vector ${\bm{v}}$ is denoted by $v_{i}$, and the element at the $i$th row and $j$th column of a matrix ${\bm{M}}$ is denoted by $M_{i,j}$. When the need for computing the Hessian matrix arises for optimization in deep learning, the function $f$ is typically the objective function, and the vector ${\bm{\psi}}$ is commonly the weight vector of a neural network. Computing and storing an $n\times n$ matrix, where $n$ is the number of weights in a neural network, is expensive. Therefore, many methods exist for approximating the Hessian matrix or parts of it with less memory footprint, computational requirement, or both. A common technique is to utilize the structure of the function to reduce the computations needed. For example, assuming that connections from a certain layer do not affect other layers in a neural network allows one to approximate a block diagonal Hessian. 
The computation further simplifies when we have piece-wise linear activation functions (e.g., ReLU), which result in a _Generalized Gauss-Newton_ (GGN) (Schraudolph 2002) approximation that is equivalent to the block diagonal Hessian matrix with linear activation functions. The GGN matrix is more favored in second-order optimization since it is positive semi-definite. However, computing a block diagonal matrix is still demanding. Many approximation methods were developed to reduce the storage and computation requirements of the GGN matrix. For example, under probabilistic modeling assumptions, the _Kronecker-factored Approximate Curvature_ (KFAC) method (Martens & Grosse 2015) writes the GGN matrix ${\bm{G}}$ as a Kronecker product of two matrices of smaller sizes as: ${\bm{G}}={\bm{A}}\otimes{\bm{B}}$, where ${\bm{A}}=\mathbb{E}[{\bm{h}}{\bm{h}}^{\top}]$, ${\bm{B}}=\mathbb{E}[{\bm{g}}{\bm{g}}^{\top}]$, ${\bm{h}}$ is the activation output vector, and ${\bm{g}}$ is the gradient of the loss with respect to the activation input vector. The ${\bm{A}}$ and ${\bm{B}}$ matrices can be estimated by Monte Carlo sampling and an exponential moving average. KFAC is more efficient when used in optimization since it requires inverting only the small matrices using the Kronecker-product property $({\bm{A}}\otimes{\bm{B}})^{-1}={\bm{A}}^{-1}\otimes{\bm{B}}^{-1}$. However, KFAC is still expensive due to the storage of the block diagonal matrices and computation of Kronecker product, which prevent it from being used as a scalable method. Computing the Hessian diagonals can provide some curvature information with relatively less computation. However, it has been shown that the exact computation for diagonals of the Hessian typically has quadratic complexity with the unlikely existence of algorithms that can compute the exact diagonals with less than quadratic complexity (Martens et al. 2012). Some stochastic methods provide a way to compute unbiased estimates of the exact Hessian diagonals. For example, the AdaHessian (Yao et al. 2021) algorithm uses the Hutchinson’s estimator $\text{diag}({\bm{H}})=E[{\bm{z}}\circ({\bm{H}}{\bm{z}})]$, where ${\bm{z}}$ is a multivariate random variable with a Rademacher distribution and the expectation can be estimated using Monte Carlo sampling with an exponential moving average. Similarly, the GGN-MC method (Dangel et al. 2020) uses the relationship between the Fisher information matrix and the Hessian matrix under probabilistic modeling assumptions to have an MC approximation of the diagonal of the GGN matrix. Although these stochastic approximation methods are scalable due to linear or $O(n)$ computational and memory complexity, they suffer from low approximation quality, improving which requires many sampling and factors of additional computations. ## 3 The Proposed HesScale Method In this section, we present our method for approximating the diagonal of the Hessian at each layer in feed-forward networks, where a backpropagation rule is used to utilize the Hessian of previous layers. We present the derivation of the backpropagation rule for fully connected and convolutional neural networks in supervised learning. Similar derivation for fully connected networks with mean squared error is presented before (LeCun et al. 1990, Becker & Lecun 1989). However, we use the exact diagonals of the Hessian matrix at the last layer with some non-linear and non-element-wise output activations such as softmax and show that it can still be computed in linear computational complexity. 
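Before turning to the derivation, it may help to make the stochastic baseline of the previous section concrete. The following is a minimal, illustrative sketch of Hutchinson's diagonal estimator $\text{diag}({\bm{H}})=E[{\bm{z}}\circ({\bm{H}}{\bm{z}})]$ using PyTorch-style Hessian-vector products; the function name, the number of samples, and the omission of the exponential moving average used by AdaHessian are simplifications for exposition, not part of any published implementation.

```python
import torch

def hutchinson_hessian_diag(loss, params, num_samples=10):
    # Monte Carlo estimate of diag(H) via E[z * (H z)] with Rademacher z.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    diag = [torch.zeros_like(p) for p in params]
    for _ in range(num_samples):
        zs = [torch.randint_like(p, 2) * 2 - 1 for p in params]  # entries in {-1, +1}
        # Hessian-vector product: differentiate <grads, z> with respect to the parameters.
        hvps = torch.autograd.grad(grads, params, grad_outputs=zs, retain_graph=True)
        for d, z, hv in zip(diag, zs, hvps):
            d += z * hv / num_samples
    return diag
```

Each sample costs one additional backward pass, so reducing the variance of this estimate multiplies the per-update cost; this trade-off is what the deterministic rule derived below avoids.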
We show the derivation of the Hessian diagonals for fully connected networks in the following and provide the derivation for convolutional neural networks in Appendix B. We use the supervised classification setting where there is a collection of data examples. These data examples are generated from some _target function_ ${f}^{*}$ mapping the input ${\bm{x}}$ to the output $y$, where the $k$-th input-output pair is $({\bm{x}}_{k},y_{k})$. In this task, the _learner_ is required to predict the output class $y\in\\{1,2,...,m\\}$ given the input vector ${\bm{x}}\in\mathbb{R}^{d}$ by estimating the target function ${f}^{*}$. The performance is measured with the cross-entropy loss, $\mathcal{L}({\bm{p}},{\bm{q}})=-\sum_{i=1}^{m}p_{i}\log q_{i}$, where ${\bm{p}}\in\mathbb{R}^{m}$ is the vector of the target one-hot encoded class and ${\bm{q}}\in\mathbb{R}^{m}$ is the predicted output. The learner is required to reduce the cross-entropy by matching the target class. Consider a neural network with $L$ layers that produces the predicted output ${\bm{q}}$. The neural network is parametrized by the set of weights $\\{{\bm{W}}_{1},...,{\bm{W}}_{L}\\}$, where ${\bm{W}}_{l}$ is the weight matrix at the $l$-th layer, and its element at the $i$th row and the $j$th column is denoted by $W_{l,i,j}$. During learning, the parameters of the neural network are changed to reduce the loss. At each layer $l$, we get the activation output ${\bm{h}}_{l}$ by applying the activation function $\bm{\sigma}$ to the activation input ${\bm{a}}_{l}$: ${\bm{h}}_{l}=\bm{\sigma}({\bm{a}}_{l})$. We simplify notations by defining ${\bm{h}}_{0}\doteq{\bm{x}}$. The activation output ${\bm{h}}_{l}$ is then multiplied by the weight matrix ${\bm{W}}_{l+1}$ of layer $l+1$ to produce the next activation input: ${a}_{l+1,i}=\sum_{j=1}^{|{\bm{h}}_{l}|}{{W}_{l+1,i,j}}{h}_{l,j}$. We assume here that the activation function is an element-wise activation for all layers except for the final layer $L$, where it becomes the softmax function. The backpropagation equations for the described network are given following Rumelhart et al.
(1986): $\displaystyle\frac{\partial\mathcal{L}}{\partial a_{l,i}}$ $\displaystyle=\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}\frac{\partial a_{l+1,k}}{\partial h_{l,i}}\frac{\partial h_{l,i}}{\partial a_{l,i}}=\sigma^{\prime}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}W_{l+1,k,i},$ (1) $\displaystyle\frac{\partial\mathcal{L}}{\partial W_{l,i,j}}$ $\displaystyle=\frac{\partial\mathcal{L}}{\partial a_{l,i}}\frac{\partial a_{l,i}}{\partial W_{l,i,j}}=\frac{\partial\mathcal{L}}{\partial a_{l,i}}h_{l-1,j}.$ (2) In the following, we write the equations for the exact Hessian diagonals with respect to weights $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{W^{2}_{l,i,j}}}}$, which requires the calculation of $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{a^{2}_{l,i}}}}$ first: $\displaystyle\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{l,i}}}$ $\displaystyle=\frac{\partial}{\partial a_{l,i}}\left(\sigma^{\prime}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}W_{l+1,k,i}\right)$ $\displaystyle={\sigma^{\prime}}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\sum_{p=1}^{|{\bm{a}}_{l+1}|}\frac{\partial^{2}\mathcal{L}}{\partial a_{l+1,k}\partial a_{l+1,p}}\frac{\partial a_{l+1,p}}{\partial a_{l,i}}W_{l+1,k,i}+\sigma^{\prime\prime}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}W_{l+1,k,i}$ $\displaystyle={\sigma^{\prime}}(a_{l,i})^{2}\sum_{k=1}^{|{\bm{a}}_{l+1}|}\sum_{p=1}^{|{\bm{a}}_{l+1}|}\frac{\partial^{2}\mathcal{L}}{\partial a_{l+1,k}\partial a_{l+1,p}}W_{l+1,p,i}W_{l+1,k,i}+\sigma^{\prime\prime}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}W_{l+1,k,i},$ $\displaystyle\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}$ $\displaystyle=\frac{\partial}{\partial W_{l,i,j}}\left(\frac{\partial\mathcal{L}}{\partial a_{l,i}}h_{l-1,j}\right)=\frac{\partial}{\partial a_{l,i}}\left(\frac{\partial\mathcal{L}}{\partial a_{l,i}}\right)\frac{\partial a_{l,i}}{\partial W_{l,i,j}}h_{l-1,j}=\frac{\partial^{2}\mathcal{L}}{\partial a^{2}_{l,i}}h^{2}_{l-1,j}.$ (3) Since, the calculation of $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{a^{2}_{l,i}}}}$ depends on the off-diagonal terms, the computation complexity becomes quadratic. Following Becker and Lecun (1989), we approximate the Hessian diagonals by ignoring the off-diagonal terms, which leads to a backpropagation rule with linear computational complexity for our estimates $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}}$ and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{l,i}}}}$: $\displaystyle\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{l,i}}}}$ $\displaystyle\doteq{\sigma^{\prime}}(a_{l,i})^{2}\sum_{k=1}^{|{\bm{a}}_{l+1}|}\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{l+1,k}}}}W^{2}_{l+1,k,i}+\sigma^{\prime\prime}(a_{l,i})\sum_{k=1}^{|{\bm{a}}_{l+1}|}\frac{\partial\mathcal{L}}{\partial a_{l+1,k}}W_{l+1,k,i},$ (4) $\displaystyle\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}}$ $\displaystyle\doteq\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{l,i}}}}h^{2}_{l-1,j}.$ (5) However, for the last layer, we use the exact Hessian diagonals $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{L,i}}}}\doteq\frac{\partial^{2}\mathcal{L}}{\partial{a^{2}_{L,i}}}$ since it can be computed in $O(n)$ for the softmax activation function and the cross-entropy loss. 
More precisely, the exact Hessian diagonals for the cross-entropy loss with softmax are simply ${\bm{q}}-{\bm{q}}\circ{\bm{q}}$, where ${\bm{q}}$ is the predicted probability vector and $\circ$ denotes element-wise multiplication. We found empirically that this small change makes a large difference in the approximation quality, as shown in Fig. 1(a). Hence, unlike Becker and Lecun (1989), who approximate the Hessian diagonals of the last layer by Eq. 4, we use the exact values directly to achieve higher approximation accuracy. We call this method for Hessian diagonal approximation _HesScale_ and provide its pseudocode for supervised classification in Algorithm 1.

Algorithm 1 HesScale: Computing Hessian diagonals of a neural network layer in classification
Neural network $f$ and a layer number $l$
First and second order information $\widehat{\frac{\partial\mathcal{L}}{\partial a_{l+1,i}}}$ and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial a_{l+1,i}^{2}}}$, unless $l=L$
Input-output pair $({\bm{x}},y)$
Set loss function $\mathcal{L}$ to cross-entropy loss
Compute preference vector ${\bm{a}}_{L}\leftarrow f({\bm{x}})$ and target one-hot-encoded vector ${\bm{p}}\leftarrow\texttt{onehot}(y)$
Compute the predicted probability vector ${\bm{q}}\leftarrow{\bm{\sigma}}({\bm{a}}_{L})$ using softmax function ${\bm{\sigma}}$
Compute the error $\mathcal{L}({\bm{p}},{\bm{q}})$
if $l=L$ then $\triangleright$ Computing Hessian diagonals exactly for the last layer
  Compute $\frac{\partial\mathcal{L}}{\partial{\bm{a}}_{L}}\leftarrow{\bm{q}}-{\bm{p}}$ $\triangleright$ $\frac{\partial\mathcal{L}}{\partial{\bm{a}}_{L}}$ consists of elements $\frac{\partial\mathcal{L}}{\partial a_{L,i}}$
  Compute $\frac{\partial\mathcal{L}}{\partial{\bm{W}}_{L}}$ using Eq. 2 $\triangleright$ $\frac{\partial\mathcal{L}}{\partial{\bm{W}}_{L}}$ consists of elements $\frac{\partial\mathcal{L}}{\partial W_{L,i,j}}$
  $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{a}}_{L}^{2}}}\leftarrow{\bm{q}}-{\bm{q}}\circ{\bm{q}}$ $\triangleright$ $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{a}}_{L}^{2}}}$ consists of elements $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial a_{L,i}^{2}}}$
  Compute $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{W}}_{L}^{2}}}$ using Eq. 5 $\triangleright$ $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{W}}_{L}^{2}}}$ consists of elements $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial W_{L,i,j}^{2}}}$
else if $l\neq L$ then
  Compute $\frac{\partial\mathcal{L}}{\partial{\bm{a}}_{l}}$ and $\nicefrac{{\partial\mathcal{L}}}{{\partial{\bm{W}}_{l}}}$ using Eq. 1 and Eq. 2
  Compute $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{a}}_{l}^{2}}}$ and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{W}}_{l}^{2}}}$ using Eq. 4 and Eq. 5
end if
return $\frac{\partial\mathcal{L}}{\partial{\bm{W}}_{l}}$, $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{W}}_{l}^{2}}}$, $\frac{\partial\mathcal{L}}{\partial{\bm{a}}_{l}}$, and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{a}}_{l}^{2}}}$

HesScale is not specific to the cross-entropy loss, as the exact Hessian diagonals can be calculated in $O(n)$ for some other widely used loss functions as well. We show this property for the negative log-likelihood function with Gaussian and softmax distributions in Appendix A. The computations can be reduced further using a linear approximation for the activation functions (by dropping the second term in Eq. 4), which corresponds to an approximation of the GGN matrix.
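For concreteness, the following is a minimal NumPy sketch of the resulting backward pass (our illustration, not the BackPACK-based implementation used in the experiments): the exact diagonal ${\bm{q}}-{\bm{q}}\circ{\bm{q}}$ at the softmax output, Eqs. 4–5 at the hidden layers, and the GGN-style variant obtained by dropping the $\sigma''$ term. Layer sizes, the seed, and the target class are arbitrary.

```python
# Sketch of the HesScale backward pass for a small tanh MLP with softmax output.
import numpy as np

rng = np.random.default_rng(0)
sizes = [6, 16, 16, 10]                       # input, two hidden, classes (assumed)
Ws = [rng.normal(0, 0.3, (n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def tanh(a):   return np.tanh(a)
def dtanh(a):  return 1.0 - np.tanh(a) ** 2                           # sigma'
def d2tanh(a): return -2.0 * np.tanh(a) * (1.0 - np.tanh(a) ** 2)     # sigma''

x = rng.normal(size=sizes[0])
y = 3                                          # arbitrary target class

# Forward pass: h_0 = x, a_{l+1} = W_{l+1} h_l, tanh at hidden layers, softmax at the top.
hs, as_ = [x], []
for l, W in enumerate(Ws):
    a = W @ hs[-1]
    as_.append(a)
    hs.append(tanh(a) if l < len(Ws) - 1 else np.exp(a - a.max()) / np.exp(a - a.max()).sum())

q = hs[-1]
p = np.eye(sizes[-1])[y]

# Last layer: exact gradient (q - p) and exact Hessian diagonal (q - q∘q).
g_a = q - p
h_a = q - q * q
grads, diags = [], []
for l in reversed(range(len(Ws))):
    grads.insert(0, np.outer(g_a, hs[l]))          # Eq. 2: dL/dW
    diags.insert(0, np.outer(h_a, hs[l] ** 2))     # Eq. 5: approx diag d2L/dW^2
    if l > 0:
        W = Ws[l]
        # Eq. 1 and Eq. 4: backpropagate the gradient and the diagonal estimate.
        g_prev = dtanh(as_[l - 1]) * (W.T @ g_a)
        h_prev = dtanh(as_[l - 1]) ** 2 * (W.T ** 2 @ h_a) \
                 + d2tanh(as_[l - 1]) * (W.T @ g_a)
        # GGN-style variant: drop the second (sigma'') term above.
        g_a, h_a = g_prev, h_prev

# diags[l][i, j] estimates d2L/dW_{l,i,j}^2; such estimates could, for instance,
# replace the squared gradients in an Adam-style second-moment update.
print([d.shape for d in diags])
```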
We call this variation of our method _HesScaleGN_. Based on HesScale, we build a stable optimizer, which we call AdaHesScale, given in Algorithm 2. We use the same style introduced in Adam (Kingma & Ba 2015), using the squared diagonal approximation instead of the squared gradients to update the moving average. Moreover, we introduce another optimizer based on HesScaleGN, which we call AdaHesScaleGN.

Algorithm 2 AdaHesScale for optimization
Neural network $f$ with weights $\\{{\bm{W}}_{1},...,{\bm{W}}_{L}\\}$ and a dataset $\mathcal{D}$
Small number $\epsilon\leftarrow 10^{-8}$
Exponential decay rates $\beta_{1},\beta_{2}\in[0,1)$
Step size $\alpha$
Initialize $\\{{\bm{W}}_{1},...,{\bm{W}}_{L}\\}$
Initialize time step $t\leftarrow 0$
for $l$ in $\\{L,L-1,...,1\\}$ do $\triangleright$ Set exponential moving averages at time step 0 to zero
  ${\bm{M}}_{l}\leftarrow 0;\quad{\bm{V}}_{l}\leftarrow 0$ $\triangleright$ Same size as ${\bm{W}}_{l}$
end for
for $({\bm{x}},y)$ in $\mathcal{D}$ do
  $t\leftarrow t+1$
  ${\bm{r}}_{L+1}\leftarrow{\bm{s}}_{L+1}\leftarrow\bm{\emptyset}$ $\triangleright$ ${\bm{r}}_{l}$ and ${\bm{s}}_{l}$ stand for $\frac{\partial\mathcal{L}}{\partial{\bm{a}}_{l}}$ and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{a}}_{l}^{2}}}$, respectively
  for $l$ in $\\{L,L-1,...,1\\}$ do
    ${\bm{F}}_{l},{\bm{S}}_{l},{\bm{r}}_{l},{\bm{s}}_{l}\;\leftarrow\;$ HesScale($f,{\bm{x}},y,l,{\bm{r}}_{l+1},{\bm{s}}_{l+1}$) $\triangleright$ Check Algorithm 1
    ${\bm{M}}_{l}\leftarrow\beta_{1}{\bm{M}}_{l}+(1-\beta_{1}){\bm{F}}_{l}$ $\triangleright$ ${\bm{F}}_{l}$ stands for $\frac{\partial\mathcal{L}}{\partial{\bm{W}}_{l}}$
    ${\bm{V}}_{l}\leftarrow\beta_{2}{\bm{V}}_{l}+(1-\beta_{2}){\bm{S}}_{l}^{2}$ $\triangleright$ ${\bm{S}}_{l}$ stands for $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{\bm{W}}_{l}^{2}}}$
    $\hat{{\bm{M}}}_{l}\leftarrow{\bm{M}}_{l}/(1-\beta_{1}^{t})$ $\triangleright$ Bias-corrected estimate for ${\bm{F}}_{l}$
    $\hat{{\bm{V}}}_{l}\leftarrow{\bm{V}}_{l}/(1-\beta_{2}^{t})$ $\triangleright$ Bias-corrected estimate for ${\bm{S}}_{l}$
    ${\bm{W}}_{l}\leftarrow{\bm{W}}_{l}-\alpha\hat{{\bm{M}}}_{l}\oslash(\hat{{\bm{V}}}_{l}+\epsilon)^{\circ\frac{1}{2}}$ $\triangleright$ $\oslash$ is element-wise division; ${\bm{A}}^{\circ\frac{1}{2}}$ is element-wise square root of ${\bm{A}}$
  end for
end for

## 4 Approximation quality & scalability of HesScale

In this section, we evaluate HesScale for its approximation quality and computational cost and compare it with other methods. These measures constitute the criteria we look for in scalable and efficient methods. For our experiments, we implemented HesScale using the _BackPACK_ framework (Dangel et al. 2020), which allows easy implementation of backpropagation of statistics other than the gradient. We start by studying the approximation quality of Hessian diagonals compared to the true values. To measure the approximation quality of the Hessian diagonals for different methods, we use the $L^{1}$ distance between the exact Hessian diagonals and their approximations. Our task here is supervised classification, and data examples are randomly generated. We used a network of three hidden layers with _tanh_ activations, each containing 16 units. The network weights and biases are initialized randomly. The network has six inputs and ten outputs. For each example pair, we compute the exact Hessian diagonals for each layer and their approximations from each method. All layers’ errors are summed and averaged over 1000 data examples for each method.
In this experiment, we used 40 different initializations for the network weights, shown as colored dots in Fig. 1(a). Each point represents the summed error over network layers, averaged over 1000 examples for each different initialization. In this figure, we show the average error incurred by each method normalized by the average error incurred by HesScale. Any approximation that incurs an averaged error above 1 has a worse approximation than HesScale, and any approximation with an error less than 1 has a better approximation than HesScale. Moreover, we show the layer-wise error for each method in Fig. 1(b). (a) Normalized $L^{1}$ error with respect to HesScale (b) Layer-wise $L^{1}$ error Figure 1: The averaged error for each method is normalized by the averaged error incurred by HesScale. We show 40 initialization points with the same colors across all methods. The norm of the vector of Hessian diagonals $|\text{diag}({\bm{H}})|$ is shown as a reference. Different Hessian diagonal approximations are considered for comparison with HesScale. We included several deterministic and stochastic approximations for the Hessian diagonals. We also include the approximation of the Fisher Information Matrix done by squaring the gradients and denoted by $g^{2}$, which is highly adopted by many first-order methods (e.g., Kingma and Ba, 2015). We compare HesScale with three stochastic approximation methods: AdaHessian (Yao et al. 2021), Kronecker-factored approximate curvature (KFAC) (Martens & Grosse 2015), and the Monte-Carlo (MC) estimate of the GGN matrix (GGN-MC) (Dangel et al. 2020). We also compare HesScale with two deterministic approximation methods: the diagonals of the exact GGN matrix (Schraudolph 2002) ($\text{diag}({\bm{G}})$) and the diagonal approximation by Becker and Lecun (1989) (BL89). In KFAC, we extract the diagonals from the block diagonal matrix and show the approximation error averaged over 1 MC sample (KFAC-MC1) and over 50 MC samples (KFAC-MC50), both per each data example. Since AdaHessian and GGN-MC are already diagonal approximations, we use them directly and show the error with 1 MC sample (GGN-MC1 & AdaHessian-MC1) and with 50 MC samples (GGN-MC50 & AdaHessian-MC50). HesScale provides a better approximation than the other deterministic and stochastic methods. For stochastic methods, we use many MC samples to improve their approximation. However, their approximation quality is still poor. Methods approximating the GGN diagonals do not capture the complete Hessian information since the GGN and Hessian matrices are different when the activation functions are not piece-wise linear. Although these methods approximate the GGN diagonals, their approximation is significantly better than the AdaHessian approximation. And among the methods for approximating the GGN diagonals, HesScaleGN performs the best and close to the exact GGN diagonals. This experiment clearly shows that HesScale achieves the best approximation quality compared to other stochastic and deterministic approximation methods. Next, we perform another experiment to evaluate the computational cost of our optimizers. Our Hessian approximation methods and corresponding optimizers have linear computational complexity, which can be seen from Eq. 4 and Eq. 5. However, computing second-order information in optimizers still incurs extra computations compared to first-order optimizers, which may impact how the total computations scale with the number of parameters. 
Hence, we compare the computational cost of our optimizers with others for various numbers of parameters. More specifically, we measure the update time of each optimizer, which is the time needed to backpropagate first-order and second-order information and update the parameters. We designed two experiments to study the computational cost of first-order and second-order optimizers. In the first experiment, we used a neural network with a single hidden layer. The network has 64 inputs and 512 hidden units with tanh activations. We study the increase in computational time when increasing the number of outputs exponentially, which roughly doubles the number of parameters. The set of values we use for the number of outputs is $\\{2^{4},2^{5},2^{6},2^{7},2^{8},2^{9}\\}$. The results of this experiment are shown in Fig. 2(a). In the second experiment, we used a multi-layer neural network, with each layer containing 512 hidden units with tanh activations. The network has 64 inputs and 100 outputs. We study the increase in computational time when increasing the number of layers exponentially, which also roughly doubles the number of parameters. The set of values we use for the number of layers is $\\{1,2,4,8,16,32,64,128\\}$. The results of this experiment are shown in Fig. 2(b). The points in Fig. 2(a) and Fig. 2(b) are averaged over 30 updates. The standard errors of the means of these points are smaller than the width of each line. On average, we notice that the costs of AdaHessian, AdaHesScale, and AdaHesScaleGN are three, two, and 1.25 times the cost of Adam, respectively.

(a) Increasing number of outputs in a neural network (b) Increasing number of layers in a neural network
Figure 2: The average computation time for each step of an update is shown for different optimizers. The computed update time is the time needed by each optimizer to backpropagate gradients or second-order information and to update the parameters of the network. GGN overlaps with H in (a).

It is clear that our methods are among the most computationally efficient approximation methods for Hessian diagonals.

## 5 Empirical performance of HesScale in Optimization

In this section, we compare the performance of our optimizers, AdaHesScale and AdaHesScaleGN, with three second-order optimizers: BL89, GGN-MC, and AdaHessian. We also include comparisons to two first-order methods: Adam and SGD. We exclude KFAC and the exact diagonals of the GGN matrix from our comparisons due to their prohibitive computations. Our optimizers are evaluated on the supervised classification problem with a series of experiments using different architectures and three datasets: MNIST, CIFAR-10, and CIFAR-100. Instead of attempting to achieve state-of-the-art performance with specialized techniques and architectures, we follow the DeepOBS benchmarking work (Schneider et al. 2019) and compare the optimizers in their generic and pristine form using relatively simple networks. This allows us to perform a fairer comparison without extensively utilizing specialized knowledge for a particular task. In the first experiment, we use the MNIST-MLP task from DeepOBS. The images are flattened and used as inputs to a network of three fully connected layers (1000, 500, and 100 units) with tanh activations. We train each method for 100 epochs with a batch size of 128. We show the training plots in Fig. 6(a) with their corresponding sensitivity plots in Appendix C, Fig. 8(a). In the second experiment, we use the CIFAR10-3C3D task from the DeepOBS benchmarking tasks.
The network consists of three convolutional layers with _tanh_ activations, each followed by max pooling. After that, two fully connected layers (512 and 256 units) with _tanh_ activations are used. We train each method for 100 epochs with a batch size of 128. We show the training plots in Fig. 6(b) with their corresponding sensitivity plots in Fig. 8(b). In the third experiment, we use the CIFAR100-3C-3D task from DeepOBS. The network is the same as the one used in the second task except that the activations are _ELU_. We train each method for 200 epochs with a batch size of 128. We show the training plots in Fig. 7(b) with their corresponding sensitivity plots in Fig. 9(b). In the fourth experiment, we use the CIFAR100-ALL-CNN task from DeepOBS with the ALL-CNN-C network, which consists of 9 convolutional layers (Springenberg et al. 2015). We use ELU activations instead of ReLU to differentiate between the performance of AdaHesScale and AdaHesScaleGN. We show the training plots in Fig. 7(a) with their corresponding sensitivity plots in Fig. 9(a). In the MNIST-MLP and CIFAR-10-3C3D experiments, we performed a hyperparameter search for each method to determine the best set of $\beta_{1}$, $\beta_{2}$, and $\alpha$. The range of $\beta_{2}$ is $\\{0.99,0.999,0.9999\\}$ and the range of $\beta_{1}$ is $\\{0.0,0.9\\}$. The range of step sizes is selected for each method to produce a convex sensitivity curve. Our criterion was to find the best hyperparameter configuration for each method in the search space that minimizes the area under the validation loss curve. The performance of each method was averaged over 30 independent runs. Each independent run used the same initialization for all algorithms in an experiment. Using each method’s best hyperparameter configuration on the validation set, we show the performance of each method against the time in seconds needed to complete the required number of epochs, which better depicts the computational efficiency of the methods. Fig. 3(a) and Fig. 3(b) show these results on the MNIST-MLP and CIFAR-10 tasks. Moreover, we show the sensitivity of each method to the step size in Fig. 5(a) and Fig. 5(b).

(a) MNIST-MLP (b) CIFAR-10 3C3D
Figure 3: MNIST-MLP and CIFAR-10 3C3D classification tasks. Each method is trained for 100 epochs. We show the time taken by each algorithm in seconds (left) and the learning curves over epochs (right). The performance of each method is averaged over 30 independent runs. The shaded area represents the standard error.

(a) CIFAR-100 3C3D (b) CIFAR-100 All-CNN-C
Figure 4: CIFAR-100 3C3D and CIFAR-100 ALL-CNN classification tasks. Each method from the first task is trained for 200 epochs and each method from the second task is trained for 350 epochs. We show the time taken by each algorithm in seconds (left) and the learning curves over epochs (right). The performance of each method is averaged over 30 independent runs. The shaded area represents the standard error.

(a) MNIST (b) CIFAR-10
Figure 5: Sensitivity of the step size for each method on the MNIST-MLP and CIFAR-10 3C3D tasks. We select the best values of $\beta_{1}$ and $\beta_{2}$ for each step size $\alpha$.

In the CIFAR-100-ALL-CNN and CIFAR-100-3C3D experiments, we used the set of $\beta_{1}$ and $\beta_{2}$ that achieved the best robustness in the previous two tasks. The values used for $\beta_{1}$ and $\beta_{2}$ were $0.9$ and $0.999$, respectively.
We did a hyperparameter search for each method to determine the best step size using the specified $\beta_{1}$ and $\beta_{2}$. The range of step sizes is selected for each method to produce a convex sensitivity curve, while ensuring that each method uses a similar search budget and the same interval between search points. Our criterion was to find the best hyperparameter configuration for each method in the search space that minimizes the area under the validation loss curve. The performance of each method was averaged over 30 independent runs. Each independent run used the same initialization for all algorithms in an experiment. Using each method’s best hyperparameter configuration on the validation set, we show the performance of each method against the time in seconds needed to complete the required number of epochs. Fig. 4(a) and Fig. 4(b) show these results on the CIFAR-100-ALL-CNN and CIFAR-100-3C3D tasks. Our results show that all optimizers except for BL89 performed well on the MNIST-MLP task. However, in CIFAR-10, CIFAR-100 3C3D, and CIFAR-100 ALL-CNN, we notice that AdaHessian performed worse than all methods except BL89. This result is aligned with AdaHessian’s inability to accurately approximate the Hessian diagonals, as shown in Fig. 1. Moreover, AdaHessian required more computational time than all other methods, which is also reflected in Fig. 2. While being time-efficient, AdaHesScaleGN consistently outperformed all methods in CIFAR-10-3C3D and CIFAR-100-3C3D, and it outperformed all methods except AdaHesScale in CIFAR-100 ALL-CNN. This result is aligned with our methods’ accurate approximation of Hessian diagonals. Our experiments indicate that incorporating the HesScale and HesScaleGN approximations in optimization methods can provide significant advantages in both computation and accuracy. AdaHesScale and AdaHesScaleGN outperformed other optimizers likely due to their accurate approximation of the diagonals of the Hessian and GGN, respectively.

## 6 Conclusion

HesScale is a scalable and efficient second-order method for approximating the diagonals of the Hessian at every network layer. Our work builds on the previous work of Becker and Lecun (1989). We performed a series of experiments to evaluate HesScale against other scalable algorithms in terms of computational cost and approximation accuracy. Moreover, we demonstrated how HesScale can be used to build efficient second-order optimization methods. Our results showed that our methods provide a more accurate approximation and require only a small amount of additional computation.

## 7 Broader Impact

Second-order information is used in domains other than optimization. For example, some works alleviating catastrophic forgetting use a utility measure for the network’s connections to protect them. Typically, an auxiliary loss penalizes deviations of such connections from their old values, weighted by their corresponding importance. Such methods (LeCun et al. 1990, Hassibi & Stork 1993, Dong et al. 2017, Kirkpatrick et al. 2017, Schwarz et al. 2018, Ritter et al. 2018) use the diagonal of the Fisher information matrix or the Hessian matrix as a utility measure for each weight. The quality of these algorithms depends heavily on the quality of the second-order approximation. Second-order information can also be used in neural network pruning. Molchanov et al. (2019) showed that second-order approximation with the exact Hessian diagonals could closely represent the true measure of the utility of each weight.
The accurate and efficient approximation for the diagonals of the Hessian at each layer enables HesScale to be used in many important lines of research. Using this second-order information provides a reliable measure of connection utility. Therefore, using HesScale in these types of problems can potentially improve the performance of neural network pruning methods and regularization- based catastrophic forgetting prevention methods. #### Acknowledgments We gratefully acknowledge funding from the Canada CIFAR AI Chairs program, the Reinforcement Learning and Artificial Intelligence (RLAI) laboratory, the Alberta Machine Intelligence Institute (Amii), and the Natural Sciences and Engineering Research Council (NSERC) of Canada. We would also like to thank Compute Canada for providing the computational resources needed. ## References Amari, S. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2), 251–276. Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450. Becker, S. & Lecun, Y. (1989). Improving the convergence of back-propagation learning with second-order methods. Proceedings of the 1988 Connectionist Models Summer School (pp. 29–37). Bekas, C., Kokiopoulou, E., & Saad, Y. (2007). An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11), 1214–1229. Botev, A., Ritter, H., & Barber, D. (2017). Practical Gauss-Newton optimisation for deep learning. Proceedings of the 34th International Conference on Machine Learning, 70, 557–565. Chan, A., Silva, H., Lim, S., Kozuno, T., Mahmood, A. R., & White, M. (2022). Greedification operators for policy optimization: Investigating forward and reverse KL divergences. Journal of Machine Learning Research, 23(253), 1-79. Chapelle, O. & Erhan, D. (2011). Improved preconditioner for hessian free optimization. NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Dangel, F., Kunstner, F., & Hennig, P. (2020). BackPACK: Packing more into backprop. International Conference on Learning Representations. Dong, X., Chen, S., & Pan, S. J. (2017). Learning to prune deep neural networks via layer-wise optimal brain surgeon. Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4860-4874). Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61), 2121–2159. Hassibi, B. & Stork, D. (1993). Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems, 5. Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. International conference on machine learning (pp. 448-456). Kingma, D. P. & Ba, J. (2015). Adam: A method for stochastic optimization. International Conference on Learning Representations. Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13), 3521–3526. Kunstner, F., Hennig, P., & Balles, L. (2019). Limitations of the empirical fisher approximation for natural gradient descent. Advances in Neural Information Processing Systems, 32. LeCun, Y., Denker, J., & Solla, S. (1990). Optimal brain damage. Advances in Neural Information Processing Systems, 2. 
Luo, L., Xiong, Y., & Liu, Y. (2019). Adaptive gradient methods with dynamic bound of learning rate. International Conference on Learning Representations. Martens, J. (2010). Deep learning via hessian-free optimization. International Conference on Machine Learning (pp. 735–742). Martens, J. (2020). New insights & perspectives on the natural gradient method. Journal of Machine Learning Research, 21(146), 1-76. Martens, J. & Grosse, R. (2015). Optimizing neural networks with kronecker- factored approximate curvature. International Conference on Machine Learning (pp. 2408–2417). Martens, J. & Sutskever, I. (2011). Learning recurrent neural networks with hessian-free optimization. International Conference on Machine Learning (pp. 1033-1040). Martens, J., Sutskever, I., & Swersky, K. (2012). Estimating the hessian by back-propagating curvature. International Conference on International Conference on Machine Learning(pp. 963–970). Mizutani, E. & Dreyfus, S. E. (2008). Second-order stagewise backpropagation for hessian-matrix analyses & investigation of negative curvature. Neural Networks, 21(2), 193–203. Molchanov, P., Mallya, A., Tyree, S., Frosio, I., & Kautz, J. (2019). Importance estimation for neural network pruning. IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11264–11272). Reddi, S. J., Kale, S., & Kumar, S. (2018). On the convergence of Adam and beyond. International Conference on Learning Representations. Ritter, H., Botev, A., and Barber, D. (2018). Online structured Laplace approximations for overcoming catastrophic forgetting. International Conference on Neural Information Processing Systems (pp. 3742–3752). Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in neural information processing systems, 29, 901–909. Schneider, F., Balles, L., and Hennig, P. (2019). DeepOBS: A deep learning optimizer benchmark suite. International Conference on Learning Representations. Schraudolph, N. N. (2002). Fast curvature matrix-vector products for second- order gradient descent. Neural Computation, 14(7), 1723–1738. Schwarz, J., Czarnecki, W., Luketina, J., Grabska-Barwinska, A., Teh, Y. W., Pascanu, R., and Hadsell, R. (2018). Progress & compress: A scalable framework for continual learning. International Conference on Machine Learning (pp. 4528–4537). Springenberg, J., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2015). Striving for simplicity: The all convolutional net. International Conference on Learning Representations [Workshop]. Tieleman, T., Hinton, G., et al. (2012). Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 26–31. Tran, P. T. and Phong, L. T. (2019). On the convergence proof of AMSGrad and a new version. IEEE Access, 7, 61706–61716. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. (2017). The marginal value of adaptive gradient methods in machine learning. International Conference on Neural Information Processing Systems (pp. 4151–4161). Yao, Z., Gholami, A., Shen, S., Keutzer, K., and Mahoney, M. W. (2021). Adahessian: An adaptive second order optimizer for machine learning. AAAI Conference on Artificial Intelligence, 35(12), 10665-10673. Zeiler, M. D. (2012). Adadelta: An adaptive learning rate method. 
arXiv preprint arXiv:1212.5701.

## Appendix A Hessian diagonals of the log-likelihood function for two common distributions

Here, we provide the diagonals of the Hessian matrix of functions involving the log-likelihood of two common distributions: a normal distribution and a categorical distribution with probabilities represented by a softmax function, which we refer to as a _softmax distribution_. We show that the exact diagonals can be computed with linear complexity since the diagonal elements do not depend on the off-diagonal terms in these cases. In the following, we consider the softmax and normal distributions, and we write the exact Hessian diagonals in both cases.

### A.1 Softmax distribution

Consider a cross-entropy function for a discrete probability distribution as $f\doteq-\sum_{i=1}^{|{\bm{q}}|}p_{i}\log q_{i}(\bm{\theta})$, where ${\bm{q}}$ is a probability vector that depends on a parameter vector $\bm{\theta}$, and ${\bm{p}}$ is a one-hot vector for the target class. For softmax distributions, ${\bm{q}}$ is parametrized by a softmax function ${\bm{q}}\doteq{e^{\bm{\theta}}/\sum_{i=1}^{|{\bm{q}}|}e^{\theta_{i}}}$. In this case, we can write the gradient of the cross-entropy function with respect to $\bm{\theta}$ as $\nabla_{\bm{\theta}}f(\bm{\theta})={\bm{q}}-{\bm{p}}.$ Next, we write the exact diagonal elements of the Hessian matrix as follows: $\text{diag}({\bm{H}}_{\bm{\theta}})=\text{diag}(\nabla_{\bm{\theta}}({\bm{q}}-{\bm{p}}))={\bm{q}}-{\bm{q}}^{2},$ where ${\bm{q}}^{2}$ denotes element-wise squaring of ${\bm{q}}$, and the $\nabla$ operator applied to a vector denotes the Jacobian. Computing the exact diagonals of the Hessian matrix depends only on vector operations, which means that we can compute them in $O(n)$. The cross-entropy loss is used with the softmax distribution in many important tasks, such as supervised classification and discrete reinforcement learning control with parameterized policies (Chan et al. 2022).

### A.2 Multivariate normal distribution with diagonal covariance

For a multivariate normal distribution with diagonal covariance, the parameter vector $\bm{\theta}$ is determined by the mean-variance vector pair: $\bm{\theta}\doteq(\bm{\mu},{\bm{\sigma}}^{2})$. The log-likelihood of a random vector ${\bm{x}}$ drawn from this distribution can be written as $\displaystyle\log q({\bm{x}};\bm{\mu},{\bm{\sigma}}^{2})$ $\displaystyle=-\frac{1}{2}({\bm{x}}-\bm{\mu})^{\top}\bm{D}({\bm{\sigma}}^{2})^{-1}({\bm{x}}-\bm{\mu})-\frac{1}{2}\log(|\bm{D}({\bm{\sigma}}^{2})|)+c$ $\displaystyle=-\frac{1}{2}({\bm{x}}-\bm{\mu})^{\top}\bm{D}({\bm{\sigma}}^{2})^{-1}({\bm{x}}-\bm{\mu})-\frac{1}{2}\sum_{i=1}^{|{\bm{\sigma}}|}\log(\sigma^{2}_{i})+c,$ where $\bm{D}({\bm{\sigma}}^{2})$ gives a diagonal matrix with ${\bm{\sigma}}^{2}$ on its diagonal, $|{\bm{M}}|$ is the determinant of a matrix ${\bm{M}}$, and $c$ is some constant. We can write the gradients of the log-likelihood function with respect to $\bm{\mu}$ and ${\bm{\sigma}}^{2}$ as follows: $\displaystyle\nabla_{\bm{\mu}}\log q({\bm{x}};\bm{\mu},{\bm{\sigma}}^{2})$ $\displaystyle=\bm{D}({\bm{\sigma}}^{2})^{-1}({\bm{x}}-\bm{\mu})=({\bm{x}}-\bm{\mu})\oslash{\bm{\sigma}}^{2},$ $\displaystyle\nabla_{{\bm{\sigma}}^{2}}\log q({\bm{x}};\bm{\mu},{\bm{\sigma}}^{2})$ $\displaystyle=\frac{1}{2}\big{[}({\bm{x}}-\bm{\mu})^{2}\oslash{\bm{\sigma}}^{2}-\mathbf{1}\big{]}\oslash{\bm{\sigma}}^{2},$ where $\mathbf{1}$ is an all-ones vector, and $\oslash$ denotes element-wise division.
Finally, we write the exact diagonals of the Hessian matrix as $\text{diag}({\bm{H}}_{\bm{\mu}})=\text{diag}(\nabla_{{\bm{\mu}}}({\bm{x}}-\bm{\mu})\oslash{\bm{\sigma}}^{2})=-\mathbf{1}\oslash{\bm{\sigma}}^{2},$ $\text{diag}({\bm{H}}_{{\bm{\sigma}}^{2}})=\text{diag}\Big{(}\nabla_{{\bm{\sigma}}^{2}}\big{[}\frac{1}{2}[({\bm{x}}-\bm{\mu})^{2}\oslash{\bm{\sigma}}^{2}-\mathbf{1}]\oslash{\bm{\sigma}}^{2}\big{]}\Big{)}=\big{[}0.5\mathbf{1}-({\bm{x}}-\bm{\mu})^{2}\oslash{\bm{\sigma}}^{2}\big{]}\oslash{\bm{\sigma}}^{4}.$ Clearly, the gradient and the exact Hessian diagonals can be computed in $O(n)$. Log-likelihood functions for normal distributions are used in many important problems, such as variational inference and continuous reinforcement learning control. ## Appendix B HesScale with Convolutional Neural Networks Here, we derive the Hessian propagation for convolutional neural networks (CNNs). Consider a CNN with $L-1$ layers followed by a fully connected layer that outputs the predicted output ${\bm{q}}$. The CNN filters are parameterized by $\\{{\bm{W}}_{1},...,{\bm{W}}_{L}\\}$, where ${\bm{W}}_{l}$ is the filter matrix at the $l$-th layer with the dimensions $k_{l,1}\times k_{l,2}$, and its element at the $i$th row and the $j$th column is denoted by $W_{l,i,j}$. For the simplicity of this proof, we assume that the number of filters at each layer is one; the proof can be extended easily to the general case. The learning algorithm learns the target function $f^{*}$ by optimizing the loss $\mathcal{L}$. During learning, the parameters of the neural network are changed to reduce the loss. At the layer $l$, we get the activation output matrix ${\bm{H}}_{l}$ by applying the activation function $\bm{\sigma}$ to the activation input ${\bm{A}}_{l}$: ${\bm{H}}_{l}=\bm{\sigma}({\bm{A}}_{l})$. We assume here that the activation function is element-wise activation for all layers except for the final layer $L$, where it becomes the softmax function. We simplify notations by defining ${\bm{H}}_{0}\doteq{\bm{X}}$, where ${\bm{X}}$ is the input sample. The activation output ${\bm{H}}_{l}$ is then convoluted by the weight matrix ${\bm{W}}_{l+1}$ of layer $l+1$ to produce the next activation input: ${A}_{l+1,i,j}=\sum_{m=0}^{k_{l,1}-1}\sum_{n=0}^{k_{l,2}-1}{{W}_{l+1,m,n}}{H}_{l,(i+m),(j+n)}$. We denote the size of the activation output at the $l$-th layer by $h_{l}\times w_{l}$. The backpropagation equations for the described network are given following Rumelhart et al. 
(1986): $\displaystyle\frac{\partial\mathcal{L}}{\partial A_{l,i,j}}$ $\displaystyle=\sum_{m=0}^{k_{l+1,1}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}\frac{\partial A_{l+1,(i-m),(j-n)}}{\partial A_{l,i,j}}$ $\displaystyle=\sum_{m=0}^{k_{l+1,1}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}\sum_{m^{\prime}=0}^{k_{l+1,1}-1}\sum_{n^{\prime}=0}^{k_{l+1,2}-1}W_{l+1,m^{\prime},n^{\prime}}\frac{\partial H_{l,(i-m+m^{\prime}),(j-n+n^{\prime})}}{\partial A_{l,i,j}}$ $\displaystyle=\sum_{m=0}^{k_{l+1,1}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}W_{l+1,m,n}\sigma^{\prime}(A_{l,i,j})$ $\displaystyle=\sigma^{\prime}(A_{l,i,j})\sum_{m=0}^{k_{l+1,1}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}W_{l+1,m,n},$ (6) $\displaystyle\frac{\partial\mathcal{L}}{\partial W_{l,i,j}}$ $\displaystyle=\sum_{m=0}^{h_{l}-k_{l,1}}\sum_{n=0}^{w_{l}-k_{l,2}}\frac{\partial\mathcal{L}}{\partial A_{l,m,n}}\frac{\partial A_{l,m,n}}{\partial W_{l,i,j}}=\sum_{m=0}^{h_{l}-k_{l,1}}\sum_{n=0}^{w_{l}-k_{l,2}}\frac{\partial\mathcal{L}}{\partial A_{l,m,n}}H_{l-1,(i+m),(j+n)}.$ (7) In the following, we write the equations for the exact Hessian diagonals with respect to weights $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{W^{2}_{l,i,j}}}}$, which requires the calculation of $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{A^{2}_{l,i,j}}}}$ first: $\displaystyle\frac{\partial^{2}\mathcal{L}}{\partial{A^{2}_{l,i,j}}}$ $\displaystyle=\frac{\partial}{\partial A_{l,i,j}}\Bigg{[}\sigma^{\prime}(A_{l,i,j})\sum_{m=0}^{k_{l+1,1}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}W_{l+1,m,n}\Bigg{]}$ $\displaystyle=\sigma^{\prime}(A_{l,i,j})\sum_{m,p=0}^{k_{l+1,2}-1}\sum_{n,q=0}^{k_{l+1,2}-1}\frac{\partial^{2}\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}\partial A_{l+1,(i-p),(j-q)}}\frac{\partial A_{l+1,(i-p),(j-q)}}{\partial A_{l,i,j}}W_{l+1,m,n}$ $\displaystyle+\sigma^{\prime\prime}(A_{l,i,j})\sum_{m=0}^{k_{l+1,2}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}W_{l+1,m,n}$ $\displaystyle\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}$ $\displaystyle=\frac{\partial}{\partial W_{l,i,j}}\Bigg{[}\sum_{m=0}^{h_{l}-k_{l,1}}\sum_{n=0}^{w_{l}-k_{l,2}}\frac{\partial\mathcal{L}}{\partial A_{l,m,n}}H_{l-1,(i+m),(j+n)}\Bigg{]}$ $\displaystyle=\sum_{m,p=0}^{h_{l}-k_{l,1}}\sum_{n,q=0}^{w_{l}-k_{l,2}}\frac{\partial^{2}\mathcal{L}}{\partial A_{l,m,n}\partial A_{l,p,q}}\frac{\partial A_{l,p,q}}{\partial W_{l,i,j}}H_{l-1,(i+m),(j+n)}$ Since the calculation of $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{A^{2}_{l,i,j}}}}$ and $\nicefrac{{\partial^{2}\mathcal{L}}}{{\partial{W^{2}_{l,i,j}}}}$ depend on the off-diagonal terms, the computation complexity becomes quadratic. 
Following Becker and Lecun (1989), we approximate the Hessian diagonals by ignoring the off-diagonal terms, which leads to a backpropagation rule with linear computational complexity for our estimates $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}}$ and $\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{A^{2}_{l,i,j}}}}$: $\displaystyle\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{A^{2}_{l,i,j}}}}$ $\displaystyle\doteq\sigma^{\prime}(A_{l,i,j})^{2}\sum_{m=0}^{k_{l+1,2}-1}\sum_{n=0}^{k_{l+1,2}-1}\widehat{\frac{\partial^{2}\mathcal{L}}{\partial A^{2}_{l+1,(i-m),(j-n)}}}W^{2}_{l+1,m,n}$ $\displaystyle+\sigma^{\prime\prime}(A_{l,i,j})\sum_{m=0}^{k_{l+1,2}-1}\sum_{n=0}^{k_{l+1,2}-1}\frac{\partial\mathcal{L}}{\partial A_{l+1,(i-m),(j-n)}}W_{l+1,m,n},$ (8) $\displaystyle\widehat{\frac{\partial^{2}\mathcal{L}}{\partial{W^{2}_{l,i,j}}}}$ $\displaystyle\doteq\sum_{m=0}^{h_{l}-k_{l,1}}\sum_{n=0}^{w_{l}-k_{l,2}}\widehat{\frac{\partial^{2}\mathcal{L}}{\partial A^{2}_{l,m,n}}}H^{2}_{l-1,(i+m),(j+n)}.$ (9) ## Appendix C Optimization plots in the number of epochs We give the training loss, training accuracy, validation loss, validation accuracy, test loss, and test accuracy for each of the methods we include in our comparison in Fig. 6 and Fig. 7. Moreover, we give the sensitivity plots for $\beta_{1}$, $\beta_{2}$, and $\alpha$ for each method in Fig. 8 and Fig. 9. (a) MNIST (b) CIFAR-10 Figure 6: Learning curves of each algorithm on two tasks, MNIST-MLP and CIFAR-10 3C3D, for 100 epochs. We show the best configuration for each algorithm on the validation set. The best parameter configuration for each algorithm is selected based on the area under the curve for the validation loss. (a) CIFAR-100 All CNN (b) CIFAR-100 3C3D Figure 7: Learning Curves of each algorithm on CIFAR-100 with All-CNN and 3C3D architectures, for 100 epochs. We show the best configuration for each algorithm on the validation set. The best parameter configuration for each algorithm is selected based on the area under the curve for the validation loss. (a) MNIST (b) CIFAR-10 Figure 8: Parameter Sensitivity study for each algorithm on two data sets, MNIST and CIFAR-10. The range of $\beta_{2}$ is $\\{0.99,0.999,0.9999\\}$ and the range of $\beta_{1}$ is $\\{0.0,0.9\\}$. Each point for each algorithm represents the average test loss given a set of parameters. (a) CIFAR-100 All-CNN (b) CIFAR-100 3C3D Figure 9: Parameter Sensitivity study for each algorithm on CIFAR-100 with All-CNN and 3C3D architectures. The range of step size is $\\{10^{-5},10^{-4},10^{-3},10^{-2},10^{-1},10^{0}\\}$. We choose $\beta_{1}$ to be equal to $0.9$ and $\beta_{2}$ to be equal to $0.999$. Each point for each algorithm represents the average test loss given a set of parameters.
# Intriguing properties of neural networks Christian Szegedy Google Inc. &Wojciech Zaremba New York University &Ilya Sutskever Google Inc. &Joan Bruna New York University &Dumitru Erhan Google Inc. &Ian Goodfellow University of Montreal &Rob Fergus New York University Facebook Inc. ###### Abstract Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network’s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input. ## 1 Introduction Deep neural networks are powerful learning models that achieve excellent performance on visual and speech recognition problems [9, 8]. Neural networks achieve high performance because they can express arbitrary computation that consists of a modest number of massively parallel nonlinear steps. But as the resulting computation is automatically discovered by backpropagation via supervised learning, it can be difficult to interpret and can have counter- intuitive properties. In this paper, we discuss two counter-intuitive properties of deep neural networks. The first property is concerned with the semantic meaning of individual units. Previous works [6, 13, 7] analyzed the semantic meaning of various units by finding the set of inputs that maximally activate a given unit. The inspection of individual units makes the implicit assumption that the units of the last feature layer form a distinguished basis which is particularly useful for extracting semantic information. Instead, we show in section 3 that random projections of $\phi(x)$ are semantically indistinguishable from the coordinates of $\phi(x)$. This puts into question the conjecture that neural networks disentangle variation factors across coordinates. Generally, it seems that it is the entire space of activations, rather than the individual units, that contains the bulk of the semantic information. A similar, but even stronger conclusion was reached recently by Mikolov et al. [12] for word representations, where the various directions in the vector space representing the words are shown to give rise to a surprisingly rich semantic encoding of relations and analogies. At the same time, the vector representations are stable up to a rotation of the space, so the individual units of the vector representations are unlikely to contain semantic information. The second property is concerned with the stability of neural networks with respect to small perturbations to their inputs. 
Consider a state-of-the-art deep neural network that generalizes well on an object recognition task. We expect such a network to be robust to small perturbations of its input, because a small perturbation cannot change the object category of an image. However, we find that by applying an _imperceptible_ non-random perturbation to a test image, it is possible to arbitrarily change the network’s prediction (see figure 5). These perturbations are found by optimizing the input to maximize the prediction error. We term the so perturbed examples “adversarial examples”. It is natural to expect that the precise configuration of the minimal necessary perturbations is a random artifact of the normal variability that arises in different runs of backpropagation learning. Yet, we found that adversarial examples are relatively robust, and are shared by neural networks with varied numbers of layers or activations, or trained on different subsets of the training data. That is, if we use one neural net to generate a set of adversarial examples, we find that these examples are still statistically hard for another neural network even when it was trained with different hyperparameters or, most surprisingly, when it was trained on a different set of examples. These results suggest that the deep neural networks that are learned by backpropagation have nonintuitive characteristics and intrinsic blind spots, whose structure is connected to the data distribution in a non-obvious way.

## 2 Framework

Notation. We denote by $x\in\mathbb{R}^{m}$ an input image, and by $\phi(x)$ the activation values of some layer. We first examine properties of the image of $\phi(x)$, and then we search for its blind spots. We perform a number of experiments on a few different networks and three datasets:
* • For the MNIST dataset, we used the following architectures [11]:
* – A simple fully connected network with one or more hidden layers and a Softmax classifier. We refer to this network as “FC”.
* – A classifier trained on top of an autoencoder. We refer to this network as “AE”.
* • The ImageNet dataset [3].
* – Krizhevsky et al. architecture [9]. We refer to it as “AlexNet”.
* • $\sim 10$M image samples from YouTube (see [10]).
* – Unsupervised trained network with $\sim$ 1 billion learnable parameters. We refer to it as “QuocNet”.
For the MNIST experiments, we use regularization with a weight decay of $\lambda$. Moreover, in some experiments we split the MNIST training dataset into two disjoint datasets $P_{1}$ and $P_{2}$, each with 30000 training cases.

## 3 Units of: $\phi(x)$

Traditional computer vision systems rely on feature extraction: often a single feature is easily interpretable, e.g., a histogram of colors, or quantized local derivatives. This allows one to inspect the individual coordinates of the feature space, and link them back to meaningful variations in the input domain. Similar reasoning was used in previous work that attempted to analyze neural networks that were applied to computer vision problems. These works interpret an activation of a hidden unit as a meaningful feature. They look for input images which maximize the activation value of this single feature [6, 13, 7, 4].
The aforementioned technique can be formally stated as visual inspection of images $x^{\prime}$, which satisfy (or are close to the maximum attainable value): $\displaystyle x^{\prime}=\operatorname*{arg\,max}_{x\in\mathcal{I}}\langle\phi(x),e_{i}\rangle,$ where $\mathcal{I}$ is a held-out set of images from the data distribution that the network was not trained on and $e_{i}$ is the natural basis vector associated with the $i$-th hidden unit. Our experiments show that any random direction $v\in\mathbb{R}^{n}$ gives rise to similarly interpretable semantic properties. More formally, we find that images $x^{\prime}$ are semantically related to each other, for many $x^{\prime}$ such that $\displaystyle x^{\prime}=\operatorname*{arg\,max}_{x\in\mathcal{I}}\langle\phi(x),v\rangle.$ This suggests that the natural basis is not better than a random basis for inspecting the properties of $\phi(x)$. This puts into question the notion that neural networks disentangle variation factors across coordinates. First, we evaluated the above claim using a convolutional neural network trained on MNIST. We used the MNIST test set for $\mathcal{I}$. Figure 1 shows images that maximize the activations in the natural basis, and Figure 2 shows images that maximize the activation in random directions. In both cases the resulting images share many high-level similarities. Next, we repeated our experiment on an AlexNet, where we used the validation set as $\mathcal{I}$. Figures 3 and 4 compare the natural basis to the random basis on the trained network. The rows appear to be semantically meaningful for both the single unit and the combination of units.

(a) Unit sensitive to lower round stroke. (b) Unit sensitive to upper round stroke, or lower straight stroke. (c) Unit sensitive to left, upper round stroke. (d) Unit sensitive to diagonal straight stroke.
Figure 1: An MNIST experiment. The figure shows images that maximize the activation of various units (maximum stimulation in the natural basis direction). Images within each row share semantic properties.

(a) Direction sensitive to upper straight stroke, or lower round stroke. (b) Direction sensitive to lower left loop. (c) Direction sensitive to round top stroke. (d) Direction sensitive to right, upper round stroke.
Figure 2: An MNIST experiment. The figure shows images that maximize the activations in a random direction (maximum stimulation in a random basis). Images within each row share semantic properties.

(a) Unit sensitive to white flowers. (b) Unit sensitive to postures. (c) Unit sensitive to round, spiky flowers. (d) Unit sensitive to round green or yellow objects.
Figure 3: Experiment performed on ImageNet. Images stimulating a single unit most (maximum stimulation in the natural basis direction). Images within each row share many semantic properties.

(a) Direction sensitive to white, spread flowers. (b) Direction sensitive to white dogs. (c) Direction sensitive to spread shapes. (d) Direction sensitive to dogs with brown heads.
Figure 4: Experiment performed on ImageNet. Images giving rise to maximum activations in a random direction (maximum stimulation in a random basis). Images within each row share many semantic properties.

Although such analysis gives insight into the capacity of $\phi$ to generate invariance on a particular subset of the input distribution, it does not explain the behavior on the rest of its domain. We shall see in the next section that $\phi$ has counterintuitive properties in the neighbourhood of almost every point from the data distribution.
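The inspection procedure above is straightforward to implement. The following is a minimal sketch (our illustration, not the authors' code), assuming a feature extractor `phi`, a held-out data loader `heldout_loader`, and a feature dimensionality `n_features`, all hypothetical names: it ranks held-out images by their projection onto a chosen direction, which can be either a natural-basis vector $e_{i}$ or a random direction $v$.

```python
# Sketch: find held-out images x' maximizing <phi(x), direction>.
import torch

def top_images_along(phi, loader, direction, k=8):
    """Return the k images from the held-out set maximizing <phi(x), direction>."""
    scores, images = [], []
    with torch.no_grad():
        for x, _ in loader:                      # labels are unused here
            feats = phi(x)                       # shape: (batch, n_features)
            scores.append(feats @ direction)
            images.append(x)
    scores = torch.cat(scores)
    images = torch.cat(images)
    top = torch.topk(scores, k).indices
    return images[top]

# Natural-basis direction (unit i) versus a random direction of unit norm;
# n_features, unit_index, phi, and heldout_loader are assumed to be defined.
# e_i = torch.zeros(n_features); e_i[unit_index] = 1.0
# v = torch.randn(n_features); v = v / v.norm()
# natural_top = top_images_along(phi, heldout_loader, e_i)
# random_top = top_images_along(phi, heldout_loader, v)
```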
## 4 Blind Spots in Neural Networks

So far, unit-level inspection methods have had relatively little utility beyond confirming certain intuitions regarding the complexity of the representations learned by a deep neural network [6, 13, 7, 4]. Global, network-level inspection methods _can_ be useful in the context of explaining classification decisions made by a model [1] and can be used to, for instance, identify the parts of the input which led to a correct classification of a given visual input instance (in other words, one can use a trained model for weakly-supervised localization). Such global analyses are useful in that they can help us better understand the input-to-output mapping represented by the trained network. Generally speaking, the output layer unit of a neural network is a highly nonlinear function of its input. When it is trained with the cross-entropy loss (using the Softmax activation function), it represents a conditional distribution of the label given the input (and the training set presented so far). It has been argued [2] that the deep stack of non-linear layers in between the input and the output unit of a neural network is a way for the model to encode a _non-local generalization prior_ over the input space. In other words, it is assumed that it is possible for the output unit to assign non-significant (and, presumably, non-epsilon) probabilities to regions of the input space that contain no training examples in their vicinity. Such regions can represent, for instance, the same objects from different viewpoints, which are relatively far (in pixel space), but which nonetheless share both the label and the statistical structure of the original inputs. It is implicit in such arguments that _local_ generalization, in the very proximity of the training examples, works as expected. In particular, for a small enough radius $\varepsilon>0$, an input $x+r$ in the vicinity of a given training input $x$ satisfying $||r||<\varepsilon$ will be assigned a high probability of the correct class by the model. This kind of smoothness prior is typically valid for computer vision problems. In general, imperceptibly tiny perturbations of a given image do not normally change the underlying class. Our main result is that for deep neural networks, the smoothness assumption that underlies many kernel methods does not hold. Specifically, we show that by using a simple optimization procedure, we are able to find adversarial examples, which are obtained by imperceptibly small perturbations to a correctly classified input image, so that it is no longer classified correctly. In some sense, what we describe is a way to traverse the manifold represented by the network in an efficient way (by optimization) and find _adversarial examples_ in the input space. The adversarial examples represent low-probability (high-dimensional) “pockets” in the manifold, which are hard to find efficiently by simply randomly sampling the input around a given example. Already, a variety of recent state-of-the-art computer vision models employ input deformations during training to increase the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.
We make the connection with hard-negative mining explicit, as it is close in spirit: hard-negative mining, in computer vision, consists of identifying training set examples (or portions thereof) which are given low probabilities by the model, but which should have high probability instead, cf. [5]. The training set distribution is then changed to emphasize such hard negatives and a further round of model training is performed. As shall be described, the optimization problem proposed in this work can also be used in a constructive way, similar to the hard-negative mining principle.

### 4.1 Formal description

We denote by $f:\mathbb{R}^{m}\longrightarrow\\{1\dots k\\}$ a classifier mapping image pixel value vectors to a discrete label set. We also assume that $f$ has an associated continuous loss function denoted by $\textrm{loss}_{f}:\mathbb{R}^{m}\times\\{1\dots k\\}\longrightarrow\mathbb{R}^{+}$. For a given image $x\in\mathbb{R}^{m}$ and target label $l\in\\{1\dots k\\}$, we aim to solve the following box-constrained optimization problem:

* • Minimize $\|r\|_{2}$ subject to:
  1. $f(x+r)=l$
  2. $x+r\in[0,1]^{m}$

The minimizer $r$ might not be unique, but we denote one such $x+r$ for an arbitrarily chosen minimizer by $D(x,l)$. Informally, $x+r$ is the closest image to $x$ classified as $l$ by $f$. Obviously, $D(x,f(x))=x$, so this task is non-trivial only if $f(x)\neq l$. In general, the exact computation of $D(x,l)$ is a hard problem, so we approximate it by using a box-constrained L-BFGS. Concretely, we find an approximation of $D(x,l)$ by performing line-search to find the minimum $c>0$ for which the minimizer $r$ of the following problem satisfies $f(x+r)=l$.

* • Minimize $c|r|+\textrm{loss}_{f}(x+r,l)$ subject to $x+r\in[0,1]^{m}$

This penalty function method would yield the exact solution for $D(x,l)$ in the case of convex losses; however, neural networks are non-convex in general, so we end up with an approximation in this case.
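To make the approximation concrete, the following is a minimal sketch of the penalty formulation above, using SciPy's box-constrained L-BFGS on a toy differentiable softmax classifier standing in for the trained network $f$. The squared penalty $c\|r\|^{2}$ is used here as a smooth surrogate for the $c|r|$ term, and the coarse loop over decreasing $c$ stands in for the line-search described in the text; the model, shapes, and penalty schedule are all assumptions for the example rather than the original implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, k = 64, 10                                        # toy "image" size and label count
W, b = rng.normal(size=(k, m)), rng.normal(size=k)   # toy linear softmax "network" f

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_input_grad(x, label):
    """Cross-entropy loss of the toy classifier and its gradient w.r.t. the input x."""
    p = softmax(W @ x + b)
    return -np.log(p[label] + 1e-12), W.T @ (p - np.eye(k)[label])

def approx_D(x, target, c):
    """Approximate D(x, target): minimize c*||r||^2 + loss_f(x + r, target), with x + r in [0, 1]^m."""
    def objective(r):
        loss, g = loss_and_input_grad(x + r, target)
        return c * (r @ r) + loss, 2 * c * r + g
    bounds = [(-xi, 1.0 - xi) for xi in x]            # box constraint on x + r
    res = minimize(objective, np.zeros(m), jac=True, method="L-BFGS-B", bounds=bounds)
    return x + res.x

x = rng.uniform(0.2, 0.8, size=m)
source = int(np.argmax(W @ x + b))
target = (source + 1) % k
for c in (10.0, 1.0, 0.1, 0.01):                      # crude stand-in for the line-search over c
    x_adv = approx_D(x, target, c)
    if int(np.argmax(W @ x_adv + b)) == target:
        print(f"c={c}: ||r||_2 = {np.linalg.norm(x_adv - x):.4f}")
        break
```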
### 4.2 Experimental results

Figure 5: Adversarial examples generated for AlexNet [9]. (Left) a correctly predicted sample; (center) the difference between the correct image and the incorrectly predicted image, magnified by 10x (values shifted by 128 and clamped); (right) the adversarial example. All images in the right column are predicted to be an “ostrich, Struthio camelus”. Average distortion based on 64 examples is 0.006508. Please refer to http://goo.gl/huaGPb for full resolution images. The examples were chosen strictly at random; no postselection was involved.

Figure 6: Adversarial examples for QuocNet [10]. A binary car classifier was trained on top of the last layer features without fine-tuning. The randomly chosen examples on the left are recognized correctly as cars, while the images in the middle are not recognized. The rightmost column is the magnified absolute value of the difference between the two images.

Our “minimum distortion” function $D$ has the following intriguing properties, which we will support by informal evidence and quantitative experiments in this section:

1. For all the networks we studied (MNIST, QuocNet [10], AlexNet [9]), for each sample, we have always managed to generate very close, visually hard to distinguish, adversarial examples that are misclassified by the original network (see Figure 5 and http://goo.gl/huaGPb for examples).
2. Cross-model generalization: a relatively large fraction of examples will be misclassified by networks trained from scratch with different hyper-parameters (number of layers, regularization or initial weights).
3. Cross training-set generalization: a relatively large fraction of examples will be misclassified by networks trained from scratch on a disjoint training set.

The above observations suggest that adversarial examples are somewhat universal and not just the result of overfitting to a particular model or to the specific selection of the training set. They also suggest that feeding adversarial examples back into training might improve generalization of the resulting models. Our preliminary experiments have yielded positive evidence on MNIST to support this hypothesis as well: we have successfully trained a two-hidden-layer 100-100-10 non-convolutional neural network with a test error below $1.2\%$ by keeping a pool of adversarial examples, a random subset of which is continuously replaced by newly generated adversarial examples, and which is mixed into the original training set all the time. We used weight decay, but no dropout, for this network. For comparison, a network of this size gets to $1.6\%$ errors when regularized by weight decay alone and can be improved to around $1.3\%$ by using carefully applied dropout. A subtle but essential detail is that we only got improvements by generating adversarial examples for each layer's outputs, which were used to train all the layers above. The network was trained in an alternating fashion, maintaining and updating a pool of adversarial examples for each layer separately, in addition to the original training set. According to our initial observations, adversarial examples for the higher layers seemed to be significantly more useful than those on the input or lower layers. In our future work, we plan to compare these effects in a systematic manner. For space considerations, we just present results for a representative subset (see Table 1) of the MNIST experiments we performed. The results presented here are consistent with those on a larger variety of non-convolutional models. For MNIST, we do not have results for convolutional models yet, but our first qualitative experiments with AlexNet give us reason to believe that convolutional networks may behave similarly as well. Each of our models was trained with L-BFGS until convergence. The first three models are linear classifiers that work on the pixel level with various weight decay parameters $\lambda$. All our examples use quadratic weight decay on the connection weights: $\textrm{loss}_{decay}=\lambda\sum w_{i}^{2}/k$ added to the total loss, where $k$ is the number of units in the layer. Three of our models are simple linear (softmax) classifiers without hidden units (FC10($\lambda$)). One of them, FC10($1$), is trained with an extremely high $\lambda=1$ in order to test whether it is still possible to generate adversarial examples in this extreme setting as well. Two other models are simple sigmoidal neural networks with two hidden layers and a classifier. The last model, AE400-10, consists of a single-layer sparse autoencoder with sigmoid activations and 400 nodes, followed by a Softmax classifier. This network was trained until it produced very high quality first-layer filters, and this layer was not fine-tuned. The last column measures the minimum average pixel-level distortion necessary to reach $0\%$ accuracy on the training set.
The distortion is measured by $\sqrt{\frac{\sum(x_{i}^{\prime}-x_{i})^{2}}{n}}$ between the original $x$ and distorted $x^{\prime}$ images, where $n=784$ is the number of image pixels. The pixel intensities are scaled to be in the range $[0,1]$. In our first experiment, we generated a set of adversarial instances for a given network and fed these examples to each of the other networks to measure the proportion of misclassified instances. The last column shows the average minimum distortion that was necessary to reach 0% accuracy on the whole training set. The experimental results are presented in Table 2. The columns of Table 2 show the error (proportion of misclassified instances) on the so-distorted training sets. The last two rows are given for reference, showing the error induced when distorting by the given amounts of Gaussian noise. Note that even the noise with stddev 0.1 is greater than the stddev of our adversarial noise for all but one of the models. Figure 7 shows a visualization of the generated adversarial instances for two of the networks used in this experiment. The general conclusion is that adversarial examples tend to stay hard even for models trained with different hyperparameters. Although the autoencoder-based version seems most resilient to adversarial examples, it is not fully immune either.

(a) Even columns: adversarial examples for a linear (FC) classifier (stddev=0.06) (b) Even columns: adversarial examples for a 200-200-10 sigmoid network (stddev=0.063) (c) Randomly distorted samples by Gaussian noise with stddev=1. Accuracy: 51%.

Figure 7: Adversarial examples for a randomly chosen subset of MNIST compared with randomly distorted examples. Odd columns correspond to original images, and even columns correspond to distorted counterparts. The adversarial examples generated for the specific model have accuracy 0% for the respective model. Note that while the randomly distorted examples are hardly readable, they are still classified correctly in half of the cases, while the adversarial examples are never classified correctly.

Model Name | Description | Training error | Test error | Av. min. distortion
---|---|---|---|---
FC10($10^{-4}$) | Softmax with $\lambda=10^{-4}$ | 6.7% | 7.4% | 0.062
FC10($10^{-2}$) | Softmax with $\lambda=10^{-2}$ | 10% | 9.4% | 0.1
FC10($1$) | Softmax with $\lambda=1$ | 21.2% | 20% | 0.14
FC100-100-10 | Sigmoid network $\lambda=10^{-5},10^{-5},10^{-6}$ | 0% | 1.64% | 0.058
FC200-200-10 | Sigmoid network $\lambda=10^{-5},10^{-5},10^{-6}$ | 0% | 1.54% | 0.065
AE400-10 | Autoencoder with Softmax $\lambda=10^{-6}$ | 0.57% | 1.9% | 0.086

Table 1: Tests of the generalization of adversarial instances on MNIST.

Examples generated for | FC10($10^{-4}$) | FC10($10^{-2}$) | FC10($1$) | FC100-100-10 | FC200-200-10 | AE400-10 | Av. distortion
---|---|---|---|---|---|---|---
FC10($10^{-4}$) | 100% | 11.7% | 22.7% | 2% | 3.9% | 2.7% | 0.062
FC10($10^{-2}$) | 87.1% | 100% | 35.2% | 35.9% | 27.3% | 9.8% | 0.1
FC10($1$) | 71.9% | 76.2% | 100% | 48.1% | 47% | 34.4% | 0.14
FC100-100-10 | 28.9% | 13.7% | 21.1% | 100% | 6.6% | 2% | 0.058
FC200-200-10 | 38.2% | 14% | 23.8% | 20.3% | 100% | 2.7% | 0.065
AE400-10 | 23.4% | 16% | 24.8% | 9.4% | 6.6% | 100% | 0.086
Gaussian noise, stddev=0.1 | 5.0% | 10.1% | 18.3% | 0% | 0% | 0.8% | 0.1
Gaussian noise, stddev=0.3 | 15.6% | 11.3% | 22.7% | 5% | 4.3% | 3.1% | 0.3

Table 2: Cross-model generalization of adversarial examples. The columns show the error induced by distorted examples fed to the given model. The last column shows the average distortion with respect to the original training set.
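For reference, the distortion statistic reported in these tables can be computed as in the following short sketch, which assumes the images are stored as flat arrays of pixel intensities in $[0,1]$ (for MNIST, $n=784$); the data below are random stand-ins, not the actual examples.

```python
import numpy as np

def average_distortion(originals, distorted):
    """Mean over images of sqrt(sum_i (x'_i - x_i)^2 / n), the per-pixel RMS distortion."""
    n = originals.shape[1]
    per_image = np.sqrt(((distorted - originals) ** 2).sum(axis=1) / n)
    return per_image.mean()

# Toy usage: 5 fake "MNIST" images and mildly perturbed versions of them.
rng = np.random.default_rng(0)
x = rng.uniform(size=(5, 784))
x_prime = np.clip(x + rng.normal(scale=0.06, size=x.shape), 0.0, 1.0)
print(average_distortion(x, x_prime))
```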
Still, this experiment leaves open the question of dependence on the training set. Does the hardness of the generated examples rely solely on the particular choice of our training set as a sample, or does this effect generalize even to models trained on completely different training sets?

Model | Error on $P_{1}$ | Error on $P_{2}$ | Error on Test | Min Av. Distortion
---|---|---|---|---
FC100-100-10: 100-100-10 trained on $P_{1}$ | 0% | 2.4% | 2% | 0.062
FC123-456-10: 123-456-10 trained on $P_{1}$ | 0% | 2.5% | 2.1% | 0.059
FC100-100-10’ trained on $P_{2}$ | 2.3% | 0% | 2.1% | 0.058

Table 3: Models trained to study cross-training-set generalization of the generated adversarial examples. Errors presented in the table correspond to the original, non-distorted data, to provide a baseline.

| FC100-100-10 | FC123-456-10 | FC100-100-10’
---|---|---|---
Distorted for FC100-100-10 (av. stddev=0.062) | 100% | 26.2% | 5.9%
Distorted for FC123-456-10 (av. stddev=0.059) | 6.25% | 100% | 5.1%
Distorted for FC100-100-10’ (av. stddev=0.058) | 8.2% | 8.2% | 100%
Gaussian noise with stddev=$0.06$ | 2.2% | 2.6% | 2.4%
Distorted for FC100-100-10 amplified to stddev=$0.1$ | 100% | 98% | 43%
Distorted for FC123-456-10 amplified to stddev=$0.1$ | 96% | 100% | 22%
Distorted for FC100-100-10’ amplified to stddev=$0.1$ | 27% | 50% | 100%
Gaussian noise with stddev=$0.1$ | 2.6% | 2.8% | 2.7%

Table 4: Cross-training-set generalization error rate for the set of adversarial examples generated for different models. The error induced by a random distortion to the same examples is displayed in the last row.

To study cross-training-set generalization, we have partitioned the 60000 MNIST training images into two parts $P_{1}$ and $P_{2}$ of size 30000 each and trained three non-convolutional networks with sigmoid activations on them: two, FC100-100-10 and FC123-456-10, on $P_{1}$, and FC100-100-10’ on $P_{2}$. The reason we trained two networks for $P_{1}$ is to study the cumulative effect of changing the hyperparameters and the training sets at the same time. Models FC100-100-10 and FC100-100-10’ share the same hyperparameters: both of them are 100-100-10 networks, while FC123-456-10 has a different number of hidden units. In this experiment, we were distorting the elements of the test set rather than the training set. Table 3 summarizes the basic facts about these models. After we generate adversarial examples with $100\%$ error rates with minimum distortion for the test set, we feed these examples to each of the models. The error for each model is displayed in the corresponding column of the upper part of Table 4. In the last experiment, we magnify the effect of our distortion by using the examples $x+0.1\frac{x^{\prime}-x}{\|x^{\prime}-x\|_{2}}$ rather than $x^{\prime}$. This magnifies the distortion on average by 40%, from stddev $0.06$ to $0.1$. The examples so distorted are fed back to each of the models and the error rates are displayed in the lower part of Table 4. The intriguing conclusion is that the adversarial examples remain hard for models trained even on a disjoint training set, although their effectiveness decreases considerably.

### 4.3 Spectral Analysis of Instability

The previous section showed examples of deep networks resulting from purely supervised training which are unstable with respect to a peculiar form of small perturbations.
Independently of their generalisation properties across networks and training sets, the adversarial examples show that there exist small additive perturbations of the input (in the Euclidean sense) that produce large perturbations at the output of the last layer. This section describes a simple procedure to measure and control the additive stability of the network by measuring the spectrum of each rectified layer. Mathematically, if $\phi(x)$ denotes the output of a network of $K$ layers corresponding to input $x$ and trained parameters $W$, we write $\phi(x)=\phi_{K}(\phi_{K-1}(\dots\phi_{1}(x;W_{1});W_{2})\dots;W_{K}),$ where $\phi_{k}$ denotes the operator mapping layer $k-1$ to layer $k$. The instability of $\phi(x)$ can be explained by inspecting the upper Lipschitz constant of each layer $k=1\dots K$, defined as the constant $L_{k}>0$ such that $\forall\,x,\,r,\quad\|\phi_{k}(x;W_{k})-\phi_{k}(x+r;W_{k})\|\leq L_{k}\|r\|.$ The resulting network thus satisfies $\|\phi(x)-\phi(x+r)\|\leq L\|r\|$, with $L=\prod_{k=1}^{K}L_{k}$. A half-rectified layer (whether convolutional or fully connected) is defined by the mapping $\phi_{k}(x;W_{k},b_{k})=\max(0,W_{k}x+b_{k})$. Let $\|W\|$ denote the operator norm of $W$ (i.e., its largest singular value). Since the non-linearity $\rho(x)=\max(0,x)$ is contractive, i.e. satisfies $\|\rho(x)-\rho(x+r)\|\leq\|r\|$ for all $x,r$, it follows that $\|\phi_{k}(x;W_{k})-\phi_{k}(x+r;W_{k})\|=\|\max(0,W_{k}x+b_{k})-\max(0,W_{k}(x+r)+b_{k})\|\leq\|W_{k}r\|\leq\|W_{k}\|\|r\|,$ and hence $L_{k}\leq\|W_{k}\|$. On the other hand, a max-pooling layer $\phi_{k}$ is contractive: $\forall\,x\,,\,r\,,\quad\|\phi_{k}(x)-\phi_{k}(x+r)\|\leq\|r\|,$ since its Jacobian is a projection onto a subset of the input coordinates and hence does not expand the gradients. Finally, if $\phi_{k}$ is a contrast-normalization layer $\phi_{k}(x)=\frac{x}{\Big{(}\epsilon+\|x\|^{2}\Big{)}^{\gamma}},$ one can verify that $\forall\,x\,,\,r\,,\quad\|\phi_{k}(x)-\phi_{k}(x+r)\|\leq\epsilon^{-\gamma}\|r\|$ for $\gamma\in[0.5,1]$, which corresponds to most common operating regimes. It follows that a conservative measure of the instability of the network can be obtained by simply computing the operator norm of each fully connected and convolutional layer. The fully connected case is trivial, since the norm is directly given by the largest singular value of the fully connected matrix. Let us describe the convolutional case. If $W$ denotes a generic $4$-tensor, implementing a convolutional layer with $C$ input features, $D$ output features, support $N\times N$ and spatial stride $\Delta$, $Wx=\left\\{\sum_{c=1}^{C}x_{c}\star w_{c,d}(n_{1}\Delta,n_{2}\Delta)\,;\,d=1\,\dots,D\right\\},$ where $x_{c}$ denotes the $c$-th input feature image, and $w_{c,d}$ is the spatial kernel corresponding to input feature $c$ and output feature $d$, then by applying Parseval’s formula we obtain that its operator norm is given by $\|W\|=\sup_{\xi\in[0,N\Delta^{-1})^{2}}\|A(\xi)\|,$ (1) where $A(\xi)$ is a $D\times(C\cdot\Delta^{2})$ matrix whose rows are $\forall\,d=1\dots D,\quad A(\xi)_{d}=\Big{(}\Delta^{-2}\widehat{w_{c,d}}(\xi+l\cdot N\cdot\Delta^{-1})\,;\,c=1\dots C\,,\,l=(0\dots\Delta-1)^{2}\Big{)},$ and $\widehat{w_{c,d}}$ is the 2-D Fourier transform of $w_{c,d}$: $\widehat{w_{c,d}}(\xi)=\sum_{u\in[0,N)^{2}}w_{c,d}(u)e^{-2\pi i(u\cdot\xi)/N^{2}}.$
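As an illustration of how such bounds can be evaluated, the following is a minimal sketch for the stride-1 case ($\Delta=1$) of (1), where $A(\xi)$ reduces to the matrix of 2-D DFT coefficients of the kernels at frequency $\xi$ and the bound is the largest spectral norm over frequencies. The kernel layout (C, D, N, N) and the random weights are assumptions made for the example; for strided layers the $\Delta^{2}$ aliased frequency bins would have to be gathered into $A(\xi)$ as in (1).

```python
import numpy as np

def conv_operator_norm_stride1(w):
    """Upper Lipschitz bound of a stride-1 convolutional layer via (1).

    w has shape (C, D, N, N): spatial kernels w_{c,d} with support N x N,
    for C input features and D output features.
    """
    C, D, N, _ = w.shape
    w_hat = np.fft.fft2(w, axes=(-2, -1)).reshape(C, D, N * N)   # \hat{w}_{c,d}(xi)
    # Spectral norm (largest singular value) of the C x D matrix at each frequency xi.
    return max(np.linalg.norm(w_hat[:, :, i], ord=2) for i in range(N * N))

def fc_operator_norm(W):
    """For a fully connected layer the bound is simply the largest singular value."""
    return np.linalg.norm(W, ord=2)

# Toy usage with random weights standing in for trained parameters.
rng = np.random.default_rng(0)
print(conv_operator_norm_stride1(0.05 * rng.normal(size=(96, 256, 5, 5))))
print(fc_operator_norm(0.01 * rng.normal(size=(4096, 4096))))
```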
Layer | Size | Stride | Upper bound
---|---|---|---
Conv. 1 | $3\times 11\times 11\times 96$ | 4 | 2.75
Conv. 2 | $96\times 5\times 5\times 256$ | 1 | 10
Conv. 3 | $256\times 3\times 3\times 384$ | 1 | 7
Conv. 4 | $384\times 3\times 3\times 384$ | 1 | 7.5
Conv. 5 | $384\times 3\times 3\times 256$ | 1 | 11
FC. 1 | $9216\times 4096$ | N/A | 3.12
FC. 2 | $4096\times 4096$ | N/A | 4
FC. 3 | $4096\times 1000$ | N/A | 4

Table 5: Frame bounds of each rectified layer of the network from [9].

Table 5 shows the upper Lipschitz bounds computed from the ImageNet deep convolutional network of [9], using (1). It shows that instabilities can appear as early as the first convolutional layer. These results are consistent with the existence of the blind spots constructed in the previous section, but they do not attempt to explain why these examples generalize across different hyperparameters or training sets. We emphasize that we compute upper bounds: large bounds do not automatically translate into the existence of adversarial examples; however, small bounds guarantee that no such examples can appear. This suggests a simple regularization of the parameters, consisting of penalizing each upper Lipschitz bound, which might help improve the generalisation error of the networks.

## 5 Discussion

We demonstrated that deep neural networks have counter-intuitive properties both with respect to the semantic meaning of individual units and with respect to their discontinuities. The existence of the adversarial negatives appears to be in contradiction with the network’s ability to achieve high generalization performance. Indeed, if the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? A possible explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case. However, we do not yet have a deep understanding of how often adversarial negatives appear, and this issue should be addressed in future research.

## References

* [1] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. The Journal of Machine Learning Research, 99:1803–1831, 2010.
* [2] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
* [3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
* [4] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, June 2009. Also presented at the ICML 2009 Workshop on Learning Feature Hierarchies, Montréal, Canada.
* [5] Pedro Felzenszwalb, David McAllester, and Deva Ramanan. A discriminatively trained, multiscale, deformable part model. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
* [6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524, 2013.
* [7] Ian Goodfellow, Quoc Le, Andrew Saxe, Honglak Lee, and Andrew Y Ng. Measuring invariances in deep networks. Advances in Neural Information Processing Systems, 22:646–654, 2009.
* [8] Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
* [9] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
* [10] Quoc V Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S Corrado, Jeff Dean, and Andrew Y Ng. Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209, 2011.
* [11] Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits, 1998.
* [12] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
* [13] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.
This work was funded (or partially funded) by the Center for Statistics and Applications in Forensic Evidence (CSAFE) through Cooperative Agreements 70NANB15H176 and 70NANB20H019 between NIST and Iowa State University, which includes activities carried out at Carnegie Mellon University, Duke University, University of California Irvine, University of Virginia, West Virginia University, University of Pennsylvania, Swarthmore College and University of Nebraska, Lincoln. Corresponding author: Abby Martin (e-mail: [email protected]).

# Forensic Camera Identification: Effects of Off-Nominal Exposures

ABBY MARTIN1, ROY MAXION2, AND JENNIFER NEWMAN3

1 Department of Mathematics, Iowa State University, Ames, IA 50011 USA (e-mail<EMAIL_ADDRESS>
2 Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 USA (e-mail<EMAIL_ADDRESS>
3 Department of Mathematics, Iowa State University, Ames, IA 50011 USA (e-mail<EMAIL_ADDRESS>

###### Abstract Photo-response non-uniformity (PRNU) is a technology that can match a digital photograph to the camera that took it. Because it is used in forensic investigations and by forensic experts in court, it is important that error rates for this technology are reliable for a wide range of evidence image types. In particular, images with off-nominal exposures are not uncommon. This paper presents a preliminary investigation of the impact that images with different exposure types — too dark or too light — have on error rates for PRNU source camera identification. We construct a new dataset comprised of 8400 carefully collected images ranging from under-exposed (too dark) to nominally exposed to over-exposed (too bright). We first establish baseline error rates using only nominally exposed images, resulting in a true-positive rate of 100% and a true-negative rate of 99.92%. When off-nominal images are tested, we find striking results: the true-negative rate for under-exposed images is 99.46% (a false-positive rate of roughly one in two hundred, typically unacceptable in a forensic context), and for over-exposed images the true-positive rate falls to 82.90%. Our results highlight the importance of continued study of error rates for PRNU source camera identification to assure adherence to the high standards set for admissibility of forensic evidence in court.

###### Index Terms: Camera Identification, Digital Forensics, Image Forensics, PRNU, Questioned Images

## I Introduction

Consider a photo found at a crime scene. The photo can be traced to the camera that took it using a phenomenon known as photo response non-uniformity (PRNU). PRNU is a unique and persistent pattern (fingerprint) of variabilities of voltage levels across the pixels in a digital camera’s photosensitive image sensor, and this pattern appears in each photo the camera takes. The PRNU fingerprint of the camera’s sensor is compared with the PRNU fingerprint of a photo, producing a score measured against an accepted fixed threshold to determine a match (or non-match) between the camera sensor and the photo. The PRNU source-camera-identification algorithm is based on a large-scale study of images from Flickr [1], and is considered accurate across a variety of images. Flickr is a public website for users to share images.
Due to self-curation (users uploading their best images), Flickr has a negligible proportion of off-nominal exposure images [2] (e.g., over- or under-exposed, like the examples in Fig. 2 [3]). Thus, results from Flickr data may have overlooked limitations, including generalizing error rates to data not represented in the Flickr dataset (e.g., off-nominal exposure types). The exclusion of off-nominal data when testing forensic tools has real-world consequences. Studies performed by the forensic community play a critical role supporting admissibility of expert witness testimony under federal law (including estimating error rates), as introduced by Daubert v. Merrell Dow Pharmaceuticals [4]. A large-scale study [1] established the court-approved PRNU source camera identification algorithm using Flickr images to determine error rates. However, many forensic tools have opportunities for improvement due to a host of factors, such as a lack of standardized corpora [5], non-existent tool validation procedures [6], etc. We were unable to find any studies that determined error rates for off-nominal exposure images for the PRNU source camera identification algorithm. Additionally, the data in [1] are no longer available [7]. This motivated our investigation of the PRNU source-camera-identification algorithm applied to known off-nominal image data. Our contribution is a methodology to evaluate the response of the PRNU source-camera-identification algorithm to off-nominal exposure images. Forensic experts representing both the prosecution and defense can use publications incorporating this methodology to better represent the error rates associated with specific evidence images, especially the false-positive error rate (incorrectly matching an image with a camera). Our methodology can be adapted to estimate PRNU source-camera-identification error rates for other types of off-nominal images as well. This process can be integrated into the estimation of error rates for a tool or technique to satisfy the Daubert criteria. The rest of the paper is organized as follows: Section II outlines the research problem and approach; Section III clarifies language used throughout the remainder of the paper; Section IV is a discussion of the forensic development of PRNU source camera identification; Section V provides a detailed overview of the PRNU source-camera-identification algorithm used in this paper; Section VI describes the data and characterizes under- and over-exposed images; Section VII outlines the methodology of the experiments performed; and Section VIII presents the results. In Section IX, we share the limitations of the experiments, followed by a discussion of the implications of our results in Section X. We conclude with a summary in Section XI.

## II Problem and Approach

Recognizing that prior work did not account for off-nominal images (e.g., too light or too dark), the present work aims to determine whether off-nominal images skew the true positive and true negative rates previously reported in [1]. We implement the PRNU algorithm presented in that work to establish error rates for off-nominal images. In Section VII, we present a rigorous evaluation of the error rates for this source-camera-identification algorithm when the image exposures are off-nominal, including a sensitivity analysis to determine how proportions of off-nominal images in the dataset can impact error rate estimates.
In principle, this framework could also be applied to other types of off-nominal image data (e.g., digital zoom or out-of-focus). ## III Terminology The image of unknown origin, which could be considered evidence in a forensic case, is referred to as the questioned image. The camera believed to have taken the questioned image is referred to as the specific camera. To avoid repetitious language, when we refer to an image, we mean the questioned image (unless otherwise specified). Similarly, when we refer to a camera, we mean the specific camera. We focus on the source-camera-identification problem, which aims to identify the specific camera that captured a questioned image. We do not address the camera-model-identification problem, which attempts to identify only the model of the camera used to collect the questioned image. As an example, source camera identification could conclude an image came from a specific iPhone6s camera, whereas camera-model identification would only be able to conclude the image was from some iPhone6s camera (i.e., _this_ camera versus this _kind_ of camera). For simplicity, when we refer to camera identification we are referring to the source-camera-identification problem. We use PRNU cameraID algorithm to refer specifically to the source-camera-identification algorithm as established in the large-scale study [1], which is the algorithm used throughout this paper. ## IV Background and Related Work Photo response non-uniformity (PRNU) is a persistent artifact in digital images due to imperfections in the camera-sensor manufacturing process [8]. When light impinges on the photosensitive portion of a pixel, called the photodiode, the pixel responds by generating a current in proportion to the number of photons striking it. However, the imperfections result in consistent differences from the mean values of currents among the pixels, and it is this pattern of responses – the PRNU – that is unique to each camera sensor. This pattern remains constant and present in each image; therefore, PRNU can be considered part of the sensor’s fixed pattern noise. The first digital image sensor, a charged-coupled device (CCD), was invented in 1970 [9], and within a year large variabilities in dark signal (thermal noise when no light is falling on the sensor) were observed [10]. The first use of fixed pattern noise to identify an individual digital camera sensor appeared in [11] using the dark signal. This method assumes scenes that are very dark, so it is not useful for images taken under typical lighting conditions. In 2005, the first computational method was introduced for extracting the PRNU from an image for the purpose of source camera identification [12]. Since the PRNU can be extracted from photos taken in typical lighting environments, researchers shifted to the PRNU as a camera fingerprint. Several improvements followed, including: introducing a maximum likelihood estimator to improve the camera fingerprint calculation [13]; changing to the peak-to-correlation energy (PCE) ratio as a similarity metric [14]; and discovering that (unlike correlation scores) a threshold value based on the PCE does not need to change for each camera fingerprint estimation [15]. PRNU-based camera identification became standardized for court use in a large- scale study performed on 1,053,580 JPEG images from Flickr representing approximately 6,896 cameras over 150 models [1]. 
This work used the distribution of PCE scores between the questioned images and the camera fingerprints to set a threshold based on the false positive rate (FPR) to ensure acceptable true negative rates (TNR) and true positive rates (TPR), resulting in a recommended PCE threshold of 60. The overall identification rates given in [1] are a TPR of 97.62% and a TNR of 99.9976%. Our goal is to use the standard algorithm set by this work to establish error rates for a very different set of data. In Section V, we give details of the PRNU source- camera-identification (PRNU cameraID) algorithm as established in [1]. Research since 2009 has continued to modify and question PRNU camera identification. Use of the PRNU has expanded to fields such as forensic countermeasures for forged PRNU information [16, 17], image anonymity techniques [18], convolutional neural networks (CNN)-based forgery detection [19], and user authentication [20]. Many papers have proposed changes to the PRNU cameraID algorithm: enhancements to fingerprint estimation [21, 22, 23, 24]; different similarity measures [25]; and use of machine-learning methods such as CNNs [26]. Other work has focused on the impact of various image artifacts on PRNU camera identification, including: vignetting [27]; JPEG compression [28]; proprietary image processing [29, 30, 31, 32]; color saturation [33]; as well as gamma correction, contrast enhancement, histogram equalization, and white balance [34]. One large-scale study of over 33,000 images from Flickr found several devices with low true negative rates (e.g., 0.8% for the Fujifilm X-T30 camera) [2] for images from the same camera model as the specific camera. Concerns about PRNU-based camera identification have been raised in preliminary investigations of exposure settings, specifically ISO. A brief study of the effects of ISO values on PRNU camera identification found that ISO impacts noise levels, gray levels, and correlation values [35]. The Warwick database [36], which contains images with a variety of ISO values, establishes that forgery detection and correlation-predictor values are impacted by ISO [37]. However, the Warwick investigation of ISO relies on correlation predictors to identify forged regions of images and does not implement a decision threshold [37]. The standard PRNU cameraID algorithm in [1] relies on PCE scores and a threshold of 60, not correlation predictors. Additionally, the authors in [37] vary both the ISO and exposure time settings to ensure similar exposure values between images of the same scene, meaning all images have a consistent visual brightness. The research question in [37] differs from ours: their paper asks whether correlation predictors and forgery detection are impacted by ISO (the exposure type of the image remains constant by changing the exposure time to compensate for the changes in ISO). We ask whether the PCE score and camera identification are impacted by changing the exposure type (i.e., auto-, under-, or over-exposed images). For our data, we intentionally vary the ISO and exposure time to gather brighter and darker versions of the same scene, thus varying the exposure value and brightness of the image (see Fig. 2). In order to establish error rates for off-nominal images, we must adhere to the algorithm used by forensic experts. To the best of our knowledge, this is the algorithm implemented in the large-scale study [1]. 
Whereas these recent works [35, 37] [36] have addressed the impact of ISO on correlation, forgery detection, and correlation predictor values, we do not know of any work that has addressed the problem of over- or under-exposed images and their effect on error rates for the PRNU cameraID algorithm. This paper tackles exactly that problem. ## V Overview of PRNU Camera Identification We refer to the protocol followed in [1] as the PRNU cameraID algorithm. Generally, this algorithm consists of three parts: (1) estimating the PRNU fingerprints of the camera and of the image; (2) calculating the peak-to- correlation energy (PCE) ratio between these two fingerprints; (3) comparing the PCE score with a threshold of 60 to determine whether (or not) the camera captured the image. The details of this process are described in the remainder of this section. ### V-A Estimate Fingerprints Using PRNU Noise Model This section gives a summary of the fingerprint extraction algorithms as presented in [38]. To compare a camera with an image, we must estimate the PRNU noise from both the camera sensor and the image. Let $I$ be an image, and let $I_{0}$ be the corresponding “noiseless image” which would result from a sensor without any imperfections. Similarly, let $K$ be the true PRNU noise component of the camera. Note that all multiplication in this section is performed element-wise, thus the image $I$ is modeled: $I=I_{0}+I_{0}K+\Theta,$ (1) where all noise components besides the PRNU are denoted by $\Theta$ [38]. However, it is not feasible to obtain the noiseless image $I_{0}$ or the true PRNU fingerprint $K$. In order to approximate the PRNU component of the image noise, a high-pass filter is applied to suppress scene content and reduce non- unique low-frequency patterns included in the noise fingerprint of an image, such as intensity gradient and vignetting. The Daubechies wavelet denoising filter [39] was originally chosen because it produced the best experimental results, likely due to its superior scene suppression (particularly for edges that appear within an image) [8]. Hence, $F$ is the Daubechies denoising filter applied to the image. The noise $W$ for image $I$ can be estimated [13]: $W=I-F(I).$ (2) This denoising step is followed by additional image processing of the grayscale image to remove non-unique artifacts (NUAs) due to JPEG-compression and camera-model fixed-pattern noise. NUAs can increase the similarity between images from different cameras and thus contribute to a lower TNR [40]. The image fingerprint estimate is therefore calculated as: $Q=G(W),$ (3) where $G$ represents these additional image processing operations. The camera-fingerprint estimation follows a similar process, but is calculated using a set of several images. This improves the PRNU noise estimate in (2) by implementing a maximum likelihood estimate of the true camera fingerprint $K$ using several images. First, we apply $F$, the Daubechies denoising filter, to all $N$ images. Let $I^{(i)}$ for $i\in\\{1,2,...,N\\}$ be the set of images used to estimate $\hat{K}$, the camera fingerprint. First, estimate the image fingerprint for each $I^{(i)}$ as before, $W^{(i)}=I^{(i)}-F(I^{(i)})$. 
Then the camera fingerprint $\hat{K}$ is estimated [41]: $\hat{K}=G\left(\frac{\sum_{i=1}^{N}W^{(i)}I^{(i)}}{\sum_{i=1}^{N}(I^{(i)})^{2}}\right),$ (4) where we used $N=30$ images, and $G(\cdot)$ represents the additional processing performed after calculating the maximum likelihood estimate of the camera fingerprint to remove NUAs due to the camera model and JPEG compression, as done for the fingerprint estimate for a single image. From a careful reading of the available code [42] in MATLAB [43], we observe that saturated pixels of an image are excluded from the camera fingerprint estimate, where saturated pixels are characterized by the maximum intensity value of the image (at least 250) with at least one neighboring pixel of equal intensity. Although we intentionally collected very bright images, none of the pixels in our over-exposed (or any exposure type) image set meet these standards for a saturated pixel. ### V-B Peak-To-Correlation Energy (PCE) Calculation When we calculate the similarity between camera and image fingerprints, we use the signed PCE score given in Equation (8) of [28] as well as in the MATLAB implementation [42]. Using the signed PCE score differs slightly from the PCE calculation in [1], because if the peak correlation is negative, then the PCE score will be negative. Negative PCE scores could change the estimated probability density function (Equation (12) in [1]) used to set the threshold of 60. An alternative distribution could impact the chosen PCE decision threshold, which would change the estimated error rates. Although it is unlikely that the signed PCE would impact the PRNU cameraID error rate estimates, it is worth noting this change from the algorithm as initially implemented [1]. ### V-C PRNU Source Camera Identification Algorithm Overview To replicate the algorithm in [1], we used the MATLAB [43] code provided by the same authors [42]. An overview of the PRNU cameraID algorithm is shown in Fig. 1. The inputs are two fingerprints, one from the camera (Box 1 in Fig. 1) and one from an image (Box 2 in Fig. 1), and the questioned image (Box 3 in Fig. 1). Recall that the image fingerprint is estimated from a single image, and the camera fingerprint is estimated from 30 images. Figure 1: PRNU cameraID Algorithm. Boxes 1 and 2 are the fingerprints estimated for the camera and questioned image, respectively. Operation 4 is an element-wise multiplication between two matrices representing the camera fingerprint (Box 1) and questioned image (Box 3). If the PCE (Box 5) is above 60, conclude that the image was taken by the specific-camera (Box 6); otherwise not (Box 7). The PCE score is calculated between the image fingerprint and the product of the camera fingerprint and the image pixel intensities (operation 4 on Boxes 1 and 3 in Fig. 1). If the PCE score is greater than 60, conclude that the image came from the camera under test (Box 6 in Fig. 1). Otherwise, conclude that the image originated elsewhere (Box 7 in Fig. 1). ## VI Data The dataset from [1] is no longer available [7], so we collected 8400 images from StegoAppDB [3] with specific ISO and exposure-time settings relative to the auto-exposure settings. This section provides our characterization of off- nominal exposure types, as well as the data sources, data protocol, and theoretical and experimental support for our labeling decision of over- and under-exposed images. 
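Before turning to the data, the pipeline of Section V can be summarized in a short, schematic sketch. This is an illustration of the structure of the algorithm, not the reference MATLAB implementation [42]: a Gaussian low-pass filter stands in for the Daubechies wavelet denoising filter $F$ of [39], the NUA post-processing $G(\cdot)$ is omitted, and the PCE computation is reduced to a simplified signed peak-to-correlation-energy over circular cross-correlation. All function names and the synthetic data are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """W = I - F(I); a Gaussian low-pass stands in for the Daubechies wavelet filter F."""
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Maximum-likelihood fingerprint estimate, as in Eq. (4), without the G(.) NUA clean-up."""
    num = sum(noise_residual(i) * i for i in images)
    den = sum(i ** 2 for i in images)
    return num / (den + 1e-12)

def pce(camera_fp, questioned_img):
    """Simplified signed peak-to-correlation-energy between K_hat * I and the image noise W."""
    signal = camera_fp * questioned_img
    w = noise_residual(questioned_img)
    xcorr = np.fft.ifft2(np.fft.fft2(signal) * np.conj(np.fft.fft2(w))).real
    peak = xcorr.flat[np.abs(xcorr).argmax()]
    energy = (np.sum(xcorr ** 2) - peak ** 2) / (xcorr.size - 1)
    return np.sign(peak) * peak ** 2 / (energy + 1e-12)

def same_camera(camera_fp, questioned_img, threshold=60):
    """Boxes 5-7 of Fig. 1: declare a match when the PCE score exceeds the threshold."""
    return pce(camera_fp, questioned_img) > threshold

# Toy usage with synthetic data: 30 flat-field images sharing one multiplicative PRNU pattern.
rng = np.random.default_rng(0)
true_k = 0.02 * rng.normal(size=(64, 64))
flat = 0.5 * np.ones((64, 64))
shots = [flat * (1 + true_k) + 0.01 * rng.normal(size=(64, 64)) for _ in range(30)]
fp = camera_fingerprint(shots)
probe = flat * (1 + true_k) + 0.01 * rng.normal(size=(64, 64))
print(same_camera(fp, probe))
```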
### VI-A Characterization of Over- and Under-Exposed Images A selection of over-, under-, and auto-exposed images from our dataset is shown in Fig. 2, along with their ISO and exposure time settings. These are the three exposure types of data we use in our experiments. Perceived brightness of objects varies, so we characterize an off-nominally exposed image by comparing its level of brightness with its nominally exposed version. Specifically, we say an image is over-exposed (third row in Fig. 2) if its overall visual appearance is noticeably brighter than its auto-exposed counterpart (second row in Fig. 2). Similarly, we say an image is under- exposed (first row in Fig. 2) if its overall visual appearance is noticeably darker than its auto-exposed counterpart. See Section VI-B2 for the formulaic relation between auto- and off-nominal exposure settings. Figure 2: Off-Nominal and Nominal Exposure Examples. The first row is a sample of under-exposed images, the second row is a sample of auto-exposed images, and the third row is a sample of over-exposed images from our dataset [3]. The pair (ISO, EXP) denotes the auto-exposure ISO and exposure time settings, respectively. The programmed (ISO, EXP) values relative to the auto-exposed settings are given to the left of the images under the exposure type, and the recorded values for the actual data are provided underneath the photo itself. The model and device number that acquired the trio of images is given at the top of each column of images (model-device number). Five different devices provided the sample data. The intentional selection of exposure settings relative to auto-exposure is one supporting argument for an image being over- or under-exposed, but we provide two additional arguments that strengthen our characterization of exposure type: (1) the exposure value [44] quantifies the brightness of each off-nominal image relative to its auto-exposed version; and (2) a visual inspection by three human judges assesses the agreement between the captured exposure settings and a human visual assessment of the brightness level. With these additional arguments, we obtain an acceptable level of certainty that the characterizations of exposure type for over-, under-, and auto-exposed images in StegoAppDB are indeed consistent and are satisfactory for testing the PRNU cameraID algorithm for nominal and off-nominal exposures. ### VI-B Apparatus & Instrumentation #### VI-B1 Apparatus Twenty-eight smartphone cameras were used to acquire the images used in this research; they are cataloged in Table I. #### VI-B2 Instrumentation The controlled collection of images for hundreds of scenes using 28 separate smartphone cameras in StegoAppDB [3] requires some amount of automation, not least because the images were taken with specific ISOs and exposure times. For this reason, an application (app) called Cameraw (pronounced camer-raw) was written in Apple’s Swift language [45] for Apple devices, and in Java [46] for Android devices. TABLE I: Instruments, OS, protocols. Parenthetical numbers indicate multiple instances of the camera model. 100 over-, 100 auto-, and 100 under-exposed images (300 images total) per camera. 
No. | Camera (# of Devices) | OS | Protocol
---|---|---|---
1 | Pixel1 (4) | Android | A (Same Scene)
2 | Pixel2 (4) | Android | A (Same Scene)
3 | iPhone6s (4) | iOS | A (Same Scene)
4 | iPhone7 (4) | iOS | A (Same Scene)
5 | iPhone8 (2) | iOS | A (Same Scene)
6 | OnePlus5 (2) | Android | B (Unique Scene)
7 | SamsungS8 (2) | Android | B (Unique Scene)
8 | iPhone6sPlus (2) | iOS | B (Unique Scene)
9 | iPhone7Plus (2) | iOS | B (Unique Scene)
10 | iPhoneX (2) | iOS | B (Unique Scene)

Cameraw operates much like any other camera application, using one button to capture a scene. When that button is pressed, Cameraw collects images with a variety of exposure settings, stepping through a pre-selected sequence of changing ISO and exposure times (EXP). From this sequence, we use the following three settings: (1) auto-exposed/nominal (camera establishes ISO and EXP automatically); (2) over-exposed/off-nominal (3 * ISO, 2 * EXP); and (3) under-exposed/off-nominal (0.5 * ISO, 0.5 * EXP). Camera aperture for each device remained constant (smartphone apertures cannot be changed).

### VI-C Images & Data-Collection Protocols

#### VI-C1 Images

The data comprise 8400 images from StegoAppDB [3], with 300 images captured from each of 28 smartphones across ten brands/models (e.g., Apple/iPhone8); see Table I. One third of all the images were auto-exposed; one third of all images were intentionally over-exposed; the remaining third were intentionally under-exposed. Full-sized images are used for the experiments described in this paper.

#### VI-C2 Protocol

Table I shows two protocols, A and B, that were followed during data collection. One (A) is for same-scene content; the other (B) is for unique-scene content. Protocol A: Same-scene content. Each of the 18 protocol-A cameras was attached to a tripod in a given scene location. Each camera took three images with staggered exposure settings, as previously described. All cameras were similarly oriented, so there were no upside-down images. This process was repeated 100 times, with 100 unique scene positions for the tripod. The scene content was the same for all 18 cameras, although the registration may have been slightly imperfect. Protocol B: Unique-scene content. Ten cameras follow a “unique” scene content acquisition protocol. This is the same as Protocol A except the scene content is not repeated from one camera to the next. In contrast to the Protocol-A images, which comprised 100 different scenes, Protocol-B images comprised 1000 different scenes.

### VI-D Exposure Value Comparison

One clear justification for our characterization of auto-, under-, and over-exposed images is the exposure value for the examples in Fig. 2. Consider the three images of oranges in a bowl from the OnePlus5-1 camera (first column of the figure). The top-row image is visually much darker than the middle-row auto-exposed image, and the bottom-row image is visually much brighter. The under-exposed image has an ISO of 500, half of 1000 (the auto-exposed image ISO value), and an exposure time of 1/100 seconds (again, half of the auto-exposed image exposure time of 1/50 seconds). Similarly, the over-exposed image has an ISO of 3000, three times the auto-exposed ISO of 1000, and an exposure time of 1/25 seconds (twice the auto-exposed image exposure time of 1/50 seconds). The exposure value is a quantification of light on the sensor determined by the $f$-stop (related to the aperture, which is constant in smartphone cameras), exposure time, and ISO.
Exposure value is lower when there is less light and higher when there is more light on the sensor. We calculate exposure value as described in [44] relative to the exposure value with ISO 100 ($EV_{100}$). For the auto-exposed example image, the exposure value is $EV_{100}+\log_{2}10$. The under-exposed example image has an exposure value of $EV_{100}+\log_{2}5$ and the over-exposed example has an exposure value of $EV_{100}+\log_{2}30$. Clearly, the under-exposed image has quantifiably less and the over-exposed image has quantifiably more light on the sensor than the auto-exposed image. This relationship holds for all images in our dataset, which consists of only typically-lit indoor scenes. ### VI-E Visual Validation of Off-Nominal Settings Another supporting argument for the three exposure types is a validation of the images using human judgment. By assessing the agreement amongst three human judges and one computer program, we can measure the consistency of the responses. If the consistency is high enough, we are satisfied that this validation supports using these images for testing PRNU camera identification of nominal and off-nominal exposures. We used the same 5600 off-nominal images described in Section VI-C1, half of which are under-exposed and half are over-exposed. This is too many images for a human rater to evaluate in a reasonable amount of time without fatigue, so we randomly selected 5% (280 images) for a human-judgment study. Half of those (140 images) were too dark and the other half were too light. Each half was mixed with 140 randomly drawn, nominally exposed images to form two sets of 280 images each. The two sets of 280 images were shown to human judges in a web-based tool displaying a single image at a time. The judge clicked one of three text boxes indicating their judgment as auto/over/under-exposed. When a text box was clicked, the tool logged the choice, and advanced to the next image. The task took about 17 minutes per set. The resulting data were analyzed using Fleiss’s kappa [47] with four raters – three human judges and the computer program that chose the images in the first place. Using the “R” statistical software package irr and the “R” function kappam.fleiss [48, 49], the kappa value was 0.9430212 with a z-statistic of 75.70786 (i.e., 75.7 standard deviations away from the mean) with a p-value of 0 – a nearly-perfect agreement amongst the four raters. This is clearly a level of confidence that justifies using these images for testing a camera- identification algorithm. ## VII Methods We propose a methodical investigation of off-nominally exposed images. The methodology consists of the following steps: 1. 1. Define a careful data collection protocol which minimizes the differences between nominal and off-nominal data, intentionally changing the characteristic under investigation. 2. 2. Isolate the points in the forensic algorithm where the characteristic under investigation impact the error rate estimate. 3. 3. Establish baseline error rates by executing the forensic algorithm using only the collected nominal data. 4. 4. For each point in the algorithm identified in step 2, incrementally and independently exchange the nominal data with the off-nominal data. We are investigating exposure settings, so our nominal dataset is our auto- exposed images and the off-nominal dataset consists of our over- and under- exposed images. Next, we identify estimation of the camera fingerprint (Box 1 in Fig. 1) and the questioned image (Boxes 2 and 3 in Fig. 
1) as the two steps of the PRNU cameraID algorithm directly impacted by image exposure settings. Section VII-A details our baseline experiment, which generates camera fingerprints using auto-exposed images and uses a set of questioned images composed only of auto-exposed images. Finally, we iteratively change the exposure type of the images used to generate the camera fingerprint and the set of questioned images. This allows us to understand how off-nominal exposure images alter the error rate estimates for the PRNU cameraID algorithm, and provides baseline error rates for direct comparisons. We investigate the impact of off-nominal exposure settings on the PRNU cameraID algorithm by partitioning the data into three exposure types. Each exposure type is used systematically to generate the camera fingerprints and/or questioned images. We performed 3 fundamental experiments: 1\. Auto-exposed images (baseline / nominal) 2\. Over-exposed images (too light / off-nominal) 3\. Under-exposed images (too dark / off-nominal) We also implement two sensitivity analyses, one for the sensitivity of the TPR to different proportions of off-nominal exposure images in the questioned image set and one for the sensitivity of error rates relative to the PCE decision threshold. A sensitivity analysis can show how even small, controlled changes impact error rate estimates. ### VII-A Auto-exposed images - baseline / nominal The baseline experiment is the PRNU cameraID algorithm (Section V) applied to a set of nominally exposed images, which comprises 2800 auto-exposed images (100 auto-exposed images from each of the 28 cameras). We repeat the baseline experiment for five trials, where each trial is defined by a specific partitioning of the images. First, we randomly partition the 100 images per camera into two groups: 1) 30 images used to generate the camera fingerprint; 2) 70 questioned images. The images are partitioned so that no camera fingerprint image shares scene content with any questioned image. For each camera, estimate the camera fingerprint (28 cameras $\times$ 1 camera fingerprint = 28 camera fingerprints). For each questioned image, estimate the image fingerprint (28 cameras $\times$ 70 images = 1960 image fingerprints). Second, we calculate the PCE scores between each camera fingerprint and its own questioned images (28 camera fingerprints $\times$ 70 fingerprints for images from the camera = 1960 PCE scores). Next, we compare the PCE scores for images from the camera to the threshold of 60. If the PCE score is above 60, the image is a true positive and contributes to the TPR. If the PCE score is at most 60, the image is a false negative and contributes to the False Negative Rate (FNR = $1-$ TPR). Next, we calculate the PCE scores between each camera fingerprint and a set of questioned images from a different camera (28 specific-camera fingerprints $\times$ 70 fingerprints from images from a different camera $\times$ 27 different cameras for test = 52920 PCE scores). To calculate a balanced accuracy (i.e., an accuracy that responds equally to the TNR and TPR), we select a random subset of 1960 PCE scores from the 52920 PCEs for questioned images from a different camera. In order to avoid lucky or unlucky subsets of PCE scores, we perform the random selection of 1960 PCE scores 100 times. We compare the PCE scores for images from other cameras to the threshold of 60. If the PCE score is above 60, the image is a false positive and contributes to the FPR (FPR = $1-$ TNR). 
If the PCE score is at most 60, the image is a true negative and contributes to the TNR. We calculate the accuracy for each of the 100 PCE score subsets using the 1960 PCE scores used to calculate the TPR, and the 1960 PCE scores used to calculate the TNR. Finally, we average the TPR, TNR, and accuracy values for the 100 PCE score subsets to calculate the error rates for each of the five trials. The auto-exposed image baseline experiment is also subjected to two sensitivity analyses regarding the TPR and TNR: 1) for different proportions of off-nominal exposures in the questioned image set, and 2) for different PCE thresholds. For the first sensitivity analysis, we incrementally replace 1% of the images comprising the questioned image set with off-nominal data, thus creating 101 questioned image sets (i.e., the first set has 100% auto-exposed questioned images and 0% off-nominally exposed images, the second set has 99% auto-exposed questioned images and 1% off-nominally exposed images, and so on with the final set comprising 100% off-nominally exposed images). The sensitivity analysis of the PCE decision threshold is performed by incrementally shifting the threshold of 60 and recalculating the TPR and TNR values. ### VII-B Over-exposed images - too light / off-nominal The over-exposed image experiment differs from the baseline by estimating two camera fingerprints (one using auto-exposed images and one using over-exposed images) and the questioned image set comprises only over-exposed images. This experiment uses 5600 images: 100 auto-exposed images from each of the 28 devices and 100 over-exposed images from each of the 28 devices. We adapt the procedure used in the baseline experiment for the over-exposed experiment by performing the same procedural steps but with different sets of data: 1) 30 auto-exposed images per camera are used to estimate the camera fingerprints and 70 over-exposed images from each camera comprise the set of questioned images (auto-fingerprint vs. over-image); 2) 30 over-exposed images per camera are used to estimate the camera fingerprints and 70 over-exposed images from each camera comprise the set of questioned images (over- fingerprint vs. over-image). We repeat the sensitivity analysis of the PCE threshold on the auto-fingerprint vs. over-image and over-fingerprint vs. over-image experiments. We remark that the most likely scenario where a forensic practitioner might encounter an over-exposed image is as a questioned image. The practitioner may have access to the suspect camera and can therefore control the exposure settings of images used to estimate the camera fingerprint, but the exposure of the questioned image is already established. A forensic practitioner using over-exposed images for the camera fingerprint when the questioned image is auto-exposed is an unlikely scenario, so we omit the results of this experiment. We investigate the over-fingerprint vs. over-image scenario as a possible solution to the degraded error rates caused by comparing auto-exposed fingerprints and over-exposed questioned images. This experiment examines whether camera fingerprints estimated with the same exposure type as the questioned image will have a more similar PRNU estimate than camera fingerprints estimated solely from auto-exposed images, potentially reducing the error rates. The sensitivity analysis of the PCE decision threshold is performed for both experiments by incrementally shifting the PCE threshold and recalculating the TPR and TNR values. 
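The error-rate bookkeeping and the threshold sweep described in this section can be illustrated with a short sketch. The PCE arrays below are random placeholders for the per-trial scores (1960 matched and 52920 mismatched comparisons), and the function and variable names are ours rather than part of the reference implementation.

```python
import numpy as np

def error_rates(pce_same, pce_other, threshold=60):
    """TPR and TNR at a given PCE decision threshold (Section VII-A decision rule)."""
    tpr = np.mean(pce_same > threshold)     # matched pairs scoring above the threshold
    tnr = np.mean(pce_other <= threshold)   # mismatched pairs scoring at or below it
    return tpr, tnr

def balanced_accuracy(pce_same, pce_other, threshold=60, subsets=100, seed=0):
    """Average accuracy over random subsets of the mismatched scores, sized to match."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(subsets):
        subset = rng.choice(pce_other, size=pce_same.size, replace=False)
        tpr, tnr = error_rates(pce_same, subset, threshold)
        accs.append((tpr + tnr) / 2)
    return float(np.mean(accs))

def threshold_sweep(pce_same, pce_other, thresholds):
    """Sensitivity of TPR/TNR to the PCE decision threshold."""
    return [(t, *error_rates(pce_same, pce_other, t)) for t in thresholds]

# Placeholder scores standing in for the per-trial PCE values.
rng = np.random.default_rng(1)
pce_same = rng.normal(5000, 2000, size=1960)    # questioned images from the specific camera
pce_other = rng.normal(0, 10, size=52920)       # questioned images from other cameras
print(error_rates(pce_same, pce_other))
print(balanced_accuracy(pce_same, pce_other))
print(threshold_sweep(pce_same, pce_other, [40, 60, 80]))
```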
### VII-C Under-exposed images - too dark / off-nominal

The under-exposed image experiment repeats the investigations in Section VII-B except we replace the over-exposed images with under-exposed images, as characterized in Section VI (Data). The purpose behind each of the two under-exposed image scenarios corresponds to the motivations for the over-exposed experiment scenarios. Each investigation was also subjected to a sensitivity analysis of the PCE threshold.

## VIII Results

We present results from the experiments detailed in Section VII (Methods) for images of each exposure type. The TPR decreases by at least 14.27% for the off-nominal questioned images compared to the auto-exposed questioned-image baseline, meaning many images from the specific camera are missed. Similarly, the TNR decreases to 99.46% for under-exposed questioned images, an error of approximately one in two hundred images incorrectly identified as a match with the specific camera. Such mistakes can lead to increased false positives, often connected to wrongful convictions. Our results are separated into nominal and off-nominal questioned images, and end with an investigation of possible remediations (Sections VIII-A, VIII-B, and VIII-C, respectively).

TABLE II: Aggregate results over 5 trials averaged over 100 repetitions

Experiment | TPR (std.) | TNR (std.) | Accuracy (std.)
---|---|---|---
Auto-fingerprint vs. Auto-image (baseline) | 1 (0) | 0.9992 (0.0001) | 0.9996 (0.0001)
Auto-fingerprint vs. Over-image | 0.8290 (0.0030) | 0.9998 (0.0001) | 0.9144 (0.0015)
Auto-fingerprint vs. Under-image | 0.8573 (0.0003) | 0.9946 (0.0007) | 0.9260 (0.0004)
Over-fingerprint vs. Over-image | 0.8888 (0.0070) | 0.9996 (0.0002) | 0.9442 (0.0035)
Under-fingerprint vs. Under-image | 0.9999 (0.0002) | 0.8426 (0.0043) | 0.9213 (0.0020)

TABLE III: True Positive Rate (TPR), True Negative Rate (TNR), and standard deviation (STD) for the auto-exposed camera fingerprint with auto-exposed questioned images (left columns), over-exposed test data (middle columns), and under-exposed test data (right columns), by camera model. TPRs and TNRs lower than the baseline experiment are in bold for the off-nominal experiments.

Camera Model | Baseline TPR (STD) | Baseline TNR (STD) | Over-image TPR (STD) | Over-image TNR (STD) | Under-image TPR (STD) | Under-image TNR (STD)
---|---|---|---|---|---|---
Pixel1 | 1 (0) | 0.9998 (0.0002) | **0.9850 (0.0122)** | **0.9995 (0.0003)** | 1 (0) | **0.9857 (0.0021)**
Pixel2 | 1 (0) | 0.9998 (0.0003) | **0.0007 (0.0016)** | 0.9999 (0.0002) | **0.0014 (0.0020)** | **0.9977 (0.0010)**
iPhone6s | 1 (0) | 0.9999 (0.0001) | **0.9936 (0.0016)** | **0.9997 (0.0001)** | 1 (0) | **0.9984 (0.00003)**
iPhone7 | 1 (0) | 0.9999 (0.0001) | **0.9750 (0.0067)** | 0.9999 (0.0001) | 1 (0) | **0.9995 (0.0003)**
iPhone8 | 1 (0) | 1 (0) | **0.9543 (0.0130)** | 1.0000 (0.0001) | 1 (0) | **0.9998 (0.0003)**
OnePlus5 | 1 (0) | 0.9984 (0.0009) | **0.9957 (0.0039)** | 0.9993 (0.0006) | 1 (0) | **0.9673 (0.042)**
SamsungS8 | 1 (0) | 0.9920 (0.0018) | 1 (0) | 0.9999 (0.0001) | 1 (0) | 0.9957 (0.0005)
iPhone6sPlus | 1 (0) | 1 (0) | **0.9914 (0.0032)** | **0.9999 (0.0002)** | 1 (0) | **0.9992 (0.0002)**
iPhone7Plus | 1 (0) | 1 (0) | **0.8729 (0.0155)** | 1.0000 (0.0001) | 1 (0) | **0.9994 (0.0004)**
iPhoneX | 1 (0) | 1 (0) | **0.8829 (0.0139)** | 1.0000 (0.0001) | 1 (0) | **0.9999 (0.0002)**

### VIII-A Baseline (Nominal Images)

The case where the camera fingerprint and questioned images are auto-exposed provides a baseline to compare with the off-nominal experiments. 
Results from this baseline experiment are shown in the first row of Table II, and represent the current scenario used by forensic practitioners. The TPR is 100% and the TNR is 99.92%. Results for the same baseline data are listed by the 10 individual models on the left-hand side of Table III. These results are in line with the error rates from the study in [1].

### VIII-B Off-Nominal Questioned Image Sets

The most notable results using off-nominal data with the PRNU cameraID algorithm are the TPR values shown in Table II. Rows two and three in Table II are error rate estimates for the most common scenarios where a practitioner might encounter an off-nominally exposed image: when the camera fingerprint is estimated from auto-exposed images, but the questioned image is over- or under-exposed. The TPR values for these two experiments are strikingly different from the baseline results. When the camera fingerprint is composed of auto-exposed images and all questioned images are over-exposed, the TPR is 82.90% and the TNR is 99.98% (row two, Table II). When the camera fingerprint is composed of auto-exposed images and all questioned images are under-exposed, the TPR is 85.73% and the TNR is 99.46% (row three, Table II). The TPRs for these off-nominal images (a decrease of 17.1% for over-exposed and 14.27% for under-exposed images) are much lower than the 100% TPR of the baseline experiment. The large reduction in TPR from our baseline experiment warrants attention to the differing error rates between image exposure types. Additionally, the TNR of 99.46% for the under-exposed questioned images (row three, Table II) corresponds to an FPR of 0.54%. This corresponds to roughly one in two hundred images being incorrectly matched to a camera. The sensitivity analysis of the TPR when the questioned image set consists of various proportions of off-nominal images is given in Fig. 3. This sensitivity analysis highlights the negative linear relationship between the proportion of off-nominally exposed questioned images and the TPR. Both the over-exposed questioned images (Fig. 3, solid blue line) and the under-exposed questioned images (Fig. 3, dashed orange line) have a direct impact on the TPR estimate, even when only a small percentage of the questioned image set is off-nominally exposed. The leftmost endpoints of the orange and blue lines in Fig. 3 correspond to TPRs of 100% (the TPR in row one of Table II). Similarly, the rightmost endpoints correspond to a TPR of 82.90% for over-exposed images (solid blue; row two of Table II) and 85.73% for under-exposed images (dashed orange; row three of Table II). Note that the TPR for over-exposed images is always less than the TPR for under-exposed images, regardless of the proportion of off-nominal test data. The consistency of this linear relationship supports the notion that off-nominal exposure types do indeed affect the error rates of the PRNU cameraID algorithm. The results for the auto-exposed camera fingerprint experiments are listed for the 10 models in Table III (auto-image in the left-hand columns, over-image in the middle columns, and under-image in the right-hand columns). Note that the Pixel 2 performs remarkably poorly for both off-nominal exposures, with a TPR of 0.07% on over-exposed questioned images and a TPR of 0.14% on under-exposed questioned images. 
Although determining the cause of the Pixel 2 camera model’s poor performance for off-nominally exposed images is outside the scope of this paper, we theorize this could be caused by proprietary pipeline processing, which is protected by manufacturers (in this case, Google). Poor performance for specific camera models has been observed in prior research, such as the 0.8% TNR of the Fujifilm X-T30 in [2]. Particularly poor performance by individual models is another reason to encourage rigorous tool validation procedures [6].

Figure 3: Sensitivity Analysis of Off-Nominally Exposed Questioned Images: Both under-exposed (dashed orange line) and over-exposed (solid blue line) questioned images have a roughly linear relationship with the degradation of the True Positive Rate (TPR). This implies that even small percentages of off-nominally exposed images in the questioned image set can have an impact on the error rate estimates.

### VIII-C Possible Solutions for Off-Nominal Data

Figure 4: Sensitivity analysis of PCE threshold (off-nominal exposures). The upper left graph (baseline) shows a near-perfect TPR and TNR for the PCE threshold of 60 (dashed red line) when both the camera fingerprint and the image fingerprint are estimated from nominal images. However, when fingerprints are estimated from any off-nominal images, either the TPR or the TNR degrades dramatically. Raising the PCE threshold does not improve the TPR (solid blue line), although it does gradually improve the TNR (dotted orange line). Lowering the PCE threshold slightly improves the TPR, but degrades the TNR.

One possible method to improve the error rates for off-nominal exposures is to estimate the camera fingerprint from the same exposure type as the questioned image. When the camera fingerprint uses only over-exposed images and the questioned images are also over-exposed (row 4 in Table II), the TPR rises slightly to 88.88%, but the TNR falls to 99.96%. Similarly, when the camera fingerprint uses only under-exposed images and the questioned images are also under-exposed (row 5 in Table II), the TPR increases considerably (99.99%), yet the TNR decreases (84.26%). A drop in the TNR corresponds to an increase in false positives, so this is a trade-off that forensic practitioners are unlikely to accept. Another possible solution to the reduced accuracy values in Table II for off-nominal exposures is to use an alternative PCE threshold. We perform a sensitivity analysis of the TPR and TNR to different PCE thresholds by incrementally shifting the integer PCE threshold and recalculating the TPR and TNR using the same set of PCE scores. Fig. 4 demonstrates that lowering the threshold to increase the TPR for the over-exposed image experiments (plots (b) and (d) in Fig. 4) or for the under-exposed image experiments (plots (c) and (e) in Fig. 4) would also lower the TNR, meaning that the overall accuracy would not improve. Our final experiment also investigates an alternate PCE threshold. The PCE threshold of 60 was initially set to produce a 0% FPR on the subset of images from a different model than the specific camera [1]. We compute the lowest integer threshold value to produce a 0% FPR (regardless of camera model) and recalculate the accuracy (results for each experiment are given in Table IV). Note that in each case the threshold must be raised, and in some cases more than doubled (rows 1 and 3 through 5 in Table IV). 
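This final threshold search can be written in a few lines. The sketch below is illustrative rather than the authors' code: it assumes the PCE scores are already available as arrays, and it simply returns the smallest integer threshold for which no different-camera score is declared a match.

```python
import numpy as np

def lowest_zero_fpr_threshold(same_cam_pce, other_cam_pce):
    """Smallest integer PCE threshold with a 0% FPR on the different-camera
    scores, together with the TPR and balanced accuracy it implies."""
    same = np.asarray(same_cam_pce)
    other = np.asarray(other_cam_pce)
    # A score counts as a match only if it exceeds the threshold, so the FPR is
    # zero once the threshold reaches the largest different-camera score.
    threshold = int(np.ceil(other.max()))
    tpr = np.mean(same > threshold)
    tnr = 1.0                                   # by construction: no false positives
    return threshold, tpr, 0.5 * (tpr + tnr)
```

Because a single large different-camera PCE score drives the threshold upward, the resulting TPR can collapse, which is consistent with the drastic values in rows 4 and 5 of Table IV.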
For the over-exposed image experiments, the accuracies obtained with these 0%-FPR thresholds are all lower than the accuracy using the PCE threshold of 60 (rows 2 and 4 in Table IV). The alternate threshold does improve the accuracy slightly for the auto-fingerprint vs. under-image experiment (row 3 in Table IV) compared with the results for the PCE threshold of 60 (row 3 in Table II). Choosing a threshold based on a 0% FPR, however, not only depends on the dataset and exposure type, but is problematic for other reasons detailed in Section X.

TABLE IV: Lowest threshold resulting in a 100% TNR (0% FPR) for each experiment and the corresponding TPR and accuracy.

Experiment | Threshold | TPR | TNR | Accuracy
---|---|---|---|---
Auto-fingerprint vs. Auto-image (baseline) | 210 | 1 | 1 | 1
Auto-fingerprint vs. Over-image | 77 | 0.8213 | 1 | 0.9107
Auto-fingerprint vs. Under-image | 143 | 0.8569 | 1 | 0.9285
Over-fingerprint vs. Over-image | 442 | 0.2300 | 1 | 0.6150
Under-fingerprint vs. Under-image | 539 | 0.7587 | 1 | 0.8794

## IX Limitations

Digital image forensics researchers typically face one of two data-collection problems: the resources required to capture enough images themselves, or the uncertain quality and provenance of images gathered elsewhere. Manually taking enough pictures from a large variety of cameras is often infeasible due to the significant time and resources required. An alternative source of data is scraping images from online collections (e.g., Flickr). However, these images are of unknown provenance, and their true origin and exposure settings are frequently unknown. The foundational PRNU cameraID research [1] prioritized the size of the dataset by downloading over one million Flickr images from nearly seven thousand cameras, enabling its authors to make universal claims. The absence of a data-collection protocol makes it impossible to ascertain the proportion of nominal/off-nominal images in that dataset. We chose instead to prioritize control over the image collection and exposure settings by taking 8400 calibrated images from 28 cameras, as detailed in Section VI (Data). Although we are not claiming universality, we present our results as a demonstration that off-nominal exposure settings can alter camera-identification error rates, often quite dramatically. Our claims about our dataset are only possible because of the exacting data-collection protocol followed. A similar but larger-scale study remains for future research. A further limitation is that the cameras used in our research are not the most recent models (e.g., the iPhone X was released in November 2017). We have estimated that it would take at least 14 weeks to add just one more camera of a new model. Adding a new camera would delay the preparation of a technical report by roughly a quarter of a year, by which time yet another new model would likely have been released. Therefore, it is simply not practical to continually add up-to-date cameras for the purpose of one paper. Newer camera models include additional complications that must be addressed in future work, including high-dynamic-range imaging, multiple lenses and sensors, new proprietary processing, AI, and so on. That said, we recognize the importance of maintaining current datasets if camera-identification technology is to keep pace with camera development.

## X Discussion

The off-nominal image experiments performed markedly worse than the auto-exposed image baseline. The 14–17% TPR decrease is quite large (rows two and three, Table II) and cannot be attributed to chance, poor data quality, or methodological errors. 
When used to investigate which camera captured an evidence image, a TPR of 85.73% corresponds to a miss rate of roughly one in seven, meaning the correct camera might be missed as often as one out of seven times if the questioned image is under-exposed (or more often if the questioned image is over-exposed). One goal of forensic research is to prepare methodologies for use in a court of law. The PRNU camera-identification algorithm described in [1] forms the foundation of an FBI application called FINDCamera [50]. The false-positive error rate estimate of one in a million presented by an expert witness was based on over one million images from Flickr [51]. Our results show that the false-positive error rate can rise to one in two hundred for under-exposed questioned images, and one in five thousand for over-exposed questioned images. Differences in error rate estimates could impact how jurors weigh the strength of evidence, which is particularly important in cases with severe consequences. For example, the aforementioned 2011 trial resulted in a prison sentence of 45 years [51]. We investigated two seemingly obvious solution candidates that, unfortunately, turned out not to fully address the aforementioned shift in error rates. The first solution was to create camera fingerprints consistent with the exposure type of the questioned image. However, this approach only modestly improves the over-exposure true-positive error rates, while causing the false-positive rate to rise sharply for under-exposed images (rows 4 and 5 of Table II). A second solution is to use a PCE threshold other than 60, as demonstrated in Fig. 4 and in Table IV. Note that the alternate thresholds only minimally improve the error rates, and in some cases simply exacerbate the problem. Additionally, recent work has raised concerns that examiners changing decision thresholds based on subjective analysis of the evidence and forensic scenario can negatively impact the legal system [52]. Although the instinct to mitigate any error inherent in a methodology is a good one, it can be difficult to change existing forensic tools. Further refining error rates in the context of the questioned image can help forensic practitioners better understand and communicate the error associated with pre-existing tools. Introducing new methodologies requires both acceptance by forensic practitioners and rigorous research to meet the Daubert standard, which in turn requires time, use by others in the community, and passing another Daubert challenge for the new tool. Methods that can be used in conjunction with current tools (such as our proposed protocol to estimate more accurate error rates with respect to the exposure type of the questioned image) can introduce needed incremental changes between significant technological shifts. Our experiments clearly demonstrate that off-nominal images (e.g., over- or under-exposed) impact the error rates of the PRNU cameraID algorithm. We also show that the two most obvious and straightforward modifications to the existing algorithm do not adequately rectify the performance problems for off-nominal exposure images. Proper tool validation and error-rate estimation are crucial aspects of this forensic field and must be continually improved.

## XI Conclusions

We present a study of the PRNU source-camera-identification algorithm [1] for off-nominal (over- and under-exposed) images using a meticulously collected dataset [3]. 
Our work implements a systematic investigation showing that error rates worsen when off-nominal images are used for forensic source camera identification. In particular, for over-exposed questioned images the true-positive rate is 82.90%, as compared with 100% for nominal (auto-exposed) images. Of note is the contrast between our nominal baseline’s false-positive rate (0.08%) and the roughly one in two hundred false-positive rate for under-exposed (too dark) images. This disparity is concerning, as it can have real-life consequences in the criminal judicial process. Simple and obvious mitigations, such as changing PCE thresholds, do not solve the error-rate problem for off-nominal images. The insight gained from our methodology can help forensic practitioners better understand and communicate the error rates of forensic tools when applied to data representing off-nominal conditions.

## Appendix A Data and Code Availability

Data from this study are available upon request. Code is available from its authors [42].

## Appendix B Attributions/Declarations

Author contributions: AM: Conceptualization, analysis, initial/final drafts. RM: Conceptualization, methodology, analysis, all drafts. JN: Conceptualization, funding acquisition, initial draft. Ethical approval: The Iowa State University Institutional Review Board confirmed that the human-subjects aspect of this study is exempt; the information collected contains no personally identifiable information, and is not intended to contribute to generalizable knowledge. Conflicts: The authors declare no conflicts of interest.

## Acknowledgment

We thank Huayun Huang for her design and implementation of the user-study data-set validation software.

## References

* [1] M. Goljan, J. Fridrich, and T. Filler, “Large scale test of sensor fingerprint camera identification,” in IS&T/SPIE Electronic Imaging Science and Technology, pp. 1–12, 04 February 2009. San Jose, CA, USA, 18-22 January 2009.
* [2] M. Iuliani, M. Fontani, and A. Piva, “A leak in PRNU based source identification—questioning fingerprint uniqueness,” IEEE Access, vol. 9, pp. 52455–52463, 2021.
* [3] J. Newman, L. Lin, W. Chen, S. Reinders, Y. Wang, M. Wu, and Y. Guan, “StegoAppDB: A steganography apps forensics image database,” in Proceedings of the IS&T International Symposium on Electronic Imaging: Media Watermarking, Security, and Forensics (N. D. M. Adnan M. Alattar and G. Sharma, eds.), pp. 536–1 – 536–11, 13-17 January 2019. Burlingame, CA, USA, 13-17 January 2019.
* [4] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
* [5] S. Garfinkel, P. Farrell, V. Roussev, and G. Dinolt, “Bringing science to digital forensics with standardized forensic corpora,” Digital Investigation, vol. 6, pp. S2–S11, 2009.
* [6] N. Hughes and U. Karabiyik, “Towards reliable digital forensics investigations through measurement science,” WIREs Forensic Science, vol. 2, no. 4, p. e1367, 2020.
* [7] M. Goljan, “Personal communication; email to R. Maxion, 04 March 2023,” 2023.
* [8] J. Lukáš, J. Fridrich, and M. Goljan, “Digital camera identification from sensor pattern noise,” IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 205–214, 2006.
* [9] W. Boyle and G. Smith, “Charge coupled semiconductor devices,” The Bell System Technical Journal, vol. 49, no. 4, pp. 587–593, 1970.
* [10] M. Tompsett, G. Amelio, W. Bertram, R. Buckley, W. McNamara, J. Mikkelsen, and D. 
Sealer, “Charge-coupled imaging devices: Experimental results,” IEEE Transactions on Electron Devices, vol. 18, no. 11, pp. 992–996, 1971. * [11] K. Kurosawa, K. Kuroki, and N. Saitoh, “CCD fingerprint method-identification of a video camera from videotaped images,” in International Conference on Image Processing, pp. 537–540, 1999. Kobe, Japan, 24-28 October 1999. * [12] J. Lukas, J. Fridrich, and M. Goljan, “Determining digital image origin using sensor imperfections,” in Proceedings of SPIE IS&T Electronic Imaging Science and Technology: Image and Video Communications and Processing (A. Said and J. G. Apostolopoulos, eds.), vol. 5685, pp. 249–260, 16-20 January 2005. San Jose, CA, USA, 16-20 January 2005. * [13] M. Chen, J. Fridrich, and M. Goljan, “Digital imaging sensor identification (further study),” in Sec., Steg., and Watermarking of Multimedia Contents IX (E. J. D. III and P. W. Wong, eds.), vol. 6505, pp. 258 – 270, 2007. * [14] M. Goljan and J. Fridrich, “Camera identification from cropped and scaled images,” in Proceedings of the IS&T International Symposium on Electronic Imaging: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, vol. 6819, 68190E, pp. 68190E–1 – 68190E–13, 26-31 January 2008. San Jose, CA, USA, 26-31 January 2008. * [15] M. Goljan, “Digital camera identification from images – estimating false acceptance probability,” in International Workshop on Digital Watermarking, p. 454—468, 2008. Busan, Korea (Republic of), 10-12 November 2008. * [16] M. Goljan, J. Fridrich, and M. Chen, “Sensor noise camera identification: Countering counter-forensics,” in Proceedings of SPIE IS&T Electronic Imaging: Media Forensics and Security II (A. M. A. Nasir D. Memon, Jana Dittmann and E. J. D. III, eds.), vol. 7541, pp. 75410s–1 – 75410s–12, 18-20 January 2010. San Jose, CA, USA, 18-20 January 2010. * [17] M. Goljan, J. Fridrich, and M. Chen, “Defending against fingerprint-copy attack in sensor-based camera identification,” IEEE Transactions on Information Forensics and Security, vol. 6, no. 1, pp. 227–236, 2011. * [18] M. Irshad, N.-F. Law, K. Loo, and S. Haider, “IMGCAT: An approach to dismantle the anonymity of a source camera using correlative features and an integrated 1D convolutional neural network,” Array, vol. 18, p. 100279ff, 2023. * [19] D. Cozzolino and L. Verdoliva, “Noiseprint: A CNN-based camera model fingerprint,” IEEE Transactions on Information Forensics and Security, vol. 15, pp. 144–159, 2020. * [20] D. Maier, H. Erb, P. Mullan, and V. Haupert, “Camera fingerprinting authentication revisited,” in International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), pp. 31–46, October 2020. San Sebastian, Spain, 14-16 October 2020. * [21] A. Cortiana, V. Conotter, G. Boato, and F. G. B. DeNatale, “Performance comparison of denoising filters for source camera identification,” in Proceedings of the SPIE IS&T International Symposium on Electronic Imaging:Media Watermarking, Security, and Forensics III (N. D. Memon, J. Dittmann, A. M. Alattar, and E. J. Delp III, eds.), vol. 7880, pp. 78807–1 – 78807–6, 23-27 January 2011. San Francisco, CA, USA, 23-27 January 2011. * [22] J. Li, Y. Liu, B. Ma, C. Wang, C. Qin, X. Wu, and S. Li, “A novel PCA-based method for PRNU distillation to the benefit of source camera identification,” Applied Sciences, vol. 13, no. 11, 2023. * [23] S. Fernández-Menduiña, F. Pérez-González, and M. 
Masciopinto, “Source camera attribution via PRNU emphasis: Towards a generalized multiplicative model,” Signal Processing: Image Communication, vol. 114, p. 116944, 2023. * [24] A. Mehrish, A. V. Subramanyam, and S. Emmanuel, “Robust PRNU estimation from probabilistic raw measurements,” Signal Processing: Image Communication, vol. 66, pp. 30–41, 2018. * [25] S. Reinders, L. Lin, W. Chen, Y. Guan, and J. Newman, “Score-based likelihood ratiosfor camera device identification,” in Proceedings of the IS&T International Symposium on Electronic Imaging: Media Watermarking, Security, and Forensics (N. D. M. Adnan M. Alattar and G. Sharma, eds.), pp. 1–7, 26-30 January 2020. Burlingame, CA, USA, 26-30 January 2020. * [26] M. Fanfani, A. Piva, and C. Colombo, “PRNU registration under scale and rotation transform based on convolutional neural networks,” Pattern Recognition, vol. 124, no. 108413, 2022. * [27] C.-T. Li and R. Satta, “Empirical investigation into the correlation between vignetting effect and the quality of sensor pattern noise,” IET Computer Vision, vol. 6, no. 6, pp. 560–566, 2012. * [28] M. Goljan, M. Chen, P. Comesaña, and J. Fridrich, “Effect of compression on sensor-fingerprint based camera identification,” in IS&T International Symposium on Electronic Imaging Science and Technology: Media Watermarking, Security, and Forensics (A. M. Alattar and N. D. Memon, eds.), pp. 1–10, Society for Imaging Science and Technology (IS&T), 14-18 February 2016. San Francisco, CA, USA, 14-18 February 2016. * [29] D. Baracchi, M. Iuliani, A. G. Nencini, and A. Piva, “Facing image source attribution on iPhone X,” in International Workshop on Digital Watermarking, pp. 196–207, 2020. Melbourne, Victoria, Australia, 25-27 November 2020. * [30] A. Montibeller and F. Pérez-González, “Exploiting PRNU and linear patterns in forensic camera attribution under complex lens distortion correction,” in IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1–5, 2023. Rhodes Island, Greece, 04-10 June 2023. * [31] C. Albisani, M. Iuliani, and A. Piva, “Checking PRNU usability on modern devices,” in IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2535–2539, 06-11 June 2021. Toronto, ON, Canada, 06-11 June 2021. * [32] S. Joshi, P. Korus, N. Khanna, and N. Memon, “Empirical evaluation of PRNU fingerprint variation for mismatched imaging pipelines,” in IEEE International Workshop on Information Forensics and Security, pp. 1–6, 2020. New York, NY, USA, 06-11 December 2020. * [33] K. Henry, Digital photography analysis: Analytical framework for measuring the effects of saturation on photo response non-uniformity. University of Colorado at Denver, 2016. * [34] S. Samaras, V. Mygdalis, and I. Pitas, “Robustness in blind camera identification,” in International Conference on Pattern Recognition, pp. 3874–3879, 04 December 2016. Cancun, Mexico, 04-08 December 2016. * [35] L. Lin, W. Chen, Y. Wang, S. Reinders, Y. Guan, J. Newman, and M. Wu, “The impact of exposure settings in digital image forensics,” in IEEE International Conference on Image Processing, pp. 540–544, 2018. Athens, Greece, 07-10 October 2018. * [36] Y. Quan, C.-T. Li, Y. Zhou, and L. Li, “Warwick image forensics dataset for device fingerprinting in multimedia forensics,” in IEEE International Conference on Multimedia and Expo, pp. 1–6, 2020. London, UK, 06-10 July 2020. * [37] Y. Quan and C.-T. 
Li, “On addressing the impact of ISO speed upon PRNU and forgery detection,” IEEE Transactions on Information Forensics and Security, vol. 16, pp. 190–202, 2021. * [38] M. Goljan, M. Chen, and J. Fridrich, “Identifying common source digital camera from image pairs,” in 2007 IEEE International Conference on Image Processing, vol. 6, pp. VI–125, IEEE, 2007. * [39] I. Daubechies, “Orthonormal bases of compactly supported wavelets,” Communications on Pure and Applied Mathematics, vol. 41, no. 7, pp. 909–996, 1988. * [40] J. Fridrich, “Sensor defects in digital image forensic [sic],” in Digital Image Forensics: There is More to a Picture than Meets the Eye (H. T. Sencar and N. Memon, eds.), pp. 179–218, New York, NY: Springer Science+Business Media, 2013. * [41] M. Chen, J. Fridrich, M. Goljan, and J. Lukáš, “Determining image origin and integrity using sensor noise,” IEEE Transactions on Information and Security, vol. 3, no. 1, pp. 74–90, 2008. * [42] M. Goljan, M. Chen, P. Comeaña, and J. Fridrich, “MATLAB/Python camera fingerprint implementation.” http://dde.binghamton.edu/download/camera_fingerprint/, 2016. Last date modified: February 2016. * [43] The MathWorks Inc., “MATLAB version: 9.13.0 (R2022b),” 2022. The MathWorks Inc., Natick, MA, United States. * [44] H.-C. Lee, Introduction to color imaging science. Cambridge University Press, 2005. * [45] Apple Inc., “The Swift Programming Language,” 2014. Apple Inc., Cupertino, CA, United States. * [46] K. Arnold, J. Gosling, and D. Holmes, “The Java programming language,” 2000. Addison-Wesley Longman Publishing Co., Inc, Boston, MA, United States. * [47] J. L. Fleiss, “Measuring nominal scale agreement among many raters,” Psychological Bulletin, vol. 76, no. 5, pp. 378–382, 1971. * [48] M. Gamer, J. Lemon, I. Fellows, and P. Singh, “irr: Various Coefficients of Interrater Reliability and Agreement,” 2012. R package, version 0.84, https://CRAN.R-project.org/package=irr. * [49] R Core Team, “R: A Language and Environment for Statistical Computing,” 2017. R Foundation for Statistical Computing, Vienna, Austria. * [50] A. MacKenzie and W. E. Bruehs, “Photo response nonuniformity (PRNU) meets Daubert standards,” Journal of Forensic Identification, vol. 68, no. 4, pp. 467–471, 2018. * [51] US District Court, “United States of America v. Nathan Railey, United States District Court for the Southern District of Alabama, Case # 1:10-cr-00266-CG-N,” 2011. PACER Document 198.pdf. * [52] W. C. Thompson, “Shifting decision thresholds can undermine the probative value and legal utility of forensic pattern-matching evidence,” Proceedings of the National Academy of Sciences, vol. 120, no. 41, p. e2301844120, 2023. | Abby Martin received her B.A. degree in Computer Science/Software Engineering and Mathematics from Augustana University, Sioux Falls, SD, in 2017 and is pursuing her Ph.D. degree in Mathematics and Computer Science from Iowa State University, Ames, IA under the supervision of Professors Jennifer Newman and Jin Tian. From 2020 to 2024, she was a Research Assistant with the Center for Statistics and Applications in Forensic Evidence. Her research interests include camera source identification, steganalysis, and digital image forensics. ---|--- | Roy Maxion (IEEE Fellow) is a Research Professor emeritus in computer science and machine learning at Carnegie Mellon University, where he is also the director of the Dependable Systems Laboratory. He has long been a passionate proponent of rigorous/foundational scientific methodology. 
He is an IEEE Fellow, and recently served as a member of the US National Academy of Sciences committee on Future Research Goals and Directions for Foundational Science in Cybersecurity. He won the 2019 IEEE/IFIP Test-of-Time award for his work using typing rhythms for user authentication. He recently won the Taiwan Tamkang Panda Trophy for his lecture on the sensitivity of machine learning systems to small irregularities in data. He is one of the founding members of the US Center for Statistics and Applications in Forensic Evidence. He is a member of the editorial boards of IEEE Security and Privacy and the International Journal of Machine Learning.

Jennifer L. Newman (Senior Member, IEEE) is a Professor of Mathematics at Iowa State University, Ames, Iowa, and is a Scott Hanna Faculty Fellow in Mathematics. She received her B.A. in Physics from Mount Holyoke College, and an M.S. and Ph.D. in Mathematics from the University of Florida, Gainesville. She has served as an Associate Editor for the Journal of Electronic Imaging and for the Journal of Mathematical Imaging, and has served on numerous program committees for many professional conferences. Her current research is in statistical forensic imaging, including steganography, steganalysis, and camera identification. Other research interests include image algebra; object detection, modeling, and machine learning for images; morphological and other neural networks; and statistical modeling of textures for forward and inverse problems. She has over 80 refereed publications and has been a Principal Investigator or Co-Principal Investigator on over $24 million in grants. She has mentored over 32 graduate students as their major professor, and many undergraduate students as well.
# Compositional properties of planet-crossing asteroids from astronomical surveys

A. V. Sergeyev (corresponding author,<EMAIL_ADDRESS>), B. Carry (1), M. Marsset (3,4), P. Pravec (5), D. Perna (6), F. E. DeMeo (7,4), V. Petropoulou (8), M. Lazzarin (9), F. La Forgia (9), I. Di Petro (10), and the NEOROCKS team

1. Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, France
2. V. N. Karazin Kharkiv National University, 4 Svobody Sq., Kharkiv, 61022, Ukraine
3. European Southern Observatory (ESO), Alonso de Cordova 3107, 1900, Casilla Vitacura, Santiago, Chile
4. Department of Earth, Atmospheric and Planetary Sciences, MIT, 77 Massachusetts Avenue, Cambridge, MA, 02139, USA
5. Astronomical Institute, Academy of Sciences of the Czech Republic, CZ-25165 Ondřejov, Czech Republic
6. INAF - Osservatorio Astronomico di Roma, Via Frascati 33, I-00078 Monte Porzio Catone, Italy
7. Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
8. INAF - Osservatorio Astronomico di Roma, Monte Porzio Catone (RM), Italy
9. INAF - Department of Physics and Astronomy, University of Padova, Vicolo dell’Osservatorio, 3, I-35122 Padova, Italy
10. Agenzia Spaziale Italiana (ASI), Via del Politecnico 00133 Roma, Italy

The NEOROCKS team: E. Dotto, M. Banaszkiewicz, S. Banchi, M.A. Barucci, F. Bernardi, M. Birlan, A. Cellino, J. De Leon, M. Lazzarin, E. Mazzotta Epifani, A. Mediavilla, J. Nomen Torres, E. Perozzi, C. Snodgrass, C. Teodorescu, S. Anghel, A. Bertolucci, F. Calderini, F. Colas, A. Del Vigna, A. Dell’Oro, A. Di Cecco, L. Dimare, P. Fatka, S. Fornasier, E. Frattin, P. Frosini, M. Fulchignoni, R. Gabryszewski, M. Giardino, A. Giunta, T. Hromakina, J. Huntingford, S. Ieva, J.P. Kotlarz, M. Popescu, J. Licandro, H. Medeiros, F. Merlin, F. Pinna, G. Polenta, A. Rozek, P. Scheirich, A. Sonka, G.B. Valsecchi, P. Wajer, A. Zinzi.

###### Abstract

Context. The study of planet-crossing asteroids is of both practical and fundamental importance. As they are closer than asteroids in the Main Belt, we have access to a smaller size range, and this population frequently impacts planetary surfaces and can pose a threat to life.

Aims. We aim to characterize the compositions of a large corpus of planet-crossing asteroids and to study how these compositions are related to orbital and physical parameters.

Methods. We gathered publicly available visible colors of near-Earth objects (NEOs) from the Sloan Digital Sky Survey (SDSS) and SkyMapper surveys. We also computed SDSS-compatible colors from reflectance spectra of the Gaia mission and a compilation of ground-based observations. We determined the taxonomy of each NEO from its colors and studied the distribution of the taxonomic classes and spectral slope against the orbital parameters and diameter.

Results. We provide updated photometry for 470 NEOs from the SDSS, and taxonomic classification of 7,401 NEOs. We classify 42 NEOs that are mission-accessible, including six of the seven flyby candidates of the ESA Hera mission. We confirm the perihelion dependence of spectral slope among S-type NEOs, likely related to a rejuvenation mechanism linked with thermal fatigue. We also confirm the clustering of A-type NEOs around 1.5–2 AU, and predict the taxonomic distribution of small asteroids in the NEO source regions in the Main Belt. 
###### Key Words: Minor planets, asteroids: NEOs – Techniques: photometric – Surveys

## 1 Introduction

Asteroids are the remnants of the building blocks that accreted to form the terrestrial planets and the cores of the giant planets in the early Solar System, $4.6\,\mathrm{Gyr}$ ago. Asteroids are also the origin of the meteorites that fall on the planets, including the Earth. These meteorites represent the only possibility to study in detail the composition of asteroids in the laboratory (e.g., 2008-ChEG-68-Consolmagno; 2015-Icarus-252-Cloutis), with the exception of the tiny rock samples provided by sample-return missions: JAXA Hayabusa (2011-Science-333-Yurimoto) and Hayabusa-2 (2022Sci...375.1011T), as well as the imminent NASA OSIRIS-REx return (2017SSRv..212..925L). In contrast to targeted sample collection, we cannot choose the origin of meteorites striking the Earth. Identifying their source regions is therefore crucial to determining the physical conditions and elemental abundances that prevailed in the protoplanetary nebula around the young Sun (2006-MESS2-McSween). From the analysis of a bolide trajectory, it is possible to reconstruct a meteorite’s heliocentric orbit (2006-MPS-41-Gounelle), although such determinations have been limited to only a few meteorites (2018-Icarus-311-Granvik).

Figure 1: Schematic view of the extraction, conversion, and merging of NEOs from SDSS, SMSS, Gaia, and Classy catalogs.

Among the different dynamical classes of asteroids, the near-Earth and Mars-crosser asteroids (NEAs and MCs), whose orbits cross those of the telluric planets, form a transient population. Their typical lifetime is only a few million years before they are ejected from the Solar System, fall into the Sun, or impact a planet (1997Sci...277..197G). We refer here to near-Earth objects (NEOs) in a liberal sense, encompassing both asteroid-like and comet-like objects whose orbits cross that of a terrestrial planet (hence including NEAs, MCs, and some Hungarias). These populations are of both scientific and pragmatic interest. As they are closer to the Earth than the asteroid belt, we have access to smaller objects from ground-based telescopes. Their orbital proximity implies that a much smaller impulse is required to reach them with a spacecraft, making them favorable targets for space exploration (2012-DPS-Abell). On the other hand, these objects could potentially pose a threat, and studying their properties is a key aspect in planning risk mitigation (2015hchp.book..763D), of which the National Aeronautics and Space Administration (NASA) Double Asteroid Redirection Test (DART) and European Space Agency (ESA) Hera missions are prominent demonstrators (2021PSJ.....2..173R; 2022PSJ.....3..160M). We focus here on the compositional properties of a large corpus of NEOs as part of the NEOROCKS project (2021plde.confE.221D), whose goal is the characterization of the NEO population. The article is organized as follows. In
# The SDSS-Gaia View of the Color–Magnitude Relation for Blue Horizontal- branch Stars Fabrícia O. Barbosa Universidade de São Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Departamento de Astronomia, SP 05508-090, São Paulo, Brazil Rafael M. Santucci Universidade Federal de Goiás, Instituto de Estudos Socioambientais, Planetário, Goiânia, GO 74055-140, Brazil Universidade Federal de Goiás, Campus Samambaia, Instituto de Física, Goiânia, GO 74001-970, Brazil Silvia Rossi Universidade de São Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Departamento de Astronomia, SP 05508-090, São Paulo, Brazil Guilherme Limberg Universidade de São Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Departamento de Astronomia, SP 05508-090, São Paulo, Brazil Angeles Pérez-Villegas Instituto de Astronomía, Universidad Nacional Autónoma de México, Apartado Postal 106, C. P. 22800, Ensenada, B. C., Mexico Hélio D. Perottoni Universidade de São Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Departamento de Astronomia, SP 05508-090, São Paulo, Brazil Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Martí i Franquès 1, E08028 Barcelona, Spain ###### Abstract We present an updated sample of blue horizontal-branch (BHB) stars selected from the photometric and spectroscopic data from Sloan Digital Sky Survey and its associated project Sloan Extension for Galactic Understanding and Exploration (SEGUE). With this data, we selected candidates for A-type stars in the color-color space and then a mixture modeling technique was implemented in order to distinguish between BHB and main-sequence/blue-straggler stars based on their surface gravity values ($\log\rm{g}$) estimated by the SEGUE Stellar Parameter Pipeline. Our robust approach allows us to attribute individual probabilities of each star truly being in the BHB stage. Hence, our method is advantageous in comparison to previous SEGUE BHB selections that adopted simple $\log\rm{g}$ cuts. We also revisit the color–magnitude relation for these stars and propose two calibrations, based on updated distances for Galactic globular clusters, to estimate absolute magnitudes with $(g-r)_{0}$ and $(u-r)_{0}$ colors. Galaxy: stellar halo – stars: horizontal branch – stars: distances ## 1 Introduction The Gaia mission (Gaia Collaboration et al., 2016) has provided a better understanding of the Galaxy, in particular regarding the field of Galactic Archaeology (Helmi, 2020; Brown, 2021). The astrometric information provided for an unprecedented number of objects has dramatically changed the way we study the Galactic halo (e.g., Belokurov et al., 2018; Myeong et al., 2018; Koppelman et al., 2018; Malhan et al., 2018). Despite the huge amount of direct measurements supplied by Gaia, distances inferred from brightness are still of great value. At magnitude $G<15$, the early third data release (EDR3) presents parallax uncertainties of ${\sim}0.02$ mas (Gaia Collaboration et al., 2021), and they increase significantly for fainter stars. To overcome this limitation, we can use various well-known distance tracers such as RR Lyrae (Shapley, 1916), Cepheids (Leavitt & Pickering, 1912), and blue horizontal-branch (BHB) stars (Cacciari, 1999). BHBs are metal-poor ([Fe/H]111[A/B] $=\log\rm(N_{A}/N_{B})_{\star}-\log(N_{A}/N_{B})_{\odot}$, where $\rm N_{A}$ and $\rm N_{B}$ are the number density of atoms of the elements A and B, respectively. 
Here, $\star$ refers to the considered star and $\odot$ to the Sun; [Fe/H] $\lesssim-0.5$; Santucci et al. 2015b) A- or B-type stars that burn helium in their cores. These evolved stars present a high and nearly constant luminosity, making them well suited for investigating the outer regions of the halo and the assembly history of our Galaxy (Xue et al., 2011, 2008; Deason et al., 2011, 2017; Belokurov et al., 2014; Santucci et al., 2015b). In recent works, BHBs were used to study dynamical substructures and stellar streams (Yuan et al., 2019, 2020, 2022; Peñarrubia & Petersen, 2021; Li et al., 2022; Wu et al., 2022), the connection between the apocenter pile-up of orbits and the so-called “break-radius” of the stellar halo density profile (Deason et al., 2018), the anisotropy of the halo velocity distribution (Lancaster et al., 2019), the age gradient of the halo out to $\sim$35 kpc (Whitten et al., 2019), the total dynamical mass of the Milky Way (Deason et al., 2021; Bird et al., 2022), and even the influence of the Large Magellanic Cloud on our Galaxy’s halo (Erkal et al., 2021; Petersen & Peñarrubia, 2021). The well-defined structure of the horizontal branch in the color-magnitude diagram (CMD), with its roughly constant luminosity, permits the development of a distance calibration for these BHBs. The first approximation developed was a linear fit, using the ($B-V$) color and absolute magnitude in the $V$ band, for stars in globular clusters (Hayes & Philip, 1979). Likewise, Preston et al. (1991) defined a smoother relation, a fourth-degree polynomial, for the same color-magnitude space. Two decades later, a widely used calibration was presented by Deason et al. (2011, hereafter D11) based on magnitudes in the $ugriz$ system for the Sloan Digital Sky Survey (SDSS; York et al., 2000) eighth data release (DR8; Aihara et al., 2011), whose color range was later extended by Belokurov & Koposov (2016). In the meantime, Fermani & Schönrich (2013) argued that it is extremely important to take into account the effect of the metallicity on the absolute magnitude estimation, proposing a new calibration based on a statistical method. However, Santucci et al. (2015a) and Utkin & Dambis (2020) showed that the differences between relations that do and do not consider the metallicity are negligible, with D11’s estimates being $2.5\%$ higher, within the $1\sigma$ errors of both calibrations. D11’s relation still remains the most widely used calibration for BHB stars (Santucci et al., 2015a, b; Thomas et al., 2018; Whitten et al., 2019; Donlon et al., 2020; Martin et al., 2022) even though photometric data have been updated several times since then. Moreover, we can now compare photometric distances of BHB stars with purely geometric estimates from Gaia’s parallaxes (e.g., Bailer-Jones et al. 2021) as well as new measurements for Galactic globular clusters (Vasiliev & Baumgardt, 2021). These facts highlight the relevance of revisiting D11’s calibration with recent data. This paper is organized as follows. In Section 2, we describe the photometric selection and revise a previous method to identify BHB stars. Section 3 presents the selection of stars in globular clusters and the method used to define the absolute magnitude calibration. Finally, in Section 4 we discuss our results.

## 2 Data

### 2.1 A-type stars

The initial selection of A-type stars was made using the photometry from the sixteenth data release (DR16) of SDSS (Ahumada et al., 2020). 
For the selection of BHB stars, we were especially interested in the spectroscopic data obtained by the Sloan Extension for Galactic Understanding and Exploration (SEGUE; Yanny et al., 2009) processed by the SEGUE Stellar Parameter Pipeline (SSPP; Lee et al., 2008a, b)222Last run on DR9 (Ahn et al., 2012; Rockosi et al., 2022). We implemented color cuts applying the following criteria: $-0.3<(g-r)_{0}<0.1$ and $0.8<(u-g)_{0}<1.4$, similar to those used in previous works (Sirko et al. 2004, D11). All the magnitudes were corrected using the extinction coefficients ($A_{g}$, $A_{r}$, $A_{u}$) provided by the SDSS catalog itself, and we removed stars with relative errors in the $g$-band magnitude greater than 1%. The photometric selection is able to exclude several undesired objects, such as white dwarfs, quasars, and cooler spectral types (Yanny et al., 2009; Vickers et al., 2012), but the major source of contamination, blue straggler stars (BSSs), remains. The distinction between evolved and main-sequence stars/BSSs is commonly made by investigating spectral features, especially Balmer lines, whose depths are affected by effective temperature ($T_{\rm eff}$) and whose widths are affected by surface gravity ($\log\rm{g}$). With the output of the SSPP, we can directly inspect these stellar atmospheric parameters. Therefore, we cross-matched the filtered sample with the SSPP catalog using a $5^{\prime\prime}$ radius. In addition to the color filters, we restricted our sample to stars with moderate signal-to-noise ratio ($S/N>10$) and $7500\,{\rm K}<T_{\rm eff}<10000\,{\rm K}$ (Deason et al., 2012; Santucci et al., 2015a), where $T_{\rm eff}$ is the estimate adopted by the pipeline. Of duplicated stars, those with the smallest $S/N$ were removed, which resulted in 16463 stars. The restrictions above remove poor-quality data and cooler stars that could remain after the color cut, which ensures that contamination from objects other than BSSs is minimal.

### 2.2 BHB stars

One of the techniques used to disentangle BSSs and BHB stars is the $f_{m}$ versus $D_{0.2}$ method (Pier, 1983), where $f_{m}$ is the minimum flux relative to the continuum level and $D_{0.2}$ is a measurement of the width of the Balmer lines, so the method provides indirect information regarding both $T_{\rm eff}$ and $\log\rm{g}$. Later, a different approach was proposed by Clewley et al. (2002) based on the parameters of the Sérsic profile (Sersic, 1968), which describes the shape of the lines. BSSs present a higher $\log\rm{g}$ than stars located on the horizontal branch. Santucci et al. (2015a) showed that these stellar types are clearly distinguishable for magnitudes $g_{0}<18$ with SEGUE/SDSS DR8 data, making it possible to classify them by fitting a combination of two Gaussian functions to their $\log\rm{g}$ distributions. This method proved to be in good concordance with spectral analysis, with more than 90% agreement. When replicating this procedure with current SEGUE data, we noticed a change in the peaks of both groups and a greater overlap in the $\log\rm{g}$ distribution, as presented in the left column of Fig. 1. This is observed even for relatively bright A-type stars ($g_{0}<18$, top panel), which makes it more difficult to separate these objects with that simple approach (dashed lines indicate the Gaussian fits from Santucci et al. 2015a). 
These differences are probably due to changes between SDSS releases in the $\log\rm{g}$ estimates used to obtain the final adopted parameter333See https://www.sdss.org/dr16/spectro/sspp_changes/ for detailed information.

Figure 1: Histograms of $\log\rm{g}$. Left column: log g adopted by the pipeline for stars with $g_{0}<18$ (top) and $g_{0}>18$ (bottom). Dashed lines are the Gaussian distributions defined by Santucci et al. (2015a). Right column: log g estimates provided by the SSPP, spectroscopically determined (top) and from the ANNRR method (bottom), for all stars.

### 2.3 Classification

Given the two-Gaussian-like morphology of the $\log\rm{g}$ distributions observed for our sample of A-type stars (Fig. 1), we used a Gaussian Mixture Model (GMM) unsupervised approach in order to distinguish BHBs and BSSs. For this task, we utilize the scikit-learn (Pedregosa et al., 2011) GaussianMixture444https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html#sklearn.mixture.GaussianMixture. package. In this GMM implementation, the expectation-maximization algorithm (Dempster et al., 1977) is employed in the search for the best-fit model. The GMM technique fits the data as a finite combination of $K$ Gaussian distributions. As done previously by Santucci et al. (2015a), $K$ was defined based on visual inspection of the $\log\rm{g}$ estimates presented in Fig. 1 and the assumption that the contamination is predominantly of main-sequence stars/BSSs. Therefore, $K=2$ is an adequate value for the sample. Moreover, a GMM can be readily applied to data of arbitrary dimensionality. Therefore, we take advantage of such flexibility and explore a suitable combination of $\log\rm{g}$ estimates provided by the SSPP (we refer the reader to Lee et al. 2008a for details about the different approaches to determine $\log\rm{g}$ from SEGUE spectra). We noticed that the distributions of both $\log\rm{g}_{\rm ANNRR}$ and $\log\rm{g}_{\rm SPEC}$ clearly exhibit two peaks, as expected for the BHB/BSS dichotomy, while this feature is not observed in the other estimates. These two distributions are shown in the right column of Fig. 1. Hence, we proceeded with the GMM separation within the two-dimensional space defined by these $\log\rm{g}$ estimates. The final $\log\rm{g}$ adopted by the pipeline was not considered an extra dimension, as it consists of a weighted mean of the valid estimates. In order to guarantee the robustness of our method against the uncertainties reported by the SSPP, we constructed a set of $10^{4}$ realizations of each star’s $\log\rm{g}$ estimates in a Monte Carlo framework. Then, we performed the GMM classification for all iterations. Finally, the fraction of instances in which a star is attributed to a certain class (either BHB or BSS) is taken as its membership probability for that given group. For this procedure, stars without valid estimates of both $\log\rm{g}_{\rm ANNRR}$ and $\log\rm{g}_{\rm SPEC}$ are removed. With this strategy, we achieved a sample of 5699/4590 stars classified as BHBs above 50%/99% probability555The full sample is available at https://github.com/guilhermelimberg/bhb_dist. The final classification obtained is shown in Fig. 2. The difference in the uncertainties of the estimates greatly influences the classification, as $\log\rm{g}_{\rm ANNRR}$ presents more precise values ($\sim 0.06$) than $\log\rm{g}_{\rm SPEC}$ ($\sim 0.21$). We cross-matched our sample with the one from Santucci et al. (2015a) to evaluate the fraction of BSS contamination. 
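A minimal sketch of this Monte Carlo GMM classification is given below. It is an illustration of the technique rather than the authors' code: the input arrays, the use of scikit-learn's default GMM settings, and the identification of the BHB component as the one with the lower mean $\log g$ are assumptions made for clarity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

def bhb_probability(logg_annrr, logg_spec, err_annrr, err_spec, n_mc=10_000):
    """Monte Carlo GMM membership probabilities in the (log g_ANNRR, log g_SPEC) plane.

    Each iteration perturbs both surface-gravity estimates by their reported
    uncertainties, fits a two-component Gaussian mixture, and assigns every star
    to a component; a star's BHB probability is the fraction of iterations in
    which it lands in the lower-log g component.
    """
    X0 = np.column_stack([logg_annrr, logg_spec])
    errs = np.column_stack([err_annrr, err_spec])
    counts = np.zeros(len(X0))

    for _ in range(n_mc):
        X = X0 + rng.normal(size=X0.shape) * errs          # one Monte Carlo realization
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gmm.predict(X)
        bhb_component = np.argmin(gmm.means_.sum(axis=1))  # lower log g -> evolved BHB
        counts += (labels == bhb_component)

    return counts / n_mc
```

With $10^{4}$ realizations this brute-force loop is slow; in practice one would lower `n_mc` for testing or parallelize the iterations, and stars lacking a valid value of either estimate are dropped beforehand.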
In this comparison, 10% of our BHB set had previously been classified as BSSs, and, among those with a probability greater than 99% of being a BHB following the method implemented here, 2% are possibly incorrectly assigned.

Figure 2: Distribution of classified stars with $g_{0}<18$ (left) and $g_{0}>18$ (right) in the surface gravity space. Median errors are indicated in the bottom right corner. Histograms show the distribution of log g from the respective axis for stars classified as BHB and BSS. Colors indicate the probability of being a BHB star.

## 3 Absolute magnitude calibration

### 3.1 Globular cluster stars

The procedure to construct an absolute magnitude relation follows previous works (see Section 1), starting with the selection of BHB stars in globular clusters. We used the photometric catalog from An et al. (2008), which provides magnitudes for crowded fields observed by SDSS. The clusters presenting a well-defined horizontal branch were selected, and their magnitudes were corrected using the standard extinction ($E(B-V)$) from Schlegel et al. (1998) along with the relative extinctions from Wang & Chen (2019) for the $g$, $u$, and $r$ bands. Vasiliev & Baumgardt (2021) attributed membership probabilities to stars in globular clusters based on proper motions and parallaxes from Gaia EDR3. We selected stars from several globular clusters with a probability greater than 0.99 of belonging to those clusters, and we obtained their absolute magnitude in the SDSS $g$-band ($M_{g}$) with the estimated distance for each cluster given by these authors. The list of clusters, their heliocentric distances, and distance moduli are presented in Table 1.

Table 1: Heliocentric distances provided by Vasiliev & Baumgardt (2021) for each globular cluster, uncertainties, and their respective distance moduli.

Cluster | D (kpc) | $\sigma_{D}$ (kpc) | $(m-M)_{0}$ (mag)
---|---|---|---
NGC2419 | 83.0 | 1.5 | 19.59
NGC4147 | 18.65 | 0.16 | 16.35
NGC5024, M53 | 18.59 | 0.15 | 16.35
NGC5053 | 17.30 | 0.14 | 16.19
NGC5272, M3 | 10.20 | 0.06 | 15.04
NGC5466 | 16.32 | 0.13 | 16.06
NGC5904, M5 | 7.49 | 0.05 | 14.37
NGC6205, M13 | 7.53 | 0.06 | 14.38
NGC6341, M92 | 8.60 | 0.05 | 14.67
NGC7078, M15 | 10.73 | 0.14 | 15.15
NGC7089, M2 | 11.62 | 0.13 | 15.33

To create the sample used to implement the calibration, we applied the color limits defined for the initial selection (see Section 2.1). Then, the stars were selected in a single combined CMD, limiting $M_{g}$ to between $-0.15$ and $1.15$. After this exercise, the remaining globular cluster members were checked individually in the SIMBAD database (Wenger et al., 2000), and those classified as variables, blue stragglers, and other undesirable types were removed. We also excluded stars with flags in the magnitudes used, leaving us with 744 stars from which to derive the calibrations.

### 3.2 Fitting the horizontal branch

Finding the best mathematical relationships to fit observable data is not an easy task. In previous works, the absolute magnitudes of BHB stars have been described by a high-degree polynomial (Preston et al., 1991; Deason et al., 2011; Belokurov & Koposov, 2016). Instead of arbitrarily assuming that this function is the best representation of the data, we explore the possible combinations between colors and absolute magnitudes. 
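As a concrete illustration of how the calibration sample is assembled, the sketch below converts the cluster distances of Table 1 into distance moduli and absolute magnitudes; the function names and per-star inputs are hypothetical, and the actual catalog handling is more involved.

```python
import numpy as np

def distance_modulus(d_kpc):
    """(m - M)_0 = 5 log10(D / 10 pc) for a cluster distance D given in kpc."""
    return 5.0 * np.log10(d_kpc * 1.0e3 / 10.0)

# Sanity check against Table 1: NGC5272 (M3) at 10.20 kpc -> (m - M)_0 = 15.04.
assert abs(distance_modulus(10.20) - 15.04) < 0.01

def absolute_g(g0, d_kpc):
    """Absolute g magnitude of a cluster member from its dereddened g_0 magnitude
    and the Vasiliev & Baumgardt (2021) distance of its host cluster."""
    return g0 - distance_modulus(d_kpc)
```

The search over functional forms described next operates on exactly these (color, $M_{g}$) pairs, restricted to $-0.15<M_{g}<1.15$.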
For this task, we employed the TuringBot software (Ashok et al., 2020), a code that performs symbolic regression using a simulated annealing algorithm (Delahaye et al., 2019; Chira & Plionis, 2019) in order to search for the best set of parameters and mathematical operations to describe the data. TuringBot is particularly interesting in this case, because it allows the visualization of the estimated mathematical laws, allowing the user to choose the most appropriate equations for their needs. Furthermore, the user is free to choose the mathematical operations involved in the fitted functions, the error metric for convergence, as well as the input variables. The best fits are presented in a summarized box, combining the error and the complexity of the equations. The complexity is defined by the sum of the “size” of the mathematical operations, constants, and variables present in the solutions. The program assumes that an input variable, constant, sum, subtraction, and multiplication have size 1 each, division has size 2, and more complex operations have higher sizes666More TuringBot details can be found in the program documentation, available at: https://turingbotsoftware.com/documentation.html.. We verified that the use of very complex mathematical operations is unnecessary and does not improve the average error of the equations presented by the software. Hence, we adopted only the basic mathematical operations (sum, subtraction, multiplication, and division) as input for the search for absolute magnitude calibrations. The mean absolute error was used as a criterion for convergence and we tested the dependence of all the most common available observable variables found in the literature for estimates of this type, such as magnitudes, color indices, and metallicity. After evaluating all the combinations of input variables ($u_{0}$, $g_{0}$, $r_{0}$, $(u-g)_{0}$, $(u-r)_{0}$, $(g-r)_{0}$, and [Fe/H]) presented in Appendix A, we found that there is no significant dependence on metallicity in the calibrations provided by the code, regardless of the mathematical operations adopted and the algorithm convergence time. The colors $(u-g)_{0}$ and $(u-r)_{0}$ provided calibrations with smaller errors than the color $(g-r)_{0}$, traditionally used in the absolute magnitude calibration of BHB stars (D11; Belokurov & Koposov 2016), and also smaller than $(g-r)_{0}$ with [Fe/H], which means we can achieve more accurate results that do not require metallicity information. The observed improvement with $(u-g)_{0}$ and $(u-r)_{0}$ color might be associated with the $u$ filter, whose transmission curve is mostly between 3000Å and 4000Å, i.e., it is positioned in a region of the spectrum where the Balmer discontinuity ($\sim$3645Å) is located, as well as several Hydrogen lines from the Balmer series, which makes it a useful indirect indicator of $T_{\rm eff}$ and $\log\rm{g}$ of the BHBs, atmospheric parameters that are directly linked to the mass of the stars in the horizontal branch (Valcarce & Catelan, 2008). Fig. 3 shows the associated errors for each fit in the final BHB sample. Clearly, $(u-r)_{0}$ presents a better performance than $(g-r)_{0}$ and $(u-g)_{0}$. Using the same tool, we find that the relation proposed in D11 is a function of complexity 33 and ${\rm error}=0.12$, whilst equations of lower order present a much lower complexity with errors of ${\sim}0.10$. We chose the first functions from which there is no significant decrease in error, i.e., functions of complexity 6 in Fig. 
3, as those that best describe the data. There is a singularity in the calibrations; however, it lies outside our color range and therefore does not pose an obstacle to their use in the context of this work.

Figure 3: Error comparison between fits for colors $(g-r)_{0}$, $(u-g)_{0}$ and $(u-r)_{0}$ from TuringBot.

$M_{g}=\frac{0.178}{0.537+(g-r)_{0}}$ (1)

$M_{g}=\frac{0.721}{(u-r)_{0}}-0.212$ (2)

### 3.3 Distances analysis

Left panels in Fig. 4 show the distribution of BHB stars in the CMD with color $(g-r)_{0}$ (top) and $(u-r)_{0}$ (bottom). In the top left panel, we can observe how the calibration proposed here (Eq. 1) provides magnitudes lower than D11's, which results in larger distances. The difference is minimal at $(g-r)_{0}\sim-0.20$, where both equations come closest, and the smaller values are a consequence of the inclusion of the cluster NGC7078, whose stars are brighter and were not included in D11. On the other hand, the distribution using $(u-r)_{0}$ has a lower dispersion (bottom left panel).

Figure 4: Left panels: color–magnitude diagrams showing BHBs used to define the calibrations. Dash-dotted line represents the polynomial fit defined in D11. Solid lines represent the calibrations for the $(g-r)_{0}$ and $(u-r)_{0}$ colors presented in this work (Eq. 1 and 2, respectively, from top to bottom). Median errors of the data are indicated in the bottom right corner of each panel. Right panels: difference between distances calculated with D11's calibration and those presented in this work, respectively $(g-r)_{0}$ (top) and $(u-r)_{0}$ (bottom).

In the right panels (Fig. 4), we also show the comparison between distances estimated with the relation from D11 and each calibration defined in the present work. For consistency with D11's relation, only stars bluer than $(g-r)_{0}=0$ were considered, and we rejected stars with BHB probabilities of less than 0.99 to reduce the number of misclassified stars. The new calibration using the $(g-r)_{0}$ color provides distances about 5% larger than D11's for the reddest stars, while the other end of the color window reaches a relative difference of up to 9% (top right panel). For the $(u-r)_{0}$ color, the scatter is more uniform and much larger for the bluest stars (bottom right panel).

When comparing with purely astrometric heliocentric distances, there is considerable scatter, even for stars closer than 5 kpc. For this comparison, we selected stars with relative parallax uncertainty from Gaia EDR3 (Gaia Collaboration, 2020) in the interval $0<\sigma_{\varpi}/\varpi<0.2$, re-normalized unit weight errors within the recommended range ($\texttt{RUWE}<1.4$; Lindegren 2018), and also a BHB probability greater than 0.99 (${>}\ 300$ stars). In Fig. 5, we show the comparison between the geometric (left) and photogeometric (right) distances provided by Bailer-Jones et al. (2021) and our calibration using the $(u-r)_{0}$ color. For fainter stars, both of Bailer-Jones et al.'s (2021) distances are frequently underestimated. The gray region indicates the interval within 20% of the distance on the respective horizontal axes; 65% of the stars fall outside it when using the photogeometric estimates and 54% when using the purely geometric ones. Gaia's parallax measurements are potentially not accurate enough for these BHBs, so the final results are not representative of the sample (since the distances inferred from Bayesian methods are strongly dependent on the measured parallax).
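The distances entering this comparison follow directly from Eqs. (1)–(2) and the distance modulus. A minimal sketch is given below; the example star and its color are hypothetical, not drawn from our catalog.

```python
import numpy as np

def M_g_from_gr(gr0):
    # Eq. (1); the pole at (g-r)_0 = -0.537 lies outside the adopted color window
    return 0.178 / (0.537 + gr0)

def M_g_from_ur(ur0):
    # Eq. (2); the pole at (u-r)_0 = 0 lies outside the adopted color window
    return 0.721 / ur0 - 0.212

def distance_kpc(g0, M_g):
    # heliocentric distance from the apparent distance modulus g_0 - M_g
    return 10.0 ** ((g0 - M_g + 5.0) / 5.0) / 1.0e3

Mg = M_g_from_ur(1.20)         # ~0.39 mag for a hypothetical (u-r)_0 = 1.20
print(distance_kpc(17.0, Mg))  # ~21 kpc for a hypothetical g_0 = 17.0
```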
We also point out that this effect is unlikely to be the consequence of an inappropriate classification, since 94% of the stars possess $\log\rm{g}_{\rm ADOP}<3.6$ and would receive the same label by Santucci et al.'s (2015a) method. Finally, we cannot endorse the compatibility between D11's distances for BHB stars found in the Pristine survey (Starkenburg et al., 2019) and Gaia DR2's parallaxes. As the Pristine data are not publicly available, it is not possible to evaluate whether the difference is due to the BHB sample used.

Figure 5: Comparison between the distances estimated by Eq. 2 ($D_{(u-r)_{0}}$), geometric ($D_{geo}$, left) and photogeometric ($D_{phot}$, right) distances from Bailer-Jones et al. (2021). Colors indicate Gaia G magnitude and gray area indicates the region of $1-\sigma$. Median errors are indicated in the bottom right corner of the top panels.

Similar inconsistencies between photometric and astrometric-inferred distances were also observed in previous works. Using OB stars, Shull & Danforth (2019) noted an increase in the discrepancies at ${d>1.5}$ kpc, with B-type stars showing smaller distances when considering parallaxes alone. Our sample of A-type stars seems to follow this same trend.

## 4 Discussion & summary

Since the last absolute magnitude calibration published for BHB stars, Gaia data have become available, allowing us to revisit the previous relationship thanks to the better characterization of globular clusters (Vasiliev & Baumgardt, 2021). Using data from the SSPP catalog, we obtained a sample of ${\sim}5700$ BHB stars by implementing the GMM algorithm. This new approach is an alternative to the previous individual Gaussian fits, as it is clear that they do not properly distinguish the latest SSPP $\log{\rm g}$ distribution (Fig. 1).

To find which kind of function better describes the distribution of BHBs in the CMD, we used a software package that implements symbolic regression. We suggest two new color–magnitude calibrations based on photometry, including a relation with the $(u-r)_{0}$ color that has not been used before. This calibration provides more accurate estimates than the $(g-r)_{0}$ color in most cases. For the bluest stars, the differences can exceed 10% of the nominal values (Eq. 2). However, this difference decreases for redder and more distant stars. Here, we show that the calibrations can be simpler and still achieve an acceptable result, very similar to that obtained from D11's relation.

We noted substantial differences between photometric and geometric/photogeometric distances. A possibility would be inaccurate estimates of $\log{\rm g}$ provided by SEGUE, as it was mainly designed for cool stars ($T_{\rm eff}<7500\,{\rm K}$). However, Santucci et al. (2015a) showed that this is unlikely to be the case. It could also be due to an incorrect value of extinction for the SDSS photometry, yet this also does not seem to explain the disparity. Most stars were observed in regions of low extinction, and we could not find any relation between the extinction values and the observed inconsistency. The observed differences also lead us to believe that the measured parallaxes for these stars could be unreliable, as there is no agreement between the distance estimates even for the closest stars. It would be interesting to investigate whether the same discrepancy can be observed with other halo tracers. The new sample made available here can help to improve results already known about the structures of the Milky Way stellar halo.
For example, these BHBs can be used to revisit the duality of the stellar halo or re-evaluate the Galaxy’s mass estimate. We thank the anonymous referee for the careful review and all the suggestions, which greatly improved our work. This research was financed with public funds, without which it would not have been possible. F.O.B. acknowledges CAPES (PROEX; Proc. 88887.604787/2021-00). R.M.S. acknowledges CNPq (Proc. 306667/2020-7). S.R. acknowledges partial financial support from FAPESP (Proc. 2015/50374-0 and 2014/18100-4), CAPES, and CNPq. G.L. acknowledges FAPESP (Proc. 2021/10429-0). A.P.-V. acknowledges the DGAPA-PAPIIT grant IA103122. H.D.P. thanks FAPESP (Procs. 2018/21250-9 and 2022/04079-0). This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics — Harvard & Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck- Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This research has also made use of RStudio (RStudio Team, 2022) and TOPCAT (http://www.starlink.ac.uk/topcat/, Taylor, 2005). ## References * Ahn et al. (2012) Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, ApJS, 203, 21, doi: 10.1088/0067-0049/203/2/21 * Ahumada et al. (2020) Ahumada, R., Prieto, C. A., Almeida, A., et al. 2020, ApJS, 249, 3, doi: 10.3847/1538-4365/ab929e * Aihara et al. (2011) Aihara, H., Allende Prieto, C., An, D., et al. 2011, ApJS, 193, 29, doi: 10.1088/0067-0049/193/2/29 * An et al. (2008) An, D., Johnson, J. A., Clem, J. L., et al. 2008, ApJS, 179, 326, doi: 10.1086/592090 * Ashok et al. (2020) Ashok, D., Scott, J., Wetzel, S., Panju, M., & Ganesh, V. 2020, arXiv e-prints, arXiv:2010.11328. 
https://arxiv.org/abs/2010.11328 * Bailer-Jones et al. (2021) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147, doi: 10.3847/1538-3881/abd806 * Belokurov et al. (2018) Belokurov, V., Erkal, D., Evans, N. W., Koposov, S. E., & Deason, A. J. 2018, MNRAS, 478, 611, doi: 10.1093/mnras/sty982 * Belokurov & Koposov (2016) Belokurov, V., & Koposov, S. E. 2016, MNRAS, 456, 602, doi: 10.1093/mnras/stv2688 * Belokurov et al. (2014) Belokurov, V., Koposov, S. E., Evans, N. W., et al. 2014, MNRAS, 437, 116, doi: 10.1093/mnras/stt1862 * Bird et al. (2022) Bird, S. A., Xue, X.-X., Liu, C., et al. 2022, arXiv e-prints, arXiv:2207.08839. https://arxiv.org/abs/2207.08839 * Brown (2021) Brown, A. G. A. 2021, ARA&A, 59, doi: 10.1146/annurev-astro-112320-035628 * Cacciari (1999) Cacciari, C. 1999, in Astronomical Society of the Pacific Conference Series, Vol. 167, Harmonizing Cosmic Distance Scales in a Post-HIPPARCOS Era, ed. D. Egret & A. Heck, 140–160 * Chira & Plionis (2019) Chira, M., & Plionis, M. 2019, MNRAS, 490, 5904, doi: 10.1093/mnras/stz2885 * Clewley et al. (2002) Clewley, L., Warren, S. J., Hewett, P. C., et al. 2002, MNRAS, 337, 87, doi: 10.1046/j.1365-8711.2002.05864.x * Deason et al. (2011) Deason, A. J., Belokurov, V., & Evans, N. W. 2011, MNRAS, 416, 2903, doi: 10.1111/j.1365-2966.2011.19237.x * Deason et al. (2017) Deason, A. J., Belokurov, V., Koposov, S. E., et al. 2017, MNRAS, 470, 1259, doi: 10.1093/mnras/stx1301 * Deason et al. (2018) Deason, A. J., Belokurov, V., Koposov, S. E., & Lancaster, L. 2018, ApJ, 862, L1, doi: 10.3847/2041-8213/aad0ee * Deason et al. (2012) Deason, A. J., Belokurov, V., Evans, N. W., et al. 2012, MNRAS, 425, 2840, doi: 10.1111/j.1365-2966.2012.21639.x * Deason et al. (2021) Deason, A. J., Erkal, D., Belokurov, V., et al. 2021, MNRAS, 501, 5964, doi: 10.1093/mnras/staa3984 * Delahaye et al. (2019) Delahaye, D., Chaimatanan, S., & Mongeau, M. 2019, in International Series in Operations Research & Management Science (ISOR), Vol. 272, Handbook of Metaheuristics, ed. M. Gendreau & J.-Y. Potvin (Springer), 1–35.ISBN 978–3–319–91085–7, doi: 10.1007/978-3-319-91086-4_1 * Dempster et al. (1977) Dempster, A. P., Laird, N. M., & Rubin, D. B. 1977, J. R. Stat. Soc. B, 39, 1, doi: https://doi.org/10.1111/j.2517-6161.1977.tb01600.x * Donlon et al. (2020) Donlon, Thomas, I., Newberg, H. J., Sanderson, R., & Widrow, L. M. 2020, ApJ, 902, 119, doi: 10.3847/1538-4357/abb5f6 * Erkal et al. (2021) Erkal, D., Deason, A. J., Belokurov, V., et al. 2021, MNRAS, 506, 2677, doi: 10.1093/mnras/stab1828 * Fermani & Schönrich (2013) Fermani, F., & Schönrich, R. 2013, MNRAS, 430, 1294, doi: 10.1093/mnras/sts703 * Gaia Collaboration (2020) Gaia Collaboration. 2020, Gaia Source Catalogue EDR3, IPAC, doi: 10.26131/IRSA541 * Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1, doi: 10.1051/0004-6361/201629272 * Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1, doi: 10.1051/0004-6361/202039657 * Hayes & Philip (1979) Hayes, D. S., & Philip, A. G. D. 1979, PASP, 91, 71, doi: 10.1086/130444 * Helmi (2020) Helmi, A. 2020, ARA&A, 58, 205, doi: 10.1146/annurev-astro-032620-021917 * Koppelman et al. (2018) Koppelman, H., Helmi, A., & Veljanoski, J. 2018, ApJ, 860, L11, doi: 10.3847/2041-8213/aac882 * Lancaster et al. (2019) Lancaster, L., Koposov, S. E., Belokurov, V., Evans, N. W., & Deason, A. J. 
2019, MNRAS, 486, 378, doi: 10.1093/mnras/stz853 * Leavitt & Pickering (1912) Leavitt, H. S., & Pickering, E. C. 1912, Harvard College Observatory Circular, 173, 1 * Lee et al. (2008a) Lee, Y. S., Beers, T. C., Sivarani, T., et al. 2008a, AJ, 136, 2022, doi: 10.1088/0004-6256/136/5/2022 * Lee et al. (2008b) —. 2008b, AJ, 136, 2050, doi: 10.1088/0004-6256/136/5/2050 * Li et al. (2022) Li, T. S., Ji, A. P., Pace, A. B., et al. 2022, ApJ, 928, 30, doi: 10.3847/1538-4357/ac46d3 * Lindegren (2018) Lindegren, L. 2018. http://www.rssd.esa.int/doc_fetch.php?id=3757412 * Malhan et al. (2018) Malhan, K., Ibata, R. A., & Martin, N. F. 2018, MNRAS, 481, 3442, doi: 10.1093/mnras/sty2474 * Martin et al. (2022) Martin, N. F., Venn, K. A., Aguado, D. S., et al. 2022, Nature, 601, 45, doi: 10.1038/s41586-021-04162-2 * Myeong et al. (2018) Myeong, G. C., Evans, N. W., Belokurov, V., Sanders, J. L., & Koposov, S. E. 2018, ApJ, 856, L26, doi: 10.3847/2041-8213/aab613 * Peñarrubia & Petersen (2021) Peñarrubia, J., & Petersen, M. S. 2021, MNRAS, 508, L26, doi: 10.1093/mnrasl/slab090 * Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825 * Petersen & Peñarrubia (2021) Petersen, M. S., & Peñarrubia, J. 2021, Nature Astronomy, 5, 251, doi: 10.1038/s41550-020-01254-3 * Pier (1983) Pier, J. R. 1983, ApJS, 53, 791, doi: 10.1086/190910 * Preston et al. (1991) Preston, G. W., Shectman, S. A., & Beers, T. C. 1991, ApJ, 375, 121, doi: 10.1086/170175 * Rockosi et al. (2022) Rockosi, C. M., Lee, Y. S., Morrison, H. L., et al. 2022, The Astrophysical Journal Supplement Series, 259, 60, doi: 10.3847/1538-4365/ac5323 * RStudio Team (2022) RStudio Team. 2022, RStudio: Integrated Development Environment for R, RStudio, PBC, Boston, MA. http://www.rstudio.com/ * Santucci et al. (2015a) Santucci, R. M., Placco, V. M., Rossi, S., et al. 2015a, ApJ, 801, 116, doi: 10.1088/0004-637X/801/2/116 * Santucci et al. (2015b) Santucci, R. M., Beers, T. C., Placco, V. M., et al. 2015b, ApJ, 813, L16, doi: 10.1088/2041-8205/813/1/L16 * Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, doi: 10.1086/305772 * Sersic (1968) Sersic, J. L. 1968, Atlas de Galaxias Australes (Universidad Nacional de Cordoba: Observatorio Astronomico) * Shapley (1916) Shapley, H. 1916, ApJ, 43, 217, doi: 10.1086/142246 * Shull & Danforth (2019) Shull, J. M., & Danforth, C. W. 2019, ApJ, 882, 180, doi: 10.3847/1538-4357/ab357d * Sirko et al. (2004) Sirko, E., Goodman, J., Knapp, G. R., et al. 2004, AJ, 127, 899, doi: 10.1086/381483 * Starkenburg et al. (2019) Starkenburg, E., Youakim, K., Martin, N., et al. 2019, MNRAS, 490, 5757, doi: 10.1093/mnras/stz2935 * Taylor (2005) Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29 * Thomas et al. (2018) Thomas, G. F., McConnachie, A. W., Ibata, R. A., et al. 2018, MNRAS, 481, 5223, doi: 10.1093/mnras/sty2604 * Utkin & Dambis (2020) Utkin, N. D., & Dambis, A. K. 2020, MNRAS, 499, 1058, doi: 10.1093/mnras/staa2819 * Valcarce & Catelan (2008) Valcarce, A. A. R., & Catelan, M. 2008, A&A, 487, 185, doi: 10.1051/0004-6361:20078231 * Vasiliev & Baumgardt (2021) Vasiliev, E., & Baumgardt, H. 2021, MNRAS, 505, 5978, doi: 10.1093/mnras/stab1475 * Vickers et al. (2012) Vickers, J. J., Grebel, E. K., & Huxor, A. P. 
2012, AJ, 143, 86, doi: 10.1088/0004-6256/143/4/86 * Wang & Chen (2019) Wang, S., & Chen, X. 2019, ApJ, 877, 116, doi: 10.3847/1538-4357/ab1c61 * Wenger et al. (2000) Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9, doi: 10.1051/aas:2000332 * Whitten et al. (2019) Whitten, D. D., Beers, T. C., Placco, V. M., et al. 2019, ApJ, 884, 67, doi: 10.3847/1538-4357/ab4269 * Wu et al. (2022) Wu, W., Zhao, G., Xue, X.-X., Bird, S. A., & Yang, C. 2022, ApJ, 924, 23, doi: 10.3847/1538-4357/ac31ac * Xue et al. (2008) Xue, X. X., Rix, H. W., Zhao, G., et al. 2008, ApJ, 684, 1143, doi: 10.1086/589500 * Xue et al. (2011) Xue, X.-X., Rix, H.-W., Yanny, B., et al. 2011, ApJ, 738, 79, doi: 10.1088/0004-637X/738/1/79 * Yanny et al. (2009) Yanny, B., Rockosi, C., Newberg, H. J., et al. 2009, AJ, 137, 4377, doi: 10.1088/0004-6256/137/5/4377 * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513 * Yuan et al. (2020) Yuan, Z., Chang, J., Beers, T. C., & Huang, Y. 2020, ApJ, 898, L37, doi: 10.3847/2041-8213/aba49f * Yuan et al. (2019) Yuan, Z., Smith, M. C., Xue, X.-X., et al. 2019, ApJ, 881, 164, doi: 10.3847/1538-4357/ab2e09 * Yuan et al. (2022) Yuan, Z., Malhan, K., Sestito, F., et al. 2022, ApJ, 930, 103, doi: 10.3847/1538-4357/ac616f ## Appendix A Fitting the horizontal branch with [Fe/H] For the analysis of dependence of metallicity, we have a smaller sample of stars than the one used for the calibrations, about 40 stars from the original sample were available in the SSPP data. The same procedure was done with both the pure photometric sample and this one cross-matched with SSPP. Each solution evaluation (considering different input variables) was taken in a period of approximately 10 min, which is enough for the convergence of several solutions, as TuringBot needs less time to converge than other similar softwares (Ashok et al., 2020). The hardware involved in this process is highly important for the convergence time. In our case, the program was executed in a computer with an AMD Ryzen 7 2700 processor, with 16 threads, all used at once. Tests were also made by running the program longer and no significant improvement was observed. Figure 6: Error comparison between fits for some combinations of magnitudes, colors, and [Fe/H] from TuringBot. Figure 6 shows how the error decreases with the increase of the complexity of the functions (see Section 3.2 for the definition of “complexity”). Some fits coincide since the program can create colors from magnitudes, if it is better than the magnitudes alone, and not use all the variables provided. This is the reason why the line for all the parameters is not visible. We can see that, for equations up to complexity 25, there is no advantage of adding the metallicity information as we can achieve similar errors using only magnitudes. ## Appendix B Globular cluster distances Figure 7 presents a boxplot for the distances obtained with both calibrations and the one from D11 compared to those provided by Vasiliev & Baumgardt (2021). The size of each box represent the spread (25th and 75th quartiles) and the center line indicates the median value. The number of stars in each cluster is displayed above their NGC identifier. The overall results indicate that Eq. 2 is in general more accurate than D11’s relation. This conclusion is supported by better distance predictions where 10 of the 11 globular clusters were better constrained with Eq. 2. 
The calibration with the $(u-r)_{0}$ color proved to be more accurate than the relation provided in this work for the $(g-r)_{0}$ color, as nine clusters present a lower dispersion and eight clusters show medians close to zero when using the former color. NGC2419 was the only cluster where the $(g-r)_{0}$ color presented a better performance; however, this cluster is represented by only 9 members, so this result may be an effect of the small sample. The sample considered is biased toward closer clusters, which is apparent from the large distance gap between NGC2419, the farthest cluster, and NGC5466, the penultimate one. Therefore, we cannot ascertain whether or not the accuracy of the distances estimated using the $(u-r)_{0}$ color varies with distance.

Figure 7: Boxplot of the relative distance difference using each calibration presented in this work (Eq. 1 and Eq. 2) and that from D11. $D_{V21}$ is the distance provided by Vasiliev & Baumgardt (2021), inferred from Gaia EDR3 data. The number of data points in each cluster is displayed at the bottom of the panel.
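The statistic behind Fig. 7 is simply the per-cluster distribution of relative distance offsets. A minimal sketch is given below, assuming a table with one row per cluster member; the column names are hypothetical, not from our data release.

```python
import pandas as pd

def per_cluster_offsets(df):
    """df columns assumed: 'cluster', photometric 'D_calib' (kpc), reference 'D_V21' (kpc)."""
    rel = (df["D_calib"] - df["D_V21"]) / df["D_V21"]
    # medians and quartiles per cluster: the box centers and box edges drawn in Fig. 7
    return rel.groupby(df["cluster"]).describe(percentiles=[0.25, 0.5, 0.75])
```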
# Hierarchical hub-filament structures and gas inflows on galaxy-cloud scales J. W. Zhou Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany<EMAIL_ADDRESS>Timothy A. Davis Cardiff Hub for Astrophysics Research and Technology, School of Physics and Astronomy, Cardiff University, Queens Buildings, Cardiff CF24 3AA, UK ###### Abstract We investigated the kinematics and dynamics of gas structures on galaxy-cloud scales in two spiral galaxies NGC5236 (M83) and NGC4321 (M100) using CO (2$-$1) line. We utilized the FILFINDER algorithm on integrated intensity maps for the identification of filaments in two galaxies. Clear fluctuations in velocity and density were observed along these filaments, enabling the fitting of velocity gradients around intensity peaks. The variations in velocity gradient across different scales suggest a gradual and consistent increase in velocity gradient from large to small scales, indicative of gravitational collapse, something also revealed by the correlation between velocity dispersion and column density of gas structures. Gas structures at different scales in the galaxy may be organized into hierarchical systems through gravitational coupling. All the features of gas kinematics on galaxy-cloud scale are very similar to that on cloud-clump and clump-core scales studied in previous works. Thus, the interstellar medium from galaxy to dense core scales presents multi-scale/hierarchical hub-filament structures. Like dense core as the hub in clump, clump as the hub in molecular cloud, now we verify that cloud or cloud complex can be the hub in spiral galaxies. Although the scaling relations and the measured velocity gradients support the gravitational collapse of gas structures on galaxy-cloud scales, the collapse is much slower than a pure free-fall gravitational collapse. ###### keywords: Submillimeter: ISM; ISM: clouds; ISM: kinematics and dynamics; galaxies: ISM; galaxies: structure; galaxies: star formation; techniques: image processing ## 1 Introduction Measuring the dynamical coupling between density enhancements in giant molecular clouds and gas motion of their local environment gives us a perspective to understand the formation of hierarchical structures in high- mass star formation regions McKee2007-45; Motte2018-56; Henshaw2020-4. High- resolution observations of high-mass star-forming regions reveal the structured arrangement of density enhancements within filamentary gas networks, notably in hub-filament systems. Within these systems, converging flows channel material into the central hub through the interconnected filaments Peretto2013; Henshaw2014; Zhang2015; Liu2016; Yuan2018; Lu2018; Issac2019; Dewangan2020; Liu2022-511; Zhou2022-514; Zhou2023-676. In observations, gas inflow will naturally produce velocity gradients along filaments Kirk2013; Liu2016; Yuan2018; Williams2018-613; Chen2019-875; Chen2020-891; Pillai2020-4; Zhou2022-514; Zhou2023-676. Henshaw2020-4 identified widespread velocity fluctuations spanning various spatial scales and physical contexts within galaxies. They observed oscillatory gas movements with wavelengths ranging from 0.3 to 400 parsecs, intricately linked to regularly spaced density enhancements. These enhancements are likely a result of gravitational instabilities Henshaw2016-463; Elmegreen2018-863. 
Furthermore, the spatial correlation between density enhancements and velocity gradient extrema may indicate convergent motion due to gravitational collapse Hacar2011-533; Hacar2016-587; Clarke2016-458; Misugi2019-881; Zhou2022-514; Zhou2023-676. Zhou2022-514 investigated the physical properties and evolution of hub- filament systems in $\sim$ 140 protoclusters, utilizing spectral lines observed in the ATOMS (ALMA Three-millimeter Observations of Massive Star- forming regions) survey Liu2020. Their findings indicate the presence of hub- filament structures across a range of scales, spanning from 0.1 parsec to several parsecs, in diverse Galactic environments. Additionally, slender structures resembling filaments, such as spiral arms, have been identified at scales below 0.1 parsecs in the vicinity of high-mass protostars Liu2015-804; Maud2017-467; Izquierdo2018-478; Chen2020-4; Sanhueza2021-915. As proposed by Zhou2022-514, self-similar hub-filament structures and filamentary accretion seem to exist across scales, ranging from several thousand astronomical units to several parsecs, within high-mass star-forming regions. This paradigm of hierarchical/multi-scale hub-filament structures was generalized from clump- core scale to cloud-clump scale in Zhou2023-676. Hierarchical collapse and hub-filaments structures feeding the central regions are also described in previous works, see Motte2018-56; Vazquez2019-490; Kumar2020-642 and references therein. Kinematically, the results in Zhou2023-676 also reveal the multi-scale hub- filament structures in the G333 molecular cloud complex. The G333 complex exhibits prevalent kinematic characteristics consistent with hub-filament systems. Notably, the intensity peaks, acting as hubs, correlate with converging velocities, indicating that the surrounding gas flows are converging to dense structures. Specifically, there is a discernible increase in velocity gradients at smaller scales. The filaments in the Large APEX sub- Millimeter Array (LAsMA) and the Atacama Large Millimeter/submillimeter Array (ALMA) observations show clear velocity gradients. The velocity gradients fitted using the LAsMA and ALMA data exhibit consistent agreement over the range of scales covered by ALMA observations in the ATOMS survey ($\textless$ 5 pc). In Zhou2023-676, larger scale gas motions were investigated (the longest filament $\sim$50 pc), yet similar results were obtained compared to small-scale ALMA observations. Interestingly, the variations in velocity gradients measured at distinct scales—small scales ($\textless$ 1 pc), medium scales ($\sim$ 1-7.5 pc), and large scales ($\textgreater$ 7.5 pc)—align with expectations from gravitational free-fall with central masses of $\sim$ 500 M⊙, $\sim$ 5000 M⊙ and $\sim$ 50000 M⊙ for the respective scales. This implies that sustaining velocity gradients on larger scales necessitates higher masses. The association of higher masses with larger scales suggests that the inflow on a larger scale is driven by the larger-scale structure which may be the gravitational clustering of smaller-scale structures. This observation aligns with the hierarchical nature commonly found in molecular clouds and the gas inflow from large to small scales. The large-scale gas inflow is likely driven by gravity, indicating a state of global gravitational collapse within the molecular clouds in the G333 complex. This supports the argument that these molecular clouds serve as cloud-scale hub-filament structures. 
The change in velocity gradients with the scale in the G333 complex indicates that the morphology of the velocity field in position-position-velocity (PPV) space resembles a ”funnel” structure. The funnel structure can be elucidated as the acceleration of material converging towards the central hub and the gravitational contraction of star-forming clouds or clumps. Large-scale velocity gradients always associate with many intensity peaks, and the larger- scale inflow is driven by the larger-scale structure, indicating that the clustering of smaller-scale gravitational structures locally can serve as the gravitational center on a larger scale. In a way, the funnel structure provides insight into the gravitational potential well shaped by this clustering. In previous work, we have investigated gas dynamics and kinematics in molecular clouds from cloud to core scales. The main task of this work is to generalize the above physical pictures to galaxy-cloud scales, now the smallest unit is the molecular cloud itself. In Zhou2022-514, dense core is the hub in clump, in Zhou2023-676, clump is the hub in molecular cloud, here we treat molecular cloud as the hub in galaxy. Self-similar or hierarchical/multi-scale hub-filament structures and filamentary accretion feeding the central regions exist from molecular cloud to dense core scales Motte2018-56; Vazquez2019-490; Kumar2020-642; Zhou2022-514; Zhou2023-676, this picture will be extended to galaxy-cloud scales in this work. ## 2 Data and target We select two face-on spiral galaxies NGC5236 (M83) and NGC4321 (M100) from the PHANGS-ALMA survey. We use the combined 12m+7m+TP PHANGS-ALMA CO (2$-$1) data cubes to investigate gas kinematics and dynamics in the two galaxies, which have a spectral resolution of 2.5 km s-1 and angular resolutions $\sim$2.1 ′′ and $\sim$1.7 ′′, corresponding to linear resolutions $\sim$51 pc and $\sim$123 pc for NGC5236 and NGC4321 at the distances 4.89 Mpc and 15.21 Mpc Leroy2021-257; Anand2021-501, respectively. The field-of-view (FOV) of NGC5236 and NGC4321 in the ALMA observations are $\sim$ 10.5 Kpc $\times$ 10.8 Kpc and $\sim$ 17.1 Kpc $\times$ 15.5 Kpc, respectively. An overview of the PHANGS-ALMA survey’s science goals, sample selection, observation strategy, and data products is described in Leroy2021-257. A detailed description of data calibration, imaging, and product creation procedures is presented in Leroy2021-255. The PHANGS-ALMA CO (2$-$1) data cubes and other data products (such as Moment maps) are available from the PHANGS team website 111https://sites.google.com/view/phangs/home. NGC5236 and NGC4321 were selected for the following reasons: 1\. They are face-on galaxies. The inclination angles of NGC5236 and NGC4321 are 24o and 38.5o Lang2020-897, respectively. Thus, we can ignore the projection effect in velocity gradient fitting in Sec.3.3; 2\. They have strong CO (2$-$1) emission, presenting large-scale continuous structures (strong spiral arms), thus we can trace galaxy-scale gas motions; 3\. NGC5236 has similar morphology to NGC4321, but it is about 3 times closer than NGC4321, thus we can resolve more detailed structures. ## 3 Results ### 3.1 Velocity component Figure 1: An overview of gas kinematics in NGC4321 and NGC5236. 
(a) and (d) The velocity distribution of the entire galaxies in PPV space decomposed by GAUSSPY+; (b) and (e) The main filaments of NGC4321 and NGC5236 identified by the FILFINDER algorithm overlaid on the Moment 0 maps; (c) and (f) The integrated intensity distributions extracted from the Moment 0 maps along the main filaments in panels (b) and (e); (c2) and (f2) The velocity distributions extracted from the Moment 1 maps along the main filaments in panels (b) and (e); (c1) and (f1) The velocity residual distributions along the main filaments in panels (b) and (e) after subtracting the large-scale velocity gradients due to the galaxy rotation.

Before studying gas kinematics, we need to ascertain the distribution of gas components in the galaxy. We utilized the fully automated Gaussian decomposer GAUSSPY+ Lindner2015-149; Riener2019-628, designed to break down intricate spectra of molecular lines into multiple Gaussian components. The parameter settings for the decomposition align with those employed by Zhou2023-676. For the CO (2$-$1) data cube of NGC4321, 50071 spectra were fitted by GAUSSPY+; 98% of the spectra have a single component and 2% have two velocity components. For NGC5236, 323059 spectra were fitted; 87% of the spectra show a single component and 11.5% have two components. The reduced $\chi^{2}$ values output in the fitting of NGC4321 and NGC5236 are around 1, indicating good fits for both galaxies. Generally, multiple velocity components ($>$1 velocity component) are mainly concentrated in the central region of the galaxy. In this work, we mainly focus on the gas kinematics in the spiral arms. Therefore, the analysis can be carried out based on the Moment maps. In Fig.1, we can see ubiquitous velocity fluctuations, which may be attributed to gravitational instability, as discussed below. In the central region of the galaxy, there are almost no velocity fluctuations; clear velocity fluctuations are mainly distributed along the spiral arms, revealing the interconnection between velocity fluctuations and star formation activity.

### 3.2 Scaling relations

#### 3.2.1 Dendrogram

Figure 2: Dendrogram structures. (a) Leaves; (b) Branches. Red and orange ellipses represent type1 and type3 structures, respectively.

Figure 3: Same as Fig.2, but for NGC5236.

Table 1: Dendrogram structures in NGC 4321 and NGC 5236.

| leaves | branches | type1 | type3 | total
---|---|---|---|---|---
NGC 4321 | 583 | 270 | 828 | 25 | 853
NGC 5236 | 1722 | 859 | 2339 | 242 | 2581

The issues of identifying structures in a PPV cube using the dendrogram algorithm are discussed in Zhou2024-682-128. Consequently, we employed the same approach as outlined in Zhou2024-682-128 and Zhou2024-682-173 to identify dense gas structures. We conducted a direct identification of hierarchical (sub-)structures based on the 2D integrated intensity (Moment 0) map of CO (2$-$1) emission. Subsequently, we extracted the average spectrum of each identified structure to delve into its velocity components and gas kinematics. All the retained structures on the strictly masked Moment 0 map are reliable; we therefore only require the smallest area of an identified structure to be larger than 1 beam and do not set other parameters in the algorithm, in order to reduce the dependence of the identification on the parameter settings. The dendrogram algorithm decomposes the intensity data (Moment 0 map) into hierarchical structures. As illustrated in Fig.1 of Zhou2024-682-128, all identified structures can be divided into three categories, i.e.
i-leaves (isolated leaves), c-leaves (clustered leaves) and branches. In this work, we merged c-leaves and i-leaves. Finally, there are two kinds of structures, i.e. leaf and branch structures. In Fig.2 and Fig.3, the structures identified by the dendrogram algorithm exhibit a strong correspondence with the background integrated intensity maps. The algorithm characterizes the morphology of each structure by approximating it as an ellipse. Within the dendrogram, the rms sizes (second moments) of the intensity distribution along the two spatial dimensions define the long and short axes of the ellipse, denoted as $a$ and $b$. As described in Zhou2024-682-128, a smaller ellipse is obtained with $a$ and $b$, necessitating a multiplication by a factor of two to appropriately enlarge the ellipse. Then the effective physical radius of an ellipse is $R\rm_{eff}$ =$\sqrt{2a\times 2b}*D$, where $D$ is the distance of the galaxy. For a structure with the area $A$, the total integrated intensity of the structure is $I_{\rm CO}$, then the mass of the structure can be calculated by $M=\alpha^{2-1}_{\rm CO}\times I_{\rm CO}\times A,$ (1) here $\alpha^{2-1}_{\rm CO}\approx 6.7\rm M_{\odot}\rm pc^{-2}(\rm K*kms^{-1})^{-1}$ Leroy2022-927. The large-scale velocity gradients due to the galaxy rotation will contribute to the non-thermal velocity dispersion. Before extracting the average spectra of the identified structures, we subtracted the large-scale velocity gradients in PPV data cube based on the constructed gas dynamical model in Sec.3.3.3. According to their averaged spectra, the structures with absorption features were eliminated firstly. Following Zhou2024-682-173, we only consider type1 (single velocity component) and type3 (blended velocity components) structures in this work. As detailed in Zhou2024-682-128, two screening criteria were applied for structure refinement: (1) Removal of repetitive branch structures; and (2) Exclusion of branch structures exhibiting complex morphology. Finally, there are 853 and 2581 retained structures for NGC4321 and NGC5236, respectively, as listed in Table 1 and marked in Fig.2 and Fig.3 by ellipses. NGC5236 is $\sim$3 times nearer than NGC4321, the total number of the identified structures in NGC5236 is also $\sim$3 times more than that in NGC4321. #### 3.2.2 Scaling relations Figure 4: Scaling relations of leaf and branch structures of NGC 4321 and NGC 5236. (a)-(c) and (a1)-(c1) $\sigma-N*R$; (d)-(f) and (d1)-(f1) $\sigma-N$; (g)-(i) and (g1)-(i1) $\sigma-R$. $\sigma$, $R$ and $N$ are the velocity dispersion, effective radius, and column density of each structure, respectively. Here, “P” represents the Pearson coefficient. Dashed vertical line marks the boundary of two fittings. The scaling relations provide insights into the physical states of the structures. Although the identified structures in NGC5236 are much more than that in NGC4321, the scaling relations in Fig.4 are comparable. Due to the further distance, some faint structures were filtered out in NGC4321, which can be seen in NGC5236. In Fig.4, the faint or low-density structures produce dispersive tails, similar to the Fig.14 of Zhou2024-682-128 for the G333 molecular cloud complex in the Milky Way. In Fig.4(g)-(i), it appears that only branch structures demonstrate a discernible correlation between velocity dispersion and scale. 
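For reference, the sizes and masses entering these scaling relations involve only simple bookkeeping. The sketch below illustrates $R_{\rm eff}$ and Eq. (1); taking the rms sizes in arcsec and $I_{\rm CO}$ as the structure's mean integrated intensity (K km s$^{-1}$) are our assumptions about the conventions, not statements from the data products.

```python
import numpy as np

ALPHA_CO21 = 6.7  # Msun pc^-2 (K km s^-1)^-1 (Leroy et al. 2022)

def effective_radius_pc(a_arcsec, b_arcsec, D_Mpc):
    """R_eff = sqrt(2a * 2b) * D, with a, b the rms sizes of the dendrogram ellipse."""
    to_pc = D_Mpc * 1.0e6 / 206265.0          # pc per arcsec at distance D
    return np.sqrt((2.0 * a_arcsec * to_pc) * (2.0 * b_arcsec * to_pc))

def structure_mass_msun(I_co_mean, area_pc2):
    """Eq. (1): M = alpha_CO^(2-1) * I_CO * A."""
    return ALPHA_CO21 * I_co_mean * area_pc2

# e.g. a structure with a = b = 2", mean I_CO = 10 K km/s, at the distance of NGC5236
R = effective_radius_pc(2.0, 2.0, 4.89)       # ~95 pc
M = structure_mass_msun(10.0, np.pi * R**2)   # ~1.9e6 Msun
```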
Clumps or cores characterized by high column density exhibit greater velocity dispersion relative to those with lower column density, a phenomenon attributed to gas motions associated with gravitational collapse Ballesteros2011-411; Traficante2018-473; Ballesteros2018-479; Li2023-949; Zhou2024-682-128. As depicted in Fig.4(d)-(f), the positive correlations observed between velocity dispersion and column density suggest a gravitational origin for the velocity dispersion. In the context of pure free-fall, an anticipated $\sigma-N*R$ relation would exhibit a slope of 0.5. For a more convenient comparison with $\sigma-R$ and $\sigma-N$ relations, we transform the Heyer relation, $\sigma/R^{0.5}\propto N^{0.5}$, to the form $\sigma\propto(R*N)^{0.5}$ (Ballesteros2011-411, Eq. 3 in). Both relations should ideally have a slope of 0.5. However, in Fig.4(a)-(c), the slopes of the $\sigma-N*R$ relations are notably shallower than 0.5, suggesting a deceleration from the expected behavior of pure free-fall gravitational collapse. ### 3.3 Velocity gradient The gravitational collapse of the structures has been revealed by the scaling relations. This section gives a more detailed discussion of the gravitational collapse from the perspective of the velocity gradient. #### 3.3.1 Identification of filaments The FILFINDER algorithm Koch2015 was used to identify and characterize filaments in the galaxy. Following the method described in Zhou2022-514 and Zhou2023-676, we used the Moment 0 maps of CO (2$-$1) emission from the data products of the PHANGS-ALMA survey to identify filaments. Fig.1(b) and (e) display the skeletons of the identified filaments superimposed on the Moment 0 maps. Notably, these skeletons exhibit a high degree of agreement with the gas structures traced by CO (2$-$1) emission. In this work, we only consider the filaments along the spiral arms (the main filaments), where have strong CO (2$-$1) emission, which allows us to study large-scale continuous gas motions in the galaxy. Almost all algorithms have the issue of parameter settings, thus we try not to discuss the identified filament itself in this work, such as its length and width. Actually, we mainly regard the FILFINDER algorithm as a tool to draw lines, these lines can make connections between discrete dense structures, or put dense structures in gas networks. Then by extracting the velocity information along these lines, we can study how the surrounding gas converges to dense gas structures (gravitational centers). In Zhou2022-514 and Zhou2023-676, the filaments traverse multiple local hub-filament structures, characterized by fluctuations in velocity and density along the filaments. These local dense structures, acting as gravitational centers, facilitate the accretion of surrounding diffuse gas, culminating in the formation of local hub-filament structures. In the PPV space, a hub-filament structure can be represented as a funnel structure, as demonstrated in Fig.9 of Zhou2023-676. The gradient of the funnel profile serves as an indicator of the gravitational field’s strength. Consequently, examining the local velocity gradients along the filaments provides insights into the strength of local gravitational fields. In Fig.1, although the velocity fluctuations at small scales can be seen everywhere, they are hidden in global large-scale velocity gradients due to the galaxy rotation. To derive the local velocity fluctuations, we must first subtract the large-scale velocity gradients. We take two methods to solve this issue. 
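Whichever way the rotation signature is removed (the two methods described in the next subsections), the downstream measurement is the same: sample intensity and velocity along the skeleton, detrend each segment, and fit straight lines to the residual velocity around intensity peaks. A minimal numpy sketch follows; `path` (an ordered list of skeleton pixel coordinates), the moment maps, and `pc_per_pixel` are assumed inputs, and the window size is illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def profiles_along(path, mom0, mom1):
    """Sample the moment maps along an ordered list of (row, col) skeleton pixels."""
    rows, cols = np.array(path).T
    return mom0[rows, cols], mom1[rows, cols]

def local_gradients(intensity, velocity, pc_per_pixel, window=5):
    """Detrend one filament segment and fit velocity gradients around its intensity peaks."""
    x = np.arange(velocity.size) * pc_per_pixel
    # remove the quasi-linear baseline of this segment (rotation + bulk motion)
    resid = velocity - np.polyval(np.polyfit(x, velocity, 1), x)
    grads = []
    for p in find_peaks(intensity)[0]:
        lo, hi = max(p - window, 0), min(p + window + 1, x.size)
        grads.append(abs(np.polyfit(x[lo:hi], resid[lo:hi], 1)[0]))  # km/s per pc
    return np.array(grads)
```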
#### 3.3.2 Linear fitting Figure 5: A segment of NGC5236’s main filament, used to demonstrate the velocity gradient fitting. (a) Linear fitting of the large-scale velocity gradient due to the bulk motion; (b) Velocity field of the segment in panel (a) after subtracting the fitted large-scale velocity gradient. Local velocity gradients are fitted in the ranges defined by the red vertical dashed lines in panel (c), and straight lines show the linear fitting results. The color- coding of straight lines are meaningless, but it is convenient for the reader; (c) Blue and red dotted lines show the normalized velocity and intensity, respectively. Orange box marks a gravitational coupling structure. Fig.1(c2)&(c) and (f2)&(f) show the intensity-weighted velocity (Moment 1) and integrated intensity (Moment 0) of CO (2$-$1) line emission along the main filaments with intense velocity and density fluctuations. A segment of NGC5236’s main filament was used to demonstrate the velocity gradient fitting in Fig.5. Here, we eliminated the large-scale velocity gradient by employing a simple linear function for modeling. Subsequently, we subtracted this model from the velocity field obtained from the Moment 1 map along the main filament. We divided the main filaments into many segments to ensure that each segment is as straight as possible, thus increasing the accuracy of the linear fit, as shown in Fig.5(a). The fitted straight line can be the baseline, then we subtracted the baseline for each segment to leave only the residual local velocity fluctuations. The good correspondence between velocity gradients and intensity peaks in Fig.5(b) indicates that the surrounding gases are converging to the center of the gravitational potential well. For each segment of the main filament, we firstly estimated the global velocity gradients between velocity peaks and valleys at two sides of the intensity peaks (representing gravitational centers), and ignored local velocity fluctuations. Then we also derived additional velocity gradients over smaller distances around the local intensity peaks, as illustrated in Fig.5(b). Generally, large-scale velocity gradients are associated with multiple intensity peaks. Given that NGC4321 and NGC5236 are good face-on galaxies, here we ignored the projection effect in velocity gradient fitting. #### 3.3.3 Gas dynamical model Figure 6: (a) The velocity field (Moment 1) of NGC 4321 in observation; (b) Gas dynamical model created by the Kinematic Molecular Simulation (KinMS) package; (c) Velocity fluctuations after removing the created model. As a comparison, we also removed the bulk motion or the galaxy rotation by creating gas dynamical models using the Kinematic Molecular Simulation (KinMS) package of Davis2013-429 based on the Moment-1 map, as shown in Fig.6(b). Then we used the Moment 1 map to subtract the created model, and obtained the velocity residual map. Finally, we extracted the velocity fluctuations from the velocity residual map rather than the Moment 1 map. The subsequent velocity gradient fitting is the same with Sec.3.3.2. #### 3.3.4 Statistical analysis of the fitted velocity gradients Figure 7: (a) The correlation between the fitted velocity gradient and the scale; (b) The mass distribution of the identified structures. Figure 8: Statistical analysis of all fitted velocity gradients. (a) Velocity gradient vs. the length. The color lines show the freefall velocity gradients for comparison. 
For the freefall model, red, magenta, green, cyan, blue and black lines denote masses of 105 M⊙, 105.5 M⊙, 106 M⊙ , 106.5 M⊙, 107 M⊙ and 107.5 M⊙, respectively. Panels (b), (c), and (d): Zoomed maps with lengths $\textless$ 500 pc (small scale), $\sim$ 500 – 1500 pc (medium scale), and $\textgreater$ 1500 pc (large scale) in panel (a). In Sec.3.3.2 and Sec.3.3.3, the galaxy centers were excluded in the fitting. As shown in Fig.7, two methods give consistent fitting results. In Fig.8, we can find the same gas kinematic modes presented in Zhou2023-676 and Zhou2024-682-173. The variations in velocity gradients at both small and large scales (with a boundary around 500 pc) align with expectations from gravitational free-fall, with central masses of $\sim$ 105–106.5 M⊙ and $\sim$ 106–107.5 M⊙. This implies that sustaining velocity gradients on larger scales requires correspondingly larger masses, and larger masses imply larger scales, suggesting that the larger-scale inflow is driven by the larger-scale structure which may arise from the gravitational clustering of smaller-scale structures, in harmony with the presence of hierarchical or multi-scale hub- filament structures within the galaxy and the gas inflows from large to small scales. In the orange box marked in Fig.5(b) and (c), multiple peaks are coupled together to form a gravitational potential well on larger scale, and each peak itself is also a local gravitational potential well. In Fig.7(a), almost all measured velocity gradients can be fitted in the mass range $\sim$ 105–107 M⊙, which is consistent with the mass distribution of the identified structures shown in Fig.7(b), indicating that local dense structures and their complex as gravitational centers will accrete the surrounding diffuse gas and then produce the observed velocity gradients at different scales. #### 3.3.5 Deviation of the free-fall model We can do a deeper analysis for the fitted velocity gradients with a simple model. Assuming free-fall, $\nabla v=-\frac{d}{dR}\sqrt{\frac{2GM}{R}}=\sqrt{\frac{GM}{2R^{3}}},$ (2) in this equation, velocity gradient is more sensitive to scale. Thus we only fit the correlation between velocity gradient and scale in Fig.7(a). In equation.2, $\nabla v\propto R^{-1.5}$, however, the linear fitting in Fig.7(a) gives $\nabla v\propto R^{-0.8}$, thus $\sim$2 times smaller than the slope of the free-fall model, also indicating the slowing down of a pure free- fall gravitational collapse presented in Sec.3.2.2. ## 4 Discussion Gas kinematics on galaxy-cloud scales clearly deviate from the free-fall model. The deviation may come from measurement or calculation biases. We only measured the projection of the realistic velocity vector and hence acceleration, thus tends to underestimate the observed acceleration. Moreover, we only considered the molecular line CO (2$-$1), which only traces part of the baryonic matters in the galaxy, thus underestimated the mass of the gravitational center. The shallower slope in Fig.7(a) may have physical correlation with the flat rotation curve of the galaxy. Gas motions on galaxy-cloud scales may still couple with the galactic potential, a comprehensive model should account for the interplay between motions within the galactic potential and the self- gravitational potential of the cloud. Dobbs2013-432; Meidt2018-854; Meidt2020-892; Utreras2020-892. In the model proposed by Meidt2020-892, the transition to free-fall collapse can only happen once the gas has completely decoupled from the galactic potential. 
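(For orientation, the free-fall gradients of Eq. (2), against which the fitted values in Fig. 7(a) and Fig. 8 are compared, can be evaluated directly; a minimal sketch using astropy units, with the example mass and scales chosen only for illustration:)

```python
import numpy as np
from astropy import units as u
from astropy.constants import G

def freefall_gradient(M_msun, R_pc):
    """Eq. (2): nabla v = sqrt(G M / (2 R^3)), returned in km/s per pc."""
    grad = np.sqrt(G * (M_msun * u.Msun) / (2.0 * (R_pc * u.pc) ** 3))
    return grad.to(u.km / u.s / u.pc)

# e.g. a 1e6 Msun gravitational centre probed at 100 pc and at 1 kpc
print(freefall_gradient(1.0e6, 100.0))    # ~0.046 km/s per pc
print(freefall_gradient(1.0e6, 1000.0))   # ~0.0015, smaller by 10^1.5, i.e. the R^-1.5 scaling
```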
However, it is not clear down to which spatial scales gas motions remain dynamically relevant to galactic potential or start decoupling with galactic potential. Tidal forces have been advocated in previous works to restrict or trigger star formation. Previous studies by Ballesteros2009-393; Ballesteros2009-395 have shown the impact of tidal forces arising from an effective Galactic potential on molecular clouds. These forces have the potential to either compress or disrupt molecular clouds, influencing the overall star formation efficiency. Thilliez2014-31 examined the stability of molecular clouds within the Large Magellanic Cloud (LMC) against shear and the galactic tide. However, their findings indicate that star formation in the LMC is not impeded by either tidal or shear instability. Moreover, in Ramirez2022-515, tidal stresses from neighbouring molecular cloud complexes may increase interstellar turbulence, rather than the galactic potential. Gravity is a long-range force. A local dense structure evolves under its self- gravity, but as a gravitational center, its influence can also affect neighboring structures. At the same time, it also experiences the gravitational pull from other nearby sources. The tidal and gravitational fields are mutually interdependent. As described above, the hierarchical/multi-scale gravitational coupling of gas structures means the extensive tidal interactions in the galaxy. Whether the galactic potential has effect on molecular clouds and their complexes or not, molecular clouds should be affected by the cumulative tidal interactions of many nearby materials, which may prevent gravitational collapse and growth of instabilities or star formation in the cloud. Due to the diffuse and complex morphology of matter distribution in the galaxy, a complete tidal calculation would be very complex. One should derive the gravitational potential distribution from the observed density distribution and then calculate the tidal field according to the gravitational potential distribution, as presented in Li2024-528. Diverse manifestations of gravitational effects on gas within molecular clouds were unveiled in Li2024-528: Dense regions experience gravitational collapse, while the surrounding gas is subject to significant tidal forces, suppressing fragmentation. This gas, influenced by extensive tides, is directed towards the dense regions, serving as fuel for star formation. The spatial distribution of regions experiencing varying tidal influences elucidates the observed hierarchical and localized pattern of star formation. Similar mechanisms may also exist on galaxy-cloud scales, we will discuss this topic in detail in future work. Figure 9: The average number density distribution of the identified structures. In addition to the factors discussed above, magnetic fields may have a significant impact on the gas kinematics of molecular clouds as suggested by simulations Kim2021-911; Seifried2020-497; Ibanez2022-925; Ganguly2023-525 and observations Crutcher2010-725; Li2011-479; Crutcher2012-50; Stephens2022-926; Ngoc2023-953; Rawat2024-528. Especially, in simulations, the diffuse gas with a number density ($n$) less than 100 cm-3 in the envelopes of molecular clouds may be upheld against gravitational collapse due to magnetic support Ibanez2022-925; Ganguly2023-525. 
Assuming spherical geometry for the identified clouds, the average number density can be calculated as

$\overline{n}=M/(\frac{4}{3}\pi R_{\rm eff}^{3})/(\mu m_{\rm H}),$ (3)

where $\mu$ = 2.37 is the mean molecular weight per ‘free particle’ (H2 and He; the number of metal particles is negligible) and $m_{\rm H}$ is the atomic hydrogen mass. As shown in Fig.9, almost all of the structures have average number densities $<$ 100 cm$^{-3}$. However, the real cloud shape is likely more sheetlike than spherical Shetty2006-647; Inutsuka2015-580; Arzoumanian2018-70; Kohno2021-73; Arzoumanian2022-660; Rezaei2022-930; Zhou2023-519; Zhou2023-676; Clarke2023-519; Ganguly2023-525. Therefore, the average number density of the clouds is probably significantly underestimated. Even if the average number density estimates differ by an order of magnitude, the picture suggested by Fig.9 remains plausible. The supportive role of the magnetic field may be a significant factor contributing to the deviation from free-fall motion, and further detailed investigation is warranted in future research.

## 5 Summary

We investigated the kinematics and dynamics of gas structures on galaxy-cloud scales in two spiral galaxies, NGC5236 (M83) and NGC4321 (M100), using the CO (2$-$1) line. The main conclusions are as follows:

1\. We directly identified hierarchical (sub-)structures according to the 2D integrated intensity (Moment 0) map of CO (2$-$1) emission. Subsequently, we extracted the average spectrum for each structure, delving into its velocity components and gas kinematics. Considering that the large-scale velocity gradients due to the galaxy rotation contribute to the non-thermal velocity dispersion, before extracting the average spectra of the identified structures we subtracted the large-scale velocity gradients in the PPV data cube based on the constructed gas dynamical model.

2\. In examining the scaling relations among velocity dispersion ($\sigma$), effective radius ($R$), and column density ($N$) across all structures, it becomes evident that $\sigma-N*R$ consistently exhibits a stronger correlation than $\sigma-N$ and $\sigma-R$. The observed correlations between velocity dispersion and column density suggest a potential link to gravitational collapse, corroborated by the measured velocity gradients. However, it is noteworthy that the slopes of the $\sigma-N*R$ relations are significantly shallower than the anticipated value of 0.5, implying a deceleration relative to the characteristic behavior of a pure free-fall gravitational collapse.

3\. We employed the FILFINDER algorithm to identify and characterize filaments within the galaxies using integrated intensity maps. Observable velocity and density fluctuations along these filaments enabled us to fit local velocity gradients around intensity peaks, a process performed after removing the global large-scale velocity gradients attributed to the galaxy's rotation.

4\. Statistical analysis of the fitted velocity gradients on galaxy-cloud scales shows the same gas kinematic modes presented on cloud-clump and clump-core scales. The variations in velocity gradients at both small and large scales (with a boundary around 500 pc) align with expectations from gravitational free-fall, with central masses of $\sim 10^{5}$–$10^{6.5}$ M⊙ and $\sim 10^{6}$–$10^{7.5}$ M⊙.
This implies that sustaining velocity gradients on larger scales requires correspondingly larger masses, and larger masses imply larger scales, suggesting that the larger-scale inflow is driven by the larger-scale structure which may arise from the gravitational clustering of smaller-scale structures, in harmony with the presence of hierarchical or multi-scale hub- filament structures within the galaxy and the gas inflows from large to small scales. 5\. In free-fall model, the velocity gradient and scale satisfy $\nabla v\propto R^{-1.5}$. However, in the observation, $\nabla v\propto R^{-0.8}$, also indicating the slowing down of a pure free-fall gravitational collapse. J. W. Zhou thanks V. Kalinova for the comments. It is a pleasure to thank the PHANGS team, the data cubes and other data products shared by the team make this work can be carried out easily. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.01161.S, ADS/JAO.ALMA#2015.1.00121.S, ADS/JAO.ALMA#2015.1.00956.S, ADS/JAO.ALMA#2016.1.00386.S, ADS/JAO.ALMA#2017.1.00886.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSTC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. ## Data Availability The PHANGS-ALMA CO (2$-$1) data cubes and other data products (such as moment maps) are available from the PHANGS team website 222https://sites.google.com/view/phangs/home.
# Text-Driven Stylization of Video Objects

Sebastian Loeschcke1 Serge Belongie2 Sagie Benaim2

1Aarhus University, 2University of Copenhagen

###### Abstract We tackle the task of stylizing video objects in an intuitive and semantic manner following a user-specified text prompt. This is a challenging task as the resulting video must satisfy multiple properties: (1) it has to be temporally consistent and avoid jittering or similar artifacts, (2) the resulting stylization must preserve both the global semantics of the object and its fine-grained details, and (3) it must adhere to the user-specified text prompt. To this end, our method stylizes an object in a video according to two target texts. The first target text prompt describes the global semantics and the second target text prompt describes the local semantics. To modify the style of an object, we harness the representational power of CLIP to get a similarity score between (1) the local target text and a set of local stylized views, and (2) a global target text and a set of stylized global views. We use a pretrained atlas decomposition network to propagate the edits in a temporally consistent manner. We demonstrate that our method can generate consistent style changes over time for a variety of objects and videos that adhere to the specification of the target texts. We also show how varying the specificity of the target texts and augmenting the texts with a set of prefixes results in stylizations with different levels of detail. Full results are given on our project webpage: https://sloeschcke.github.io/Text-Driven-Stylization-of-Video-Objects/.

###### Keywords: Video Editing, Text-Guided Stylization, CLIP

## 1 Introduction

Manipulating semantic object entities in videos using human instructions requires skilled workers with domain knowledge. We seek to eliminate these requirements by specifying a desired edit or stylization through an easy, intuitive, and semantic user instruction in the form of a text prompt. However, manipulating video content semantically is a challenging task. One challenge is generating consistent content or style changes over time that adhere to the target text specification. Another challenge is to manipulate the content of an object such that it preserves the content of the original video and the global semantics while also adhering to fine-grained details in the target text.

Figure 1: Two representative frames from the input video and from the edited videos, together with the global target texts “Swan made out of cactus” and “Swan with crocodile skin.”

In recent years, advances in computational methods emerged that enable manipulation of appearances and style in images and allow novice users to perform realistic image editing. These methods include manipulation tools that use natural language (text prompts) to express the desired stylization of images or 3D objects [18, 6]. The text-driven manipulation is facilitated by recent developments in models for joint embeddings of text and images, e.g., the Contrastive Language Image Pretraining (CLIP [23]) model. Instead of manipulating images or 3D objects, we use CLIP in the context of video manipulation. This is not a straightforward task since simply maximizing the semantic (CLIP-based) similarity between a valid target text and each 2D frame in the video often leads to degenerate solutions.
Also, applying methods for image manipulation to each frame in a video results in edits that lack temporal consistency. The recently introduced Neural Layered Atlases (NLA) work [14] demonstrates the ability to separate a moving object in a video from its background by decomposing the video into a set of 2D atlases. Each atlas provides a unified image representation of an object or background over the video. Edits applied to the image representation are automatically mapped back to the video in a temporally consistent manner. However, editing an image still requires manual effort and editing skills from the user. Another problem with this approach is that the 2D atlas representation can be hard to edit due to local deformations. We propose a method for performing intuitive and consistent video editing with multiple capabilities by using the representational power of CLIP to express a desired edit through a text prompt. An example could be to change the style of a swan swimming in a lake according to a target text: “A swan with cactus skin.” Text is easily modifiable and allows users to express complex and abstract stylizations intuitively. Using text to express edits reduces the need for manual editing skills and also avoids the problems related to deformation in the 2D atlas representations. An illustration is shown in Fig. 1. To apply temporally consistent edits to an object in a video, our method uses the atlas decomposition method presented in NLA [14]. We train a generator on a single input video by sampling local and global views of each frame in the video and applying various augmentations to each view. Our method uses a global loss that compares each global view with a global target text and a local loss that compares each local view with a local target text. The global loss thus focuses on the global semantics and the local loss focuses on the fine-grained details. To regularize our learning, we use a sparsity loss that encourages sparse representations and a temporal triplet loss that encourages frames that are close in time to be similar in CLIP’s embedding space. We demonstrate that our method results in natural and consistent stylizations of objects for a diverse set of videos and target texts. We show how varying the specificity of both the local and global target texts varies the stylization and how augmenting the target texts with neutral prefixes can result in more detailed stylizations. We also demonstrate that our global loss focuses on the global semantics and the local losses on the fine-grained details.

## 2 Related Work

Our work is related to video editing works and also to text-based stylization works.

### 2.1 Video editing

Unlike images, editing or stylizing objects in videos requires the ability to handle temporal consistency. One natural approach is to propagate the edits from one frame to the next. Video Propagation Networks [10] use a bilateral network to connect pixels in consecutive frames and an adaptation network to refine the pixels. Other approaches use optical flow to propagate edits made on a few key-frames [28]. These approaches work well when there is a clear correspondence between frames, but have difficulties, e.g., when the video contains occlusions. To address occlusion challenges, recent work has used deep learning approaches, e.g. self-supervised methods for learning visual correspondence from unlabeled videos [9, 29].
Instead, our work uses the representation proposed by Neural Layered Atlases (NLA) [14], which decomposes a video into a set of layered 2D atlases. Each atlas provides a unified representation of the appearance of an object or background throughout the video. Similar to NLA, Deformable Sprites [30] decomposes a video into a texture atlas that captures an object’s motion across the entire video. Their method allows for temporally consistent video edits by applying edits to the decomposed atlas. In contrast to NLA, they use optical flow to compute foreground objects instead of a pretrained segmentation network to get the segmentation mask for an object in the input video. However, both NLA and Deformable Sprites only allow for basic manual editing. We use NLA’s atlas separation method for objects in videos, but unlike NLA, we allow for text-driven stylization.

### 2.2 Text-based stylization

Our work bears similarities to recent image and 3D manipulation techniques that edit style and appearances through natural language descriptions. These descriptions are often embedded with the Contrastive Language Image Pretraining (CLIP) [23] model, a multi-modal embedding model that learns an image-text embedding space. Recent work used CLIP together with pretrained generative networks for image editing and stylization [1, 21, 3, 5, 6, 8, 15]. For example, StyleCLIP [21] and StyleGAN-NADA [8] both use a pretrained StyleGAN [12] and CLIP to perform image editing, either by using CLIP to control a latent code or to adapt an image generator to a specific domain [8]. Other examples include using StyleGAN and CLIP for image stylization [5] or Paint by Word [3] which uses CLIP paired with StyleGAN2 [13] and BigGAN [4] to enable “painting” images in the style of a text prompt. Usually, pretrained generators only work well for the specific domain they are trained on. In contrast, our method does not require a pretrained generator. We train our own generator on the set of video frames we wish to stylize. Other examples of semantic text-guided manipulation in the context of 3D objects include 3DStyleNet [31], a method for changing the geometric and texture style of 3D objects, ClipMatrix [11] which uses text-prompts to create digital 3D creatures, and methods for generating 3D voxels using CLIP [27]. Another line of recent work uses joint-embedding architectures [26, 25, 24, 19] for image generation, e.g., DALL-E [25] and its successor, DALL-E 2 [24], which can also be used for stylizing images. DALL-E 2 uses a two-stage model, where a CLIP image embedding is generated using a text prompt and a decoder is then used to generate an image conditioned on the generated image embedding. Training joint embedding architectures requires enormous datasets and many training hours. Instead of training on a large dataset, we train on a set of frames for a single video and use augmentations to extract many different views of the input frames. As opposed to all the above-mentioned techniques, we work on videos. Another line of work uses CLIP without relying on a pretrained generator, e.g. given a natural language input, CLIPdraw [7] synthesizes novel drawings, and CLIPstyler [17] stylizes images. Similarly, Text2Mesh [18] does not rely on a pretrained generator. In Text2Mesh, the CLIP embedding space is used to enable text-driven editing of 3D meshes. Text2Mesh uses a multi-layer perceptron (MLP) to apply a stylization to $(x,y,z)$-coordinates of an input mesh.
The neural optimization process of the MLP is guided by a semantic loss that computes the similarity between multiple augmented 2D views embedded with CLIP and a target text. Similarly, we do not rely on a pretrained generator. Our work was developed concurrently with Text2Live [2], which shares many of the same goals and methods as our work. Similarly to our method, Text2Live uses a pretrained Neural Layered Atlases (NLA) model to separate a moving object in a video from its background. Text2Live trains a generator to apply text-driven local edits to a single frame and uses the NLA model to map the edits back to the input video in a temporally consistent manner. In contrast to our approach, Text2Live does not directly generate the edited output. Instead, it generates an edit layer that is composited with the original input.

## 3 Method

We wish to apply natural and temporally consistent stylizations to objects in videos using a natural language text prompt as guidance. To change the style of an object to conform with a target text prompt in a temporally consistent manner, we build on top of the Layered Neural Atlases method [14], which separates the appearance of an object in a video from its background. We then use a pre-trained text-image multimodal embedding of CLIP [23] in a set of objectives. Minimizing this set of objectives aims at matching the style of a foreground object in a video with that of a target text. The objectives include a global and a local objective. The global objective focuses on the global semantics by maximizing the similarity between the global views and a target text that relates to the underlying content. The local objective, in contrast, focuses on the fine-grained details by maximizing the similarity between the local views and a target text that relates to the local semantics of the stylization. We add a sparsity loss, similar to an $L_{1}$-regularization term, that encourages the predicted foreground color values to be minimal. Additionally, we add a temporal triplet loss that encourages the embeddings of frames that are close in time to also be close in CLIP’s embedding space. We begin by describing the method of CLIP [23] and that of Neural Layered Atlases (NLA) on which our method is based. We then describe the training and loss formulations used by our method.

### 3.1 CLIP

CLIP is a multi-modal embedding method that trains an image encoder $E_{img}$ and a text encoder $E_{txt}$ to match between the embeddings of corresponding image-text pairs using a contrastive loss formulation. This loss formulation optimizes the similarity between a corresponding image-text pair $I$ and $T$. More specifically, $I$ and $T$ are first embedded: $I_{emb}=E_{img}(I)\in\mathbb{R}^{512},\quad T_{emb}=E_{txt}(T)\in\mathbb{R}^{512}$ The similarity between $I$ and $T$ is then measured by $\operatorname{sim}(I_{emb},T_{emb})$ where $\operatorname{sim}(a,b)=\frac{a\cdot b}{|a||b|}$ is the cosine similarity.

### 3.2 Neural Layered Atlases (NLA)

Figure 2: Our stylization pipeline. In the first step, we train the network using the NLA procedure [14] to reconstruct input video frames. We then finetune the editing atlas $A$ using our approach described in Fig. 3. In our stylization pipeline, we create a set of cropped input video frames $Q_{Crop}$. This set is passed through a stylization model to create a foreground and background atlas. A set of stylized frames $Q_{Style}$ is produced by $\alpha$-blending the predicted atlases.
All weights of the MLPs are frozen except for the editing atlas MLP $A$ which is finetuned. A closer look at how our editing atlas is trained is given in Fig. 3.

Figure 3: Finetuning the editing atlas. As described in the stylization pipeline (Fig. 2), we use our stylization pipeline to get a set of stylized frames $Q_{Style}$. To this end, we finetune our editing atlas, which is part of the stylization model. We sample $n_{Global}$ global views $I^{Global}$ and $n_{Local}$ local views $I^{Local}$. The set of images $I^{Global}$ is then augmented using random perspectives and random background removal and used together with the global target text $T_{Global}$ to compute a global loss (Eq. 3). Three global augmented images are used to compute the temporal loss (Eq. 4). Similarly, the $I^{Local}$ images are augmented and used together with the local target text $T_{Local}$ to compute the local loss (Eq. 2). Lastly, a sparsity loss (Eq. 5) is computed from the stylized frames $Q_{Style}$.

Neural Layered Atlases (NLA) [14] decompose a video into a set of layered 2D atlases. Each atlas provides a unified representation of the appearance of an object or the background throughout the video. NLA uses two mapping networks $M_{f}$ and $M_{b}$, where each takes a pixel and time location $(x,y,t)$ in the video as input and outputs the corresponding 2D $(u,v)$-coordinate in each atlas: $M_{f}(p)=(u_{f}^{p},v_{f}^{p}),\quad M_{b}(p)=(u_{b}^{p},v_{b}^{p})$ An atlas network $A$ takes the predicted 2D $(u,v)$-coordinate as input and outputs the atlas’s RGB color at that location. Additionally, all pixel coordinates are fed into the Alpha MLP network $M_{\alpha}$ which outputs the opacity of each atlas at that location. The RGB color can then be reconstructed at each pixel location by alpha-blending the predicted atlas points according to the opacity value predicted by $M_{\alpha}$. NLA enables consistent video editing. First, each atlas is discretized into an image. A user can then manually apply edits using an editing program. These edits are mapped back to the input video using the computed $(u,v)$-mapping. To get the reconstructed color $c^{p}$ for pixel $p$, the predicted foreground color $c_{f}^{p}$, background color $c_{b}^{p}$ and predicted opacity value $\alpha^{p}$ of the edited atlas are blended as follows: $c^{p}=(1-\alpha^{p})c_{b}^{p}+\alpha^{p}c_{f}^{p}$

### 3.3 Our Stylization Pipeline

Instead of having to manually apply edits to the discretized atlases as in [14], we wish to apply edits automatically using a target text prompt. Specifically, we are interested in modifying the RGB values such that they conform with a target text. We focus on the Atlas MLP $A$, since $A$ makes the RGB predictions. Instead of using a single atlas MLP $A$ for all atlases, we create two copies of the atlas MLP after pre-training the NLA network, one editing atlas MLP for the foreground object we want to stylize ($A$) and one for all other atlases ($A_{b}$). We then freeze the weights of $M_{b}$, $M_{f}$, $M_{\alpha}$, $A_{b}$ and finetune the weights of $A$. While the goal in NLA is to optimize the model to produce the best reconstruction, we instead want to create a stylization that conforms with the input target text. Our stylization pipeline is illustrated in Fig. 2. The input is a set of raw frames $Q_{Raw}$. Since we know the position of the object in the video from the $\alpha$-map, we can compute a bounding box containing the whole object.
Once we have the bounding box, we crop each frame in $Q_{Raw}$ such that it only contains the content within the bounding box plus a small margin. All the cropped frames $Q_{Crop}$ are passed through a pre-trained NLA model, where all MLP weights have been frozen, except for the weights of the $A$ MLP. The NLA method produces a set of stylized frames $Q_{Style}$. To fine-tune the weights of $A$, we sample training batches in both time and space. Our sampling method is illustrated in Fig. 3. First, we sample a set $Q_{Sample}$ uniformly at random among all frames in the input video and pass them through the stylization pipeline (Fig. 2) to create a set $Q_{Style}$. For each frame in $Q_{Style}$ we sample $n_{Global}$ views $I^{Global}$ and a set of $n_{Local}$ views $I^{Local}$. Each of the $n_{Global}$ views is produced by sampling a crop with a size in the range $[0.9,1.0]$ of the original frame size. Each of the $n_{Local}$ views is produced by sampling a crop with a size in the range $[0.1,0.5]$ of the original frame size. To ensure the local views contain the object we want to stylize, we use the $\alpha$-map of the frame to determine the position of the object in the frame. We then sample until we get $n_{Local}$ views where at least $\frac{1}{3}$ of the sampled view is part of the object. Once the local and global views have been sampled, we apply a random perspective transformation and a random background removal which with some probability $p$ removes the background (details in Appendix Sec. 6.2). Additionally, each augmented frame is normalized with the same mean and standard deviation as used to train the CLIP model [23]. Our objective function is composed of three main losses defined in CLIP’s feature space: (1) $L_{\text{Local}}$ which focuses on local semantics, (2) $L_{\text{Global}}$ which focuses on global semantics, and (3) $L_{Temp}$ which encourages temporal consistency. Additionally, we use the regularization term $L_{\text{sparsity}}$ introduced in NLA [14], which encourages sparse representations. In all loss terms, when we compute the similarity between a text and each sampled view, we use the average embedding across all views: $I_{emb}^{Local}=\frac{1}{n_{Local}}\sum_{i=1}^{n_{Local}}E_{img}\left(I_{i}^{\text{Local}}\right),I_{emb}^{Global}=\frac{1}{n_{Global}}\sum_{i=1}^{n_{Global}}E_{img}\left(I_{i}^{\text{Global}}\right)$ (1)

#### Local loss

$L_{\text{Local}}$ is applied to all views in $I^{\text{Local}}$. The goal is to modify the image such that the local details conform with a target text $T_{\text{Local}}$: $L_{\text{Local}}=1-\operatorname{sim}\left(I_{emb}^{Local},E_{txt}\left(T_{\text{Local}}\right)\right)$ (2) where $\operatorname{sim}(a,b)=\frac{a\cdot b}{|a||b|}$ is the cosine similarity, and $E_{txt}$ denotes CLIP’s pre-trained text encoder. The local views have a more zoomed-in view of the object we are stylizing. Additionally, the local target text $T_{Local}$ contains local-specific semantics, e.g. “rough cactus texture.” In this way, the local loss can focus on the texture and fine-grained details of the stylization we apply to the input video.

#### Global loss

The global loss is applied to views in $I^{\text{Global}}$ that all include the entire object being stylized. The intended goal is that the global loss will preserve the overall context. In the target text $T_{\text{Global}}$, we include words that describe the global context of the object we are trying to stylize, e.g.
“A swan made of cactus.” The global loss formulation is then given by: $L_{\text{Global}}=1-\operatorname{sim}\left(I_{emb}^{Global},E_{txt}\left(T_{\text{Global}}\right)\right)$ (3)

#### Temporal loss

We use a triplet loss to include a temporal aspect and enforce that consecutive frames should be more similar in CLIP’s embedding space than frames that are further apart. To compute the temporal loss, we sample three frames $t_{1},t_{2},t_{3}$, where we have that $t_{1}<t_{2}<t_{3}$ w.r.t. the order of the frames in the input video. We then enforce that the similarity between the sampled global views of $t_{1}$ and $t_{2}$ in the CLIP embedding space should be greater than the similarity between $t_{1}$ and $t_{3}$. Let $I_{emb(t_{1})}^{Global},I_{emb(t_{2})}^{Global},I_{emb(t_{3})}^{Global}$ denote the average embedded global views (computed in Eq. 1) for each of the three frames. Then we compute the triplet loss $L_{Temp}$ as follows: $\begin{split}\operatorname{Sim}_{t_{1}t_{3}-t_{1}t_{2}}&=\operatorname{sim}\left(I_{emb(t_{1})}^{Global},I_{emb(t_{3})}^{Global}\right)-\operatorname{sim}\left(I_{emb(t_{1})}^{Global},I_{emb(t_{2})}^{Global}\right)\\\ L_{Temp}&=\lambda_{Temp}\cdot\max\left(0,\operatorname{Sim}_{t_{1}t_{3}-t_{1}t_{2}}\right)\end{split}$ (4) where $\lambda_{Temp}$ is a weighting of the temporal loss. If the frames are further away from each other, the temporal loss should contribute less to the overall loss. For this reason, we weigh the contribution of the temporal loss by a Gaussian probability density function $g$ with a mean equal to zero and a standard deviation of five. We compute the weight of a triplet by applying $g$ to the difference between $t_{1}$ and $t_{3}$: $\lambda_{Temp}=g(t_{3}-t_{1})$

#### Sparsity loss

We use the same sparsity loss as in NLA [14]. Its intended function is to encourage points that are mapped to the background atlas to have a zero value in the foreground atlas, e.g. if a point $p$ is mapped to the background atlas, it should not contain information about the foreground atlas. $L_{\text{sparsity}}=\left\|\left(1-\alpha^{p}\right)c_{f}^{p}\right\|$ (5) where $c_{f}^{p}$ is the predicted color at $p$ for the foreground layer and $\alpha^{p}$ is the opacity value at location $p$.

#### Full objective

The full loss term that is minimized is represented as: $L=\lambda_{Sparsity}L_{\text{Sparsity}}+\lambda_{Temp}L_{\text{Temp}}+\lambda_{Local}L_{\text{Local}}+\lambda_{Global}L_{\text{Global}}$ (6) where $\lambda_{Local}$, $\lambda_{Global}$, $\lambda_{Temp}$, $\lambda_{Sparsity}$ are hyperparameters used to control the weighting of each loss term. By default $\lambda_{Local}=\lambda_{Global}=1$, while $\lambda_{Temp}$ and $\lambda_{Sparsity}$ vary depending on the input video.

## 4 Experiments

We evaluate our method on a set of videos from the DAVIS dataset [22] across a diverse set of target text prompts. Our goal is to perform consistent and natural video editing. For this purpose, we present both a quantitative and qualitative evaluation of our results and perform a careful ablation study of each loss term in our objective function. In Sec. 4.1 we demonstrate the capabilities of our method by showing various stylizations for different videos. We also present a quantitative evaluation, where we compare our method to an image baseline method applied to each frame in the video. In Sec. 4.2 we demonstrate that the specificity of text prompts influences the details of the results.
In Sec. 4.3 we demonstrate how text augmentation affects the results of the stylizations. In Sec. 4.4 we conduct a series of ablations on our loss terms and demonstrate how the local and global losses focus on different semantics. Finally, in Sec. 4.5 we illustrate some of the limitations of our method.

Figure 4: Example results. Two representative frames from each edited video together with the global target texts. Row one: “Swan,” “Shiny metal swan,” “Swan with wood bark skin”; row two: “Boat,” “Shiny aluminum fishing boat,” “Boat made of copper”; row three: “Dog,” “Dog with zebra fur,” “Golden dog.”

| | Q1 (Realism) | Q2 (Matching Text) |
|---|---|---|
| Blended Diffusion [1] | $2.22$ ($\pm 1.08$) | $2.10$ ($\pm 1.00$) |
| Ours | $\mathbf{3.47}$ ($\pm 1.10$) | $\mathbf{3.94}$ ($\pm 0.99$) |

Table 1: Mean opinion scores (1-5) and standard deviation for Q1 and Q2.

### 4.1 Varied stylizations

In Fig. 4 we illustrate our video stylization method applied to three videos and three texts. All local target texts used for these examples are similar to the global target text, but without containing the information about the underlying content, e.g., for the swan, the local target text is “metal skin.” The results show that we can apply temporally consistent stylizations that adhere to the target text specification. The swan example shows fine-grained details of the target texts and also preserves the underlying content of the swan. In the boat example, the stylization captures the details of the target text. The boat has a texture similar to shiny aluminum and also has something that looks like a fishing net at the end of the boat. The dog example shows that our method can apply a realistic and consistent stylization to a video containing occlusions. We quantify the effectiveness of our method by comparing it to an image baseline applied to each frame in a video input. As a baseline, we use a pretrained Blended-diffusion (BF) [1] model with standard configurations. The model takes as input an image, an ROI mask, and a target text. BF performs local (region-based) edits based on a target text description and the ROI mask. We conduct a user study to evaluate the perceived quality of the stylized outputs generated by both the BF model and our method, and the degree to which the outputs adhere to the global target text. Our user study comprises $50$ users and $20$ stylized videos, each with a different target text. For each video and target text combination, the users are asked to assign a score ($1$-$5$) to two factors: (Q1) “How realistic is the video?” and (Q2) “Does the {object} in the video adhere to the text {content}?” For Q1 we make it clear to the user that “realistic” refers to the quality of the video content. The results are shown in Table 1.

Figure 5: Target text specificity. Each example shows a representative frame from the video. The experiment shows that the specificity of a target text affects the stylization.
Global target text prompts, row one: (a) “Armor,” (b) “Iron armor,” (c) “Medieval iron armor,” (d) “Suit of shiny medieval iron armor,” (e) “Full plate shiny medieval iron armor”; row two: (a) “Boat made of wood,” (b) “Boat made of dark walnut wood,” (c) “Fishing boat made of wood,” (d) “Old fishing boat made of wood,” (e) “Fishing boat made of wood planks.”

### 4.2 Prompt specificity

We demonstrate that varying the specificity of the target text prompt affects the level of detail in the stylization. Our experiment is motivated by recent work on prompt engineering [32] that shows how slight changes in the target text can have a big impact on the CLIP similarity between a text and an image. Fig. 5 shows an increasing level of detail for two videos and two target texts. The target text specificity not only influences the level of detail, it also makes it easier for CLIP to navigate in its embedding space. In the swan example in column (a), we have “Swan with an armor,” which is a more ambiguous target compared to the other swan examples with more detailed target texts and stylizations. We hypothesize that this is because several stylizations can satisfy the simpler target text, while a more specific target text narrows down the set of possible directions in CLIP’s embedding space. The swan examples in columns (d) and (e) indicate that CLIP has some understanding of the different body parts of the swan. The “full plate” target text (d) covers the entire head of the swan while the “suit” of armor in column (e) has a clear cut around the head of the swan.

Figure 6: Prefix augmentations, varying the number of prefixes to sample from in each iteration. Each figure shows a representative frame from each video and experiment configuration. The examples in row one use the texts: $T_{Global}$: “Origami swan with white paper skin,” $T_{Local}$: “Origami white paper skin,” and in row two: $T_{Global}$: “Dog with bengal tiger fur,” $T_{Local}$: “Bengal tiger fur.” (a) no prefixes, (b) $4$ local, no global, (c) $4$ global, no local, (d) $4$ global & $4$ local, (e) $8$ global & $8$ local. The specific prefixes are described in Sec. 6.3.

Figure 7: Ablation on each loss term. All experiments were run with the same seed and with the texts: $T_{Global}$: “A swan with crocodile skin,” $T_{Local}$: “Crocodile skin.” Each figure shows a representative frame from a video, where we ablate one of our loss terms in Eq. 6. (a) All losses, (b) w/o local loss, (c) w/o global loss, (d) w/o temporal loss, (e) w/o sparsity loss.

Figure 8: Global and local semantics. A representative frame from each of the edited video frames. The texts used are: (a) $T_{Global}$: “Swan with cactus skin,” $T_{Local}$: “Cactus skin,” (b) $T_{Global}$: “Swan with cactus skin,” $T_{Local}$: “Rough cactus skin,” (c) $T_{Global}$: “Swan made out of cactus,” $T_{Local}$: “Cactus skin.” (a) and (b) have the same global target text while (a) and (c) have the same local target texts. In (b) the local details have changed to a rougher texture. In (c) the global swan semantics are better preserved.

### 4.3 Text augmentation

We add textual augmentation to our method to address some of the challenges with prompt engineering.
Inspired by Zhou et al. [32], we add neutral prefixes to both the local and global target texts, e.g., “a photo of a {}.” We then sample a new prefix in each iteration for each of the target texts as a form of regularization. Fig. 6 illustrates our text augmentation experiment. We demonstrate that using prefixes increases the robustness and quality of the results. In each experiment and each iteration, we sample one prefix among a set of prefixes (details in appendix Sec. 6.3) for both the global and local target texts. In this experiment, we vary the number of prefixes to sample from and show that using an equal number of prefixes for both the local and global target texts produces better quality results than using no prefixes. In both the swan and dog examples for columns (e) and (d) that use multiple prefixes, we see that the stylizations are more detailed than the examples in columns (a-c), e.g., in the dog example (d) and (e) the tiger fur has white pigments around the belly area and also more natural tiger stripes.

### 4.4 Ablation study

We validate each term in our objective function (Eq. 6) with an ablation study illustrated in Fig. 7. In (a), we see that all losses combined result in a natural stylization that shows clear characteristics of a swan with crocodile skin. In (b) we see that without the local loss the results lack fine-grained details. It is still evident what the intended stylization was but the crocodile texture is not as clear as in (a). The results in (c) show that without the global loss the global semantics are less clear, e.g., the swan’s neck is not stylized as detailed as the rest of the body. In (d), we see that without the temporal loss, we get more edits outside the mask of the object. In (e), we see that without the sparsity loss, the stylization is very noisy.

Figure 9: Limitations. A representative frame from each of the edited video frames. The texts used are: (a) $T_{Global}$: “Swan with cactus skin,” $T_{Local}$: “Cactus skin,” (b) $T_{Global}$: “Boat made out of chocolate,” $T_{Local}$: “Chocolate texture,” (c) $T_{Global}$: “Dog with zebra fur,” $T_{Local}$: “Zebra fur.” In (a), the cactus texture is applied like photos of cactus. In (b) the chocolate boat has a sailing ship printed at the end of the boat. In (c), the dog has a face of a dog on its haunches.

In Fig. 8 we illustrate how the local loss affects the fine-grained details and the global loss affects the global semantics. We use (a) as a baseline and then vary the local target texts in (b) and the global target texts in (c). In (b) we see how changing the local target text to include “rough” affects the details of the cactus texture. In (c) we see how the global semantics of the swan’s body become more realistic as an effect of changing the global target text.

### 4.5 Limitations

Fig. 9 illustrates some of the limitations of our method. During training our method occasionally starts to overfit and produce unintended stylizations. In (a), we see that the body of the swan contains photo-like cacti instead of natural cactus texture. In (b) and (c), we see how our model has used the context of the global target text. In (b), a sailing ship has been added to the end of the boat and in (c), a face of a dog has been added. Our method is limited to text prompts that do not entail features that cross between the decomposed atlas layers. Some changes are best realized through a shape change, e.g., “Swan with long hair,” which is not currently possible in our framework.
Other limitations of our method include stylizations that focus too much on some property of the target texts, e.g., we experienced that the global target texts and similar variants “Swan with strawberry texture skin,” gave noisy results, where the swan was colored red. ## 5 Conclusion We considered the problem of developing intuitive and semantic control for consistent editing and styling of objects in videos. This problem poses a challenge in generating consistent content and style changes over time while being able to produce fine-grained details and preserve the global semantics. We proposed a method that uses CLIP [23] and the video object representation [14] to stylize objects in videos by using both a global target text, to control the global semantics of the stylization, and a local target text, to control the fine-grained details. We demonstrated that the specificity and the prefixes of the target texts can have a significant impact on the details produced by our method’s stylization. In future work, it would be interesting to investigate the limitations of CLIP. A model that is able to generate fine- grained stylizations of videos and images could be leveraged to create data augmentations in other learning settings. Another line of future work is to extend our model to be able to generate shape changes or even new objects from scratch. ### Acknowledgement This research was supported by the Pioneer Centre for AI, DNRF grant number P1. We would like to thank Ira Assent for the helpful discussions. ## References * [1] Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. CoRR abs/2111.14818 (2021), https://arxiv.org/abs/2111.14818 * [2] Bar-Tal, O., Ofri-Amar, D., Fridman, R., Kasten, Y., Dekel, T.: Text2live: Text-driven layered image and video editing (2022). https://doi.org/10.48550/ARXIV.2204.02491, https://arxiv.org/abs/2204.02491 * [3] Bau, D., Andonian, A., Cui, A., Park, Y., Jahanian, A., Oliva, A., Torralba, A.: Paint by word. CoRR abs/2103.10951 (2021), https://arxiv.org/abs/2103.10951 * [4] Brock, A., Donahue, J., Simonyan, K.: Large scale GAN training for high fidelity natural image synthesis. CoRR abs/1809.11096 (2018), http://arxiv.org/abs/1809.11096 * [5] Chefer, H., Benaim, S., Paiss, R., Wolf, L.: Image-based clip-guided essence transfer. CoRR abs/2110.12427 (2021), https://arxiv.org/abs/2110.12427 * [6] Crowson, K., Biderman, S., Kornis, D., Stander, D., Hallahan, E., Castricato, L., Raff, E.: Vqgan-clip: Open domain image generation and editing with natural language guidance (2022). https://doi.org/10.48550/ARXIV.2204.08583, https://arxiv.org/abs/2204.08583 * [7] Frans, K., Soros, L.B., Witkowski, O.: Clipdraw: Exploring text-to-drawing synthesis through language-image encoders (2021). https://doi.org/10.48550/ARXIV.2106.14843, https://arxiv.org/abs/2106.14843 * [8] Gal, R., Patashnik, O., Maron, H., Chechik, G., Cohen-Or, D.: Stylegan-nada: Clip-guided domain adaptation of image generators. CoRR abs/2108.00946 (2021), https://arxiv.org/abs/2108.00946 * [9] Jabri, A., Owens, A., Efros, A.: Space-time correspondence as a contrastive random walk. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems. vol. 33, pp. 19545–19560. Curran Associates, Inc. (2020), https://proceedings.neurips.cc/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-Paper.pdf * [10] Jampani, V., Gadde, R., Gehler, P.V.: Video propagation networks. 
CoRR abs/1612.05478 (2016), http://arxiv.org/abs/1612.05478 * [11] Jetchev, N.: Clipmatrix: Text-controlled creation of 3d textured meshes. CoRR abs/2109.12922 (2021), https://arxiv.org/abs/2109.12922 * [12] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. CoRR abs/1812.04948 (2018), http://arxiv.org/abs/1812.04948 * [13] Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. CoRR abs/1912.04958 (2019), http://arxiv.org/abs/1912.04958 * [14] Kasten, Y., Ofri, D., Wang, O., Dekel, T.: Layered neural atlases for consistent video editing. CoRR abs/2109.11418 (2021), https://arxiv.org/abs/2109.11418 * [15] Kim, G., Ye, J.C.: Diffusionclip: Text-guided image manipulation using diffusion models. CoRR abs/2110.02711 (2021), https://arxiv.org/abs/2110.02711 * [16] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (2015), http://arxiv.org/abs/1412.6980 * [17] Kwon, G., Ye, J.C.: Clipstyler: Image style transfer with a single text condition. CoRR abs/2112.00374 (2021), https://arxiv.org/abs/2112.00374 * [18] Michel, O., Bar-On, R., Liu, R., Benaim, S., Hanocka, R.: Text2mesh: Text-driven neural stylization for meshes. CoRR abs/2112.03221 (2021), https://arxiv.org/abs/2112.03221 * [19] Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: GLIDE: towards photorealistic image generation and editing with text-guided diffusion models. CoRR abs/2112.10741 (2021), https://arxiv.org/abs/2112.10741 * [20] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc. (2019), http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf * [21] Patashnik, O., Wu, Z., Shechtman, E., Cohen-Or, D., Lischinski, D.: Styleclip: Text-driven manipulation of stylegan imagery. CoRR abs/2103.17249 (2021), https://arxiv.org/abs/2103.17249 * [22] Pont-Tuset, J., Perazzi, F., Caelles, S., Arbelaez, P., Sorkine-Hornung, A., Gool, L.V.: The 2017 DAVIS challenge on video object segmentation. CoRR abs/1704.00675 (2017), http://arxiv.org/abs/1704.00675 * [23] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. CoRR abs/2103.00020 (2021), https://arxiv.org/abs/2103.00020 * [24] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical text-conditional image generation with clip latents (2022). https://doi.org/10.48550/ARXIV.2204.06125, https://arxiv.org/abs/2204.06125 * [25] Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., Sutskever, I.: Zero-shot text-to-image generation. 
CoRR abs/2102.12092 (2021), https://arxiv.org/abs/2102.12092 * [26] Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S.K.S., Ayan, B.K., Mahdavi, S.S., Lopes, R.G., Salimans, T., Ho, J., Fleet, D.J., Norouzi, M.: Photorealistic text-to-image diffusion models with deep language understanding (2022). https://doi.org/10.48550/ARXIV.2205.11487, https://arxiv.org/abs/2205.11487 * [27] Sanghi, A., Chu, H., Lambourne, J.G., Wang, Y., Cheng, C.Y., Fumero, M., Malekshan, K.R.: Clip-forge: Towards zero-shot text-to-shape generation (2021). https://doi.org/10.48550/ARXIV.2110.02624, https://arxiv.org/abs/2110.02624 * [28] Texler, O., Futschik, D., Kucera, M., Jamriska, O., Sochorová, S., Chai, M., Tulyakov, S., Sýkora, D.: Interactive video stylization using few-shot patch-based training. CoRR abs/2004.14489 (2020), https://arxiv.org/abs/2004.14489 * [29] Wang, X., Jabri, A., Efros, A.A.: Learning correspondence from the cycle-consistency of time. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019) * [30] Ye, V., Li, Z., Tucker, R., Kanazawa, A., Snavely, N.: Deformable sprites for unsupervised video decomposition (2022). https://doi.org/10.48550/ARXIV.2204.07151, https://arxiv.org/abs/2204.07151 * [31] Yin, K., Gao, J., Shugrina, M., Khamis, S., Fidler, S.: 3dstylenet: Creating 3d shapes with geometric and texture style variations. CoRR abs/2108.12958 (2021), https://arxiv.org/abs/2108.12958 * [32] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. CoRR abs/2109.01134 (2021), https://arxiv.org/abs/2109.01134

## 6 Appendix

### 6.1 Training details

We train on a single GPU (RTX $6000$). Our method is implemented using the PyTorch framework [20] and will be made available. We use an ADAM optimizer [16] with an initial learning rate of $10^{-4}$ and decay the learning rate by a factor of $0.9$ every $200$ iterations. Our method takes about $40$ minutes and $2000$ iterations with a batch size of three input frames, each of size $432\times 768$, sampled from a video of at most $70$ frames. High-quality results usually appear after $10$-$20$ minutes. All images are normalized and resized to $224\times 224$ before being passed to the CLIP model. We normalize with mean $(0.48145466,0.4578275,0.40821073)$ and standard deviation $(0.26862954,0.26130258,0.27577711)$, which is the same as CLIP was trained with. We use the pretrained “ViT-L/14” CLIP model loaded from OpenAI’s GitHub page.

### 6.2 Image augmentation

* • Random Crops: For the global views we crop the reconstructed frame with a random scaling in the range $[0.9,1.0]$ and for the local views we use a scaling in the range $[0.1,0.5]$. For both global and local views we use a random aspect ratio in the range $[0.8,1.2]$. The local crops have a chance of not including the object we want to edit, e.g. a small crop from the bounding box containing the Swan in Fig. 4 has a chance of only containing the background. To combat this, we use the $\alpha$-map to locate the object, and sample local crops until we get a crop that contains at least $\frac{1}{3}$ of the object.
* • Random Perspective Transformation: With probability $\frac{1}{2}$, we sample a number $d$ uniformly at random in the range $[0.1,0.5]$ and use $d$ as the distortion scale.
* • Random Background Removal: We locate the foreground object using the $\alpha$-map. Then, with probability $\frac{1}{2}$ we color all background pixels black.
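To make these augmentation settings concrete, the following is a minimal sketch in PyTorch/torchvision of how such view sampling and augmentation could be implemented. The helper names (`sample_view`, `augment_view`), the use of crop area fraction for the size range, and the alpha-mean coverage heuristic are illustrative assumptions, not the released code.

```python
import random
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

CLIP_MEAN = [0.48145466, 0.4578275, 0.40821073]
CLIP_STD = [0.26862954, 0.26130258, 0.27577711]

def sample_view(frame, alpha, scale, min_object_frac=0.0, max_tries=50):
    """Sample one crop of `frame` (3,H,W) whose relative size lies in `scale`
    (treated here as an area fraction). For local views, resample until at
    least `min_object_frac` of the crop is covered by the object, as
    estimated from the alpha map `alpha` (1,H,W)."""
    for _ in range(max_tries):
        top, left, h, w = T.RandomResizedCrop.get_params(frame, scale=list(scale), ratio=[0.8, 1.2])
        if TF.crop(alpha, top, left, h, w).mean() >= min_object_frac:
            break
    view = TF.resized_crop(frame, top, left, h, w, [224, 224])
    mask = TF.resized_crop(alpha, top, left, h, w, [224, 224])
    return view, mask

def augment_view(view, mask):
    """Random perspective, random background removal, CLIP normalization."""
    if random.random() < 0.5:  # random perspective with distortion scale in [0.1, 0.5]
        d = random.uniform(0.1, 0.5)
        stacked = T.RandomPerspective(distortion_scale=d, p=1.0)(torch.cat([view, mask], dim=0))
        view, mask = stacked[:3], stacked[3:]  # warp view and mask with the same transform
    if random.random() < 0.5:  # random background removal: black out non-object pixels
        view = view * (mask > 0.5)
    return TF.normalize(view, CLIP_MEAN, CLIP_STD)

# Example: one global and one local view from a single (placeholder) stylized frame.
frame = torch.rand(3, 432, 768)
alpha = (torch.rand(1, 432, 768) > 0.5).float()
global_view = augment_view(*sample_view(frame, alpha, (0.9, 1.0)))
local_view = augment_view(*sample_view(frame, alpha, (0.1, 0.5), min_object_frac=1 / 3))
```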
### 6.3 Text augmentations

In our text augmentation we randomly sample one of the following prefixes in each iteration:

1. “a photo of a {}”
2. “a {}”
3. “an image of a {}”
4. “the {}”
5. “image of a {}”
6. “image of the {}”
7. “photo of a {}”
8. “photo of the {}”

All prefixes are neutral and intended to not change the semantics of the stylization. In the text augmentation experiments (Sec. 4.3), if we use four prefixes, these are the first four in the list above.
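A minimal sketch of this prefix sampling is given below; the function name is illustrative, and `n_prefixes` corresponds to the varying prefix counts used in the experiments of Sec. 4.3.

```python
import random

PREFIXES = [
    "a photo of a {}", "a {}", "an image of a {}", "the {}",
    "image of a {}", "image of the {}", "photo of a {}", "photo of the {}",
]

def augment_text(target_text: str, n_prefixes: int = 8) -> str:
    """Sample one neutral prefix per iteration and wrap the target text in it."""
    if n_prefixes == 0:
        return target_text
    template = random.choice(PREFIXES[:n_prefixes])
    return template.format(target_text)

# e.g. augment_text("swan with crocodile skin") -> "photo of a swan with crocodile skin"
```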
# A framework for syntactic and semantic quality evaluation of ontologies

Vivek Iyer (0000-0003-4451-8293), Lalit Mohan Sanagavarapu (0000-0003-0745-1042), Y Raghu Reddy (0000-0003-2280-5400)

IIIT Hyderabad, India

###### Abstract The increasing focus on Web 3.0 is leading to automated creation and enrichment of ontologies and other linked datasets. Alongside automation, quality evaluation of enriched ontologies can impact software reliability and reuse. Current quality evaluation approaches oftentimes seek to evaluate ontologies in either syntactic (degree of following ontology development guidelines) or semantic (degree of semantic validity of enriched concepts/relations) aspects. This paper proposes an ontology quality evaluation framework consisting of: (a) SynEvaluator and (b) SemValidator for evaluating syntactic and semantic aspects of ontologies respectively. SynEvaluator allows dynamic task-specific creation and updation of syntactic rules at run-time without any need for programming. SemValidator uses Twitter-based expertise of validators for semantic evaluation. The efficacy and validity of the framework are shown empirically on multiple ontologies.

###### Keywords: Ontology Quality Evaluation, Syntactic Evaluation, Semantic Validation, Crowdsourcing, Twitter-based expertise

## 1 Introduction

The exponential increase in Internet users over the past decade has led to the generation of large volumes of data. Web 3.0, otherwise commonly referred to as the Semantic Web, seeks to represent internet data as knowledge through knowledge graphs, ontologies and other knowledge systems [4]. These representations enable knowledge integration, semantic ambiguity resolution, information extraction, decision making, reasoning and many other use cases relevant to the building of ‘intelligent’ software systems. Ontologies, in particular, store domain-specific knowledge, and represent this knowledge through concepts, relations, axioms and instances. They contain a formal structure and achieve a certain level of rigor due to the presence of rules and constraints. Ontologies are rarely static in nature. The range and the depth of the knowledge stored are enriched over time. This impacts a wide variety of software applications that utilize ontologies for reasoning, decision-making, question-answering, etc. Ontology enrichment is thus a crucial step in the ontology engineering process. Traditionally, ontologies are created and managed by knowledge engineers and domain experts, resulting in high costs due to the expert human labour involved. Automated or semi-automated approaches to ontology enrichment are increasingly popular, driven by increased availability of domain-relevant internet data and improvements in natural language processing and machine learning models [36]. Research on ontology learning (both creation and enrichment) snowballed in the last two decades [20], [36], [15], with increased focus on fully automated Deep Learning based approaches [30]. Given the variety in ontology enrichment approaches, it is important to evaluate the quality of the enriched ontology.

Figure 1: Syntactic quality violation: No domain or range for property

Figure 2: Semantic violation: Invalid enriched concept

Ontology evaluation approaches can broadly be divided into manual, automated and semi-automated approaches. In general, ontology evaluation happens on one of two aspects: syntactic quality or semantic quality.
We define syntactic quality of an ontology as a measure of its adherence to ontology development guidelines or rules. For example, one such rule could necessitate the presence of both domain and range elements in properties. Examples of other rules or guidelines could include explicit declaration of equivalent and inverse properties, presence of annotations [34], following of unique naming conventions [23], etc. Figure 1 shows, in Turtle syntax [3], a property without a defined range element, thus violating the rule that necessitates the presence of both domain and range elements. Semantic quality deals with the validity of enriched concepts, relations and instances. Figure 2 shows an example ontology enriched with concepts extracted from a sentence to emphasize the need for evaluation of enriched ontologies. The previously existing concepts are shown in blue, the valid enriched concepts in green and the invalid enriched ones in red. Ontology enrichment algorithms using Hearst patterns [14] could mistakenly detect ‘antivirus’ as a type of ‘malware’. In such cases, before creating a final ontology, the enriched ontology needs to be validated for semantic quality, to accept or reject the enriched concepts, properties and instances. A variety of metric-based methods were proposed to evaluate various syntactic quality-based characteristics of ontologies [9], [19], [32]. Publicly available tools were proposed that allow users to evaluate syntactical quality of ontologies using pre-defined metrics [18], [22] or rules [28], [31]. However, they offer limited customization and flexibility to the user for creating task-specific rules for evaluation, even more so for non-programmers. With regard to semantic evaluation, researchers have traditionally employed domain experts [21], while in this decade, crowdsourced validators [17], [24] are being used for semantic ontology validation. This paper proposes a customizable and scalable framework that evaluates syntactic and semantic aspects of ontology quality using SynEvaluator and SemValidator respectively.

* • $SynEvaluator$: a tool that uses a rule-creation framework for allowing users to non-programmatically create rules during run-time and to set task-specific priorities for these rules.
* • $SemValidator$: a tool that uses crowdsourcing for validation of semantic quality of enriched ontologies. In this paper, a Twitter-based expertise estimation algorithm is used to weight validators’ decisions.

The source code for SynEvaluator and SemValidator is available on GitHub (https://github.com/Remorax/SynEvaluator/, https://github.com/Remorax/SemValidator/). They are also deployed on Heroku as web applications (https://synevaluator.herokuapp.com/, http://semvalidator.herokuapp.com/). The rest of the paper is structured as follows: Section 2 describes related work in ontology quality evaluation. The proposed framework consisting of SynEvaluator and SemValidator is presented in Section 3. Section 4 details the experiments done to show the efficacy and accuracy of the framework through these tools. SynEvaluator is tested for its utility and accuracy for implementing syntactical quality evaluation rules by comparing it against a popular syntactic quality evaluation tool, OOPS! [27]. The efficacy of $SemValidator$ is shown by conducting a crowdsourced survey involving 28 validators on two popular ontologies, the Stanford Pizza [8] and Information Security [10] ontologies.
The accuracy of the TweetExpert algorithm on responses to both of these ontologies is shown using multiple ML regression algorithms. Finally, Section 5 summarizes the contributions and suggests possible future directions of research.

## 2 Related Work

Ontology quality evaluation approaches can be broadly classified into a) syntactic and b) semantic quality evaluation approaches. Syntactic evaluation approaches primarily evaluate structural aspects of an ontology based on ontology development guidelines, common pitfalls, structural metrics etc. OntoClean [13], one of the earliest known works in this area, proposed a methodology for validating the adequacy of relationships in an ontology based on notions drawn from philosophy such as essence, identity, and unity. Similarly, OntoQA [32] proposed ontology evaluation on the basis of schema metrics and instance metrics. They stated that the ‘goodness’ or ‘validity’ of an ontology varies between different users and domains. Gangemi et al. [11] proposed structural, functional and usability-related measures using O2 and oQual, a meta-ontology and an ontology model for ontology evaluation and validation. Burton et al. [5] proposed an ontology evaluation model based on semiotic theory. In order to apply the metrics proposed in these works, tools such as S-OntoEval [7], drawn from semiotic theory, and AktiveRank [1], which ranked ontologies based on structural metrics like class match measure, density measure etc., were proposed. However, the tools proposed in these articles are either closed-source prototypes or theoretical frameworks and are not publicly available. There are also a few publicly available closed-source ontology (syntactic) evaluation tools, such as OOPS! [28], DoORS [22] and OntoMetrics [18]. OntOlogy Pitfall Scanner (OOPS!) evaluates ontologies on the basis of established ontology quality rules related to human understanding, logical consistency, real world representation and modelling issues, and manually assigned priorities. DoORS evaluates ontologies based on metrics drawn from Semiotic Theory, while OntoMetrics uses metrics proposed in OntoQA [11]. These tools, however, are not flexible or customizable, and evaluate ontologies using a fixed set of rules or metrics. Users are unable to create/update customized rules and set task-specific priorities, which can be crucial as requirements and application scenarios vary. The available open source implementations [33] require creation of new rules and priorities via programming, which can be daunting for non-programmers. Semantic evaluation approaches focus on the semantic validity of concepts and relationships in an ontology. Traditionally, it has been formulated as a task requiring simple accept/reject decisions from domain experts [2]. In the past few years, a good number of crowdsourced ontology evaluation approaches have been proposed. Hanika et al. [35] have developed the UComp Protégé plugin to provide a platform for crowdsourced workers to validate classes, subclasses, properties and instances. Kiptoo et al. [16] use crowdsourcing for axiom and assertion verification in ontologies as well as for verification of subclass-superclass relations by Amazon Mechanical Turk workers [24]. Pittet et al. used crowdsourced workers to propose changes related to addition, deletion and substitution errors [25]. Zhang et al. [37] used crowdsourced workers to obtain written feedback (comments/suggestions/references) for making final validation decisions.
Requiring complex tasks (such as making data quality decisions or requiring written feedback) from crowdsourced workers can be expensive and unscalable as the size of the ontology and/or the number of workers increases. Some approaches [35], [17], [25] used quality control mechanisms like majority voting that are debatable. Noy et al. [24] addressed this by using qualification questions and spam filtering techniques. While these mechanisms eliminate spammers, they may not be applicable where a large number of workers have some degree of knowledge but only a few are experts. Therefore, an assessment of domain expertise on a continuous scale would be useful as a quality control metric. Further, an integrated quality evaluation framework that seeks to evaluate ontologies on both syntactic and semantic aspects, and also addresses the above-mentioned problems, would be useful for a holistic and integrated approach to quality evaluation of enriched ontologies.

## 3 Proposed Framework

In this paper, an ontology evaluation framework that combines automated syntactic evaluation and human-centric semantic evaluation is proposed. SynEvaluator aims at increasing user flexibility by allowing customized rule creation at runtime, as well as scalability (with respect to user base) by proposing an approach that removes the need for programming. SemValidator proposes a crowdsourcing-based approach that uses the validators’ Twitter profiles for quality control.

Figure 3: Work flow for semantic and syntactic quality evaluation of ontologies.

The framework’s work flow is shown in Figure 3. The input to the quality evaluation process is an enriched ontology. The ontology may have been enriched with concepts, relations and instances using some automated and semi-automated algorithms. The ontology may contain syntactic quality errors (due to violation of ontology development guidelines) and/or semantic errors (due to wrongly enriched concepts). In the figure, for clarity, concepts and relationships with potential syntactic violations are bordered in yellow, and those that do not contain such violations are outlined in blue. Also, concepts that potentially contain semantic violations are highlighted in yellow colour and concepts that are semantically valid are highlighted in green. An ontology engineer uploads an enriched ontology to SynEvaluator, and creates syntactic quality evaluation rules. The rules created using a theoretical rule creation framework are applied on the parsed ontology object through SynEvaluator’s ontology evaluation module. This returns a list of detected violations and the elements causing these violations. Using these elements as suggestions, the ontology engineer can fix violations in an iterative manner. The iterations may be repeated as needed. Then, the ontology engineer can provide this ontology as input to SemValidator so that it can be validated for semantic quality. As part of semantic validation, the ontology is provided to crowdsourced validators who give their accept or reject validation decisions for each of the enriched concepts, relations and instances. Simultaneously, an estimate of the domain knowledge of each of these validators is calculated from their Twitter profile using the TweetExpert algorithm. These scores, referred to as ‘TweetExpert scores’, are used as a quality control mechanism to ensure that the decisions of crowdsourced validators are given weightages according to their knowledge of the ontology domain.
Finally, the output of this algorithm is used to take the final accept/reject decision for each enriched element, resulting in an ontology with both good syntactic and semantic quality.

### 3.1 Stage 1: SynEvaluator

In this section, the underlying terminology used in SynEvaluator and its rule creation framework is illustrated with examples. Further, the implementation of SynEvaluator as a web application is detailed. Finally, the section ends with a discussion on the potential benefits and limitations of SynEvaluator.

#### 3.1.1 Defining the Rule Creation Framework

SynEvaluator allows users to create rules at run-time. The rules are constructed from individual components like “Subjects", “Clauses" or “Operator Expressions". The operational definitions of these and other relevant terms are:

* • Ontological Element: Refers to any element that forms a constituent part of an ontology. It refers to any primary component (classes, individuals, properties), their related elements (subclasses, domains, ranges), annotations (labels, descriptions, comments) or attributes (ID, language, namespace).
* • Rule: Refers to a sequence of one or more clauses, optionally connected by one or more operator expressions, that returns either one or more ontological elements or a boolean value.
* • Clause: Refers to a transformation applied on an ontological element(s) to return either one or more ontological elements or a boolean value.
* • Operator Expression: An expression used to compare and/or connect non-empty sequences of clauses to return a boolean value.
* • Subject: Refers to an ontological element (typically primary components such as classes, individuals and properties) that is subjected to transformations carried out through sequential clauses to form a rule.

Figure 4: Structure of supported expressions Using these concepts, the expressions supported by SynEvaluator are formally defined as shown in Figure 4. The notation used is similar to $RegEx$ notation, with $*$ denoting zero or more, $+$ denoting one or more, and $?$ denoting zero or one occurrences. A ‘Subject’ is always the first keyword of a rule in our proposed framework. It goes through a series of transformations as defined by sequences of clauses. Clauses can be of two types: a) Extractive Clauses and b) Functional Clauses. Extractive Clauses consist of (Predicate, Object) pairs that use the Predicate to execute a transformation on the return value from the previous clause using the Object as argument. More specifically, the object specifies the type of element ‘extracted’ by the predicate, and elements satisfying this (Predicate, Object) pair are returned as output. Functional clauses, on the other hand, consist of (Predicate, Function) pairs that involve executing a function of the type described by the predicate on the return value from the previous clause. These clauses typically check for the existence of a certain functional property and thus return a boolean value in response. Currently, two kinds of functional properties are supported: i) ontological (or structural) properties, that execute ontology-level functions (such as uniqueness, validity, consistency, etc.) on ontological elements, and ii) linguistic properties that linguistically analyze text (such as checking for polysemes, conjunctions, etc.) returned from previous clauses.
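To make the clause-based rule structure concrete, the minimal sketch below shows one possible in-memory representation of a Subject followed by Extractive Clauses combined with a logical operator, corresponding to a rule such as "every Property must have a Domain and a Range". All names are hypothetical and the ontology is modelled as plain Python dictionaries; this is not SynEvaluator's actual parsed-ontology API.

```python
# Hypothetical, simplified sketch of the Subject/Clause rule model described above.
ontology = {
    "properties": {
        "hasTopping": {"domain": ["Pizza"], "range": ["Topping"]},
        "hasBase": {"domain": ["Pizza"], "range": []},   # missing range -> violation
    }
}

def subject_property(onto):
    """Subject: start from all properties of the ontology."""
    return list(onto["properties"].items())

def extractive_has_related_element(elements, obj):
    """Extractive Clause: (Predicate='hasRelatedElement', Object=obj)."""
    # Keep, for each element, the related elements of the requested type (e.g. 'domain').
    return [(name, attrs.get(obj, [])) for name, attrs in elements]

def violating_if_empty(extracted):
    """An empty extraction result marks the element as containing a violation."""
    return {name for name, values in extracted if not values}

# Rule (cf. Example 1): Property -> hasRelatedElement Domain AND hasRelatedElement Range.
props = subject_property(ontology)
missing_domain = violating_if_empty(extractive_has_related_element(props, "domain"))
missing_range = violating_if_empty(extractive_has_related_element(props, "range"))
violations = missing_domain | missing_range      # logical 'And' over the two clause sequences
print("Violating properties:", sorted(violations))   # -> ['hasBase']
```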
If a Functional Clause returns a ‘False’ value, or an Extractive Clause returns an empty list (no matching elements), that particular ontological element is reported as an element containing a violation. Extractive Clauses can also be chained together to form clause sequences. Since functional clauses return a boolean value, they cannot be chained further. Clause sequences can be compared through Operator Expressions. Operator Expressions essentially consist of a ‘Predicate’ indicating the operator type, followed by an ‘Operator’ keyword. Operator Expressions comprise two main categories of operators, namely: (a) Logical Operators and (b) Comparative Operators. Logical Operators like ‘And’, ‘Or’ and ‘Not’ are used to create logical combinations of clause sequences. Comparative Operators like ‘Equality’, ‘Inverse’ and ‘Synonymy’ are used to compare return values.

Table 1: Keywords for different expression types in the proposed framework

| Subject | Extractive Clause | Functional Clause | Operator Expression |
|---|---|---|---|
| | Predicate | Predicate | Predicate |
| Ontology Metadata | Has Related Element (1) | Has Ontological Property (1) | Uses Comparative Operator (1) |
| Ontological Element | Has Attribute (2) | Has Linguistic Property (2) | Uses Logical Operator (2) |
| Class | Object | Function | Operator |
| Instance | Domain (1) | ID Consistency (1) | Equality (1) |
| Property | Subclass (1) | Uniqueness (1) | Inverse (1) |
| Object Property | Disjoint Class (1) | Text Validity (1) | And (2) |
| Datatype Property | ID (2) | Contains Polysemes (2) | Or (2) |
| Symmetric Property | Language (2) | Contains Conjunctions (2) | Not (2) |

Table 1 summarizes the keywords supported by the proposed framework. Note that the table lists 8 Subject keywords, while for Extractive Clauses, Functional Clauses and Operator Expressions it lists 2 predicates and 5 objects, functions and operators respectively. This is due to the differing syntax followed by each expression type. Also, every predicate has a list of valid Objects/Functions/Operators. This is shown in the table through bracketed numbering. For example, the valid Objects for ‘Has Attribute’ are ‘ID’ and ‘Language’. The complete list of supported keywords is provided online at https://bit.ly/3zfdI8f.

#### 3.1.2 Examples of Created Rules

Figure 5 shows examples of 2 rules. In both examples 1 and 2, ‘Property’ is the ‘Subject’ of the rule. ‘hasRelatedElement Domain’, ‘hasRelatedElement Range’ and ‘hasOntologicalProperty Uniqueness’ are all clauses. In rule 1, the ‘hasRelatedElement Domain’ clause carries out a transformation that uses the ‘hasRelatedElement’ predicate to extract ‘Domain’ elements. A similar clause is used for extracting ‘Range’ elements. These two clauses (or clause sequences of length 1) are combined using the operator expression ‘usesLogicalOperator And’. This rule thus necessitates non-null values for both Domain and Range elements for each element of ‘Subject’, in this case, ‘Property’. Properties that do not contain both elements are therefore returned as ontological elements containing violations. Figure 5: Examples of rules implemented by SynEvaluator Example 2 shows a rule where clauses have been chained together to constitute a clause sequence of length 2. This sequence consists of the extractive clause ‘hasRelatedElement Domain’ followed by the functional clause ‘hasOntologicalProperty Uniqueness’. The extractive clause essentially extracts the Domain element(s) of each Property.
Then, the functional clause applies the ‘Uniqueness’ function on the ontological elements returned by the previous clause, with the function type defined by the predicate “hasOntologicalProperty". If a particular property has no domain or has multiple domains, this rule would return that property as containing a violation, due to a violation in the first and second clauses, respectively.

#### 3.1.3 Proposing SynEvaluator: the final web application

Figure 6: The SynEvaluator interface. Subjects are highlighted in blue, Clauses in yellow, Operator Expressions in green and Rule Priorities in red. Among Clauses, Extractive Clauses are shown in solid lines while Functional Clauses are outlined with dotted lines. The rule creation framework proposed above is used to create a web application for use by the ontology engineer. The primary interface of this application, SynEvaluator, is shown in Figure 6. It allows the users (ontology engineers) to use dropdown menus to create functional and extractive clauses, operator expressions and, thus, rules, as well as to set priorities for these rules. Users can choose between ‘Low’, ‘Medium’ and ‘High’ priorities based on the task-specific importance of the rule. Figure 8 shows, with the help of an activity diagram, how a user could create an appropriate rule using SynEvaluator. Finally, after uploading the enriched ontology and creating the rules, SynEvaluator parses the ontology using its parsing module and executes the created rules (Figure 3). Post evaluation, the user is presented with a list of violating elements along with the count and priority of each valid rule.

#### 3.1.4 Benefits and Limitations

The proposed framework makes it significantly easier for non-programmers to create customised rules dynamically. Also, compared to previous quality evaluation tools, due to the framework’s ability to reuse keywords to create new rules, the developer effort required to hard-code rules is minimised. Another major benefit is that the proposed framework can potentially be used to query over the entire OWL language. This can be done as any ontological element/attribute can be extracted using extractive clauses and the appropriate function executed on them. Lastly, due to functional clauses, it is possible to execute ontology-level functions, as in standard query languages, and also to apply linguistic analysis to text. This is particularly useful in quality evaluation while applying appropriateness checks on IDs or labels. Figure 7: More examples of rules supported by SynEvaluator In Figure 5 (Example 1), it can be seen that properties with a missing domain or range, a common quality violation, could be detected through a combination of logical operators and extractive clauses. Another common violation is related to the absence of annotations (comments, descriptions, etc.) for an ontological element [34]. This can be converted into a rule, once again through Extractive Clauses, as shown in Figure 7 (Example 1). A third quality violation would be when similar (or synonymous) classes are incorrectly defined as equivalent classes [23]. This can be defined as in Figure 7 (Example 2), where the comparative operator ‘Dissimilarity’ is used to test for semantic similarity through the cosine similarity of label embeddings. As mentioned previously, it is also possible to perform linguistic tests, unlike in other query languages. A basic example of this is the detection of conjunctions in a label, as shown in Figure 7 (Example 3).
This is useful to identify quality violations where different concepts are merged in the same class using conjunctions [28]. A more advanced example of a linguistic test would be a rule detecting polysemous elements (Figure 7, Example 4). This is useful in detecting violating elements that have labels denoting different conceptual ideas in different senses. SynEvaluator implements this check through the use of WordNet’s synsets to find out how many senses a word can have. Once these rules are created, the ontology engineer can add domain/range elements and annotations, remove synonymous equivalent classes, and fix classes with conjunctions and polysemes as appropriate. SynEvaluator can thus help in fixing structural and linguistic quality violations. The current version of SynEvaluator has a few limitations. It is currently only possible to chain clauses together or use operators to compare chained clauses. As a result, it is not possible to create rules with multiple lines. One major consequence of this is that variable assignment is not supported, and it is not possible to create a variable in one line and refer to it in another, as part of the same rule. Aggregation operators, such as ‘Count’ or ‘Sum’, are currently not supported either. Finally, it is not possible to create rules that require reasoning. The described limitations shall be addressed in future iterations of SynEvaluator. In spite of the limitations, the current framework (as shown in the Experiments section) is still able to support creation of the majority of quality evaluation rules. Figure 8: Activity Diagram for creating a rule in SynEvaluator

### 3.2 Stage 2: SemValidator

SemValidator uses a crowdsourced approach to semantically evaluate ontologies. The key feature of SemValidator is that it does not require ontology engineers, domain experts or knowledge of the OWL language for validation. This is useful in a crowdsourced setting, where validators may have varying degrees of expertise and knowledge. It further necessitates the use of appropriate quality control mechanisms. To ensure quality, SemValidator uses the TweetExpert algorithm to calculate the expertise score of a crowdsourced validator, which is then used to weigh their decisions. This section describes the approach used by TweetExpert and justifies the choices made. This is then followed by a discussion on the assumptions made by TweetExpert and the feasibility of these assumptions in the context of crowdsourcing. The section finally ends with a description of the implementation of SemValidator and how it can be used by crowdsourced validators.

#### 3.2.1 TweetExpert algorithm

The TweetExpert algorithm takes the Twitter usernames of validators as input. For each of the validators, it calculates two scores: a) a ‘TweetSim’ score and b) a ‘FriendSim’ score. The ‘TweetSim’ score is intended to assess the similarity of the validator’s tweets to the domain of the ontology, while ‘FriendSim’ estimates the domain similarity of the validator’s friends (pages they are following). To calculate the ‘TweetSim’ score, the validator’s ‘n’ most recent tweets are extracted from their profile and their semantic similarity with the domain keyword is computed using the Universal Sentence Encoder (USE) [6]. Here the ‘domain keyword’ is manually chosen as the word most relevant to the domain of the ontology, for instance, “Pizza" for the Stanford Pizza ontology [8] and “Information Security" for the ISO 27001-based “Information Security" ontology [10].
The current implementation uses only one keyword, but it is possible to compute similarity scores with multiple keywords and average these scores for greater accuracy. The similarities are sorted in decreasing order. Out of the n similarities, the top-K similarities are chosen. These top-K similarities are then averaged to yield the ‘TweetSim’ score. A similar approach is used to calculate the ‘FriendSim’ score. The ‘m’ most recent friends are extracted and their ‘TweetSim’ scores are calculated. After sorting in decreasing order, the top-K’ most similar scores are averaged to yield the ‘FriendSim’ score. The reason behind extracting the most recent tweets and friends is to get a better estimate of the validator’s current knowledge and interests. On the other hand, a top-K average helps in both filtering out occasional out-of-domain tweets from domain experts and smoothing out the effects of coincidental in-domain outliers from non-experts. The value of K is thus chosen empirically such that it is large enough to smooth out coincidental in-domain outliers from laymen, but small enough to exclude out-of-domain tweets from experts. The pseudocode for TweetExpert is shown in Algorithm 1.

Result: TweetExpert score
Function calculate_tweet_similarity(username):
    user_tweets := extract_tweets(username)  // the 'n' most recent tweets
    tweet_similarities := calculate_USE_similarity(user_tweets, domain_name)
    best_tweet_similarities := get_top_K_tweets(tweet_similarities)
    tweet_similarity_score := average(best_tweet_similarities)
    return tweet_similarity_score
Function calculate_friend_similarity(username):
    user_friends := extract_friends(username)  // the 'm' most recent friends
    user_friends_scores := [calculate_tweet_similarity(friend) for each friend in user_friends]
    best_friend_scores := get_top_K'_friends(user_friends_scores)
    friend_similarity_score := average(best_friend_scores)
    return friend_similarity_score
tweet_sim := calculate_tweet_similarity(username)
friend_sim := calculate_friend_similarity(username)
score := ML_predict_score(tweet_sim, friend_sim)
Algorithm 1: TweetExpert algorithm

Finally, the calculated ‘TweetSim’ and ‘FriendSim’ scores are input to a pre-trained Machine Learning regression algorithm that predicts the final TweetExpert score using the two scores as feature vectors. The current implementation uses Epsilon-Support Vector Regression (SVR), since it was experimentally found to yield the best results (shown in the Experiments section). However, the system uses the strategy software design pattern [29], which enables easy interchange of regression algorithms. The TweetExpert score is calculated for each of the validators by repeating the process described above. The final decision is taken using a weighted majority voting algorithm, with the TweetExpert scores being used as weights. The TweetExpert scores provide a way to estimate a confidence value for the decisions input by each validator. This is particularly crucial as a quality control metric in non-probabilistic sampling techniques like crowdsourcing, where the validators can grow rapidly in both number and diversity.

#### 3.2.2 Assumptions

SemValidator makes reasonably grounded assumptions to establish its efficacy and suitability in a crowdsourced setting. For example, about 49% of crowdsourced workers whose primary source of income is Amazon Mechanical Turk, a popular crowdsourcing platform, are in the 18-29 age range (https://pewrsr.ch/3vX5q2G).
As of February 2021, about 42% of Americans in the 18-29 age range use Twitter, with this age group being the most active demographic on Twitter (https://bit.ly/3z5IDn9). It is also assumed that, in order for expertise estimation to work, Twitter users tweet reasonably frequently. The average number of tweets per day per user, according to a 2016 study (https://bit.ly/3fSjWmA), is 4.422, which translates to over 1600 tweets a year. This is typically sufficient since $K\sim 50-100$. Finally, SemValidator assumes workers have public Twitter profiles, which would enable the tweets and friends of a worker to be extracted. This assumption is predicated upon a 2019 survey (https://bit.ly/3pmMc3O) that showed that 87% of Twitter users in the USA have public accounts. Also, as part of future work, SemValidator would allow login through Facebook and LinkedIn as well, and a similar algorithm for expertise estimation could be used. This is expected to further increase the applicability of this expertise-based approach for crowdsourcing.

#### 3.2.3 Proposing SemValidator: the final validation workbench

Figure 9: The main validation interface of SemValidator. The Stanford Pizza ontology has been uploaded on SemValidator, with its concepts in blue and the enriched concepts in green. One of the enriched concepts, “Tandoori Pizza", has been selected, with options to “Accept" and “Reject" it. The proposed workflow is used to develop a validation workbench for crowdsourced workers that allows for accepting or rejecting enriched concepts, relations and instances in an enriched ontology. The main validation interface of this workbench, called SemValidator, is shown in Figure 9. This application integrates Twitter authentication and uses the TweetExpert algorithm for calculating validator expertise. SemValidator allows for two types of users: (a) the administrator and (b) the validator. The administrator, typically the ontology engineer, can upload/delete ontologies to be validated and also access decisions made by validators. When the validator selects an ontology to validate, the ontology is served using the WebVOWL ontology visualization software. The enriched concepts and instances are highlighted in green and, on selection, enable the validators to select accept/reject decisions accordingly. Enriched relations, also in green, may also be accepted/rejected independent of the concepts they relate. The validators’ decisions are recorded by SemValidator and logged to a database. The administrator can download this database after the crowdsourcing survey is completed, evaluate validator expertise using their Twitter usernames and then make the final accept/reject decisions. The syntactic and semantic evaluation aspects of the framework may be used independently of each other. However, utilizing the framework in an integrated manner, as shown in Figure 3, is expected to give the best results. This way, syntactic violations can be detected easily in an automated and customizable manner, while semantic violations can be detected more accurately by crowdsourced validators. The resulting ontology has enhanced syntactic and semantic ontological quality and is now fit for reuse.

## 4 Experiments

The Stanford Pizza and ISO/IEC 27001 Information Security ontologies are evaluated using the proposed framework to demonstrate ontology quality evaluation. Since the focus of this work is on enriched ontologies, these are manually enriched with concepts, properties and instances before quality evaluation.
The RDF triples used to enrich the Pizza and Information Security ontologies are provided online at https://bit.ly/3z5lGR1. Only 5 triples were chosen for this iteration, considering that ontology enrichment is an iterative process and the count of triples per iteration is expected to be of this order. For Pizza, the domain-specific webpages used for extraction consisted of culinary articles (https://bit.ly/3yYGGZP) and food travel blogs (https://bit.ly/3iiEtCm). For Information Security, they consisted of informative articles and product pages by Cisco (https://bit.ly/3wXRHZj), Barracuda (https://bit.ly/34PIUN9) and SearchSecurity (https://bit.ly/3wXRJQV). Please note that some relevant data in this section is shown through external links due to space constraints.

### 4.1 Syntactic Quality Evaluation using SynEvaluator

This section evaluates the applicability and accuracy of SynEvaluator in creating rules and detecting violations, respectively. It is hypothesized that SynEvaluator’s reusable theoretical framework for priority-specific rule creation increases its applicability to a diverse range of tasks without compromising violation-detection accuracy. To prove these claims, SynEvaluator is compared against OOPS! [27], a popular, publicly available state-of-the-art tool that performs rule-based quality evaluation. Moreover, OOPS! compiles the 41 most commonly observed pitfalls drawn from several popular works on ontology quality evaluation [12], [23], [34]; the pitfalls described in the OOPS! catalogue [26] are chosen as rules for implementation. This allows appropriate assessment of applicability for rule creation, while also allowing accuracy evaluation by comparing SynEvaluator’s detected violations to those of OOPS!. Note that while SynEvaluator’s suitability for rule creation is being assessed here by comparing with rule-based evaluation approaches, it cannot be compared yet with other contemporary works focusing on metric-based evaluation. This is being planned as part of future work by introducing metric-to-rule conversion, which would allow metrics to be framed as rules. One possible way this could be done is by adding conditions, such as comparison operators, after metrics to form rules.

Table 2: Number of pitfalls implemented by SynEvaluator (SE) vs OOPS!

| | SE Implemented | SE Not implemented |
|---|---|---|
| OOPS! Implemented | 29 | 4 |
| OOPS! Not implemented | 1 | 7 |

SynEvaluator’s framework can be used to successfully formulate 30 of the 41 pitfalls compiled in the catalogue, compared to OOPS! which can automate 33. As shown in Table 2, the majority of pitfalls (29 of 41) can be successfully implemented by both OOPS! and SynEvaluator. 7 cannot be implemented by either of them; these involve rules that require human knowledge and reasoning (such as overspecialization of the ontology hierarchy, usage of wrong relations, etc.). 1 of the pitfalls (involving linguistic polysemy detection) could be implemented by SynEvaluator but not by OOPS!. There are 4 pitfalls implemented by OOPS! which SynEvaluator cannot check. This includes rules that check for undeclared disjoint concepts, undeclared transitive properties, equivalent classes, etc. Supporting such rules involves a higher degree of ontological reasoning that SynEvaluator is incapable of. Nevertheless, SynEvaluator’s ability to implement the vast majority of quality evaluation rules off-the-shelf in a customizable manner increases confidence in its applicability for future evaluation tasks that may involve more rules.
To compare the accuracy of violation detection, SynEvaluator’s reported violations are compared with those of OOPS! [27]. Given the absence of ground truth and the widespread popularity of OOPS!, it is assumed that OOPS! uses valid, non-erroneous rule checks for the implemented pitfalls. It can be observed that the violations reported by SynEvaluator match those of OOPS! for all pitfalls except P22. OOPS! mentions that these could be due to inconsistent naming conventions; however, SynEvaluator is unable to detect any such errors in the ontologies. This may be a result of the incorrect naming checks currently used. For the other pitfalls, both tools return the same number of violations, which suggests that, for the implemented framework, SynEvaluator is accurate in evaluating rules and detecting pitfalls.

### 4.2 Semantic Quality Validation using SemValidator

SemValidator uses a crowdsourced survey to semantically validate the enriched Pizza and Information Security ontologies. Crowdsourced validators were invited by tweeting a description of the task along with the link to the SemValidator application. Among the 31 validators that participated in the survey, 3 had private Twitter accounts, so their responses were discarded. The 28 validators with public accounts included 5 validators with a background in Cybersecurity, 4 in Artificial Intelligence, 5 in Software Engineering, 4 in Healthcare, 3 in the Food domain (such as chefs and culinary experts), 3 in Literature, 2 in Politics and 1 each in Fashion and Pure Science. This diverse group of validators was asked to self-identify their domain of expertise for statistical purposes before participating in the survey. To preserve validator anonymity, the anonymized results of the survey are provided online at https://bit.ly/3pvzbFh. After completion of the survey for both ontologies, ‘TweetSim’ and ‘FriendSim’ scores were calculated for all 28 validators using the TweetExpert algorithm described previously. TweetExpert took approximately 15 seconds to execute per validator profile for our values of $K=20$ and $K^{\prime}=5$. Finally, the TweetExpert score was calculated by experimenting with 4 standard regression algorithms: Linear Regression, Random Forest Regression, K-Nearest Neighbours (KNN) Regression and Epsilon-Support Vector Regression (SVR). Standard majority voting, where all decisions have equal weight, is considered as a naive baseline. The input features for each algorithm were the ‘TweetSim’ and ‘FriendSim’ scores of a validator, and the label was the ratio of correct answers (to total answers) given by that validator. This was done to ensure that validators with more correct answers received higher ‘TweetExpert’ scores using the similarity scores as feature vectors. Since a gold standard for the correct answers did not exist, CISOs were asked to manually validate the enriched triples from the information security domain, while the authors themselves validated the triples from the pizza domain. To determine the best among the 4 regression algorithms for score prediction, 7-fold cross-validation was carried out. The 28 responses (now reduced to feature vectors and labels) available for both ontologies were divided into 7 folds of 4 responses each. 5 folds were used for training, 1 for validation and 1 for testing. For each chosen test fold, the immediately succeeding fold was chosen as the validation set and all the other folds were used for training. Different sets of hyperparameters were considered for each regression algorithm (https://bit.ly/3wUa9SA).
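Before describing hyperparameter selection, the sketch below illustrates the scoring-and-voting pipeline used in these experiments: an SVR model maps ('TweetSim', 'FriendSim') feature vectors to expertise scores, which then weight each validator's accept/reject decisions. The feature vectors, labels and hyperparameters are illustrative placeholders, not the study's data.

```python
# Illustrative sketch of TweetExpert score regression followed by weighted majority voting.
import numpy as np
from sklearn.svm import SVR

# Training data: (TweetSim, FriendSim) per validator; label = fraction of correct answers.
X_train = np.array([[0.62, 0.55], [0.30, 0.28], [0.71, 0.64], [0.25, 0.40]])
y_train = np.array([0.90, 0.40, 0.95, 0.50])

# Epsilon-SVR predicts a continuous TweetExpert score from the two similarity features.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_train, y_train)

# New validators to be weighted.
X_new = np.array([[0.68, 0.60], [0.28, 0.33], [0.55, 0.50]])
weights = model.predict(X_new)          # predicted TweetExpert scores used as voting weights

# Accept (+1) / reject (-1) decisions of the three validators on two enriched triples.
decisions = np.array([[+1, -1],
                      [-1, -1],
                      [+1, +1]])

# Weighted majority vote per triple: accept if the weighted sum of decisions is positive.
final = np.sign(weights @ decisions)
print(final)   # e.g. [ 1. -1.] -> accept the first triple, reject the second
```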
The best set of hyperparameters for each algorithm was determined by selecting the set with the highest accuracy on the validation set, calculated as the proportion of correctly answered questions to the total number of questions. The predicted answer to a question is determined by applying the weighted majority voting algorithm with the predicted expertise scores on the validation set. The higher the number of correct answers, the closer the predicted scores are to their actual values. The same procedure was followed for calculating the accuracy on the test folds. The accuracy obtained on the test and validation sets for each of these algorithms is shown in Table 3 for both ontologies. Epsilon-Support Vector Regression was found to yield the best results on both datasets. To determine whether training score-prediction algorithms requires feature vectors from the same domain, an SVR model is trained on a more generic dataset, i.e., the Pizza dataset, and tested on Information Security, a niche dataset. The model hyperparameters are chosen as the ones that performed best on the Information Security dataset earlier. Varying percentages of the Pizza dataset are used to construct training subsets, to observe the variation in test accuracy with subset size. To ensure randomness while choosing these subsets, 10 experimental trials were conducted, where the dataset is shuffled before each trial and the model is trained on the first $x\%$ of samples, where $x$ is the percentage of the training dataset chosen. The results are shown in Table 4. The accuracy increases monotonically with the size of the training subset. The results suggest that a model pre-trained to predict the expertise score on one domain could be reused multiple times in different domains, even if they are niche.

Table 3: Mean validation and test accuracy scores on the Pizza and Security datasets

| Algorithm | Pizza Val | Pizza Test | Security Val | Security Test |
|---|---|---|---|---|
| Majority Voting | 71.43 | 71.43 | 51.43 | 51.43 |
| Linear Regression | 85.71 | 83.57 | 68.57 | 71.43 |
| Random Forest | 80.0 | 81.43 | 68.57 | 74.28 |
| KNN | 85.71 | 80.0 | 77.14 | 77.14 |
| SVR | 88.57 | 85.71 | 80.0 | 80.0 |

Table 4: Mean accuracy scores on the test dataset (Information Security) with variation in the percentage of the training dataset (Pizza)

| Training dataset | 10% | 25% | 50% | 75% | 100% |
|---|---|---|---|---|---|
| Accuracy | 70% | 76% | 86% | 92% | 100% |

Table 3 shows that the best-performing algorithm (TweetExpert with SVR followed by weighted majority voting) significantly outperforms naive majority voting. It can be observed that naive majority voting gives noticeably worse results for Information Security, a relatively niche domain, than for Pizza, a domain more familiar to laymen. When replacing naive majority voting with TweetExpert + SVR, a drastic increase in performance can be observed for Information Security (55.55%) vs Pizza (20%). Even the worst-performing regression algorithm gives an increase of 38.8% and 14%, respectively. As a result, it can be inferred that estimating expertise using TweetExpert can be particularly useful for quality control in niche domains.

## 5 Conclusion and Future Work

Ontology evaluation is a critical stage of any ontology engineering process. In this paper, SynEvaluator and SemValidator are proposed for syntactic and semantic evaluation of an enriched ontology, respectively. Although the work focuses on an enriched ontology, it can be easily extended for evaluation of any generic ontology.
SynEvaluator automatically evaluates ontologies using customised rules created dynamically at runtime, and improves on previous quality evaluation approaches in a variety of ways. Firstly, it offers greater flexibility to the user in terms of creating, updating and deleting custom rules as well as setting priorities. Secondly, it proposes rule creation through a novel theoretical framework that factors rules into sequences of ‘clauses’ and ‘operator expressions’. This facilitates the creation of an interactive interface that makes it easier for non-programmers to dynamically create rules. In addition, chaining together independent keywords can facilitate creation of a large number of rules without requiring additional developer programming. SemValidator improves on previously proposed crowdsourced ontology validation approaches by incorporating a Twitter-based quality control mechanism. The TweetExpert algorithm is proposed for calculating the expertise score of a validator using the tweets and friends extracted from their Twitter profile as input. The efficacy of SynEvaluator was shown by implementing rules to detect pitfalls, and the accuracy of the detected violations was compared with the publicly available OOPS! tool. Semantic quality evaluation using SemValidator was performed on both the Pizza and Information Security ontologies with the help of a crowdsourced survey of 28 validators. The experimental results showed a significantly higher accuracy than naive majority voting. The experimental results are encouraging and can also be used as an aid to further extend the research work. For example, SynEvaluator can be expanded to support additional, more complex operations such as using parentheses, arithmetic operators, variable assignment, etc. Although the TweetExpert algorithm used by SemValidator currently calculates expertise using only tweets and friends as feature vectors, it could be extended to use additional information such as Twitter lists, followers, tweet metadata, etc. ## References * [1] Alani, H., Brewster, C., Shadbolt, N.: Ranking ontologies with AKTiveRank. In: International Semantic Web Conference. pp. 1–15. Springer (2006) * [2] Amardeilh, F., Laublet, P., Minel, J.L.: Document Annotation and Ontology Population from Linguistic Extractions. In: Proceedings of the 3rd International Conference on Knowledge Capture. pp. 161–168 (2005) * [3] Beckett, D., Berners-Lee, T., Prud’hommeaux, E., Carothers, G.: RDF 1.1 Turtle. World Wide Web Consortium pp. 18–31 (2014) * [4] Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American 284(5), 34–43 (2001) * [5] Burton-Jones, A., Storey, V.C., Sugumaran, V., Ahluwalia, P.: A semiotic metrics suite for assessing the quality of ontologies. Data & Knowledge Engineering 55(1), 84–102 (2005) * [6] Cer, D., Yang, Y., Kong, S.y., Hua, N., Limtiaco, N., John, R.S., Constant, N., Guajardo-Céspedes, M., Yuan, S., Tar, C., et al.: Universal Sentence Encoder. arXiv preprint arXiv:1803.11175 (2018) * [7] Dividino, R.Q., Romanelli, M., Sonntag, D., et al.: Semiotic-based Ontology Evaluation Tool (S-OntoEval). In: LREC (2008) * [8] Drummond, N.: Stanford pizza ontology. https://protege.stanford.edu/ontologies/pizza/pizza.owl, accessed: 2021-03-28 * [9] Duque-Ramos, A., Fernández-Breis, J.T., Iniesta, M., Dumontier, M., Aranguren, M.E., Schulz, S., Aussenac-Gilles, N., Stevens, R.: Evaluation of the OQuaRE Framework for Ontology Quality. In: Expert Systems with Applications. vol. 40, pp. 2696–2703.
Elsevier (2013) * [10] Fenz, S., Goluch, G., Ekelhart, A., Riedl, B., Weippl, E.: Information security fortification by ontological mapping of the iso/iec 27001 standard. In: 13th Pacific Rim International Symposium on Dependable Computing (PRDC 2007). pp. 381–388. IEEE (2007) * [11] Gangemi, A., Catenacci, C., Ciaramita, M., Lehmann, J.: A theoretical framework for ontology evaluation and validation. In: SWAP. vol. 166, p. 16 (2005) * [12] Gómez-Pérez, A.: Evaluation of taxonomic knowledge in ontologies and knowledge bases (1999) * [13] Guarino, N., Welty, C.: Evaluating ontological decisions with ontoclean. Communications of the ACM 45(2), 61–65 (2002) * [14] Hearst, M.A.: Automatic Acquisition of Hyponyms from Large Text Corpora. In: Proceedings of the 14th conference on Computational linguistics. vol. 2, pp. 539–545. Association for Computational Linguistics (1992) * [15] Iyer, V., Mohan, L., Bhatia, M., Reddy, Y.R.: A survey on ontology enrichment from text. In: Proceedings of the 16th International Conference on Natural Language Processing. pp. 95–104. NLP Association of India, International Institute of Information Technology, Hyderabad, India (Dec 2019), https://aclanthology.org/2019.icon-1.11 * [16] Kiptoo, C.C.: Ontology enhancement using crowdsourcing: a conceptual architecture. International Journal of Crowd Science (2020) * [17] Kontokostas, D., Zaveri, A., Auer, S., Lehmann, J.: Triplecheckmate: A tool for crowdsourcing the quality assessment of linked data. In: International Conference on Knowledge Engineering and the Semantic Web. pp. 265–272. Springer (2013) * [18] Lantow, B.: OntoMetrics: Putting Metrics into Use for Ontology Evaluation. In: KEOD. pp. 186–191 (2016) * [19] Lozano-Tello, A., Gómez-Pérez, A.: Ontometric: A Method to Choose the Appropriate Ontology. In: Journal of Database Management. vol. 2, pp. 1–18. IDEA Group Publishing (2004) * [20] Maedche, A., Staab, S.: Ontology Learning for the Semantic Web. Intelligent Systems 16(2), 72–79 (2001) * [21] Makki, J., Alquier, A.M., Prince, V.: An nlp-based ontology population for a risk management generic structure. In: Proceedings of the 5th international conference on Soft computing as transdisciplinary science and technology. pp. 350–355 (2008) * [22] McDaniel, M., Storey, V.C., Sugumaran, V.: Assessing the Quality of Domain Ontologies: Metrics and an Automated Ranking System. Data & Knowledge Engineering 115, 32–47 (2018) * [23] Noy, N.F., McGuinness, D.L., et al.: Ontology development 101: A guide to creating your first ontology (2001) * [24] Noy, N.F., Mortensen, J., Musen, M.A., Alexander, P.R.: Mechanical turk as an ontology engineer? using microtasks as a component of an ontology-engineering workflow. In: Proceedings of the 5th Annual ACM Web Science Conference. pp. 262–271 (2013) * [25] Pittet, P., Barthélémy, J.: Exploiting users’ feedbacks: Towards a task-based evaluation of application ontologies throughout their lifecycle. In: International conference on knowledge engineering and ontology development. vol. 2 (2015) * [26] Poveda, M.: Catalogue of common pitfalls. http://oops.linkeddata.es/catalogue.jsp, accessed: 2021-03-28 * [27] Poveda-Villalón, M., Gómez-Pérez, A., Suárez-Figueroa, M.C.: Oops!: A pitfall-based system for ontology diagnosis. In: Innovations, Developments, and Applications of Semantic Web and Information Systems, pp. 120–148. IGI Global (2018) * [28] Poveda-Villalón, M., Suárez-Figueroa, M.C., Gómez-Pérez, A.: Validating Ontologies with OOPS! 
In: International Conference on Knowledge Engineering and Knowledge Management. pp. 267–281. Springer (2012) * [29] Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design patterns: Elements of reusable object-oriented software (1995) * [30] Sanagavarapu, L.M., Iyer, V., Reddy, Y.R.: OntoEnricher: A Deep Learning Approach for Ontology Enrichment from Unstructured Text. arXiv preprint arXiv:2102.04081 (2021) * [31] Schober, D., Tudose, I., Svatek, V., Boeker, M.: Ontocheck: verifying ontology naming conventions and metadata completeness in protégé 4. In: Journal of biomedical semantics. vol. 3, pp. 1–10. BioMed Central (2012) * [32] Tartir, S., Arpinar, I.B., Sheth, A.P.: Ontological Evaluation and Validation. In: Theory and applications of ontology: Computer applications, pp. 115–130. Springer (2010) * [33] Tibaut, A.: Ontology Evaluation. https://github.com/atibaut/ontology-evaluation (Online; accessed on 15-Mar-2021) * [34] Vrandečić, D.: Ontology Evaluation. In: Handbook on Ontologies, pp. 293–313. Springer (2009) * [35] Wohlgenannt, G., Sabou, M., Hanika, F.: Crowd-based ontology engineering with the ucomp protégé plugin. Semantic Web 7(4), 379–398 (2016) * [36] Wong, W., Liu, W., Bennamoun, M.: Ontology Learning from Text: A Look Back and into the Future. ACM Computing Surveys (CSUR) 44(4), 1–36 (2012) * [37] Zhang, Y., Saberi, M., Chang, E.: Semantic-based lightweight ontology learning framework: a case study of intrusion detection ontology. In: Proceedings of the international conference on web intelligence. pp. 1171–1177 (2017)
# Capacity Limitation and Optimization Strategy for Flexible Point-to-Multi-Point Optical Networks

Ji Zhou, Haide Wang, Liangchuan Li, Weiping Liu, Changyuan Yu, and Zhaohui Li

Manuscript received; revised. This work was supported in part by the National Natural Science Foundation of China (62371207, 62005102); Hong Kong Scholars Program (XJ2021018). (Corresponding authors: Ji Zhou and Liangchuan Li), (e-mail: <EMAIL_ADDRESS>; [email protected]). J. Zhou, H. Wang, and W. Liu are with the Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou 510632, China. Z. Xing and L. Li are with the Optical Research Department, Huawei Technologies Co Ltd, Dongguan, 523808, China. C. Yu is with the Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong. Z. Li is with the Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, Sun Yat-sen University, Guangzhou 510275, China.

###### Abstract

Point-to-multi-point (PtMP) optical networks have become the main solutions for network-edge applications such as passive optical networks and radio access networks. Entropy-loading digital subcarrier multiplexing (DSCM) is the core technology to achieve low latency and approach high capacity for flexible PtMP optical networks. However, the high peak-to-average power ratio of the entropy-loading DSCM signal limits the power budget and restricts the capacity; it can be reduced effectively by the clipping operation. In this paper, we derive the theoretical capacity limitation of flexible PtMP optical networks based on the entropy-loading DSCM signal. Meanwhile, an optimal clipping ratio for the clipping operation is acquired to approach the highest capacity limitation. Based on an accurate clipping-noise model under the optimal clipping ratio, we establish a three-dimensional look-up table for bit-error ratio, spectral efficiency, and link loss. Based on the three-dimensional look-up table, an optimization strategy is proposed to acquire optimal spectral efficiencies for achieving a higher capacity of the flexible PtMP optical networks.

###### Index Terms:

Flexible PtMP optical networks, theoretical capacity limitation, entropy-loading DSCM, three-dimensional look-up table, optimization strategy.

## I Introduction

Point-to-multi-point (PtMP) optical networks with a hub-to-leaves architecture have become the main solutions for network-edge applications such as optical access networks and optical metro networks [1, 2, 3]. Compared to traditional point-to-point (PtP) optical networks, PtMP optical networks require fewer optical transceivers and thus have lower capital expense (CapEx) and operating expense (OpEx) costs [4, 5, 6]. In current commercial PtMP optical access networks, only time-division multiple access (TDMA) has been widely applied, which is a statistical multiple-access scheme that makes full use of the bandwidth [7, 8, 9]. However, TDMA-based PtMP optical networks naturally have a high latency, which poses a challenge in latency-sensitive scenarios [10, 11, 12]. In particular, the low-latency requirement becomes increasingly strict in future optical networks [13, 14, 15]. Owing to the development of coherent optical techniques, entropy-loading digital subcarrier multiplexing (DSCM) has been commercially applied in 800Gb/s long-haul optical communications [16, 17, 18].
Furthermore, coherent entropy-loading DSCM can implement the revolutionary frequency-division multiple access (FDMA) for the PtMP optical networks [19, 20, 21]. For the leaves of the PtMP optical networks, FDMA can provide an independent frequency channel to achieve low latency, and the entropy loading allocates the maximal spectral efficiency to approach high capacity [22, 23, 24]. Meanwhile, an FDMA leaf can use a relatively low-bandwidth transceiver matching the subcarrier bandwidth, whereas a TDMA leaf needs a full-bandwidth transceiver matching the whole signal bandwidth. Unfortunately, the high peak-to-average power ratio (PAPR) of the entropy-loading DSCM signal causes a low optical power budget, thus limiting the capacity in the peak-power-constrained (PPC) PtMP optical networks [25, 26, 27]. In this paper, the entropy-loading digital subcarriers with well-matched spectral efficiencies are assigned to the leaves depending on their different link losses, which can approach the highest possible capacity for the flexible PtMP optical networks. Meanwhile, the clipping operation is used to reduce the PAPR of the entropy-loading DSCM signal, thus increasing the optical power budget and gaining more capacity. Furthermore, the theoretical capacity limitation and optimization strategy will be investigated, which can guide the optimal parameter setting for the flexible PtMP optical networks. The main contributions of this paper are as follows:

* • The theoretical capacity limitation is derived considering the clipping operation for the flexible PtMP optical networks. Meanwhile, an optimal clipping ratio is acquired based on the theoretical capacity limitation.
* • We establish a three-dimensional look-up table for bit-error ratio (BER), spectral efficiency, and link loss, and propose an optimization strategy to approach the capacity limitation of the flexible PtMP optical networks.

The remainder of this paper is organized as follows. The capacity limitation for flexible PtMP optical networks is derived in Section II. In Section III, an optimization strategy for flexible PtMP optical networks is introduced in detail. Finally, the paper is concluded in Section IV.

## II Capacity Limitation for Flexible PtMP Optical Networks

Figure 1 shows the schematic diagram of the DSCM-based flexible PtMP optical networks. At the hub, the optical DSCM signal is generated and sent to the leaves. The transmitted subcarriers are set to the same power. After the optical distribution network (ODN), the leaves select and detect their corresponding subcarriers by coherent optical detection. In practice, the optical signals to different leaves pass through different numbers of optical splitters and different lengths of fiber, leading to different link losses. Based on on-hand statistics of deployed PONs, the maximum difference of the link losses among the leaves is probably larger than 10 dB [28]. Therefore, the received signals at the different leaves have different powers and signal-to-noise ratios (SNRs). By considering the differences among the leaves, we will investigate the theoretical capacity limitation for the DSCM-based flexible PtMP optical networks, and simultaneously use the clipping operation to increase the capacity. Figure 1: The schematic diagram of the DSCM-based flexible PtMP optical networks with different link losses. At the hub, the DSCM signal is generated, of which the subcarriers are set to the same average power $P_{\text{t}}$.
After the ODN and fiber transmission, the $i$-th subcarrier is selected and detected by the $i$-th leaf. The received signal power of the $i$-th subcarrier can be defined as $P_{\text{r-}i}=P_{\text{t}}/Loss_{i}=P_{\text{DSCM}}/(N\times Loss_{i})$ (1) where ${P_{\text{DSCM}}}$ is the average power of the DSCM signal. $N$ is the subcarrier number in the DSCM signal. $Loss_{i}$ is the link loss of the $i$-th subcarrier. Assuming that the devices for the leaves have good uniformity, all the subcarriers suffer from the white noise with the same power. Therefore, the SNR of the DSCM signal on $i$-th subcarrier is expressed as $SNR_{i}=P_{\text{r-}i}/\sigma^{2}_{\text{n}}=P_{\text{DSCM}}/(N\times Loss_{i}\times\sigma^{2}_{\text{n}})$ (2) where $\sigma^{2}_{\text{n}}$ is the variance of the white noise on each subcarrier. Obviously, the increase of ${P_{\text{DSCM}}}$ directly improves the SNR. However, a high-PAPR DSCM signal has a low average power due to the constrained peak power in the PPC PtMP optical networks. The clipping operation can effectively reduce the PAPR of the DSCM signal to increase the average power, which can be expressed as $x_{\text{c}}=\left\\{\begin{array}[]{cc}A,&x~{}>~{}A\\\ x,&\left|x\right|\leq~{}A\\\ -A,&x<-A\end{array}\right.$ (3) where $A$ is the clipping amplitude. The clipping ratio can be defined as $20\times\text{log}_{10}(\eta)$ where $\eta$ is equal to $A/\sqrt{P_{\text{DSCM}}}$. After the clipping operation, the frequency-domain signal on each subcarrier is expressed as $X_{\text{c}}=\alpha\times X+Noise_{\text{c}}$ (4) where $X$ is the frequency-domain signal on each subcarrier before the clipping operation. $Noise_{\text{c}}$ denotes the clipping noise. The clipping attenuation $\alpha$ can be calculated by $\alpha=\frac{E(x_{\text{c}}\cdot x)}{E(x^{2})}=\frac{\int_{-\infty}^{\infty}x_{\text{c}}\cdot x\cdot p(x)\text{d}x}{P_{\text{DSCM}}}=1-2Q(\eta)$ (5) where $p(x)$ is the probability distribution function of the DSCM signal. Due to the central limit theorem, the DSCM signal with enough subcarriers becomes a Gaussian distribution with zero mean, which can be expressed as $p(x)=\frac{1}{\sqrt{2\pi P_{\text{DSCM}}}}e^{-\frac{x^{2}}{2P_{\text{DSCM}}}}.$ (6) $Q(x)$ denotes the Q function, which can be defined as $Q(x)=\int_{x}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}dx.$ (7) Figure 2: The simulation architecture of the flexible PtMP optical networks based on the entropy-loading DSCM. Based on the Eqs. (4) and (5), the average power $P_{\text{c}}$ of the clipping noise $Noise_{\text{c}}$ can be calculated by $\begin{split}P_{\text{c}}&=E(X_{\text{c}}^{2})-\alpha^{2}E(X^{2})\\\ &=\int_{-\infty}^{\infty}x_{\text{c}}^{2}\cdot p_{\text{c}}(x)dx-\alpha^{2}P_{\text{DSCM}}\\\ &=2A^{2}Q(\eta)+\int_{-A}^{A}x^{2}p(x)dx-\alpha^{2}P_{\text{DSCM}}\\\ &=2P_{\text{DSCM}}\left\\{Q(\eta)[1+\eta^{2}-2Q(\eta)]-\frac{\eta}{\sqrt{2\pi}}e^{-\frac{\eta^{2}}{2}}\right\\}\\\ \end{split}$ (8) where $p_{\text{c}}(x)$ is the probability distribution of the clipping DSCM signal, which can be defined as $p_{\text{c}}(x)=\left\\{\begin{array}[]{cc}p(x),&\left|x\right|\leq A\\\ Q(\eta)\delta(\left|x\right|-A),&\left|x\right|>A\end{array}\right.$ (9) where $\delta(x)$ is the Dirac delta function with a unit impulse. In the PPC PtMP optical networks, the signal peak is limited to constant value $A_{\text{p}}$ due to the nonuse of expensive optical amplifier. The peak amplitude $A$ of the clipping DSCM signal is matched to the $A_{\text{p}}$. 
We define a matching coefficient $\beta$ as $A_{\text{p}}/A$ for the clipping DSCM signal. The effective SNR on the $i$-th subcarrier of the clipping DSCM signal depends on the power of the clipping signal, clipping noise, and white noise, which can be calculated by $ESNR_{i}=\frac{\alpha^{2}\times\beta^{2}\times P_{\text{DSCM}}}{\beta^{2}\times P_{\text{c}}+N\times Loss_{i}\times\sigma_{\text{n}}^{2}}.$ (10) Obviously, the $ESNR_{i}$ depends on the clipping ratio $\eta$ and the link loss $Loss_{i}$. When the link loss of the $i$-th subcarrier is confirmed, there is an optimal clipping ratio to maximize the effective SNR of the $i$-th subcarrier. The optimal clipping ratios are different among the subcarriers with different link losses. Considering the effective SNRs of all the subcarriers, an optimal clipping ratio can be obtained to maximize the capacity of the DSCM-based flexible PtMP network. The theoretical capacity limitation of the DSCM-based flexible PtMP network can be defined as $C=\sum_{i=1}^{N}C_{i}=B\sum_{i=1}^{N}\text{log}_{2}(1+ESNR_{i})$ (11) where $B$ is the bandwidth of one subcarrier. The optimal clipping ratio to maximize the capacity can be acquired by $\eta_{\text{opt}}=\underset{\eta}{\arg\max}~{}B\sum_{i=1}^{N}\text{log}_{2}(1+ESNR_{i}).$ (12) When the link losses of the leaves are measured, the theoretical results based on Eq. (12) can provide the optimal clipping ratio to achieve the highest capacity for the DSCM-based flexible PtMP optical networks. Figure 3: The effective SNR versus the clipping ratio for the subcarriers with different link losses. The solid and dashed lines denote the simulation and theoretical results, respectively. Figure 2 shows the simulation architecture of the flexible PtMP optical networks based on the entropy-loading DSCM. First of all, the SNR of each leaf should be estimated to calculate the spectral efficiency for implementing entropy loading. At the transmitter, the probabilistic-shaping 64-level quadrature amplitude modulation (PS-64QAM) signal for each subcarrier is mapped depending on the calculated spectral efficiency of each leaf. After the root-raised-cosine (RRC) shaping and frequency upshift, the PS-64QAM-modulated subcarriers are multiplexed to generate an entropy-loading DSCM signal. The clipping operation is applied to the entropy-loading DSCM signal to reduce the PAPR. After the normalization to the same peak amplitude $A_{p}$, the different losses and the white noise with the same power are added to the signals on the subcarriers for the different leaves. At the receiver, the frequency downshift and matched filter are used to select the subcarrier and recover the PS-64QAM signal for each leaf. Finally, the recovered PS-64QAM signal is used to estimate the SNR and calculate the BER. Figure 3 depicts the effective SNR versus clipping ratio for the subcarriers with different link losses. The solid and dashed lines denote the simulation and theoretical results, respectively. The subcarrier number $N$ is set to 8. The losses of the leaves are set to the $\boldsymbol{Loss}$ of [1, 1.33, 1.74, 2.32, 3.05, 4.03, 5.25, 6.53]. The noise power $\sigma_{\text{n}}^{2}$ and the peak value $A_{p}$ are set to 0.0237 and 2.579, respectively. The theoretical effective SNR can be calculated by Eq. (10). Obviously, the effective SNR obtained by the simulations coincides well with the theoretical effective SNR.
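As a numerical illustration of the formulas above, the sketch below evaluates the theoretical ESNR of Eq. (10) over a grid of clipping ratios and locates the ratio that maximizes the capacity of Eq. (11), i.e., Eq. (12). It uses the parameter values quoted in the text ($N=8$, $\sigma_{\text{n}}^{2}=0.0237$, $A_{p}=2.579$, 8 GBaud per subcarrier), but since some normalization conventions (e.g., per-polarization power) are not restated here, the absolute numbers are illustrative and are not expected to reproduce Fig. 4 exactly.

```python
# Minimal numerical sketch of Eqs. (5), (8), and (10)-(12): theoretical ESNR per
# subcarrier and a grid search for the clipping ratio that maximizes the capacity.
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian Q-function, Eq. (7)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

N = 8
loss = np.array([1, 1.33, 1.74, 2.32, 3.05, 4.03, 5.25, 6.53])   # per-leaf link losses
sigma_n2 = 0.0237        # white-noise power per subcarrier
A_p = 2.579              # constrained peak amplitude
B = 8e9                  # symbol rate per subcarrier (Baud)
P_dscm = 1.0             # normalization; the ESNR of Eq. (10) is independent of this value

def total_capacity(clip_ratio_db):
    eta = 10.0 ** (clip_ratio_db / 20.0)          # clipping ratio defined as 20*log10(eta)
    A = eta * np.sqrt(P_dscm)                     # clipping amplitude
    alpha = 1.0 - 2.0 * Q(eta)                    # Eq. (5): clipping attenuation
    P_c = 2.0 * P_dscm * (Q(eta) * (1.0 + eta**2 - 2.0 * Q(eta))
                          - eta / np.sqrt(2.0 * np.pi) * np.exp(-eta**2 / 2.0))   # Eq. (8)
    beta = A_p / A                                # matching coefficient
    esnr = (alpha**2 * beta**2 * P_dscm) / (beta**2 * P_c + N * loss * sigma_n2)  # Eq. (10)
    return B * np.sum(np.log2(1.0 + esnr))        # Eq. (11)

ratios_db = np.arange(3.0, 13.0 + 1e-9, 0.1)
caps = np.array([total_capacity(r) for r in ratios_db])
eta_opt_db = ratios_db[np.argmax(caps)]           # Eq. (12) solved by grid search
print(f"optimal clipping ratio (under these assumptions): {eta_opt_db:.1f} dB, "
      f"capacity {caps.max() / 1e9:.1f} Gb/s")
```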
The effective SNR first improves as the clipping ratio decreases, owing to the increasing signal power, and then degrades with a further decrease of the clipping ratio, due to the increasing clipping noise. There is an optimal clipping ratio to maximize the effective SNR for each subcarrier. However, the optimal clipping ratios are different among the subcarriers with different losses. Figure 4: The theoretical capacity limitation versus the clipping ratio for the DSCM-based flexible PtMP optical networks when the symbol rate per subcarrier is 8Gbaud. The theoretical capacity limitation can be calculated by Eq. (11) for the flexible PtMP network. Meanwhile, the optimal clipping ratio can be obtained by Eq. (12) to achieve the highest capacity limitation. Fig. 4 shows the theoretical capacity limitation versus clipping ratio for the DSCM-based flexible PtMP optical networks when the symbol rate per subcarrier is 8Gbaud. When the clipping operation is not used (i.e., a large clipping ratio such as 13dB is set), the capacity limitation of the DSCM-based flexible PtMP optical networks is approximately 263Gb/s. When the clipping operation is used and the clipping ratio is set to the optimal value of $7$dB, the capacity limitation of the DSCM-based flexible PtMP optical networks is approximately 361Gb/s. The capacity limitation of the DSCM-based flexible PtMP optical networks with the optimal clipping operation is approximately 98Gb/s higher than that of the DSCM-based flexible PtMP optical networks without the clipping operation.

## III Optimization Strategy for Flexible PtMP Optical Networks

In this section, we will investigate the optimization strategy to increase the capacity of the DSCM-based flexible PtMP optical networks. A three-dimensional look-up table for BER, spectral efficiency, and link loss will be established to acquire the spectral efficiencies for the subcarriers with different losses under the same targeted BER. Figure 5 depicts the BERs of the subcarriers for the DSCM-based flexible PtMP optical networks without and with the clipping operation. When the clipping operation is not employed, the spectral efficiencies of the 8 subcarriers are set to the $\boldsymbol{SE}$ of [4.80, 4.40, 4.00, 3.60, 3.20, 2.80, 2.40, 2.00] bits/symbol depending on the $\boldsymbol{Loss}$ of [1, 1.33, 1.74, 2.32, 3.05, 4.03, 5.25, 6.53]. Under the $\boldsymbol{SE}$, the BER of each subcarrier is at the forward error correction (FEC) limit with 7% overhead. When the clipping operation is used and the clipping ratio is set to the optimal 7dB, the BER performance of each subcarrier is much improved, particularly for the subcarriers with low spectral efficiency. To obtain a higher capacity, the spectral efficiency of each subcarrier can be increased until the BER approaches the 7% FEC limit. In this section, we will establish a three-dimensional look-up table for the BER, spectral efficiency, and link loss. When the link losses of the leaves and the targeted BERs of the subcarriers are confirmed, the corresponding spectral efficiency for each subcarrier can be obtained from the three-dimensional look-up table to achieve the highest capacity for the DSCM-based flexible PtMP optical networks. Figure 5: The BERs of the subcarriers for the DSCM-based flexible PtMP optical networks without and with the clipping operation. For obtaining the three-dimensional look-up table, a theoretical BER should be derived considering the white noise and clipping noise.
The PS-64QAM signal can be decomposed into independent in-phase and quadrature amplitudes, which are PS 8-level pulse amplitude modulation (PS-8PAM) signals. Gray coding is applied to generate the PS-8PAM signals. Fig. 6 (a) shows the probability density function of the in-phase PS-8PAM component of the PS-64QAM in the clipping entropy-loading DSCM signal. It demonstrates that the clipping noise does not follow a Gaussian distribution. Figs. 6 (b)-(e) depict the probability density functions of the clipping noise on the amplitudes of 1, 3, 5, and 7, respectively. Evidently, the distributions of the clipping noise differ among the amplitudes. A piecewise power exponential model is used to obtain accurate fitting curves for the probability density function of the clipping noise on the different amplitudes, as shown by the red lines in Figs. 6 (b)-(e). Based on the piecewise power exponential model, the theoretical probability density function of the clipping noise can be defined as

$f_{Y_{\text{c}}}(y\mid k)=\begin{cases}A_{k,1}\times e^{-\frac{\left|y-\mu_{k,1}\right|^{b_{k,1}}}{2\sigma_{k,1}^{2}}},&y\leq D_{k}\\ A_{k,2}\times e^{-\frac{\left|y-\mu_{k,2}\right|^{b_{k,2}}}{2\sigma_{k,2}^{2}}},&y>D_{k}\end{cases}$ (13)

where $k$ takes one of the amplitudes [$-7$, $-5$, $-3$, $-1$, 1, 3, 5, 7] of the in-phase component of the PS-64QAM in the clipping entropy-loading DSCM signal, and $D_{k}$ is the value corresponding to the highest probability near the amplitude $k$. $A_{k,1}$, $\mu_{k,1}$, $b_{k,1}$, and $\sigma_{k,1}^{2}$ are the fitting parameters of the signal to the left of $D_{k}$, while $A_{k,2}$, $\mu_{k,2}$, $b_{k,2}$, and $\sigma_{k,2}^{2}$ are those of the signal to the right of $D_{k}$, respectively.

Figure 6: (a) Probability density function (PDF) of the in-phase PS-8PAM component of the PS-64QAM in the clipping entropy-loading DSCM signal. The probability density functions of the clipping noise on the amplitudes of (b) 1, (c) 3, (d) 5, and (e) 7, respectively. The red lines denote the fitting curves based on the piecewise power exponential model.

Based on Eq. (13), the fitting curves are used to acquire the fitting parameters $A_{k,1}$, $\mu_{k,1}$, $b_{k,1}$, $\sigma_{k,1}^{2}$, $A_{k,2}$, $\mu_{k,2}$, $b_{k,2}$, $\sigma_{k,2}^{2}$, and $D_{k}$ of the theoretical probability density function of the clipping noise. Meanwhile, to ensure that $f_{Y_{\text{c}}}(y\mid k)$ integrates to $1$, the $A_{k,i}$ should be updated by

$A_{k,i}=\frac{A_{k,i}}{\int_{-\infty}^{\infty}f_{Y_{\text{c}}}(y\mid k)\,\text{d}y}$ (14)

where $i=1,2$ denotes the left and the right parts of the probability density function, respectively. The theoretical BER can then be derived from the probability density function of the combined noise, obtained from the densities of the white noise and the clipping noise. The white noise and the clipping noise can be considered two independent continuous random variables $Y_{\text{n}}$ and $Y_{\text{c}}$. The white noise follows a Gaussian distribution, and its probability density function $f_{Y_{\text{n}}}$ is $1/(\sqrt{2\pi}\sigma_{\text{n}})\,e^{-y^{2}/(2\sigma_{\text{n}}^{2})}$. The probability density function of the clipping noise, $f_{Y_{\text{c}}}$, is given by Eq. (13).
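As an illustration, the piecewise model of Eq. (13) can be fitted to an empirical histogram of the clipping noise with a standard least-squares routine, followed by the renormalization of Eq. (14). The sketch below is a minimal example under that assumption; the function names, binning, and initial guesses are illustrative choices, not the procedure used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def pw_exp(y, A1, mu1, b1, s1, A2, mu2, b2, s2, Dk):
    """Piecewise power exponential PDF model of Eq. (13); s1, s2 play the role of 2*sigma^2/2."""
    left  = A1 * np.exp(-np.abs(y - mu1) ** b1 / (2.0 * s1))
    right = A2 * np.exp(-np.abs(y - mu2) ** b2 / (2.0 * s2))
    return np.where(y <= Dk, left, right)

def fit_clipping_noise(noise_samples):
    """Fit Eq. (13) to a histogram of the clipping noise observed on one
    amplitude level, then rescale the amplitudes per Eq. (14)."""
    hist, edges = np.histogram(noise_samples, bins=200, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    Dk0 = centers[np.argmax(hist)]                 # mode as initial guess for D_k
    p0 = [hist.max(), Dk0, 2.0, np.var(noise_samples)] * 2 + [Dk0]
    popt, _ = curve_fit(pw_exp, centers, hist, p0=p0, maxfev=20000)
    # Eq. (14): rescale both amplitude parameters so the density integrates to one
    grid = np.linspace(centers[0] - 5.0, centers[-1] + 5.0, 20001)
    norm = np.trapz(pw_exp(grid, *popt), grid)
    popt[0] /= norm
    popt[4] /= norm
    return popt
```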
The combined noise is the random variable $Z=Y_{\text{n}}+Y_{\text{c}}$, and its probability density function $f_{Z}$ can be calculated by

$f_{Z}(z)=\int_{-\infty}^{\infty}f_{Y_{\text{n}}}(y)f_{Y_{\text{c}}}(z-y)\,\text{d}y.$ (15)

Substituting $f_{Y_{\text{n}}}$ and $f_{Y_{\text{c}}}$ into Eq. (15), the probability density function $f_{Z}$ can be expressed as

$f_{Z}(z\mid k)=\frac{A_{k,1}}{\sqrt{2\pi}\sigma_{\text{n}}}\int_{z-D_{k}}^{+\infty}e^{-\frac{y^{2}}{2\sigma_{\text{n}}^{2}}}e^{-\frac{\left|z-y-\mu_{k,1}\right|^{b_{k,1}}}{2\sigma_{k,1}^{2}}}\text{d}y+\frac{A_{k,2}}{\sqrt{2\pi}\sigma_{\text{n}}}\int_{-\infty}^{z-D_{k}}e^{-\frac{y^{2}}{2\sigma_{\text{n}}^{2}}}e^{-\frac{\left|z-y-\mu_{k,2}\right|^{b_{k,2}}}{2\sigma_{k,2}^{2}}}\text{d}y$ (16)

The BER of the PS-64QAM signal is similar to that of the decomposed PS-8PAM signal. For the PS-8PAM signal, the probabilities of the amplitudes are different and can be defined as $\boldsymbol{Pr}=[Pr_{\pm 1},Pr_{\pm 3},Pr_{\pm 5},Pr_{\pm 7}]$. The theoretical error ratio of the first bit of the PS-8PAM signal can be calculated by

$E_{\text{b},1}=2\left[Pr_{1}\int_{-\infty}^{-d}f_{Z}(z\mid 1)\text{d}z+Pr_{3}\int_{-\infty}^{-3d}f_{Z}(z\mid 3)\text{d}z+Pr_{5}\int_{-\infty}^{-5d}f_{Z}(z\mid 5)\text{d}z+Pr_{7}\int_{-\infty}^{-7d}f_{Z}(z\mid 7)\text{d}z\right]$ (17)

where $d$ is half the Euclidean distance between adjacent amplitude levels. Similarly, the theoretical error ratio of the second bit can be calculated by

$E_{\text{b},2}=2Pr_{1}\left[\int_{3d}^{\infty}f_{Z}(z\mid 1)\text{d}z+\int_{-\infty}^{-5d}f_{Z}(z\mid 1)\text{d}z\right]+2Pr_{3}\left[\int_{d}^{\infty}f_{Z}(z\mid 3)\text{d}z+\int_{-\infty}^{-7d}f_{Z}(z\mid 3)\text{d}z\right]+2Pr_{5}\int_{-9d}^{-d}f_{Z}(z\mid 5)\text{d}z+2Pr_{7}\int_{-11d}^{3d}f_{Z}(z\mid 7)\text{d}z.$ (18)

The theoretical error ratio of the third bit can be calculated by Eq. (19):

$E_{\text{b},3}=2\left\{Pr_{1}\left[\int_{d}^{5d}f_{Z}(z\mid 1)\text{d}z+\int_{-7d}^{-3d}f_{Z}(z\mid 1)\text{d}z\right]+Pr_{3}\left[\int_{3d}^{\infty}f_{Z}(z\mid 3)\text{d}z+\int_{-5d}^{-d}f_{Z}(z\mid 3)\text{d}z+\int_{-\infty}^{-9d}f_{Z}(z\mid 3)\text{d}z\right]+Pr_{5}\left[\int_{d}^{\infty}f_{Z}(z\mid 5)\text{d}z+\int_{-7d}^{-3d}f_{Z}(z\mid 5)\text{d}z+\int_{-\infty}^{-11d}f_{Z}(z\mid 5)\text{d}z\right]+Pr_{7}\left[\int_{-5d}^{-d}f_{Z}(z\mid 7)\text{d}z+\int_{-13d}^{-9d}f_{Z}(z\mid 7)\text{d}z\right]\right\}$ (19)

As shown in Eqs. (17)-(19), $d$ must be determined before calculating the theoretical error ratios of the three bits of the PS-8PAM signal. The relationship between $d$ and the average power of the PS-64QAM signal can be defined as

$d=\sqrt{\frac{P_{\text{PS-64QAM}}}{4\sum_{i=0}^{3}(2i+1)^{2}\times Pr_{2i+1}}},$ (20)

where $P_{\text{PS-64QAM}}$ is the average power of the PS-64QAM signal on the subcarrier, which can be calculated by $\alpha^{2}\times\beta^{2}\times P_{\text{DSCM}}/(N\times Loss_{i})$. Finally, the theoretical BER of the PS-64QAM signal on the subcarrier can be calculated by

$E_{\text{b}}=(E_{\text{b},1}+E_{\text{b},2}+E_{\text{b},3})/3.$ (21)

Figure 7: Three-dimensional look-up table for BER, spectral efficiency (SE), and link loss under the optimal clipping ratio.

For the PS-64QAM signal, the spectral efficiency corresponds to a specific set of amplitude probabilities $\boldsymbol{Pr}$.
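The expressions above can be evaluated numerically. The sketch below is a minimal illustration under the assumption that the per-level clipping-noise densities of Eq. (13) have already been sampled on a common amplitude grid; it computes the combined-noise density of Eqs. (15)-(16) by discrete convolution and then the bit error ratios of Eqs. (17)-(21) by numerical integration. All function names and the grid handling are illustrative, not the authors' implementation.

```python
import numpy as np

def combined_noise_pdf(z_grid, clip_pdf, sigma_n):
    """Eq. (15): convolve the white-noise Gaussian with the fitted clipping-noise
    density, both evaluated on the same uniform grid z_grid."""
    dy = z_grid[1] - z_grid[0]
    gauss = np.exp(-z_grid**2 / (2 * sigma_n**2)) / (np.sqrt(2 * np.pi) * sigma_n)
    return np.convolve(clip_pdf, gauss, mode="same") * dy

def tail(z_grid, fz, lo, hi):
    """Probability mass of the combined noise in [lo, hi] (trapezoidal rule)."""
    mask = (z_grid >= lo) & (z_grid <= hi)
    return np.trapz(fz[mask], z_grid[mask])

def theoretical_ber(z_grid, fz_by_level, pr, d):
    """Eqs. (17)-(21) for Gray-coded PS-8PAM: pr = [Pr1, Pr3, Pr5, Pr7],
    d is the half-spacing from Eq. (20), and fz_by_level maps the amplitude
    level k in {1, 3, 5, 7} to its combined-noise density on z_grid."""
    inf = np.inf
    f1, f3, f5, f7 = (fz_by_level[k] for k in (1, 3, 5, 7))
    t = lambda f, lo, hi: tail(z_grid, f, lo, hi)
    eb1 = 2 * (pr[0]*t(f1, -inf, -d) + pr[1]*t(f3, -inf, -3*d)
               + pr[2]*t(f5, -inf, -5*d) + pr[3]*t(f7, -inf, -7*d))
    eb2 = (2*pr[0]*(t(f1, 3*d, inf) + t(f1, -inf, -5*d))
           + 2*pr[1]*(t(f3, d, inf) + t(f3, -inf, -7*d))
           + 2*pr[2]*t(f5, -9*d, -d) + 2*pr[3]*t(f7, -11*d, 3*d))
    eb3 = 2 * (pr[0]*(t(f1, d, 5*d) + t(f1, -7*d, -3*d))
               + pr[1]*(t(f3, 3*d, inf) + t(f3, -5*d, -d) + t(f3, -inf, -9*d))
               + pr[2]*(t(f5, d, inf) + t(f5, -7*d, -3*d) + t(f5, -inf, -11*d))
               + pr[3]*(t(f7, -5*d, -d) + t(f7, -13*d, -9*d)))
    return (eb1 + eb2 + eb3) / 3.0
```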
As shown in Eqs. (17)-(19), $E_{\text{b},i}$ (with $i$ from 1 to 3) depends on $\boldsymbol{Pr}$ (i.e., the spectral efficiency) and on $d$. Meanwhile, as Eq. (20) shows, $d$ depends on $Loss_{i}$ and on $\boldsymbol{Pr}$ (i.e., the spectral efficiency) under a given clipping ratio. Consequently, the theoretical BER of the PS-64QAM signal is determined by the link loss and the spectral efficiency. Fig. 7 depicts the three-dimensional look-up table for BER, spectral efficiency, and link loss under the optimal clipping ratio for the flexible PtMP optical networks. As shown in Fig. 4, the optimal clipping ratio can be determined from the link losses of the leaves to achieve the highest capacity limitation of the flexible PtMP optical networks, and the three-dimensional look-up table is then established by Eq. (21) under this optimal clipping ratio. As Fig. 5 shows, the BER of each subcarrier for the DSCM-based flexible PtMP optical networks with the clipping operation has a large margin with respect to the 7% FEC limit. Therefore, when the targeted BER is set to the 7% FEC limit, the spectral efficiency of each subcarrier can be further increased based on the three-dimensional look-up table.

Algorithm 1 Optimization strategy for the capacity of flexible point-to-multipoint optical networks
1: Input: Subcarrier number $N$; link losses $\boldsymbol{Loss}$; noise variance $\sigma^{2}_{\text{n}}$; signal power $P_{\text{DSCM}}$; targeted BER $BER_{\text{T}}$; maximum spectral efficiency $SE_{\text{max}}$; spectral-efficiency updating step size $\Delta SE$; three-dimensional look-up tables $\boldsymbol{LUT}(BER, SE, Loss)$.
2: Output: Optimal clipping ratio $\eta_{\text{opt}}$ and optimal spectral efficiencies $\boldsymbol{SE}$.
3: Calculate the effective SNRs $\boldsymbol{ESNR}$ of the subcarriers by Eq. (10) based on the subcarrier number $N$, link losses $\boldsymbol{Loss}$, noise variance $\sigma^{2}_{\text{n}}$, and signal power $P_{\text{DSCM}}$.
4: Calculate the optimal clipping ratio $\eta_{\text{opt}}$ by Eq. (12) based on the $\boldsymbol{ESNR}$.
5: Choose the corresponding three-dimensional look-up table $LUT(BER,SE,Loss)$ based on the optimal clipping ratio $\eta_{\text{opt}}$. $\triangleright$ Next, the iterative algorithm is used to find the optimal $\boldsymbol{SE}$ based on $BER_{\text{T}}$ and $\boldsymbol{Loss}$ in $LUT$.
6: Initialize: $\boldsymbol{SE}:=SE_{\text{max}}\times\text{ones}(1,N)$
7: for $i=1$ to $N$ do
8:  Initialize $BER_{i}$ by $(SE_{i},Loss_{i})$ in $LUT$
9:  while $BER_{i}>BER_{\text{T}}$ do
10:   Update $SE_{i}\leftarrow SE_{i}-\Delta SE$
11:   Update $BER_{i}$ by $(SE_{i},Loss_{i})$ in the $LUT$
12:  end while
13: end for
14: return Optimal clipping ratio $\eta_{\text{opt}}$ and optimal spectral efficiencies $\boldsymbol{SE}$

Algorithm 1 shows the proposed optimization strategy for increasing the capacity of the flexible PtMP optical networks. The inputs of the algorithm are the subcarrier number $N$, link losses $\boldsymbol{Loss}$, noise variance $\sigma^{2}_{\text{n}}$, signal power $P_{\text{DSCM}}$, targeted BER $BER_{\text{T}}$, maximum spectral efficiency $SE_{\text{max}}$, spectral-efficiency updating step size $\Delta SE$, and the three-dimensional look-up tables $\boldsymbol{LUT}(BER, SE, Loss)$. The first step of the proposed algorithm is to calculate the effective SNRs $\boldsymbol{ESNR}$ of the subcarriers by Eq. (10) based on the subcarrier number $N$, link losses $\boldsymbol{Loss}$, noise variance $\sigma^{2}_{\text{n}}$, and signal power $P_{\text{DSCM}}$.
In the second step, the optimal clipping ratio $\eta_{\text{opt}}$ is calculated by Eq. (12) based on the $\boldsymbol{ESNR}$. In the third step, the corresponding three-dimensional look-up table $LUT(BER,SE,Loss)$ is chosen based on the optimal clipping ratio $\eta_{\text{opt}}$. The last step uses an iterative procedure to find the optimal $\boldsymbol{SE}$ from $BER_{\text{T}}$ and $\boldsymbol{Loss}$ using the $LUT$. In this procedure, the spectral efficiency of every subcarrier is initialized to the maximum spectral efficiency $SE_{\text{max}}$ (i.e., 6 bits/symbol for 64QAM). The outer iterations traverse all the subcarriers to find the optimized spectral efficiencies $\boldsymbol{SE}$. In the inner iterations, the spectral efficiency $SE_{i}$ is reduced by the step size $\Delta SE$ in each iteration until the BER obtained from the $LUT$ at $(SE_{i},Loss_{i})$ falls below the targeted BER $BER_{\text{T}}$. After the outer and inner iterations, the optimal spectral efficiencies $\boldsymbol{SE}$ are obtained for all the subcarriers of the flexible PtMP optical network. Finally, the optimal clipping ratio $\eta_{\text{opt}}$ and the optimal spectral efficiencies $\boldsymbol{SE}$ are returned.

Figure 8: The BERs of the subcarriers for the DSCM-based flexible PtMP optical networks after improving the spectral efficiency by using a conventional look-up table (LUT) and the new look-up table.

Figure 8 depicts the BERs of the subcarriers for the DSCM-based flexible PtMP optical networks after improving the spectral efficiency by using a conventional look-up table and the new look-up table. For the conventional look-up table, the clipping noise is assumed to follow a Gaussian distribution. Based on the conventional look-up table, the spectral efficiencies of the 8 subcarriers are set to [5.90, 5.72, 5.49, 5.19, 4.87, 4.54, 4.20, 3.90] bits/symbol, respectively. However, the BERs of the subcarriers cannot all stay below the targeted 7% FEC limit. Thus, the Gaussian assumption is not accurate for the clipping noise. For the new look-up table, the clipping noise is modeled by the distribution of Eq. (13). When the new look-up table in Fig. 7 is used, the BERs of all the subcarriers are below and close to the targeted 7% FEC limit. Therefore, the new look-up table is more precise than the conventional one. When the clipping operation is not used, the spectral efficiencies of the 8 subcarriers are set to [4.80, 4.40, 4.00, 3.60, 3.20, 2.80, 2.40, 2.00] bits/symbol, respectively, and the corresponding capacity is approximately 217.6 Gb/s (i.e., 203.3 Gb/s excluding the 7% FEC overhead) for the flexible PtMP optical networks based on the 8$\times$8 Gbaud DSCM signal. When the new look-up table is employed, the spectral efficiencies of the 8 subcarriers are set to [5.65, 5.51, 5.27, 5.04, 4.75, 4.38, 4.08, 3.80] bits/symbol, respectively, and the corresponding capacity is approximately 307.84 Gb/s (i.e., 287.7 Gb/s excluding the 7% FEC overhead). Using the optimal clipping operation and the new look-up table, the capacity of the DSCM-based flexible PtMP optical networks is approximately 90.2 Gb/s higher than that without the clipping operation, which approaches the theoretical result in Section II. Thus, the optimal clipping operation and the new look-up table yield an approximately 41% higher capacity.

## IV Conclusion

In this paper, we derive the theoretical capacity limitation versus the link losses for the flexible PtMP optical networks based on the entropy-loading DSCM signal.
Meanwhile, clipping with the optimal clipping ratio reduces the PAPR of the entropy-loading DSCM signal and improves the effective SNR, approaching the highest capacity limit. Based on the piecewise power exponential model of the clipping noise, we establish a three-dimensional look-up table for the bit-error ratio, spectral efficiency, and link loss under the optimal clipping ratio. Building on this look-up table, an optimization strategy is proposed to acquire the optimal spectral efficiencies and achieve a higher capacity for the flexible PtMP optical networks.
# Data Leakage in Tabular Federated Learning

Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev
ETH Zurich, Zurich, Switzerland

###### Abstract

While federated learning (FL) promises to preserve privacy in distributed training of deep learning models, recent work in the image and NLP domains showed that training updates leak private data of participating clients. At the same time, most high-stakes applications of FL (e.g., legal and financial) use tabular data. Compared to the NLP and image domains, reconstruction of tabular data poses several unique challenges: (i) categorical features introduce a significantly more difficult mixed discrete-continuous optimization problem, (ii) the mix of categorical and continuous features causes high variance in the final reconstructions, and (iii) structured data makes it difficult for the adversary to judge reconstruction quality. In this work, we tackle these challenges and propose the first comprehensive reconstruction attack on tabular data, called TabLeak. TabLeak is based on three key ingredients: (i) a softmax structural prior, implicitly converting the mixed discrete-continuous optimization problem into an easier fully continuous one, (ii) a way to reduce the variance of our reconstructions through a pooled ensembling scheme exploiting the structure of tabular data, and (iii) an entropy measure which can successfully assess reconstruction quality. Our experimental evaluation demonstrates the effectiveness of TabLeak, reaching state-of-the-art results on four popular tabular datasets. For instance, on the Adult dataset, we improve attack accuracy by 10% compared to the baseline on the practically relevant batch size of 32 and further obtain non-trivial reconstructions for batch sizes as large as 128. Our findings are important as they show that FL on tabular data, which often carries high privacy risks, is highly vulnerable.

## 1 Introduction

Federated Learning (FL) (McMahan et al., 2017) has emerged as the most prominent approach to training machine learning models collaboratively without requiring sensitive data of different parties to be sent to a single centralized location. While prior work has examined privacy leakage in federated learning in the context of computer vision (Zhu et al., 2019; Geiping et al., 2020; Yin et al., 2021) and natural language processing (Dimitrov et al., 2022a; Gupta et al., 2022; Deng et al., 2021), many applications of FL rely on large tabular datasets that include highly sensitive personal data such as financial information and health status (Borisov et al., 2021; Rieke et al., 2020; Long et al., 2021). However, no prior work has studied the issue of privacy leakage in the context of tabular data, a cause of concern for public institutions which have recently launched a competition (https://petsprizechallenges.com/) with a 1.6 mil. USD prize to develop privacy-preserving FL solutions for fraud detection and infection risk prediction, both involving tabular data.

#### Key challenges

Leakage attacks often rely on solving optimization problems whose solutions are the desired sensitive data points.
Unlike other data types, tabular data poses unique challenges to solving these problems because: (i) the reconstruction is a solution to a mixed discrete-continuous optimization problem, in contrast to other domains where the problem is fully continuous (pixels for images and embeddings for text), (ii) there is high variance in the final reconstructions because, uniquely to tabular data, discrete changes in the categorical features significantly change the optimization trajectory, and (iii) assessing the quality of reconstructions is harder compared to images and text - e.g. determining whether a person with given reconstructed characteristics exists is difficult. Together, these challenges imply that it is difficult to make existing attacks work on tabular data. #### This work In this work, we propose the first comprehensive leakage attack on tabular data in the FL setting, addressing the previously mentioned challenges. We provide an overview of our approach in Fig. 1, showing the reconstruction of a client’s private training data point $x=[\texttt{male, 18, white}]$, from the corresponding training update $\nabla f$ received by the server. In Step 1, we create $N$ separate optimization problems, each assigning different initial values $z^{0}_{1},\dots z^{0}_{N}$ to the optimization variables, representing our reconstruction of the client’s one-hot encoded data, $c(x)$. To address the first challenge of tabular data leakage, we transform the mixed discrete- continuous optimization problem into a fully continuous one, by passing our current reconstructions $z_{1}^{t},\dots,z_{N}^{t}$ through a per-feature softmax $\sigma$ at every step $t$. Using the softmaxed data $\sigma(z^{t})$, we take a gradient step to minimize the reconstruction loss, which compares the received client update $\nabla f$ with a simulated client update computed on $\sigma(z^{t})$. In Step 2, we reduce the variance of the final reconstruction by performing pooling over the $N$ different solutions $z_{1},z_{2},...,z_{N}$, thus tackling the second challenge. In Step 3, we address the challenge of assessing the fidelity of our reconstructions. We rely on the observation that often when our proposed reconstructions $z_{1},z_{2},...,z_{N}$ agree they also match the true client data, $c(x)$. We measure the agreement using entropy. In the example above, we see that the features sex and age produced a low entropy distribution. Therefore we assign high confidence to these results (green arrows). In contrast, the reconstruction of the feature race receives a low confidence rating (orange arrow); rightfully so, as the reconstruction is incorrect. We implemented our approach in an end-to-end attack called TabLeak and evaluated it on several tabular datasets. Our attack is highly effective: it can obtain non-trivial reconstructions for batch sizes as large as 128, and on many practically relevant batch sizes such as 32, it improved reconstruction accuracy by up to 10% compared to the baseline. Overall, our findings show that FL is highly vulnerable when applied to tabular data. #### Main contributions Our main contributions are: * • Novel insights enabling efficient attacks on FL with tabular data: using softmax to make the optimization problem fully continuous, ensembling to reduce the variance, and entropy to assess the reconstructions. * • An implementation of our approach into an end-to-end tool called TabLeak. 
* • Extensive experimental evaluation, demonstrating effectiveness of TabLeak at reconstructing sensitive client data on several popular tabular datasets. Figure 1: Overview of TabLeak. Our approach transforms the optimization problem into a fully continuous one by optimizing continuous versions of the discrete features, obtained by applying softmax (Step 1, middle boxes), resulting in $N$ candidate solutions (Step 1, bottom). Then, we pool together an ensemble of $N$ different solutions $z_{1},z_{2},...,z_{N}$ obtained from the optimization to reduce the variance of the reconstruction (Step 2). Finally, we assess the quality of the reconstruction by computing the entropy from the feature distributions in the ensemble (Step 3). ## 2 Background and Related Work In this section, we provide the necessary technical background for our work, introduce the notation used throughout the paper, and present the related work in this field. #### Federated Learning Federated Learning (FL) is a training protocol developed to facilitate the distributed training of a parametric model while preserving the privacy of the data at source (McMahan et al., 2017). Formally, we have a parametric function $f_{\theta}$, where $\theta\in\Theta$ are the (network) parameters and $f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}$. Given a dataset as the union of private datasets of clients $\mathcal{D}=\bigcup_{k=1}^{K}\mathcal{D}_{k}$, we now wish to find a $\theta^{*}\in\Theta$ such that: $\theta^{*}=\operatorname*{arg\,min}_{\theta\in\Theta}\;\frac{1}{N}\sum_{(x_{i},y_{i})\in\mathcal{D}}\,\mathcal{L}(f_{\theta}(x_{i}),y_{i}),$ (1) in a distributed manner, i.e. without collecting the dataset $\mathcal{D}$ in a central database. McMahan et al. (2017) propose two training algorithms: FedSGD (a similar algorithm was also proposed by Shokri and Shmatikov (2015)) and FedAvg, that allow for the distributed training of $f_{\theta}$, while keeping the data partitions $\mathcal{D}_{k}$ at client sources. The two protocols differ in how the clients compute their local updates in each step of training. In FedSGD, each client calculates the update gradient with respect to a randomly selected batch of their own data and shares the resulting gradient with the server. In FedAvg, the clients conduct a few epochs of local training on their own data before sharing their resulting parameters with the server. After the server has received the gradients/parameters from the clients, it aggregates them, updates the model, and broadcasts it to the clients. In each case, this process is repeated until convergence, where FedAvg usually requires fewer rounds of communication. #### Data Leakage Attacks Although FL was designed with the goal of preserving the privacy of clients’ data, recent work has uncovered substantial vulnerabilities. Melis et al. (2019) first presented how one can infer certain properties of the clients’ private data in FL. Later, Zhu et al. (2019) demonstrated that an honest but curious server can use the current state of the model and the client gradients to reconstruct the clients’ data, breaking the main privacy promise of Federated Learning (FL). Under this threat model, there has been extensive research on designing tailored attacks for images (Geiping et al., 2020; Geng et al., 2021; Huang et al., 2021; Jin et al., 2021; Balunovic et al., 2022; Yin et al., 2021; Zhao et al., 2020; Jeon et al., 2021; Dimitrov et al., 2022b) and natural language (Deng et al., 2021; Dimitrov et al., 2022a; Gupta et al., 2022). 
However, no prior work has comprehensively dealt with tabular data, despite its significance in real-world high-stakes applications (Borisov et al., 2021). Some works also consider a threat scenario where the malicious server is allowed to change the model or the updates communicated to the clients (Wen et al., 2022; Fowl et al., 2022); but in this work we focus on the honest-but-curious setting. In training with FedSGD, given the model $f_{\theta}$ at an iteration $t$ and the gradient $\nabla_{\theta}\,\mathcal{L}(f_{\theta}(x),y)$ of some client, we solve the following optimization problem to retrieve the client’s private data: $\hat{x},\,\hat{y}=\operatorname*{arg\,min}_{(x^{\prime},y^{\prime})\in\mathcal{X}\times\mathcal{Y}}\mathcal{E}(\nabla_{\theta}\,\mathcal{L}(f_{\theta}(x),y),\nabla_{\theta}\,\mathcal{L}(f_{\theta}(x^{\prime}),y^{\prime}))+\lambda\mathcal{R}(x^{\prime}).$ (2) Where in Eq. 2 we denote the gradient matching loss as $\mathcal{E}$ and $\mathcal{R}$ is an optional regularizer for the reconstruction. The work of Zhu et al. (2019) has used the mean square error for $\mathcal{E}$, on which Geiping et al. (2020) improved using the cosine similarity loss. Zhao et al. (2020) first demonstrated that the private labels $y$ can be estimated before solving Eq. 2, reducing the complexity of Eq. 2 and improving the attack results. Their method was later extended to batches by Yin et al. (2021) and refined by Geng et al. (2021). Eq. 2 is typically solved using continuous optimization tools such as L-BFGS (Liu and Nocedal, 1989) and Adam (Kingma and Ba, 2014). Although analytical approaches exist, they do not generalize to batches with more than a single data point (Zhu and Blaschko, 2021). Depending on the data domain, distinct tailored alterations to Eq. 2 have been proposed in the literature, e.g., using the total variation regularizer for images (Geiping et al., 2020) and exploiting pre-trained language models in language tasks (Dimitrov et al., 2022a; Gupta et al., 2022). These mostly non- transferable domain-specific solutions are necessary as each domain poses unique challenges. Our work is first to identify and tackle the key challenges to data leakage in the tabular domain. #### Mixed Type Tabular Data Mixed type tabular data is a data type commonly used in health, economic and social sciences, which entail high-stakes privacy-critical applications (Borisov et al., 2021). Here, data is collected in a table of feature columns, mostly human-interpretable, e.g., age, nationality, and occupation of an individual. We formalize tabular data as follows. Let $x\in\mathcal{X}$ be one line of data, containing discrete or categorical features and continuous or numerical features. Let $\mathcal{X}$ contain $K$ discrete feature columns and $L$ continuous feature columns, i.e. $\mathcal{X}=\mathcal{D}_{1}\times\mathcal{D}_{2}\times\dots\times\mathcal{D}_{K}\times\mathcal{U}_{1}\times\dots\times\mathcal{U}_{L}$, where $\mathcal{D}_{i}\subset\mathbb{N}$ with cardinality $|\mathcal{D}_{i}|=D_{i}$ and $\mathcal{U}_{i}\subset\mathbb{R}$. For the purpose of deep neural network training, the categorical features are often encoded in a numerical vector. We denote the encoded data batch or line as $c(x)$, where we preserve the continuous features and encode the categorical features by a one-hot encoding. The one-hot encoding of the $i$-th discrete feature $c^{D}_{i}(x)$ is a vector of length $D_{i}$ that has a one at the position marking the encoded category, while all other entries are zeros. 
We obtain the represented category by taking the argmax of $c^{D}_{i}(x)$ (projection to obtain $x$). Using the described encoding, one line of data $x\in\mathcal{X}$ translates to $c(x)=\left[c^{D}_{1}(x),\,c^{D}_{2}(x),\,\dots,\,c^{D}_{K}(x),\,x^{C}_{1},\,\dots,\,x^{C}_{L}\right]$, containing $d\coloneqq L+\sum_{i=1}^{K}D_{i}$ entries.

## 3 Tabular Leakage

In this section, we first summarize the key challenges of tabular leakage and then present our solutions to these challenges, as well as our end-to-end attack, in the subsequent subsections.

#### Key Challenges

We now list the three key challenges that we address in our work: (i) the presence of both categorical and continuous features in tabular data requires the attacker to solve a significantly harder mixed discrete-continuous optimization problem (addressed in Sec. 3.1), (ii) the large distance between the encodings of the categorical features introduces high variance in the leakage problem (addressed in Sec. 3.2), and (iii) in contrast to images and text, it is hard for an adversary to assess the quality of the reconstructed data in the tabular domain, as most reconstructions may be projected to credible input data points (we address this via an uncertainty quantification scheme in Sec. 3.3).

### 3.1 The Softmax Structural Prior

We now discuss our solution to challenge (i): we introduce the softmax structural prior, which turns the hard mixed discrete-continuous optimization problem into a fully continuous one. This drastically reduces its complexity, while still facilitating the recovery of correct discrete structures. To start, notice that the recovery of one-hot encodings can be enforced by ensuring that all entries of the recovered vector are either zero or one, and that exactly one of the entries equals one. However, these constraints enforce integer properties, i.e., they are non-differentiable and cannot be used in combination with the powerful continuous optimization tools used for gradient leakage attacks. Relaxing the integer constraint by allowing the reconstructed entries to take real values in $[0,1]$, we are still left with a constrained optimization problem not well suited for popular continuous optimization tools, such as Adam (Kingma and Ba, 2014). Therefore, we are looking for a method that can implicitly enforce the constraints introduced above. Let $z\in\mathbb{R}^{d}$ be our approximate intermediate solution for the true one-hot encoded data $c(x)$ at some optimization step. Then, we are looking for a differentiable function $\sigma:\mathbb{R}^{D_{i}}\rightarrow[0,1]^{D_{i}}$ such that

$\sum_{j=1}^{D_{i}}\sigma(z_{i}^{D})[j]=1\qquad\text{and}\qquad\sigma(z_{i}^{D})[j]\in[0,1]\quad\forall j\in\mathcal{D}_{i}.$ (3)

Notice that the two conditions in Eq. 3 can be fulfilled by applying a softmax to $z_{i}^{D}$, i.e., by defining

$\sigma(z_{i}^{D})[j]\coloneqq\frac{\exp(z^{D}_{i}[j])}{\sum_{k=1}^{D_{i}}\exp(z^{D}_{i}[k])}\qquad\forall j\in\mathcal{D}_{i}.$ (4)

It is easy to show that Eq. 4 fulfills both conditions in Eq. 3 and that it is differentiable. Putting this together, in each round of optimization we obtain the following approximation of the true data point: $c(x)\approx\sigma(z)=\left[\sigma(z^{D}_{1}),\,\dots,\,\sigma(z^{D}_{K}),\,z^{C}_{1},\,\dots,\,z^{C}_{L}\right]$. To preserve notational simplicity, we write $\sigma(z)$ to mean the application of the softmax to each group of entries representing a given categorical variable separately.
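The following is a minimal PyTorch sketch of how the per-feature softmax of Eq. (4) can be applied to a flat reconstruction vector while leaving the continuous entries untouched; the helper name and the toy dimensions are illustrative assumptions, not the authors' code.

```python
import torch

def softmax_structural_prior(z, cat_sizes, num_cont):
    """Map a flat reconstruction vector z (length sum(cat_sizes) + num_cont) to
    sigma(z): one softmax per one-hot group (Eq. 4), continuous entries passed
    through unchanged. Fully differentiable, so Adam can update z directly."""
    pieces, offset = [], 0
    for size in cat_sizes:                      # one softmax per categorical feature
        pieces.append(torch.softmax(z[offset:offset + size], dim=0))
        offset += size
    pieces.append(z[offset:offset + num_cont])  # continuous features stay as they are
    return torch.cat(pieces)

# Toy example: two categorical features (3 and 2 categories) and one continuous feature.
z = torch.randn(3 + 2 + 1, requires_grad=True)
sigma_z = softmax_structural_prior(z, cat_sizes=[3, 2], num_cont=1)
assert torch.allclose(sigma_z[:3].sum(), torch.tensor(1.0))
```

Because each softmax output already satisfies the constraints of Eq. (3), the attack can optimize $z$ with an unconstrained optimizer and still produce valid (relaxed) one-hot encodings at every step.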
### 3.2 Pooled Ensembling

As mentioned earlier, the mix of categorical and continuous features introduces further variance into the difficult reconstruction problem, which already has multiple local minima and high sensitivity to initialization (Zhu and Blaschko, 2021) (challenge (ii)). Concretely, as the one-hot encodings of the categorical features are orthogonal to each other, a change in the encoded category can drastically change the optimization trajectory. We alleviate this problem by adapting an established method of variance reduction in noisy processes (Hastie et al., 2009), i.e., we run independent optimization processes with different initializations and ensemble their results through feature-wise pooling. Note that the features in tabular data are tied to a fixed position in the recovered data vector, so we can combine independent reconstructions into an improved and more robust final estimate of the true data by applying feature-wise pooling. Formally, we run $N$ independent rounds of optimization with $i.i.d.$ initializations, recovering potentially different reconstructions $\left\{\sigma(z_{j})\right\}_{j=1}^{N}$. Then, we obtain a final estimate of the true encoded data, denoted as $\sigma^{D}_{i}(\hat{z})$, by pooling them:

$\sigma^{D}_{i}(\hat{z})=\text{pool}\left(\left\{\sigma(z^{D}_{ji})\right\}_{j=1}^{N}\right)\quad\forall i\in[K]\quad\text{and}\quad\hat{z}_{i}^{C}=\text{pool}\left(\left\{(z^{C}_{ji})\right\}_{j=1}^{N}\right)\quad\forall i\in[L],$ (5)

where the $\text{pool}(\cdot)$ operation can be any permutation-invariant mapping that maps to the same structure as its inputs. In our attack, we use median pooling for both continuous and categorical features.

Figure 2: Maximum similarity matching of a sample from the collection of reconstructions to the best-loss sample $\hat{x}^{best}$.

Notice that because a batch gradient is invariant to permutations of the datapoints in the corresponding batch, when reconstructing from such a gradient we may retrieve the batch points in a different order at every optimization instance. Hence, we need to reorder each reconstruction such that their lines match each other, and only then can we conduct the pooling. We reorder by first selecting the sample that produced the best reconstruction loss at the end of optimization, $\hat{z}^{best}$, with projection $\hat{x}^{best}$. Then, we match the lines of every other sample in the collection with respect to $\hat{x}^{best}$. Concretely, we calculate the similarity (described in detail in Sec. 4) between each pair of lines of $\hat{x}^{best}$ and another sample $\hat{x}_{i}$ in the collection and find the maximum-similarity reordering of the lines via bipartite matching solved by the Hungarian algorithm (Kuhn, 1955). This process is depicted in Fig. 2. Repeating this for each sample, we reorder the entire collection with respect to the best-loss sample, effectively reversing the permutation differences in the independent reconstructions. Therefore, after this process we can directly apply feature-wise pooling for each line over the collection.

### 3.3 Entropy-based Uncertainty Estimation

We now address challenge (iii) above. To recap, it is significantly harder for an adversary to assess the quality of an obtained reconstruction when it comes to tabular data, as almost any reconstruction may constitute a credible datapoint when projected back to mixed discrete-continuous space.
Note that this challenge does not arise as prominently in the image (or text) domain, because by looking at a picture one can easily judge if it is just noise or an actual image. To address this issue, we propose to estimate the reconstruction uncertainty by looking at the level of agreement over a certain feature for different reconstructions. Concretely, given a collection of reconstructions as in Sec. 3.2, we can observe the distribution of each feature over the reconstructions. Intuitively, if this distribution is "peaky", i.e. concentrates the mass heavily on a certain value, then we can assume that the feature has been reconstructed correctly, whereas if there is high disagreement between the reconstructed samples, we can assume that this feature’s recovered final value should not be trusted. We can quantify this by measuring the entropy of the feature distributions induced by the recovered samples. #### Categorical Features Let $p(\hat{x}_{i}^{D})_{m}\coloneqq\frac{1}{N}\,\text{Count}_{j}(\hat{x}_{ji}^{C}=m)$ be the relative frequency of projected reconstructions of the $i$-th discrete feature of value $m$ in the ensemble. Then, we can calculate the normalized entropy of the feature as $\bar{H}^{D}_{i}=\frac{-1}{\log\,|\mathcal{D}_{i}|}\,\sum_{m=1}^{D_{i}}p(\hat{x}_{i}^{D})_{m}\,\log\,p(\hat{x}_{i}^{D})_{m}$. Note that the normalization allows for comparing features with supports of different size, i.e. it ensures that $\bar{H}^{D}_{i}\in[0,1]$, as $0\leq H(k)\leq\log\,|\mathcal{K}|$ for any discrete random variable $k\in\mathcal{K}$ of finite support. #### Continuous Features In case of the continuous features, we calculate the entropy assuming that errors of the reconstructed samples follow a Gaussian distribution. As such, we first estimate the sample variance $\hat{\sigma}^{2}_{i}$ for the $i$-th continuous feature and then plug it in to calculate the entropy of the corresponding Gaussian: $H^{C}_{i}=\frac{1}{2}+\frac{1}{2}\,\log\,2\pi\hat{\sigma}^{2}_{i}$. As this approach is universal over all continuous features, it is enough to simply scale the features themselves to make their entropy comparable. For example, this can be achieved by working only with standardized features. Note that as the categorical and the continuous features are fundamentally different from an information theoretic perspective, we have no robust means to combine them in a way that would allow for equal treatment. Therefore, when assessing the credibility of recovered features, we will always distinguish between categorical and continuous features. ### 3.4 Combined Attack Algorithm 1 Our combined attack against training by FedSGD 1:function SingleInversion(Neural Network: $f_{\theta}$, Client Gradient: $\nabla_{\theta}\,\mathcal{L}(f_{\theta}(c(x)),y)$, Reconstructed Labels: $\hat{y}$, Initial Reconstruction: $z_{i}^{0}$, Iterations: $T$, N. of Discrete Features: $K$) 2: for $t$ in $1,2,\dots,T$ do 3: for $k$ in $1,2,\dots,K$ do 4: $\sigma(z_{ik}^{D})\leftarrow$ softmax($z_{ik}^{D}$) 5: end for 6: $z_{i}^{t+1}\leftarrow z_{i}^{t}-\eta\,\nabla_{z}\mathcal{E}_{CS}(\nabla_{\theta}\,\mathcal{L}(f_{\theta}(c(x)),y),\nabla_{\theta}\,\mathcal{L}(f_{\theta}(\sigma(z_{i}^{t})),\hat{y}))$ 7: end for 8: return $z_{i}^{T}$ 9:end function 10: 11:function TabLeak(Neural Network: $f_{\theta}$, Client Gradient: $\nabla_{\theta}\,\mathcal{L}(f_{\theta}(c(x)),y)$, Reconstructed Labels: $\hat{y}$, Ensemble Size: $N$, Iterations: $T$, N. 
of Discrete Features: $K$) 12: $\left\\{z_{i}^{0}\right\\}_{i=1}^{N}\sim\mathcal{U}_{[0,1]^{d}}$ 13: for $i$ in $1,2,\dots,N$ do 14: $z_{i}^{T}\leftarrow$ SingleInversion($f_{\theta}$, $\nabla_{\theta}\,\mathcal{L}(f_{\theta}(c(x)),y)$, $\hat{y}$, $z_{i}^{0}$, $T$, $K$) 15: end for 16: $\hat{z}^{best}\leftarrow\operatorname*{arg\,min}_{z_{j}^{T}}\,\mathcal{E}_{CS}(\nabla_{\theta}\,\mathcal{L}(f_{\theta}(c(x)),y),\nabla_{\theta}\,\mathcal{L}(f_{\theta}(\sigma(z_{j}^{T})),\hat{y}))$ 17: $\sigma(\hat{z})\leftarrow$ MatchAndPool($\left\\{\sigma(z_{i}^{T})\right\\}_{i=1}^{N},\sigma(\hat{z}^{best})$) 18: $\bar{H}^{D},H^{C}\leftarrow$ CalculateEntropy($\left\\{\sigma(z_{i}^{T})\right\\}_{i=1}^{N}$) 19: $\hat{x}\leftarrow$ Project($\sigma(\hat{z})$) 20: return $\hat{x}$, $\bar{H}^{D}$, $H^{C}$ 21:end function Now we provide the description of our end-to-end attack, TabLeak. Following Geiping et al. (2020), we use the cosine similarity loss as our reconstruction loss, defined as: $\mathcal{E}_{CS}(\nabla_{\theta_{t}}\,\mathcal{L}(f_{\theta_{t}}(c(x)),y),\nabla_{\theta_{t}}\,\mathcal{L}(f_{\theta_{t}}(\sigma(z)),\hat{y})),\quad\text{with}\quad\mathcal{E}_{CS}(l,g)\coloneqq 1-\frac{\langle l,\,g\rangle}{\|l\|_{2}\,\|g\|_{2}},$ (6) where $(x,y)$ are the true data, $\hat{y}$ are the labels reconstructed beforehand, and we optimize for $z$. Our algorithm is shown in Alg. 1. First, we reconstruct the labels using the label reconstruction method of Geng et al. (2021) and provide them as an input to our attack. Then, we initialize $N$ independent dummy samples for an ensemble of size $N$ (12). Starting from each initial sample we optimize independently (13-15) via the SingleInversion function. In each optimization step, we apply the softmax structural prior of Sec. 3.1, and let the optimizer differentiate through it (4). After the optimization processes have converged or have reached the maximum number of allowed iterations $T$, we identify the sample $\hat{z}^{best}$ producing the best reconstruction loss (16). Using this sample, we match and median pool to obtain the final encoded reconstruction $\sigma(\hat{z})$ in 17 as described in Sec. 3.2. Finally, we return the projected reconstruction $\hat{x}$ and the corresponding feature-entropies $\bar{H}^{D}$ and $H^{C}$, quantifying the uncertainty in the reconstruction. ## 4 Experimental evaluation In this section, we first detail the evaluation metric we used to assess the obtained reconstructions. We then briefly explain our experimental setup. Next, we evaluate our attack in various settings against baseline methods, establishing a new state-of-the-art. Finally, we demonstrate the effectiveness of our entropy-based uncertainty quantification method. #### Evaluation Metric As no prior work on tabular data leakage exists, we propose our metric for measuring the accuracy of tabular reconstruction, inspired by the 0-1 loss, allowing the joint treatment of categorical and continuous features. For a reconstruction $\hat{x}$, we define the accuracy metric as: $\text{accuracy}(x,\,\hat{x})\coloneqq\frac{1}{K+L}\left(\sum_{i=1}^{K}\mathbb{I}\\{x^{D}_{i}=\hat{x}^{D}_{i}\\}+\sum_{i=1}^{L}\mathbb{I}\\{\hat{x}^{C}_{i}\in[x^{C}_{i}-\epsilon_{i},\,x^{C}_{i}+\epsilon_{i}]\\}\right),$ (7) where $x$ is the ground truth and $\\{\epsilon_{i}\\}_{i=1}^{L}$ are constants determining how close the reconstructed continuous features have to be to the original value in order to be considered successfully leaked. We provide more details on our metric in App. 
A and experiments with additional metrics in Sec. C.3.

#### Baselines

We consider two main baselines: (i) the Random Baseline does not use the gradient updates and simply samples reconstructions from the per-feature marginals of the input dataset. Due to the structure of tabular datasets, we can easily estimate the marginal distribution of each feature: for the categorical features this can be done by simple counting, and for the continuous features by defining a binning scheme with $100$ equally spaced bins between the lower and upper bounds of the feature. Although this baseline is usually not realizable in practice (as it assumes prior knowledge of the marginals), it helps us calibrate our metric, as performing below this baseline signals that no information is being extracted from the client updates. Note that because both the selection of a batch and the random baseline represent sampling from the (approximate) data generating distribution, the random baseline monotonically approaches perfect accuracy with increasing batch size. (ii) The Cosine Baseline is based on the work of Geiping et al. (2020), who established a strong attack for images. We transfer their attack to tabular data by removing the total variation prior used for images. Note that most competitive attacks on images and text, once their domain-specific elements are removed, reduce to this baseline, and it is therefore a reasonable choice for benchmarking a new domain.

#### Experimental Setup

For all attacks, we use the Adam optimizer (Kingma and Ba, 2014) with learning rate $0.06$ for $1\,500$ iterations and without a learning rate schedule to perform the optimization in Alg. 1. In line with Geiping et al. (2020), we modify the update step of the optimizer by reducing the update gradient to its element-wise sign. The neural network we attack is a fully connected neural network with two hidden layers of $100$ neurons each. We conducted our experiments on four popular mixed-type tabular binary classification datasets: the Adult census dataset (Dua and Graff, 2017), the German Credit dataset (Dua and Graff, 2017), the Lawschool Admission dataset (Wightman, 2017), and the Health Heritage dataset from Kaggle (source: https://www.kaggle.com/c/hhp). Due to space constraints, here we report only our results on the Adult dataset, and refer the reader to App. D for full results on all four datasets. Finally, for all reported numbers below, we attack a neural network at initialization and estimate the mean and standard deviation of each reported metric on $50$ different batches. For experiments with varying network sizes and attacks against provable defenses, please see App. C. For further details on the experimental setup of each experiment, we refer the reader to App. B.

#### General Results against FedSGD

In Tab. 1 we present the results of our strong attack TabLeak against FedSGD training, together with two ablation experiments, each time removing either the pooling (no pooling) or the softmax component (no softmax). We compare our results to the baselines introduced above, on batch sizes $8,16,32,64$, and $128$, once assuming knowledge of the true labels (top) and once using labels reconstructed by the method of Geng et al. (2021) (bottom). Notice that the noisy label reconstruction only influences the results for lower batch sizes, and manifests itself mostly in higher variance in the results. It is also worth noting that for batch size $8$ (and lower, see App.
D) all attacks can recover almost all the data, exposing a trivial vulnerability of FL on tabular data. For larger batch sizes, even up to $128$, TabLeak can recover a significant portion of the client's private data, well above random guessing, while the baseline Cosine attack fails to do so, demonstrating the necessity of a domain-tailored attack. In a later paragraph, we show how we can further improve our reconstruction on this batch size and extract subsets of features with $>90\%$ accuracy using the entropy. Further, the results of the ablation attacks demonstrate the effectiveness of each attack component, both providing a non-trivial improvement over the baseline attack that is preserved when combined in our strongest attack. Demonstrating generalization beyond Adult, we include our results on the German Credit, Lawschool Admissions, and Health Heritage datasets in Sec. D.1, where we also outperform the Cosine baseline attack by at least $10\%$ on batch size $32$ on each dataset.

Table 1: The mean inversion accuracy [%] and standard deviation of different methods over varying batch sizes with given true labels (True $y$) and with reconstructed labels (Rec. $\hat{y}$) on the Adult dataset.

| Label | Batch Size | TabLeak | TabLeak (no pooling) | TabLeak (no softmax) | Cosine | Random |
|---|---|---|---|---|---|---|
| True $y$ | $8$ | $\mathbf{95.1\pm 9.2}$ | $93.9\pm 10.2$ | $92.9\pm 6.5$ | $91.1\pm 7.3$ | $53.9\pm 4.4$ |
| | $16$ | $\mathbf{89.5\pm 7.6}$ | $84.5\pm 9.9$ | $80.5\pm 4.3$ | $75.0\pm 5.2$ | $55.1\pm 3.9$ |
| | $32$ | $\mathbf{77.6\pm 4.8}$ | $72.4\pm 4.6$ | $70.8\pm 3.2$ | $66.6\pm 3.5$ | $58.0\pm 2.9$ |
| | $64$ | $\mathbf{71.2\pm 2.8}$ | $66.2\pm 2.8$ | $66.9\pm 2.7$ | $62.5\pm 3.1$ | $59.0\pm 3.2$ |
| | $128$ | $\mathbf{68.8\pm 1.3}$ | $64.1\pm 1.4$ | $64.0\pm 2.1$ | $59.5\pm 2.1$ | $61.2\pm 3.1$ |
| Rec. $\hat{y}$ | $8$ | $\mathbf{86.9\pm 11.6}$ | $84.6\pm 13.4$ | $85.8\pm 9.9$ | $83.3\pm 9.7$ | $53.9\pm 4.4$ |
| | $16$ | $\mathbf{82.4\pm 8.4}$ | $78.3\pm 9.0$ | $77.7\pm 4.1$ | $73.0\pm 3.5$ | $55.1\pm 3.9$ |
| | $32$ | $\mathbf{75.3\pm 4.8}$ | $70.6\pm 4.3$ | $70.2\pm 3.2$ | $66.3\pm 3.4$ | $58.0\pm 2.9$ |
| | $64$ | $\mathbf{70.4\pm 3.2}$ | $65.9\pm 3.6$ | $66.8\pm 2.6$ | $63.1\pm 3.2$ | $59.0\pm 3.2$ |
| | $128$ | $\mathbf{68.7\pm 1.3}$ | $64.4\pm 1.5$ | $63.8\pm 2.1$ | $59.5\pm 2.1$ | $61.2\pm 3.1$ |

#### Categorical vs. Continuous Features

An interesting effect of having mixed type features in the data is that the reconstruction success clearly differs by feature type. As we can observe in Fig. 3, the continuous features produce up to $30\%$ lower accuracy than the categorical features for the same batch size. We suggest that this is due to the nature of categorical features and how they are encoded. While trying to match the gradients by optimizing the reconstruction, having the correct categorical features will have a much greater effect on the gradient alignment, as, when encoded, they take up the majority of the data vector. Also, when reconstructing a one-hot encoded categorical feature, we only have to retrieve the location of the maximum in a vector of length $D_{i}$, whereas for the successful reconstruction of a continuous feature we have to retrieve its value correctly up to a small error. Therefore, especially when the optimization process is aware of the structure of the encoding scheme (e.g., by using the softmax structural prior), categorical features are much easier to reconstruct.
This poses a critical privacy risk in tabular federated learning, as sensitive features are often categorical, e.g., gender or race. Figure 3: The inversion accuracy on the Adult dataset over varying batch size separated for discrete (D) and continuous (C) features. #### Federated Averaging In training with FedAvg (McMahan et al., 2017) participating clients conduct local training of several updates before communicating their new parameters to the server. Note that the more local updates are conducted by the clients, the harder a reconstruction attack becomes, making leakage attacks against FedAvg more challenging. Although this training method is of significantly higher practical importance than FedSGD, most prior work does not evaluate against it. Building upon the work of Dimitrov et al. (2022b) (for details please see App. B and the work of Dimitrov et al. (2022b)), we evaluate our combined attack and the cosine baseline in the setting of Federated Averaging. We present our results of retrieving a client dataset of size $32$ over varying number of local batches and epochs on the Adult dataset in Tab. 2, while assuming full knowledge of the true labels. We observe that while our combined attack significantly outperforms the random baseline of $58.0\%$ accuracy even up to $40$ local updates, the baseline attack fails to consistently do so whenever the local training is longer than one epoch. As FedAvg with tabular data is of high practical relevance, our results which highlight its vulnerability are concerning. We show further details for the experimental setup and results on other datasets in App. B and App. D, respectively. Table 2: Mean and standard deviation of the inversion accuracy [%] on FedAvg with local dataset sizes of $32$ on the Adult dataset. The accuracy of the random baseline for $32$ datapoints is $58.0\pm 2.9$. | TabLeak | Cosine ---|---|--- #batches | 1 epoch | 5 epochs | 10 epochs | 1 epoch | 5 epochs | 10 epochs 1 | $\mathbf{77.4\pm 4.5}$ | $\mathbf{71.1\pm 2.9}$ | $\mathbf{67.6\pm 3.7}$ | $65.2\pm 2.7$ | $56.1\pm 4.1$ | $53.2\pm 4.2$ 2 | $\mathbf{75.7\pm 5.0}$ | $\mathbf{71.7\pm 3.9}$ | $\mathbf{67.7\pm 4.2}$ | $64.8\pm 3.3$ | $56.4\pm 4.8$ | $56.2\pm 4.8$ 4 | $\mathbf{75.9\pm 4.4}$ | $\mathbf{71.0\pm 3.2}$ | $\mathbf{67.4\pm 3.4}$ | $64.8\pm 3.4$ | $58.7\pm 4.6$ | $56.6\pm 5.0$ Table 3: The mean accuracy [%] and entropies with the corresponding standard deviations over batch sizes of the categorical (top) and continuous (bottom) features. | Batch Size ---|--- | 8 | 16 | 32 | 64 | 128 Acc. | $98.5\pm 5.6$ | $97.2\pm 4.3$ | $91.0\pm 4.4$ | $83.2\pm 3.6$ | $78.5\pm 1.8$ $\bar{H}^{D}$ | $0.15\pm 0.13$ | $0.26\pm 0.11$ | $0.40\pm 0.06$ | $0.48\pm 0.04$ | $0.53\pm 0.03$ Acc. | $90.9\pm 14.7$ | $78.8\pm 13.5$ | $59.2\pm 6.9$ | $55.1\pm 3.0$ | $55.7\pm 2.0$ $H^{C}$ | $-1.11\pm 0.95$ | $-0.11\pm 0.63$ | $0.77\pm 0.30$ | $1.21\pm 0.19$ | $1.48\pm 0.10$ #### Assessing Reconstructions via Entropy Table 4: The mean accuracy [%] and the share of data [%] in each entropy bucket for batch size 128 on the Adult dataset. Entropy | Categorical Features ---|--- Bucket | Accuracy [%] | Data [%] 0.0-0.2 | $95.7$ | $8.1$ 0.2-0.4 | $90.5$ | $23.4$ 0.4-0.6 | $79.8$ | $27.7$ 0.6-0.8 | $69.8$ | $29.2$ 0.8-1.0 | $61.2$ | $11.6$ Overall | $78.5$ | $100$ Random | $73.8$ | $100$ We now investigate how an adversary can use the entropy (introduced in Sec. 3.3) to assess the quality of their reconstructions. In Tab. 
3 we show the mean and standard deviation of the accuracy and the entropy of both the discrete and the continuous features over increasing batch sizes after reconstructing with TabLeak (ensemble size $30$). We observe an increase in the mean entropy over the increasing batch sizes, corresponding to accuracy decrease in the reconstructed batches. Hence, an attacker can understand the global effectiveness of their attack by looking at the retrieved entropies, without having to compare their results to the ground truth. We now look at a single batch of size $128$ and put each categorical feature into a bucket based on their reconstruction entropy after attacking with TabLeak (ensemble size $30$). In Tab. 4 we present our results, showing that features falling into lower entropy buckets (0.0-0.2 and 0.2-0.4) inside a batch are significantly more accurately reconstructed ($>90\%$) than the overall batch ($78.5$%). Note that this bucketing can be done without the knowledge of the ground-truth, yet the adversary can concretely identify the high-fidelity features in their noisy reconstruction. This shows that even for reconstructions of large batches that seem to contain little-to-no information (close to random baseline), an adversary can still extract subsets of the data with high accuracy. Tables containing both feature types on all four datasets can be found in Sec. D.4, providing analogous conclusions. ## 5 Conclusion In this work we presented TabLeak, the first data leakage attack on tabular data in the setting of federated learning (FL), obtaining state-of-the-art results against both popular FL training protocols in the tabular domain. As tabular data is ubiquitous in privacy critical high-stakes applications, our results raise important concerns regarding practical systems currently using FL. Therefore, we advocate for further research on advancing defenses necessary to mitigate such privacy leaks. ## 6 Ethics Statement As tabular data is often used in high-stakes applications and may contain sensitive data of natural or legal persons, confidential treatment is critical. This work presents an attack algorithm in the tabular data domain that enables an FL server to steal the private data of its clients in industry-relevant scenarios, deeming such applications potentially unsafe. We believe that exposing vulnerabilities of both recently proposed and widely adopted systems, where privacy is a concern, can benefit the development of adequate safety mechanisms against malicious actors. In particular, this view is shared by the governmental institutions of the United States of America and the United Kingdom that jointly supported the launch of a competition (https://petsprizechallenges.com/) aimed at advancing the privacy of FL in the tabular domain, encouraging the participation of both teams developing defenses and attacks. Also, as our experiments in Sec. C.1 show, existing techniques can help mitigate the privacy threat, hence we encourage practitioners to make use of them. ## References * Abadi et al. (2016) M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’16. New York, NY, USA: Association for Computing Machinery, 2016, p. 308–318. [Online]. Available: https://doi.org/10.1145/2976749.2978318 * Balunovic et al. (2022) M. Balunovic, D. I. Dimitrov, R. Staab, and M. 
Vechev, “Bayesian framework for gradient leakage,” in _International Conference on Learning Representations_ , 2022. [Online]. Available: https://openreview.net/forum?id=f2lrIbGx3x7 * Borisov et al. (2021) V. Borisov, T. Leemann, K. Seßler, J. Haug, M. Pawelczyk, and G. Kasneci, “Deep neural networks and tabular data: A survey,” _CoRR_ , vol. abs/2110.01889, 2021. [Online]. Available: https://arxiv.org/abs/2110.01889 * Deng et al. (2021) J. Deng, Y. Wang, J. Li, C. Wang, C. Shang, H. Liu, S. Rajasekaran, and C. Ding, “Tag: Gradient attack on transformer-based language models,” in _EMNLP (Findings)_ , 2021, pp. 3600–3610. [Online]. Available: https://aclanthology.org/2021.findings-emnlp.305 * Dimitrov et al. (2022a) D. I. Dimitrov, M. Balunović, N. Jovanović, and M. Vechev, “Lamp: Extracting text from gradients with language model priors,” 2022. * Dimitrov et al. (2022b) D. I. Dimitrov, M. Balunović, N. Konstantinov, and M. Vechev, “Data leakage in federated averaging,” 2022. [Online]. Available: https://arxiv.org/abs/2206.12395 * Dua and Graff (2017) D. Dua and C. Graff, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml * Fowl et al. (2022) L. H. Fowl, J. Geiping, W. Czaja, M. Goldblum, and T. Goldstein, “Robbing the fed: Directly obtaining private data in federated learning with modified models,” in _International Conference on Learning Representations_ , 2022\. [Online]. Available: https://openreview.net/forum?id=fwzUgo0FM9v * Geiping et al. (2020) J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, “Inverting gradients-how easy is it to break privacy in federated learning?” pp. 16 937–16 947, 2020. * Geng et al. (2021) J. Geng, Y. Mou, F. Li, Q. Li, O. Beyan, S. Decker, and C. Rong, “Towards general deep leakage in federated learning,” 2021. * Gupta et al. (2022) S. Gupta, Y. Huang, Z. Zhong, T. Gao, K. Li, and D. Chen, “Recovering private text in federated learning of language models,” 2022. [Online]. Available: https://arxiv.org/abs/2205.08514 * Hastie et al. (2009) T. Hastie, R. Tibshirani, and J. Friedman, _The Elements of Statistical Learning: Data Mining, Inference, and Prediction_ , ser. Springer series in statistics. Springer, 2009. [Online]. Available: https://books.google.ch/books?id=eBSgoAEACAAJ * Huang et al. (2021) Y. Huang, S. Gupta, Z. Song, K. Li, and S. Arora, “Evaluating gradient inversion attacks and defenses in federated learning,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 7232–7241, 2021. * Jeon et al. (2021) J. Jeon, K. Lee, S. Oh, J. Ok _et al._ , “Gradient inversion with generative image prior,” pp. 29 898–29 908, 2021. * Jin et al. (2021) X. Jin, P.-Y. Chen, C.-Y. Hsu, C.-M. Yu, and T. Chen, “Cafe: Catastrophic data leakage in vertical federated learning,” pp. 994–1006, 2021. * Kingma and Ba (2014) D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _International Conference on Learning Representations_ , 12 2014. * Kuhn (1955) H. W. Kuhn, “The hungarian method for the assignment problem,” _Naval Research Logistics Quarterly_ , vol. 2, no. 1-2, pp. 83–97, 1955. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800020109 * Liu and Nocedal (1989) D. C. Liu and J. Nocedal, “On the limited memory bfgs method for large scale optimization,” _Mathematical Programming_ , vol. 45, pp. 503–528, 1989. * Long et al. (2021) G. Long, Y. Tan, J. Jiang, and C. Zhang, “Federated learning for open banking,” 2021. [Online]. 
Available: https://arxiv.org/abs/2108.10749 * McMahan et al. (2017) B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” pp. 1273–1282, 2017. * Melis et al. (2019) L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, “Exploiting unintended feature leakage in collaborative learning,” in _2019 IEEE symposium on security and privacy (SP)_. IEEE, 2019, pp. 691–706. * Rieke et al. (2020) N. Rieke, J. Hancox, W. Li, F. Milletarì, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein, S. Ourselin, M. Sheller, R. M. Summers, A. Trask, D. Xu, M. Baust, and M. J. Cardoso, “The future of digital health with federated learning,” _npj Digital Medicine_ , vol. 3, no. 1, sep 2020. [Online]. Available: https://doi.org/10.1038%2Fs41746-020-00323-1 * Shokri and Shmatikov (2015) R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,” in _Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’15. New York, NY, USA: Association for Computing Machinery, 2015, p. 1310–1321. [Online]. Available: https://doi.org/10.1145/2810103.2813687 * Wen et al. (2022) Y. Wen, J. A. Geiping, L. Fowl, M. Goldblum, and T. Goldstein, “Fishing for user data in large-batch federated learning via gradient magnification,” in _Proceedings of the 39th International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 17–23 Jul 2022, pp. 23 668–23 684. [Online]. Available: https://proceedings.mlr.press/v162/wen22a.html * Wightman (2017) F. L. Wightman, “LSAC national longitudinal bar passage study,” 2017. * Yin et al. (2021) H. Yin, A. Mallya, A. Vahdat, J. M. Alvarez, J. Kautz, and P. Molchanov, “See through gradients: Image batch recovery via gradinversion,” pp. 16 337–16 346, 2021. * Zhao et al. (2020) B. Zhao, K. R. Mopuri, and H. Bilen, “idlg: Improved deep leakage from gradients,” 2020. [Online]. Available: https://arxiv.org/abs/2001.02610 * Zhu and Blaschko (2021) J. Zhu and M. B. Blaschko, “R-{gap}: Recursive gradient attack on privacy,” in _International Conference on Learning Representations_ , 2021\. [Online]. Available: https://openreview.net/forum?id=RSU17UoKfJF * Zhu et al. (2019) L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” 2019. ## Appendix A Accuracy Metric To ease the understanding, we start by repeating our accuracy metric here, where we measure the reconstruction accuracy between the retrieved sample $\hat{x}$ and the ground truth $x$ as: $\text{accuracy}(x,\,\hat{x})\coloneqq\frac{1}{K+L}\left(\sum_{i=1}^{K}\mathbb{I}\\{x^{D}_{i}=\hat{x}^{D}_{i}\\}+\sum_{i=1}^{L}\mathbb{I}\\{\hat{x}^{C}_{i}\in[x^{C}_{i}-\epsilon_{i},\,x^{C}_{i}+\epsilon_{i}]\\}\right).$ (8) Note that the binary treatment of continuous features in our accuracy metric enables the combined measurement of the accuracy on both the discrete and the continuous features. From an intuitive point of view, this measure closely resembles how one would judge the correctness of numerical guesses. For example, guessing the age of a $25$ year old, one would deem the guess good if it is within $3$ to $4$ years of the true value, but the guesses $65$ and $87$ would be both qualitatively incorrect. 
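For reference, the metric in Eq. (8) is simple to implement. The sketch below is ours (illustrative function and variable names, not taken from the paper); it scores a single reconstructed sample, with `eps` holding the per-feature tolerances $\epsilon_{i}$, whose concrete choice (a constant times the feature's global standard deviation) is described next.

```python
import numpy as np

def reconstruction_accuracy(x_disc, x_cont, xh_disc, xh_cont, eps):
    # Eq. (8): a discrete feature counts as recovered on exact match, and a
    # continuous feature counts as recovered when the reconstruction lies
    # within the per-feature tolerance band [x_i - eps_i, x_i + eps_i].
    K, L = len(x_disc), len(x_cont)
    disc_hits = np.sum(np.asarray(x_disc) == np.asarray(xh_disc))
    cont_hits = np.sum(np.abs(np.asarray(xh_cont) - np.asarray(x_cont)) <= np.asarray(eps))
    return (disc_hits + cont_hits) / (K + L)

def tolerance_bounds(x_cont_train, scale=0.319):
    # eps_i = 0.319 * sigma_i^C, with sigma_i^C the global standard deviation
    # of continuous feature i (columns of x_cont_train).
    return scale * np.asarray(x_cont_train).std(axis=0)
```

Note that, as described in the experimental details below, continuous reconstructions are clamped to their valid ranges before this scoring is applied.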
In order to facilitate scalability of our experiments, we chose the $\left\\{\epsilon_{i}\right\\}_{i=1}^{L}$ error-tolerance bounds based on the global standard deviation of the given continuous feature, $\sigma^{C}_{i}$, multiplied by a constant; concretely, we used $\epsilon_{i}=0.319\,\sigma^{C}_{i}$ for all our experiments. Note that $\Pr[\mu-0.319\,\sigma<x<\mu+0.319\,\sigma]\approx 0.25$ for a Gaussian random variable $x$ with mean $\mu$ and variance $\sigma^{2}$. For our metric this means that, assuming zero-mean Gaussian error in the reconstruction around the true value, we count the reconstruction as privacy leakage as long as it falls into the central interval containing $25\%$ of the probability mass around the correct value. In Tab. 5 we list the tolerance bounds $\epsilon_{i}$ for the continuous features of the Adult dataset produced by this method. We would like to remark here that we fixed our metric parameters before conducting any experiments, and did not adjust them based on any obtained results. Note also that in App. C we provide results where the continuous feature reconstruction accuracy is measured using the commonly used regression metric of root mean squared error (RMSE), where TabLeak also achieves the best results, signaling that the success of our method is independent of our chosen metric. Table 5: Resulting tolerance bounds on the Adult dataset when using $\epsilon_{i}=0.319\,\sigma_{i}^{C}$, as used by us for our experiments. feature | age | fnlwgt | education-num | capital-gain | capital-loss | hours-per-week ---|---|---|---|---|---|--- tolerance | 4.2 | 33699 | 0.8 | 2395 | 129 | 3.8 ## Appendix B Further Experimental Details Here we give an extended description of the experimental details provided in Sec. 4. For all attacks, we use the Adam optimizer (Kingma and Ba, 2014) with learning rate $0.06$ for $1\,500$ iterations and without a learning rate schedule. We chose the learning rate based on our experiments on the baseline attack, where it performed best. In line with Geiping et al. (2020), we modify the update step of the optimizer by reducing the update gradient to its element-wise sign. We attack a fully connected neural network with two hidden layers of $100$ neurons each at initialization. However, we provide a network-size ablation in Fig. 8, where we evaluate our attack against the baseline method for $5$ different network architectures. For each reported metric we conduct $50$ independent runs on $50$ different batches to estimate their statistics. For all FedSGD experiments we clamp the continuous features to their valid ranges before measuring the reconstruction accuracy, both for our attacks and the baseline methods. We ran each of our experiments on single cores of Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz. #### Federated Averaging Experiments For experiments on attacking the FedAvg training algorithm, we fix the clients’ local dataset size at $32$ and conduct an attack after local training with learning rate $0.01$ on the initialized network described above. We use the FedAvg attack framework of Dimitrov et al. (2022b), where for each local training epoch we initialize an independent mini-dataset matching the size of the client dataset, and simulate the local training of the client. At each reconstruction update, we use the mean squared error between the different epoch data means ($D_{\text{inv}}=\ell_{2}$ and $g=\text{mean}$ in Dimitrov et al.
(2022b)) as the permutation-invariant epoch prior required by the framework, ensuring the consistency of the reconstructed dataset. For the full technical details, please refer to the manuscript of Dimitrov et al. (2022b). For choosing the prior parameter $\lambda_{\text{inv}}$, we conduct a line search on each setup and attack method pair individually over the values $[0.0,0.5,0.1,0.05,0.01,0.005,0.001]$, and pick the ones providing the best results. Further, to reduce computational overhead, we reduce the ensemble size of TabLeak from $30$ to $15$ for these experiments on all datasets. ## Appendix C Further Experiments In this appendix, we present three further experiments: * • Results of attacking neural networks defended using differentially private noisy gradients in Sec. C.1. * • An ablation study on the impact of the neural network’s size on the reconstruction difficulty in Sec. C.2. * • Measuring the Root Mean Squared Error (RMSE) of the reconstruction of continuous features in Sec. C.3. ### C.1 Attack against Gaussian DP Differential privacy (DP) has recently gained popularity as a way to prevent privacy violations in FL (Abadi et al., 2016; Zhu et al., 2019). Unlike empirical defenses, which are often broken by specifically crafted adversaries (Balunovic et al., 2022), DP provides guarantees on the amount of data leaked by an FL model, in terms of the magnitude of random noise the clients add to their gradients prior to sharing them with the server (Abadi et al., 2016; Zhu et al., 2019). Naturally, DP methods balance privacy concerns with the accuracy of the produced model, since bigger noise results in worse models that are more private. In this subsection, we evaluate TabLeak and the Cosine baseline against DP-defended gradient updates, where zero-mean Gaussian noise with standard deviations $0.001$, $0.01$, and $0.1$ is added to the client gradients. We present our results on the Adult, German Credit, Lawschool Admissions, and Health Heritage datasets in Fig. 4, Fig. 5, Fig. 6, and Fig. 7, respectively; a minimal sketch of the noising step itself is given after the figures. Although both methods are affected by the defense, our method consistently produces better reconstructions than the baseline method. However, for a high noise level ($\sigma=0.1$) and larger batch sizes, both attacks break, advocating for the use of DP defenses in tabular FL to prevent the high vulnerability exposed by this work. Figure 4: Mean and standard deviation accuracy [%] curves over batch size at varying Gaussian noise level $\sigma$ added to the client gradients for differential privacy on the Adult dataset. Panels: (a) $\sigma=0.001$, (b) $\sigma=0.01$, (c) $\sigma=0.1$. Figure 5: Mean and standard deviation accuracy [%] curves over batch size at varying Gaussian noise level $\sigma$ added to the client gradients for differential privacy on the German Credit dataset. Panels: (a) $\sigma=0.001$, (b) $\sigma=0.01$, (c) $\sigma=0.1$. Figure 6: Mean and standard deviation accuracy [%] curves over batch size at varying Gaussian noise level $\sigma$ added to the client gradients for differential privacy on the Lawschool Admissions dataset. Panels: (a) $\sigma=0.001$, (b) $\sigma=0.01$, (c) $\sigma=0.1$. Figure 7: Mean and standard deviation accuracy [%] curves over batch size at varying Gaussian noise level $\sigma$ added to the client gradients for differential privacy on the Health Heritage dataset. Panels: (a) $\sigma=0.001$, (b) $\sigma=0.01$, (c) $\sigma=0.1$.
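For completeness, the noising step used in this defense is simple to state. The following is a minimal sketch (function and parameter names are ours), which omits the per-example gradient clipping that a full DP-SGD pipeline such as Abadi et al. (2016) would additionally apply.

```python
import numpy as np

def noise_gradients(gradients, sigma, rng=None):
    # Each client adds zero-mean Gaussian noise with standard deviation sigma
    # (0.001, 0.01, or 0.1 in the experiments above) to every gradient array
    # before sharing it with the server.
    rng = np.random.default_rng() if rng is None else rng
    return [g + rng.normal(0.0, sigma, size=g.shape) for g in gradients]
```

As discussed above, the noise scale trades off the privacy of the clients' data against the accuracy of the trained model.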
### C.2 Varying Network Size To understand the effect that the choice of network has on the obtained reconstruction results, we defined $4$ additional fully connected networks, two smaller and two bigger ones, on which to evaluate TabLeak. Concretely, we examined the following five architectures for our attack: 1. a single hidden layer with $50$ neurons, 2. a single hidden layer with $100$ neurons, 3. two hidden layers with $100$ neurons each (the network used in the main body), 4. three hidden layers with $200$ neurons each, and 5. three hidden layers with $400$ neurons each. We attack the above networks, aiming to reconstruct a batch of size $32$. We plot the accuracy of TabLeak and the Cosine baseline as a function of the number of parameters in the network in Fig. 8 for all four datasets. We can observe that with an increasing number of parameters in the network, the reconstruction accuracy significantly increases on all datasets, rather surprisingly allowing for near-perfect reconstruction of a batch as large as $32$ in some cases. Observe that on both ends of the presented parameter scale the differences between the methods degrade, i.e., they both converge either to near-perfect reconstruction (large networks) or to random guessing (small networks). Therefore, the choice of our network for conducting the experiments was instructive in examining the differences between the methods. Figure 8: Mean attack accuracy curves with standard deviation for batch size $32$ over varying network size (measured in number of parameters, #Params, log scale) on all four datasets. We mark the network we used for our other experiments with a dashed vertical line. Panels: (a) Adult, (b) German Credit, (c) Lawschool Admissions, (d) Health Heritage. ### C.3 Continuous Feature Reconstruction Measured by RMSE In order to examine the potential influence of our choice of reconstruction metric on the obtained results, we further measured the reconstruction quality of continuous features with the widely used Root Mean Squared Error (RMSE) metric as well. Concretely, we calculate the RMSE between the $L$ continuous features of our reconstruction $\hat{x}^{C}$ and the ground truth $x^{C}$ in a batch of size $n$ as: $\text{RMSE}(x^{C},\hat{x}^{C})=\frac{1}{L}\sum_{i=1}^{L}\sqrt{\frac{1}{n}\sum_{j=1}^{n}(x_{ij}^{C}-\hat{x}_{ij}^{C})^{2}}.$ (9) As our results in Fig. 9 demonstrate, TabLeak achieves significantly lower RMSE than the Cosine baseline on large batch sizes, for all four datasets examined. This indicates that the strong results obtained by TabLeak in the rest of the paper are not a consequence of our evaluation metric. Figure 9: The mean and standard deviation of the Root Mean Square Error (RMSE) of the reconstructions of the continuous features on all four datasets over batch sizes. Panels: (a) Adult, (b) German Credit, (c) Lawschool Admissions, (d) Health Heritage. ## Appendix D All Main Results In this appendix, we include all the results presented in the main part of this paper for the Adult dataset, alongside the corresponding additional results on the German Credit, Lawschool Admissions, and Health Heritage datasets. ### D.1 Full FedSGD Results on all Datasets In Tab. 6, Tab. 7, Tab. 8, and Tab. 9 we provide the full attack results of our method compared to the Cosine and random baselines on the Adult, German Credit, Lawschool Admissions, and Health Heritage datasets, respectively. Looking at the results for all datasets, we can confirm the observations made in Sec. 4, i.e.
(i) the lower batch sizes are vulnerable to any non-trivial attack, (ii) not knowing the ground truth labels does not significantly disadvantage the attacker for larger batch sizes, and (iii) TabLeak provides a strong improvement over the baselines for practically relevant batch sizes over all datasets examined. Table 6: The mean inversion accuracy [%] and standard deviation of different methods over varying batch sizes with given true labels (top) and with reconstructed labels (bottom) on the Adult dataset. Label | Batch | TabLeak | TabLeak | TabLeak | Cosine | Random ---|---|---|---|---|---|--- | Size | | (no pooling) | (no softmax) | | True $y$ | $1$ | $99.4\pm 2.8$ | $99.1\pm 4.4$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $43.3\pm 11.8$ $2$ | $99.2\pm 5.5$ | $99.1\pm 6.5$ | $\mathbf{99.9\pm 1.0}$ | $97.6\pm 6.9$ | $47.1\pm 7.9$ $4$ | $98.0\pm 4.5$ | $96.6\pm 7.5$ | $\mathbf{98.9\pm 4.0}$ | $96.4\pm 7.2$ | $49.8\pm 4.9$ $8$ | $\mathbf{95.1\pm 9.2}$ | $93.9\pm 10.2$ | $92.9\pm 6.5$ | $91.1\pm 7.3$ | $53.9\pm 4.4$ $16$ | $\mathbf{89.5\pm 7.6}$ | $84.5\pm 9.9$ | $80.5\pm 4.3$ | $75.0\pm 5.2$ | $55.1\pm 3.9$ $32$ | $\mathbf{77.6\pm 4.8}$ | $72.4\pm 4.6$ | $70.8\pm 3.2$ | $66.6\pm 3.5$ | $58.0\pm 2.9$ $64$ | $\mathbf{71.2\pm 2.8}$ | $66.2\pm 2.8$ | $66.9\pm 2.7$ | $62.5\pm 3.1$ | $59.0\pm 3.2$ $128$ | $\mathbf{68.8\pm 1.3}$ | $64.1\pm 1.4$ | $64.0\pm 2.1$ | $59.5\pm 2.1$ | $61.2\pm 3.1$ Rec. $\hat{y}$ | $1$ | $99.4\pm 2.8$ | $99.3\pm 3.6$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $43.3\pm 11.8$ $2$ | $98.2\pm 8.9$ | $98.1\pm 9.1$ | $\mathbf{98.6\pm 7.5}$ | $95.9\pm 11.5$ | $47.1\pm 7.9$ $4$ | $89.5\pm 13.8$ | $88.0\pm 15.2$ | $\mathbf{90.0\pm 13.0}$ | $87.9\pm 13.7$ | $49.8\pm 4.9$ $8$ | $\mathbf{86.9\pm 11.6}$ | $84.6\pm 13.4$ | $85.8\pm 9.9$ | $83.3\pm 9.7$ | $53.9\pm 4.4$ $16$ | $\mathbf{82.4\pm 8.4}$ | $78.3\pm 9.0$ | $77.7\pm 4.1$ | $73.0\pm 3.5$ | $55.1\pm 3.9$ $32$ | $\mathbf{75.3\pm 4.8}$ | $70.6\pm 4.3$ | $70.2\pm 3.2$ | $66.3\pm 3.4$ | $58.0\pm 2.9$ $64$ | $\mathbf{70.4\pm 3.2}$ | $65.9\pm 3.6$ | $66.8\pm 2.6$ | $63.1\pm 3.2$ | $59.0\pm 3.2$ $128$ | $\mathbf{68.7\pm 1.3}$ | $64.4\pm 1.5$ | $63.8\pm 2.1$ | $59.5\pm 2.1$ | $61.2\pm 3.1$ Table 7: The mean inversion accuracy [%] and standard deviation of different methods over varying batch sizes with given true labels (top) and with reconstructed labels (bottom) on the German Credit dataset. Label | Batch | TabLeak | TabLeak | TabLeak | Cosine | Random ---|---|---|---|---|---|--- | Size | | (no pooling) | (no softmax) | | True $y$ | $1$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $43.9\pm 9.8$ $2$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $99.9\pm 0.7$ | $98.0\pm 7.1$ | $45.1\pm 6.6$ $4$ | $\mathbf{99.9\pm 0.4}$ | $99.2\pm 3.6$ | $99.5\pm 1.2$ | $97.8\pm 6.0$ | $50.3\pm 4.5$ $8$ | $99.7\pm 1.1$ | $99.1\pm 2.2$ | $\mathbf{98.2\pm 2.5}$ | $96.1\pm 5.2$ | $51.8\pm 3.2$ $16$ | $\mathbf{95.9\pm 3.4}$ | $94.0\pm 4.3$ | $84.1\pm 3.4$ | $79.3\pm 4.4$ | $54.5\pm 3.0$ $32$ | $\mathbf{83.6\pm 2.9}$ | $79.4\pm 3.1$ | $72.1\pm 1.9$ | $69.7\pm 2.2$ | $56.8\pm 2.2$ $64$ | $\mathbf{73.0\pm 1.3}$ | $70.8\pm 1.4$ | $68.9\pm 1.4$ | $66.6\pm 1.8$ | $59.4\pm 1.9$ $128$ | $\mathbf{71.3\pm 0.8}$ | $69.1\pm 0.8$ | $67.4\pm 1.5$ | $64.5\pm 1.5$ | $61.0\pm 2.1$ Rec. 
$\hat{y}$ | $1$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $43.9\pm 9.8$ $2$ | $\mathbf{100.0\pm 0.0}$ | $99.5\pm 3.5$ | $99.9\pm 0.7$ | $98.8\pm 5.2$ | $45.1\pm 6.6$ $4$ | $\mathbf{99.6\pm 2.6}$ | $99.5\pm 2.9$ | $99.2\pm 3.0$ | $97.4\pm 6.4$ | $50.3\pm 4.5$ $8$ | $\mathbf{97.2\pm 6.1}$ | $96.8\pm 6.8$ | $96.0\pm 6.2$ | $94.8\pm 6.5$ | $51.8\pm 3.2$ $16$ | $\mathbf{91.7\pm 6.5}$ | $90.0\pm 7.3$ | $82.3\pm 4.6$ | $77.9\pm 4.6$ | $54.5\pm 3.0$ $32$ | $\mathbf{81.5\pm 3.4}$ | $77.6\pm 2.8$ | $71.5\pm 2.0$ | $69.1\pm 2.1$ | $56.8\pm 2.2$ $64$ | $\mathbf{72.9\pm 1.4}$ | $70.5\pm 1.4$ | $68.6\pm 1.3$ | $66.5\pm 1.7$ | $59.4\pm 1.9$ $128$ | $\mathbf{71.1\pm 0.9}$ | $69.1\pm 0.7$ | $67.1\pm 1.6$ | $64.4\pm 1.6$ | $61.0\pm 2.1$ Table 8: The mean inversion accuracy [%] and standard deviation of different methods over varying batch sizes with given true labels (top) and with reconstructed labels (bottom) on the Lawschool Admissions dataset. Label | Batch | TabLeak | TabLeak | TabLeak | Cosine | Random ---|---|---|---|---|---|--- | Size | | (no pooling) | (no softmax) | | True $y$ | $1$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $38.9\pm 14.6$ $2$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $99.9\pm 1.0$ | $96.3\pm 10.4$ | $38.4\pm 11.5$ $4$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $99.7\pm 1.2$ | $97.6\pm 6.9$ | $43.2\pm 7.2$ $8$ | $98.7\pm 3.8$ | $\mathbf{98.8\pm 3.7}$ | $96.0\pm 5.0$ | $94.5\pm 5.8$ | $49.4\pm 4.6$ $16$ | $\mathbf{94.8\pm 5.6}$ | $93.5\pm 6.5$ | $81.1\pm 4.5$ | $77.3\pm 5.5$ | $53.0\pm 3.1$ $32$ | $\mathbf{84.8\pm 3.9}$ | $82.4\pm 4.1$ | $73.3\pm 2.8$ | $71.0\pm 2.8$ | $57.6\pm 2.3$ $64$ | $\mathbf{78.2\pm 2.0}$ | $76.6\pm 2.0$ | $73.0\pm 2.1$ | $71.7\pm 2.2$ | $60.4\pm 2.2$ $128$ | $\mathbf{77.3\pm 1.2}$ | $76.0\pm 1.1$ | $73.7\pm 2.6$ | $71.8\pm 2.7$ | $63.4\pm 1.5$ Rec. $\hat{y}$ | $1$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $38.9\pm 14.6$ $2$ | $99.1\pm 6.0$ | $\mathbf{99.3\pm 5.0}$ | $98.7\pm 7.1$ | $95.9\pm 12.0$ | $38.4\pm 11.5$ $4$ | $\mathbf{99.6\pm 3.0}$ | $99.0\pm 4.9$ | $98.7\pm 6.3$ | $96.8\pm 8.5$ | $43.2\pm 7.2$ $8$ | $\mathbf{95.9\pm 7.8}$ | $95.3\pm 8.3$ | $93.4\pm 7.2$ | $91.9\pm 7.9$ | $49.4\pm 4.6$ $16$ | $\mathbf{91.2\pm 7.3}$ | $89.1\pm 8.3$ | $80.5\pm 4.7$ | $77.4\pm 5.4$ | $53.0\pm 3.1$ $32$ | $\mathbf{83.2\pm 4.1}$ | $80.9\pm 4.3$ | $72.7\pm 2.2$ | $71.0\pm 2.0$ | $57.6\pm 2.3$ $64$ | $\mathbf{77.2\pm 2.4}$ | $76.0\pm 2.2$ | $72.7\pm 2.1$ | $71.5\pm 2.4$ | $60.4\pm 2.2$ $128$ | $\mathbf{77.1\pm 1.2}$ | $75.9\pm 1.3$ | $73.9\pm 2.7$ | $71.8\pm 2.8$ | $63.4\pm 1.5$ Table 9: The mean inversion accuracy [%] and standard deviation of different methods over varying batch sizes with given true labels (top) and with reconstructed labels (bottom) on the Health Heritage dataset. 
Label | Batch | TabLeak | TabLeak | TabLeak | Cosine | Random ---|---|---|---|---|---|--- | Size | | (no pooling) | (no softmax) | | True $y$ | $1$ | $\mathbf{99.8\pm 1.6}$ | $\mathbf{99.8\pm 1.6}$ | $\mathbf{99.8\pm 1.6}$ | $\mathbf{99.8\pm 1.6}$ | $34.8\pm 13.1$ $2$ | $97.7\pm 8.3$ | $97.2\pm 10.2$ | $\mathbf{98.6\pm 3.3}$ | $97.9\pm 5.6$ | $36.9\pm 9.8$ $4$ | $\mathbf{98.2\pm 6.5}$ | $96.1\pm 9.8$ | $97.8\pm 4.2$ | $95.6\pm 8.1$ | $37.0\pm 5.3$ $8$ | $\mathbf{96.0\pm 8.2}$ | $94.2\pm 10.5$ | $89.2\pm 9.1$ | $86.2\pm 9.0$ | $39.2\pm 3.8$ $16$ | $\mathbf{86.1\pm 8.8}$ | $80.6\pm 9.9$ | $67.8\pm 4.8$ | $63.6\pm 5.5$ | $41.4\pm 3.7$ $32$ | $\mathbf{70.0\pm 4.5}$ | $64.7\pm 3.9$ | $61.4\pm 4.0$ | $57.7\pm 4.1$ | $43.4\pm 2.8$ $64$ | $\mathbf{64.7\pm 2.8}$ | $59.6\pm 2.7$ | $61.5\pm 4.3$ | $57.4\pm 4.7$ | $45.0\pm 3.7$ $128$ | $\mathbf{63.0\pm 1.4}$ | $57.9\pm 1.6$ | $59.9\pm 5.0$ | $55.6\pm 4.8$ | $46.8\pm 3.2$ Rec. $\hat{y}$ | $1$ | $99.8\pm 1.6$ | $\mathbf{99.9\pm 0.8}$ | $99.8\pm 1.6$ | $99.6\pm 2.5$ | $34.8\pm 13.1$ $2$ | $\mathbf{95.4\pm 13.6}$ | $94.8\pm 15.1$ | $95.2\pm 13.6$ | $92.5\pm 16.9$ | $36.9\pm 9.8$ $4$ | $\mathbf{86.6\pm 20.2}$ | $84.7\pm 22.0$ | $84.7\pm 20.8$ | $83.5\pm 20.7$ | $37.0\pm 5.3$ $8$ | $\mathbf{82.4\pm 15.6}$ | $80.5\pm 16.3$ | $77.3\pm 13.3$ | $74.5\pm 13.8$ | $39.2\pm 3.8$ $16$ | $\mathbf{75.9\pm 12.4}$ | $71.4\pm 11.4$ | $64.8\pm 7.6$ | $60.9\pm 6.3$ | $41.4\pm 3.7$ $32$ | $\mathbf{64.8\pm 5.7}$ | $60.8\pm 4.7$ | $59.8\pm 3.8$ | $56.9\pm 4.0$ | $43.4\pm 2.8$ $64$ | $\mathbf{62.6\pm 2.6}$ | $59.6\pm 2.6$ | $60.9\pm 4.0$ | $57.7\pm 4.7$ | $45.0\pm 3.7$ $128$ | $\mathbf{62.7\pm 1.6}$ | $59.2\pm 1.6$ | $59.6\pm 5.1$ | $55.7\pm 5.0$ | $46.8\pm 3.2$ ### D.2 Categorical vs. Continuous Features on all Datasets In Fig. 10, we compare the reconstruction accuracy of the continuous and the discrete features on all four datasets. We confirm our observations, shown in Fig. 3 in the main text, that a strong dichotomy between continuous and discrete feature reconstruction accuracy exists on all 4 datasets. (a) Adult (b) German Credit (c) Lawschool Admissions (d) Health Heritage Figure 10: Mean reconstruction accuracy curves with corresponding standard deviations over varying batch size, separately for the discrete and the continuous features on all four datasets. ### D.3 Federated Averaging Results on all Datasets In Tab. 10, Tab. 11, Tab. 12, and Tab. 13 we present our results on attacking the clients in FedAvg training on the Adult, German Credit, Lawschool Submissions, and Health Heritage datasets, respectively. We described the details of the experiment in App. B above. Confirming our conclusions drawn in the main part of this manuscript, we observe that TabLeak achieves non-trivial reconstruction accuracy over all settings and even for large numbers of updates, while the baseline attack often fails to outperform random guessing, when the number of local updates is increased. Table 10: Mean and standard deviation of the inversion accuracy [%] with local dataset size of $32$ in FedAvg training on the Adult dataset. The accuracy of the random baseline for $32$ datapoints is $58.0\pm 2.9$. 
| TabLeak | Cosine ---|---|--- #batches | 1 epoch | 5 epochs | 10 epochs | 1 epoch | 5 epochs | 10 epochs 1 | $\mathbf{77.4\pm 4.5}$ | $\mathbf{71.1\pm 2.9}$ | $\mathbf{67.6\pm 3.7}$ | $65.2\pm 2.7$ | $56.1\pm 4.1$ | $53.2\pm 4.2$ 2 | $\mathbf{75.7\pm 5.0}$ | $\mathbf{71.7\pm 3.9}$ | $\mathbf{67.7\pm 4.2}$ | $64.8\pm 3.3$ | $56.4\pm 4.8$ | $56.2\pm 4.8$ 4 | $\mathbf{75.9\pm 4.4}$ | $\mathbf{71.0\pm 3.2}$ | $\mathbf{67.4\pm 3.4}$ | $64.8\pm 3.4$ | $58.7\pm 4.6$ | $56.6\pm 5.0$ Table 11: Mean and standard deviation of the inversion accuracy [%] with local dataset size of $32$ in FedAvg training on the German Credit dataset. The accuracy of the random baseline for $32$ datapoints is $56.9\pm 2.1$. | TabLeak | Cosine ---|---|--- #batches | 1 epoch | 5 epochs | 10 epochs | 1 epoch | 5 epochs | 10 epochs 1 | $\mathbf{95.2\pm 3.8}$ | $\mathbf{87.9\pm 6.2}$ | $\mathbf{83.4\pm 4.6}$ | $78.2\pm 4.6$ | $65.4\pm 6.2$ | $62.5\pm 6.1$ 2 | $\mathbf{95.5\pm 3.9}$ | $\mathbf{88.2\pm 5.2}$ | $\mathbf{84.0\pm 6.6}$ | $78.3\pm 5.8$ | $68.8\pm 6.6$ | $63.4\pm 4.8$ 4 | $\mathbf{95.6\pm 3.6}$ | $\mathbf{85.5\pm 6.0}$ | $\mathbf{81.0\pm 6.1}$ | $79.2\pm 4.9$ | $67.4\pm 4.8$ | $62.6\pm 6.5$ Table 12: Mean and standard deviation of the inversion accuracy [%] with local dataset size of $32$ in FedAvg training on the Lawschool Admissions dataset. The accuracy of the random baseline for $32$ datapoints is $57.8\pm 2.3$. | TabLeak | Cosine ---|---|--- #batches | 1 epoch | 5 epochs | 10 epochs | 1 epoch | 5 epochs | 10 epochs 1 | $\mathbf{85.6\pm 3.8}$ | $\mathbf{83.3\pm 2.9}$ | $\mathbf{80.7\pm 4.1}$ | $72.2\pm 2.6$ | $68.1\pm 3.1$ | $65.2\pm 2.8$ 2 | $\mathbf{86.0\pm 3.8}$ | $\mathbf{83.0\pm 3.2}$ | $\mathbf{79.8\pm 3.5}$ | $72.5\pm 1.9$ | $68.3\pm 4.4$ | $66.2\pm 2.8$ 4 | $\mathbf{85.8\pm 3.5}$ | $\mathbf{81.7\pm 3.8}$ | $\mathbf{79.3\pm 4.3}$ | $72.5\pm 2.4$ | $69.4\pm 3.9$ | $67.9\pm 3.8$ Table 13: Mean and standard deviation of the inversion accuracy [%] with local dataset size of $32$ in FedAvg training on the Health Heritage dataset. The accuracy of the random baseline for $32$ datapoints is $43.4\pm 3.5$. | TabLeak | Cosine ---|---|--- #batches | 1 epoch | 5 epochs | 10 epochs | 1 epoch | 5 epochs | 10 epochs 1 | $\mathbf{68.5\pm 5.0}$ | $\mathbf{62.2\pm 3.5}$ | $\mathbf{57.4\pm 3.0}$ | $53.8\pm 5.5$ | $41.4\pm 3.6$ | $41.1\pm 3.4$ 2 | $\mathbf{68.1\pm 4.9}$ | $\mathbf{62.4\pm 4.1}$ | $\mathbf{57.0\pm 2.8}$ | $52.4\pm 5.7$ | $43.4\pm 4.28$ | $44.4\pm 4.3$ 4 | $\mathbf{67.3\pm 5.8}$ | $\mathbf{62.0\pm 3.5}$ | $\mathbf{57.0\pm 3.0}$ | $52.5\pm 6.6$ | $43.4\pm 5.7$ | $44.8\pm 4.4$ ### D.4 Full Results on Entropy on all Datasets In Tab. 14, Tab. 15, Tab. 16, and Tab. 17 we provide the mean and standard deviation of the reconstruction accuracy and the entropy of the continuous and the categorical features over increasing batch size for attacking with TabLeak on the four datasets. In support of Sec. 4, we can observe on all datasets a trend of increasing entropy over decreasing reconstruction accuracy as the batch size is increased; and as such providing a signal to the attacker about their overall reconstruction success. To generalize our results on the local information contained in the entropy, we show the mean reconstruction accuracy of both the discrete and the continuous features with respect to bucketing them based on their entropy in a batch of size $128$ in Tab. 18, Tab. 19, Tab. 20, and Tab. 21 for all four datasets, respectively. 
We can see that with the help of this bucketing, we can identify subsets of the reconstructed features that have been retrieved with a (sometimes significantly e.g., up to 24%) higher accuracy than the overall batch. Table 14: The mean accuracy [%] and entropies with the corresponding standard deviations over batch sizes of the categorical and the continuous features on the Adult dataset. | Discrete | Continuous ---|---|--- | Accuracy | Entropy | Accuracy | Entropy 1 | $100.0\pm 0.0$ | $0.01\pm 0.04$ | $98.7\pm 6.5$ | $-3.1\pm 0.26$ 2 | $99.5\pm 3.5$ | $0.01\pm 0.06$ | $98.8\pm 8.2$ | $-2.82\pm 0.57$ 4 | $99.5\pm 1.4$ | $0.08\pm 0.11$ | $95.8\pm 9.9$ | $-1.89\pm 0.99$ 8 | $98.5\pm 5.6$ | $0.15\pm 0.13$ | $90.9\pm 14.7$ | $-1.11\pm 0.95$ 16 | $97.2\pm 4.3$ | $0.26\pm 0.11$ | $78.8\pm 13.5$ | $-0.11\pm 0.63$ 32 | $91.0\pm 4.4$ | $0.40\pm 0.06$ | $59.2\pm 6.9$ | $0.77\pm 0.30$ 64 | $83.2\pm 3.6$ | $0.48\pm 0.04$ | $55.1\pm 3.0$ | $1.21\pm 0.19$ 128 | $78.5\pm 1.8$ | $0.53\pm 0.03$ | $55.7\pm 2.0$ | $1.48\pm 0.10$ Table 15: The mean accuracy [%] and entropies with the corresponding standard deviations over batch sizes of the categorical and the continuous features on the German Credit dataset. | Discrete | Continuous ---|---|--- | Accuracy | Entropy | Accuracy | Entropy 1 | $100.0\pm 0.0$ | $0.00\pm 0.01$ | $100.0\pm 0.0$ | $-3.10\pm 0.18$ 2 | $100.0\pm 0.0$ | $0.03\pm 0.05$ | $100.0\pm 0.0$ | $-2.41\pm 0.97$ 4 | $100.0\pm 0.0$ | $0.07\pm 0.05$ | $99.8\pm 1.1$ | $-1.66\pm 0.80$ 8 | $100.0\pm 0.0$ | $0.10\pm 0.07$ | $99.1\pm 3.1$ | $-1.38\pm 0.54$ 16 | $99.5\pm 1.4$ | $0.25\pm 0.07$ | $89.1\pm 8.1$ | $-0.35\pm 0.22$ 32 | $93.0\pm 2.1$ | $0.43\pm 0.04$ | $66.0\pm 4.9$ | $0.60\pm 0.13$ 64 | $81.9\pm 1.8$ | $0.56\pm 0.02$ | $57.5\pm 2.2$ | $1.08\pm 0.06$ 128 | $78.2\pm 1.1$ | $0.59\pm 0.02$ | $58.4\pm 1.7$ | $1.30\pm 0.05$ Table 16: The mean accuracy [%] and entropies with the corresponding standard deviations over batch sizes of the categorical and the continuous features on the Lawschool Admissions dataset. | Discrete | Continuous ---|---|--- | Accuracy | Entropy | Accuracy | Entropy 1 | $100.0\pm 0.0$ | $0.01\pm 0.03$ | $100.0\pm 0.0$ | $-3.28\pm 0.29$ 2 | $100.0\pm 0.0$ | $0.02\pm 0.05$ | $100.0\pm 0.0$ | $-2.85\pm 0.87$ 4 | $100.0\pm 0.0$ | $0.03\pm 0.04$ | $100.0\pm 0.0$ | $-2.45\pm 0.78$ 8 | $99.8\pm 1.1$ | $0.11\pm 0.10$ | $96.4\pm 11.1$ | $-1.77\pm 0.62$ 16 | $98.1\pm 2.8$ | $0.24\pm 0.11$ | $87.1\pm 13.4$ | $-0.65\pm 0.49$ 32 | $93.4\pm 3.0$ | $0.41\pm 0.08$ | $65.1\pm 8.0$ | $0.18\pm 0.20$ 64 | $87.0\pm 2.4$ | $0.55\pm 0.05$ | $57.7\pm 4.8$ | $0.78\pm 0.12$ 128 | $83.5\pm 1.5$ | $0.60\pm 0.03$ | $62.6\pm 3.4$ | $1.07\pm 0.11$ Table 17: The mean accuracy [%] and entropies with the corresponding standard deviations over batch sizes of the categorical and the continuous features on the Health Heritage dataset. 
| Discrete | Continuous ---|---|--- | Accuracy | Entropy | Accuracy | Entropy 1 | $100.0\pm 0.0$ | $0.02\pm 0.05$ | $99.6\pm 2.5$ | $-2.97\pm 0.33$ 2 | $100.0\pm 0.0$ | $0.05\pm 0.09$ | $96.5\pm 12.9$ | $-2.55\pm 0.80$ 4 | $99.7\pm 1.8$ | $0.08\pm 0.10$ | $97.5\pm 9.0$ | $-1.71\pm 0.79$ 8 | $99.1\pm 3.7$ | $0.13\pm 0.11$ | $94.3\pm 11.3$ | $-1.06\pm 0.64$ 16 | $96.6\pm 7.5$ | $0.26\pm 0.10$ | $80.2\pm 11.2$ | $-0.11\pm 0.42$ 32 | $85.1\pm 6.4$ | $0.43\pm 0.06$ | $61.7\pm 3.8$ | $0.72\pm 0.23$ 64 | $73.1\pm 4.7$ | $0.52\pm 0.03$ | $59.6\pm 2.5$ | $1.13\pm 0.20$ 128 | $66.1\pm 2.4$ | $0.57\pm 0.02$ | $60.7\pm 1.6$ | $1.44\pm 0.13$ Table 18: The mean accuracy [%] and the share of data [%] in each entropy bucket for batch size 128 on the Adult dataset. Entropy | Categorical Features | Entropy | Continuous Features ---|---|---|--- Bucket | Accuracy [%] | Data [%] | Bucket | Accuracy [%] | Data [%] 0.0-0.2 | $95.7$ | $8.1$ | $\infty$-0.72 | 72.7 | 1.2 0.2-0.4 | $90.5$ | $23.4$ | 0.72-1.16 | 65.5 | 13.6 0.4-0.6 | $79.8$ | $27.7$ | 1.16-1.6 | 56.4 | 50 0.6-0.8 | $69.8$ | $29.2$ | 1.6-2.04 | 51.1 | 32.4 0.8-1.0 | $61.2$ | $11.6$ | 2.04-$\infty$ | 41.8 | 2.9 Overall | $78.5$ | $100$ | Overall | 55.7 | $100$ Random | $73.8$ | $100$ | Random | 44.4 | $100$ Table 19: The mean accuracy [%] and the share of data [%] in each entropy bucket for batch size 128 on the German Credit dataset. Entropy | Categorical Features | Entropy | Continuous Features ---|---|---|--- Bucket | Accuracy [%] | Data [%] | Bucket | Accuracy [%] | Data [%] 0.0-0.2 | $98.1$ | $7.4$ | $\infty$-0.72 | 55.7 | 1.2 0.2-0.4 | $92.5$ | $15.3$ | 0.72-1.16 | 62.3 | 28.1 0.4-0.6 | $83.3$ | $22.2$ | 1.16-1.6 | 57.7 | 56.4 0.6-0.8 | $71.6$ | $33.7$ | 1.6-2.04 | 53.4 | 13.2 0.8-1.0 | $66.0$ | $21.3$ | 2.04-$\infty$ | 48.2 | 0.2 Overall | $78.2$ | $100$ | Overall | $58.4$ | $100$ Random | $73.5$ | $100$ | Random | $37.8$ | $100$ Table 20: The mean accuracy [%] and the share of data [%] in each entropy bucket for batch size 128 on the Lawschool Admissions dataset. Entropy | Categorical Features | Entropy | Continuous Features ---|---|---|--- Bucket | Accuracy [%] | Data [%] | Bucket | Accuracy [%] | Data [%] 0.0-0.2 | $95.5$ | $3.4$ | $\infty$-0.72 | 69.5 | 20.7 0.2-0.4 | $92.1$ | $14.2$ | 0.72-1.16 | 63.3 | 35.1 0.4-0.6 | $88.0$ | $32.7$ | 1.16-1.6 | 60.1 | 32.9 0.6-0.8 | $81.2$ | $30.4$ | 1.6-2.04 | 55.5 | 10.8 0.8-1.0 | $70.7$ | $19.3$ | 2.04-$\infty$ | 54.1 | 0.5 Overall | $83.5$ | $100$ | Overall | $62.6$ | $100$ Random | $81.1$ | $100$ | Random | $19.1$ | $100$ Table 21: The mean accuracy [%] and the share of data [%] in each entropy bucket for batch size 128 on the Health Heritage dataset. Entropy | Categorical Features | Entropy | Continuous Features ---|---|---|--- Bucket | Accuracy [%] | Data [%] | Bucket | Accuracy [%] | Data [%] 0.0-0.2 | $90.7$ | $6.2$ | $\infty$-0.72 | 69.1 | 1.1 0.2-0.4 | $84.7$ | $22.1$ | 0.72-1.16 | 65.6 | 17.0 0.4-0.6 | $70.5$ | $21.2$ | 1.16-1.6 | 61.8 | 52.9 0.6-0.8 | $54.8$ | $32.3$ | 1.6-2.04 | 55.9 | 26.5 0.8-1.0 | $50.3$ | $18.4$ | 2.04-$\infty$ | 49.4 | 2.5 Overall | $66.1$ | $100$ | Overall | $60.7$ | $100$ Random | $69.8$ | $100$ | Random | $34.2$ | $100$
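For reference, the bucketing analysis summarized in Tab. 18–21 above can be sketched in a few lines. The sketch below is ours and does not reproduce the exact entropy of Sec. 3.3: as an assumed stand-in, it uses the normalized Shannon entropy of the ensemble members' votes for each categorical feature, and the ground truth enters only to report per-bucket accuracy, not to form the buckets.

```python
import numpy as np

def categorical_entropy(votes, n_categories):
    # Normalized Shannon entropy of the ensemble's votes for one categorical
    # feature of one sample: 0 if all ensemble members agree, 1 if the votes
    # are spread uniformly over the categories (assumes n_categories >= 2).
    counts = np.bincount(np.asarray(votes), minlength=n_categories).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(n_categories))

def bucket_report(entropies, correct, edges=(0.2, 0.4, 0.6, 0.8, 1.01)):
    # Assign each reconstructed categorical feature to an entropy bucket and
    # report the accuracy [%] and share of data [%] per bucket, as in
    # Tab. 18-21; the bucketing itself uses only the entropies.
    entropies = np.asarray(entropies, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bucket_ids = np.digitize(entropies, edges)
    rows, lo = [], 0.0
    for b, hi in enumerate(edges):
        mask = bucket_ids == b
        if mask.any():
            rows.append((lo, min(hi, 1.0), 100 * correct[mask].mean(), 100 * mask.mean()))
        lo = hi
    return rows
```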
# What will it take to generate fairness-preserving explanations? Jessica Dai Sohini Upadhyay Stephen H. Bach Himabindu Lakkaraju ###### Abstract In situations where explanations of black-box models may be useful, the fairness of the black-box is also often a relevant concern. However, the link between the fairness of the black-box model and the behavior of explanations for the black-box is unclear. We focus on explanations applied to tabular datasets, suggesting that explanations do not necessarily preserve the fairness properties of the black-box algorithm. In other words, explanation algorithms can ignore or obscure critical relevant properties, creating incorrect or misleading explanations. More broadly, we propose future research directions for evaluating and generating explanations such that they are informative and relevant from a fairness perspective. Machine Learning, ICML ## 1 Introduction & Motivation While fairness and explainability are both generally considered core components of “responsible” machine learning, surprisingly little work has explored the two principles in tandem. However, especially in light of common goals of generating explanations for a black-box models, it is critical that the explanation itself can be reliably trusted to illustrate important fairness properties of the black-box. For example, Suresh et al. (2021)’s framework for characterizing stakeholders in explainable machine learning provides objectives such as debugging or improving the model, ensuring regulatory compliance, informing downstream actions, justifying actions based on algorithm output, and contesting a decision; and specific tasks like assessing the reliability of a prediction; detecting mistaken or discriminatory behavior; and understanding the influence of different inputs. Prior work in this area has outlined similar goals for explanations (Bhatt et al., 2020a). For obvious reasons, if fairness is a concern related to the model more broadly, it is also a critical consideration for these tasks and objectives in the context of explanations. Furthermore, while of course calculating particular fairness desiderata for the underlying black-box directly might surface unfairness, the stakeholders who are using an explainable ML algorithm may not have access to the information needed for such an analysis; as a result, we might hope that explanations themselves contain sufficient and accurate information for any stakeholder to confidently make claims and downstream decisions based on the explanation. This is especially important given that end-users of explanations may be vulnerable to overtrusting or being manipulated by explanations (Lakkaraju & Bastani, 2020). However, current methods for evaluating explanations are designed to be almost entirely application-agnostic, and therefore do not consider any criteria related to fairness. While terminology varies across the literature, commonly used evaluation metrics for explanations include fidelity, the extent to which a surrogate model generated by an explanation algorithm produces predictions similar to the black-box, and stability, the extent to which explanations generated for similar (but non-identical) inputs are similar to one another (Bhatt et al., 2020b; Yeh et al., ). A growing portion of the literature points to dangers in focusing solely on these targets when designing explanation algorithms. Slack et al. (2020b) and Zhang et al. 
(2019), for instance, highlight the high degree of inconsistency of explanations generated by perturbation-based methods under certain parameter settings—in other words, multiple explanations generated for the same input may result in wildly different explanations. Kumar et al. (2020) and Hancox-Li & Kumar (2021), meanwhile, investigate SHAP (Lundberg & Lee, 2017), and find problems from both technical and philosophical perspectives. Under the framework of fairness specifically, Slack et al. (2020a) and Aïvodji et al. (2019) illustrate specific ways in which either a black-box algorithm or an explanation, respectively, may be adversarially constructed such that the explanation, while having high fidelity (or achieving other desirable metrics), misleadingly suggests that the black-box model is fair when in reality it is not. However, adversarial construction may not be necessary for misleading explanations to occur. For some baseline intuition as to how fairness and explanations may interact, consider the following. Explanations are often intended to provide a digestible approximation of the black-box algorithm’s decision boundary, whether locally (in the neighborhood of a particular input) or globally (for all possible inputs). Additionally, fairness concerns arise when there is a meaningful difference in how two or more demographic groups are distributed or labelled in the training data, which leads to a meaningful disparity in how the black-box machine learning algorithm performs on the two groups by whatever metric one may choose (Corbett-Davies & Goel, 2018). The demographic information may or may not be used by the black-box. In the case that it is not used, the explanation will not explicitly encode information about the sensitive attribute, and an end-user relying on the explanation alone will have little information about the fairness of the black-box. In the case that it is used, such as when fairness-constrained learning algorithms or postprocessing methods are applied, then the black-box may learn a decision boundary such that the boundaries are different when conditioned on group membership. However, explanation methods are not designed to approximate such boundaries. Furthermore, the explanation itself will include information about the sensitive attribute, such as in the form of a feature importance score; it is not immediately clear what the proper interpretation of that score should be. Finally, in either case, the known issue of isolating feature attributions when features may be correlated with one another (Kumar et al., 2020) is especially relevant when considering fairness applications. ### 1.1 A simple example Consider the following scenario where the black-box takes in three features: group, x0, and x1, where group $\in\\{0,1\\}$, x0 and x1 are continuous, and x0 is correlated with group membership. For some reason or another (perhaps by applying a fairness intervention in the training process), the black-box’s learned decision boundaries are different when conditioned on group membership: specifically, the black-box predicts 1 for group 0 when x1 $>6$, and for group 1 when x1 $>5$. Figure 1(a) illustrates this black-box decision boundary. (a) Decision boundary of the black-box (b) Decision boundary learned by LIME Figure 1: The true decision boundaries of the black-box vs the decision boundary learned by LIME. The left cluster is the minority group, comprising 27% of the total population. 
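To make this example concrete, the following self-contained sketch (our own construction; for brevity it drops the correlated feature x0 and uses a single global one-feature logistic surrogate as a simplified stand-in for LIME's local linear fits) reproduces the qualitative effect: the fidelity-driven surrogate places its single boundary closer to the majority group's boundary, and per-group fidelity is correspondingly unequal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, minority_frac = 5000, 0.27
group = (rng.random(n) > minority_frac).astype(int)   # group 0 is the 27% minority
x1 = rng.uniform(0.0, 10.0, size=n)

# Black-box with group-conditional boundaries: predict 1 when x1 > 6 for
# group 0 and when x1 > 5 for group 1.
y_bb = np.where(group == 0, x1 > 6.0, x1 > 5.0).astype(int)

# Simplified surrogate: one global logistic model on x1 alone, standing in
# for a sparse, fidelity-driven linear explanation that ignores group.
surrogate = LogisticRegression().fit(x1.reshape(-1, 1), y_bb)
threshold = -surrogate.intercept_[0] / surrogate.coef_[0, 0]
y_sur = surrogate.predict(x1.reshape(-1, 1))

print(f"surrogate boundary: x1 > {threshold:.2f} (black-box: 6 for group 0, 5 for group 1)")
for g in (0, 1):
    agreement = (y_sur[group == g] == y_bb[group == g]).mean()
    print(f"fidelity on group {g}: {agreement:.3f}")
```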
In the case that both groups constitute $50\%$ of the population, an explanation method that optimizes for fidelity as measured by performance on sampled neighbors will approximate the decision boundary at around x1 $>5.5$—an explanation that is simply incorrect for both groups based on what we know about the black-box. If one group is a minority of the population, however, the explanation’s approximated decision boundary will be closer to the majority group’s decision boundary, meaning overall better explanations for the majority group and overall worse explanations for the minority group. This is illustrated in Figure 1(b), which visualizes the decision boundary learned by LIME: note that the learned boundary is much closer to x1 $>5$, the majority group’s boundary, over all data points, not just the points corresponding to the majority group. Notably, this is a problem that seems to arise whenever group-conditional decision boundaries are meaningfully distinct. The explanations generated here, therefore, may be both misleading and incorrect. ## 2 Our Framework We propose a two-part framework for further work in this area: first, determining what constitutes a mismatch in fairness properties; and second, generating fairness-preserving explanations. ### 2.1 Diagnosing Fairness Mismatch First, we provide an initial attempt at outlining what metrics or diagnostic tests may be useful in detecting a mismatch in fairness; these also serve, therefore, as potential criteria or definitions for what a fairness-preserving vs fairness-obscuring explanation may look like. These metrics are broadly motivated by the principle that if the model is fair, the explanations should not raise false alarms; similarly, if the model is unfair, the explanations should not suggest that it is innocuous. In this section we attempt to pinpoint what exactly it means for an explanation to “raise a false alarm” or suggest that the model “is innocuous.” Group fairness. There are a variety of metrics through which models can be audited or monitored for group fairness: demographic parity focuses primarily on group-wise outcomes, while other metrics such as equalized odds, equal opportunity, or predictive parity reflect some combination of the group-conditional confusion matrices (Verma & Rubin, 2018). Let $\mathcal{M}$ represent a metric of group fairness which takes in the predictions of some model (and potentially information about the true labels); $f$ represent the black-box; $E_{f}$ be the surrogate model from an explanation for $f$; and $E_{f}(\vec{x})$ represent evaluating the surrogate model on some input $\vec{x}$. Then, group fairness is preserved when: $|\mathcal{M}(f(\vec{x}))-\mathcal{M}(E_{f}(\vec{x}))|\leq\epsilon$ In other words, group fairness is preserved when substituting the explanation’s surrogate model for the black-box model yields predictions with similar values of the fairness metric $\mathcal{M}$. Two obvious issues arise with this initial proposition. First, while this is straightforward for global explanation methods, many of the most popular explanation methods like LIME (Ribeiro et al., 2016) or SHAP (Lundberg & Lee, 2017) are local explanation methods, designed to explain specific points: that is, there is no notion of a global surrogate model from which group fairness metrics can easily be calculated. Second, only the demographic parity metric does not require information about the ground-truth labelling of data points; all other metrics require this information.
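Concretely, the group-fairness check above, instantiated with demographic parity, can be written as follows (a sketch of our own; `f_preds` and `e_preds` are binary predictions of the black-box and of the explanation's surrogate on the same set of points, and `sensitive` is the binary group indicator).

```python
import numpy as np

def demographic_parity(preds, sensitive):
    # DP gap: P(Y_hat = 1 | S = 1) - P(Y_hat = 1 | S = 0).
    preds, sensitive = np.asarray(preds), np.asarray(sensitive)
    return preds[sensitive == 1].mean() - preds[sensitive == 0].mean()

def group_fairness_mismatch(f_preds, e_preds, sensitive):
    # |M(f(x)) - M(E_f(x))| with M = demographic parity; in this instantiation,
    # group fairness is preserved when the value is at most epsilon.
    return abs(demographic_parity(f_preds, sensitive)
               - demographic_parity(e_preds, sensitive))
```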
The question then becomes how to determine the set of points $x$ on which $\mathcal{M}$ will be calculated for local explanations. One potential approach is to use the sampled points in the local neighborhood generated by the explanation method, and to calculate $\mathcal{M}$ on the neighborhood for each of the points in the dataset. Of course, this approach means that no ground-truth labelling is available for this set of sampled points, and thus the only metric that can be verified to match or mismatch in this way is demographic parity. Counterfactual fairness. In classification, counterfactual fairness and individual fairness have similar motivations: identifying how the prediction for a particular input $x$ would change if only the group membership of $x$ were changed (Dwork et al., 2012). Though there is debate about the extent to which counterfactual or individual fairness is distinct from (if not orthogonal to) group fairness (Lahoti et al., 2019; Binns, 2020), a “fairness-preserving” explanation should nevertheless capture the counterfactual behavior of the black-box model. To that end, let $x^{\prime}$ represent the input $x$ with a changed value for group membership, and $E_{f(x)}$ denote the explanation generated for input $x$. Then, counterfactual fairness is preserved when: $E_{f(x)}-E_{f(x^{\prime})}\approx f(x)-f(x^{\prime})$ In other words, counterfactual fairness is preserved when the difference between the explanation generated for $x$ and the explanation generated for $x^{\prime}$ tracks the difference between the model’s behavior on $x$ and $x^{\prime}$. This abstraction also raises open questions about how exactly the similarity should be determined. Sensitive attribute. The treatment of the sensitive attribute in cases where it is included in the inputs to the black-box model is also worth additional attention. In this case, unlike group and counterfactual fairness, we do not propose a particular normative view of how the sensitive attribute ought to be treated by the explanation algorithm in relation to the black-box. For example, the feature importance for the sensitive attribute being 0 does not necessarily imply that the black-box is not discriminatory: the influence of the sensitive attribute may have been attributed to another, correlated feature. Moreover, many algorithms for fair machine learning explicitly use the sensitive attribute in order to achieve some measure of fairness, such as the method proposed in Hardt et al. (2016). In this sense, a feature importance of 0 might even be alarming rather than reassuring. As a result, future work in this area may include methods which give more meaningful ways to interpret the influence of the sensitive attribute. Additional considerations. Finally, of note here is the distinction between evaluating an explanation algorithm itself for how well it preserves fairness properties in general, and evaluating a given, specific explanation for whether it is preserving relevant fairness properties once the explanation for a particular input or model has been generated. These are different tasks—the first, for example, may be useful for a model developer or engineer in the process of choosing an explanation method, while the second may be more relevant to auditing processes once a black-box model (and a corresponding explanation algorithm) has been deployed.
Additional work distinguishing what approaches or metrics might be comparatively useful in either situation is warranted; in particular, all of these proposed metrics require a comparatively high amount of information and access to the black-box, and may be better suited to the first task (evaluating algorithms in general) than to the second (auditing individual explanations). ### 2.2 Generating Fairness-Preserving Explanations As discussed above, algorithms for finding explanations typically focus on optimizing for metrics such as fidelity and sensitivity. One approach for generating a fairness-preserving explanation can be similar to early approaches to fair machine learning algorithms: adding a penalty term in the objective function for the extent to which the explanation is fairness-preserving (Kamishima et al., 2012; Zafar et al., 2017). For example, the original LIME objective function (Ribeiro et al., 2016) is as follows: $\xi(x)=\arg\min_{g\in G}\mathcal{L}(f,g,\pi_{x})+\Omega(g)$ where $\xi(x)$ is the optimal explanation for input $x$ to model $f$, $G$ is the class of sparse linear models, $\mathcal{L}$ is a measure of fidelity, $\pi_{x}$ is a local region around $x$, and $\Omega$ is a measure of complexity. A modified objective function, including a term such as $\psi(f,g)$ that measures the preservation of the fairness properties described in Section 2.1, fits naturally: $\xi_{\textit{fair}}(x)=\arg\min_{g\in G}\mathcal{L}(f,g,\pi_{x})+\lambda_{1}\Omega(g)+\lambda_{2}\psi(f,g)$ Here, $\lambda_{1}$ and $\lambda_{2}$ are tuning parameters for the complexity term $\Omega$ and the fairness-preservation term $\psi$, respectively. Figure 2 illustrates the results of using this modified, fairness-preserving objective function when finding the explanation $\xi$. Here, the dataset used was COMPAS; the black-box was a three-layer deep neural net; and $\psi$ is derived from the group fairness equation in Section 2.1. Specifically, $\psi=|DP(f(x))-DP(E_{f}(x))|$, where $DP$ is the demographic parity metric: $P(Y=1|S=1)-P(Y=1|S=0)$. In this experiment, the number of perturbations used to generate the LIME explanation was varied to show the asymptotic fairness mismatch, as a greater number of perturbations generally results in a higher-certainty explanation. The fairness mismatch plotted on the y-axis is calculated in exactly the same way as the $\psi$ defined above. Our introduction of this approach is meant more as a provocation to start the conversation than as a full-fledged proposal or argument that this method is necessarily ideal; however, the results are promising and warrant further investigation in this direction. Figure 2: Original vs fairness-preserving LIME algorithms: number of perturbations used for LIME vs average fairness mismatch over explanations for all points in the dataset. ## 3 Discussion & Conclusion In this work, we have given some intuition and preliminary results as to why it is important to probe the fairness of explanations: not just because of the often high-stakes and consequential goals for which explanations are used, but because existing explanation methods focusing on metrics like fidelity may result in misleading and incorrect explanations even in the absence of an adversarial actor constructing explicitly discriminatory black-boxes, or designing explanation methods that explicitly hide discrimination. Furthermore, fairness can also be viewed as a specific lens on performance for the model overall.
In fact, the phenomenon illustrated in Figure 1 can be considered to be a performance issue—strictly incorrect decision boundaries, though the minority group’s decision boundary is much more incorrect—that can be detected by testing for fairness mismatch as proposed in Section 2. Of course, the exact behavior in this scenario may be the consequence of LIME’s choice to focus on sparse linear models, and choosing a more complex interpretable model class (such as shallow decision trees) may alleviate the issue. Nevertheless, an action like this is only made possible by the first step of diagnosing the fairness mismatch between the black-box and the explanation’s surrogate model. Thinking about explanations in this context, therefore, also raises broader questions about the extent to which explanations are in fact capturing what we want; or, alternatively, ways in which the limitations of particular explanations or explanation methods may be communicated clearly to stakeholders and end-users. In this work, we suggested a framework for evaluating the fairness-preserving properties of explanations, and proposed one generic approach for producing fairness-preserving explanations. However, this extended abstract is also meant to argue for the consideration of evaluation metrics for explanations more broadly: while fairness was the first angle we considered, there are undoubtedly additional necessary properties of the model—even privacy, for example—that explanations should preserve. ## Acknowledgements We would like to thank the anonymous reviewers for their insightful feedback. This work is supported in part by the NSF award #IIS-2008461, and Google. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. ## References * Aïvodji et al. (2019) Aïvodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., and Tapp, A. Fairwashing: the risk of rationalization. In _International Conference on Machine Learning_ , pp. 161–170. PMLR, 2019. * Bhatt et al. (2020a) Bhatt, U., Andrus, M., Weller, A., and Xiang, A. Machine learning explainability for external stakeholders. _arXiv preprint arXiv:2007.05408_ , 2020a. * Bhatt et al. (2020b) Bhatt, U., Weller, A., and Moura, J. M. Evaluating and aggregating feature-based model explanations. _arXiv preprint arXiv:2005.00631_ , 2020b. * Binns (2020) Binns, R. On the apparent conflict between individual and group fairness. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_ , pp. 514–524, 2020. * Corbett-Davies & Goel (2018) Corbett-Davies, S. and Goel, S. The measure and mismeasure of fairness: A critical review of fair machine learning. _arXiv preprint arXiv:1808.00023_ , 2018. * Dwork et al. (2012) Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. Fairness through awareness. In _Proceedings of the 3rd innovations in theoretical computer science conference_ , pp. 214–226, 2012. * Hancox-Li & Kumar (2021) Hancox-Li, L. and Kumar, I. E. Epistemic values in feature importance methods: Lessons from feminist epistemology. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_ , pp. 817–826, 2021. * Hardt et al. (2016) Hardt, M., Price, E., and Srebro, N. Equality of opportunity in supervised learning. _arXiv preprint arXiv:1610.02413_ , 2016. * Kamishima et al. (2012) Kamishima, T., Akaho, S., Asoh, H., and Sakuma, J. Fairness-aware classifier with prejudice remover regularizer. 
In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_ , pp. 35–50. Springer, 2012. * Kumar et al. (2020) Kumar, I. E., Venkatasubramanian, S., Scheidegger, C., and Friedler, S. Problems with shapley-value-based explanations as feature importance measures. In _International Conference on Machine Learning_ , pp. 5491–5500. PMLR, 2020. * Lahoti et al. (2019) Lahoti, P., Gummadi, K. P., and Weikum, G. ifair: Learning individually fair data representations for algorithmic decision making. In _2019 IEEE 35th International Conference on Data Engineering (ICDE)_ , pp. 1334–1345. IEEE, 2019. * Lakkaraju & Bastani (2020) Lakkaraju, H. and Bastani, O. “how do i fool you?” manipulating user trust via misleading black box explanations. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_ , pp. 79–85, 2020. * Lundberg & Lee (2017) Lundberg, S. and Lee, S.-I. A unified approach to interpreting model predictions. _arXiv preprint arXiv:1705.07874_ , 2017. * Ribeiro et al. (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. “why should i trust you?” explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_ , pp. 1135–1144, 2016. * Slack et al. (2020a) Slack, D., Hilgard, S., Jia, E., Singh, S., and Lakkaraju, H. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_ , pp. 180–186, 2020a. * Slack et al. (2020b) Slack, D., Hilgard, S., Singh, S., and Lakkaraju, H. How much should i trust you? modeling uncertainty of black box explanations. _arXiv preprint arXiv:2008.05030_ , 2020b. * Suresh et al. (2021) Suresh, H., Gomez, S. R., Nam, K. K., and Satyanarayan, A. Beyond expertise and roles: A framework to characterize the stakeholders of interpretable machine learning and their needs. In _Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems_ , pp. 1–16, 2021. * Verma & Rubin (2018) Verma, S. and Rubin, J. Fairness definitions explained. In _2018 ieee/acm international workshop on software fairness (fairware)_ , pp. 1–7. IEEE, 2018. * (19) Yeh, C.-K., Hsieh, C.-Y., Suggala, A. S., Inouye, D. I., and Ravikumar, P. On the (in) fidelity and sensitivity of explanations. * Zafar et al. (2017) Zafar, M. B., Valera, I., Rogriguez, M. G., and Gummadi, K. P. Fairness constraints: Mechanisms for fair classification. In _Artificial Intelligence and Statistics_ , pp. 962–970. PMLR, 2017. * Zhang et al. (2019) Zhang, Y., Song, K., Sun, Y., Tan, S., and Udell, M. ” why should you trust my explanation?” understanding uncertainty in lime explanations. _arXiv preprint arXiv:1904.12991_ , 2019.
# Predicting Customer Lifetime Value in Free-to-Play Games

Paolo Burelli

IT University of Copenhagen - Tactile Games

<EMAIL_ADDRESS>

###### Abstract

As game companies increasingly embrace a service-oriented business model, the need for predictive models of players becomes more pressing. Multiple activities, such as user acquisition, live game operations or game design, need to be supported with information about the choices made by the players and the choices they could make in the future. This is especially true in the context of free-to-play games, where the absence of a paywall and the erratic nature of the players’ playing and spending behavior make predictions about revenue and the allocation of budget and resources extremely challenging. In this chapter we will present an overview of customer lifetime value modeling across different fields, introduce the challenges specific to free-to-play games across different platforms and genres, and discuss state-of-the-art solutions with practical examples and references to existing implementations.

## 1 Introduction

Customer lifetime value (CLV or LTV) refers broadly to the revenue that a company can attribute to one or more customers over the length of their relationship with the company [55]. The process of predicting the lifetime value consists in producing one or more monetary values that correspond to the sum of all the different types of revenues that a specific customer, or a specific cohort, will generate in the future. The purposes of this prediction are manifold: for example, having an early estimation of a customer’s potential value allows more accurate budgeting for future investment; moreover, monitoring the remaining potential revenue from an established customer could permit preemptive actions in case of decreased engagement. Predicting customer lifetime value is a complex challenge and, to date, there is no single established practice. Furthermore, due to its wide potential impact on different business aspects, the problem is being researched in different communities using a plethora of different techniques, varying from parametric statistical models to deep learning [28, 70]. One of the initial primary drivers of research in CLV was direct marketing [4], with a focus on its use in marketing decision problems, such as acquisition or the trade-off between acquisition and retention costs [6]. Research expanded progressively to other fields of marketing and customer relationship management, especially thanks to the pervasiveness of digital technology and the possibility of more accurate tracking of the customer relationship [37]. The increased quality and quantity of customer data available for analysis widened the areas of application of CLV-related principles, and helped accelerate the development of new models for CLV prediction. Initial analytical models were developed on assumptions about constant and uniform characteristics of the user behavior (e.g. transaction frequency, margin of profit). Over time, models have evolved to take into account uncertainty and variations of user behavior [63] and to be able to draw information from a wide spectrum of variables [69]. Models have traditionally been based on transactional data, meaning that the variables describing the user behavior are based on different characteristics of the customers’ transactions with the company (e.g. recency, frequency, monetary (RFM) [63]).
However, with the advent of e-commerce and similar technologies, the available information about the customer behavior became increasingly richer, since it started to be possible to track not only the purchases, but also which objects were observed, how often the customer would visit the store and many other non-strictly transactional details. One of the application areas with probably the richest amount of user data is video games; however, until recently, due to its distribution and revenue models, the need for customer lifetime value prediction was relatively minor. Traditionally, computer game distribution has relied on a premium pricing model, in which a game is developed, released and sold for a price. Following this pricing scheme, the monetization of the customer happens before the player starts playing the game, and their post-purchase behavior does not affect the customer lifetime value directly [3]. Two main aspects radically changed the relationship between the game developers and the players/customers: digital distribution platforms and the free-to-play business model. Digital distribution platforms, such as Steam (Valve Corporation, https://store.steampowered.com, 2018) or the App Store (Apple Incorporated, https://www.apple.com/lae/ios/app-store, 2018), along with game developers’ own on-line services, allow tracking of user behavior beyond a single game, making it more meaningful and necessary to predict the customer lifetime value across multiple games, either purchased on the same store or linked to the same on-line account. Free-to-play (F2P/Freemium) games follow a different revenue model than the traditional one: games are freely available, at no cost, to the players, and the revenue comes from in-game advertisement and the sale of in-game items. These two revenue streams, contrary to the classic game revenue model, begin only after the players/customers have been acquired and vary dramatically between players. This kind of relationship between the company and the players resembles in part the relationship between a store and its customer, e.g. monetary transactions happen at irregular intervals, the transaction value is not constant and there is no explicit customer churn event. However, differently from physical or on-line stores, the users are not only customers but also players, and the data available about their behavior includes a wide range of features that goes beyond transactions or goods browsing, such as players’ progression within the game or players’ skill level. Within the field of game analytics, the analysis of these features plays a major role in evaluating the quality and the potential success of a game [19]; furthermore, the behavior has been linked with major business-related metrics such as retention [53] or customer lifetime value [70]. This abundance of accurate and high-frequency data, covering different aspects of the customer experience, makes free-to-play games an ideal application area for research in user modeling and prediction of future customer behavior. In turn, the development of such models would see an immediate application in the industry, as they would allow a better optimization of important processes such as user acquisition and customer relationship management [45]. This, as well as the ever-expanding share of the game industry that leverages this business model, highlights customer lifetime value prediction as a key challenge in games data science and games research.
For these reasons, in this chapter, we will make an attempt to give an overview of the research and applications of CLV modeling, including an in-depth analysis of the works that focus on the F2P field. In the next section, we will outline a number of foundational concepts that will serve as a basis for the rest of the chapter. In section 3, we will describe different ways in which CLV can be used inside and outside of the game industry. In section 4, we will give an overview of different models used to predict CLV with a more in-depth focus on models employed in the game industry. In section 5, we will describe a number of software packages that can be used to perform CLV prediction, and finally, in section 6, we will outline a number of current and future directions within the field.

## 2 Definitions and terminology

Pfeifer et al. [55], in their 2004 article, describe a problem present across the different research works on customer lifetime value: incoherent terminology. The reasons behind this problem are probably manifold, e.g. the different research communities; however, in this section we will attempt to synthesize a number of concepts and terms that are common across the different fields involved and are foundational to the study of customer lifetime value prediction in games and beyond. First of all, it is important to define what customer lifetime value is; Pfeifer et al. [55] summarize it as: “the present value of the future cash flows attributed to the customer relationship”. As the authors point out, this definition makes a couple of important assumptions: first, the value is associated with the cash flow, which means it includes not only the inbound flow of revenue generated by a customer but also the costs attributed to that customer; however, the costs considered in this definition are only the ones directly attributable to a given customer. Second, the term value expresses a valuation at the current point in time of some future revenues and costs; this implies some form of discounting based on the time at which the future cash flows are predicted. In their definition, Pfeifer et al. do not attempt to encompass all possible definitions of CLV within the literature; instead, they propose one possible explicit definition of CLV that is terminologically correct and coherent. This means that other research works on predicting CLV do not necessarily adhere to the aforementioned definition. Berger and Nasr [4], for instance, do not consider acquisition cost as part of the lifetime value, considering instead CLV to be the “maximum profitable acquisition cost” [37]. Sifa et al. [69], on the other hand, do not take into consideration the present value and use no factor to discount the future predicted revenue. One further aspect of this definition that is not shared by the different research works is the time frame considered for the calculation. Gupta et al. [28], in their review, find that a number of researchers employ an arbitrary time horizon for the future prediction [56], while others use an infinite horizon [25]. While it might appear that employing a fixed time horizon is just a way to simplify the modeling process, both approaches have their own merits.
Using an infinite horizon permits avoiding any assumption about the maximum length of the customer relationship with the company and therefore allows a more accurate prediction of Pfeifer’s definition of CLV; however, for budgeting and other purposes, it is often important to know the return on investment within a given period of time (e.g. 1 year), making a fixed-horizon CLV prediction extremely useful. Another consideration relative to the temporal aspect of the customer relationship that has to be taken into account when studying CLV prediction is whether the relationship is exclusive or whether it allows the customer to use competitors’ services; these two different types of relationships are often labeled as either lost-for-good or always-a-share [20]. Identifying which type of relationship is being modeled for CLV prediction is extremely important as it affects the temporal length of the relationship and the definition of churn (i.e. when a customer stops using the service). This is especially true in free-to-play games, where there is no visible churn event; in this context, the definition of the user state at any given moment (active or inactive) depends on an ad-hoc formula based on some synthesis of the players’ actions [61]. Many CLV prediction models include or are built on top of a churn prediction model, and both the formula and the choice of model will depend on whether it is possible for the customer to return after a period of inactivity [54] or whether his or her relationship with the company is considered finished [27]. Finally, it is important to specify that none of the works cited and described in this chapter consider the effects of competition with other services on the customer lifetime value. As pointed out by Gupta et al. [28], this is the case for most current modeling approaches because of the lack of data about the competitors.

## 3 Applications of CLV prediction

While marketing serves as the primary application area of customer lifetime value prediction, a number of other activities, such as customer relationship management or live game operations, are being increasingly driven by data and different key performance indicators (KPIs) such as CLV. Schmittlein et al. [63], in one of the earliest works on customer lifetime value estimation, motivate their research by stating that:

> “…the issue is important in at least three settings: monitoring the size and growth rate of a firm’s ongoing customer base, evaluating a new product’s success based on the pattern of trial and repeat purchases, and targeting a subgroup of customers for advertising and promotions.”

They envision that, by knowing which customers are active and what purchases they make, a manager is able to do more accurate budgeting and product evaluation and to target the customers more accurately with re-engagement initiatives. Similarly to Schmittlein et al., a number of other research works have tackled the above problems; Table 1 gives an overview of the main activities for which CLV prediction has been employed. Each application area includes a number of different activities (e.g. special offers or advertisement) united by a similar usage of CLV prediction. In the case of budgeting and finance, CLV prediction is used to predict revenue and plan budgets for the different activities within the company [63, 18], to find the balance between acquisition and retention investments [6] or to make a financial valuation of the company [29].
The second activity, Product Development, is mentioned only by Schmittlein et al. [63]; however, as we will describe later in this section, CLV is an important KPI in the evaluation of game quality in F2P games and it can be very useful to drive the game design process.

### 3.1 Customer Acquisition

Customer Acquisition encompasses all activities aimed at getting new customers to start using the service or buying the products offered. As mentioned by Berger et al. [4, 5], having an estimation of the customers’ lifetime value can help decide more accurately how much to spend on a given promotional campaign and how much margin for profitability there could be in a new market segment; Dwyer [20] further develops this concept by identifying CLV as the ceiling of the customer acquisition spend. Having an early indication of whether the costs of acquisition are matched by the CLVs can help decide whether a certain promotional initiative should be continued or stopped [45, 69, 70].

### 3.2 Customer Retention

Customer retention and segmentation initiatives are often intertwined and not completely distinguishable; in this article, we describe customer retention/loyalty activities as any activity explicitly aimed at prolonging the lifetime of a customer, while, with customer targeting/segmentation, we identify all activities aimed at identifying different homogeneous groups within the customer base that can be targeted with a custom user experience. Such a customer experience could be aimed at prolonging the customer lifetime; however, for the purpose of this categorization, we define this as a secondary objective.

| Application area | Articles |
|---|---|
| Budgeting and Finance | [63, 6, 18, 29] |
| Product Development | [63] |
| Customer Acquisition | [20, 4, 5, 45, 69, 70] |
| Customer Retention/Loyalty | [4, 50, 56, 44, 59, 58, 48, 67, 11, 61, 70] |
| Customer Targeting/Segmentation | [50, 56, 75, 74, 35, 30, 67, 42, 40] |
| Other | [1] |

Table 1: Applications of CLV prediction in different business activities

Latchowetz et al., in their study on the impact of season ticket holders on the NBA franchises’ revenue [44], argue that a customer should not be evaluated solely on the revenue that she or he generates within a season. At the time of their estimation, an average season ticket holder had a lifetime value of more than eighty thousand US dollars; therefore, the authors advise the entertainment industry to acknowledge the importance of using customer lifetime value and developing long-term retention strategies for their customers. Berger and Nasr [4] discuss how knowing the customer lifetime value can help determine the effects of adopting different marketing strategies for retention and acquisition and how to balance the spending between the two activities. The authors also point out that any acquisition strategy might also impact the customers’ retention; therefore, it is hard to see the two activities independently, and both their budgets must be aligned with the predicted customer lifetime value. Mulhern further extends Berger and Nasr’s idea about the strategic use of CLV prediction by seeing it as a primary indicator for driving resource allocation in the marketing mix [50]. Rosset et al. [59, 58] extend the idea of using CLV prediction to drive retention activities and expand it by investigating how to estimate the impact of retention efforts on the CLV. They demonstrate an application of their approach in the context of a retention campaign for a group of potential churners.
They show how, by having a model able to predict CLV in different configurations of the service, it is possible to compare the current estimated lifetime value of a group of potential churners with different projected lifetime values given a number of different incentives to improve the cohort’s retention, and thus to take a more informed decision on the correct initiative. The predictions used for these decisions, like any other prediction of future events, have some degree of uncertainty in the form of prediction error or bias; taking this uncertainty into account can also be important in decision making. Malthouse and Blattberg [48] study the impact of uncertainty on the management of the customer relationship, helping to understand not only whether engaging in a retention activity can be profitable but also how often it could be.

### 3.3 Customer Segmentation

Most of the research works presented in this section make the assumption that a higher retention (i.e. a longer lifetime for the customers) will have a positive impact on the company’s revenue in the form of an increased CLV; however, while this is generally true, Reinartz and Kumar [56] found that this assumption does not always hold and that different customer segments have different spending patterns, which means that, in a number of cases, prolonging the customer’s lifetime would not yield any more revenue. For this reason, they suggest that the company should not necessarily pursue long-term customer relationships, but should instead customize the relationship based on predictions of future customer lifetime value and retention. Segmenting the target audience and personalizing marketing activities is an established practice [17]; however, the predominant guiding factors used to define the segments and the most appropriate actions have, for a long time, been based on demographics, questionnaires and interviews [47]. The advent of large-scale data collection and analytics, in many industries, has dramatically changed many of the common practices in marketing research, allowing practitioners to access more detailed information from wider audiences at lower costs. Mulhern [50], for example, proposes customer lifetime value as a possible segmentation criterion besides usage volume and brand loyalty: he suggests that one simple segmentation could divide the users into customers worth keeping, as their CLV is profitable, and customers that are not worth the retention cost. Verhoef and Donkers [75] propose a similar segmentation to design personalized insurance offers. Venkatesan and Kumar [74] pose the question of whether using predicted CLV for segmentation can yield better results than more established KPIs such as previous-period customer revenue or past customer value. To compare the effectiveness of the different metrics, they rank the customers based on each metric and segment the customers based on the ranking; the results show the segmentation based on CLV to be superior to the other segmentation approaches. Similar strategies to the ones mentioned so far have been applied to a number of different industries such as telecommunications [35], banking [30], retailing [67, 42] or beauty [40]. In the last few years, a similar trend has emerged in the gaming industry and, especially in the mobile game and free-to-play industry, all the aforementioned marketing practices are established and widely undertaken [66].
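To make the segmentation use case concrete, the following is a minimal sketch in the spirit of the comparison by Venkatesan and Kumar [74]: customers are ranked by predicted CLV and by previous-period revenue, and for each ranking we report how much of the realized future value the top segment captures. The data frame and all of its values are hypothetical and serve only as an illustration, not as a reproduction of the original study.

```python
import pandas as pd

# Hypothetical customer table: a predicted CLV, a simpler ranking metric
# (previous-period revenue) and the value each customer actually went on to generate.
customers = pd.DataFrame({
    "predicted_clv":           [120.0, 5.0, 60.0, 0.0, 300.0, 15.0, 45.0, 2.0],
    "previous_period_revenue": [10.0, 8.0, 0.0, 0.0, 40.0, 20.0, 5.0, 1.0],
    "realized_future_value":   [110.0, 3.0, 70.0, 0.0, 280.0, 10.0, 50.0, 1.0],
})

def value_captured_by_top_segment(df, metric, top_fraction=0.25):
    """Rank customers by `metric`, keep the top fraction, and report the share
    of total realized future value that this segment accounts for."""
    cutoff = max(1, int(len(df) * top_fraction))
    top = df.sort_values(metric, ascending=False).head(cutoff)
    return top["realized_future_value"].sum() / df["realized_future_value"].sum()

for metric in ["predicted_clv", "previous_period_revenue"]:
    print(metric, value_captured_by_top_segment(customers, metric))
```

On data of this kind, a ranking driven by predicted CLV will typically concentrate more of the future value in the top segment than a ranking driven by past revenue alone, which is the effect the studies above report.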
### 3.4 Free-To-Play Games

Figure 1: Three screen grabs from the iOS version of Bee Brilliant Blast by Tactile Games depicting, from left to right, the phone’s store interface, a typical puzzle level and the in-game store interface, in which the customer can purchase different amounts of in-game currency.

Free-To-Play video games are games that can be played without an upfront payment and potentially completely free of charge; players can, within the game, purchase some virtual goods and pay to enable some features. While this type of game was originally mostly distributed through on-line social media platforms, the business model is now widely diffused across many different platforms, such as mobile phones, home consoles and personal computers [2]. Figure 1 shows an example of a mobile free-to-play game; as it is possible to see in the first screen grab to the left, the game is freely available on the phone’s store; however, the store specifies that the game offers in-app purchases. The last image to the right shows the in-game store that allows the customer to purchase different packs of in-game currency, which can be used throughout the game to unlock different features. Using an intermediary currency, while a very common practice, is not a defining characteristic of free-to-play games, and some games allow customers to enable in-game features directly using real currency. The second main source of revenue in free-to-play games comes from advertisement: a large portion of free-to-play games display ads to the players in different formats (e.g. videos or interstitials). These ads are provided by external services that act as mediators between the advertiser and the publisher. The game (i.e. the publisher) receives ads from the ad provider through an API and shows them to the player; the ad provider will in return pay the game developer depending on the number of ads displayed, the number of times the ads are clicked or the number of times the application being advertised is installed. Both aforementioned revenue streams – in-app purchases and advertisement – have a non-contractual nature, as the player can freely choose if and when to make a purchase or click on an advertisement; furthermore, a conversion rate (i.e. the percentage of players making an in-app purchase) well below 10% is considered normal in the industry [51], making it especially challenging to predict the customers’ lifetime value. At the same time, activities such as customer acquisition and retention rely on accurate CLV predictions for targeting and budgeting. As of November 2018, there are around 816,000 games available on the Apple App Store alone (https://www.pocketgamer.biz/metrics/app-store/categories/); therefore, targeting the right customers and accurately measuring their potential impact on the company’s revenue is particularly important in customer acquisition in the free-to-play games market, given the extremely high number of games competing for the same users. A significant part of the customer acquisition efforts in the industry is executed through performance marketing campaigns: in these campaigns, the advertiser competes with other advertisers to show ads to potential new customers, oftentimes through some form of auctioning system.
Customer lifetime value predictions can be used as a benchmark to evaluate the profit margin of these campaigns, as both the potential revenue coming from the new customers and their cost of acquisition vary greatly depending on the segmentation chosen for the campaign, the market context and the quality of the advertisement creative content [70]. Even beyond performance marketing, an accurate CLV prediction is very important to be able to budget expensive and higher-risk marketing efforts such as television advertisement. Furthermore, the F2P games market is characterized by very low retention numbers – e.g. the best Android games have a day-30 retention rate below 5% (https://www.forbes.com/sites/johnkoetsier/2017/10/20/the-very-best-android-game-has-just-4-5-user-retention-after-30-days/) – making any effort to improve such low numbers a priority for most companies. Within this context, CLV prediction can be used, as previously described, to target different segments of users with special offers; however, it is also possible to personalize the game experience at large by adapting, for instance, the flow of the game progression or by providing custom events. Harrison and Roberts show an example of such adaptation aimed at improving retention in a digital version of Scrabble [32]. Given the importance of CLV prediction in the free-to-play and other industries, a number of algorithms have been developed over time. In the next two sections, we will describe the most significant ones and analyze a number of software packages that can be used for CLV prediction.

## 4 Models

Berger and Nasr were the first researchers to attempt a categorization of CLV prediction models [4]. In this early review of the field, all methods mentioned are aimed at calculating an average customer lifetime value for the entirety of the company’s customer base or a given cohort. In a later review [43], Kumar et al. group these approaches under the label “average LTV approach”, while Jain and Singh [37] use the term “basic structural model”. In this chapter we adopt the definition used by Kumar et al. and we include a number of other practices commonly used in the free-to-play industry that approach LTV prediction in a similar fashion. The methods for CLV prediction discussed in this article are divided into two further categories: customer history models and machine learning models. The first category contains all methods that attempt to predict either a single customer’s or a cohort’s lifetime value based on some mathematical projection of the past behavior of the given customer or cohort. The second category includes all algorithms which make use of some learning algorithm to build a computational model of the customer behavior, which is then used to make predictions.

### 4.1 Average Models

The aim of the models discussed in this section is to give an estimation of the future discounted cash flow for a group of customers or for the whole user base. The common aspect of these models is that they make a number of assumptions: they assume a constant (or otherwise known) margin of profit over time, they do not consider the stochastic nature of the purchase behavior of the customers and they assume that the customer behavior is uniform across the estimated cohort.
The most basic model proposed by Berger and Nasr [4] assumes that the customer retention rate and the costs of retention are constant over time and that both costs and revenues happen periodically at a constant rate; under these conditions, the formula for CLV is as follows:

$CLV=GC\cdot\sum_{i=0}^{n}\frac{r^{i}}{(1+d)^{i}}-M\cdot\sum_{i=0}^{n}\frac{r^{i-1}}{(1+d)^{i-0.5}}$ (1)

where GC is the customer’s expected gross contribution margin per period, M is the promotion cost per customer per period, n is the prediction horizon expressed in number of periods, r is the retention rate between one period and the next, and d is the discount rate. Further iterations of this model, presented by Berger and Nasr [4], by Jain and Singh [37] and by Kumar et al. [43], allow more sophisticated ways to express non-constant retention rates and margins of profit and to include acquisition costs in the calculation. In their article about the relationship between the length of service and CLV, Rosset et al. [58] formulate an extended average model that employs a Kaplan-Meier [39] estimator to calculate the duration of the relationship between the company and the average customer. A similar approach is currently among the basic models used in the free-to-play industry for CLV prediction [65]: after selecting a specific function to express the retention curve of a given cohort, the function is fit on the training data and the resulting customer lifetime value is calculated as follows:

$CLV=\sum_{i=0}^{n}ARPDAU\cdot ret(i)$ (2)

where $ARPDAU$ stands for average revenue per daily active user and $ret(i)$ is the value of the chosen retention function at the $i^{th}$ day. This method for CLV prediction, while quite simplistic and not necessarily extremely accurate, has the advantage of being easily readable and based on established business KPIs such as ARPDAU and retention. Another example of a simplistic but widely adopted model is described by Runge [60]: instead of basing the projected revenue on retention and ARPDAU, the method fits a function to approximate the average monetization curve, which is then used to project the customer lifetime value for a player based on the amount of money she has spent up to the present day. If we consider $rev(n)$ as the revenue produced by a customer up to the $n^{th}$ day of the customer relationship and $mon(n)$ as the average fraction of the revenue that is produced by day $n$, the formula for CLV prediction is as follows:

$CLV=rev(n)/mon(n)$ (3)

The major advantage of this model is its simplicity and, similarly to the retention-based model, its readability. However, with such a model it is not possible to predict the CLV for customers that have not yet produced any revenue – i.e. the projection would always result in 0; moreover, in a context such as free-to-play games, in which the number of paying users is very small, the variability of the behavior between different paying users has a large impact on the prediction, and such variability is completely disregarded by the model.

### 4.2 Customer History Models

The aforementioned methods based on retention and monetization curves are partly based on historical customer data as, in both methods, a function representing the average retention or monetization behavior is fit on some past customer data and then used for prediction. These methods, however, do not allow the use of any particular user’s track record to personalize the prediction, hence the average nature of the methods.
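Before turning to history-based models, the following minimal sketch makes the average models of Section 4.1 concrete by implementing the retention-based estimate of Equation 2 and the monetization-curve projection of Equation 3. The retention figures, the power-law choice for $ret(i)$, the ARPDAU value and the monetization fraction are all hypothetical, and the sum starts at day 1 to avoid the power-law singularity at day 0.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical day-i retention observations for a cohort
# (fraction of installed players still active on day i).
days = np.array([1, 3, 7, 14, 30])
retention = np.array([0.40, 0.25, 0.15, 0.10, 0.06])

def ret(i, a, b):
    # A power-law curve, one common parametric choice for ret(i).
    return a * np.power(i, -b)

params, _ = curve_fit(ret, days, retention, p0=[0.5, 0.5])

def retention_clv(arpdau, horizon_days):
    """Equation 2: CLV = sum over days of ARPDAU * ret(i), summed from day 1."""
    i = np.arange(1, horizon_days + 1)
    return float(arpdau * ret(i, *params).sum())

def monetization_clv(revenue_to_date, day, mon):
    """Equation 3: CLV = rev(day) / mon(day), where mon(day) is the average
    fraction of lifetime revenue generated by day `day`."""
    return revenue_to_date / mon(day)

# Hypothetical usage: ARPDAU of $0.05 over a 180-day horizon, and a player who
# has spent $4 by day 7 when the average monetization fraction at day 7 is 0.2.
print(retention_clv(arpdau=0.05, horizon_days=180))
print(monetization_clv(revenue_to_date=4.0, day=7, mon=lambda d: 0.2))
```

In practice, the parametric form of the retention curve and the length of the horizon are business choices, which is part of what makes these models easy to read but coarse.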
Based on the customer categorization by Jackson [36], which divides the customers into two types, always-a-share and lost-for-good, Dwyer [20] argues that an average retention model is not sufficient to model CLV for customers belonging to the always-a-share category. For this type of customer, he proposed a migration model based on the customers’ recency of purchase. Always-a-share customers differ from lost-for-good customers in that the latter have a long-term commitment with the company and switching to a competitor is costly, while the former can have simultaneous relationships with multiple companies competing for the same product/service. Typical examples of lost-for-good customers are bank customers or tenants in a rental property. Whether free-to-play game players can be considered as belonging to the first or second category is an open question, as there is no monetary barrier that stops a player from switching to another game; however, to a certain extent, the longer the time a player invests in a game, the less likely it is that the player will stop playing that game. In his customer migration model, Dwyer uses purchase recency to estimate the customer’s probability of performing a purchase in the next period and the potential value of such a purchase. In the training phase, based on past customers’ data, the method builds a model of purchase probabilities and values that depend on the length of the period of inactivity. These recency cells are then used as a lookup table to predict, depending on each customer’s past purchase behavior data, her purchases in the next time unit. Dwyer’s model makes it possible to leverage the customer’s past behavior for a personalized CLV prediction and considers the probabilistic nature of the customers’ purchase behavior.

#### 4.2.1 Recency, Frequency and Monetary Value

The main limitation of Dwyer’s method is that the probability of a purchase in a given period is based only on the recency of the last purchase, which is not necessarily appropriate for all businesses. Especially in free-to-play games, different types of purchases will affect future purchases differently; for example, a player who recently purchased a large package of virtual currency might be less likely to make a purchase, as the amount of resources acquired will allow her to play longer without needing any help within the game. The concept of recency used by Dwyer is an established metric in direct marketing and, alongside purchase frequency and average purchase monetary value – commonly known together as RFM – has long been used in the field to predict customer behavior [28]. Hughes describes a method for customer quality estimation based on these three variables: the current customer base of the company is sorted along these three variables, each divided into 5 quintiles, therefore creating $5*5*5$ groups; these groups are then used to score the different customers and target them with specific offers [34]. Inspired by Hughes’s work, Shih and Liu propose a method based on RFM and CLV clustering to rank the customers according to their profitability [68]. As a first step, the method relies on a group of experts’ evaluations to identify the relative importance of the recency, frequency and monetary variables using analytical hierarchical processing. The customers are then clustered in the RFM space and the resulting clusters are ranked through a simple weighted sum of the three normalized variables.
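A minimal sketch of the quintile-based RFM scoring just described is given below. The customer table, the rank-based quintile scoring and the weights are hypothetical; in particular, the weights stand in for the expert-derived importances that Shih and Liu obtain through analytical hierarchical processing, and the score is a simple weighted sum rather than the full clustering-and-ranking pipeline.

```python
import pandas as pd

# Hypothetical per-customer RFM summary.
df = pd.DataFrame({
    "customer_id":  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "recency_days": [5, 40, 12, 90, 3, 60, 25, 7, 120, 15],
    "frequency":    [12, 2, 6, 1, 20, 3, 4, 9, 1, 5],
    "monetary":     [300, 20, 90, 10, 640, 35, 50, 180, 8, 70],
})

def quintile_score(series, higher_is_better=True):
    # Rank-based quintile score from 1 (worst) to 5 (best).
    ranks = series.rank(method="first", ascending=higher_is_better)
    return pd.qcut(ranks, 5, labels=False) + 1

df["R"] = quintile_score(df["recency_days"], higher_is_better=False)  # recent buyers score high
df["F"] = quintile_score(df["frequency"])
df["M"] = quintile_score(df["monetary"])

# Hypothetical weights standing in for the expert-derived importances.
weights = {"R": 0.2, "F": 0.3, "M": 0.5}
df["rfm_score"] = sum(w * df[c] for c, w in weights.items())

print(df.sort_values("rfm_score", ascending=False)[["customer_id", "R", "F", "M", "rfm_score"]])
```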
The aforementioned methods are not designed to produce a numerical predictor of CLV but only to score the customers by their potential profitability. Furthermore, these methods are limited to predicting the customer behavior only for one future time period [26]. Finally, these methods disregard the fact that customers’ past behavior is, oftentimes, the result of past company activities. Fader et al. describe a model for CLV prediction, using RFM as input variables and based on the Pareto/NBD framework [63], which overcomes some of the aforementioned limitations [26].

#### 4.2.2 Pareto/NBD

The Pareto/NBD model [63] aims at predicting an individual customer’s purchase behavior based on their past purchase patterns. More specifically, for each customer, based on the recency ($t$) and frequency ($X$) of the purchases and the length of the relationship between the customer and the company ($T$), the model estimates the expected future number of purchases. The model is based on five assumptions:

* while active, a customer makes purchases following a Poisson process with rate $\lambda$;
* the purchase rate $\lambda$ differs between customers and is distributed according to a gamma distribution across the customer base;
* the duration of the active period of the customer is exponentially distributed with a death rate $\mu$;
* the death rate $\mu$ differs between customers and is distributed according to a gamma distribution across the customer base;
* the death rate and the purchase rate are independent.

Based on these assumptions, the authors find that the customer “deaths” for a sample follow a Pareto distribution of the second kind [38], while the number of purchases made by an active customer follows a negative binomial distribution [22] – hence the name Pareto/NBD. The two distributions are controlled by four parameters ($r$ and $\alpha$ for NBD and $s$ and $\beta$ for Pareto); Schmittlein suggests two approaches to estimate the parameters based on the customers’ past behavior: maximum likelihood and fitting the observed moments; in case a model is needed without prior data, Schmittlein also discusses the possibility of handpicking the parameters based on management judgment. Once the parameters have been identified, each customer’s future number of purchases is predicted using the two aforementioned distributions and the customer’s $X$, $t$ and $T$. One of the main issues with the Pareto/NBD is its computational complexity [25], as the estimation requires multiple evaluations of the Gauss hypergeometric function; this problem is mitigated by the modified BG/NBD model by Fader et al. [25], which models the customer activity using a beta-geometric model that is easier to implement efficiently. Both aforementioned models are able to predict the future number of purchases for a given customer and they can, for instance, be used to count the expected active users at a given point in time; however, these methods do not model in any way the value of each purchase, and therefore they cannot directly predict the customer lifetime value. Reinartz and Kumar, in their studies on profitable lifetime duration [56, 57], employ Schmittlein’s model to calculate the number of time periods in which a customer will perform a purchase. The authors transform the continuous probability that a customer is active into a dichotomous measure of whether the customer is active or inactive at a given point in time.
Given a probability threshold, it is possible to identify a customer’s future date of churn, which, combined with the first date of the customer relationship, gives an expected customer relationship duration. This duration, expressed as a number of periods $n$, is used to calculate that customer’s lifetime value using the formula defined by Berger and Nasr [4] and described in Equation 1. Schmittlein and Peterson [64] propose an extension of the original Pareto/NBD in which the future monetary value of each transaction is sampled from a normal distribution centered around the mean monetary value of a cohort. However, Fader et al. [26] observe that, in the data they analyzed, there are large differences between mean, mode and median, indicating that the distribution of the monetary values is highly skewed. Therefore, the authors propose to model the average transaction value using an adapted version of the gamma-gamma model proposed by Colombo and Jiang [15]. One of the main strengths and, at the same time, the main limitation of the Pareto/NBD and BG/NBD models is that they rely on a small amount of purely transactional data. This is a strength in that it makes the models general enough to be easily applied in different contexts; however, the resulting model will always risk being sub-optimal as it disregards potentially relevant information. Singh et al. address this limitation and propose an estimation framework that can flexibly incorporate multiple statistical distributions and consider a number of covariates such as age or gender [72]. Glady et al. [27] address another important limitation of the aforementioned models: they all assume independence between the frequency of transactions and the profit per transaction. The authors of the study demonstrate that such an assumption does not hold in multiple real-world data sets and propose a modified Pareto/Dependent model that performs better than Pareto/NBD in such circumstances.

### 4.3 Markov Chain Models

An alternative approach to customer lifetime value prediction was first proposed by Pfeifer and Carraway [54]: the authors suggest modeling the customer relationship as a Markov Chain Model (MCM), in which the different states of the model represent different conditions of the relationship between the customer and the company in terms of transactions and customer activity, and the transition probabilities between states represent the probability of a customer moving from one condition to another – e.g. of a customer making a purchase or churning.

Figure 2: Simple customer relationship Markov Chain Model. States r1 to r4 represent different values of recency from 1 to 4.

Figure 2 shows an example of a simple Markov Chain Model representing the possible states of the relationship between a customer and the company and the valid transitions between the states. In the depicted case, the states are based on the recency of the last purchase (e.g. r1 corresponds to recency equal to 1) and there are only 5 valid stages before churn. However, as in the examples reported by Pfeifer and Carraway, in a real-world scenario the states are based on a multitude of factors and there are many more valid transitions, each with its own probability. While Pfeifer and Carraway show how it is possible to calculate CLV using an MCM representation of the customer relationship with a given set of states and transition probabilities, Etzion et al. focus on the process of learning the states and the transitions from past customer data [24].
The process first requires identifying the variables determining the states of the model, defining their ranges and discretizing them. At this point, the transition probabilities between the states can be calculated through the following three steps:

1. initialize a transition matrix – i.e. a square matrix that contains the probability of transitioning between one state and another – with zeros in all cells;
2. for each customer performing a transition between a state $i$ and a state $j$, the matrix cell identified by $ij$ is incremented;
3. at the end of the processing, each line of the matrix is normalized between 0 and 1 using a min-max normalization.

The resulting matrix can be used, as suggested by Pfeifer and Carraway [54], to calculate CLV for a customer in any given state. Ching et al. [12] employ the same approach to build multiple transition matrices that describe the customer behaviors in different market conditions – e.g. with or without a promotion – finding that the transition matrices differ and so do the CLV and the customer retention. The authors further showcase how to use stochastic dynamic programming to estimate the optimal promotion strategy that optimizes CLV given the previously found transition matrices and a set of constraints on the promotion budget.

### 4.4 Supervised Learning Models

The general principle at the core of all supervised learning algorithms is to learn a function between some example input and output data – e.g. past customer behavior and the recorded lifetime value. This example data is usually called training data, and the function resulting from this training phase – the model – is used to predict the target variable – e.g. CLV – on newly collected data. This procedure is not dissimilar to some of the MCM-based approaches previously described [24]; however, the models we present in this section differ from MCM models in that they trade explanatory power for the possibility of capturing more complex behaviors using some form of computational/black-box model. This means that the resulting models are less useful to inform the business about specific customers’ preferences, but they can potentially be more accurate and be deployed, for instance, for marketing automation. It is important to note that this is not a hard distinction, and many models described in this and the previous sections include a mix of explanatory and black-box models. An example of a model incorporating different learning algorithms is the one proposed by Haenlein et al. [30]. The authors describe an approach to CLV prediction that uses a combination of CART analysis [8] and MCM: first, the customers of a retail bank are divided into age groups and, for each age group, a regression tree model is built to predict the profit for a customer within that group; second, the transition probabilities between the groups are modeled as a Markov Chain so that the model is able to follow the transitions of the customers throughout their lifetime. The resulting lifetime value is calculated as a discounted sum of the CLVs predicted in the possible customer states weighted by the transition probabilities. Cheng et al. [11] approach CLV prediction in a similar way but extend the model by building a different Markov Chain for each period of the customer relationship; each chain includes a number of states depending on the activity level of the customers in that specific period.
The total CLV for a customer is similarly calculated by summing the predicted CLVs over the different states and periods weighted by the transition probabilities and the survival probability of each period. The model used to predict the profit contribution of a customer in a given state is an artificial neural network, while the survival probability in each period is calculated using a logistic regression model. Within the context of free-to-play games, supervised learning methods have been employed in a number of studies. Compared to the previously mentioned approaches, supervised learning offers two key advantages that make it particularly interesting within this industry: the ability to leverage a wide variety of data about the customer behavior and the ability to make predictions about CLV for players that are not yet customers – i.e. that have not yet made a single purchase. The first aspect is important because, in the freemium games industry, customers are also players, and the data describing the gaming behavior of players/customers is much richer and at a much higher frequency compared to their purchase behavior data. Furthermore, there are a number of studies that show how past in-game behavior is a predictor of future in-game behavior [46], of player preferences [9] and of the probability of making a purchase within the game [31]. The second aspect is partially related to the first one: since the predictions can also be based on non-purchase-related data, it is potentially possible to perform predictions also for players that have not yet made a purchase and that might never make one. As described in Section 3, most free-to-play players do not make any in-game purchase and generate little to no revenue; it is therefore important for a model that predicts CLV in such a context to be able to discern which customers will be making a purchase before the first one is made. In one of the first studies on the application of supervised learning for CLV prediction in freemium games, Lange et al. present a comparison of the performance of a series of linear and non-linear models in the prediction of CLV [45]. The dataset used for the study contains in-game events from the first 7 days of gameplay of about 38 million players; the input variables include data about player spending, gameplay, game progression, social interactions, success metrics and game settings preferences. The target variable is the amount of revenue generated by each player after 180 days. The results of the study show that an ensemble of two artificial neural networks using the absolute-differences technique out-performed the other methods in terms of mean square error in a 10-fold cross-validated comparison. A similar study is presented by Sifa et al. [69] in their article on purchase behavior prediction in mobile free-to-play games. Similarly to Lange et al., the authors present a comparative study of multiple supervised learning algorithms; however, the objective of the prediction is slightly different, as the authors argue that CLV prediction is a combination of three prediction tasks: predicting whether a player will ever make a purchase, predicting the number of purchases and predicting the value of each purchase. The results of the study show that, for the classification task, the models having the best performance are tree-based models – e.g.
random forest and decision tree classifiers – and that it is extremely valuable to re-sample the dataset, as the players performing purchases are extremely under-represented. For this purpose, the authors employ the SMOTE-NC [10] method to over-sample the “converted” users’ data points. In a later article, Sifa et al. present a study that attempts to predict CLV in one single step [70], more in line with the approach presented by Lange et al. [45]. The study evaluates the performance of a deep neural network in predicting day-360 cumulative undiscounted revenue per player, and it compares the results against a number of algorithms such as random forest and linear regression. The results of the comparison show that the deep neural network out-performs the other regression algorithms in terms of normalized root mean square error. Furthermore, the authors demonstrate how to adapt SMOTE-NC to re-sample the dataset and reduce the imbalance between customers that made a purchase and customers that did not. This process requires a number of adjustments to the standard implementation of SMOTE-NC, as the original algorithm is designed for classification problems, while a direct prediction of CLV is a regression problem.

## 5 Software Packages

In Section 4, we gave an overview of the current state-of-the-art in CLV prediction mostly from an academic perspective, listing and explaining a number of scientific articles describing the main approaches to solve the prediction problem. However, most of the algorithms presented so far have been translated, to different degrees, into software packages that can be applied to easily perform CLV predictions on new datasets. In this article, we identify three main packages that can either be used directly to predict CLV or be used to construct a model for CLV prediction. The software packages are Lifetimes [16] and BTYD [21], which implement some of the algorithms presented in Section 4.2.2, and scikit-learn [49], a popular machine learning package for the Python programming language that can be used to implement the models presented in Section 4.4. Alternative packages can be used to develop supervised learning models – e.g. Keras [14] or XGBoost [13] – however, most of them are compatible with scikit-learn, therefore we chose this package as the representative in this article. To build and operate with Markov Chain Models, there are a large number of different alternatives for both the Python and the R language – e.g. the DTMC pack [73] and PyMC3 [62] – however, these packages are designed to offer functionalities that go far beyond the code needed to implement the models presented in Section 4.3, which can easily be implemented using standard algebra functionalities. For this reason, we chose not to cover any MCM package in this section.

### 5.1 Lifetimes/BTYD

The Lifetimes [16] and BTYD [21] packages are both software implementations of the Pareto/NBD, BG/NBD and Gamma-Gamma models; they allow learning the parameters of the various distributions from past purchase data and predicting future purchase numbers and purchase values for new customers. In this section we will give a sample of how to train and produce predictions with Lifetimes; however, the procedure is very similar for the BTYD software package. The main components of the Lifetimes package are the classes BetaGeoFitter and GammaGammaFitter.
The first class (BetaGeoFitter) contains all the logic necessary to fit a BG/NBD model from transactional data in the format of purchase recency and frequency, and the current age of the customer in terms of customer relationship duration. It is also possible to alternatively fit a Pareto/NBD model using the ParetoNBDFitter class instead. Given a dataset with correctly formatted inputs, the code to fit the BetaGeoFitter looks as follows:

```python
from lifetimes import BetaGeoFitter

bgf = BetaGeoFitter(penalizer_coef=0.0)
bgf.fit(data['frequency'], data['recency'], data['T'])
```

The package provides a function named summary_data_from_transaction_data that allows the user to produce correctly formatted data for the algorithm from a list of transactions. The second class (GammaGammaFitter) provides all the methods necessary to fit a Gamma-Gamma model and can be used, in combination with the BetaGeoFitter, to predict the customer lifetime value. The Gamma-Gamma model is also trained on transactional data containing the frequency and the total monetary value of the purchases made by each customer. The code to fit the Gamma-Gamma model looks as follows:

```python
from lifetimes import GammaGammaFitter

returning_customers_summary = data[data['frequency'] > 0]
ggf = GammaGammaFitter(penalizer_coef=0.0)
ggf.fit(returning_customers_summary['frequency'],
        returning_customers_summary['monetary_value'])
```

And the code to perform CLV predictions on new data using the previously fit models looks like this:

```python
clv = ggf.customer_lifetime_value(
    bgf,
    new_data['frequency'],
    new_data['recency'],
    new_data['T'],
    new_data['monetary_value'],
    time=180,
    discount_rate=0.01,
)
```

The time and discount rate depend on the parameters of the specific prediction to be performed. Further documentation and tutorials about the two packages can be found on their project web pages (https://github.com/CamDavidsonPilon/lifetimes and https://cran.r-project.org/web/packages/BTYD/index.html), and both packages are currently available for download and installation on the Python Package Index (https://pypi.org) and The Comprehensive R Archive Network (https://cran.r-project.org), respectively.

### 5.2 scikit-learn

Scikit-learn [49] is a library for the Python programming language that provides functionality to perform supervised and unsupervised learning tasks. The supervised learning part of the library implements a wide variety of algorithms, ranging from linear models to tree-based methods, for both classification and regression. In this section, we will show how to use scikit-learn to build a regression model based on the Random Forest algorithm, similar to the ones presented in [45] and [70]. Random Forest [7] is an ensemble algorithm that combines a number of tree predictors trained on different portions of the data; the output of the model is generally either the mode or the mean of the outputs of the different tree predictors, depending on whether the task is a classification or a regression task. Scikit-learn implements this class of algorithms through the RandomForestRegressor and RandomForestClassifier classes; for the purpose of this article, we will show how to use the RandomForestRegressor to predict CLV through regression.
Given a dataset similar to the one used in [70], with one line per player, the code to fit the RandomForestRegressor looks as follows:

```python
from sklearn.ensemble import RandomForestRegressor

features = ['number_of_sessions', 'number_of_rounds', 'number_of_days',
            'number_of_purchases', 'total_purchase_amount']
X = data[features]
y = data['day_360_CLV']

model = RandomForestRegressor(n_estimators=100)
model.fit(X, y)
```

For readability, the number of features selected as inputs is reduced in comparison to [70]; furthermore, the number of trees in the ensemble has been chosen without any specific motivation, as no precise description is included in the article. Both the size of the ensemble and the maximum depth of the trees, as well as the other parameters of the ensemble, are problem specific and should be selected either algorithmically or through extensive experimentation. The code necessary to perform CLV prediction using the previously fit model looks as follows:

```python
X = new_data[features]
y = model.predict(X)
```

The resulting array $y$ will contain the predicted customer lifetime values. The scikit-learn library also includes a number of functions to estimate the prediction error and to perform different forms of validation and testing – e.g. k-fold cross-validation. More information can be found on the project's documentation page (https://scikit-learn.org/stable/documentation.html).

## 6 Conclusions and Future Directions

In this chapter we have described the applications of customer lifetime value prediction in the free-to-play games industry and other industries, and we attempted to give a comprehensive description of the different methods that are currently used to predict CLV. We have identified a number of activities that can benefit from an accurate prediction, ranging from customer acquisition to market segmentation, and we have described how the need for such prediction is even more important in free-to-play games due to the characteristics of the market and the customer relationship – e.g. incredibly high competition in user acquisition and a very low number of paying customers compared to the player base. In Section 4, we have identified within the literature four main groups of methods for the prediction: average based, Pareto/NBD and derivatives, Markov Chain Models and Supervised Learning Models. The methods that are currently dominant within the free-to-play games industry are either average based or some form of Pareto/NBD. However, we can see an emerging trend in recent years of more studies being published on the application of different supervised learning algorithms to CLV prediction. These methods have a number of advantages over classical statistical methods in this context, as they make no assumptions about the distributions of the input data and easily allow multiple co-variates to be included in the model. Furthermore, in an industry with such a low rate of conversion to paying users, being able to predict revenue from a player that has not yet made any purchase has the potential to be very useful, for instance, for early estimation of the profitability of a player acquisition campaign. Based on the analysis presented and in light of the recent improvements within the field of machine learning, we close the article by proposing a number of possible interesting future directions for customer lifetime value prediction research.
##### Deep Learning

One group of supervised learning algorithms that has only minimally been explored, in one of the most recent articles [70], is deep neural networks. The work by Sifa et al. does not give an in-depth description of the deep multi-layer perceptron used; however, in the absence of other information, we can assume that the model was based on a standard fully-connected feed-forward neural network with multiple hidden layers. Given the results achieved with a relatively simple architecture, it would be interesting to test more advanced deep learning techniques such as auto-encoders for feature extraction [76] or deep convolutional neural networks [41].

##### Time Series

All of the models presented in this chapter act on some form of summary data representing the behavior of customers/players up to a given point in time. While some of these summary representations do include some representation of the temporal aspect of the behaviors – e.g. the recency and frequency features in RFM based models – these cannot capture any particular sequences of purchases or any sequence of in-game events that is connected to the probability that a player will perform a purchase or not. One possible approach to leverage this kind of information is to treat player behavior data as time series and perform some form of time-series regression or classification to predict CLV or a purchase event. Within the aforementioned deep learning field, algorithms such as long short-term memory networks [33] or the more recent temporal convolutional networks [23] are natural candidates for this kind of sequence modeling.

##### Transfer Learning and Lifelong Learning

Game developers in the mobile free-to-play market are challenged with the increasing need to manage a multitude of games live at the same time. These games often have very long lifetimes and, throughout their lifetimes, the game is adjusted and evolved and the player base changes due to the game evolution and the changes in the customer acquisition initiatives. This adds a new dimension to the problem of CLV prediction, as models need to be constantly updated and new models need to be built quickly for new versions of the games and for new games. Within this scenario, research in cross-game player behavior analysis [52], transfer learning [77] and life-long machine learning [71], and their application to CLV prediction, will become increasingly important.

## References

* [1] Harsha Aeron, Tarun Bhaskar, Ramasubramanian Sundararajan, Ashwani Kumar, and Janakiraman Moorthy. A metric for customer lifetime value of credit card customers. Journal of Database Marketing & Customer Strategy Management, 15(3):153–168, jun 2008. * [2] Kati Alha, Elina Koskinen, Janne Paavilainen, Juho Hamari, and Jani Kinnunen. Free-to-Play Games : Professionals’ Perspectives. Nordic DiGRA, pages 1–14, 2014. * [3] Charles Baden-Fuller and Stefan Haefliger. Business Models and Technological Innovation. Long Range Planning, 46(6):419–426, 2013. * [4] Paul D. Berger and Nada I. Nasr. Customer lifetime value: Marketing models and applications. Journal of Interactive Marketing, 12(1):17–30, 1998. * [5] Paul D. Berger, Bruce Weinberg, and Richard C. Hanna. Customer lifetime value determination and strategic implications for a cruise-ship company. Journal of Database Marketing & Customer Strategy Management, 11(1):40–52, sep 2003. * [6] Robert C Blattberg and John Deighton. Manage Marketing by the Customer Equity. Harvard business review, 74(4):136, 1996. * [7] Leo Breiman. Random forests. Machine Learning, 2001. * [8] Leo Breiman.
Classification And Regression Trees. Routledge, oct 2017. * [9] Paolo Burelli and Georgios N. Yannakakis. Adaptive Virtual Camera Control Trough Player Modelling. User Modelling and User-Adapted Interaction, 2015. * [10] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16:321–357, jun 2002\. * [11] C.-J. Cheng, S.W. Chiu, C.-B. Cheng, and J.-Y. Wu. Customer lifetime value prediction by a Markov chain based data mining model: Application to an auto repair and maintenance company in Taiwan. Scientia Iranica, 19(3):849–855, jun 2012. * [12] W. K. Ching, M. K. Ng, K. K. Wong, and E. Altman. Customer lifetime value: Stochastic optimization approach. Journal of the Operational Research Society, 55(8):860–868, 2004\. * [13] Hyunsu Cho. XGBoost Python Package, 2018. * [14] Francois Chollet. Keras: Deep Learning for humans, 2018. * [15] Richard Colombo and Weina Jiang. A stochastic RFM model. Journal of Interactive Marketing, 13(3):2–12, 1999. * [16] Cam Davidson-Pilon. Lifetimes: Measure customer lifetime value in Python, 2018. * [17] Peter R. Dickson and James L. Ginter. Market Segmentation, Product Differentiation, and Marketing Strategy. Journal of Marketing, 1987. * [18] Bas Donkers, Peter C. Verhoef, and Martijn de Jong. Predicting Customer Lifetime Value in Multi-Service Industries. Technical report, Erasmus Research Institute of Management (ERIM), 2003\. * [19] Anders Drachen, Magy Seif El-Nasr, and Alessandro Canossa. Game Analytics – The Basics. In Game Analytics, pages 13–40. 2013. * [20] F Robert Dwyer. Customer Lifetime Valuation to Support Marketing Decision Making. Journal of Direct Marketing, 11(4):6–13, 1997. * [21] Lukasz Dziurzynski, Edward Wadsworth, and Daniel McCarthy. BTYD: Implementing Buy ’Til You Die Models, 2014. * [22] A. S. C. Ehrenberg. Repeat Buying. North-Holland, 1972. * [23] Maha Elbayad, Laurent Besacier, and Jakob Verbeek. Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction. In Conference on Computational Natural Language Learning, Brussels, Belgium, 2018. * [24] O. Etzion, A. Fisher, and S. Wasserkrug. e-CLV: a modelling approach for customer lifetime evaluation in e-commerce domains, with an application and case study for online auctions. In IEEE International Conference on e-Technology, e-Commerce and e-Service, 2004. EEE ’04. 2004, pages 149–156. IEEE, 2004. * [25] Peter S Fader, Bruce G S Hardie, Ka Lok Lee, and Peter S Fader. Counting Your Customers the Easy Way : An Alternative to the Pareto / NBD Model. Marketing Science, 24(2):275–284, 2005. * [26] Peter S. Fader, Bruce G.S. Hardie, and Ka Lok Lee. RFM and CLV: Using Iso-Value Curves for Customer Base Analysis. Journal of Marketing Research, 42(4):415–430, nov 2005. * [27] Nicolas Glady, Bart Baesens, and Christophe Croux. A Modified Pareto / NBD Approach for Predicting Customer Lifetime Value. Expert Systems with Applications, 36(2, Part 1):1–26, 2009. * [28] Sunil Gupta, Dominique M. Hanssens, Bruce G. S. Hardie, Wiliam Kahn, V. Kumar, Nathaniel Lin, N. Ravishanker, S. Sriram, Nalini Ravishanker S. Sriram, V. Kumar, and Sunil Gupta. Modeling Customer Lifetime Value. Journal of Service Research, 9(2):139–155, 2006. * [29] Sunil Gupta and Donald R. Lehmann. Customer Lifetime Value and Firm Valuation. Journal of Relationship Marketing, 5(2-3):87–110, oct 2006. * [30] Michael Haenlein, Andreas M. Kaplan, and Anemone J. Beeser. 
A Model to Determine Customer Lifetime Value in a Retail Banking Context. European Management Journal, 25(3):221–234, jun 2007. * [31] Nicolai Hanner and Ruediger Zarnekow. Purchasing behavior in free to play games: Concepts and empirical validation. In Proceedings of the Annual Hawaii International Conference on System Sciences, volume 2015-March, pages 3326–3335, 2015. * [32] Brent Harrison and David L. Roberts. Analytics-Driven Dynamic Game Adaption for Player Retention in a 2-Dimensional Adventure Game. IEEE Conference on Computatonal Intelligence and Games, CIG, (Aiide):23–29, 2013. * [33] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, nov 1997. * [34] Arthur M. Hughes. Strategic Database Marketing: The Masterplan for Starting and Managing a Profitable Customer-Based Marketing Program. McGraw-Hill, 2000. * [35] Hyunseok Hwang, Taesoo Jung, and Euiho Suh. An LTV model and customer segmentation based on customer value: a case study on the wireless telecommunication industry. Expert Systems with Applications, 26(2):181–188, feb 2004. * [36] Barbara Jackson. Winning and keeping industrial customers. Lexington Books, 1985. * [37] Dipak Jain and Siddhartha S. Singh. Customer lifetime value research in marketing: A review and future directions. Journal of Interactive Marketing, 16(2):34–46, jan 2002. * [38] Norman L. Johnson and Samuel Kotz. Continuous Univariate Distributions, volume 1. John Wiley & Sons, 1970. * [39] E. L. Kaplan and Paul Meier. Nonparametric Estimation from Incomplete Observations. Journal of the American Statistical Association, 1958. * [40] Mahboubeh Khajvand, Kiyana Zolfaghar, Sarah Ashoori, and Somayeh Alizadeh. Estimating customer lifetime value based on RFM analysis of customer purchase behavior: Case study. Procedia Computer Science, 3(January):57–63, 2011. * [41] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, Lake Tahoe, NV, USA, 2012. Neural Information Processing Systems Foundation. * [42] V. Kumar. A Customer Lifetime Value-Based Approach to Marketing in the Multichannel, Multimedia Retailing Environment. Journal of Interactive Marketing, 24(2):71–85, may 2010. * [43] V. Kumar, Girish Ramani, and Timothy Bohling. Customer lifetime value approaches and best practice applications. Journal of Interactive Marketing, 18(3):60–72, 2004. * [44] Tony Lachowetz, Mark McDonald, William Sutton, and John Clark. The National Basketball Association: Application of Customer Lifetime Value. Sport Marketing Quarterly, 10(3):181, 2001. * [45] Sascha Lange, Michael Lenz, and Martin Riedmiller. Case Study : Behavioral Prediction of Future Revenues in Freemium Games. In Workshop New Challenges in Neural Computation, pages 26–33, 2014\. * [46] Tobias Mahlmann, Anders Drachen, Julian Togelius, Alessandro Canossa, and Georgios N. Yannakakis. Predicting player behavior in Tomb Raider: Underworld. In IEEE Conference on Computational Intelligence and Games, pages 178–185, Copenhagen, Denmark, 2010. IEEE. * [47] Naresh K. Malhotra. Review of marketing research, 2008. * [48] Edward C. Malthouse and Robert C. Blattberg. Can we predict customer lifetime value? Journal of Interactive Marketing, 19(1):2–16, jan 2005. * [49] Andreas Mueller. scikit-learn: A set of python modules for machine learning and data mining, 2018. * [50] Francis Mulhern. 
Customer Profitability Analysis: Measurement, Concentration and Research Directions. Journal of Interactive Marketing, 13(1):25–40, 1999. * [51] David B. Nieborg. Crushing Candy: The Free-to-Play Game in Its Connective Commodity Form. Social Media and Society, 1(2), 2015. * [52] Héctor Perez Martínez, Maurizio Garbarino, Georgios N. Yannakakis, Hector Perez Martinez, Maurizio Garbarino, and Georgios N. Yannakakis. Generic physiological features as predictors of player experience. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 6974 LNCS, pages 267–276, Memphis, 2011. * [53] Africa Perianez, Alain Saas, Anna Guitart, and Colin Magne. Churn Prediction in Mobile Social Games: Towards a Complete Assessment Using Survival Ensembles. In 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 564–573, 2016. * [54] Phillip E. Pfeifer and Robert L. Carraway. Modeling customer relationships as Markov chains. Journal of Interactive Marketing, 14(2):43–55, jan 2000. * [55] Phillip E Pfeifer, Mark E Haskins, and Robert M Conroy. Customer Lifetime Value, Customer Profitability, and the Treatment of Acquisition Spending. Journal of Managerial Issues, 17(1):25, 2004. * [56] Werner J. Reinartz and V. Kumar. On the Profitability of Long-Life Customers in a Noncontractual Setting: An Empirical Investigation and Implications for Marketing. Journal of Marketing, 64(4):17–35, oct 2000. * [57] Werner J. Reinartz and V. Kumar. The Impact of Customer Relationship Characteristics on Profitable Lifetime Duration. Journal of Marketing, 67(1):77–99, jan 2003. * [58] Saharon Rosset, Einat Neumann, U R I Eick, and Nurit Vatnik. Customer Lifetime Value Models for Decision Support. Data Mining and Knowledge Discovery, 7(3):321–339, 2003. * [59] Saharon Rosset, Einat Neumann, Uri Eick, Nurit Vatnik, and Yizhak Idan. Customer lifetime value modeling and its use for customer retention planning. Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’02, page 332, 2002. * [60] Julian Runge. The Golden Curve: Determining Player Value in Freemium Apps - https://gameanalytics.com/blog/golden-curve-determining-player-value-freemium-apps.html (Accessed: 2018-11-08), 2014. * [61] Julian Runge, Peng Gao, Florent Garcin, and Boi Faltings. Churn prediction for high-value players in casual social games. In 2014 IEEE Conference on Computational Intelligence and Games, pages 1–8. IEEE, aug 2014. * [62] John Salvatier, Thomas V Wiecki, and Christopher Fonnesbeck. Probabilistic programming in Python using {PyMC}3. {PeerJ} Computer Science, 2:e55, apr 2016. * [63] David C. Schmittlein, Donald G. Morrison, and Richard Colombo. Counting Your Customers : Who Are They and What Will They Do Next? Management Science, 33(1):1–24, 1987. * [64] David C Schmittlein and Robert A Peterson. Customer Base Analysis : An Industrial Purchase Process Application. Marketing Science, 1994. * [65] Eric Benjamin Seufert. Two Methods for Modeling LTV with a Spreadsheet - https://www.slideshare.net/EricSeufert/ltv-spreadsheet-models-eric-seufert (Accessed: 2018-11-08), 2013. * [66] Venkatesh Shankar and Sridhar Balasubramanian. Mobile Marketing: A Synthesis and Prognosis. Journal of Interactive Marketing, 23(2):118–129, 2009. * [67] Cc Shen and Hm Chuang. A study on the applications of data mining techniques to enhance customer lifetime value. 
WSEAS Transactions on Information Science and Applications, 6(2):319–328, 2009. * [68] Ya-Yueh Shih and Chung-Yuan Liu. A method for customer lifetime value ranking — Combining the analytic hierarchy process and clustering analysis. Journal of Database Marketing & Customer Strategy Management, 11(2):159–172, dec 2003. * [69] Rafet Sifa, Fabian Hadiji, Julian Runge, Anders Drachen, Kristian Kersting, and Christian Bauckhage. Predicting Purchase Decisions in Mobile Free-to-Play Games. In Proceedings, The Eleventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-15), pages 79–85, 2015\. * [70] Rafet Sifa, Julian Runge, Christian Bauckhage, and Daniel Klapper. Customer Lifetime Value Prediction in Non-Contractual Freemium Settings: Chasing High-Value Users Using Deep Neural Networks and SMOTE. In Data, Text and Web Mining for Business Analytics, volume 9, pages 923–932, 2018. * [71] Daniel L Silver, Qiang Yang, and Lianghao Li. Lifelong Machine Learning Systems : Beyond Learning Algorithms. AAAI Spring Symposium Series, (Solomonoff 1989):49–55, 2013. * [72] Siddharth S. Singh, Sharad Borle, and Dipak C. Jain. A Generalized Framework for Estimating Customer Lifetime Value When Customer Lifetimes are Not Observed. SSRN Electronic Journal, pages 1–32, 2007. * [73] Giorgio Alfredo Spedicato. Discrete Time Markov Chains with R. The R Journal, 2017. * [74] Rajkumar Venkatesan and V Kumar. A Customer Lifetime Value Framework for Customer Selection and Resource Allocation Strategy. Journal of Marketing, 68(4):106–125, oct 2004. * [75] Peter C. Verhoef and Bas Donkers. Predicting Customer Potential Value: an Application in the Insurance Industry. Technical Report 0, Erasmus Research Institute of Management, 2001. * [76] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning - ICML ’08, number April, pages 1096–1103, New York, New York, USA, 2008. ACM Press. * [77] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, volume 27, pages 1–9, 2014.
# Privacy-Preserving Representations are not Enough: Recovering Scene Content from Camera Poses Kunal Chelani1 Torsten Sattler2 Fredrik Kahl1 Zuzana Kukelova3 1 Chalmers University of Technology 2 Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague 3 Visual Recognition Group, Faculty of Electrical Engineering, Czech Technical University in Prague <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Visual localization is the task of estimating the camera pose from which a given image was taken and is central to several 3D computer vision applications. With the rapid growth in the popularity of AR/VR/MR devices and cloud-based applications, privacy issues are becoming a very important aspect of the localization process. Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service. In this paper, we show that an attacker can learn about details of a scene without any access by simply querying a localization service. The attack is based on the observation that modern visual localization algorithms are robust to variations in appearance and geometry. While this is in general a desired property, it also leads to algorithms localizing objects that are similar enough to those present in a scene. An attacker can thus query a server with a large enough set of images of objects, _e.g_., obtained from the Internet, and some of them will be localized. The attacker can thus learn about object placements from the camera poses returned by the service (which is the minimal information returned by such a service). In this paper, we develop a proof-of- concept version of this attack and demonstrate its practical feasibility. The attack does not place any requirements on the localization algorithm used, and thus also applies to privacy-preserving representations. Current work on privacy-preserving representations alone is thus insufficient. ## 1 Introduction Visual localisation refers to the problem of estimating the camera pose of a given image in a known scene. It is a core problem in several 3D computer vision applications, including self-driving cars [17, 18] and other autonomous robots [50], and Augmented Reality [25, 23, 5]. Figure 1: In the context of privacy-preserving localization, we show that it is possible to learn about the content of a scene using camera poses returned by a localization service, without any direct access to the scene representation. (1st column) Examples of images from the scene, used to build the scene representation. The images are shown for illustrative purposes and are not available to an attacker trying to learn about the scene. (2nd column) The attacker queries the service with images of objects, _e.g_., downloaded from the Internet. (3rd & 4th column) Using the camera poses for the query image returned by the localization service, the attacker is able to identify the types of objects present in the scene and to accurately place them in the scene. We show the estimated object poses overlaid over the ground truth structure of the scene (which is not accessible to the attacker). The attacker is able to faithfully recover the placement of objects. Overall, our results demonstrate that simple feedback such as camera poses is already sufficient to potentially reveal private details. 
A popular approach for Augmented/Mixed/Virtual Reality (XR) applications is to use a client-server mechanism for localization: the user device (client) sends image data to a cloud-based system (server) that computes and returns the camera pose [25, 46, 23]. Examples of such services include Google’s Visual Positioning System [29], Microsoft’s Azure Spatial Anchors [24], and Niantic’s Lightship [39]. Cloud-based localization services are popular for multiple reasons - first, performing localization on the server reduces storage requirements and the computational load, and thus energy consumption, which is important for client devices such as mobile phones and headsets; second, it enables using robust mapping and localization algorithms that are too expensive for mobile devices; third, in the context of collaborative mapping, _e.g_., for the AR cloud or autonomous driving, maintaining a single scene representation in a centralized place is far easier than keeping multiple copies on various mobile devices up-to-date. Naturally, sending user data to a server, _e.g_., in the form of images to be localized or 3D maps recorded by users that will be used for localization, raises privacy concerns [41, 42, 9]. Work on privacy-preserving localization aims to resolve these concerns by ensuring that private details cannot be recovered from the data sent [42, 14, 26] to or stored on the server [41, 36, 28, 52, 11, 15, 11]. Existing work focuses on scenarios where an attacker gains access to the localization service or can eavesdrop on the communication between client and server. In this work, we demonstrate that it is possible for an attacker to learn about the content of a scene stored on a localization server without direct access to the server. We show that a localization service will reveal scene-related information through estimated camera poses, _i.e_., through its normal operation process. The attack is based on two recent developments: (1) modern visual localization algorithms are designed to be robust against changes such as illumination and seasonal variations [44]. This is an essential property for cloud-based localization services in order to operate robustly and reliably. However, since these algorithms are robust to (slight) variations in appearance and geometry, they will also localize images showing objects that are similar (but not necessarily identical) to those objects present in the scene. (2) massive amounts of images depicting objects in different variations are readily available on the Internet. Taken together, both developments allow an attacker to repeatedly query the server with images and to recover the positions of the objects in the scene based on the camera poses returned by the server (_cf_. Fig. 1). In this paper, we demonstrate the feasibility of this attack by developing a proof-of-concept implementation of the attack. In summary, we make the following contributions: (1) we identify a new line of attack in the context of privacy-preserving visual localization based on the camera poses returned by a cloud-based server. (2) we show the feasibility of the attack through a proof-of-concept implementation of the attack. Through experiments, we explore the performance of our implementation as well as the trade-off between localization robustness and potential defenses against the attack. (3) the attack is agnostic to the underlying localization algorithm and thus applicable even if the localization system is otherwise perfectly privacy-preserving. 
This paper thus proposes a new research direction for privacy-preserving localization, where the aim for the localization service is to correctly identify whether a query image was taken in the concerned scene or not, in order to prevent leaking information through camera poses.

## 2 Related Work

Visual localization. Most state-of-the-art visual localization algorithms are based on establishing 2D-3D matches between a query image and a 3D model of the scene. These correspondences are then used for camera pose estimation. The 3D model can either be stored explicitly [33, 31, 32, 20, 21, 43, 19, 27], _e.g_., in the form of a Structure-from-Motion (SfM) point cloud, or implicitly in the form of the weights of a machine learning model [3, 2, 1, 38, 45, 6]. In the former case, local feature descriptors are associated with 3D points of the model. It has been shown that this information is sufficient to recover detailed images from the 3D map [28, 40], although sparsifying these models [4, 51] might effectively make them privacy-preserving [7]. Approaches based on implicit representations map image pixels or patches to 3D points by training scene coordinate regression models [38, 3]. Recently, it was claimed that such approaches are inherently privacy-preserving [11]. However, feature-based methods currently scale better to large scenes and are able to better handle condition changes [44], such as illumination or seasonal changes, between the query image and the database images used to build the scene representation. The resulting robustness is highly important in many applications of visual localization, including AR and robotics. The robustness is a direct consequence of recent advances in local features [10, 13, 30] and effective feature matchers [32, 53, 43, 48]. In this paper, we show that robustness to changing conditions enables an attacker to learn about the content of the scene: robustness to changing conditions not only bridges the gap between (small) variations in scene appearance and geometry observed in images depicting the same place, but also leads to correspondences between images depicting similar but not identical objects, _e.g_., different chairs. In turn, these correspondences can be used to localize the object in the scene, which is the basis for the attack described in this work. Note that the properties we exploit are inherent to robust localization algorithms and are not restricted to feature-based methods. Ultimately, any robust localization system needs to handle variations in shape and appearance.

Privacy-preserving visual localization. Existing work on privacy-preserving localization focuses on two points of attack: (1) ensuring that data sent to a localization service does not reveal private information. (2) ensuring that data stored on a localization service does not reveal private information. For the former case, it has been shown that images can be recovered from local features [49, 12, 9]. Work on privacy-preserving queries to a localization server thus mostly aims at developing features that prevent image recovery [14, 26] or on obfuscating the feature geometry [42, 16]. Similarly, work on privacy-preserving scene representation aims at obfuscating the geometry [41, 37] (although scene geometry can be recovered under certain conditions [7]), splitting the maps over multiple servers for increased data security [15], using implicit representations [11], or storing raw geometry without any feature descriptors [52].
This paper presents a new line of attack that complements existing work. Previous work considers a scenario where the attacker gains access to the service. In contrast, we show that it is possible to recover scene content from the very basic information provided by any localization service, namely the camera poses estimated for query images. As such, the attack is still feasible even if the data sent to and stored on the server is completely privacy-preserving. Our work thus shows that existing privacy-preserving localization approaches are not sufficient to ensure user privacy.

## 3 Recovering Scenes from Camera Poses

Any localization system returns the camera poses of localized query images. At the same time, modern localization algorithms aim to be robust to shape and appearance variations in order to be robust to changes in viewing conditions. This feature, however, opens up the possibility that not only genuine queries, but also images of objects that are similar to the ones present in the scene can be localized. The camera poses of the localized images can then in turn be used to infer the positions of certain objects in the scene, potentially revealing more information about the scene than the cloud-based service / a user would like to disclose. Naturally, an attacker does not know which objects are present in the scene and thus which images to use for their queries. The Internet is a source of a theoretically unlimited number of images, videos, and 3D models of objects of different types and appearances. This naturally leads to an idea of a potential attack, where an attacker just downloads such images and videos, bombards the server with localization requests, and uses poses of localized images to reveal detailed scene structure. In the following sections, we investigate this new type of attack, and we try to answer several questions: Can an attacker with access to images and videos of objects similar to those present in the scene easily learn about the presence/absence of different objects and their placement in the scene just from the poses returned by a localization service? What are the challenges of such an attack, and are these challenges easily solvable? Can cloud-based services easily prevent such attacks? To this end, we present a proof-of-concept implementation of the attack. (We only aim to show feasibility; we believe that better attack algorithms are certainly possible.) Later, Sec. 6 discusses an approach to potentially mitigate the attack and why its effectiveness is limited.

### 3.1 Formalization

We assume a localization server $\mathcal{L}$ that is responsible for localizing images in a scene $\mathcal{S}$. $\mathcal{L}$ tries to align each query image it receives with the scene representation as best as possible. If an image can be localized, the server returns a 6-dof camera pose $[\mathbf{R}|\mathbf{t}]$. We assume that the scale of the translation component $\mathbf{t}$ is known. An adversary $\mathcal{A}$ is querying $\mathcal{L}$ with many images of different objects, where each image contains only one dominant object to avoid confusion about which object from the image was localized in the scene. $\mathcal{A}$, using the poses returned by $\mathcal{L}$, wants to learn about the presence/absence of objects in the scene $\mathcal{S}$, and wants to infer their (approximate) positions. As such, $\mathcal{A}$ tries to construct an (approximate) “copy” of the scene $\mathcal{S}$ or at least its layout.
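As a minimal sketch of this setup (here, localize is a hypothetical stand-in for the cloud service's pose-estimation API and not the interface of any real system), the adversary's query loop could look as follows:

```python
from collections import defaultdict

def query_object_images(localize, object_images):
    """Collect server poses for the attacker's object images.

    localize(image) is assumed to return a world-to-camera pose (R, t) in the
    scene coordinate system if the image is localized, and None otherwise.
    object_images maps an object id to a list of images of that object taken
    from different viewpoints (e.g., frames of an Internet video).
    """
    server_poses = defaultdict(dict)
    for obj_id, images in object_images.items():
        for img_idx, image in enumerate(images):
            pose = localize(image)
            if pose is not None:
                server_poses[obj_id][img_idx] = pose  # (R, t) in the scene frame
    return server_poses
```

The per-object pose sets collected this way play the role of $\hat{\mathbf{P}}_{o}$ in the placement step described in Sec. 3.2.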
In this setting $\mathcal{A}$ needs to deal with two challenges:

1. $\mathcal{A}$ queries $\mathcal{L}$ with images of objects that, in general, differ geometrically from the actual objects in the scene. In the best case, the pose returned by the server provides the best-possible approximate alignment between the query and the actual object. In general, the returned poses will be noisy and can be quite inaccurate if only a part of the object, _e.g_., a chair’s leg, is aligned. Creating an accurate “copy” of the scene from such poses is a challenging problem.

2. $\mathcal{A}$ has, in general, no a-priori information about the type of the scene and which objects are visible in it. Since $\mathcal{L}$ can also return poses for objects that are not in the scene, $\mathcal{A}$ needs to have a mechanism for deciding the presence/absence of an object based on the returned poses. Naturally, having to deal with noisy and inaccurate poses makes the decision process harder.

In general, it is not possible to overcome these challenges by using a single image of each object. A single camera pose returned by $\mathcal{L}$, without additional information, does not provide enough data for deciding about the presence/absence of the object in the scene and the quality of the pose. However, given the large number of images available on the Internet, and in particular the availability of videos, $\mathcal{A}$ can use several images of the same object taken from different viewpoints. Jointly reasoning about all of the corresponding poses obtained for these images can then be used to decide the presence and position of the object.

Figure 2: Object alignment example: 1. A 3D model $\mathcal{M}$ of an object and corresponding camera poses $\mathbf{P}_{o}$ in the attacker’s local coordinate system, built from a sequence of object images. 2. The server scene with a similar object. 3. The noisy poses returned by the server for the queried object images. 4. Sequences of local and server-provided poses aligned to approximately place the object in the scene.

### 3.2 3D Object Placement

Assuming that the attacker knows that an object is present, they still need to predict its position and orientation in the scene based on the pose estimates provided by the server. To this end, the attacker can exploit the fact that multiple images of the same object taken from different viewpoints are available. These images can be used by $\mathcal{A}$ to build a local 3D model $\mathcal{M}$, _e.g_., using SfM [34] and MVS [35], and to compute the poses $\mathbf{P}_{o}$ of these images w.r.t. this model. In turn, $\mathcal{L}$ provides a set $\hat{\mathbf{P}}_{o}$ of poses for (a subset of) these images in the coordinate system of the scene model $\mathcal{S}$. The problem of placing the object in the copy of the scene $\mathcal{S}$ thus reduces to the problem of aligning both sets of poses (_cf_. Fig. 2). The camera poses $\hat{\mathbf{P}}_{o}$ provided by $\mathcal{L}$ can be very noisy and can contain outliers. Thus, the alignment process needs to be robust.
Algorithm 1: Best single-camera-based alignment between sets of poses

Input: $\mathbf{P}_{o}=\{[\mathbf{R}_{i}|\mathbf{t}_{i}]\}$, $\hat{\mathbf{P}}_{o}=\{[\hat{\mathbf{R}}_{i}|\hat{\mathbf{t}}_{i}]\}$, $\delta_{r}$, $\delta_{t}$
Output: $\mathbf{R}_{best}$, $\mathbf{t}_{best}$, $\epsilon$

1: procedure Get-Best-Alignment
2: $N\leftarrow|\mathbf{P}_{o}|$
3: $\texttt{Inliers\_best}\leftarrow\phi$
4: for $i=1$ to $N$ do
5: $\mathbf{R}_{est}\leftarrow\hat{\mathbf{R}}^{\top}_{i}\mathbf{R}_{i}$
6: $\mathbf{t}_{est}\leftarrow\hat{\mathbf{R}}^{\top}_{i}(\mathbf{t}_{i}-\hat{\mathbf{t}}_{i})$
7: $\texttt{Inliers}\leftarrow\phi$
8: for $j=1$ to $N$ do
9: $\Delta_{r}\leftarrow\angle(\mathbf{R}_{j}\mathbf{R}^{\top}_{est}\hat{\mathbf{R}}_{j}^{\top})$
10: $\Delta_{t}\leftarrow\|\hat{\mathbf{R}}^{\top}_{j}\hat{\mathbf{t}}_{j}-\mathbf{R}_{est}\mathbf{R}^{\top}_{j}\mathbf{t}_{j}+\mathbf{t}_{est}\|$
11: if $\Delta_{r}<\delta_{r}$ and $\Delta_{t}<\delta_{t}$ then
12: $\texttt{Inliers}\leftarrow\texttt{Inliers}\cup\{j\}$
13: if $|\texttt{Inliers}|>|\texttt{Inliers\_best}|$ then
14: $\texttt{Inliers\_best}\leftarrow\texttt{Inliers}$
15: $\epsilon\leftarrow|\texttt{Inliers\_best}|/N$
16: $\mathbf{R}_{best},\mathbf{t}_{best}\leftarrow\text{Average}(\texttt{Inliers\_best})$

As mentioned above, for simplicity we assume that the scale of the 3D model stored by $\mathcal{L}$ is known. (In the context of user-generated maps, captured by devices with IMUs such as mobile phones or dedicated XR headsets, it seems realistic to assume that the scale of the maps is provided in meters.) Similarly, the scale of the local model $\mathcal{M}$ can be (approximately) recovered using the known size of the object. In this case, the two poses, in the coordinate systems of $\mathcal{M}$ and $\mathcal{S}$, for a single image already provide an alignment hypothesis, _i.e_., the relative pose between them. As outlined in Alg. 1, we evaluate all hypotheses. The inputs to Alg. 1 are the two sets of poses, $\mathbf{P}_{o}$ and $\hat{\mathbf{P}}_{o}$, and two error thresholds – $\delta_{r}$ for rotation and $\delta_{t}$ for translation. For each pair of corresponding camera poses – local and server-provided – a relative transformation is computed (Lines 5-6). One set of poses is transformed using this estimated transformation, and errors for rotation and translation between corresponding pairs are computed (Lines 9-10). Using the two thresholds, we determine which other pose pairs are inliers to the pose hypothesis (Lines 11-12). The transformation with the largest number of inliers is selected (Lines 13-14) and refined by averaging the relative poses of all inliers. Obviously, not knowing the scale of the scene model $\mathcal{S}$ is insufficient to prevent the attacker from placing the object in the scene, as the scale and relative transformation can be recovered from two pairs of poses. Additionally, there are ways to further robustify the alignment process. _E.g_., if images of multiple very similar instances of an object and the corresponding 3D models are available, it seems reasonable to assume that images of different instances taken from similar viewpoints will also result in similar pose estimates by $\mathcal{L}$. These estimates can then be used to average out noise in the poses. Similarly, the relation between different objects, _e.g_., a monitor standing on a desk, can be used to stabilize the process of placing objects in the scene. However, we do not investigate such advanced strategies in this paper.
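A minimal numpy sketch of Alg. 1 is given below. It assumes world-to-camera poses, i.e., a pose $(\mathbf{R},\mathbf{t})$ maps a world point $\mathbf{X}$ to camera coordinates $\mathbf{R}\mathbf{X}+\mathbf{t}$, and it returns the best single-pair hypothesis together with the inlier ratio $\epsilon$; for brevity, the final averaging over inliers (Line 16) is omitted.

```python
import numpy as np

def rotation_angle_deg(R):
    """Rotation angle of a 3x3 rotation matrix, in degrees."""
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def best_single_camera_alignment(P_local, P_server, delta_r_deg, delta_t):
    """Sketch of Alg. 1: P_local / P_server are lists of (R, t) pairs, where
    the i-th entries are the poses of the same query image in the attacker's
    local object frame and in the scene frame, respectively. Returns
    (R_best, t_best, eps) with X_scene = R_best @ X_local + t_best."""
    n = len(P_local)
    best_inliers, best_R, best_t = [], None, None
    for i in range(n):
        R_i, t_i = P_local[i]
        Rh_i, th_i = P_server[i]
        # Hypothesis from a single pose pair (Lines 5-6).
        R_est = Rh_i.T @ R_i
        t_est = Rh_i.T @ (t_i - th_i)
        inliers = []
        for j in range(n):
            R_j, t_j = P_local[j]
            Rh_j, th_j = P_server[j]
            # Rotation consistency (Line 9) and camera-center distance (Line 10).
            d_r = rotation_angle_deg(R_j @ R_est.T @ Rh_j.T)
            d_t = np.linalg.norm(Rh_j.T @ th_j - R_est @ R_j.T @ t_j + t_est)
            if d_r < delta_r_deg and d_t < delta_t:
                inliers.append(j)
        if len(inliers) > len(best_inliers):
            best_inliers, best_R, best_t = inliers, R_est, t_est
    return best_R, best_t, len(best_inliers) / n
```

The parameters delta_r_deg and delta_t correspond to the thresholds $\delta_{r}$ and $\delta_{t}$ of Alg. 1.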
### 3.3 Deciding the Presence/Absence of an Object

We assume that $\mathcal{L}$ is running a localization algorithm that is robust to shape and appearance variations and that aligns query images to the scene as best as it can. At the same time, $\mathcal{L}$ can also return poses for objects that are not in the scene, as well as poses for objects that are not even from the same categories as, or similar to, objects present in the scene. Deciding if an object is present or not in a scene based on the poses returned for its images by the localization server is therefore a challenging problem.

Figure 3: Example images from IKEA-Scenes (left) and one of the objects of corresponding scenes in IKEA-Objects (right).

For an attacker $\mathcal{A}$ trying to recover scene information via camera poses, it is impossible to determine which types of objects are present using just a single camera pose returned for one query image of each of the objects. To overcome this challenge, $\mathcal{A}$ can employ several possible techniques; _e.g_., they can use statistics about object co-occurrence to select the set of queries and associated camera poses whose spatial distribution is highly probable. Another simple solution is to use multiple images of the same object taken from different viewpoints, or to cluster query images into groups depicting similar objects that are assumed to be matched with the same object in the scene $\mathcal{S}$. $\mathcal{A}$ can then use different images from these groups to query $\mathcal{L}$ and decide on the presence/absence of the object based on the consistency of the returned poses. Even though the returned poses can be noisy and can contain outlier poses, in general, it is expected that a reasonably large subset of images depicting the same object from different viewpoints, or depicting objects from the same group, will show consistent returned poses if a similar object is present in $\mathcal{S}$. On the other hand, poses obtained for images of an object that is absent can be expected to exhibit a much higher variance.

In this paper, we discuss and evaluate another strategy for the presence/absence decision that allows us to show the completeness of the attack and present its proof-of-concept implementation. We assume that the attacker $\mathcal{A}$ learns certain statistics for each object or category from curated training data that comprises scenes with known presence/absence of these objects or object categories. This can be done for different types of localization schemes over huge amounts of 3D data. The attacker can then use these learned statistics to infer the presence/absence of objects when attacking an unknown scene $\mathcal{S}$. For the experimental results in the later sections, we use the inlier ratio $\epsilon$ obtained from the object positioning step (Line 15 in Alg. 1) as this statistic. We can assume that for each object (or a class of objects) $o$, $\mathcal{A}$ has inlier ratios $\epsilon^{+}_{o}$ and $\epsilon^{-}_{o}$ that are trained on scenes with known presence or absence of $o$. _E.g_., $\epsilon^{+}_{o}$ and $\epsilon^{-}_{o}$ can be computed as the medians of $\epsilon_{o}$ over all “present (+)”/“absent (−)” scenes. Based on these statistics, the presence or absence of $o$ in the unknown scene $\mathcal{S}$ can be decided by comparing the distances of $\epsilon^{\mathcal{S}}_{o}$ to $\epsilon^{+}_{o}$ and $\epsilon^{-}_{o}$.
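A hedged sketch of this decision rule is shown below; the fallback for a missing $\epsilon^{+}_{o}$ reference mirrors the handling described later in Sec. 5.2.

```python
def object_present(eps, eps_plus, eps_minus):
    """Decide the presence of an object from inlier ratios (Sec. 3.3).

    eps: inlier ratio returned by Alg. 1 for the attacked scene.
    eps_plus / eps_minus: reference inlier ratios (e.g., medians) learned on
    training scenes where the object is known to be present / absent.
    eps_plus may be None when no "present" training scene is available.
    """
    if eps_plus is None:
        # Fallback: assume presence only if the ratio exceeds the "absent" reference.
        return eps > eps_minus
    return abs(eps - eps_plus) < abs(eps - eps_minus)
```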
We provide concreteness to this idea when assessing its effectiveness over a real-world dataset in Section 5.2.

Figure 4: Qualitative results for aligning objects in different scenes of the IKEA-Scenes dataset. We evaluate three combinations of local features and matchers. Aligned objects are color-coded green to red along the gravity direction to make their orientation better visible.

## 4 Datasets

We use multiple datasets for our experiments:

IKEA-Scenes and IKEA-Objects – We captured image sequences of seven different inspiration-rooms at an IKEA furniture store (_cf_. Fig. 3). 1,000-2,500 images were captured for each room, depending on its size. 4-10 objects from each room were selected, and a separate sequence of images was captured for each of them in the inventory section of the store, where the surrounding environment was different from that of the inspiration rooms. Note that the two instances of each object have the same model, but in many cases differ in color and size. Presence/absence of additional objects such as cushions on a sofa, or a computer on a desk, can additionally change the overall appearance of the two instances. In total, the dataset comprises 38 object instances covered by 100-200 images each. While capturing the dataset, we tried to only have a single object occupying a large part of each image. However, this was not always possible and no post-processing has been applied to mask out objects. We call the inspiration-room data IKEA-Scenes and the data from the inventory section IKEA-Objects.

ScanNet-Office-Scene – To show that the objects do not need to be of the exact same model for the proposed attack to work, we consider a generic office scene - scene0040 from the ScanNet [8] dataset.

Office-Objects – We collected image sequences of 5 common office room objects - a door, a whiteboard, an office chair, a desk with computer, and a bookshelf. These images are used as queries by the attacker.

RIO10 – RIO10 [47] is a localization benchmark dataset which we use to evaluate the effectiveness of a potential defence strategy that a localization server might employ. We manually scale all local 3D models constructed by the attacker to roughly metric scale.

## 5 Experimental Evaluation

This section presents a series of experiments that show the practical feasibility of the attack introduced in Sec. 3. First, we show via qualitative results that the method proposed in Sec. 3.2 allows the attacker to place the 3D models of relevant objects close to the actual corresponding objects in the scene. We then explain and evaluate a simple implementation of the method described in Sec. 3.3 that the attacker can use to decide the presence/absence of objects. For querying the localization server, we use images from the datasets described above. To implement the server, we use HLoc [31, 32] (with default thresholds and parameters), a state-of-the-art visual localization approach. HLoc uses feature descriptors to establish 2D-3D matches between features extracted from the query image and 3D scene points. The resulting correspondences are then used for pose estimation. We demonstrate the reliance of the attack on the robustness of the localization process by evaluating three different combinations of local image features and matchers: Superpoint [10] features with the SuperGlue [32] matcher (most robust), R2D2 [30] with Nearest Neighbor (NN) matching, and SIFT [22] with NN matching (least robust).
### 5.1 3D Object Placement

Figure 5: (a) Example images from ScanNet-Office-Scene and corresponding objects in Office-Objects. (b) Qualitative results for aligning generic office objects in ScanNet [8] scene0040, using Superpoint+Superglue and R2D2+NN.

We qualitatively evaluate the accuracy of the 3D object placements obtained using the approach from Sec. 3.2 for the IKEA-Scenes and ScanNet-Office-Scene datasets. We use qualitative results rather than quantitative metrics since it is hard to quantify when a placement is realistic enough. _E.g_., consider the predicted positions of the oven in the 3rd row of Fig. 4. The first two predictions are far enough from the ground truth position that a metric such as the IoU of the 3D bounding boxes of the objects will discard them as wrong. Yet, the estimated positions are close enough to the ground truth to provide the attacker with a good layout of the scene. Fig. 4 shows results for placing selected items from the IKEA-Objects dataset in 4 different scenes from the IKEA-Scenes dataset. Fig. 5 shows results for placing objects from the Office-Objects dataset in the ScanNet-Office-Scene dataset. As can be seen, using a robust localization process based on Superpoint features and the Superglue matcher or R2D2 features allows the attacker to place the objects close to their ground truth positions. In particular, the results from Fig. 5 show that the alignment also works well when the queried object is not just the same model in a different color/size, but a very different one in terms of shape and overall appearance. The results clearly demonstrate the practical feasibility of the placement strategy. We used slightly different values for the error thresholds required by the positioning algorithm, depending on the object size and the obtained poses. Such an approach is feasible if a human supervises the attack. Code and data are available at https://github.com/kunalchelani/ObjectPositioningFromPoses.

### 5.2 Deciding the Presence/Absence of an Object

| Scene | SP+SG (10°, 0.25m) | SP+SG (30°, 0.5m) | SP+SG (60°, 2m) | R2D2+NN (10°, 0.25m) | R2D2+NN (30°, 0.5m) | R2D2+NN (60°, 2m) | SIFT+NN (10°, 0.25m) | SIFT+NN (30°, 0.5m) | SIFT+NN (60°, 2m) |
|---|---|---|---|---|---|---|---|---|---|
| Scene1 | 0.6 / 0.85 | 0.75 / 0.85 | 0.67 / 0.85 | 0.57 / 0.57 | 0.36 / 0.57 | 0.28 / 0.57 | 0.33 / 0.57 | 0.45 / 0.71 | 0.33 / 0.43 |
| Scene2 | 0.36 / 0.4 | 0.36 / 0.5 | 0.37 / 0.6 | 0.34 / 0.4 | 0.3 / 0.3 | 0.35 / 0.6 | 0.33 / 0.4 | 0.26 / 0.5 | 0.28 / 0.6 |
| Scene3 | 0.55 / 0.71 | 0.36 / 0.57 | 0.25 / 0.43 | 0.31 / 0.71 | 0.47 / 1.0 | 0.41 / 1.0 | 0.3 / 0.42 | 0.5 / 0.42 | 0.44 / 1.0 |
| Scene4 | 0.17 / 0.4 | 0.23 / 0.6 | 0.14 / 0.4 | 0.34 / 0.6 | 0.28 / 0.4 | 0.2 / 0.4 | 0.15 / 0.4 | 0.15 / 0.4 | 0.17 / 0.4 |
| Scene5 | 0.33 / 0.6 | 0.4 / 0.8 | 0.44 / 0.8 | 0.5 / 0.6 | 0.34 / 0.4 | 0.5 / 0.6 | 0.22 / 0.4 | 0.25 / 0.4 | 0.33 / 0.4 |
| Scene6 | 0.25 / 0.6 | 0.28 / 0.6 | 0.22 / 0.4 | 0.22 / 0.4 | 0.3 / 0.6 | 0.33 / 0.8 | 0.14 / 0.2 | 0.2 / 0.2 | 0.25 / 0.6 |
| Scene7 | 0.5 / 0.5 | 0.5 / 0.33 | 0.33 / 0.5 | 0.6 / 0.5 | 0.5 / 0.5 | 0.38 / 0.5 | 0 / 0 | 0.14 / 0.17 | 0 / 0 |

Table 1: Precision (P) and recall (R) of our method to determine the presence of objects for the IKEA-Scenes and IKEA-Objects datasets. Each cell reports P / R for the given feature/matcher combination (SP+SG = Superpoint+Superglue) and pose-error thresholds.

Figure 6: Effectiveness of a potential approach to prevent the proposed attack based on not providing poses for queries containing only a few objects.
Only objects contributing at least 10% of the inliers found on the object with the most inliers are considered. As can be seen, finding a suitable threshold for the minimum number of visible objects can be difficult.

In Sec. 3.3, we suggested strategies that an attacker can use to decide, for each object, whether it is present in a scene $\mathcal{S}$ or not. Concretely, using a set of training scenes, the attacker has learned representative values $\epsilon^{+}$ and $\epsilon^{-}$ for the inlier ratio returned by Alg. 1 for cases where the object is present (+) and absent (−), respectively. When deciding the presence of an object $\mathbf{o}$ in a scene $\mathcal{S}$, the attacker uses the inlier ratio ($\epsilon$) from Alg. 1 to make their decision. The object $\mathbf{o}$ is considered to be present in the scene if $|\epsilon-\epsilon^{+}|<|\epsilon-\epsilon^{-}|$ and otherwise considered as absent. We use the IKEA-Scenes and IKEA-Objects datasets for this experiment. When deciding the presence/absence of an object in a scene, the other 6 scenes are used as training scenes. Many of the objects from IKEA-Objects are only present in one of the scenes from IKEA-Scenes. In these cases, no reference value for $\epsilon^{+}$ is available for these scenes. In such cases, the object is considered as present if $\epsilon>\epsilon^{-}$. This strategy is motivated by the assumption that correctly placing an object that is present results in a higher inlier ratio than placing objects that are not present. Tab. 1 shows precision and recall of this strategy. Since the computation of the inlier ratio $\epsilon$ depends upon the error thresholds, we present the results for three different sets of thresholds. The results show that for most scenes, it is possible to obtain a precision/recall of approx. 0.4/0.6, which, _e.g_., translates to 3 out of 5 present, and around 29 out of 33 absent, objects from IKEA-Objects being correctly classified. The average precision using random guessing in these scenes is 0.19. This, together with the quality of the placement, clearly validates the feasibility of the proposed attack.

## 6 Preventing the Attack?

A natural way to prevent the presented attack is to try to distinguish between genuine and malicious queries. By not sending poses for query images deemed as (potentially) malicious, the localization service effectively prevents the attacker from using pose estimates to learn about the scene. One potential classification strategy is based on the fact that the attacker sends images focusing on a single object. In this case, we expect that most of the 3D points from the inlier 2D-3D matches found by HLoc lie on a single 3D object. We thus count the number of 3D objects that contribute at least a certain fraction of inliers (X% of the inliers of the object contributing the largest number of inliers). If the number is too small, the query image is considered to be malicious and is rejected. Fig. 6 shows results for three different objects used to attack three different scenes of the RIO10 dataset [47]. Here, we use the instance-level labels provided by the dataset, which include background classes such as floor and walls, to define objects. As can be seen, rejecting the majority of malicious queries while retaining genuine queries can be challenging. The reason is that even while focusing on a single object, other objects might be partially visible in the queries, _e.g_., part of a desk for monitors, different pillows on a couch, books on a shelf, _etc_.
In addition, genuine queries might focus on small parts of the scene or even individual objects. Thus, finding a suitable threshold on the minimum number of visible objects can be hard. Furthermore, note that this defense strategy requires the service to have knowledge about the objects in the scene, either extracted from the queries or the scene representation. This requirement creates a potential privacy risk if an attacker is able to gain access to the service. ## 7 Conclusions and Future work In this paper, we have considered the problem of privacy-preserving localization. Prior work aims to defend attacks for the case where the attacker gains access to a cloud-based localization service. In contrast, we show that it is possible for an attacker to recover information about the scene by using the service as intended: by querying the server with images of different objects, an attacker is able to determine which objects are present and to estimate their position in the scene. The attack is based on the minimum amount of information that a localization service needs to provide to its users, _i.e_., camera poses for query images, and exploits that modern localization systems are robust to changing conditions. Experiments with our proof-of-concept implementation show the practical feasibility of the attack. The attack is applicable even if the localization algorithm used by the server is otherwise perfectly privacy-preserving. Our results show that existing privacy-preserving approaches are not sufficient to ensure user privacy, creating the need for further research. In particular, first experiments show that preventing the attack proposed in this paper without reducing localization performance and creating other angles of attack is a non-trivial task and interesting direction for future work. Acknowledgements. This work was supported by the EU Horizon 2020 project RICAIP (grant agreement No. 857306), the European Regional Development Fund under project IMPACT (No. CZ.02.1.01/0.0/0.0/15_003/0000468), the Czech Science Foundation (GAČR) JUNIOR STAR Grant No. 22-23183M, Chalmers AI Research Center (CHAIR), WASP and SSF. ## References * [1] Eric Brachmann and Carsten Rother. Learning Less is More - 6D Camera Localization via 3D Surface Regression. In CVPR, 2018. * [2] Eric Brachmann and Carsten Rother. Expert sample consensus applied to camera re-localization. In ICCV, 2019. * [3] Eric Brachmann and Carsten Rother. Visual camera re-localization from RGB and RGB-D images using DSAC. arXiv:2002.12324, 2020. * [4] Federico Camposeco, Andrea Cohen, Marc Pollefeys, and Torsten Sattler. Hybrid Scene Compression for Visual Localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [5] R. O. Castle, G. Klein, and D. W. Murray. Video-rate localization in multiple maps for wearable augmented reality. In ISWC, 2008. * [6] T. Cavallari, L. Bertinetto, J. Mukhoti, P. Torr, and S. Golodetz. Let’s take this online: Adapting scene coordinate regression network predictions for online rgb-d camera relocalisation. In 3DV, 2019. * [7] Kunal Chelani, Fredrik Kahl, and Torsten Sattler. How Privacy-Preserving Are Line Clouds? Recovering Scene Details From 3D Lines. In (CVPR), pages 15668–15678, June 2021. * [8] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. * [9] Deeksha Dangwal, Vincent T. 
Lee, Hyo Jin Kim, Tianwei Shen, Meghan Cowan, Rajvi Shah, Caroline Trippel, Brandon Reagen, Timothy Sherwood, Vasileios Balntas, Armin Alaghi, and Eddy Ilg. Analysis and mitigations of reverse engineering attacks on local feature descriptors. BMVC, 2021. * [10] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-supervised interest point detection and description, 2018\. * [11] Tien Do, Ondrej Miksik, Joseph DeGol, Hyun Soo Park, and Sudipta N. Sinha. Learning to detect scene landmarks for camera localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022. * [12] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In CVPR 2016, pages 4829–4837, 06 2016. * [13] Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-Net: A trainable CNN for joint detection and description of local features. In CVPR, 2019. * [14] Mihai Dusmanu, Johannes L. Schönberger, Sudipta N. Sinha, and Marc Pollefeys. Privacy-preserving visual feature descriptors through adversarial affine subspace embedding. CVPR, 2021. * [15] Marcel Geppert, Viktor Larsson, Johannes L. Schönberger, and Marc Pollefeys. Privacy preserving partial localization. In CVPR, 2022. * [16] Marcel Geppert, Viktor Larsson, Pablo Speciale, Johannes L. Schönberger, and Marc Pollefeys. Privacy Preserving Structure-from-Motion. In ECCV, 2020. * [17] Marcel Geppert, Peidong Liu, Zhaopeng Cui, Marc Pollefeys, and Torsten Sattler. Efficient 2D-3D Matching for Multi-Camera Visual Localization. In ICRA, 2019. * [18] L. Heng, B. Choi, Z. Cui, M. Geppert, S. Hu, B. Kuan, P. Liu, R. Nguyen, Y. C. Yeo, A. Geiger, G. H. Lee, M. Pollefeys, and T. Sattler. Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System. In ICRA, 2019. * [19] Martin Humenberger, Yohann Cabon, Nicolas Guerin, Julien Morat, Jérôme Revaud, Philippe Rerole, Noé Pion, Cesar de Souza, Vincent Leroy, and Gabriela Csurka. Robust Image Retrieval-based Visual Localization using Kapture. arXiv:2007.13867, 2020. * [20] A. Irschara, C. Zach, J.-M. Frahm, and H. Bischof. From Structure-from-Motion Point Clouds to Fast Location Recognition. In CVPR, 2009. * [21] Y. Li, N. Snavely, D. Huttenlocher, and P. Fua. Worldwide Pose Estimation Using 3D Point Clouds. In ECCV, 2012. * [22] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 60(2), 2004. * [23] Simon Lynen, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. Large-scale, real-time visual–inertial localization revisited. IJRR, 39(9):1061–1084, 2020. * [24] Microsoft. Spatial Anchors, 2020. * [25] S. Middelberg, T. Sattler, O. Untzelmann, and L. Kobbelt. Scalable 6-DOF Localization on Mobile Devices. In ECCV, 2014. * [26] Tony Ng, Hyo Jin Kim, Vincent T. Lee, Daniel DeTone, Tsun-Yi Yang, Tianwei Shen, Eddy Ilg, Vassileios Balntas, Krystian Mikolajczyk, and Chris Sweeney. Ninjadesc: Content-concealing visual descriptors via adversarial learning. CVPR, 2022. * [27] Vojtech Panek, Zuzana Kukelova, and Torsten Sattler. MeshLoc: Mesh-Based Visual Localization. In ECCV, 2022. * [28] Francesco Pittaluga, Sanjeev J Koppal, Sing Bing Kang, and Sudipta N Sinha. Revealing scenes by inverting structure from motion reconstructions. In CVPR, pages 145–154, 2019. * [29] Tilman Reinhardt. Using Global Localization to Improve Navigation, 2019. 
* [30] Jerome Revaud, Philippe Weinzaepfel, César Roberto de Souza, and Martin Humenberger. R2D2: repeatable and reliable detector and descriptor. In NeurIPS, 2019. * [31] Paul-Edouard Sarlin, Cesar Cadena, Roland Siegwart, and Marcin Dymczyk. From coarse to fine: Robust hierarchical localization at large scale. In CVPR, 2019. * [32] Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperGlue: Learning feature matching with graph neural networks. In CVPR, 2020. * [33] T. Sattler, B. Leibe, and L. Kobbelt. Efficient & Effective Prioritized Matching for Large-Scale Image-Based Localization. PAMI, 39(9):1744–1756, 2017. * [34] Johannes L. Schönberger and Jan-Michael Frahm. Structure-From-Motion Revisited. In CVPR, June 2016. * [35] Johannes Lutz Schönberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise View Selection for Unstructured Multi-View Stereo. In European Conference on Computer Vision (ECCV), 2016. * [36] Mikiya Shibuya, Shinya Sumikura, and Ken Sakurada. Privacy Preserving Visual SLAM. In ECCV, 2020. * [37] Mikiya Shibuya, Shinya Sumikura, and Ken Sakurada. Privacy preserving visual SLAM. In (ECCV), 2020. * [38] Jamie Shotton, Ben Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, and Andrew Fitzgibbon. Scene Coordinate Regression Forests for Camera Relocalization in RGB-D Images. In CVPR, 2013. * [39] Tory Smith. Niantic lightship. 2022\. * [40] Zhenbo Song, Wayne Chen, Dylan Campbell, and Hongdong Li. Deep Novel View Synthesis from Colored 3D Point Clouds. In ECCV, 2020. * [41] Pablo Speciale, Sing Bing Kang, Marc Pollefeys, Johannes Schönberger, and Sudipta Sinha. Privacy preserving image-based localization. In CVPR. IEEE, June 2019. * [42] Pablo Speciale, Johannes L. Schonberger, Sudipta N. Sinha, and Marc Pollefeys. Privacy Preserving Image Queries for Camera Localization. In The IEEE International Conference on Computer Vision (ICCV), 2019\. * [43] Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. LoFTR: Detector-free local feature matching with transformers. CVPR, 2021. * [44] Carl Toft, Will Maddern, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys, Josef Sivic, Tomas Pajdla, Fredrik Kahl, and Torsten Sattler. Long-term visual localization revisited. PAMI, 2020. * [45] J. Valentin, A. Dai, M. Niessner, P. Kohli, P. Torr, S. Izadi, and C. Keskin. Learning to Navigate the Energy Landscape. In International Conference on 3D Vision (3DV), 2016. * [46] Jonathan Ventura, Clemens Arth, Gerhard Reitmayr, and Dieter Schmalstieg. Global Localization from Monocular SLAM on a Mobile Phone. IEEE Transactions on Visualization and Computer Graphics, 20(4):531–539, 2014. * [47] Johanna Wald, Torsten Sattler, Stuart Golodetz, Tommaso Cavallari, and Federico Tombari. Beyond controlled environments: 3d camera re-localization in changing indoor scenes. In European Conference on Computer Vision (ECCV), 2020. * [48] Qing Wang, Jiaming Zhang, Kailun Yang, Kunyu Peng, and Rainer Stiefelhagen. Matchformer: Interleaving attention in transformers for feature matching, 2022. * [49] P. Weinzaepfel, H. Jégou, and P. Pérez. Reconstructing an image from its local descriptors. In CVPR, 2011. * [50] A. Wendel, A. Irschara, and H. Bischof. Natural landmark-based monocular localization for mavs. In ICRA, 2011. * [51] Luwei Yang, Rakesh Shrestha, Wenbo Li, Shuaicheng Liu, Guofeng Zhang, Zhaopeng Cui, and Ping Tan. Scenesqueezer: Learning to compress scene for camera relocalization. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8259–8268, June 2022.
* [52] Qunjie Zhou, Sérgio Agostinho, Aljoša Ošep, and Laura Leal-Taixé. Is geometry enough for matching in visual localization? In ECCV, 2022.
* [53] Qunjie Zhou, Torsten Sattler, and Laura Leal-Taixe. Patch2pix: Epipolar-guided pixel-level correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

In Section A, we present additional qualitative results for alignments of objects from the IKEA-Objects set in all scenes from the IKEA-Scenes dataset. Section B presents qualitative results for object alignments in some of the RIO10 scenes, corresponding to the quantitative results shown in Figure 6 of the main paper.

## Appendix A Qualitative results - IKEA-Scenes and IKEA-Objects

Qualitative alignment results. In Figures 7-27, we present alignment results for four selected objects present in each scene from the IKEA-Scenes dataset. We include results for poses obtained by using 1) Superpoint [10] features with Superglue [32]-based matching and 2) R2D2 [30] features with Nearest Neighbor matching within the HLoc [31, 32] pipeline. We selected objects of varying sizes, shapes, categories, textures, _etc_., so as to show the feasibility of the attack. As can be seen, it is often possible to quite accurately place objects in a scene based on camera poses estimated by a visual localization system.

Camera poses. The same figures also show the set of camera poses returned by the server (blue) and the subset of poses (green) selected as inliers by our alignment method (see Algorithm 1 and Section 3.1 in the main paper). Note that these poses are obtained by localizing a sequence of query images sampled from a video. Hence, temporally close frames can be expected to show spatial coherence in their pose estimates. However, due to the difference in appearance between objects that are present in the scenes and the objects that are used for the attack, as well as due to viewpoint changes, there can be many outlier poses. Still, Algorithm 1 (of the main paper) is able to identify subsets of camera poses that allow the 3D models of the objects to be positioned appropriately, demonstrating the robustness of the approach.

Failure cases. For each scene, failure cases of the alignment step are highlighted using red colored boxes. Alignments that position the attacking object either too far from the corresponding object in the scene or with an "up-direction" very different from that of the corresponding object are considered failure cases. Only visual inspection has been used to decide whether a case is considered a success or failure. As can be seen from the visualizations, the 3D models of the attacking objects that result in failures are often quite different (in terms of appearance) from the corresponding objects in the scene. Some failure cases, such as the Sofa Linanas in Figure 21, can be attributed to this difference, while many other difficult cases, such as the Sofa Soderhamn in Figure 18 and the Stool Kyrre in Figure 21, nonetheless succeed. Another reason for failure is a low number of matches from texture-less or weakly textured objects. The Chair Odger in Figure 18 and the Chair Froset in Figure 9 (using Superpoint features [10] with the Superglue matcher [32]) are examples of such cases.
Scene02, Scene04, and Scene06, shown in Figures 10-12, 16-18, and 22-24, respectively, are complex rooms (_e.g_., an open-concept kitchen in Scene04) composed of very similar-looking objects, and hence challenging cases for such an attack. This results in more failure cases. Still, as shown in the figures, the attack succeeds in many cases, which shows its feasibility.

## Appendix B RIO10 example alignment results

In Section 6 of the main paper, we discuss a potential strategy to defend against the attack introduced in our work. Yet, Figure 6 of the main paper shows that this strategy causes the localization process to not only reject malicious queries, but also to reject genuine query images. In Figure 6 of the main paper, we consider 3 different objects attacking 3 different scenes of the RIO10 dataset. To visualize these scenarios, Figure 28 shows qualitative results for object alignments in scenes corresponding to those plots. These results further emphasize that the attack does not require images of the exact same objects as present in the scene, but can also be carried out using images of similar instances from the same class of objects. Furthermore, these alignments also do not change significantly when the HLoc [31]-based server uses stricter inlier thresholds (5, 2, or 1 pixel, as compared to the default 12 pixels) for the RANSAC-based localization process. This counters another simple defense strategy of using stricter inlier thresholds in such a localization pipeline to avoid localization of malicious queries. Figure 29 shows the qualitative results of object alignment for varying inlier thresholds. As can be seen, the alignment does not get worse (in fact, it improves) when using stricter thresholds for the three cases considered here. Note that for these examples, although the number of inliers decreases as the threshold becomes stricter, the pose solutions with the smallest sum of reprojection errors are still able to position the attacking object appropriately. A more rigorous evaluation of this defense would certainly be more insightful, but it would require an appropriate quantitative measure of alignment quality; this is left as future work.

## Appendix C Additional results for object presence classification

For the task of object presence classification, we present precision and recall results for each of the objects in IKEA-Objects in Table 2.

Figure 7: Scene01 of IKEA-Scenes with selected objects in focus.
Figure 8: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene01 of IKEA-Scenes.
Figure 9: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene01 of IKEA-Scenes. Failure case: The aligned model for Chair Froset when using Superpoint+Superglue is a bit far from the actual object and also oriented incorrectly.
Figure 10: Scene02 of IKEA-Scenes with selected objects in focus.
Figure 11: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene02 of IKEA-Scenes. Failure case: The aligned model for Table Lisabo when using Superpoint+Superglue is a bit far from the actual object and also oriented incorrectly.
Figure 12: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene02 of IKEA-Scenes. Failure cases: The aligned models of Lamp Misterhult when using R2D2+NN or Superpoint+Superglue are very far from the actual object.
This can be attributed to the similar wooden appearance of the lamp and several wooden objects in the scene, which leads to incorrect matches. Figure 13: Scene03 of IKEA-Scenes with selected objects in focus. Figure 14: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene03 of IKEA-Scenes. Failure case: The aligned model for Bed table Klipsk when using Superpoint+Superglue is close to the actual object, but incorrectly oriented. Figure 15: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene03 of IKEA-Scenes. Failure case: The aligned model for Chair Linneback when using R2D2+NN is close to the actual object, but incorrectly oriented. Figure 16: Scene04 of IKEA-Scenes with selected objects in focus. Figure 17: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene04 of IKEA-Scenes. Figure 18: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene04 of IKEA-Scenes. Failure case: The aligned model for Chair Odger when using Superpoint+Superglue is close to the actual object, but incorrectly oriented. Figure 19: Scene05 of IKEA-Scenes with selected objects in focus. Figure 20: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene05 of IKEA-Scenes. Figure 21: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene05 of IKEA-Scenes. Failure cases: The aligned model for Sofa Linanas when using Superpoint+Superglue or R2D2+NN is incorrectly oriented and also far from the actual object. Figure 22: Scene06 of IKEA-Scenes with selected objects in focus. Figure 23: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene06 of IKEA-Scenes. Figure 24: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene06 of IKEA-Scenes. Failure cases: The aligned model for Chair Strandmon when using Superpoint+Superglue or R2D2+NN is incorrectly aligned and also far from the actual object. Figure 25: Scene07 of IKEA-Scenes with selected objects in focus. Figure 26: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene07 of IKEA-Scenes. Failure case: The aligned model for Organizer Tjena when using Superpoint+Superglue is far from the actual object and also incorrectly oriented. Figure 27: Qualitative results for aligning corresponding objects from IKEA-Objects using poses for localizing them in Scene07 of IKEA-Scenes. Figure 28: Alignment results for querying scenes from the RIO10 dataset [47] with objects from the Office-Objects and IKEA-Objects datasets (_cf_. Fig. 6 in the main paper for quantitative results). The textured mesh corresponds to the scene and the point cloud corresponds to the 3D model of the object used for the attack. The alignment was produced using the poses obtained from the server, using Superpoint+Superglue for matching. As can be seen, it is possible to position the objects with reasonable accuracy, despite differences in appearance and geometry. Figure 29: Impact of changing the inlier threshold for RANSAC based localization on alignment - qualitative results. The final aligned object is not impacted drastically in most cases by a stricter RANSAC threshold. 
In fact, in several cases, it results in a better alignment of the attacking object with the corresponding object in the scene. The textured mesh corresponds to the scene and the point cloud corresponds to the 3D model of the object used for the attack. The alignment was produced using the poses obtained from the server, using Superpoint+Superglue for matching Object Name | Superpoint + Superglue | R2D2 + NN ---|---|--- $10^{\circ}$,0.25m | $30^{\circ}$, 0.5m | $60^{\circ}$, 2m | $10^{\circ}$,0.25m | $30^{\circ}$, 0.5m | $60^{\circ}$, 2m P | R | P | R | P | R | P | R | P | R | P | R bookshelf | 1 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 0.5 chair_agam | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 chair_froset | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 chair_gaming | 0.33 | 1 | 0.5 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0.5 | 1 chair_linneback | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 chair_odger | 0.33 | 0.5 | 0.33 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 1 | 0.5 | 0.5 | 0.5 chair_poang_small | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 chair_storsele | 0.33 | 1 | 0.33 | 1 | 1 | 1 | 0.5 | 1 | 0.25 | 1 | 0.5 | 1 chair_strandmon | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 chair_vedbo | 1 | 1 | 1 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 cupboard_hauga | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 cupboard_kallax | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0.25 | 1 cycle | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 klipsk_bed_table | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 lamp_ceiling_agunarryd | 1 | 1 | 1 | 1 | 1 | 1 | 0.33 | 1 | 0 | 0 | 0.33 | 1 lamp_ceiling_appleviken | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 lamp_ceiling_mojna | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 lamp_ceiling_nymane | 0 | 0 | 1 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 0 | 0 | 0.25 | 0.5 lamp_ceiling_ranarp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 lamp_evedal | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 lamp_fancy | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 lamp_navlinge | 0.33 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.25 | 0.5 | 0.33 | 0.5 | 0.25 | 0.5 lamp_star | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 lamp_table_misterhult | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 lamp_table_nymane | 0.5 | 0.5 | 1 | 0.5 | 0.33 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 lamp_table_tertial | 0 | 0 | 0.5 | 1 | 1 | 1 | 0.5 | 1 | 0 | 0 | 0.33 | 1 organizer_kvarnik | 0.5 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0.5 | 1 oven | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 0.5 | 0.5 sofa_landskrona | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 sofa_linanas | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 sofa_soderhamn | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 stool_kyrre | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 stool_marius | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 strainer | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 table_corner_gladom | 1 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 table_lisabo_square | 1 | 0.5 | 0.5 | 0.5 | 0.25 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 vas_gradvis | 0.25 | 1 | 1 | 1 | 0.5 | 1 | 0 | 0 | 0 | 0 | 0.5 | 1 wall_hanging_crescent | 0.33 | 0.5 | 0.25 | 0.5 | 0.33 | 0.5 | 0 | 0 | 1 | 0.5 | 0.5 | 0.5 Table 2: Object wise precision and recall results for IKEA-Objects when attacking IKEA-Scenes
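As a reference for how the numbers in Tables 1 and 2 can be produced, the following is a minimal sketch of the presence decision rule from Sec. 5.2 together with a per-object precision/recall computation; the function names and data layout are illustrative assumptions, not part of the released code.

```python
def predict_presence(eps, eps_present, eps_absent):
    """Decide presence from the inlier ratio eps returned by Alg. 1.

    eps_present may be None when no training scene contains the object; in that
    case we fall back to the simple threshold eps > eps_absent described in Sec. 5.2.
    """
    if eps_present is None:
        return eps > eps_absent
    return abs(eps - eps_present) < abs(eps - eps_absent)

def precision_recall(predicted, actual):
    """predicted, actual: dicts mapping (object, scene) pairs to booleans."""
    tp = sum(predicted[k] and actual[k] for k in predicted)
    fp = sum(predicted[k] and not actual[k] for k in predicted)
    fn = sum(not predicted[k] and actual[k] for k in predicted)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```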
# The cosmology dependence of galaxy clustering and lensing from a hybrid $N$-body–perturbation theory model Nickolas Kokron1,2 , Joseph DeRose3,4, Shi-Fan Chen3, Martin White3,5, Risa H. Wechsler1,2 1 Kavli Institute for Particle Astrophysics and Cosmology and Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305, USA 2 Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA 3 Department of Physics, University of California, Berkeley, 366 LeConte Hall, Berkeley, CA 94720, USA 4 Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064, USA 5 Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 93720, USA Contact e-mail<EMAIL_ADDRESS> ###### Abstract We implement a model for the two-point statistics of biased tracers that combines dark matter dynamics from $N$-body simulations with an analytic Lagrangian bias expansion. Using Aemulus, a suite of $N$-body simulations built for emulation of cosmological observables, we emulate the cosmology dependence of these nonlinear spectra from redshifts $z=0$ to $z=2$. We quantify the accuracy of our emulation procedure, which is sub-per cent at $k=1\,h{\rm Mpc}^{-1}$ for the redshifts probed by upcoming surveys and improves at higher redshifts. We demonstrate its ability to describe the statistics of complex tracer samples, including those with assembly bias and baryonic effects, reliably fitting the clustering and lensing statistics of such samples at redshift $z\simeq 0.4$ to scales of $k_{\rm max}\approx 0.6\,h\mathrm{Mpc}^{-1}$. We show that the emulator can be used for unbiased cosmological parameter inference in simulated joint clustering and galaxy–galaxy lensing analyses with data drawn from an independent $N$-body simulation. These results indicate that our emulator is a promising tool that can be readily applied to the analysis of current and upcoming datasets from galaxy surveys. ###### keywords: cosmology: theory – large-scale structure of Universe – methods: statistical – methods: computational ††pubyear: 2021††pagerange: The cosmology dependence of galaxy clustering and lensing from a hybrid $N$-body–perturbation theory model–LABEL:lastpage ## 1 Introduction We are entering a golden era for studying the large-scale structure of the Universe. Over the next decade, ambitious imaging surveys will map out large swathes of the sky to unprecedented depths, imaging billions of galaxies and their shapes (Ivezić et al., 2019; Laureijs et al., 2011; Doré et al., 2015, 2019), enabling studies of weak gravitational lensing by the intervening distribution of matter (Bartelmann & Schneider, 2001; Mandelbaum, 2018). Weak lensing has only recently begun to contribute competitive cosmological constraints on dark matter and dark energy (Abbott et al., 2018; Heymans et al., 2020), but is one of the most promising future directions to pursue. Meanwhile, spectroscopic surveys will observe tens of millions of radial positions of galaxies (Takada et al., 2014; Aghamousa et al., 2016), enabling unparalleled understanding of the spatial distribution of galaxies in our Universe. The cross-correlation between positions and lensing, galaxy–galaxy lensing, is and will continue to be a key driver of cosmological constraints from galaxy surveys. The quality and quantity of these upcoming datasets imposes a significant challenge in their analysis. 
Even now, models for summary statistics such as correlation functions and power spectra are inadequate across the full range of scales probed by such surveys (Krause et al., 2017; Nishimichi et al., 2020). Either a large amount of the data must be discarded, or mitigation schemes must be developed to prevent contamination from scales where the models are insufficiently calibrated or constrained (MacCrann et al., 2020; Park et al., 2020). Models for clustering and lensing must be substantially improved if we are to extract the maximal information about the Universe we live in, from surveys that are already ongoing or planned. To date, two separate approaches have been developed to build models for the observables of cosmic surveys: analytically, through perturbative techniques, or numerically, using non-linear $N$-body simulations. Perturbation theory provides a systematic, analytic way to compute $N$-point summary statistics to systematically higher precision and smaller scales (Bernardeau et al., 2002). Below the nonlinear scale the effects of these nonlinearities can be tamed and parametrized within the framework of effective theories (Baumann et al., 2012; Carrasco et al., 2012; Vlah et al., 2015). This increased precision, however, comes at the cost of very large inaccuracies beyond the nonlinear scale at which the self-gravitating dark matter fluid ceases to be perturbative (Blas et al., 2014; McQuinn & White, 2016). In addition, perturbative frameworks provide a rigorous, first- principles approach to include physics beyond the standard $\Lambda$CDM model in large-scale structure observables such as neutrinos, baryonic effects and more exotic early-universe scenarios (Lewandowski et al., 2015; Senatore & Zaldarriaga, 2017; Aviles & Banerjee, 2020; Chen et al., 2020c; Laguë et al., 2020; Ivanov et al., 2020; 2020arXiv200612420D; Aviles et al., 2020). Understanding the domain of applicability of perturbation theory is still an active field of research (Baldauf et al., 2016b; Nishimichi et al., 2020; Chen et al., 2020a). The other approach, simulation-based modelling, involves numerically solving the equations of motion for an initial distribution of matter (Hockney & Eastwood, 1988; Bagla, 2005; Kuhlen et al., 2012). The resulting catalogs can be analysed in a way analogous to data to obtain predictions of cosmological observables across a wide range of scales at the cosmological parameters of the simulation. However, a limiting factor in simulation-based analyses is that $N$-body simulations require significant computational resources for a single realization. Thus, standard inference procedures such as Markov Chain Monte Carlo (MCMC) become prohibitively expensive when using models derived from simulations. In order to ameliorate the issues with simulation-based inference, recent developments in statistical learning have popularized so-called emulators as models (Heitmann et al., 2010, 2009; Lawrence et al., 2010). Emulators combine a set of simulations that representatively sample cosmological parameter space with sophisticated regression techniques to ‘fill in the blanks’ across parameter space. Once trained, an emulator provides rapid evaluations of a model which can be seamlessly integrated in analysis pipelines. For example, recent emulators for the nonlinear matter power spectrum (Knabenhans et al., 2019) have runtimes with negligible overhead compared to the underlying Boltzmann codes used for linear predictions. 
While galaxy surveys observe luminous tracers of the underlying dark matter density distribution, most suites of $N$-body simulations used to construct emulators deal only with the dark matter component. Thus, emulators for galaxy survey observables are presented with the additional challenge of capturing the relationship between the galaxy distribution and the underlying dark matter. Understanding the details of this relationship, known as the galaxy–halo connection, is an active field of research (see e.g. Wechsler & Tinker, 2018, for a recent review). Even for well-studied samples of galaxies, there are no consensus models to describe this relationship. For any given model of the galaxy–halo connection, an entirely new emulator has to be trained (Kwan et al., 2015; Wibking et al., 2019; Zhai et al., 2019; McLaughlin et al., 2021). Emulation of models with a large number of free parameters is also a challenging task, with techniques such as Gaussian processes scaling as $\mathcal{O}(N^{3})$ with $N$ training points and a substantially larger set of training data being required as one increases the dimensionality of the model. The simplest forms of galaxy–halo connections such as halo occupation distributions have five free parameters (Zheng et al., 2005), and it is expected that for more complex selections of galaxy samples the number will grow considerably (Guo et al., 2019; Yuan et al., 2018; Favole et al., 2020; Zu, 2020). In comparison, modern perturbation theory approaches to galaxy clustering operate at the field level via so-called bias expansions, which encode the response of small-scale galaxy physics (e.g. the galaxy–halo connection) to large-scale structure via a series of bias coefficients (see e.g. Desjacques et al. 2018 for a recent review). A key advantage of bias models is that while their dependence on parameters is simple and analytic, they should describe the statistics of a broad range of galaxy (and halo) samples as long as they are formed by processes that respect the symmetries of the underlying processes of structure and galaxy formation, namely rotational and Galilean invariance and the equivalence principle. Indeed, it was recently shown that the bias expansion can be directly derived by generating all possible dynamical terms and eliminating combinations not allowed by these symmetries (Fujita & Vlah, 2020). The challenges in using bias models come, instead, from the aforementioned limitations of perturbation theory models themselves. Similarly to perturbation theories for the clustering of dark matter, bias models are not expected to hold across all scales. Instead, they are expected to be valid at scales larger than or comparable to the Lagrangian size of haloes. This regime is where one is insensitive to the internal structure of haloes (McDonald & Roy, 2009; Fujita et al., 2020; Lazeyras & Schmidt, 2019; Vlah et al., 2016). It is worth noting, however, that the nonlinear and halo scales are not identical and scale differently with redshift — at higher redshifts perturbative models may be more limited by the larger Lagrangian radii of (typically more luminous or massive) samples than dynamical nonlinearities, and vice versa at lower redshifts. This distinction is particularly apparent in the Lagrangian basis (Matsubara, 2008; Vlah et al., 2016), in which galaxy clustering due to dynamics and biasing are explicitly disentangled. Recently Modi et al. 
(2020) suggested a way to combine the generality of bias expansion-based models with $N$-body simulations in a manner that is particularly suited for emulation, especially in the regime where dynamics become nonlinear on scales larger than the halo scales of interest. Since higher-order Lagrangian biases have been found in simulations to be small for low and intermediate mass haloes (Abidi & Baldauf, 2018; Lazeyras & Schmidt, 2018), this scheme keeps the dynamical nonlinearities from $N$-body simulations to all orders while including Lagrangian bias only up to second order. In the remainder of this work we concern ourselves with the construction of an emulator for the halo–halo and halo–matter correlations with analytic dependence on bias parameters, extending the method presented in Modi et al. (2020) to a generic cosmological parameter dependence which can then be readily used for cosmological clustering analyses. The structure is as follows: in section 2 we briefly review the Lagrangian description of galaxy bias. In section 3 we describe the hybrid technique which combines displacements obtained from $N$-body simulations with Lagrangian bias. Section 4 describes the Aemulus suite of simulations (DeRose et al., 2019b), which we use to build the training data for the emulator. The measurements of the ‘basis spectra’ of the hybrid Lagrangian bias model, and their emulation, are outlined in section 5. Section 6 concerns itself with assessing the performance of the emulator. Specifically, subsection 6.1 addresses the scale- and redshift-dependent error for each of the ten basis functions that span the model. Subsection 6.2 assesses how well the model describes the statistics of complicated galaxy samples, including those possessing concentration and spin secondary biases, as well as the effect of baryons at small scales. Our final test, subsection 6.3, pits the emulator against a series of increasingly complex simulated likelihood analyses, in order to assess potential biases in inferred cosmological parameters using our emulator and their origin.

## 2 Lagrangian bias expansion

In the Lagrangian approach to bias formulated in Matsubara (2008), the observed clustering of galaxies is obtained through first weighting fluid elements by a local functional $F[\delta(\textbf{q})]$ at their initial (Lagrangian) positions $\textbf{q}$ and then advecting these weights to their observed positions via fluid trajectories $\textbf{x}=\textbf{q}+\mathbf{\Psi}$, where $\mathbf{\Psi}(\textbf{q},t)$ is the Lagrangian displacement. As discussed in the introduction, the bias functional $F$ is obtained by summing up all scalar terms allowed by Galilean invariance and the equivalence principle up to a given order in the initial conditions; up to quadratic order we have (Vlah et al., 2016)

$F(\bm{q})\approx 1+b_{1}\delta_{L}(\bm{q})+\frac{b_{2}}{2!}\left(\delta_{L}^{2}(\bm{q})-\langle\delta_{L}^{2}\rangle\right)+b_{s^{2}}\left(s_{L}^{2}(\bm{q})-\langle s_{L}^{2}\rangle\right)+b_{\nabla^{2}}\nabla^{2}\delta_{L}(\bm{q})+\epsilon(\bm{q}),$ (1)

where $s^{2}=s_{ij}s_{ij}$ is the square of the tidal shear tensor $s_{ij}$. The bias expansion is local above the halo scale and the initial fields in the above functional are to be interpreted as smoothed; any ‘nonlocal’ effects as we approach this scale, as well as dependences on smoothing, are parametrized to lowest order by the derivative bias $b_{\nabla^{2}}$.
Modes below the halo scale, uncorrelated with the large scales of interest, are represented by the stochastic noise $\epsilon$. From the weighting $F(\textbf{q})$, the observed clustering is given via number conservation to be

$1+\delta_{\alpha}(\textbf{x},z)=\int d^{3}q\,\delta^{D}(\textbf{x}-\textbf{q}-\mathbf{\Psi}(\textbf{q},z))F(\bm{q}),$ (2)

where the Lagrangian displacement $\mathbf{\Psi}$ denotes the movement of the fluid element relative to its initial position. At any given order, the Lagrangian galaxy overdensity above can be mapped onto e.g. the Eulerian basis of McDonald & Roy (2009) by Taylor expanding $\mathbf{\Psi}$. However, keeping the nonlinear mapping in the integral above will generate a tower of Eulerian bias parameters even if only a few of the Lagrangian bias parameters are nonzero (see e.g. Abidi & Baldauf, 2018). We will treat the bias values, $b_{\alpha}$, as free parameters. Ab initio predictions of the $b_{\alpha}$ for general tracer populations are a harder problem, and an active area of current research.

## 3 Lagrangian bias and simulations

Recently, it has been proposed that one can combine the fully resolved dark matter dynamics of an $N$-body simulation with the analytic perturbative bias techniques we outlined in the previous section (Modi et al., 2020). The use of dynamics from an $N$-body simulation means this hybrid model circumvents the need for perturbative calculations related to the equations of motion of the dark matter fluid itself. Additionally, $N$-body simulations are relatively inexpensive (compared to hydrodynamical simulations) and well controlled, and well-defined limits for observables exist so that convergence of measured quantities can be assessed systematically (e.g. Power et al., 2016; Mansfield & Avestruz, 2020; Joyce et al., 2020). As such, this hybrid model combines two techniques with solid theoretical foundations, ensuring robustness of its predictions. We will briefly describe the technique and how one implements it below, but refer the reader to Modi et al. (2020) for a more complete discussion. When creating initial conditions of an $N$-body simulation, one starts from a noiseless linear cosmological density field, $\delta_{L}(\bm{x})$. Traditionally, this density is only used to sample initial displacements which impart a cosmological signal on a set of _pre_-initial conditions. First-order displacements using the Zeldovich approximation,

$\Psi(\bm{q})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\bm{k}\cdot\bm{q}}\frac{i\bm{k}}{k^{2}}\delta_{L}(\bm{k}),$ (3)

result in so-called 1LPT initial conditions. However, higher order initial conditions (Crocce et al., 2006; Garrison et al., 2016; Michaux et al., 2020) are now ubiquitous in modern simulations. The noiseless initial density field can also be used to construct the different component fields of the Lagrangian bias expansion of the initial conditions:

$\mathcal{O}_{L}\supset\{1,\delta_{L},\delta_{L}^{2},s_{L}^{2},\nabla^{2}\delta_{L},\cdots\},$ (4)

where the subscript $L$ indicates these are the Lagrangian fields. Advecting $N$-body particles weighted by $\mathcal{O}_{L}$ to a specific snapshot results in bias-weighted fields,

$\delta_{\mathcal{O}_{L}}(\textbf{x})\equiv\int d^{3}\textbf{q}\ \mathcal{O}_{L}(\textbf{q})\ \delta_{D}(\textbf{x}-\textbf{q}-\Psi(\textbf{q})),$ (5)

which trace the non-linear dark matter distribution. In Fig. 1 (middle panel) we show an example of the different bias-weighted fields produced by this procedure.
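To make the construction of Eqs. (4)-(5) concrete, the sketch below builds the quadratic Lagrangian operator fields from a gridded noiseless linear density field using FFTs and then deposits the operator-weighted particles on a late-time mesh. It uses nearest-grid-point assignment and illustrative function names, so it should be read as a schematic of the procedure rather than the exact pipeline used in this work.

```python
import numpy as np

def lagrangian_operators(delta_L, box_size):
    """Build {delta, delta^2 - <delta^2>, s^2 - <s^2>, nabla^2 delta} on the Lagrangian grid."""
    n = delta_L.shape[0]                               # assumes a cubic n^3 grid
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2_safe = np.where(k2 > 0, k2, 1.0)                # avoid dividing by zero at k = 0
    dk = np.fft.fftn(delta_L)

    # Tidal shear s_ij(k) = (k_i k_j / k^2 - delta_ij / 3) delta_L(k); s^2 = s_ij s_ij.
    s2 = np.zeros_like(delta_L)
    kvec = (kx, ky, kz)
    for i in range(3):
        for j in range(3):
            sij = np.fft.ifftn((kvec[i] * kvec[j] / k2_safe - (i == j) / 3.0) * dk).real
            s2 += sij**2

    nabla2 = np.fft.ifftn(-k2 * dk).real               # nabla^2 delta_L in configuration space
    d2 = delta_L**2
    return {"delta": delta_L, "delta2": d2 - d2.mean(),
            "s2": s2 - s2.mean(), "nabla2delta": nabla2}

def paint_weighted(weights, positions, lagrangian_cells, n_mesh, box_size):
    """Eq. (5): deposit particles, weighted by a Lagrangian operator, on an Eulerian grid (NGP).

    weights: operator values on the Lagrangian grid; lagrangian_cells: flattened initial
    grid index of each particle; positions: (N, 3) late-time particle positions.
    """
    field = np.zeros((n_mesh,) * 3)
    cells = np.floor(positions / box_size * n_mesh).astype(int) % n_mesh
    w = weights.reshape(-1)[lagrangian_cells]
    np.add.at(field, (cells[:, 0], cells[:, 1], cells[:, 2]), w)
    return field
```

Cross-correlating fields painted with different operator weights then yields the basis spectra $P_{XY}(k)$ that enter the power spectrum combination described next.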
These fields are similar to the ‘Eulerian-shifted’ operator basis of Schmittfull et al. (2019). A notable difference is that in our case the displacements are fully resummed, while the Eulerian-shifted basis of Schmittfull et al. (2019) only resums the Zeldovich displacement (1LPT). Higher order displacements ($n$LPT) are Taylor-expanded up to third order as part of their bias expansion. The difference is because our aim in this paper is to attempt to model scales beyond the reach of standard one-loop perturbation theory, whereas the goal of Schmittfull et al. (2019) was to validate one-loop perturbation theory at the field level (see also Taruya et al. 2018). The power spectrum of any combination of tracers can then generically be written as ($X,\,Y\equiv{\delta_{\mathcal{O}_{L}}}$) $P^{ab}(k)=\sum_{X,Y}b_{X}^{a}b^{b}_{Y}P_{XY}(k)+P_{SN},$ (6) where $P_{XY}$ is the cross-power spectrum at a fixed cosmology between the different fields at a given redshift. For example, the unweighted spectrum, $P_{11}$, is the non-linear matter power spectrum. This Lagrangian bias model can handle cross-correlations of arbitrary tracers. However, we also note that given a set of bias parameters for a single tracer sample $\alpha$, $\\{b_{X}^{\alpha},\,X\in\mathcal{O}_{L}\\}$, one can also self-consistently predict the tracer–matter cross-correlation by taking the second sample to have $b_{Y}^{m}=0$ except for $Y=1$. In this case there are only $P_{X1}$ terms. The tracer–matter cross-correlation is the primary cosmic contribution to the signal of galaxy–galaxy lensing, one of the key cosmological observables of current and upcoming galaxy surveys (Prat et al., 2018; Yoo et al., 2006; Wibking et al., 2020; Mandelbaum, 2018). The tracer–matter cross-correlation is also the primary contribution to the cross- correlation between galaxy positions and lensing of the cosmic microwave background (CMB), one of the most powerful and complementary statistics that is measured between galaxy and CMB surveys (Bianchini et al., 2015; Pullen et al., 2016; DiPompeo et al., 2017; Peacock & Bilicki, 2018; Omori et al., 2019; Singh et al., 2019; Krolewski et al., 2020). For notational convenience, throughout the remainder of this paper we will refer to the tracer–tracer correlation as $P^{hh}(k)$ and the tracer–matter correlation as $P^{hm}(k)$. This hybrid approach of combining $N$-body simulations with Lagrangian bias can fit the power spectrum of tracers to significantly smaller scales than standard Lagrangian perturbation theory (Modi et al., 2020). While the dependence on the Lagrangian bias parameters $b_{X}$ is analytic in this model, one still requires an $N$-body simulation to measure the basis spectra. An $N$-body simulation at a given point of cosmological parameter space then provides a measurement of the basis spectra at that point. With $N$-body simulations that sufficiently sample parameter space one can estimate the cosmological dependence of these basis functions across the entire space. This is precisely the goal of this work. Figure 1: Visualization of the methodology implemented in this paper, from the advection process to the measurements of the basis spectra. Our emulation scheme approximates the cosmology and redshift dependence of each spectrum in the ten panels in the lower part of the figure. The top panel has each Lagrangian field scaled to have equal variance, in order to highlight the qualitative differences between the fields. 
The middle panel shows the bias-weighted fields that result from the advection process. Different weights highlight qualitatively different aspects of the matter density. The cross-spectra of these fields give the spectra shown in the lower panel.

## 4 The Aemulus simulations

In order to properly emulate the cosmology dependence of the basis spectra $P_{XY}(k)$, the underlying suite of $N$-body simulations used for measurements of observables must be constructed carefully. The Aemulus suite of $N$-body simulations (DeRose et al., 2019b) has been purpose-built for precise emulation of cosmological observables measured in galaxy surveys. The suite is composed of a set of 75 simulations that span 47 points in the $w$CDM parameter space allowed by a combination of modern CMB, BAO and type Ia supernova experiments. Each Aemulus box has a size $L_{\rm box}=1050\,h^{-1}$Mpc with $N=1400^{3}$ particles, corresponding to a mass resolution of $3.51\times 10^{10}\left(\frac{\Omega_{m}}{0.3}\right)h^{-1}M_{\odot}$. The Aemulus simulations have undergone rigorous convergence and validation tests for several observables. There are 10 particle snapshots spanning $0<z<3$, allowing for measurements of the redshift dependence of the non-linear basis spectra. Aemulus’ halo mass function emulator has sufficient accuracy to remain valid, for the defined cosmological parameter space, through the Rubin Observatory’s Y1 LSST survey (McClintock et al., 2019), while the galaxy correlation function emulator can predict the clustering of massive galaxy samples, such as those observed by DESI, to within 1 per cent down to scales of $r\approx 1\,h^{-1}$Mpc (Zhai et al., 2019). Thus, Aemulus represents an appropriate setting to construct an emulator for the Lagrangian bias basis spectra described in section 3. The only missing component is that the initial conditions code used in Aemulus, 2LPTIC (Crocce et al., 2012), does not output the noiseless linear density field. We patched the code to read out this field and re-generated the initial conditions.

Figure 2: Ratio of the measured basis spectra compared to LPT predictions for one of the cosmologies in the Aemulus test set. The mean of the five independent boxes in the test set is shown, and the shaded band represents one standard deviation as inferred from the boxes. The dashed vertical line at $k=0.1\,h\,{\rm Mpc}^{-1}$ shows the point where we revert to predictions of LPT. As discussed in the text, we find some small multiplicative differences at large scales for most basis spectra, which are larger for the basis spectra built from higher powers of the density field. This is most likely due to discrepancies in growth factors obtained between linear theory and $N$-body simulations.

## 5 Emulating the basis spectra

### 5.1 Measuring basis spectra

We now describe in detail our implementation of the hybrid Lagrangian biasing scheme described in Section 3. Schematically, the process of obtaining measurements of the basis spectra from an $N$-body box can be broken down into four steps:
1. Compute the Lagrangian bias fields: given the noiseless density field $\delta_{L}$, one constructs the other weight fields $\mathcal{O}_{L}$ by applying the appropriate transformations.
2. Advect particles to a given snapshot: every particle ID can be associated with a grid cell $\{i,j,k\}$ in the fields $\mathcal{O}_{L}$.
Every particle in a snapshot receives a weight $\left(\frac{D(z)}{D(z_{0})}\right)^{n}\times\mathcal{O}_{L}[i,j,k]$, where $\left(\frac{D(z)}{D(z_{0})}\right)$ is the ratio of growth factors between the snapshot and initial conditions, and $n$ is the number of powers of the linear density field that make up $\mathcal{O}_{L}$.
3. Paint the weighted particles to a grid, to form the late-time bias fields.
4. Measure the basis spectra: the painted bias fields are cross-correlated with each other to measure the basis spectra $P_{XY}$ for that given cosmology and redshift.

This procedure imposes some additional storage requirements. While a particle catalog normally has seven entries for every particle, $(ID,\,\bm{x},\bm{v})$, each bias field weight will add an additional entry. Naively saving component weights at every snapshot will lead to a 57 per cent increase in catalog size. However, the time evolution of the weights is determined entirely by the linear growth function and can be computed on the fly. Thus, the fractional increase in catalog size will only be of order $\sim(1/7)(N_{b}/N_{z})$, where $N_{b}$ is the number of bias-weighted fields computed and $N_{z}$ is the number of snapshots used. For the second order basis of $\mathcal{O}=\{1,\delta_{L},\delta_{L}^{2},s_{L}^{2},\nabla^{2}\delta_{L}\}$ this represents a fractional increase in catalog size of 6 per cent. Even if the weights are not stored, all of the steps outlined above can be carried out on the fly when needed.

In Fig. 2 we show a comparison between the predictions of one-loop Lagrangian perturbation theory and the basis spectra averaged across five Aemulus boxes with the same cosmology, from the test suite. For all basis spectra we recover the LPT result at large scales to within a few per cent. While one would expect the agreement at large scales to be exact, it is well known that $N$-body simulations struggle to correctly recover linear growth at large scales (Heitmann et al., 2010; Schneider et al., 2016; Garrison et al., 2016) due to transients from the grid that particles are initialized on, and the discrete nature of the kick-drift-kick operators used in time-stepping. This discrepancy is also present in Aemulus, as can be seen in fig. 13 of DeRose et al. (2019b). The Aemulus simulations have a 1 per cent mismatch in growth at large scales, which is redshift independent at the largest scales. Differences in growth between linear theory and the simulations would then be amplified for the basis spectra built from multiple fields. In Appendix C we explore the $k\to 0$ differences between LPT and our emulator, present prescriptions for enforcing consistency and discuss the small impact they have on parameter inference. At small scales, we see that non-linear structure formation induces significant differences between LPT and the simulations. At the highest redshift shown, $z=2$, the agreement for the three spectra that dominate the signal ($\langle 1,1\rangle$, $\langle 1,\delta\rangle$, and $\langle\delta,\delta\rangle$) is close throughout all scales probed in our simulation. Thus, for the scales under consideration, we find no need to extend the emulator to $z>2$. Above $z=1$, the Aemulus simulations only have snapshots at $z=2$ and $z=3$, and thus the redshift evolution between these snapshots is too poorly sampled for the emulator to achieve our desired performance.
For $z\geq 2$ the emulator reverts to predictions from velocileptors (Chen et al., 2020b), a public code to predict LPT power spectra and correlation functions to one loop order. This agrees quite well with most basis spectra given Fig. 1. When reverting to LPT at $z>2$, our implementation includes an additional free parameter. This parameter corresponds to the $k^{2}$ counterterm for matter that takes into account the effects of small-scale physics not captured by perturbation theory (Vlah et al., 2015). We note that there are no specific impediments to measuring basis spectra, or the emulation scheme adopted, at higher redshifts. Given simulations that are sufficiently well sampled in time, out to the furthest bin one wishes to include, the techniques described here should apply. The LPT predictions shown in Fig. 1 are a limit of a more complete theory that includes redshift-space distortions (Chen et al., 2020a, b) . The agreement between $N$-body simulations and this subset of LPT at large scales implies the bias parameters in the full theory and our hybrid model are equivalent; a set of bias parameters obtained from fitting the emulator to a sample can then be used in tandem with RSD measurements analysed purely with perturbation theory at a slightly more restrictive $k_{\mathrm{max}}$. Since the RSD measurements are done in 3D, rather than projection, one can achieve small measurement errors at more restrictive $k_{\rm max}$ making this combination an efficient one, e.g. for testing general relativity (Alam et al., 2017; Zhang et al., 2020). We note that we omit results for the basis spectra $\langle X,\nabla^{2}\delta\rangle$. The initial weight field $\nabla^{2}\delta_{L}$ has a large amount of power at very small scales, making its Fourier transform unwieldy due to the presence of an explicit smoothing scale of $k\sim L_{\rm grid}^{-1}$. As a result, we find the basis spectra as measured through the advection procedure have a cosmology-dependent amplitude mismatch when compared to LPT predictions at large scales. Therefore we adopt the approximation $\langle X,\nabla^{2}\delta\rangle\approx-k^{2}\langle X,1\rangle$ in the actual emulation scheme. Since these higher derivative bias contributions most closely correspond to the effects of baryonic physics and finite-size effects for haloes, we check that the approximation performs similarly in Section 6.2. Specifically, in Fig. 8 we explicitly show the differences between the measured $P_{1\nabla^{2}}$ and the approximation employed. We also note the approximation lowers the complexity of the emulation scheme, reducing the full set of basis functions at second order to be emulated from 15 to 10. ### 5.2 Principal components of non-linear spectra Once the basis spectra have been measured across all boxes, the emulator is built by adopting a suitable interpolation scheme between the different spectra. While other emulators using the Aemulus simulations have been constructed using Gaussian processes (GPs), we adopt a different approach here, similar to that used in the Euclid emulator (Knabenhans et al., 2019), using a combination of principal component analysis and polynomial chaos expansions (PCE) (Xiu, 2010). We prefer PCE to GP emulation for a few practical reasons. GPs are more difficult to train, requiring explicit choices for kernels and tuning of real valued hyper–parameters. 
Additionally, the run time for evaluating a trained GP scales with the amount of data used for training, while the run time of a PCE model evaluation scales only with the order of the PCE. Furthermore, the polynomial nature of PCEs means that they have fast, analytic gradients, making them easy to integrate with sampling techniques such as Hamiltonian Monte Carlo (Hoffman & Gelman, 2011), although we have not done so in this work. GPs may still be preferred when the model being emulated is highly complex, but, as we show in the following sections, we are able to attain a nearly optimal emulator performance with the simpler and faster PCE scheme.

To begin, we compute one-loop LPT predictions for each basis spectrum at every cosmology and redshift in the Aemulus training design, which we will refer to as $P_{\rm XY}^{\rm LPT}(k,\mathbf{\Omega})$, where $\mathbf{\Omega}$ denotes the cosmology and redshift in question. To do this we make use of the velocileptors code (Chen et al., 2020b). We then compute the ratio of the measured basis spectra from each snapshot, $P_{XY}^{\rm NL}(k,\mathbf{\Omega})$, to the LPT predictions. These ratios are thus consistent with unity at small wavenumbers, and while they deviate significantly from unity at high $k$ they have significantly less dynamic range than the basis spectra. In order to de-noise these ratios, we apply a Savitzky–Golay (Savitzky & Golay, 1964) filter of order three using an 11-point window in $k$. Doing so dramatically reduces the amount of noise in the spectra, and is a simple alternative for reducing noise at high $k$, where techniques such as fixed-amplitude, paired-phase simulations do little to reduce variance (Angulo & Pontzen, 2016; Villaescusa-Navarro et al., 2018; Chuang et al., 2019). As a final preprocessing step, we also take the base-10 logarithm of these smoothed ratios in order to further decrease the dynamic range. This yields the quantity that we emulate, which we call $\Gamma^{XY}(k,\mathbf{\Omega})$,

$\Gamma^{XY}(k,\mathbf{\Omega})\equiv\log_{10}\left(\frac{P_{XY}^{\rm NL}(k,\mathbf{\Omega})}{P_{XY}^{\rm LPT}(k,\mathbf{\Omega})}\right).$ (7)

Figure 3: The first two principal components of the log-ratios between $N$-body and LPT spectra, $\Gamma^{XY}$, for each basis spectrum. The principal components are very smooth compared to the raw basis spectrum measurements from the simulations. Two principal components are sufficient to explain greater than 99 per cent of the variance in all spectra as a function of redshift and cosmology.

After these pre-processing steps, we proceed by constructing a principal component basis for these spectra. At this point we restrict ourselves to $0.1<k<1$ and $0<z<2$; however, we note that in principle there are no issues extending to broader scales and redshifts if the simulations allow for it. Let $\mathbf{X}_{XY}$ be the $N\times M$ array containing $\Gamma^{XY}$, where $N=N_{\rm cosmo}\times N_{z}$, $N_{\rm cosmo}$ is the number of cosmologies in our training set, $N_{z}$ is the number of redshift outputs per cosmology in our training set and $M$ is the number of $k$ values under consideration.
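A schematic of this preprocessing, together with the principal component step defined in Eqs. (8)-(9) just below, might look as follows; it uses scipy's Savitzky-Golay filter and a direct eigendecomposition, and is an illustrative reimplementation rather than the exact training code.

```python
import numpy as np
from scipy.signal import savgol_filter

def make_gamma(p_nl, p_lpt, window=11, polyorder=3):
    """Eq. (7): smoothed log10 ratio of N-body to LPT basis spectra.

    p_nl, p_lpt: arrays of shape (N, M) = (cosmology/redshift samples, k bins).
    """
    ratio = savgol_filter(p_nl / p_lpt, window_length=window, polyorder=polyorder, axis=-1)
    return np.log10(ratio)

def pca_basis(X, n_components=2):
    """Eqs. (8)-(9): leading principal components of X (N x M) and their coefficients."""
    C = X.T @ X                                   # M x M covariance-like matrix
    eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    W = eigvecs[:, order]                         # columns are the leading PCs
    A = X @ W                                     # PC coefficients for each training sample
    return W, A
```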
Then a basis of principal components can be constructed by computing the eigenvectors of the covariance matrix of $\mathbf{X}_{XY}$:

$\mathbf{C}_{XY}=\mathbf{X}_{XY}^{\rm T}\mathbf{X}_{XY}=\mathbf{W}_{XY}\mathbf{\Lambda}_{XY}\mathbf{W}_{XY}^{\rm T},$ (8)

where the rows of $\mathbf{W}_{XY}$ are the eigenvectors, i.e., the principal components, in question and $\mathbf{\Lambda}_{XY}$ is a diagonal matrix of the eigenvalues, which are equal to the variance of the data described by each eigenvector. In all cases, greater than 99 per cent of the variance in each basis spectrum is described by the first two principal components, shown in Figure 3. We thus disregard all other principal components for the duration of this work. Given the results discussed in Section 6, we deem this to be sufficient. Having computed the principal components, we then determine the projection of them onto each measured $\Gamma^{XY}$ via:

$\mathbf{A}_{XY}=\mathbf{X}_{XY}\mathbf{W}_{XY},$ (9)

where $\mathbf{A}_{XY}$ is an $N\times 2$ matrix containing the principal component coefficients $\alpha^{XY}_{i}(\mathbf{\Omega})$ for each cosmology and redshift in our training set. It is the dependence of these coefficients on cosmology and redshift that we build a surrogate model for using polynomial chaos expansions (Wiener, 1938).

### 5.3 Emulating cosmology dependence with polynomial chaos

With our principal components in hand, every point in cosmological parameter space sampled by the training set has coefficients for the approximation

$\Gamma^{XY}(k,\mathbf{\Omega})\approx\sum_{i}\alpha_{i}^{XY}(\mathbf{\Omega})\mathrm{PC}_{i}^{XY}(k).$ (10)

The problem of emulating the cosmology dependence of the $\Gamma^{XY}$ functions is now reduced to that of figuring out the cosmology dependence of the PC coefficients $\alpha_{i}(\mathbf{\Omega})$. A polynomial chaos expansion (PCE) of order $N$ of this dependence is the decomposition of the $\alpha_{i}$ onto a basis of products of orthogonal polynomials $\Phi_{\mathbf{i}}(\mathbf{\Omega})$ organized by a multi-index $\mathbf{i}$ (Xiu, 2010):

$\alpha(\mathbf{\Omega})=\sum_{|\mathbf{i}|\leq N}c_{\mathbf{i}}\Phi_{\mathbf{i}}(\mathbf{\Omega}).$ (11)

Each component of the multi-index $\mathbf{i}=(i_{1},\cdots,i_{d})$ denotes the order of the polynomial for that cosmological parameter, e.g.,

$\Phi_{\mathbf{i}}(\mathbf{\Omega})=\phi_{i_{1}}(\Omega_{1})\cdots\phi_{i_{d}}(\Omega_{d}),$ (12)

and so $\phi_{i_{d}}(\Omega_{d})$ is a univariate orthogonal polynomial of order $i_{d}$. While this is in principle a decomposition into a combinatorially large space of coefficients $c_{\mathbf{i}}$, it is known to be a sparse representation (Blatman & Sudret, 2008, 2011), and there exist many algorithms (and numerical libraries) optimized to perform regression over this space and obtain values for the coefficients. We use the package Chaospy (Feinberg & Langtangen, 2015; Feinberg et al., 2018) to perform the decomposition and subsequent regression. Note that since the parameter dependence of the principal components is given by a combination of polynomials, our model in principle has an analytic dependence on cosmology, redshift, and bias. Since the coefficients are determined via regression, a PCE emulator does not recover the input data exactly. However, the tests conducted in section 6 indicate that this drawback is not an issue. In total, the hyperparameters in the model are:
1. The number of principal components used, $N_{\mathrm{PC}}$.
2. The maximum order of the multi-index $|\mathbf{i}|$. In practice we separately optimize over the maximum polynomial order of each individual parameter $i_{d}$, with $i_{d}\leq 4$.

As mentioned previously, we restrict ourselves to $N_{\mathrm{PC}}=2$, as this is sufficient to capture over 99 per cent of the variance in each basis spectrum. To optimize over the polynomial orders $i_{d}$, we run a simple grid search across the aforementioned values for the seven $w$CDM parameters $\mathbf{\Omega}=(\Omega_{b}h^{2},\Omega_{c}h^{2},\sigma_{8},H_{0},n_{s},N_{\mathrm{eff}},w)$ and evaluate our results on the Aemulus test suite. We select the set of orders that minimizes global error across all test boxes and snapshots. We describe the tests of this optimized emulator below.

Figure 4: Coefficients of $\mathrm{PC}_{1}^{XY}(k)$ for the first three basis spectra as a function of $\sigma_{8}$, colored by redshift. The coefficients vary smoothly for all redshifts as $\sigma_{8}$ is varied. It is the dependence of these coefficients that we emulate via PCE as a function of cosmology and redshift. The panels look similar for the remaining basis spectra.

Figure 5: Emulation residuals for basis spectra. _Lower left triangle:_ the fractional error obtained for each basis spectrum when compared to the measurements averaged from each set of boxes in the test suite. _Upper right triangle:_ the relative size of the emulator residuals compared to the total halo–halo spectrum measured for a fiducial halo sample. In each panel, the dark blue curves are the mean residuals across all redshifts and test boxes, the red curves report the median residual error across the test suite as a function of redshift, and the black curves report the expected sample variance at the volume of an Aemulus training box.

## 6 Results

### 6.1 Analysis of emulator residuals

A crucial step in producing viable emulators of cosmological observables is characterizing the accuracy of the emulation scheme. We use the Aemulus set of test boxes to assess the performance of the scheme described in the previous section. The test boxes span seven points in cosmological parameter space, each with five independent realizations of that cosmology. We use the basis spectra averaged over the five realizations at each test cosmology as reference quantities to understand the errors induced in the emulation procedure as a function of scale, across parameter space. We report the accuracy of our optimized PCE emulator for the basis spectra over the range $0.1\leq k\leq 1.0\,h\,{\rm Mpc}^{-1}$ in the lower left panel of Fig. 5. Across most redshift bins in the test suite and for most basis spectra we achieve better than 1 per cent accuracy in the test set. At $z=0$ we observe worse performance; however, this can be attributed to numerical difficulties in computing the LPT spectra at $z=0$ at small scales, as can be seen in Fig. 1. As there is little cosmological information in the very low redshift universe, we do not consider this to be a significant issue. Indeed, our additional validation tests indicate that the model has sufficient accuracy to analyse current survey data. Adopting a fiducial set of bias parameters corresponding to a halo sample of $12.5\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 13$, we compute the emulator residuals for each basis spectrum relative to the _total_ $P^{hh}(k)$. The results are shown in the upper right triangle of Fig. 5.
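The residual tests above probe the combined PCA-plus-PCE pipeline of Sections 5.2 and 5.3. A minimal sketch of that pipeline is given below; the eigendecomposition follows Eqs. 8–9 directly, while the commented lines indicate how the resulting coefficients could be passed to a polynomial chaos regression. The function names, the commented Chaospy calls, and the use of `numpy.linalg.eigh` are illustrative assumptions rather than a description of the released implementation.

```python
import numpy as np

def pca_basis(X, n_pc=2):
    """Leading principal components of the stacked log-ratios X (shape N x M).
    Returns the PCs (n_pc x M) and their coefficients alpha_i(Omega) (N x n_pc)."""
    C = X.T @ X                             # Eq. 8: (unnormalized) covariance of the columns
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues returned in ascending order
    W = eigvecs[:, ::-1][:, :n_pc]          # keep the two leading eigenvectors
    A = X @ W                               # Eq. 9: projection onto the PCs
    return W.T, A

# The cosmology and redshift dependence of the columns of A is then fit with a
# polynomial chaos expansion (Eqs. 10-12); schematically, with Chaospy this is
# (exact call signatures depend on the Chaospy version):
# joint = chaospy.J(*[chaospy.Uniform(lo, hi) for lo, hi in param_bounds])
# basis = chaospy.generate_expansion(order, joint)
# pce   = chaospy.fit_regression(basis, train_params.T, A)
```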
The individual basis spectrum error rarely exceeds a permille of the total power. This implies that the slightly larger errors for cubic basis spectra shown in Fig. 5 are sub-leading relative to the total signal we expect to model.

### 6.2 Fitting assembly bias and baryons

Beyond samples of fixed halo mass, the general bias expansion in Eq. 1 should also be able to describe the clustering statistics of more complex tracer populations. It is well known that haloes of a fixed mass bin exhibit different clustering properties depending on whether they are sub-selected on certain properties. This effect, originally discovered in the context of assembly history and generally known as assembly bias or secondary bias, has been observed for selections on concentration, occupation, local environment, spin, and other secondary halo properties (Wechsler et al., 2002; Gao et al., 2005; Wechsler et al., 2006; Dalal et al., 2008; Mao et al., 2018; Salcedo et al., 2018; Mansfield & Kravtsov, 2020).

Figure 6: Emulator predictions at fixed cosmology for halo samples exhibiting concentration (top panels) and spin (bottom panels) assembly bias. Central panels show the signal from the halo sample with no selection on a secondary parameter. The left and right panels show samples split on the lowest and highest quartiles of the relevant secondary bias parameter, respectively. Shaded bands show the regions where residuals are within 2 per cent and 1 per cent respectively, while the dashed envelope shows the expected cosmic variance for a sample with $V\approx 5.8(h^{-1}{\rm Gpc})^{3}$. The spectra are measured at $z=0.7$ and the fit is performed with the data vector out to $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.

As a test of our model, we construct halo catalogs with different amounts of concentration and spin secondary bias, splitting the sample by quartile. The magnitude of the effect varies differently as a function of mass for each secondary bias parameter. Thus, we adopt separate halo mass bins for each parameter, in a regime where we have both reliable estimates of the secondary quantities and know that the secondary bias effect is not drastic, following fig. 4 of Sato-Polito et al. (2019). The mass range $12\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 12.5$ was used to build samples contaminated with concentration bias, and $12.5\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 13$ for spin bias. We consider the highest and lowest quartile samples in both concentration and spin, as well as a sample with no secondary bias, sub-sampled to the same number density as the samples contaminated with secondary bias. We additionally do not subtract the shot-noise contribution from measured spectra, and opt instead to include it in our covariance matrix as detailed in Eqn. 14. Using the emulator for the basis spectra evaluated at the cosmology of these test boxes, we jointly fit the halo–halo and halo–matter spectra $\{P_{hh},P_{hm}\}$ with five parameters: $b_{i}=\{b_{1},b_{2},b_{s^{2}},b_{\nabla^{2}},\bar{n}^{-1}\}$. We minimize the $\chi^{2}$ between the mean of the five simulations and the model, assuming a disconnected covariance for the observables as described in Eq. 14, with $V=5\times(1.05\,h^{-1}{\rm Gpc})^{3}$, the combined volume of the five realizations. The resulting fits are shown in Fig. 6. We fit the spectra to a maximum scale of $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.
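A minimal sketch of this fitting step is given below. The callable `model_spectra`, which combines the emulated basis spectra with a bias vector according to the expansion of Eq. 1 (not reproduced here), is assumed as given, and we use diagonal variances as a stand-in for the block-diagonal covariance of Eq. 14; the function names and the use of `scipy.optimize.minimize` are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

def fit_bias(model_spectra, data_hh, data_hm, var_hh, var_hm, b0=None):
    """Fit the five parameters (b1, b2, bs2, bnabla2, shot noise) to P_hh and P_hm.
    `model_spectra(b)` must return the prediction (P_hh, P_hm) for a bias vector b."""
    def chi2(b):
        m_hh, m_hm = model_spectra(b)
        return np.sum((m_hh - data_hh) ** 2 / var_hh) + \
               np.sum((m_hm - data_hm) ** 2 / var_hm)

    b0 = np.zeros(5) if b0 is None else np.asarray(b0)
    result = minimize(chi2, b0, method="Nelder-Mead")
    return result.x, result.fun
```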
For most panels, we see that the hybrid $N$-body/Lagrangian bias model can jointly describe the clustering and lensing spectra to within 1 per cent down to scales even smaller than employed for the model fit. At large scales, the lowest spin assembly bias bin seems to be systematically higher by at most 10 per cent. Changing the $k_{\rm max}$ of the fit down to $0.2\,h\,{\rm Mpc}^{-1}$ does not qualitatively alleviate the large-scale discrepancies. We observe similar behavior if the average of the basis spectra from this cosmology is used instead of the emulator, implying this is not an issue of the emulator and could perhaps be attributed to large-scale noise. Another possibility is that a second-order Lagrangian bias model is unable to fully capture the effects of spin secondary bias, but we leave this investigation to future work. In Fig. 7 we show the reduced $\chi^{2}$ for the fits to the samples split on concentration. We see that the goodness of fit degrades significantly past $k\simeq 0.6\,h\,{\rm Mpc}^{-1}$ for some subsamples. The fits to smaller $k_{\rm max}$ have $\chi^{2}/{\rm d.o.f.}\lesssim 1.5$. Note that in these tests we use the emulator at a volume that is significantly larger than the boxes it was trained on, and the covariance matrices do not have any contributions due to the emulator uncertainty. If we instead use the mean basis spectra, the $\chi^{2}/{\rm d.o.f.}$ crosses the $\chi^{2}/{\rm d.o.f.}\sim 1$ threshold at $k_{\rm max}\sim 0.6\,h\,{\rm Mpc}^{-1}$ and grows significantly afterwards, signalling a potential breakdown of the applicability of this Lagrangian bias model to these samples.

Figure 7: The goodness of fit $\chi^{2}/{\rm d.o.f.}$ from increasing $k_{\rm max}$ for the halo sample selected on concentration quartiles, using the emulator as a model. Note the significant degradation of the goodness of fit for the subsample split on the lowest quartile after $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.

Baryonic physics is known to impact the statistics of biased tracers at the scales we are considering (White, 2004; Zhan & Knox, 2004; Chisari et al., 2019; van Daalen et al., 2020). In our model, the $\langle 1,\nabla^{2}\delta\rangle$ basis spectrum should have the scale dependence required to capture the first-order impacts of baryons (Lewandowski et al., 2015). In order to test this, we produce mock ‘baryonified’ spectra using the fitting function of van Daalen et al. (2020), which is obtained from analysis of a comprehensive suite of hydrodynamic simulations. We compare the fitting function to two parametrizations for the impact of baryons:

1. Including terms that scale as the basis functions $b_{\nabla^{2}}\langle 1,\nabla^{2}\delta\rangle$ and $b_{1}b_{\nabla^{2}}\langle\delta,\nabla^{2}\delta\rangle$.
2. Same as above, but substituting the basis functions with the approximation $\langle X,\nabla^{2}\delta\rangle\simeq-k^{2}\langle X,1\rangle$.

The results of this test are shown in Figure 8. While the baryonic suppression factors presented by the two parametrizations differ, in the bottom panel we see that both capture the effects of baryons to within 1 per cent out to $k\approx 0.8\,h\,\mathrm{Mpc}^{-1}$, whereas not including the contributions leads to errors larger than 1 per cent at $k\approx 0.2\,h\,\mathrm{Mpc}^{-1}$. Additionally, our framework can simultaneously treat the effects of finite halo size and baryonic physics.
As both are captured by the same basis spectra, this corresponds to treating the halo tracer as having one value of $b_{\nabla^{2}}$ and the matter tracer in the $P^{hm}$ correlation as having a separate higher-derivative coefficient $b^{\prime}_{\nabla^{2}}$, while keeping all other bias parameters of the matter tracer equal to zero.

Figure 8: Higher derivative bias terms and their comparison to the baryonic physics fitting function of van Daalen et al. (2020). The top panel shows the fitting function, the basis spectrum as measured in the $N$-body simulations and the approximation we employ in the text. In the lower panel we show residuals between the different treatments and the fitting function. The blue curve in the lower panel shows the difference between the unprocessed dark matter power spectrum and the fitting function. The green curve is the approximation to the higher derivative fitting functions that is implemented in our analyses.

### 6.3 Recovering input cosmology

In this section we present an increasingly complex series of tests to ensure our emulator can be used for cosmological inference, i.e., to demonstrate that it can recover input cosmological parameters in an unbiased way. The general structure of the analyses we run is as follows. The input data-vectors will be the joint halo–halo and halo–matter power spectra $\mathbf{d}=\{P_{hh}(k),P_{hm}(k)\}$. We assume a Gaussian likelihood in the residuals between $\mathbf{d}$ and the emulator prediction at a cosmology $\mathbf{x}(\mathbf{\Omega})$:

$\log\mathcal{L}(\mathbf{d}|\mathbf{\Omega})\propto-(\mathbf{d}-\mathbf{x}(\mathbf{\Omega}))^{T}\mathbf{C}^{-1}(\mathbf{d}-\mathbf{x}(\mathbf{\Omega})).$ (13)

We adopt a baseline covariance matrix that includes only dependence on the two-point functions of the tracer density field, known as the disconnected contribution (Li et al., 2019). The result is a block-diagonal matrix of the form

$\mathbf{C}(k,k^{\prime})\equiv\frac{2\pi^{2}\delta_{k,k^{\prime}}}{k^{2}\Delta k\,V}\times\begin{cases}2P_{hh}^{2}(k),&\text{for }hh\times hh\\ 2P_{hh}(k)P_{hm}(k),&\text{for }hh\times hm\\ P_{hh}(k)P_{mm}(k)+P_{hm}^{2}(k),&\text{for }hm\times hm\end{cases}$ (14)

for each sub-block. We use non-linear power spectra and $P_{hh}$ includes the shot-noise contribution. At the smaller scales we probe in the resulting analyses, the purely disconnected approximation is known to fail and off-diagonal (connected) components become increasingly important (Meiksin & White, 1999; Scoccimarro et al., 1999; Cooray & Hu, 2001; Mohammed et al., 2016; Lacasa, 2018). The intent of this paper is not to conclusively quantify the information content available at small scales. Rather, we would like to ensure that the emulator is an unbiased model when pushing to such small scales. Therefore, we consider the form of the covariance in Eqn. 14 to be a sufficient baseline to carry out our analyses. We assess its performance in more detail in Appendix A. As we will discuss in more detail in section 6.3.2, the approximation of taking only the disconnected contribution neglects two forms of error: that arising from the connected contribution, and model error from the emulator itself. We discuss the contribution to the covariance from emulator error in Appendix A, and find that in the regime under which our tests are carried out, its inclusion is important in achieving unbiased constraints.
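For reference, a minimal sketch of the disconnected covariance of Eq. 14 and the Gaussian log-likelihood of Eq. 13 is given below (the overall factor of $1/2$ is absorbed by the proportionality in Eq. 13); the function and argument names are our own.

```python
import numpy as np

def disconnected_cov(k, dk, V, P_hh, P_hm, P_mm):
    """Block-diagonal disconnected covariance of Eq. 14 for d = {P_hh(k), P_hm(k)}.
    P_hh is assumed to include the shot-noise contribution."""
    n = k.size
    pref = 2.0 * np.pi**2 / (k**2 * dk * V)
    C = np.zeros((2 * n, 2 * n))
    idx = np.arange(n)
    C[idx, idx] = pref * 2.0 * P_hh**2                    # hh x hh block
    cross = pref * 2.0 * P_hh * P_hm                      # hh x hm block
    C[idx, idx + n] = cross
    C[idx + n, idx] = cross
    C[idx + n, idx + n] = pref * (P_hh * P_mm + P_hm**2)  # hm x hm block
    return C

def log_like(d, model, C):
    """Gaussian log-likelihood of Eq. 13, up to an additive constant."""
    r = d - model
    return -0.5 * r @ np.linalg.solve(C, r)
```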
We sample the posterior distributions of the model parameters via Markov Chain Monte Carlo (MCMC), using emcee (Goodman & Weare, 2010; Foreman-Mackey et al., 2013). Chains are run with either $N=64$ or $N=128$ walkers across 8000 (4000) steps, respectively. We checked that these values ensure converged chains for the simulated likelihood analyses we run; the posteriors are not altered significantly by doubling the length or number of walkers. We adopt wide uniform priors on the bias parameters,

$b_{i}\sim U(-5,5),$ (15)

and uniform priors surrounding the boundaries of the Aemulus training suite, specified in Table 1.

Parameter | Range
---|---
$\Omega_{b}h^{2}$ | [0.0207, 0.0237]
$\Omega_{c}h^{2}$ | [0.101, 0.132]
$w_{0}$ | [-1.399, -0.566]
$n_{s}$ | [0.928, 0.997]
$\sigma_{8}$ | [0.575, 0.964]
$H_{0}$ | [61.69, 74.77]
$N_{\mathrm{eff}}$ | [2.62, 4.28]

Table 1: Boundaries of the cosmological parameters of simulations spanned by the Aemulus training suite. These are the values used as flat priors for cosmological parameters.

#### 6.3.1 Synthetic Data

As a first test of the emulator, we perform a simulated likelihood analysis on a noiseless data vector drawn from the emulator itself. We fit the basis spectra to a halo sample of mass $12\leq\log_{10}M_{h}/M_{\odot}\leq 12.5$ from one of Aemulus’ test boxes. The cosmology and best-fitting bias values are used as inputs to the emulator to produce a mock noiseless data-vector. As the data in this test are not a random draw from a distribution, the exact form of the covariance matrix does not matter. However, we use the block-diagonal disconnected covariance of Eqn. 14 with $V=(1050\,h^{-1}\mathrm{Mpc})^{3}$ so as to replicate an analysis on an individual Aemulus test box. The results of this first mock analysis are shown in Fig. 9. The three-parameter analysis constrains all cosmological and bias parameters in an unbiased fashion, indicating that there are no issues in fitting the emulator to itself at this volume. We also conduct a seven-parameter analysis varying all $w$CDM parameters. This returns unbiased posteriors relative to the true input values; however, it is hard to constrain all $w$CDM parameters using a single halo sample at the volume of a single Aemulus box. For this reason, several of the cosmological parameters simply saturate the priors and remain unconstrained.

#### 6.3.2 Halo samples from the test suite

Figure 9: Cosmological parameter inference using the emulator where the data are a noiseless draw from itself. We vary the subset of parameters $\omega_{c},\,\sigma_{8},$ and $H_{0}$, using a Gaussian likelihood and purely disconnected covariance with volume $V=(1.05\,h^{-1}\mathrm{Gpc})^{3}$. The fiducial values used to generate the data vector are shown in the dashed lines. The bias parameter posteriors are equally unbiased and Gaussian, but omitted from the figure for aesthetic purposes.

Figure 10: Cosmological parameter inference using the emulator fit to the mean of five realizations at the seven Aemulus test cosmologies. We vary the cosmological parameters $\omega_{c},\,\sigma_{8},$ and $H_{0}$, using a disconnected covariance with volume $V=(1.05\,h^{-1}\mathrm{Gpc})^{3}$, including a contribution arising from correlated emulator residuals. The contours are shown in the space of differences relative to the true cosmology of each box.

A subsequent test we perform is inference on halo catalogs drawn from the Aemulus test suite.
We refer to Aemulus I (DeRose et al., 2019b) for details on how the halo finding procedure was done. This fiducial halo sample contains the mass bin $13\leq\log_{10}\left(\frac{M_{h}}{h^{-1}M_{\odot}}\right)\leq 13.5$ at $z=0.4$. We run a suite of chains for this halo sample to assess emulator performance in terms of inferring cosmological parameters. We measure the halo–halo and halo–matter power spectra for each independent test box across the seven different test cosmologies. The data vector is averaged over the five independent realizations from the test suite. This set of chains allows us to assess emulator biases in a way that is less susceptible to projection effects. We can also study the cosmology dependence of the emulator in this way, as well as the interplay between the bias coefficients of our model and cosmological parameters. Using solely the purely disconnected covariance matrix in Eqn. 14 leads to strong biases in inferred cosmological parameters, despite all residuals being smaller than 1 per cent as a function of scale. This can be understood by the fact that sample variance at small scales will eventually become smaller than the $1-2$ per cent emulator error observed in Fig. 5 (see also Fig. 13). However, the aforementioned figure allows us to estimate the emulator uncertainty as a function of scale. This can then be included as a separate contribution to the covariance matrix. We detail how this is done in Appendix A (all contour plots shown from this point forward include the effects of emulator error unless stated otherwise). The result of the test is shown in Fig. 10, with the full set of contours shown in Fig. 17. The cosmological parameters inferred scatter around the best fits for $\omega_{c}$ and $\sigma_{8}$, whereas they recover $H_{0}$ to within one standard deviation for most cosmologies, though biased slightly high. However, we note these tests are conservative, as they neglect the contribution to the covariance matrix arising from shape noise, the lensing equivalent of shot noise that would contribute to the $hm\,\times\,hm$ term of the covariance matrix. Given the conservative nature of this test we deem the emulator performance to be sufficient and continue with the final and most stringent test we consider in this work.

#### 6.3.3 A redMaGiC sample from an independent simulation

$\log M_{\mathrm{min}}$ | $\sigma_{\log M}$ | $f_{c}$ | $\log M_{0}$ | $\log M_{1}^{{}^{\prime}}$ | $\alpha$
---|---|---|---|---|---
12.1 | 0.4 | 0.13 | 11.45 | 13.73 | 1.48

Table 2: HOD parameters used to populate the redMaGiC sample described in section 6.3.3.

So far, we have reported tests performed on samples that originate either from the emulator itself or from the same suite of simulations used to construct it. It is also important that the model is useful for inference on spectra measured from tracer samples generated by independent methods, both in how halo samples are defined and in the underlying $N$-body simulation used. For example, in Modi et al. (2020) it was shown that this hybrid Lagrangian bias model can successfully fit galaxy power spectra produced from a halo occupation distribution (HOD; see e.g. Zheng et al. 2005). We perform a final test: a simulated likelihood analysis with spectra produced from populating an independent $N$-body simulation with an HOD that matches the density and clustering properties of redMaGiC galaxies (Rozo et al., 2016).
redMaGiC galaxies are the primary photometric Luminous Red Galaxy sample used in current and future weak lensing surveys (Elvin-Poole et al., 2018). The HOD parametrization we adopt is an extension of the model presented in Zheng et al. (2007), allowing for the central occupation at high mass to be less than unity:

$\langle N_{\mathrm{cen}}(M)\rangle=\frac{f_{c}}{2}\left[1+\mathrm{erf}\left(\frac{\log M-\log M_{\mathrm{min}}}{\sigma_{\log M}}\right)\right],$ (16)

$\langle N_{\mathrm{sat}}(M)\rangle=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{\log M-\log M_{\mathrm{min}}}{\sigma_{\log M}}\right)\right]\left(\frac{M-M_{0}}{M_{1}^{{}^{\prime}}}\right)^{\alpha}.$ (17)

The HOD parameters corresponding to the redMaGiC samples used can be found in Table 2, and are derived from a redMaGiC sample selected from simulations similar to those presented in DeRose et al. (2019a). We paint redMaGiC galaxies onto halo catalogs measured from the UNIT simulations (Chuang et al., 2019) at $z\approx 0.59$, a redshift different from the Aemulus snapshots. A UNIT realization boasts a volume comparable to Aemulus of $V=1\,(h^{-1}\mathrm{Gpc})^{3}$ at a significantly higher number of particles, $N=(4096)^{3}$. Every UNIT simulation has two realizations with opposite phases and fixed amplitudes. Averaging two-point statistics measured from these paired-fixed realizations leads to very high sample variance suppression at large scales, comparable to averaging $\sim 150$ simulations of the same volume. The cosmological parameter constraints corresponding to this test are shown in Fig. 11. The emulator recovers the input cosmology of UNIT within its $68$ per cent contours. Although this test is idealized, the constraints inferred are promising if they translate even moderately well to a realistic analysis: a 2.5 per cent constraint on $\omega_{c}$, a 0.5 per cent constraint on $\sigma_{8}$ and a 1.6 per cent constraint on $H_{0}$. In a realistic lensing analysis one would expect these quantities to be degraded due to the inclusion of shape noise and only having access to two-dimensional lensing maps instead of the 3D matter field. Nevertheless, even a 100 per cent degradation of these constraints due to the aforementioned complications would still result in highly competitive measurements of these parameters. Note we adopt no priors beyond the (moderately informative) priors set by the boundaries of the Aemulus suite.

Figure 11: Cosmological parameter constraints from the redMaGiC sample constructed from the UNIT simulations. The true cosmological parameters and the best-fit bias parameters assuming the true cosmology are shown in the dashed lines. All parameters are recovered to well within the one-sigma errors.

The simulated likelihood analysis performed on this sample additionally allows us to quantify both model and emulator errors in a space that is closer to the observations that will be carried out in the near future. As redMaGiC galaxies are commonly used as lens samples in galaxy–galaxy lensing analyses, we can translate the $P^{hh},P^{hm}$ residuals to those in the observables $C_{\ell}^{gg},C_{\ell}^{g\kappa}$. We assume a redshift distribution $n(z)$ for redMaGiC galaxies consistent with data (Elvin-Poole et al., 2018) and fiducial parametrizations for the source sample that are consistent with those that will be achieved in future imaging surveys (Mandelbaum et al., 2018). For a redMaGiC sample spanning $z=[0.45,0.6]$ we present the results in Fig. 12.
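For reference, a minimal sketch of the occupation functions of Eqs. 16–17 evaluated with the Table 2 parameters is given below; halo masses are assumed to be in the same $h^{-1}M_{\odot}$ units as the table, and the error-function factor in the satellite term follows Eq. 17 as written above.

```python
import numpy as np
from scipy.special import erf

# HOD parameters from Table 2 (redMaGiC-like sample).
logMmin, sig_logM, f_c = 12.1, 0.4, 0.13
logM0, logM1p, alpha = 11.45, 13.73, 1.48

def n_cen(M):
    """Mean central occupation of Eq. 16; M in h^-1 Msun."""
    return 0.5 * f_c * (1.0 + erf((np.log10(M) - logMmin) / sig_logM))

def n_sat(M):
    """Mean satellite occupation of Eq. 17; zero below the cutoff mass M0."""
    step = 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sig_logM))
    excess = np.clip(M - 10.0**logM0, 0.0, None) / 10.0**logM1p
    return step * excess**alpha
```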
The harmonic space observables are calculated assuming the Limber approximation, with the additional approximation that the residuals between 3D power spectra do not evolve as a function of redshift. The residuals stay within one per cent out to $\ell\approx 1000$. If we instead use residuals from fitting the emulator at fixed cosmology to the same sample out to $k_{\rm max}=1.0\,h\,{\rm Mpc}^{-1}$, the residuals remain within ten per cent out to $\ell_{\rm max}=2000$, at the cost of worse performance at large scales. This indicates that the combined emulator and model error remain well under control for the analysis of current galaxy–galaxy lensing datasets.

Figure 12: Residuals of the emulator fit to the redMaGiC sample in the space of a projected analysis. Residuals are shown for the range $\ell\in[50,2000]$. The redshift distributions of this analysis are consistent with those of current and upcoming surveys. The dashed envelope corresponds to the sample variance contribution in the absence of noise with sky coverage consistent with upcoming surveys and angular binning of $\Delta\ell=50$. That is, shot/shape noise will only increase the size of this envelope. The light gray and dark gray bands correspond to 2 and 1 per cent error bands, respectively.

## 7 Conclusions

In this work we have built an emulator to study the cosmology dependence of the model of Modi et al. (2020) for the two-point statistics of biased tracers. The model combines $N$-body simulations with a symmetries-based bias expansion to provide accurate predictions beyond the regime of validity of standard perturbative approaches. Specifically, we built an emulator for the cosmology and redshift dependence of the ten non-linear basis functions that span this model. We use measurements from the Aemulus suite of simulations, which has been designed to enable the construction of emulators that satisfy the modelling requirements of upcoming cosmic surveys. The model and emulation techniques used are general; there are no limitations to extending the range of validity given the availability of an improved suite of simulations. We find that:

1. The emulator recovers each basis spectrum to $\lesssim$ 1 per cent accuracy across a wide range of scales, $0.1<k/\left(h\,{\rm Mpc}^{-1}\right)\leq 1.0$, and redshifts, $0\leq z\leq 2$.
2. The Lagrangian bias model is capable of capturing the clustering and lensing statistics of samples imbued with non-trivial amounts of secondary bias and contamination from baryonic physics.
3. The test set used to validate the emulator can also be used to calibrate its ‘theoretical uncertainty’. This allows us to include contributions to the covariance matrix of an analysis related to model error, which cannot be neglected when pushing to small scales.
4. The emulator, as constructed, can recover unbiased cosmological parameters from realistic simulated likelihood analyses.

These findings indicate that our emulator is a robust tool that can be readily applied to analyses of current and even upcoming datasets. The code will be made publicly available on GitHub and can be integrated with modern sampling packages such as Cobaya (Torrado & Lewis, 2020). We also point out a few further directions to be investigated as a result of this work. First, while the simulations used here are sufficient to obtain per cent level emulator accuracy, improved simulations will be important for maximizing the applicability of this model.
The biggest immediate limitation of this emulator is the extent of the cosmological parameter space that it is trained on. We plan on running simulations over a broader parameter space, including massive neutrinos, in the near future. Another limiting factor in the current emulator construction is our ability to match the basis spectra measured from our simulations to their perturbation theory analogs at low $k$. Running larger simulation volumes, or implementing a method for sample variance mitigation such as that presented in Chartier et al. (2020), would ameliorate this issue by reducing noise in the $N$-body measurements. This will allow them to be matched more easily to the perturbation theory predictions at scales that are still safely within perturbative reach. Mismatches in the linear growth predictions from $N$-body simulations also limit the accuracy of the large scale matching. Simulations with more stringent time-stepping criteria would reduce these inaccuracies, at the cost of increased run-time. For this reason, methods that explicitly enforce linear growth on large scales may be worth exploring in the future (Feng et al., 2016; Howlett et al., 2015). Finally, the accuracy of the model for redshift evolution of the basis spectra in the current emulator is limited by the number of snapshots saved in the Aemulus suite. For this reason, saving snapshots with finer redshift resolution out to higher redshifts will be a priority when running future simulations to upgrade the current emulator. While in this paper we have restricted ourselves to predictions of survey observables in Fourier space, one could use this same field-level approach to measure configuration-space correlation statistics instead. The model employed should also be able to describe the statistics of biased tracers at the field level, beyond two-point statistics. This includes both field-level characterizations of the Lagrangian bias model similarly to what was investigated in Schmittfull et al. (2019) and higher order functions such as the bispectrum or the collapsed tri-spectra that form the connected component of covariance matrices. The field-level approach to bias modelling described in Schmittfull et al. (2019) was recently extended to redshift space (Schmittfull et al., 2020). For our emulator to be used to describe the statistics of 3D galaxy clustering in spectroscopic galaxy surveys, it would need to be extended to redshift space in a similar manner. Alternatively, we note that the bias parameters in this model are equivalent to those of the Lagrangian perturbation theory of Chen et al. (2020a, b). This suggests one could perform a joint analysis that combines perturbation theory for describing the redshift-space clustering, where the 3D nature of the measurements allow tight constraints even on quasi-linear scales, and an emulator for describing projected statistics, which need to extend to smaller scales in order to beat down sample variance. In addition to providing a large dynamic range and sensitivity to both metric potentials, the combination of measurements would help to break bias parameter degeneracies and thus improve cosmological constraints. The second release of the Aemulus suite, Aemulus-$\nu$, will include two-fluid simulations that capture the effects of massive neutrinos on the matter density field. 
The techniques described in this paper can be translated to this new set of simulations to construct an emulator that can be used to constrain the sum of neutrino masses, one of the key science drivers of ongoing and future cosmological surveys. We leave these extensions to future work. ## Acknowledgements We thank Simone Ferraro and Anže Slosar for helpful comments on a draft of the paper and Sean McLaughlin for many helpful discussions. We are grateful to the Aemulus collaboration for making the simulation suite used here publicly available. This work was supported in part by U.S. Department of Energy contracts to SLAC (DE-AC02-76SF00515) and by Stanford University. N.K. thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation. S.C. is supported by the National Science Foundation Graduate Research Fellowship (Grant No. DGE 1106400) and by the UC Berkeley Theoretical Astrophysics Center Astronomy and Astrophysics Graduate Fellowship. M.W. is supported by the U.S. Department of Energy and the NSF. This research has made use of NASA’s Astrophysics Data System and the arXiv preprint server. Some of the computing for this project was performed on the Sherlock cluster. We would like to thank Stanford University and the Stanford Research Computing Center for providing computational resources and support that contributed to these research results. Calculations and figures in this work have been made using nbodykit (Hand et al., 2018), GetDist (Lewis, 2019), and the SciPy Stack (Harris et al., 2020; Virtanen et al., 2020; Hunter, 2007). ## Data Availability The data underlying this article are available in the Aemulus Project’s website. ## References * Abbott et al. (2018) Abbott T., et al., 2018, Phys. Rev. D, 98, 043526 * Abidi & Baldauf (2018) Abidi M. M., Baldauf T., 2018, JCAP, 07, 029 * Aghamousa et al. (2016) Aghamousa A., et al., 2016, arXiv e-prints * Alam et al. (2017) Alam S., Miyatake H., More S., Ho S., Mandelbaum R., 2017, Mon. Not. Roy. Astron. Soc., 465, 4853 * Angulo & Pontzen (2016) Angulo R. E., Pontzen A., 2016, Mon. Not. Roy. Astron. Soc., 462, L1 * Aviles & Banerjee (2020) Aviles A., Banerjee A., 2020, J. Cosmology Astropart. Phys., 2020, 034 * Aviles et al. (2020) Aviles A., Valogiannis G., Rodriguez-Meza M. A., Cervantes-Cota J. L., Li B., Bean R., 2020, arXiv e-prints, p. arXiv:2012.05077 * Bagla (2005) Bagla J. S., 2005, Curr. Sci., 88, 1088 * Baldauf et al. (2016a) Baldauf T., Mirbabayi M., Simonović M., Zaldarriaga M., 2016a, arXiv e-prints * Baldauf et al. (2016b) Baldauf T., Schaan E., Zaldarriaga M., 2016b, JCAP, 03, 007 * Bartelmann & Schneider (2001) Bartelmann M., Schneider P., 2001, Phys. Rept., 340, 291 * Baumann et al. (2012) Baumann D., Nicolis A., Senatore L., Zaldarriaga M., 2012, J. Cosmology Astropart. Phys., 2012, 051 * Bernardeau et al. (2002) Bernardeau F., Colombi S., Gaztanaga E., Scoccimarro R., 2002, Phys. Rept., 367, 1 * Bianchini et al. (2015) Bianchini F., et al., 2015, The Astrophysical Journal, 802, 64 * Blas et al. (2014) Blas D., Garny M., Konstandin T., 2014, J. Cosmology Astropart. Phys., 2014, 010 * Blatman & Sudret (2008) Blatman G., Sudret B., 2008, Comptes Rendus Mecanique, 336, 518 * Blatman & Sudret (2011) Blatman G., Sudret B., 2011, Journal of Computational Physics, 230, 2345 * Carrasco et al. (2012) Carrasco J. J. M., Hertzberg M. P., Senatore L., 2012, JHEP, 09, 082 * Chartier et al. 
(2020) Chartier N., Wandelt B., Akrami Y., Villaescusa-Navarro F., 2020, arXiv e-prints, p. arXiv:2009.08970 * Chen et al. (2020a) Chen S.-F., Vlah Z., Castorina E., White M., 2020a, Redshift-Space Distortions in Lagrangian Perturbation Theory (arXiv:2012.04636) * Chen et al. (2020b) Chen S.-F., Vlah Z., White M., 2020b, JCAP, 07, 062 * Chen et al. (2020c) Chen S.-F., Vlah Z., White M., 2020c, JCAP, 11, 035 * Chisari et al. (2019) Chisari N. E., et al., 2019, Open J. Astrophys., 2, 4 * Chuang et al. (2019) Chuang C.-H., et al., 2019, Mon. Not. Roy. Astron. Soc., 487, 48 * Chudaykin et al. (2020) Chudaykin A., Ivanov M. M., Simonović M., 2020, arXiv e-prints * Cooray & Hu (2001) Cooray A., Hu W., 2001, The Astrophysical Journal, 554, 56–66 * Crocce et al. (2006) Crocce M., Pueblas S., Scoccimarro R., 2006, Mon. Not. Roy. Astron. Soc., 373, 369 * Crocce et al. (2012) Crocce M., Pueblas S., Scoccimarro R., 2012, 2LPTIC: 2nd-order Lagrangian Perturbation Theory Initial Conditions (ascl:1201.005) * Dalal et al. (2008) Dalal N., White M., Bond J. R., Shirokov A., 2008, Astrophys. J., 687, 12 * DeRose et al. (2019a) DeRose J., et al., 2019a, arXiv e-prints, p. arXiv:1901.02401 * DeRose et al. (2019b) DeRose J., et al., 2019b, Astrophys. J., 875, 69 * Desjacques et al. (2018) Desjacques V., Jeong D., Schmidt F., 2018, Phys. Rept., 733, 1 * DiPompeo et al. (2017) DiPompeo M. A., Hickox R. C., Eftekharzadeh S., Myers A. D., 2017, MNRAS, 469, 4630 * Doré et al. (2015) Doré O., et al., 2015, Cosmology with the SPHEREX All-Sky Spectral Survey (arXiv:1412.4872) * Doré et al. (2019) Doré O., et al., 2019, WFIRST: The Essential Cosmology Space Observatory for the Coming Decade (arXiv:1904.01174) * Elvin-Poole et al. (2018) Elvin-Poole J., et al., 2018, Phys. Rev. D, 98, 042006 * Favole et al. (2020) Favole G., et al., 2020, Mon. Not. Roy. Astron. Soc., 497, 5432 * Feinberg & Langtangen (2015) Feinberg J., Langtangen H. P., 2015, Journal of Computational Science, 11, 46 * Feinberg et al. (2018) Feinberg J., Eck V. G., Langtangen H. P., 2018, SIAM Journal on Scientific Computing, 40, A199 * Feng et al. (2016) Feng Y., Chu M.-Y., Seljak U., McDonald P., 2016, MNRAS, 463, 2273 * Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306 * Fujita & Vlah (2020) Fujita T., Vlah Z., 2020, J. Cosmology Astropart. Phys., 2020, 059 * Fujita et al. (2020) Fujita T., Mauerhofer V., Senatore L., Vlah Z., Angulo R., 2020, JCAP, 01, 009 * Gao et al. (2005) Gao L., Springel V., White S. D., 2005, Mon. Not. Roy. Astron. Soc., 363, L66 * Garrison et al. (2016) Garrison L. H., Eisenstein D. J., Ferrer D., Metchnik M. V., Pinto P. A., 2016, Mon. Not. Roy. Astron. Soc., 461, 4125 * Goodman & Weare (2010) Goodman J., Weare J., 2010, Communications in Applied Mathematics and Computational Science, 5, 65 * Guo et al. (2019) Guo H., et al., 2019, Astrophys. J., 871, 147 * Hand et al. (2018) Hand N., Feng Y., Beutler F., Li Y., Modi C., Seljak U., Slepian Z., 2018, The Astronomical Journal, 156, 160 * Harris et al. (2020) Harris C. R., et al., 2020, Nature, 585, 357–362 * Heitmann et al. (2009) Heitmann K., Higdon D., White M., Habib S., Williams B. J., Wagner C., 2009, Astrophys. J., 705, 156 * Heitmann et al. (2010) Heitmann K., White M., Wagner C., Habib S., Higdon D., 2010, Astrophys. J., 715, 104 * Heymans et al. (2020) Heymans C., et al., 2020, arXiv e-prints, p. arXiv:2007.15632 * Hockney & Eastwood (1988) Hockney R., Eastwood J., 1988, Computer Simulation Using Particles. 
CRC Press, https://books.google.com/books?id=nTOFkmnCQuIC * Hoffman & Gelman (2011) Hoffman M. D., Gelman A., 2011, arXiv e-prints, p. arXiv:1111.4246 * Howlett et al. (2015) Howlett C., Manera M., Percival W. J., 2015, Astronomy and Computing, 12, 109 * Hunter (2007) Hunter J. D., 2007, Computing in Science Engineering, 9, 90 * Ivanov et al. (2020) Ivanov M. M., McDonough E., Hill J. C., Simonović M., Toomey M. W., Alexander S., Zaldarriaga M., 2020, Phys. Rev. D, 102, 103502 * Ivezić et al. (2019) Ivezić v., et al., 2019, Astrophys. J., 873, 111 * Joyce et al. (2020) Joyce M., Garrison L., Eisenstein D., 2020, Mon. Not. Roy. Astron. Soc. * Knabenhans et al. (2019) Knabenhans M., et al., 2019, Mon. Not. Roy. Astron. Soc., 484, 5509 * Krause et al. (2017) Krause E., et al., 2017, arXiv e-prints * Krolewski et al. (2020) Krolewski A., Ferraro S., Schlafly E. F., White M., 2020, J. Cosmology Astropart. Phys., 2020, 047 * Kuhlen et al. (2012) Kuhlen M., Vogelsberger M., Angulo R., 2012, Phys. Dark Univ., 1, 50 * Kwan et al. (2015) Kwan J., Heitmann K., Habib S., Padmanabhan N., Finkel H., Lawrence E., Frontiere N., Pope A., 2015, Astrophys. J., 810, 35 * Lacasa (2018) Lacasa F., 2018, Astronomy & Astrophysics, 615, A1 * Laguë et al. (2020) Laguë A., Bond J. R., Hložek R., Marsh D. J., Söding L., 2020, arXiv e-prints * Laureijs et al. (2011) Laureijs R., et al., 2011, Euclid Definition Study Report (arXiv:1110.3193) * Lawrence et al. (2010) Lawrence E., Heitmann K., White M., Higdon D., Wagner C., Habib S., Williams B., 2010, The Astrophysical Journal, 713, 1322–1331 * Lazeyras & Schmidt (2018) Lazeyras T., Schmidt F., 2018, J. Cosmology Astropart. Phys., 2018, 008 * Lazeyras & Schmidt (2019) Lazeyras T., Schmidt F., 2019, JCAP, 11, 041 * Lewandowski et al. (2015) Lewandowski M., Perko A., Senatore L., 2015, JCAP, 05, 019 * Lewis (2019) Lewis A., 2019, GetDist: a Python package for analysing Monte Carlo samples (arXiv:1910.13970) * Li et al. (2019) Li Y., Singh S., Yu B., Feng Y., Seljak U., 2019, Journal of Cosmology and Astroparticle Physics, 2019, 016–016 * MacCrann et al. (2020) MacCrann N., Blazek J., Jain B., Krause E., 2020, Mon. Not. Roy. Astron. Soc., 491, 5498 * Mandelbaum (2018) Mandelbaum R., 2018, Ann. Rev. Astron. Astrophys., 56, 393 * Mandelbaum et al. (2018) Mandelbaum R., et al., 2018, arXiv e-prints * Mansfield & Avestruz (2020) Mansfield P., Avestruz C., 2020, Mon. Not. Roy. Astron. Soc. * Mansfield & Kravtsov (2020) Mansfield P., Kravtsov A. V., 2020, Mon. Not. Roy. Astron. Soc., 493, 4763 * Mao et al. (2018) Mao Y.-Y., Zentner A. R., Wechsler R. H., 2018, Mon. Not. Roy. Astron. Soc., 474, 5143 * Matsubara (2008) Matsubara T., 2008, Phys. Rev. D, 78, 083519 * McClintock et al. (2019) McClintock T., et al., 2019, Astrophys. J., 872, 53 * McDonald & Roy (2009) McDonald P., Roy A., 2009, Journal of Cosmology and Astroparticle Physics, 2009, 020–020 * McLaughlin et al. (2021) McLaughlin S., Wechsler R. H., Banerjee A., DeRose J., Mao Y.-Y., Tinker J. L., Zhai Z., 2021, arXiv e-prints, p. To appear * McQuinn & White (2016) McQuinn M., White M., 2016, J. Cosmology Astropart. Phys., 2016, 043 * Meiksin & White (1999) Meiksin A., White M., 1999, Monthly Notices of the Royal Astronomical Society, 308, 1179–1184 * Michaux et al. (2020) Michaux M., Hahn O., Rampf C., Angulo R. E., 2020, Mon. Not. Roy. Astron. Soc., 500, 663 * Modi et al. (2017) Modi C., White M., Vlah Z., 2017, J. Cosmology Astropart. Phys., 2017, 009 * Modi et al. (2020) Modi C., Chen S.-F., White M., 2020, Mon. Not. 
Roy. Astron. Soc., 492, 5754 * Mohammed et al. (2016) Mohammed I., Seljak U., Vlah Z., 2016, Monthly Notices of the Royal Astronomical Society, 466, 780–797 * Nishimichi et al. (2020) Nishimichi T., D’Amico G., Ivanov M. M., Senatore L., Simonović M., Takada M., Zaldarriaga M., Zhang P., 2020, arXiv e-prints * Omori et al. (2019) Omori Y., et al., 2019, Physical Review D, 100 * Park et al. (2020) Park Y., Rozo E., Krause E., 2020, arXiv e-prints * Peacock & Bilicki (2018) Peacock J. A., Bilicki M., 2018, MNRAS, 481, 1133 * Power et al. (2016) Power C., Robotham A. S. G., Obreschkow D., Hobbs A., Lewis G. F., 2016, Monthly Notices of the Royal Astronomical Society, 462, 474–489 * Prat et al. (2018) Prat J., et al., 2018, Physical Review D, 98 * Pullen et al. (2016) Pullen A. R., Alam S., He S., Ho S., 2016, MNRAS, 460, 4098 * Rozo et al. (2016) Rozo E., et al., 2016, Mon. Not. Roy. Astron. Soc., 461, 1431 * Salcedo et al. (2018) Salcedo A. N., Maller A. H., Berlind A. A., Sinha M., McBride C. K., Behroozi P. S., Wechsler R. H., Weinberg D. H., 2018, Monthly Notices of the Royal Astronomical Society, 475, 4411–4423 * Sato-Polito et al. (2019) Sato-Polito G., Montero-Dorta A. D., Abramo L. R., Prada F., Klypin A., 2019, Mon. Not. Roy. Astron. Soc., 487, 1570 * Savitzky & Golay (1964) Savitzky A., Golay M. J. E., 1964, Analytical Chemistry, 36, 1627 * Schmittfull et al. (2019) Schmittfull M., Simonović M., Assassi V., Zaldarriaga M., 2019, Physical Review D, 100 * Schmittfull et al. (2020) Schmittfull M., Simonović M., Ivanov M. M., Philcox O. H. E., Zaldarriaga M., 2020, Modeling Galaxies in Redshift Space at the Field Level (arXiv:2012.03334) * Schneider et al. (2016) Schneider A., et al., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 047–047 * Scoccimarro et al. (1999) Scoccimarro R., Zaldarriaga M., Hui L., 1999, The Astrophysical Journal, 527, 1–15 * Senatore & Zaldarriaga (2017) Senatore L., Zaldarriaga M., 2017, arXiv e-prints, p. arXiv:1707.04698 * Singh et al. (2019) Singh S., Mandelbaum R., Seljak U., Rodríguez-Torres S., Slosar A., 2019, Monthly Notices of the Royal Astronomical Society, 491, 51–68 * Takada et al. (2014) Takada M., et al., 2014, PASJ, 66, R1 * Taruya et al. (2018) Taruya A., Nishimichi T., Jeong D., 2018, Phys. Rev. D, 98, 103532 * Torrado & Lewis (2020) Torrado J., Lewis A., 2020, Cobaya: Code for Bayesian Analysis of hierarchical physical models (arXiv:2005.05290) * Villaescusa-Navarro et al. (2018) Villaescusa-Navarro F., et al., 2018, Astrophys. J., 867, 137 * Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261 * Vlah et al. (2015) Vlah Z., White M., Aviles A., 2015, J. Cosmology Astropart. Phys., 2015, 014 * Vlah et al. (2016) Vlah Z., Castorina E., White M., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 007–007 * Wechsler & Tinker (2018) Wechsler R. H., Tinker J. L., 2018, Ann. Rev. Astron. Astrophys., 56, 435 * Wechsler et al. (2002) Wechsler R. H., Bullock J. S., Primack J. R., Kravtsov A. V., Dekel A., 2002, Astrophys. J., 568, 52 * Wechsler et al. (2006) Wechsler R. H., Zentner A. R., Bullock J. S., Kravtsov A. V., 2006, Astrophys. J., 652, 71 * White (2004) White M. J., 2004, Astropart. Phys., 22, 211 * Wibking et al. (2019) Wibking B. D., et al., 2019, Mon. Not. Roy. Astron. Soc., 484, 989 * Wibking et al. (2020) Wibking B. D., Weinberg D. H., Salcedo A. N., Wu H.-Y., Singh S., Rodríguez-Torres S., Garrison L. H., Eisenstein D. J., 2020, Mon. Not. Roy. Astron. 
Soc., 492, 2872 * Wiener (1938) Wiener N., 1938, American Journal of Mathematics, 60, 897 * Xiu (2010) Xiu D., 2010, Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press, USA * Yoo et al. (2006) Yoo J., Tinker J. L., Weinberg D. H., Zheng Z., Katz N., Dave R., 2006, The Astrophysical Journal, 652, 26–42 * Yuan et al. (2018) Yuan S., Eisenstein D. J., Garrison L. H., 2018, Mon. Not. Roy. Astron. Soc., 478, 2019 * Zhai et al. (2019) Zhai Z., et al., 2019, Astrophys. J., 874, 95 * Zhan & Knox (2004) Zhan H., Knox L., 2004, Astrophys. J. Lett., 616, L75 * Zhang et al. (2020) Zhang Y., et al., 2020, Mon. Not. Roy. Astron. Soc. * Zheng et al. (2005) Zheng Z., et al., 2005, Astrophys. J., 633, 791 * Zheng et al. (2007) Zheng Z., Coil A. L., Zehavi I., 2007, Astrophys. J., 667, 760 * Zu (2020) Zu Y., 2020, arXiv e-prints * van Daalen et al. (2020) van Daalen M. P., McCarthy I. G., Schaye J., 2020, Mon. Not. Roy. Astron. Soc., 491, 2424

## Appendix A Including emulator error in the covariance matrix

As seen in Fig. 5, there is a scale-dependent error associated with our emulation scheme. This error is small, on the order of $\sim$ 1 per cent, and within the accuracy requirements for the next generation of surveys. However, at the smallest scales at which we would like to test this model, $k\simeq 0.6\,h\,\mathrm{Mpc}^{-1}$, it will often be larger than the combined cosmic variance and shot noise (given the absence of shape noise) in our tests. In this regime, the combination of using the average of only five boxes as our data, the approximate disconnected form of the covariance in Eqn. 14 and failing to include model uncertainty in an analysis could then lead to biased inference on cosmological parameters (Baldauf et al., 2016a; Chudaykin et al., 2020). Since the Aemulus test suite is composed of 35 simulations, at seven distinct points of cosmological parameter space, we can use the emulator residuals at these points to construct a model for the theoretical uncertainty. In this appendix we discuss our procedure to construct this model and study its impact when employed in inference. Let $\bar{P}_{XY}(k,\Omega_{i})$ be the mean basis spectrum measured from five Aemulus boxes at the cosmology $\Omega_{i}$. For a given box, we define normalized emulator residuals as

$\hat{r}^{XY}(k)=\frac{\hat{P}_{XY}(k,\Omega_{i})-P^{\mathrm{Emu}}_{XY}(k,\Omega_{i})}{\bar{P}_{XY}(k,\Omega_{i})},$ (18)

where $P^{\mathrm{Emu}}_{XY}$ is the emulator prediction at the same cosmology. Normalized this way, we assume the residuals are cosmology independent. At each redshift we have 35 sets of residuals. With these measurements we can build an estimate of the residual correlation matrix

$\mathrm{Corr}^{\mathrm{Emu}}(k,k^{\prime})=\frac{\mathrm{Cov}[\hat{r}^{XY}(k),\hat{r}^{XY}(k^{\prime})]}{\sqrt{\mathrm{Cov}(k,k)\mathrm{Cov}(k^{\prime},k^{\prime})}},$ (19)

which captures how correlated the emulator residuals are across the test set as a function of scale. The quantities in the numerator and denominator of Eqn. 19 are the same, but we apply the shorthand $\mathrm{Cov}(k,k)\equiv\mathrm{Cov}[\hat{r}^{XY}(k),\hat{r}^{XY}(k)]$ to not overload the expression. We proceed to define an emulator floor, $f_{\mathrm{Emu}}$, specifying what fraction of the signal is of the order of the emulator error. From Fig. 5, the dominant source of uncertainty will come from the error in the $P_{11}$ spectrum. This implies $f_{\mathrm{Emu}}\simeq 0.01$ at small scales for redshifts $z>0$.
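A minimal sketch of this construction, combining Eqs. 18–19 with the scaling and addition defined in Eqs. 20–21 below, is given here; the array names are our own, and we use the symmetric form $f_{\mathrm{Emu}}^{2}P(k)P(k^{\prime})\,\mathrm{Corr}^{\mathrm{Emu}}(k,k^{\prime})$, which reduces to Eq. 20 on the diagonal.

```python
import numpy as np

def emulator_error_cov(resids, P_signal, f_emu=0.01):
    """Model-error contribution to the covariance (Eqs. 18-21).
    `resids` holds the normalized residuals r_hat(k) over the 35 test boxes
    (shape n_boxes x n_k); `P_signal` is P_hh or P_hm for the block in question."""
    cov_r = np.cov(resids, rowvar=False)            # Cov[r(k), r(k')]
    sig = np.sqrt(np.diag(cov_r))
    corr = cov_r / np.outer(sig, sig)               # Eq. 19
    scale = f_emu * P_signal
    return np.outer(scale, scale) * corr            # symmetric version of Eq. 20

# Total covariance, Eq. 21:
# C_tot = C_disconnected + emulator_error_cov(resids, P_hh)
```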
We then estimate that the emulator error will scale as

$\mathrm{Cov}^{\mathrm{Err}}(k,k^{\prime})=(f_{\mathrm{Emu}}P_{hh,hm}(k))^{2}\times\mathrm{Corr}^{\mathrm{Emu}}(k,k^{\prime}),$ (20)

where $P_{hh,hm}$ is used depending on whether we are including this contribution in the block corresponding to the halo–halo correlation or the halo–matter correlation. We then add this contribution to Eq. 14,

$\mathrm{Cov}(k,k^{\prime})=\mathrm{Cov}^{G}(k,k^{\prime})+\mathrm{Cov}^{\mathrm{Err}}(k,k^{\prime}).$ (21)

We run chains with the covariance in Eq. 21, as well as chains including only the diagonal contribution due to uncertainty, which we will call the ‘floor’ covariance. The contours for cosmological parameters are shown in Fig. 15. While this is clearly an approximate treatment, we observe that including this contribution helps prevent significant biases in cosmological parameter inference due to the noisy input data and very low noise assumed in the fit.

Figure 13: Comparison of our model for emulator uncertainty to the disconnected component of the covariance matrix. The left panel corresponds to the $P_{hh}P_{hh}$ contribution and the right panel to the $P_{hm}P_{hm}$ contribution.

Figure 14: Correlation matrix of emulator residuals described in Appendix A. We see that at small scales, past $k\simeq 0.4\,h\,{\rm Mpc}^{-1}$, the emulator residuals are significantly correlated.

Figure 15: UNIT contours with the different covariance forms discussed in Appendix A. Chains are run with the standard scale cuts of $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.

## Appendix B Subsets of the bias model

A common critique of EFT-based models is that they are over-parametrized, and can fit to any signal due to the large number of free parameters. For perturbative Lagrangian bias models, this question has been previously explored in the context of CMB lensing cross-correlations. In Modi et al. (2017), it was shown that significant biases are obtained in $\sigma_{8}$ in these analyses if one uses a simplified model with linear galaxy bias and non-linear matter power spectra. To address whether this holds for our model, we run a series of tests of the emulator, with differing subsets of the bias parameters set to zero. The full set we adopt is:

1. ‘All $b_{i}$’s’, the full bias parametrization.
2. ‘$b_{1}$ only’, where $b_{2}=b_{s^{2}}=b_{\nabla^{2}}=0$.
3. ‘$b_{1},\,b_{\nabla^{2}}$’, where $b_{2}=b_{s^{2}}=0$.
4. ‘No $b_{s^{2}}$’, where $b_{s^{2}}=0$.
5. ‘No $b_{2}$’, where $b_{2}=0$.

Figure 16: Posteriors for varying subsets of the bias model in Eqn. 1, for two different scale cut configurations. A contribution due to shot-noise is included in all of the chains.

All chains in Fig. 16 are run with the same data vector and covariance matrices, and the $k_{\rm max}$ cuts highlighted in each row. We observe significant biases for every subset of bias parameters, except for the complete parametrization, which recovers the input cosmological parameters as previously discussed in section 6.3.3. This implies, at least in this simplified analysis, that the full set of bias parameters is required to achieve unbiased inference with this model. To check the scale-dependence of the importance of the full parametrization, the second row of Fig. 16 repeats this test limiting ourselves to $k_{\mathrm{max}}=0.4\,h\,{\rm Mpc}^{-1}$. The full bias model and the subset including only linear, quadratic and higher derivative biases perform comparatively well.
## Appendix C The $k\to 0$ limit of the emulator

In this appendix we investigate the impact on the emulator of not correctly recovering large-scale linear growth in $N$-body simulations, as highlighted in §1. We implement two different forms of enforcing consistency with linear theory at large scales:

* Strictly reverting to LPT at $k<k_{\rm min}$. This introduces a ‘kink’ in the basis spectra predicted by the emulator.
* Extrapolating the principal component predictions out to $k<k_{\rm min}$, but with a filter to enforce linear growth. The filter is applied to the $\Gamma^{XY}(k)$ that we use to build the emulator,

$\Gamma^{XY}(k,{\bf\Omega})\to F(k)\Gamma^{XY}(k,{\bf\Omega}).$ (22)

With this filtering approach, we recover LPT at large scales by construction, without the discontinuity introduced by simply forcing LPT after some transition. The functional form we adopted for $F(k)$ is

$F(k)=\frac{1}{2}\left[1+\tanh\left(\alpha\frac{k-k_{*}}{k_{*}}\right)\right].$ (23)

This quantity asymptotes to 0 at large scales, ensuring the $\Gamma^{XY}$ are 0, and thus the ratios are consistent with unity. Fiducial values adopted are $k_{*}=0.125$ and $\alpha=2.5$, but the impact is similar for other values. Since the samples we use to test the emulator are also derived from boxes with incorrect growth, for all figures in this paper we adopt a ‘fiducial model’ where we use the $\Gamma^{XY}$ with no corrections at large scales. The emulator then has large-scale growth compatible with the boxes. If we perform a simulated likelihood analysis with the other variants that enforce LPT at large scales, we see small shifts in some cosmological parameters away from their true values. The shifts in parameters are all less than one $\sigma$, and one must keep in mind that the noise levels in our analysis are quite stringent (for example, we have no shape noise in the simulated lensing constraint). When phrased in terms of the uncertainties in parameters obtained by recent analyses (Heymans et al., 2020), these shifts are less than $(1/4)\,\sigma$.

Figure 17: The same chains as Fig. 10 but showing all parameters varied.
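A minimal sketch of the filtering option is given below, assuming $k$ and $k_{*}$ are expressed in the same units (the text quotes $k_{*}=0.125$ without units; $h\,{\rm Mpc}^{-1}$ is our assumption); the function names are illustrative.

```python
import numpy as np

def large_scale_filter(k, k_star=0.125, alpha=2.5):
    """Tanh filter of Eq. 23; tends to 0 for k << k_star, so the filtered
    Gamma^{XY} of Eq. 22 reverts to the LPT prediction on large scales."""
    return 0.5 * (1.0 + np.tanh(alpha * (k - k_star) / k_star))

def filtered_gamma(k, gamma_xy, **kwargs):
    """Apply the filter to Gamma^{XY}(k), Eq. 22."""
    return large_scale_filter(k, **kwargs) * gamma_xy
```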
# Vortex solution in elliptic coordinates Wladimir Lyra New Mexico State University, Department of Astronomy, PO Box 30001 MSC 4500, Las Cruces, NM 88001, USA ###### Abstract Vortices (flows with closed elliptic streamlines) are exact nonlinear solutions to the compressible Euler equation. In this contribution, we use differential geometry to derive the transformations between Cartesian and elliptic coordinates, and show that in elliptic coordinates a constant vorticity flow reduces to $\dot{\mu}=0$ and $\dot{\nu}={\rm const}$ along the streamline $\mu_{0}$ that matches the vortex eccentricity. ## 1 Introduction Vortices are important for planet formation, theorized as favorable locations for dust trapping (Barge & Sommeria, 1995). Crescent-shaped asymmetries have been observed in sub-mm images of protoplanetary disks (van der Marel et al., 2013), although their unambiguous identification as vortices has been elusive. A patch of constant vorticity follows the solution ${\bm{u}}=\Omega\left(-\chi y\,\hat{{\bm{x}}}+x/\chi\,\hat{{\bm{y}}}\right)$, where $(x,y)$ are the Cartesian coordinates, $\Omega$ is a constant, and $\chi=a/b>1$ is the vortex aspect ratio, the ratio between the semi-major and semi-minor axes of its elliptic streamlines. Given the elliptic streamlines, a solution in terms of elliptic coordinates $(\mu,\nu)$ is desirable. ## 2 Elliptical coordinates The orthogonal elliptical coordinate system is $\displaystyle x$ $\displaystyle=$ $\displaystyle f\cosh\mu\cos\nu,$ (1) $\displaystyle y$ $\displaystyle=$ $\displaystyle f\sinh\mu\sin\nu,$ (2) where $f=a\epsilon$ is the focal distance, $a$ the semi-major axis, and $\epsilon$ the eccentricity. Curves of constant $\mu$ define ellipses; curves of constant $\nu$ define hyperbolae. The coordinates describe confocal ellipses: the focal distance is constant, so changing $\mu$ changes not only the semi-major axis but also the eccentricity. ### 2.1 Metric The metric of this system is $\displaystyle g_{ij}$ $\displaystyle=$ $\displaystyle\frac{\partial{x^{\alpha}}}{\partial{q^{i}}}\frac{\partial{x^{\beta}}}{\partial{q^{j}}}g_{\alpha\beta},$ (3) $\displaystyle=$ $\displaystyle f^{2}\left(\sinh^{2}\mu+\sin^{2}\nu\right)\,\delta_{ij},$ where ${\bm{x}}=(x,y)$ and ${\bm{q}}=(\mu,\nu)$ are the Cartesian and elliptic coordinates, and $g_{\alpha\beta}=\delta_{\alpha\beta}$ is the metric of Cartesian space. From this transformation, it follows that the scale factors are equal $\displaystyle h_{\mu}$ $\displaystyle=$ $\displaystyle\sqrt{g_{\mu\mu}}=f\sqrt{\sinh^{2}\mu+\sin^{2}\nu},$ (4) $\displaystyle h_{\nu}$ $\displaystyle=$ $\displaystyle\sqrt{g_{\nu\nu}}=f\sqrt{\sinh^{2}\mu+\sin^{2}\nu}.$ (5) We hereafter use $h=h_{\mu}=h_{\nu}$. 
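As a quick numerical check of the transformation (1)–(2) and of the common scale factor just derived, the sketch below compares the analytic $h$ with a finite-difference estimate at an arbitrary point; the focal distance and the test point are assumed values chosen only for illustration.

```python
import numpy as np

def cart(mu, nu, f=1.0):
    """Elliptic -> Cartesian, Eqs. (1)-(2)."""
    return f * np.cosh(mu) * np.cos(nu), f * np.sinh(mu) * np.sin(nu)

def h_analytic(mu, nu, f=1.0):
    """Common scale factor, Eqs. (4)-(5)."""
    return f * np.sqrt(np.sinh(mu) ** 2 + np.sin(nu) ** 2)

mu0, nu0, eps = 0.8, 1.1, 1e-6          # assumed test point and step size

# Finite-difference scale factor along mu: |d x / d mu| at fixed nu.
x1, y1 = cart(mu0 + eps, nu0)
x0, y0 = cart(mu0 - eps, nu0)
h_fd = np.hypot(x1 - x0, y1 - y0) / (2 * eps)

print(h_fd, h_analytic(mu0, nu0))       # the two values agree to roundoff
```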
We also use the equivalent definition $h=\frac{f}{\sqrt{2}}\sqrt{\cosh\,2\mu-\cos\,2\nu}.$ (6) The derivatives with respect to the coordinates are $\displaystyle\partial_{\mu}h$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h}\,\sinh\,2\mu,$ (7) $\displaystyle\partial_{\nu}h$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h}\,\sin\,2\nu.$ (8) We calculate the Christoffel symbols in non-coordinate basis $\displaystyle\Gamma_{\hat{\alpha}\hat{\beta}\hat{\gamma}}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\left(c_{\hat{\alpha}\hat{\beta}\hat{\gamma}}+c_{\hat{\alpha}\hat{\gamma}\hat{\beta}}-c_{\hat{\beta}\hat{\gamma}\hat{\alpha}}\right),$ (9) $\displaystyle\Gamma^{\hat{\alpha}}_{\hat{\beta}\hat{\gamma}}$ $\displaystyle=$ $\displaystyle g^{\hat{\alpha}\hat{\zeta}}\Gamma_{\hat{\zeta}\hat{\beta}\hat{\gamma}},$ (10) where $c_{\hat{\beta}\hat{\gamma}\hat{\alpha}}=g_{\hat{\alpha}\hat{\zeta}}{c_{\hat{\beta}\hat{\gamma}}}^{\hat{\zeta}}$ are the connection coefficients, given by $[e_{\hat{\beta}},e_{\hat{\gamma}}]={c_{\hat{\beta}\hat{\gamma}}}^{\hat{\alpha}}\partial_{\hat{\alpha}}.$ (11) Given that $e_{\hat{\mu}}=h^{-1}\partial_{\mu}$ and $e_{\hat{\nu}}=h^{-1}\partial_{\nu}$, we have $\displaystyle[e_{\hat{\mu}},e_{\hat{\nu}}]$ $\displaystyle=$ $\displaystyle\frac{1}{h}\left[\frac{\partial{}}{\partial{\mu}}\left(\frac{1}{h}\frac{\partial{}}{\partial{\nu}}\right)-\frac{\partial{}}{\partial{\nu}}\left(\frac{1}{h}\frac{\partial{}}{\partial{\mu}}\right)\right]$ (12) $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{4}}\left(\sin\,2\nu\frac{\partial{}}{\partial{\mu}}-\sinh\,2\mu\frac{\partial{}}{\partial{\nu}}\right)$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\left(\sin\,2\nu\,\partial_{\hat{\mu}}-\sinh\,2\mu\,\partial_{\hat{\nu}}\right)$ $\displaystyle=$ $\displaystyle-[e_{\hat{\nu}},e_{\hat{\mu}}].$ The connection coefficients are thus $\displaystyle{c_{{\hat{\mu}}{\hat{\nu}}}}^{\hat{\mu}}=-{c_{{\hat{\nu}}{\hat{\mu}}}}^{\hat{\mu}}$ $\displaystyle=$ $\displaystyle{\color[rgb]{1,1,1}-}\frac{f^{2}}{2h^{3}}\sin\,2\nu,$ (13) $\displaystyle{c_{{\hat{\mu}}{\hat{\nu}}}}^{\hat{\nu}}=-{c_{{\hat{\nu}}{\hat{\mu}}}}^{\hat{\nu}}$ $\displaystyle=$ $\displaystyle-\frac{f^{2}}{2h^{3}}\sinh\,2\mu.$ (14) And the Christoffel symbols are $\displaystyle\Gamma^{\hat{\mu}}_{{\hat{\nu}}{\hat{\mu}}}=-\Gamma^{\hat{\nu}}_{{\hat{\mu}}{\hat{\mu}}}$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\sin\,2\nu,$ (15) $\displaystyle\Gamma^{\hat{\nu}}_{{\hat{\mu}}{\hat{\nu}}}=-\Gamma^{\hat{\mu}}_{{\hat{\nu}}{\hat{\nu}}}$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\sinh\,2\mu.$ (16) The elliptic and Cartesian unit vectors $\hat{e}_{i}$ and $\hat{x}_{i}$ transform according to $\hat{e}_{i}=\frac{1}{h_{i}}\frac{\partial x_{j}}{\partial e_{i}}\hat{x_{j}},$ (17) i.e., $\displaystyle\left[\begin{array}[]{c}\hat{\mu}\\\ \hat{\nu}\\\ \end{array}\right]$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{ccc}h_{\mu}^{-1}\partial_{\mu}x&{\color[rgb]{1,1,1}..}&h_{\mu}^{-1}\partial_{\mu}y\\\ h_{\nu}^{-1}\partial_{\nu}x&{\color[rgb]{1,1,1}..}&h_{\nu}^{-1}\partial_{\nu}y\\\ \end{array}\right]\left[\begin{array}[]{c}\hat{x}\\\ \hat{y}\\\ \end{array}\right]$ (24) $\displaystyle=$ $\displaystyle\frac{f}{h(\mu,\nu)}\left[\begin{array}[]{ccc}{\color[rgb]{1,1,1}-}\sinh\mu\cos\nu&{\color[rgb]{1,1,1}..}&\cosh\mu\sin\nu\\\ -\cosh\mu\sin\nu&{\color[rgb]{1,1,1}..}&\sinh\mu\cos\nu\\\ \end{array}\right]\left[\begin{array}[]{c}\hat{x}\\\ \hat{y}\\\ \end{array}\right]$ (29) This can be written compactly as $\hat{e}_{i}=E_{ij}\hat{x}_{j},$ (30) where 
$E_{ij}$ is the elliptic rotation matrix. Its inverse is $\displaystyle E^{-1}$ $\displaystyle=$ $\displaystyle\frac{f}{h}\left[\begin{array}[]{ccc}\sinh\mu\cos\nu&{\color[rgb]{1,1,1}..}&-\cosh\mu\sin\nu\\\ \cosh\mu\sin\nu&{\color[rgb]{1,1,1}..}&{\color[rgb]{1,1,1}-}\sinh\mu\cos\nu\\\ \end{array}\right].$ (33) The velocity is $u_{i}=h_{i}\dot{q}_{i}\hat{q}_{i},$ (34) which means $\displaystyle{\bm{u}}$ $\displaystyle=$ $\displaystyle\dot{x}\hat{x}+\dot{y}\hat{y},$ (35) $\displaystyle=$ $\displaystyle h_{\mu}\dot{\mu}\hat{\mu}+h_{\nu}\dot{\nu}\hat{\nu}.$ We can also get the velocity by the rotation matrix $u_{{\hat{e}}_{i}}=E_{ij}u_{c_{j}},$ (36) i.e., $\displaystyle u_{\hat{\mu}}=E_{11}u_{x}+E_{12}u_{y},$ (37) $\displaystyle u_{\hat{\nu}}=E_{21}u_{x}+E_{22}u_{y}.$ (38) Yielding the variation of the coordinate bases $\displaystyle\dot{\mu}$ $\displaystyle=$ $\displaystyle fh^{-2}\left({\color[rgb]{1,1,1}-}\sinh\mu\cos\nu\,u_{x}+\cosh\mu\sin\nu\,u_{y}\right),$ (39) $\displaystyle\dot{\nu}$ $\displaystyle=$ $\displaystyle fh^{-2}\left(-\cosh\mu\sin\nu\,u_{x}+\sinh\mu\cos\nu\,u_{y}\right).$ (40) ## 3 Vortex motion Consider a vortex in Cartesian coordinates $\displaystyle u_{x}$ $\displaystyle=$ $\displaystyle-\varOmega y\chi$ (41) $\displaystyle u_{y}$ $\displaystyle=$ $\displaystyle{\color[rgb]{1,1,1}-}\varOmega x/\chi$ (42) We seek to transform this into elliptic coordinates. The vortex motion occurs on ellipses of constant eccentricity, whereas the elliptic coordinate system defines confocal ellipses of different eccentricity. An elliptic coordinate system based on constant eccentricity (Chang & Oishi, 2010), although matching the flow geometry, is not orthogonal, which complicates analysis (Lyra & Lin, 2013). If the streamlines matched the eccentricities of the confocal ellipses, the velocity would everywhere reduce to $\dot{\mu}=0$ and $\dot{\nu}={\rm const}$. However, that is not the case, as one can verify that this is not divergenceless. In fact, there is only one streamline that obeys $\dot{\mu}=0$ and $\dot{\nu}={\rm const}$, which is the streamline of eccentricity matching the eccentricity of the vortex. This is the particular ellipse $\mu_{0}$, given by $\tanh\mu_{0}=\chi^{-1}$. We write the velocities as $\displaystyle u_{x}$ $\displaystyle=$ $\displaystyle-\varOmega\,f\,\cosh\mu\sin\nu,$ (43) $\displaystyle u_{y}$ $\displaystyle=$ $\displaystyle{\color[rgb]{1,1,1}-}\varOmega\,f\,\sinh\mu\cos\nu.$ (44) We transform these into elliptical coordinates by the rotation matrix $u_{{\hat{e}}_{i}}=E_{ij}u_{c_{j}}$ (45) yielding $\displaystyle u_{\hat{\mu}}$ $\displaystyle=$ $\displaystyle-\Omega\frac{f^{2}}{2h}\left(\frac{\cosh\,2\mu-\cosh\,2\mu_{0}}{\sinh\,2\mu_{0}}\right)\sin\,2\nu,$ (46) $\displaystyle u_{\hat{\nu}}$ $\displaystyle=$ $\displaystyle\Omega\frac{f^{2}}{2h}\left(\frac{\sinh\,2\mu}{\sinh\,2\mu_{0}}\right)(\cosh\,2\mu_{0}-\cos\,2\nu).$ (47) The divergence is $\displaystyle{\bm{\nabla}}\cdot{A}$ $\displaystyle=$ $\displaystyle u^{\hat{\alpha}}_{;\hat{\alpha}}=u^{\hat{\alpha}}_{,\hat{\alpha}}+\Gamma^{\hat{\alpha}}_{\hat{\beta}\hat{\alpha}}u^{\hat{\beta}},$ (49) $\displaystyle=$ $\displaystyle u^{\hat{\alpha}}_{,\hat{\alpha}}+\frac{f^{2}}{2h^{3}}\left(u^{\hat{\mu}}\sin\,2\nu+u^{\hat{\nu}}\sinh\,2\mu\right),$ or, abandoning the co-variant formulation, ${\bm{\nabla}}\cdot{A}=\frac{1}{h^{2}}\left(\frac{\partial{hu_{\hat{\mu}}}}{\partial{\mu}}+\frac{\partial{hu_{\hat{\nu}}}}{\partial{\nu}}\right).$ (51) we conclude that the flow is divergenceless. Eq. (46) and Eq. 
(47) may seem daunting at first, but following the motion at the ellipse of $\mu$=$\mu_{0}$ simplifies it considerably. For $\mu=\mu_{0}$, Eq. (46) cancels. For Eq. (47), the factor in parentheses becomes unity; the next term, given Eq. (6), is $2h^{2}/f^{2}$. Thus, for $\mu$=$\mu_{0}$, the motion is $u_{\hat{\mu}}$=0, $u_{\hat{\nu}}$=$\varOmega\,h_{0}$. Comparing with Eq. (35) yields $\displaystyle\dot{\mu}$ $\displaystyle=$ $\displaystyle 0,$ (52) $\displaystyle\dot{\nu}$ $\displaystyle=$ $\displaystyle\varOmega.$ (53) For the particular $\mu=\mu_{0}$ ellipse, the motion has constant $\mu$: a closed elliptic streamline. The angle $\nu$ rotates uniformly. Notice that this does not mean that the velocity itself is uniform, since $h$ depends on $\nu$. The explicit dependency of $u_{\nu}$ on $\nu$ is $\displaystyle u_{\nu}^{2}$ $\displaystyle=$ $\displaystyle\frac{\varOmega^{2}\,f^{2}}{2}\left(\cosh\,2\mu-\cos\,2\nu\right)$ (54) $\displaystyle=$ $\displaystyle\varOmega^{2}\,f^{2}\left(\sinh^{2}\mu+\sin^{2}\nu\right).$ ## 4 Energy conservation That the kinetic energy $K=u_{\nu}^{2}/2$ depends on $\nu$, a function of time, may seem strange at first. We show that this happens because the velocity change is compensated by a change in pressure ($p$), conserving the total energy. The energy equation is $\frac{\partial E}{\partial t}=-{\bm{\nabla}}\cdot{\left[{\bm{u}}\left(E+p\right)\right]}+{\bm{F}}_{b}\cdot{\bm{u}}$ (55) where ${\bm{F}}_{b}$ is a body force. The total energy is $E=K+\varepsilon$, where $\varepsilon=k_{B}T$ is the internal energy ($k_{B}$ is Boltzmann’s constant and $T$ is the temperature). In the absence of a body force and for constant temperature, this reduces to $\frac{\partial}{\partial t}\left(\frac{u^{2}}{2}\right)=-\left({\bm{u}}\cdot{\bm{\nabla}}\right)\left(\frac{u^{2}}{2}+p/\rho\right)$ (56) therefore $\frac{d}{dt}\left(\frac{u^{2}}{2}\right)=-\left({\bm{u}}\cdot{\bm{\nabla}}\right)p/\rho.$ (57) The enthalpy is found by Euler’s equation $\displaystyle\partial_{x}p/\rho$ $\displaystyle=$ $\displaystyle-u_{y}\partial_{y}u_{x}=\varOmega^{2}x,$ (58) $\displaystyle\partial_{y}p/\rho$ $\displaystyle=$ $\displaystyle-u_{x}\partial_{x}u_{y}=\varOmega^{2}y.$ (59) Taking the $x$-derivative of the first and the $y$-derivative of the second, we find $\nabla^{2}\left(p/\rho\right)=2\varOmega^{2}$; therefore $p/\rho=\frac{1}{2}\varOmega^{2}\left(x^{2}+y^{2}\right)+C,$ (60) which is an intriguing result: an incompressible elliptical vortex produces an axisymmetric pressure distribution. Transforming into elliptic coordinates and eliminating the constant, $p/\rho=\frac{1}{2}\varOmega^{2}f^{2}\left(\cosh^{2}\mu\cos^{2}\nu+\sinh^{2}\mu\sin^{2}\nu\right).$ (61) Along the $\mu_{0}$ streamline, the advection reduces to the $\nu$-term $\displaystyle-\left({\bm{u}}\cdot{\bm{\nabla}}\right)p/\rho$ $\displaystyle=$ $\displaystyle-u_{\nu}h^{-1}\partial_{\nu}p/\rho$ (62) $\displaystyle=$ $\displaystyle\frac{1}{2}\varOmega^{3}f^{2}\sin\,2\nu,$ whereas the time derivative of the kinetic energy is $\displaystyle\frac{d}{dt}\left(\frac{u^{2}}{2}\right)$ $\displaystyle=$ $\displaystyle\varOmega^{2}h\frac{dh}{dt}$ (63) $\displaystyle=$ $\displaystyle\frac{1}{2}\varOmega^{3}f^{2}\sin\,2\nu.$ That the two variations match amounts to conservation of energy: along the ellipse, the material slows down or speeds up in order to match the pressure variation. ## 5 Euler equation in elliptical coordinates We consider now the force balance. 
We use the transformations here derived to write the Euler equation in elliptic coordinates $\partial_{t}{\bm{u}}+\left({\bm{u}}\cdot{\bm{\nabla}}\right){\bm{u}}=-{\bm{\nabla}}{p/\rho}.$ (64) Using covariant derivatives, this reads $\partial_{t}u^{\hat{k}}+u^{\hat{p}}\partial_{\hat{p}}u^{\hat{k}}+\Gamma^{\hat{k}}_{\hat{m}\hat{n}}u^{\hat{m}}u^{\hat{n}}=-\partial_{\hat{k}}{p/\rho}.$ (65) For $\mu$, the correction due to the Christoffel symbols is $\displaystyle\Gamma^{\hat{\mu}}_{{\hat{m}}{\hat{n}}}u^{{\hat{m}}}u^{{\hat{n}}}$ $\displaystyle=$ $\displaystyle\Gamma^{{\hat{\mu}}}_{{\hat{\nu}}{\hat{\mu}}},$ (66) $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\left[u^{{\hat{\nu}}}u^{{\hat{\mu}}}\,\sin\,2\nu-(u^{{\hat{\nu}}})^{2}\sinh\,2\mu\right].$ The same procedure for $\nu$ yields $\displaystyle\Gamma^{{\hat{\nu}}}_{{\hat{m}}{\hat{n}}}u^{{\hat{m}}}u^{{\hat{\nu}}}$ $\displaystyle=$ $\displaystyle\Gamma^{\hat{\nu}}_{{\hat{\nu}}{\hat{\mu}}}u^{\hat{\mu}}u^{\hat{\nu}}+\Gamma^{\hat{\mu}}_{{\hat{\mu}}{\hat{\mu}}}(u^{{\hat{\mu}}})^{2},$ (67) $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\left[u^{\hat{\nu}}u^{\hat{\mu}}\,\sinh\,2\mu-(u^{\hat{\mu}})^{2}\,\sin\,2\nu\right].$ Abandoning the co-variant notation $\displaystyle\partial_{t}\,u_{\mu}$ $\displaystyle=$ $\displaystyle-\left(h^{-1}u_{\mu}\partial_{\mu}+h^{-1}u_{\nu}\partial_{\nu}\right)u_{\mu}-h^{-1}\partial_{\mu}{p/\rho}-\frac{f^{2}}{2h^{3}}\left(u_{\nu}u_{\mu}\,\sin\,2\nu- u_{\nu}^{2}\sinh\,2\mu\right),$ (68) $\displaystyle\partial_{t}\,u_{\nu}$ $\displaystyle=$ $\displaystyle-\left(h^{-1}u_{\mu}\partial_{\mu}+h^{-1}u_{\nu}\partial_{\nu}\right)u_{\nu}-h^{-1}\partial_{\nu}{p/\rho}-\frac{f^{2}}{2h^{3}}\left(u_{\nu}u_{\mu}\,\sinh\,2\mu- u_{\mu}^{2}\,\sin\,2\nu\right).$ (69) This differs from the usual equations by the presence of the extra force $\displaystyle{\bm{F}}$ $\displaystyle=$ $\displaystyle\frac{f^{2}}{2h^{3}}\left[\left(u_{\nu}u_{\mu}\,\sin\,2\nu- u_{\nu}^{2}\sinh\,2\mu\right)\hat{\mu}+\left(u_{\nu}u_{\mu}\,\sinh\,2\mu- u_{\mu}^{2}\,\sin\,2\nu\right)\hat{\nu}\right]$ (70) Contracting this force with the velocity yields ${\bm{F}}\cdot{\bm{u}}=0$, which shows that this force is inertial. For the vortical flow, again following the $\mu_{0}$ streamline where $\dot{\mu}=0$ and $\dot{\nu}=\varOmega$, these reduce to $\displaystyle-\frac{\varOmega^{2}f^{2}}{2}\sinh\,2\mu_{0}$ $\displaystyle=$ $\displaystyle-\partial_{\mu}{p/\rho}$ (71) $\displaystyle\left(u_{\nu}\partial_{\nu}\right)u_{\nu}$ $\displaystyle=$ $\displaystyle-\partial_{\nu}{p/\rho}$ (72) I.e, a constant centrifugal force that balances the normal pressure gradient, and inertia in the tangential direction exchanging kinetic energy with the pressure field. Fig 1 sketches the forces. Figure 1: Force balance in an elliptic vortex streamline (solid line). Dotted circles represent the pressure contours. The velocity (black arrow) is tangent to the streamline, in the $\hat{\nu}$ direction. The pressure gradient (blue arrow) is broken down in its $\hat{\mu}$ and $\hat{\nu}$ components (brown arrows). The $\hat{\mu}$ component is balanced by the centrifugal force (red arrow); the $\hat{\nu}$ component is balanced by advection. ## References * Barge & Sommeria (1995) Barge, P., & Sommeria, J. 1995, A&A, 295, L1 * Chang & Oishi (2010) Chang, P., & Oishi, J. S. 2010, ApJ, 721, 1593, doi: 10.1088/0004-637X/721/2/1593 * Lyra & Lin (2013) Lyra, W., & Lin, M.-K. 2013, ApJ, 775, 17, doi: 10.1088/0004-637X/775/1/17 * van der Marel et al. (2013) van der Marel, N., van Dishoeck, E. 
F., Bruderer, S., et al. 2013, Science, 340, 1199, doi: 10.1126/science.1236770
# Sharp Signal Detection under Ferromagnetic Ising Models Sohom Bhattacharya Sohom Bhattacharya Department of Statistics, Stanford University, CA, USA, <EMAIL_ADDRESS>, Rajarshi Mukherjee Rajarshi Mukherjee Department of Biostatistics, Harvard University, MA, USA, <EMAIL_ADDRESS>and Gourab Ray Gourab Ray Department of Mathematics and Statistics, University of Victoria, Vancouver, BC, Canada, <EMAIL_ADDRESS> ###### Abstract. In this paper we study the effect of dependence on detecting a class of structured signals in Ferromagnetic Ising models. Natural examples of our class include Ising Models on lattices, and Mean-Field type Ising Models such as dense Erdős-Rényi, and dense random regular graphs. Our results not only provide sharp constants of detection in each of these cases and thereby pinpoint the precise relationship of the detection problem with the underlying dependence, but also demonstrate how to be agnostic over the strength of dependence present in the respective models. ## 1\. Introduction ††2010 Mathematics Subject Classification: 62G10, 62G20, 60C20††Keywords and phrases: Ising Model, Signal Detection, Structured Sparsity, Sharp Constants Let $\mathbf{X}=(X_{1},\ldots,X_{n})^{\top}\in\\{\pm 1\\}^{n}$ be a random vector with the joint distribution of $\mathbf{X}$ given by an Ising model defined as: $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x}):=\frac{1}{Z(\beta,\mathbf{Q},\mathbf{\boldsymbol{\mu}})}\exp{\left(\frac{\beta}{2}\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}+\boldsymbol{\mu}^{\top}\mathbf{x}\right)},\qquad\forall\mathbf{x}\in\\{\pm 1\\}^{n},$ (1) where $\mathbf{Q}$ is an $n\times n$ symmetric and hollow matrix, $\boldsymbol{\mu}:=(\mu_{1},\ldots,\mu_{n})^{\top}\in\mathbb{R}^{n}$ is an unknown parameter vector to be referred to as the external magnetization vector, $\beta\in\mathbb{R}$ is a real number usually referred to as the “inverse temperature”, and $Z(\beta,\mathbf{Q},\mathbf{\boldsymbol{\mu}})$ is a normalizing constant. It is clear that the pair $(\beta,\mathbf{Q})$ characterizes the dependence among the coordinates of $\mathbf{X}$, and $X_{i}$’s are independent if $\beta\mathbf{Q}=\mathbf{0}_{n\times n}$. The matrix $\mathbf{Q}$ will usually be associated with a certain sequence of simple labeled graphs $\mathbb{G}_{n}=(V_{n},E_{n})$ with vertex set $V_{n}=\\{1,\dots,n\\}$ and edge set $E_{n}\subseteq V_{n}\times V_{n}$ and corresponding $\mathbf{Q}=|V_{n}|G_{n}/2|E_{n}|$, where $G_{n}$ is the adjacency matrix of $\mathbb{G}_{n}$. Note that we do not absorb $\beta$ in the matrix $\mathbf{Q}$. This is because we want to understand the effect of the nuisance parameter $\beta$ on the inference about $\boldsymbol{\mu}$. We are interested in testing against a collection of alternatives defined by a class of subsets $\mathcal{C}_{s}$ of $\mathbb{R}_{+}^{n}$ each of which is of size $s$. 
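Before making the class of alternatives precise, it is worth noting that the model (1) can be simulated exactly for very small $n$ by enumerating all $2^{n}$ spin configurations. The sketch below does this for an assumed coupling matrix and external field; the values are illustrative only and are not tied to the specific graphs analyzed later.

```python
import itertools
import numpy as np

def ising_pmf(Q, beta, mu):
    """Exact probabilities of model (1) for small n via brute-force enumeration."""
    n = Q.shape[0]
    configs = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    # Unnormalized log-weights: (beta/2) x^T Q x + mu^T x.
    logw = 0.5 * beta * np.einsum("ci,ij,cj->c", configs, Q, configs) + configs @ mu
    w = np.exp(logw - logw.max())
    return configs, w / w.sum()             # division by w.sum() plays the role of Z

# Assumed example: Curie-Weiss-type coupling Q_ij = 1(i != j)/n on n = 8 spins.
n = 8
Q = (np.ones((n, n)) - np.eye(n)) / n
mu = np.zeros(n)
configs, probs = ising_pmf(Q, beta=0.5, mu=mu)

# Average magnetization under the null (mu = 0); by symmetry this is ~0.
print(float(probs @ configs.mean(axis=1)))
```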
More precisely, given any class of subsets $\mathcal{C}_{s}\subset\\{S\subset\mathbb{R}_{+}^{n}:|S|=s\\}$ of $\mathbb{R}_{+}^{n}$ having size $s$ each, we consider testing the following hypotheses $H_{0}:\boldsymbol{\mu}=\mathbf{0}\quad{\rm vs}\quad H_{1}:\boldsymbol{\mu}\in\Xi(\mathcal{C}_{s},A),$ (2) where ${\Xi}(\mathcal{C}_{s},A):=\left\\{\begin{array}[]{c}\boldsymbol{\mu}\in\mathbb{R}_{+}^{n}:\mathrm{supp}(\boldsymbol{\mu})\in\mathcal{C}_{s}{\rm\ and\ }\min\limits_{i\in{\rm supp}(\boldsymbol{\mu})}\mu_{i}\geq A\end{array}\right\\},$ and ${\rm supp}(\boldsymbol{\mu}):=\\{i\in\\{1,\ldots,n\\}:\mu_{i}\neq 0\\}.$ Thus the class of alternatives $\Xi(\mathcal{C}_{s},A)$ puts non-zero signals on one of candidate sets in $\mathcal{C}_{s}$ where each signal set has size $s$. Of primary interest here is to explore the effect of $(\beta,\mathbf{Q})$ in testing (2) when $\mathcal{C}_{s}$ has low complexity in a suitable sense. In this regard, previously, [6, 4] studied the detection of block-sparse and thick shaped signals on lattices while [1] considered general class of signals of combinatorial nature. However these papers crucially assume independence between the outcomes and thereby correspond to $\beta=0$ in (1) in our context. Following up on this line of research, several other papers have also considered detection of signals over lattices and networks (see e.g. [26, 50, 7, 45, 49, 13, 35] and references therein). However, in overwhelming majority of the literature, the underlying networks only describe the nature of signals – such as rectangles or thick clusters in lattices [6]. A fundamental question however remains – “how does dependence characterized by a network modulate the behavior of such detection problems ?” Only recently, [26] explored the effect of dependence 111Effect of dependence in signal detection for Gaussian outcomes has also been explored for detecting unstructured arbitrary sparse signals [31, 32]. on such structured detection problems for stationary Gaussian processes – with examples including linear lattices studied through the lens of Gaussian auto-regressive observation schemes. Dependence structures beyond Gaussian random variables are often more challenging to analyze (due to possible lack of closed form expressions of resulting distributions) and allow for interesting and different behavior of such testing problems – see e.g. [42]. One of the motivations of this paper is to fill this gap in the literature and show how dependent binary outcomes can substantially change the results for detecting certain classes structured signals. One of the motivations of this paper is to pinpoint the precise effect of dependence on the behavior of such testing problems. In particular, [42, 18] demonstrate how dependence might have a subtle effect on the minimax separation rate of sparse testing problems. In this paper we crystallize the effect of such dependence by going beyond optimal rates and characterizing sharp asymptotic constant for minimax separation while testing against suitably structured hypotheses $\mathcal{C}_{s}$. To describe our results in the context of the model-problem pair (1)-(2) we adopt a standard minimax framework as follows. Let a statistical test for $H_{0}$ versus $H_{1}$ be a measurable $\\{0,1\\}$ valued function of the data $\mathbf{X}$, with $1$ denoting rejecting the null hypothesis $H_{0}$ and $0$ denoting not rejecting $H_{0}$. 
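Before formalizing the worst-case risk of such a test, note that the alternative class $\Xi(\mathcal{C}_{s},A)$ is easy to instantiate in code. The sketch below does so for one concrete choice of $\mathcal{C}_{s}$, contiguous index blocks of length $s$, an assumption made purely for illustration.

```python
import numpy as np

def contiguous_blocks(n, s):
    """One illustrative low-complexity class C_s: all contiguous index blocks of size s."""
    return [list(range(start, start + s)) for start in range(n - s + 1)]

def signal_vector(n, support, A):
    """A member of Xi(C_s, A): mu_i = A on the chosen support, 0 elsewhere."""
    mu = np.zeros(n)
    mu[support] = A
    return mu

n, s, A = 100, 10, 0.3
C_s = contiguous_blocks(n, s)
mu = signal_vector(n, C_s[42], A)
print(len(C_s), int(np.count_nonzero(mu)))   # 91 candidate supports, sparsity s = 10
```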
The worst case risk of a test $T:\\{\pm 1\\}^{\Lambda_{n}(d)}\to\\{0,1\\}$ for testing (2) is defined as $\displaystyle\mathrm{Risk}(T,{\Xi}(\mathcal{C}_{s},A),\beta,\mathbf{Q})$ $\displaystyle:=\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(T(\mathbf{X})=1\right)+\sup_{\boldsymbol{\mu}\in{\Xi}(\mathcal{C}_{s},A)}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(T(\mathbf{X})=0\right).$ (3) We say that a sequence of tests $T_{n}$ corresponding to a sequence of model- problem pair (1) and (2), to be asymptotically powerful (respectively asymptotically not powerful) against $\Xi(\mathcal{C}_{s},A)$ if $\limsup\limits_{n\rightarrow\infty}\mathrm{Risk}(T_{n},\Xi(\mathcal{C}_{s},A),\beta,\mathbf{Q})=0\text{ (respectively }\liminf\limits_{n\rightarrow\infty}\mathrm{Risk}(T_{n},\Xi(\mathcal{C}_{s},A),\beta,\mathbf{Q})>0).$ (4) The goal of the current paper is to characterize how for some low complexity class $\mathcal{C}_{s}$, the sparsity $s$, and strength $A$ of the signal jointly determine if there is an asymptotically powerful test, and how the behavior changes with $(\beta,\mathbf{Q})$. With the above framework, our main results can be summarized as follows. 1. (i) For some classical mean-field type models we show that detecting low complexity sets (see Theorem 1 for exact definition) has same constant of detection for low and critical dependence and has a larger constant of minimax separation (i.e. strictly more information theoretic hardness) for higher dependence. Our examples naturally include the complete graph (Theorem 1), dense Erdős-Rényi and dense random regular graphs(Theorem 3). 2. (ii) For detecting thick rectangular signals in Ising models over lattices of general dimensions, we present the sharp minimax separation constants (Theorem 6) for low dependence and high dependence (under a “pure phase” defined in Section 3). In contrast to dense regular graphs, the problem has a strict monotone increasing nature of the constant as one approaches criticality from the low dependence direction (Lemma 7). The exact monotonic nature of the constant in the high dependence case is not clear and is left as future research direction. 3. (iii) We further demonstrate how the sharp optimal tests can be obtained adaptively over the dependence strength $\beta$ (Theorem 2 and Theorem 8). The rest of this paper is organized as follows. In Section 2 we present our results for detecting low complexity type signals in dense regular type graphs. Subsequently, Section 3 considers detecting thick rectangular signals over lattices. Finally all the proofs and associated technical lemmas are collected in Section 6. ### 1.1. Notation $[n]$, for $\mathbf{v}=(v_{1},\ldots,v_{d})^{T}\in\mathbb{R}^{d}$ define $\bar{\mathbf{v}}=\frac{1}{d}\sum_{i=1}^{d}v_{i}$, for any set $A$ define $\mathbbm{1}(\cdot\in A)$ as the indicator function for the set. For any set $S\in\mathcal{S}$, let $\boldsymbol{\mu}_{S}(A)$ denote the vector with $\mu_{i}=A\mathbbm{1}_{i\in S}$. Let $\mathbf{1}$ denote the vector of all $1$s. We also let $\mathbbm{1}$ to denote generic indicator functions. The results in this paper are mostly asymptotic (in $n$) in nature and thus requires some standard asymptotic notations. If $a_{n}$ and $b_{n}$ are two sequences of real numbers then $a_{n}\gg b_{n}$ (and $a_{n}\ll b_{n}$) implies that ${a_{n}}/{b_{n}}\rightarrow\infty$ (and ${a_{n}}/{b_{n}}\rightarrow 0$) as $n\rightarrow\infty$, respectively. 
Similarly $a_{n}\gtrsim b_{n}$ (and $a_{n}\lesssim b_{n}$) implies that $\liminf_{n\rightarrow\infty}{{a_{n}}/{b_{n}}}=C$ for some $C\in(0,\infty]$ (and $\limsup_{n\rightarrow\infty}{{a_{n}}/{b_{n}}}=C$ for some $C\in[0,\infty)$). Alternatively, $a_{n}=o(b_{n})$ will also imply $a_{n}\ll b_{n}$ and $a_{n}=O(b_{n})$ will imply that $\limsup_{n\rightarrow\infty}\ a_{n}/b_{n}=C$ for some $C\in[0,\infty)$). If $C>0$ then we write $a_{n}=\Theta(b_{n})$. If $a_{n}/b_{n}\rightarrow 1$, then we say $a_{n}\sim b_{n}$. For any $a,b\in\mathbb{Z}$, $a\leq b$, let $[a,b]:=\\{a,a+1,\ldots,b\\}$. ## 2\. Mean-Field type Interactions In this section, we collect our results on the testing problem (2) for some specific examples of mean-field type models [8] such as Ising models on the complete graph, and dense Erdős-Rényi and random regular graphs. However, for the precise statement the upper bounds of our results we first define a class of signals $\mathcal{C}_{s}$. This is captured by a notion of complexity defined through the following weighted Hamming type of metric (see e.g. – [4, 6]) on $2^{[n]}$. Mathematically, for any two subsets $S_{1},S_{2}\subset[n]$ we let $\gamma(S_{1},S_{2}):=\sqrt{2}\left(1-\frac{|S_{1}\cap S_{2}|}{\sqrt{|S_{1}||S_{2}|}}\right)$ denote their distance. Subsequently, for any $\varepsilon>0$ we let $|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon)|$ denote the $\varepsilon$-covering number of $\mathcal{C}_{s}$ w.r.t. $\gamma$. Our main result in terms of detecting signals in $\Xi(\mathcal{C}_{s},A)$ pertains to classes of signals with suitably low complexity defined through the asymptotic behavior of $|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon)|$. In this regard we first provide a complete picture for all temperature regimes in the mean-field Curie-Weiss model followed by demonstrating how similar results might be obtained in high temperature regimes for other dense regular type graphs. ### 2.1. Complete Graph We begin by stating and discussing our results for the Curie-Weiss model. ###### Theorem 1. Consider testing (2) in the model (1) with $\mathbf{Q}_{ij}=\frac{\mathbf{1}(i\neq j)}{n}$ correspond to the complete graph. 1. i. Assume that $\mathcal{C}_{s}$ satisfies the following condition w.r.t. the metric $\gamma$ for some sequence $\varepsilon_{n}\rightarrow 0$: $\Theta(\log n)=\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|\ll s\ll n/\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|.$ Then the following hold. 1. _a_) For $\beta<1$, $s\ll\frac{n}{\log n}$, there exists a sequence of asymptotically powerful test if $\liminf_{n\rightarrow\infty}\sqrt{s}\tanh(A)(\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|)^{-1/2}>\sqrt{2}.$ (5) 2. _b_) For $\beta=1$, the same conclusion is valid for $s\ll\frac{\sqrt{n}}{\log n}$. 3. _c_) For $\beta>1$ and $s\ll\frac{n}{\log n}$, there exists a sequence of asymptotically powerful test if $\liminf_{n\rightarrow\infty}\sqrt{s}\tanh(A)(\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|)^{-1/2}>\sqrt{2}\cosh(\beta m),$ (6) where $m$ is the unique positive root of the equation $m=\tanh(\beta m)$. 2. ii. Assume that, there exists a subset $\tilde{\mathcal{C}}_{s}\subset\mathcal{C}_{s}$ of disjoint sets such that $\Theta(\log n)=\log|\tilde{\mathcal{C}}_{s}|\ll s\ll n/\log|\tilde{\mathcal{C}}_{s}|.$ Then the following hold. 1. 
_a_) For $\beta<1$ and $s\ll\frac{n}{\log n}$, all tests are asymptotically powerless if $\liminf_{n\rightarrow\infty}\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}<\sqrt{2}.$ (7) 2. _b_) For $\beta=1$, the same conclusion is valid for $s\ll\frac{\sqrt{n}}{\log n}$. 3. _c_) For $\beta>1$ and $s\ll\frac{n}{\log n}$, no tests are asymptotically powerful if $\liminf_{n\rightarrow\infty}\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}<\sqrt{2}\cosh(\beta m).$ (8) A few remarks are in order regarding the conditions, involved optimal procedures, and implications of Theorem 1. We first note that the upper and lower bounds are sharp as long as there exist $\tilde{\mathcal{C}}_{s},\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ such that $\log|\tilde{\mathcal{C}}_{s}|\sim\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|$ for some $\varepsilon_{n}\rightarrow 0$. In particular, the sharp constants match for testing a class of signals whose size is dominated (on the log-scale) by the size of a subclass of disjoint sets. Next we note that the conditions on $s$ posited in the various parts of the theorem are actually optimal. Indeed, as was discussed in [18], for $\beta=1$ and $s\gg\sqrt{n}$ the rate of detection is much faster (requiring only $s\tanh(A)\gg n^{1/4}$) and the problem might not offer a phase transition at the level of sharp constants. Next we note that the optimal test above is based on a suitably calibrated scanning procedure. However, the scanning procedure is somewhat different depending on the regime of dependence. In the low and critical dependence regimes ($0\leq\beta\leq 1$) the procedure can be described as follows. For $S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ we first define $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$ and our scan test rejects for large values of $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}$. In contrast, for the high dependence regime ($\beta>1$) we perform a somewhat randomized scan test by first generating $W_{n}\sim N(\bar{\mathbf{X}},1/(n\beta))$ and subsequently, for $W_{n}>0$, rejecting $H_{0}$ for large values of $Z_{\max}-m\sqrt{s}$, and, when $W_{n}\leq 0$, rejecting $H_{0}$ for large values of $Z_{\max}+m\sqrt{s}$, where $m:=m(\beta)$ is the unique positive root of $m=\tanh(\beta m)$. The fact that these sequences of tests are indeed sharp optimal is thereafter demonstrated by precise analyses of the Type II errors (through a mean-variance control) and a matching lower bound calculation obtained through a truncated second moment approach. Both the analyses of the tests and the proofs for matching lower bounds rely on the moderate deviation behavior of $Z_{S},S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ – which is obtained through Lemma 1. Finally, a direct analysis of the sharp constants of detection, $\sqrt{2}$ for $\beta\leq 1$ and $\sqrt{2\cosh^{2}(\beta m(\beta))}$ for $\beta>1$, reveals that the problem becomes strictly harder once the dependence exceeds the critical value $\beta=1$. In particular, the sharp constant of detection can be succinctly expressed as $\sqrt{2\cosh^{2}(\beta m(\beta))}$, which we display in Figure 1 below to demonstrate this phenomenon. Figure 1. Behavior of $\sqrt{2}\cosh(\beta m(\beta))$ as a function of $\beta$. To describe our next result, we note that the proof of the upper bound in Theorem 1 assumes knowledge of $\beta>0$. Our next result therefore pertains to showing that the sharp rates obtained above can actually be obtained adaptively over the knowledge of the inverse temperature $\beta>0$. 
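Before stating the adaptive result, we sketch the two scanning procedures just described, written for a generic list of candidate supports. The covering net, the supports, and the data below are placeholders, and the rejection thresholds are left as arguments, since in the theorem they are calibrated through the moderate deviation behavior of $Z_{S}$ (Lemma 1).

```python
import numpy as np

def scan_statistic(x, supports):
    """Z_max = max_S sum_{i in S} x_i / sqrt(s) over candidate supports S."""
    return max(x[list(S)].sum() / np.sqrt(len(S)) for S in supports)

def scan_test_low_temp(x, supports, threshold):
    """Scan test used for 0 <= beta <= 1: reject for large Z_max."""
    return int(scan_statistic(x, supports) > threshold)

def scan_test_high_temp(x, supports, beta, m, threshold, rng=np.random.default_rng(0)):
    """Randomized scan test for beta > 1: recenter Z_max by +/- m*sqrt(s)
    according to the sign of W_n ~ N(x_bar, 1/(n*beta))."""
    n = len(x)
    s = len(supports[0])
    w = rng.normal(loc=x.mean(), scale=1.0 / np.sqrt(n * beta))
    shift = m * np.sqrt(s) if w > 0 else -m * np.sqrt(s)
    return int(scan_statistic(x, supports) - shift > threshold)

# Toy usage with synthetic +/-1 data (not drawn from an Ising model).
rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=200)
supports = [range(i, i + 20) for i in range(0, 180, 20)]
print(scan_test_low_temp(x, supports, threshold=3.0))
```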
To this end we begin by noting that using a consistent of $\beta>0$ might seem hopeless to begin with since consistent estimation of $\beta$ is not possible when $\beta\in[0,1)$. However, when $\beta\leq 1$, our optimal test does not depend on the specific knowledge of $\beta$. Therefore our idea can be described as follows: (1) construct a consistent test to decide whether $\beta\leq 1$ or $\beta>1$; (2) if the test rejects in favor of $\beta>1$ then use a pseudo-likelihood estimator of $\beta$ under the working model of $\boldsymbol{\mu}=0$ 333this is important since joint estimation of $\beta,\boldsymbol{\mu}$ can be significantly harder information theoretically and if the test return in favor of $\beta\leq 1$ then construct the $\beta$ independent optimal test in Theorem 1. ###### Theorem 2. Theorem 1 holds for unknown $\beta>0$ if $\|\boldsymbol{\mu}\|_{\infty}=O(1)$. The proof of Theorem 2, which is deferred to Section 6, requires the assumption on $\boldsymbol{\mu}$ in terms of its maximal element. Although this requirement can be relaxed to $\|\boldsymbol{\mu}\|_{\infty}=o(n/s)$, our arguments were unable to get rid of it completely. We keep further explorations in this regard for future research. ### 2.2. Dense Regular Graphs The results in the Curie-Weiss model in the last section provides insight on the possible behavior of this testing problem under mean-field type models [8]. Here we demonstrate that this intuition of similar behavior to the Curie- Weiss model with regard to this inferential problem is indeed true for some specific examples of mean-field type models such as dense Erdős-Rényi and random regular graphs, which is our first result in this direction. In particular, we let $\mathbb{G}_{n}=(V_{n},E_{n})\sim\mathcal{G}_{n}(n,p)$ denote an Erdős-Rényi random graph with edges $E_{n}$ formed by joining pairs of vertices $i,j\in V_{n}=\\{1,\ldots,n\\}$ independently with probability $p\in(0,1)$. In a similar vein we let $\mathbb{G}_{n}=(V_{n},E_{n})\sim\mathcal{G}_{n}(n,d)$ denote an randomly drawn graph from the collection of all $d$-regular graphs on $V_{n}=\\{1,\ldots,n\\}$. ###### Theorem 3. The same conclusion of Theorem 1 hold for any $\beta\geq 0$ when either (i) $\mathbf{Q}=\frac{G_{n}}{np}$ with $G_{n}$ being the adjacency matrix of $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,p)$ with $p=\Theta(1)$; or (ii) $\mathbf{Q}=\frac{G_{n}}{d}$ with $G_{n}$ being the adjacency matrix of $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,d)$ with $d=\Theta(n)$. A few remarks in order regarding the statement and proof of Theorem 3. First, the result should be understood as a high probability statement w.r.t. the randomness of the underlying Erdős-Rényi random graph. In particular, we prove that same results as in Theorem 1 hold with probability converging to $1$ under the Erdős-Rényi measure on $\mathbb{G}_{n}$. In this regard, the requirement of $p=\Theta(1)$ is mostly used for the $\beta>1$ case and can be relaxed for $\beta\leq 1$ regime. However, to keep our discussions consistent over values of $\beta$ we only consider the $p=\Theta(1)$ case. Finally, the proof of the theorem mainly operates through careful comparison of suitable event probabilities and partition functions under Ising models over Erdős- Rényi and complete graphs respectively – the proofs of which can be found in Section 6.4. We also show that the result above can be obtain without the knowledge of $\beta$. ###### Theorem 4. Theorem 3 holds for unknown $\beta>0$ if $\|\boldsymbol{\mu}\|_{\infty}=O(1)$. 
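The adaptation underlying Theorems 2 and 4 uses a pseudo-likelihood estimate of $\beta$ computed under the working model $\boldsymbol{\mu}=\mathbf{0}$. A minimal sketch of such an estimator, in the spirit of Besag [9] and Chatterjee [14], is given below; the grid search and the example data are illustrative simplifications, not the construction used in the proofs.

```python
import numpy as np

def neg_pseudo_loglik(beta, x, Q):
    """Negative log pseudo-likelihood of model (1) with mu = 0:
    P(X_i = x_i | X_{-i}) = exp(beta * x_i * m_i) / (2 cosh(beta * m_i)),
    where m_i = sum_j Q_ij x_j is the local field at site i."""
    m = Q @ x
    return float(np.sum(np.log(2.0 * np.cosh(beta * m)) - beta * x * m))

def fit_beta(x, Q, grid=np.linspace(0.0, 3.0, 301)):
    """Crude grid-search maximum pseudo-likelihood estimate of beta."""
    return float(grid[np.argmin([neg_pseudo_loglik(b, x, Q) for b in grid])])

# Illustrative usage with arbitrary +/-1 data and a dense coupling matrix.
rng = np.random.default_rng(0)
n = 50
Q = (np.ones((n, n)) - np.eye(n)) / n
x = rng.choice([-1.0, 1.0], size=n)
print(fit_beta(x, Q))
```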
The requirement of $\|\boldsymbol{\mu}\|_{\infty}=O(1)$ can be relaxed to $\|\boldsymbol{\mu}\|_{\infty}=o(n/s)$. However, at this moment we have not been able to relax this completely. ## 3\. Short Range Interactions on Lattices To describe the detection problem for nearest neighbor interaction type models, it is convenient to represent the points $i=1,\dots,n$ to be vertices of $d$-dimensional hyper-cubic lattice and the underlying graph (i.e. the graph corresponding to $\mathbf{Q}$) to be the nearest neighbor (in sense of Euclidean distance) graph on these vertices. More precisely, given positive integers $n,d$, we consider a growing sequence of lattice boxes of dimension $d$ defined as $\\{\Lambda_{n,d}\\}_{n\geq 1}=\\{[-n^{1/d},n^{1/d}]^{d}\cap\mathbb{Z}^{d}\\}_{n\geq 1}$ where $\mathbb{Z}^{d}$ denotes the d-dimensional integer lattice. Subsequently, we consider a family of random variables defined on the vertices of $\Lambda_{n,d}$ as $\mathbf{X}\in\\{-1,+1\\}^{\Lambda_{n,d}}$ with the following probability mass function (p.m.f.) $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x})=\frac{1}{Z(\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{\boldsymbol{\mu}})}\exp{\left(\frac{\beta}{2}\mathbf{x}^{\top}\mathbf{Q}(\Lambda_{n,d})\mathbf{x}+\boldsymbol{\mu}^{\top}\mathbf{x}\right)},\qquad\forall\mathbf{x}\in\\{\pm 1\\}^{\Lambda_{n,d}},$ (9) where as usual $\mathbf{Q}(\Lambda_{n,d})=(\mathbf{Q}(\Lambda_{n,d})_{ij})_{i,j\in\Lambda_{n,d}}$ is a symmetric and hollow array (i.e. $\mathbf{Q}(\Lambda_{n,d})_{ii}=0$ for all $i\in\Lambda_{n,d}$) with elements indexed by pairs of vertices in $\Lambda_{n,d}$ (organized in some pre-fixed lexicographic order), $\boldsymbol{\mu}:=(\mu_{i}:i\in\Lambda_{n,d})^{\top}\in\mathbb{R}^{\Lambda_{n,d}}$ referred to as the external magnetization vector indexed by vertices of $\Lambda_{n,d}$ , $\beta>0$ is the “inverse temperature”, and $Z(\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{\boldsymbol{\mu}})$ is a normalizing constant. Note that in this notation $i,j\in\Lambda_{n,d}$ are vertices of the $d$-dimensional integer lattice and hence correspond to $d$-dimensional vector with integer coordinates. Therefore, using this notation, by nearest neighbor graph will correspond to the matrix $\mathbf{Q}_{ij}=\mathbf{Q}_{ij}(\Lambda_{n,d})=\mathbf{1}(0<\|i-j\|_{1}=1)$. Our results in this model will be derived for any dependence other than a critical dependence $\beta_{c}(d)$ to be defined next. The case of critical dependence for lattices remains open even in terms of optimal rate for the minimax separation and therefore we do not pursue the issue of optimal constants in this regime. To describe a notion of critical temperature, consider $\mathcal{Q}(d)$ be the sequence of matrices $\\{\mathbf{Q}(\Lambda_{n,d})\\}_{n\geq 1}$ and define $\displaystyle\beta_{c}(d)=\beta_{c}(\mathcal{Q}(d))=\inf\left\\{\beta>0:\lim_{h\downarrow 0}\lim_{n\rightarrow\infty}\mathbb{E}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}(h)}\left(\frac{1}{n}\sum_{i\in\Lambda_{n,d}}X_{i}\right)>0\right\\},$ where we let $\boldsymbol{\mu}(h)$ denote the vector in $\mathbb{R}^{|\Lambda_{n,d}|}$ with all coordinates equal to $h$. The existence and equivalence of the above notions (such as ones including uniqueness of infinite volume measure) of critical temperature in nearest neighbor Ising Model is a topic of classical statistical physics and we refer the interested reader to the excellent expositions in [27, 20] for more details. 
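Parenthetically, the nearest-neighbour coupling matrix $\mathbf{Q}(\Lambda_{n,d})$ used throughout this section is straightforward to construct explicitly; a small sketch follows. The box size and dimension are illustrative, and the matrix is built densely only because the example lattice is tiny.

```python
import itertools
import numpy as np

def lattice_coupling(side, d):
    """Coupling matrix Q_ij = 1(||i - j||_1 = 1) for the sites of the
    d-dimensional box {-side, ..., side}^d, indexed in lexicographic order."""
    sites = list(itertools.product(range(-side, side + 1), repeat=d))
    index = {site: a for a, site in enumerate(sites)}
    N = len(sites)
    Q = np.zeros((N, N))
    for a, site in enumerate(sites):
        for axis in range(d):                  # connect each site to its +1 neighbour per axis
            nbr = list(site)
            nbr[axis] += 1
            b = index.get(tuple(nbr))
            if b is not None:
                Q[a, b] = Q[b, a] = 1.0
    return Q

Q = lattice_coupling(side=2, d=2)              # 5 x 5 square lattice, 25 sites
print(Q.shape, int(Q.sum()) // 2)              # number of nearest-neighbour edges: 40
```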
This value of $\beta_{c}(d)$ (which is known to be strictly positive for any fixed $d\geq 1$) is referred to as the critical inverse temperature in dimension $d$ and the behavior of the system of observations $\mathbf{X}$ changes once $\beta$ exceeds this threshold. For $d=1$, it is known from the first work in this area [34] that $\beta_{c}(1)=+\infty$ and consequently the Ising model in 1-dimension is said to have no phase transitions. The seminal work of [43] provides a formula for $\beta_{c}(2)$ and obtaining an analytical formula for $\beta_{c}(d)$ for $d\geq 3$ remains open. Consequently, results only pertain to the existence of a strictly non-zero finite $\beta_{c}(d)$ which governs the macroscopic behavior of the system of observations $X_{i},i\in\Lambda_{n,d}$ as $n\rightarrow\infty$. In particular, the average magnetization $n^{-1}\sum X_{i}$ converges to $0$ in probability for $\beta<\beta_{c}(d)$ and to an equal mixture of two delta-dirac random variables $m_{+}(\beta)$ and $m_{-}(\beta)=-m_{+}(\beta)$, for $\beta>\beta_{c}(d)$ ([38]). This motivates defining Ising models in pure phases as follows. Let $\Lambda^{\mathsf{w}}_{n,d}$ denote the graph obtained by identifying the vertices not in $\Lambda_{n,d}$ into a single vertex $\partial\Lambda_{n,d}$ and then erasing all the self loops. Let $\mathbf{Q}(\Lambda^{\mathsf{w}}_{n,d})$ denote the adjacency matrix corresponding to nearest neighbour interaction in this modified graph. We denote $\displaystyle\mathbb{P}^{+}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x})=\mathbb{P}_{\beta,\mathbf{Q}(\Lambda^{\mathsf{w}}_{n,d}),\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x}|\partial\Lambda_{n,d}=+1),$ $\displaystyle\mathbb{P}^{-}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x})=\mathbb{P}_{\beta,\mathbf{Q}(\Lambda^{\mathsf{w}}_{n,d}),\boldsymbol{\mu}}(\mathbf{X}=\mathbf{x}|\partial\Lambda_{n,d}=-1),$ to be Ising Models (see (1)) with $+$ and $-$ boundary conditions respectively. On the other hand $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$ is referred to as the Ising model with _free boundary condition_. It is well known [27], that for $0\leq\beta<\beta_{c}(d)$ the asymptotic properties of the models $\mathbb{P}^{+}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$, $\mathbb{P}^{-}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$, and $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$ are similar (i.e. they have all the same infinite volume $n\rightarrow\infty$ weak limit). However, for $\beta>\beta_{c}(d)$, the model $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$ behaves asymptotically as the mixture of $\mathbb{P}^{+}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$ and $\mathbb{P}^{-}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$. We only present our result for the measure $\mathbb{P}^{+}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$ in such cases. Although we believe that a similar result might hold for both negative boundary condition (i.e. $\mathbb{P}^{-}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$) as well as free boundary condition (i.e. the original model $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}$) we do not yet have access to a rigorous argument in this regard. In the rest of this section, we therefore shall use the superscript “$\rm bc$” in our probability, expectation, and variance operators (e.g. 
$\mathbb{P}^{\rm bc}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}$, $\mathbb{E}^{\rm bc}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}$, and $\mathrm{Var}^{\rm bc}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}$) where $\mathrm{bc}\in\\{\mathsf{f},+\\}$ and stands for “boundary condition” referring to either the free boundary condition (when $\rm bc=\mathsf{f}$) or plus boundary condition (when $\rm bc=+$). To present our results in this case, we begin with describing the structure of our alternatives. Since our model has an inherent geometry given by the lattice structure in $d$-dimensions, it is natural to consider signals which can be described by such geometry. Similar to one of the emblematic cases considered in [4, 6, 49, 13, 35], here we discuss testing against block sparse alternatives of size $s$ define by $\Xi(\mathcal{C}_{s},A)$ with $\displaystyle\mathcal{C}_{s}=\mathcal{C}_{s,\rm rect}:=\left\\{\prod\limits_{j=1}^{d}[a_{j}:b_{j}]\cap\Lambda_{n}(d):\ b_{j}-a_{j}=\lceil s^{1/d}\rceil\right\\}.$ (10) Although we only present the results for sub-cube detection with equal size of the sides , one can extend the results to detection of thick clusters (see [6] for details) with modifications of the arguments presented here. The development and analysis of multi-scale tests similar to those explored in [4, 49, 35] is also important. However we keep this for future research to keep our discussions in this paper focused on understanding the main driving principles behind the sharp constants of detection under Ising dependence. This a class of alternatives is known to require $\tanh(A)\asymp\sqrt{\frac{\log{n}}{s}}$ for consistent detection (see e.g. [4, 5, 6] for independent outcomes case and [26, 18] for dependent models) and allows a sharp transition at a level of multiplicative constants. Indeed, such sharp constants of phase transition has been derived for either independent outcome models [4, 5, 6] and for dependent Gaussian outcomes [26]. For our problem, the derivation of the sharp optimal constant of detection is somewhat subtle and we first discuss a way to formalize this asymptotic constant below. We begin by noting that it is reasonable to believe that a sequence of optimal test can be obtained by scanning over a suitable subclass of the potentially signal rectangles. We define the test first to gain intuition about the sharp constant of detection. To put ourselves in the context of notation similar to Section 2, we use $\mathcal{C}_{s,\rm rect}$ to denote the class of all thick rectangles of volume $s$ and for any $S\in\mathcal{C}_{s,\rm rect}$ $\displaystyle Z_{S}=\frac{1}{\sqrt{s}}\sum_{i\in S}\left(X_{i}-\mathbb{E}^{\rm bc}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{0}}(X_{i})\right),$ We would like to scan over a suitable subclass $\tilde{\mathcal{C}}_{s,\rm rect}$ which captures the essential complexity of $\mathcal{C}_{s,\rm rect}$ both in terms of theoretical and computational aspects. From a theoretical perspective, we shall require a precise understanding of the moderate deviation behavior of $Z_{S}$ and which in turn crucially relies on understanding the asymptotic behavior of the variance of $Z_{S}$. 
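For concreteness, the rectangle class and the centered statistic $Z_{S}$ just defined can be instantiated as in the sketch below on a small two-dimensional box; we return to the variance question immediately after. The null means are passed in as an argument because in practice they, and the variance limit of Theorem 5 below, would come from the relevant boundary-condition measure, which is not simulated here.

```python
import itertools
import numpy as np

def subcube_supports(side, d, block):
    """All axis-aligned d-dimensional sub-cubes of linear size `block`
    inside the box {0, ..., side-1}^d (a concrete stand-in for C_{s,rect})."""
    starts = itertools.product(range(side - block + 1), repeat=d)
    return [list(itertools.product(*[range(a, a + block) for a in start]))
            for start in starts]

def centered_scan(x, null_mean, supports):
    """max_S (1/sqrt(|S|)) * sum_{i in S} (x_i - E_0[x_i]) over candidate sub-cubes."""
    resid = x - null_mean
    return max(sum(resid[i] for i in S) / np.sqrt(len(S)) for S in supports)

side, d, block = 10, 2, 3
supports = subcube_supports(side, d, block)    # 64 candidate 3x3 blocks
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(side,) * d)  # placeholder spin field, not an Ising draw
print(len(supports), centered_scan(x, null_mean=0.0, supports=supports))
```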
In particular, it is reasonable to believe that in the “pure phase” (which is the case for free boundary high temperature and low temperature plus boundary case) $Z_{S}$ should asymptotically behave like a Gaussian random variable and thus its moderate deviation exponent is characterized by its variance (See [24, Theorem V.7.2] for a result of this nature). To justify that there is a valid candidate for the limit of $\mathrm{Var}^{+}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{0}}(Z_{S})$ we have the following result which is one of the main components of this section. ###### Theorem 5. Suppose $\mathrm{dist}(S,\partial\Lambda_{n,d}):=\inf\limits_{j\in\partial\Lambda_{n,d},i\in S}\|i-j\|_{1}\gg\log^{2}{n}$. Then there exists $\chi^{\mathsf{f}}(\beta)$ (for $\beta<\beta_{c}(d)$) and $\chi^{+}(\beta)$ (for $\beta>\beta_{c}(d)$) such that $\displaystyle\lim_{s\rightarrow\infty}\mathrm{Var}^{\mathrm{b}c}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{0}}(Z_{S})$ $\displaystyle=\chi^{\mathrm{b}c}(\beta),\quad\text{for}\quad\beta<\beta_{c}(d),\mathrm{bc}\in\\{\mathsf{f},+\\};$ (11) $\displaystyle\lim_{s\rightarrow\infty}\mathrm{Var}^{+}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\mathbf{0}}(Z_{S})$ $\displaystyle=\chi^{+}(\beta),\quad\text{for}\quad\beta>\beta_{c}(d).$ (12) Armed with Theorem 5 we are now ready to state the main sharp detection threshold for detecting rectangles over lattices. ###### Theorem 6. Suppose $\mathbf{X}\sim\mathbb{P}^{\rm bc}_{\beta,\mathbf{Q}(\Lambda_{n,d}),\boldsymbol{\mu}}$ for $\rm bc\in\\{\mathsf{f},+\\}$and consider testing (2) against $\mathcal{C}_{s}=\mathcal{C}_{s,\rm rect}$. Then the following hold with $\rm bc\in\\{\mathsf{f},+\\}$ for $\beta<\beta_{c}(d)$ and $\rm bc=+$ for $\beta>\beta_{c}(d)$. 1. (1) A test based on $\max_{S\in\tilde{\mathcal{C}}_{s,\rm rect}}Z_{S}$ is asymptotically powerful if $s\gg(\log{n})^{d}$ $\liminf\tanh(A)\sqrt{s/\log{(n/s)}}>\sqrt{2\chi^{\rm bc}(\beta)}.$ 2. (2) All tests are asymptotically powerless if $\limsup\tanh(A)\sqrt{s/\log{(n/s)}}<\sqrt{2\chi^{\rm bc}(\beta)}$. A few remarks are in order regarding the results in Theorem 6. First, we have not tried optimize the requirement $s\gg(\log{n})^{d}$ in our upper bound above. Indeed, as noted in [18] one needs $s\gg\log{n}$ for any successful detection and hence our results on sharp constants matches the requirement on $s$ up to log-factors. Moreover, our next verifies that the susceptibility is indeed an increasing function of $0\leq\beta<\beta_{c}(d)$. ###### Proposition 7. $\chi(\beta)$ is differentiable and strictly monotonically increasing for $\beta\in(0,\beta_{c}(d))$. It is worth noting that this monotonicity without the strictness can be seen as a consequence of the Edwards-Sokal coupling and the monotonicity in $p$ of the FK-Ising model ([30]). Finally, we demonstrate that the results above can be obained without the knowledge of $\beta\neq\beta_{c}(d)$. ###### Theorem 8. Theorem 6 continues to hold for unknown $\beta\neq\beta_{c}(d)$ and $\rm bc=\mathsf{f}$. Although this completes the picture for non-critical temperature in nearest neighbor Ising models over lattices, the behavior of the testing problem at $\beta=\beta_{c}(d)$ remains open. At this moment we believe that the blessing of getting a better rate and/or constant at criticality continues to hold for lattices as well. ## 4\. Discussions In this paper we have considered sharp constants of detecting structured signals under Ising dependence. 
Although we have derived sharp phase transitions in some popular classes of of both mean-field type Ising models (at all temperatures) and nearest-neighbor Ising models on lattices (at all non-critical temperatures) several related directions remain open. As an immediate interesting question pertains to the Ising model on lattices and figuring out the exact detection thresholds at the critical temperature to complete the narrative of precise benefit of critical dependence in this model. As was discussed in [41] this might require new ideas. Moreover, even for non-critical temperatures it remains to explore the multi-scale procedures for adaptive testing of thick clusters for Ising models over lattices (see e.g. [4, 49, 35] and references therein). Moreover distributional approximation for the test statistics used here is also a crucial direction for the sake of improved practical applicability of our result. We keep these questions as future research directions. ## 5\. Acknowledgements GR is supported by NSERC 50311-57400. ## References * Addario-Berry et al. [2010] Louigi Addario-Berry, Nicolas Broutin, Luc Devroye, Gábor Lugosi, et al. On combinatorial testing problems. _The Annals of Statistics_ , 38(5):3063–3092, 2010. * Aizenman et al. [1987] Michael Aizenman, David J Barsky, and Roberto Fernández. The phase transition in a general class of ising-type models is sharp. _Journal of Statistical Physics_ , 47(3-4):343–374, 1987. * Aizenman et al. [2015] Michael Aizenman, Hugo Duminil-Copin, and Vladas Sidoravicius. Random currents and continuity of ising model’s spontaneous magnetization. _Communications in Mathematical Physics_ , 334(2):719–742, 2015. * Arias-Castro et al. [2005] Ery Arias-Castro, David L Donoho, and Xiaoming Huo. Near-optimal detection of geometric objects by fast multiscale methods. _IEEE Transactions on Information Theory_ , 51(7):2402–2425, 2005. * Arias-Castro et al. [2008] Ery Arias-Castro, Emmanuel J Candès, Hannes Helgason, and Ofer Zeitouni. Searching for a trail of evidence in a maze. _The Annals of Statistics_ , pages 1726–1757, 2008. * Arias-Castro et al. [2011] Ery Arias-Castro, Emmanuel J Candes, and Arnaud Durand. Detection of an anomalous cluster in a network. _The Annals of Statistics_ , pages 278–304, 2011. * Arias-Castro et al. [2018] Ery Arias-Castro, Rui M Castro, Ervin Tánczos, and Meng Wang. Distribution-free detection of structured anomalies: Permutation and rank-based scans. _Journal of the American Statistical Association_ , 113(522):789–801, 2018. * Basak and Mukherjee [2015] Anirban Basak and Sumit Mukherjee. Universality of the mean-field for the potts model. _Probability Theory and Related Fields_ , pages 1–44, 2015. * Besag [1974] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 36(2):192–225, 1974. * Bhattacharya and Mukherjee [2018a] Bhaswar B Bhattacharya and Sumit Mukherjee. Inference in ising models. _Bernoulli_ , 24(1):493–525, 2018a. * Bhattacharya and Mukherjee [2018b] Bhaswar B Bhattacharya and Sumit Mukherjee. Inference in ising models. _Bernoulli_ , 24(1):493–525, 2018b. * Bodineau [2003] Thierry Bodineau. Slab percolation for the ising model. _arXiv preprint math/0309300_ , 2003. * Butucea and Ingster [2013] Cristina Butucea and Yuri I Ingster. Detection of a sparse submatrix of a high-dimensional noisy matrix. _Bernoulli_ , 19(5B):2652–2688, 2013. * Chatterjee [2007a] Sourav Chatterjee. 
Estimation in spin glasses: A first step. _The Annals of Statistics_ , pages 1931–1946, 2007a. * Chatterjee [2007b] Sourav Chatterjee. Estimation in spin glasses: A first step. _The Annals of Statistics_ , pages 1931–1946, 2007b. * Daskalakis et al. [2019] Constantinos Daskalakis, Nishanth Dikkala, and Gautam Kamath. Testing ising models. _IEEE Transactions on Information Theory_ , 2019. * Deb and Mukherjee [2020] Nabarun Deb and Sumit Mukherjee. Fluctuations in mean-field ising models. _arXiv preprint arXiv:2005.00710_ , 2020. * Deb et al. [2020] Nabarun Deb, Rajarshi Mukherjee, Sumit Mukherjee, and Ming Yuan. Detecting structured signals in ising models. _arXiv preprint arXiv:2012.05784_ , 2020. * Duminil-Copin [2016] Hugo Duminil-Copin. Random currents expansion of the ising model. _arXiv preprint arXiv:1607.06933_ , 2016. * Duminil-Copin [2017] Hugo Duminil-Copin. Lectures on the ising and potts models on the hypercubic lattice. _arXiv preprint arXiv:1707.00520_ , 2017. * Duminil-Copin and Tassion [2016] Hugo Duminil-Copin and Vincent Tassion. A new proof of the sharpness of the phase transition for bernoulli percolation and the ising model. _Communications in Mathematical Physics_ , 343(2):725–745, 2016. * Duminil-Copin et al. [2017] Hugo Duminil-Copin, Aran Raoufi, and Vincent Tassion. Sharp phase transition for the random-cluster and potts models via decision trees. _arXiv preprint arXiv:1705.03104_ , 2017. * Duminil-Copin et al. [2018] Hugo Duminil-Copin, Subhajit Goswami, and Aran Raoufi. Exponential decay of truncated correlations for the ising model in any dimension for all but the critical temperature. _arXiv preprint arXiv:1808.00439_ , 2018. * Ellis [2006] Richard S Ellis. _Entropy, large deviations, and statistical mechanics_ , volume 1431\. Taylor & Francis, 2006. * Ellis and Newman [1978] Richard S. Ellis and Charles M. Newman. The statistics of curie-weiss models. _Journal of Statistical Physics_ , 19(2):149–161, 1978. * Enikeeva et al. [2020] Farida Enikeeva, Axel Munk, Markus Pohlmann, and Frank Werner. Bump detection in the presence of dependency: Does it ease or does it load? _Bernoulli_ , 26(4):3280–3310, 2020. * Friedli and Velenik [2017] Sacha Friedli and Yvan Velenik. _Statistical mechanics of lattice systems: a concrete mathematical introduction_. Cambridge University Press, 2017. * Ghosal and Mukherjee [2020] Promit Ghosal and Sumit Mukherjee. Joint estimation of parameters in ising model. _The Annals of Statistics_ , 48(2):785–810, 2020\. * Grimmett [1999] Geoffrey Grimmett. _Percolation_ , volume 321 of _Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]_. Springer-Verlag, Berlin, second edition, 1999. ISBN 3-540-64902-6. doi: 10.1007/978-3-662-03981-6. URL https://doi.org/10.1007/978-3-662-03981-6. * Grimmett [2006] Geoffrey R Grimmett. _The random-cluster model_ , volume 333. Springer Science & Business Media, 2006. * Hall and Jin [2008] Peter Hall and Jiashun Jin. Properties of higher criticism under strong dependence. _The Annals of Statistics_ , pages 381–402, 2008. * Hall and Jin [2010] Peter Hall and Jiashun Jin. Innovated higher criticism for detecting sparse signals in correlated noise. _The Annals of Statistics_ , 38(3):1686–1732, 2010. * Ingster et al. [2010] Yuri I Ingster, Alexandre B Tsybakov, and Nicolas Verzelen. Detection boundary in sparse regression. _Electronic Journal of Statistics_ , 4:1476–1526, 2010\. * Ising [1925] Ernst Ising. Beitrag zur theorie des ferromagnetismus. 
_Zeitschrift für Physik A Hadrons and Nuclei_ , 31(1):253–258, 1925. * König et al. [2020] Claudia König, Axel Munk, and Frank Werner. Multidimensional multiscale scanning in exponential families: Limit theory and statistical consequences. _The Annals of Statistics_ , 48(2):655–678, 2020. * Lebowitz [1972] Joel L Lebowitz. Bounds on the correlations and analyticity properties of ferromagnetic ising spin systems. _Communications in Mathematical Physics_ , 28(4):313–321, 1972. * Lebowitz [1974] Joel L Lebowitz. Ghs and other inequalities. _Communications in Mathematical Physics_ , 35(2):87–92, 1974. * Lebowitz [1977] Joel L Lebowitz. Coexistence of phases in ising ferromagnets. _Journal of Statistical Physics_ , 16(6):463–476, 1977. * Liggett et al. [1997] Thomas M Liggett, Roberto H Schonmann, and Alan M Stacey. Domination by product measures. _The Annals of Probability_ , 25(1):71–95, 1997. * Martin-Löf [1973] Anders Martin-Löf. Mixing properties, differentiability of the free energy and the central limit theorem for a pure phase in the ising model at low temperature. _Communications in Mathematical Physics_ , 32(1):75–92, 1973. * Mukherjee and Ray [2019] Rajarshi Mukherjee and Gourab Ray. On testing for parameters in ising models. _arXiv preprint arXiv:1906.00456_ , 2019. * Mukherjee et al. [2016] Rajarshi Mukherjee, Sumit Mukherjee, and Ming Yuan. Global testing against sparse alternatives under ising models. _arXiv preprint arXiv:1611.08293_ , 2016. * Onsager [1944] Lars Onsager. Crystal statistics. i. a two-dimensional model with an order-disorder transition. _Physical Review_ , 65(3-4):117, 1944. * Pisztora [1996] Agoston Pisztora. Surface order large deviations for Ising, Potts and percolation models. _Probab. Theory Related Fields_ , 104(4):427–466, 1996. ISSN 0178-8051. doi: 10.1007/BF01198161. URL https://doi.org/10.1007/BF01198161. * Sharpnack et al. [2015] James Sharpnack, Alessandro Rinaldo, and Aarti Singh. Detecting anomalous activity on networks with the graph fourier scan statistic. _IEEE Transactions on Signal Processing_ , 64(2):364–379, 2015. * Simon [1980] Barry Simon. Correlation inequalities and the decay of correlations in ferromagnets. _Communications in Mathematical Physics_ , 77(2):111–126, 1980. * Tikhomirov and Youssef [2019] Konstantin Tikhomirov and Pierre Youssef. The spectral gap of dense random regular graphs. _The Annals of Probability_ , 47(1):362–419, 2019. * Vu [2005] Van H Vu. Spectral norm of random matrices. In _Proceedings of the thirty-seventh annual ACM symposium on Theory of computing_ , pages 423–430, 2005. * Walther [2010] Guenther Walther. Optimal and fast detection of spatial clusters with scan statistics. _The Annals of Statistics_ , 38(2):1010–1033, 2010. * Zou et al. [2017] Shaofeng Zou, Yingbin Liang, and H Vincent Poor. Nonparametric detection of geometric structures over networks. _IEEE Transactions on Signal Processing_ , 65(19):5034–5046, 2017. ## 6\. Proofs of Main Results We divide the proofs of our main results according to Sections 2 and 3. ### 6.1. Proof of Results in Section 2.1 #### 6.1.1. Proof of Theorem 1 ##### Proof of Theorem 1i. We divide our proof according to the various parts of the theorem. ##### Proof of Theorem 1i.i._a_) Recall from the discussion following the statement of Theorem 1 that the claimed optimal test is given by the scan test, which can be described as follows.
For $S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ we first define $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$ and our scan test rejects for large values of $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}$. The cut-off for the test is decided by the moderate deviation behavior of the $Z_{S}$’s given in Lemma 1 – which implies that for any $\delta>0$, the test given by $T_{n}(\delta)=\mathbf{1}\left\\{Z_{\max}>\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\right\\}$ has Type I error converging to $0$; throughout this proof we write $t_{n}(\delta):=\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$ for this cut-off. (An illustrative numerical sketch of this scan test is given below.) Turning to the Type II error, consider any $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\in\Xi(\mathcal{C}_{s},A)$ and note that by monotonicity arguments (i.e. the stochastically increasing nature of the distribution of $Z_{\max}$ as a function of the coordinates of $\boldsymbol{\mu}$) it is enough to restrict to the case where $A=\Theta(\sqrt{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}})$. Thereafter, note that by the GHS inequality (cf. Lemma 15) one has $\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i\in\tilde{S}^{\star}}X_{i})=O(s)$ for $\beta<1$ since $s\ll\frac{n}{\log{n}}$ by appealing to [18, Lemma 9(a)]. As a result, $\frac{\sum_{i\in\tilde{S}^{\star}}(X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i}))}{\sqrt{s}}=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)$. Therefore, as usual it is enough to show that there exists $\delta>0$ such that $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. To this end, first let $S^{\star}\in\mathcal{C}_{s}$ be such that the signal lies on $S^{\star}$ i.e. for all $i\in S^{\star}$ one has $\boldsymbol{\mu}_{i}\geq A$. Note that by monotonicity arguments it is enough to consider $\boldsymbol{\mu}_{i}=A$. By definition of covering, we can find a $\tilde{S}^{\star}\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ such that $\gamma\left(\tilde{S}^{\star},S^{\star}\right)\leq\varepsilon_{n}$ i.e. $|\tilde{S}^{\star}\cap S^{\star}|\geq s(1-\varepsilon_{n}/\sqrt{2})$.
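As a purely illustrative aside (not part of the formal argument), the following minimal Python sketch shows how the scan test just described could be computed in practice; the covering net $\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ is replaced by an explicit, user-supplied list of candidate index sets, and the function names are ours.

```python
import numpy as np

def scan_statistic(x, candidate_sets):
    """Z_max = max over the net of sum_{i in S} x_i / sqrt(|S|)."""
    return max(x[list(S)].sum() / np.sqrt(len(S)) for S in candidate_sets)

def scan_test(x, candidate_sets, delta=0.1):
    """Reject H_0 when Z_max exceeds sqrt(2(1+delta) log |net|), the cut-off t_n(delta)."""
    cutoff = np.sqrt(2.0 * (1.0 + delta) * np.log(len(candidate_sets)))
    return scan_statistic(x, candidate_sets) > cutoff

# Toy usage: i.i.d. +/-1 spins stand in for a high-temperature sample here;
# a faithful check would instead draw X from the Curie-Weiss measure P_{beta,Q,mu}.
rng = np.random.default_rng(0)
n, s = 400, 20
x = rng.choice([-1.0, 1.0], size=n)
net = [tuple(range(j, j + s)) for j in range(0, n - s + 1, s)]  # disjoint blocks as a stand-in net
print(scan_test(x, net))
```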
Thereafter note that by Lemma 20 we have that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\geq A|\tilde{S}^{\star}\cap S^{\star}|-A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j}),$ However, by [18, Lemma 9 (a)] we have that $\displaystyle A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j})$ $\displaystyle\lesssim\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{s}\left(\frac{|\tilde{S}^{\star}\cap S^{\star}||S^{\star}|}{n}+|\tilde{S}^{\star}\cap S^{\star}|\right)$ $\displaystyle\leq\frac{s\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{n}+\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll As.$ Consequently, we immediately have that there exists $\epsilon>0$ such that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)$ $\displaystyle\geq As(1+o(1))\geq\sqrt{2(1+\epsilon)s\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}.$ Therefore, we can conclude that for any $\delta<\epsilon$ we have $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. This completes the proof of the upper bound for $0<\beta<1$. ##### Proof of Theorem 1i.i._b_) An optimal test is again given by the scan test, described similarly to the case $\beta<1$. For $S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$, define $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$ and reject for large values of $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}$. Lemma 1 implies that for any $\delta>0$, the test given by $T_{n}(\delta)=\mathbf{1}\left\\{Z_{\max}>\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\right\\}$ has Type I error converging to $0$ whenever $s\ll\sqrt{n}/\log{n}$. Turning to the Type II error, consider any $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\in\Xi(\mathcal{C}_{s},A)$ and note that by monotonicity arguments it is enough to restrict to the case where $A=\Theta(\sqrt{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}})$. Thereafter, note that by the GHS inequality (cf. Lemma 15) one has $\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i\in\tilde{S}^{\star}}X_{i})=O(s)$ even for $\beta=1$ since $s\ll\frac{\sqrt{n}}{\log{n}}$ by appealing to [18, Lemma 9(c)]. As a result, $\frac{\sum_{i\in\tilde{S}^{\star}}(X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i}))}{\sqrt{s}}=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)$. Therefore, as usual it is enough to show that there exists $\delta>0$ such that $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. To this end, first let $S^{\star}\in\mathcal{C}_{s}$ be such that the signal lies on $S^{\star}$ i.e. for all $i\in S^{\star}$ one has $\boldsymbol{\mu}_{i}\geq A$. Note that by monotonicity arguments it is enough to consider $\boldsymbol{\mu}_{i}=A$. By definition of covering, we can find a $\tilde{S}^{\star}\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ such that $\gamma\left(\tilde{S}^{\star},S^{\star}\right)\leq\varepsilon_{n}$ i.e.
$|\tilde{S}^{\star}\cap S^{\star}|\geq s(1-\varepsilon_{n}/\sqrt{2})$. Thereafter note that by Lemma 20 we have that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\geq A|\tilde{S}^{\star}\cap S^{\star}|-A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j}),$ However, by [18, Lemma 9 (c)] we have that $\displaystyle A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j})$ $\displaystyle\lesssim\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{s}\left(\frac{|\tilde{S}^{\star}\cap S^{\star}||S^{\star}|}{\sqrt{n}}+|\tilde{S}^{\star}\cap S^{\star}|\right)$ $\displaystyle\leq\frac{s\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{\sqrt{n}}+\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll As.$ Consequently, we immediately have that there exists $\epsilon>0$ such that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)$ $\displaystyle\geq As(1+o(1))\geq\sqrt{2(1+\epsilon)s\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}.$ Therefore, we can conclude that for any $\delta<\epsilon$ we have $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. This completes the proof of the upper bound for $\beta=1$. ##### Proof of Theorem 1i.i._c_) We use a randomized scan test here described as follows: Given data $X_{i},i\in[n]$, generate a random variable $W_{n}\sim N(\bar{\mathbf{X}},1/(n\beta))$. If $W_{n}>0$, we reject the null hypothesis if $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}>m\sqrt{s}+t_{n}(\delta)$. If $W_{n}\leq 0$ we reject if $Z_{\max}>-m\sqrt{s}+t_{n}(\delta)$, where $m:=m(\beta)$ is the unique positive root of $m=\tanh(\beta m)$ and $t_{n}(\delta)=\sqrt{2(1+\delta)(1-m^{2})\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$. (An illustrative numerical sketch of this randomized scan test is given below.) It turns out that our analysis of this test works whenever $\sum_{i=1}^{n}\mu_{i}\ll n$. This is however not an issue since the test based on a conditionally centered $\sum_{i=1}^{n}X_{i}$ works as soon as $\sum_{i=1}^{n}\mu_{i}\gg\sqrt{n}$ [42] – and hence a simple Bonferroni correction (i.e. a combined test that rejects when either the randomized test described above or the conditionally centered sum based test of [42] rejects) yields our desired result. Hence we focus here on the case when $\sqrt{n}\ll\sum_{i=1}^{n}\mu_{i}\ll n^{3/4}$. By the moderate deviation behavior of $Z_{S}$’s given in Lemma 1, the Type-I error converges to $0$. Turning to the Type II error, consider any $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\in\Xi(\mathcal{C}_{s},A)$. Now, let $S^{\star}$ be the set carrying the non-zero signal, i.e. the true signal set under the alternative. We choose $\delta$ with $\tanh(A)=\sqrt{2(1+\delta)^{2}(1-m^{2})^{-1}\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}/s}$. By definition of covering, we can find a $\tilde{S}^{\star}\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ such that $\gamma\left(\tilde{S}^{\star},S^{\star}\right)\leq\varepsilon_{n}$ i.e. $|\tilde{S}^{\star}\cap S^{\star}|\geq s(1-\varepsilon_{n}/\sqrt{2})$.
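Analogously, here is a small illustrative sketch (ours, not part of the proof) of the randomized scan test just described for $\beta>1$; it assumes the spontaneous magnetization $m=m(\beta)$ has already been computed as the positive root of $m=\tanh(\beta m)$, and it again takes an explicit list of equal-size candidate sets in place of the covering net.

```python
import numpy as np

def randomized_scan_test(x, candidate_sets, beta, m, delta=0.1, rng=None):
    """Low-temperature randomized scan test: draw W_n ~ N(xbar, 1/(n*beta)) and
    center the cut-off by +m*sqrt(s) or -m*sqrt(s) according to the sign of W_n."""
    rng = rng or np.random.default_rng()
    n, s = len(x), len(candidate_sets[0])
    w_n = rng.normal(loc=x.mean(), scale=1.0 / np.sqrt(n * beta))
    z_max = max(x[list(S)].sum() / np.sqrt(len(S)) for S in candidate_sets)
    t_n = np.sqrt(2.0 * (1.0 + delta) * (1.0 - m**2) * np.log(len(candidate_sets)))
    center = m * np.sqrt(s) if w_n > 0 else -m * np.sqrt(s)
    return z_max > center + t_n
```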
We want to control $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(A_{n,\tilde{S}^{\star}}\cup B_{n,\tilde{S}^{\star}})$ where $\displaystyle A_{n,S}$ $\displaystyle=\\{Z_{S}>m\sqrt{s}+t_{n}(\delta),W_{n}>0\\},$ $\displaystyle B_{n,S}$ $\displaystyle=\\{Z_{S}>-m\sqrt{s}+t_{n}(\delta),W_{n}\leq 0\\}.$ (13) We have to show $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(A_{n,\tilde{S}^{\star}}\cup B_{n,\tilde{S}^{\star}})\rightarrow 1.$ (14) We plan to show $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(A_{n,\tilde{S}^{\star}})-\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(W_{n}>0)\rightarrow 0$ and $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(B_{n,\tilde{S}^{\star}})-\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(W_{n}\leq 0)\rightarrow 0$. We prove the first limit; the proof of the second one is similar. $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(A_{n,\tilde{S}^{\star}})=\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\mathbf{1}(W_{n}>0)\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left[\bar{X}_{S}>m(\beta)+t_{n}(\delta)/\sqrt{s}|W_{n}\right]\right)$ $\displaystyle\leq\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\mathbf{1}(W_{n}>0)\underbrace{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left[\frac{\sum_{i\in S}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum\limits_{i\in S}X_{i}|W_{n})}{\sqrt{s}}>\sqrt{s}(m-\tanh(\beta W_{n}+A))+t_{n}(\delta)|W_{n}\right]}_{G_{n}(W_{n})}\right)$ We claim that $G_{n}(W_{n})\mathbf{1}(W_{n}>0)-\mathbf{1}(W_{n}>0)\stackrel{{\scriptstyle\mathbb{P}}}{{\rightarrow}}0$. This will imply $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(A_{n,\tilde{S}^{\star}})-\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(W_{n}>0)\rightarrow 0$ by the DCT. To show the in-probability convergence of $G_{n}(W_{n})\mathbf{1}(W_{n}>0)$ we note that for any $\epsilon>0$ $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\mathbf{1}(W_{n}>0)(1-G_{n}(W_{n}))>\epsilon)$ $\displaystyle\leq\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(G_{n}(W_{n})<1-\epsilon,W_{n}>0).$ We therefore need to understand $G_{n}(W_{n})$ on the event $W_{n}>0$. By Lemma 4, $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(|W_{n}-m|>\zeta_{n},W_{n}>0)\rightarrow 0$ for some slowly decreasing $\zeta_{n}$. More specifically, this follows from Lemma 4 since this lemma not only implies concentration of $W_{n}$ around $m_{n}$ at the $\sqrt{n}$-scale but also that $m_{n}-m=\Theta(\frac{1}{n}\sum_{i=1}^{n}\mu_{i})=o(n^{-1/4})$ by the property of $\boldsymbol{\mu}$. Next fix $\eta>0$ such that if $W_{n}>0$ and $|W_{n}-m|\leq\zeta_{n}$ for some decreasing sequence with $\frac{1}{\sqrt{n}}\ll\zeta_{n}\ll\frac{1}{n^{1/4}}$, then $\operatorname{sech}^{2}(\beta W_{n}+A)\leq(1+\eta)^{2}\operatorname{sech}^{2}(\beta m)$ and $\sqrt{s}(m-\tanh(\beta W_{n}+A))\leq-(1-\eta)\operatorname{sech}^{2}(\beta m)A\sqrt{s}$. Hence, on the event $W_{n}>0,|W_{n}-m|\leq\zeta_{n}$ we have $\displaystyle\sqrt{\frac{s}{\operatorname{sech}^{2}(\beta W_{n}+A)}}(m-\tanh(\beta W_{n}+A))+\frac{t_{n}(\delta)}{\operatorname{sech}^{2}(\beta W_{n}+A)}$ $\displaystyle\leq-\sqrt{2\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\left[\frac{(1-\eta)(1+\delta)}{1+\eta}-1\right]$ Choosing $\eta$ small enough, the last display is $\Theta(-\sqrt{\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|})$, and the result follows by Chebyshev’s inequality. The same proof goes through for $W_{n}\leq 0$ by appealing to Lemma 4. ##### Proof of Theorem 1ii. We divide our proof according to the various parts of the theorem.
##### Proof of Theorem 1ii.ii._a_) We follow the path of the truncated second moment method with respect to a suitable prior over $\tilde{\mathcal{C}}_{s}\subset\mathcal{C}_{s}$. Owing to the exchangeability of the Curie-Weiss model it is natural to consider the uniform prior $\pi$ (say) over all $\boldsymbol{\mu}\in\Xi(\mathcal{C}_{s},A)$ such that $\mathrm{supp}(\boldsymbol{\mu})\in\tilde{C}_{s}$ and $\boldsymbol{\mu}_{i}=A$ for $i\in\mathrm{supp}(\boldsymbol{\mu})$. The likelihood ratio $L_{\pi}$ (say) corresponding to this prior can be written as $\displaystyle L_{\pi}=\frac{1}{|\tilde{C}_{s}|}\sum_{S\in\tilde{C}_{s}}\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\mathbf{0})}\exp\left(A\sum_{i\in S}X_{i}\right),$ where $\boldsymbol{\mu}_{S}(A)$ is the vector with support $S$ and entries equal to $A$ on its support. Now recall the definition of $Z_{S}$ from the proof of Theorem 1i.i._a_) and define an event $\Omega_{n}(S,\delta)=\\{Z_{S}\leq\tilde{t}_{n}(\delta)\\}$ – where for any $\delta>0$ we define $\tilde{t}_{n}(\delta)=\sqrt{2(1+\delta)\log|\tilde{C}_{s}|}$. Subsequently, we let $\displaystyle\tilde{L}_{\pi}(\delta):=\frac{1}{|\tilde{C}_{s}|}\sum_{S\in\tilde{C}_{s}}\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\mathbf{0})}\exp\left(A\sum_{i\in S}X_{i}\right)\mathbf{1}(\mathbf{X}\in\Omega_{n}(S,\delta))$ (15) denote a truncated likelihood ratio at level $\delta>0$. It is thereafter enough to verify that there exists $\delta>0$ such that the following three claims hold [33] $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(L_{\pi}\neq\tilde{L}_{\pi}(\delta))$ $\displaystyle\rightarrow 0;$ (16) $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}(\delta))$ $\displaystyle\rightarrow 1;$ (17) $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}^{2}(\delta))$ $\displaystyle\leq 1+o(1).$ (18) We now show them in sequence. To show (16) note that $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(L_{\pi}\neq\tilde{L}_{\pi}(\delta))\leq\sum_{S\in\tilde{C}_{s}}\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(\frac{1}{\sqrt{s}}\sum_{i\in S}X_{i}>\tilde{t}_{n}(\delta)\right)\rightarrow 0.$ The convergence to $0$ in the display above follows from Lemma 1 for any $\delta>0$ – by the same verbatim argument that showed the convergence of the Type I error to $0$ in the proof of Theorem 1i.i._a_). Next we turn to (17). To verify this, we first note that by a simple change of measure $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}(\delta))=\frac{1}{|\tilde{\mathcal{C}}_{s}|}\sum_{S\in\tilde{C}_{s}}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}\left(\frac{1}{\sqrt{s}}\sum_{i\in S}X_{i}\leq\tilde{t}_{n}(\delta)\right).$ To analyze the R.H.S. of the display above note that $\frac{1}{\sqrt{s}}\sum_{i\in S}(X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(X_{i}))$ is tight since $\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(\sum_{i\in S}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i\in S}X_{i})=O(s)$ by the GHS inequality. Also, by arguments similar to the control of the Type II error in the proof of Theorem 1i.i._a_) and the fact that $\limsup_{n\rightarrow\infty}\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}<\sqrt{2}$, we have for any $\delta>0$ that $\tilde{t}_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(\sum_{i\in S}X_{i})\rightarrow\infty$ uniformly in $S\in\tilde{\mathcal{C}}_{s}$.
Therefore by Chebyshev’s Inequality we conclude that $1-\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}(\delta))\rightarrow 0$ for any $\delta>0$. Finally we shall make a choice of $\delta>0$ while verifying (18). First note that since $\tilde{\mathcal{C}}_{s}$ consists of disjoint sets we have $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}\left(\tilde{L}_{\pi}^{2}(\delta)\right)$ $\displaystyle=\frac{1}{|\tilde{\mathcal{C}}_{s}|^{2}}\sum_{S\in\tilde{\mathcal{C}}_{s}}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A))}{Z^{2}(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))Z(\beta,\mathbf{Q},\mathbf{0})}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}(\Omega_{n}(S,\delta))$ $\displaystyle+\frac{1}{|\tilde{\mathcal{C}}_{s}|^{2}}\sum_{S_{1}\neq S_{2}\in\tilde{\mathcal{C}}_{s}}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}}(A))Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{2}}(A))Z(\beta,\mathbf{Q},\mathbf{0})}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A)}(\Omega_{n}(S_{1},\delta)\cap\Omega_{n}(S_{2},\delta))$ $\displaystyle=T_{1}+T_{2}\quad\text{(say),}$ (19) We will first show that $T_{2}\leq 1+o(1)$. To see this note that $\displaystyle T_{2}\leq\frac{1}{|\tilde{\mathcal{C}}_{s}|^{2}}\sum_{S_{1}\neq S_{2}\in\tilde{\mathcal{C}}_{s}}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}}(A))Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{2}}(A))Z(\beta,\mathbf{Q},\mathbf{0})}=1+o(1),$ by the proof for the control of the second term in equation (9) of [18, Theorem 3] along with the correlation bounds presented in [18, Lemma 9(a)]. By symmetry, the value of $T_{1}$ equals $T_{1}=\frac{1}{|\tilde{\mathcal{C}}_{s}|}\frac{Z^{2}_{n}(\beta,0)}{Z^{2}_{n}(\beta,s,A)}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A))}{Z^{2}(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))Z(\beta,\mathbf{Q},\mathbf{0})}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}(\Omega_{n}(S,\delta)).$ (20) Let $\varepsilon>0$ small enough such that $\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}=2(1-\varepsilon)$. Set $\lambda>0$ such that $\tanh(2A-\lambda/\sqrt{s})=\tilde{t}_{n}(\delta)/\sqrt{s}$. Note, $\cosh(2A)\leq(1-\frac{(t_{n}(\delta)+\lambda)^{2}}{s})^{-1/2}$. Therefore, $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}\Big{(}Z_{S}\leq\tilde{t}_{n}(\delta)\Big{)}$ $\displaystyle\leq e^{\lambda\tilde{t}_{n}(\delta)}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}\Big{(}\exp{(-\frac{\lambda}{\sqrt{s}}\sum\limits_{i\in S}X_{i})}\Big{)}$ $\displaystyle=(1+o(1))e^{\lambda\tilde{t}_{n}(\delta)}\Big{(}\frac{\cosh(2A-\frac{\lambda}{\sqrt{s}})}{\cosh(2A)}\Big{)}^{s}$ $\displaystyle\leq(1+o(1))e^{\lambda\tilde{t}_{n}(\delta)}e^{\frac{\tilde{t}^{2}_{n}(\delta)}{2}}\frac{1}{\cosh^{s}(2A)}$ $\displaystyle\leq(1+o(1))\exp\left(\lambda\tilde{t}_{n}(\delta)+\frac{\tilde{t}^{2}_{n}(\delta)}{2}-\frac{(\lambda+t_{n}(\delta))^{2}}{2}\right)$ (21) $\displaystyle=(1+o(1))\exp(-\frac{\lambda^{2}}{2}).$ (22) where the first equality is by Lemma 3. 
Choose $\eta>0$ small enough such that $2\sqrt{\frac{1-\varepsilon}{1+\delta}}(1-\eta)\frac{\tilde{t}_{n}(\delta)}{\sqrt{s}}\leq 2\sqrt{(1-\varepsilon)}(1-\eta)\tanh(A)\leq\tanh(2A)\leq\frac{\lambda+\tilde{t}_{n}(\delta)}{\sqrt{s}}.$ Hence, $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}\Big{(}Z_{S}\leq\tilde{t}_{n}(\delta)\Big{)}\leq(1+o(1))\exp\Big{(}-\big{(}2\sqrt{\frac{1-\varepsilon}{1+\delta}}(1-\eta)-1\big{)}^{2}\frac{\tilde{t}^{2}_{n}(\delta)}{2}\Big{)}.$ (23) By (20), $\displaystyle T_{1}$ $\displaystyle\leq(1+o(1))\exp\Big{(}s\\{\log\cosh(2A)-2\log\cosh(A)\\}-\log|\tilde{\mathcal{C}}_{s}|-(2\sqrt{\frac{1-\varepsilon}{1+\delta}}(1-\eta)-1)^{2}\frac{\tilde{t}^{2}_{n}(\delta)}{2}\Big{)}$ $\displaystyle\leq(1+o(1))\exp\Big{(}sA^{2}-\log|\tilde{\mathcal{C}}_{s}|-(2\sqrt{\frac{1-\varepsilon}{1+\delta}}(1-\eta)-1)^{2}\frac{\tilde{t}^{2}_{n}(\delta)}{2}\Big{)}=o(1),$ (24) for small enough $\eta>0$. This completes the proof of (18), finishing the proof of the lower bound. ##### Proof of Theorem 1ii.ii._b_) The proof is verbatim the same as that for $\beta<1$ in Theorem 1ii.ii._a_); the only change comes from applying Lemma 1, which is now applicable for $s\ll\sqrt{n}/\log{n}$ – a condition that is part of the theorem’s assumptions. ##### Proof of Theorem 1ii.ii._c_) To prove the lower bound, fix $\varepsilon>0$ such that $\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}=2(1-\varepsilon)(1-m^{2})^{-1}$. Defining $L_{\pi}=\frac{1}{|\tilde{C}_{s}|}\sum_{S\in\tilde{C}_{s}}\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\mathbf{0})}\exp\left(A\sum_{i\in S}X_{i}\right),$ (25) it suffices to show $L_{\pi}\rightarrow 1$ in probability. Define $\Omega_{n}(S,\delta)=\\{(A_{n,S}\cup B_{n,S})^{c}\\}$, where for any $\delta>0$ we set $\tilde{t}_{n}(\delta)=\sqrt{2(1+\delta)\log|\tilde{C}_{s}|}$. Subsequently, we let $\displaystyle\tilde{L}_{\pi}(\delta):=\frac{1}{|\tilde{C}_{s}|}\sum_{S\in\tilde{C}_{s}}\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\mathbf{0})}\exp\left(A\sum_{i\in S}X_{i}\right)\mathbf{1}(\mathbf{X}\in\Omega_{n}(S,\delta)),$ (26) and it is enough to show (16), (17) and (18). The proof of the Type I error in Theorem 1i.i._c_) yields $\mathbb{P}(L_{\pi}\neq\tilde{L}_{\pi})\rightarrow 0$. The proof of (17) follows similarly to that of the Type II error in Theorem 1i.i._c_).
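Before verifying (18), it may help to recall – in our own words and in the spirit of [33] – the standard chain of inequalities that makes conditions of the form (16)–(18) sufficient for the lower bound, where the risk denotes the sum of the Type I error and the worst-case Type II error. Since the prior $\pi$ is supported on the alternative, $\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(L_{\pi})=1$ and $0\leq\tilde{L}_{\pi}(\delta)\leq L_{\pi}$, the triangle inequality and Cauchy–Schwarz give $\displaystyle\inf_{T}\mathrm{Risk}(T,{\Xi}(\mathcal{C}_{s},A),\beta,\mathbf{Q})\geq 1-\frac{1}{2}\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}|L_{\pi}-1|\geq 1-\frac{1}{2}\left(1-\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}(\delta))\right)-\frac{1}{2}\sqrt{\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}^{2}(\delta))-2\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}_{\pi}(\delta))+1}.$ In particular, under (17) a truncated second moment bound of $1+o(1)$ as in (18) drives the risk to $1$, while a bound of the form $2+o(1)$ (as obtained in the present case below) still keeps the risk bounded away from $0$, so that no test is asymptotically powerful.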
To prove (18), recall that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}\left(\tilde{L}_{\pi}^{2}(\delta)\right)$ (27) $\displaystyle=\frac{1}{|\tilde{\mathcal{C}}_{s}|^{2}}\sum_{S\in\tilde{\mathcal{C}}_{s}}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A))}{Z^{2}(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))Z(\beta,\mathbf{Q},\mathbf{0})}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}(\Omega_{n}(S,\delta))$ $\displaystyle+\frac{1}{|\tilde{\mathcal{C}}_{s}|^{2}}\sum_{S_{1}\neq S_{2}\in\tilde{\mathcal{C}}_{s}}\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}}(A))Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{2}}(A))Z(\beta,\mathbf{Q},\mathbf{0})}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A)}(\Omega_{n}(S_{1},\delta)\cap\Omega_{n}(S_{2},\delta))$ $\displaystyle=T_{1}+T_{2},$ (28) Here, $T_{2}\leq 2+o(1)$, because $\frac{Z^{2}(\beta,\mathbf{Q},\mathbf{0})Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}\cup S_{2}}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{1}}(A))Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S_{2}}(A))Z(\beta,\mathbf{Q},\mathbf{0})}=(2+o(1))\frac{\cosh^{2s}(\beta m+A)+\cosh^{2s}(\beta m-A)}{(\cosh^{s}(\beta m+A)+\cosh^{s}(\beta m-A))^{2}}=2+o(1),$ where we have used $\frac{\cosh^{s}(\beta m-A)}{\cosh^{s}(\beta m+A)}=o(1)$. To bound $T_{1}$, choose $\eta>0$ small to be specified later. Set $\lambda=\frac{\tilde{t}_{n}(\delta)\sqrt{s}(1-\eta)}{1-m^{2}}$. $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(2A)}(Z_{S}\leq\sqrt{s}+\tilde{t}_{n}(\delta)|W_{n}>0)$ $\displaystyle\leq\exp\Big{(}{\lambda(m+\tilde{t}_{n}(\delta)/\sqrt{s})+s[\log\cosh(\beta m+2A-\frac{\lambda}{s})-\log\cosh(\beta m+2A)]}\Big{)}$ $\displaystyle\leq\exp\Big{(}\lambda(m-\tanh(\beta m+2A)+\frac{\tilde{t}_{n}(\delta)}{\sqrt{s}})+\frac{\lambda^{2}}{2s}\operatorname{sech}^{2}(\beta m)+o(\log|\tilde{C}_{s}|)\Big{)}$ $\displaystyle\leq\exp\Big{(}-\frac{\lambda\tilde{t}_{n}(\delta)}{\sqrt{s}}(1-\eta)+\frac{\lambda^{2}}{2s}\operatorname{sech}^{2}(\beta m)+o(\log|\tilde{C}_{s}|)\Big{)}$ $\displaystyle=\exp\Big{(}-(1+\varepsilon)(1-\eta)^{2}\log|\tilde{C}_{s}|+o(\log|\tilde{C}_{s}|)\Big{)},$ where the third inequality is by $\tanh(\beta m+2A)\geq m+2A\operatorname{sech}^{2}(\beta m)$ and small $\eta>0$. Similar bound holds for $W_{n}<0$. Therefore, $\displaystyle T_{1}$ $\displaystyle\leq C\exp\Big{(}s[\log\cosh(\beta m+2A)-2\log\cosh(\beta m+A)+\log\cosh(\beta m)]-\log|\tilde{C}_{s}|-(1+\varepsilon)(1-\eta)^{2}\log|\tilde{C}_{s}|\Big{)}$ $\displaystyle\leq C\exp\Big{(}sA^{2}-\log|\tilde{C}_{s}|-(1+\varepsilon)(1-\eta)^{2}\log|\tilde{C}_{s}|\Big{)}=o(1),$ by small $\varepsilon,\eta>0$ and since $\tanh(A)\sim A$. Therefore, $\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(\tilde{L}^{2}_{\pi})\leq 2+o(1)$. Hence, $\liminf_{n\rightarrow\infty}\inf_{T}\mathrm{Risk}(T,{\Xi}(\mathcal{C}_{s},A),\beta,\mathbf{Q})\geq 1-\limsup_{n\rightarrow\infty}\frac{1}{2}\sqrt{\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}(L^{2}_{\pi})-1}>0.$ Consequently, no test is asymptotically powerful. #### 6.1.2. Proof of Theorem 2 To obtain an adaptive test, we first test the hypothesis $H_{0,\beta}:\beta\leq 1$ vs $H_{1,\beta}:\beta>1$. 
This is obtained through the test $T_{n}^{(1)}=\mathbbm{1}\left(|\bar{\mathbf{X}}|\geq\frac{1}{\log{n}}\right).$ Subsequently, if $T_{n}^{(1)}=0$ we simply use the test $T_{n}^{(2)}=\mathbbm{1}\left(Z_{\max}>\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\right)$ for $\delta>0$ as in the proof of Theorem 1i.i._a_). In contrast, if $T_{n}^{(1)}=1$ we estimate $\beta$ assuming the working model “$\mathbf{X}\stackrel{{\scriptstyle\text{working}}}{{\sim}}\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}$” using the pseudo-likelihood method [9, 15, 10, 28] and, denoting this estimator by $\hat{\beta}_{\rm working}$, we consider rejecting $H_{0}$ using $\displaystyle T_{n}^{(3)}$ $\displaystyle=\mathbbm{1}(Z_{\max}>m(\hat{\beta}_{\rm working})+\sqrt{2(1+\delta)(1-m^{2}(\hat{\beta}_{\rm working}))\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|},W_{n}\geq 0)$ $\displaystyle+\mathbbm{1}(Z_{\max}>-m(\hat{\beta}_{\rm working})+\sqrt{2(1+\delta)(1-m^{2}(\hat{\beta}_{\rm working}))\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|},W_{n}<0)$ where $\delta>0$ can be chosen as in the proof of Theorem 1i.i._c_) and $m(\beta)$ is the unique positive root of $m=\tanh(\beta m)$. Our final $\beta$-adaptive test is thereafter given by $T_{n}=(1-T_{n}^{(1)})T_{n}^{(2)}+T_{n}^{(1)}T_{n}^{(3)}$. (An illustrative implementation sketch of this adaptive test is given later in this proof.) We note that in this part of the proof we do not perform the Bonferroni correction of the test $T_{n}^{(3)}$ with a test based on a conditionally centered version of $\sum_{i=1}^{n}X_{i}$ since here $\sum_{i=1}^{n}\mu_{i}\ll s\ll n$. Therefore, as is clear from the proof of Theorem 1i.i._c_), it suffices to consider only the randomized scan test described through $T_{n}^{(3)}$. We first show that uniformly over all $\boldsymbol{\mu}\in\\{\mathbf{0}\\}\cup\Xi(\mathcal{C}_{s},A)$ one has that $|\bar{\mathbf{X}}|=o_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(\frac{1}{\log{n}})$ for $\beta\leq 1$ and $|\bar{\mathbf{X}}|\gg\frac{1}{\log{n}}$ with probability converging to $1$ for $\beta>1$. For the first claim, first let $\beta\leq 1$. Then from Lemma 5 we have that $|\bar{\mathbf{X}}|=o_{\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}}(\frac{1}{\log{n}})$ for $\beta\leq 1$. Now note that by Lemma 20 we have that there exist constants $C_{1},C_{2}>0$ such that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}\right)\leq C_{1}\sum_{i=1}^{n}\tanh(\mu_{i})+C_{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\tanh(\mu_{j})\left(\mathbf{Q}_{i,j}+\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\right).$ Now suppose $\beta<1$. Then we have from [18] that $0\leq\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\lesssim 1/n$. Hence for $\beta<1$ we have $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}\right)\lesssim\sum_{i=1}^{n}\tanh(\mu_{i})+\sum_{i=1}^{n}\sum_{j=1}^{n}\tanh(\mu_{j})\left(\mathbf{Q}_{i,j}+\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\right)\lesssim\sum_{j=1}^{n}\tanh(\mu_{j})\lesssim s$ since $\|\boldsymbol{\mu}\|_{0}\leq s$.
Hence under any $\boldsymbol{\mu}$ and $\beta<1$ one has $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=1)$ $\displaystyle=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}>\frac{n}{\log{n}}\right)+\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}<-\frac{n}{\log{n}}\right)$ $\displaystyle\leq\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}>\frac{n}{\log{n}}\right)+\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(\sum_{i=1}^{n}X_{i}<-\frac{n}{\log{n}}\right)\quad\text{by monotonicity of}\ \sum_{i=1}^{n}X_{i}$ $\displaystyle=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\frac{\sum_{i=1}^{n}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}>\frac{n}{\log{n}\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}-\frac{\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}\right)+o(1)$ $\displaystyle\leq\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\frac{\sum_{i=1}^{n}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}>C_{1}\frac{\sqrt{n}}{\log{n}}-C_{2}\frac{s}{\sqrt{n}}\right)+o(1)$ where in the last line we have used the fact that by GHS inequality (Lemma 15) and Lemma 5 we have that $n\lesssim\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i=1}^{n}X_{i})\lesssim n$ for $\beta<1$. Now note that by our assumptions $s/\sqrt{n}\ll\sqrt{n}/\log{n}$ since $s\ll n/\log{n}$ and hence the first term of the display above goes to $0$ uniformly in $\boldsymbol{\mu}$ for $\beta<1$. Turning to $\beta=1$, we have from [18] that $0\leq\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\lesssim 1/\sqrt{n}$. Hence for $\beta=1$ we have $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})\lesssim\sum_{i=1}^{n}\tanh(\mu_{i})\sum_{i=1}^{n}\sum_{j=1}^{n}\tanh(\mu_{j})\left(\mathbf{Q}_{i,j}+\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\right)\lesssim\sum_{j=1}^{n}\tanh(\mu_{j})\lesssim s\sqrt{n}$ since $\|\boldsymbol{\mu}\|_{0}\leq s$. 
Hence under any $\boldsymbol{\mu}$ and $\beta=1$ one has $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=1)$ $\displaystyle=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}>\frac{n}{\log{n}}\right)+\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}<-\frac{n}{\log{n}}\right)$ $\displaystyle\leq\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i=1}^{n}X_{i}>\frac{n}{\log{n}}\right)+\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(\sum_{i=1}^{n}X_{i}<-\frac{n}{\log{n}}\right)\quad\text{by monotonicity of}\ \sum_{i=1}^{n}X_{i}$ $\displaystyle=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\frac{\sum_{i=1}^{n}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}>\frac{n}{\log{n}\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}-\frac{\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}\right)+o(1)$ $\displaystyle\leq\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\frac{\sum_{i=1}^{n}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}{\sqrt{\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})}}>C_{1}\frac{n^{1/4}}{\log{n}}-C_{2}\frac{s}{n^{1/4}}\right)+o(1)$ where in the last line we have used the fact that by GHS inequality (Lemma 15) and Lemma 5 we have that $n^{3/4}\lesssim\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i=1}^{n}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i=1}^{n}X_{i})\lesssim n^{3/4}$ for $\beta<1$. Now note that by our assumptions $s/n^{1/4}\ll n^{1/4}/\log{n}$ since $s\ll\sqrt{n}/\log{n}$ and hence the first term of the display above goes to $0$ uniformly in $\boldsymbol{\mu}$ for $\beta=1$ as well. This shows that for any $\boldsymbol{\mu}$ one has that $T_{n}^{(1)}$ has Type I error converging to $0$ for testing $H_{0,\beta}$ vs $H_{1,\beta}$. Next we show that uniformly over all $\boldsymbol{\mu}\in\\{\mathbf{0}\\}\cup\Xi(\mathcal{C}_{s},A)$ $|\bar{\mathbf{X}}|\gg\frac{1}{\log{n}}$ with probability converging to $1$ for $\beta>1$. The claim is trivial for $\boldsymbol{\mu}=\mathbf{0}$ by [25]. In particular, it also follows that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0)\leq e^{-Cn}$ for some $C>0$ depending on $\beta>1$. Then for any $\boldsymbol{\mu}$ with $\|\boldsymbol{\mu}\|_{\infty}\leq M$ for some $M>0$ one has $\displaystyle\sup_{\boldsymbol{\mu}:\|\boldsymbol{\mu}\|_{\infty}\leq M}\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0)$ $\displaystyle=\sup_{\boldsymbol{\mu}:\|\boldsymbol{\mu}\|_{\infty}\leq M}\frac{Z(\beta,\mathbf{Q},\mathbf{0})}{Z(\beta,\mathbf{Q},\boldsymbol{\mu})}\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}\left(\mathbbm{1}(T_{n}^{(1)}=0)e^{\sum_{i=1}^{n}\mu_{i}X_{i}}\right)$ $\displaystyle=\sup_{\boldsymbol{\mu}:\|\boldsymbol{\mu}\|_{\infty}\leq M}e^{2s\|\boldsymbol{\mu}\|_{\infty}}\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0)\leq e^{2sM-Cn}\rightarrow 0$ since $s\ll n$. This completes the proof for $\beta>1$ for the consistency of testing $H_{0,\beta}$ vs $H_{1,\beta}$ using $T_{n}^{(1)}$ under any $\boldsymbol{\mu}\in\mathbb{R}_{+}^{n}$ s.t. $\|\boldsymbol{\mu}\|_{0}\leq s\ll n$ and $\|\boldsymbol{\mu}\|_{\infty}=O(1)$. 
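Before turning to the consistency of the combined test $T_{n}$, the following is an illustrative Python sketch (ours, not part of the formal argument) of how the two-stage adaptive procedure just analyzed could be assembled in practice: the pseudo-likelihood step is only a schematic stand-in for the estimators of [9, 15, 10, 28], and the covering net is again replaced by an explicit list of equal-size candidate sets.

```python
import numpy as np

def m_of_beta(beta, iters=200):
    """Unique positive root of m = tanh(beta*m) when beta > 1 (fixed-point iteration);
    the iteration returns (numerically) 0 when beta <= 1."""
    m = 0.9
    for _ in range(iters):
        m = np.tanh(beta * m)
    return m

def pseudo_likelihood_beta(x, grid=None):
    """Schematic Curie-Weiss pseudo-likelihood estimate of beta under the working model mu = 0:
    maximize sum_i [beta*m_i*x_i - log cosh(beta*m_i)], with m_i the leave-one-out mean."""
    grid = np.linspace(0.01, 3.0, 300) if grid is None else grid
    n = len(x)
    m_i = (x.sum() - x) / n
    scores = [np.sum(b * m_i * x - np.log(np.cosh(b * m_i))) for b in grid]
    return grid[int(np.argmax(scores))]

def adaptive_test(x, candidate_sets, delta=0.1, rng=None):
    """Illustrative version of T_n = (1 - T1)*T2 + T1*T3 from the proof of Theorem 2."""
    rng = rng or np.random.default_rng()
    n = len(x)
    s = len(candidate_sets[0])               # all candidate sets assumed to have size s
    z_max = max(x[list(S)].sum() / np.sqrt(len(S)) for S in candidate_sets)
    log_net = np.log(len(candidate_sets))
    t1 = abs(x.mean()) >= 1.0 / np.log(n)    # first-stage screen for beta > 1
    if not t1:                               # T2: plain scan test, free of beta
        return z_max > np.sqrt(2 * (1 + delta) * log_net)
    beta_hat = pseudo_likelihood_beta(x)     # working-model estimate of beta
    m = m_of_beta(beta_hat)
    w_n = rng.normal(x.mean(), 1.0 / np.sqrt(n * beta_hat))
    center = m * np.sqrt(s) if w_n >= 0 else -m * np.sqrt(s)
    return z_max > center + np.sqrt(2 * (1 + delta) * (1 - m**2) * log_net)   # T3
```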
We next turn to the consistency of the test $T_{n}=(1-T_{n}^{(1)})T_{n}^{(2)}+T_{n}^{(1)}T_{n}^{(3)}$ for testing the actual hypotheses (2) of interest. First consider the case $\beta\leq 1$. Let us first consider the Type I error of the test $T_{n}$. $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}=1)=\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0,T_{n}^{(2)}=1)+\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=1,T_{n}^{(3)}=1).$ By consistency of the test $T_{n}^{(1)}$ for testing $H_{0,\beta}$ vs $H_{1,\beta}$ one has for $\beta\leq 1$ that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=1)\rightarrow 0$. Hence $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=1,T_{n}^{(3)}=1)\rightarrow 0$ as well. Hence it is enough to show that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(2)}=1)\rightarrow 0$ under $\beta\leq 1$. However this follows by arguments verbatim to the proof of the Type I errors in Theorem 1i.i._a_) and Theorem 1i.i._b_) since the test $T_{n}^{(2)}$ is free of $\beta$. This completes the control of the Type I error for $\beta\leq 1$. Turning to the Type II error, once again note that for any $\boldsymbol{\mu}\in\Xi(\mathcal{C}_{s},A)$ $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}=0)=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=0,T_{n}^{(2)}=0)+\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=1,T_{n}^{(3)}=0).$ Once again by consistency of the test $T_{n}^{(1)}$ for testing $H_{0,\beta}$ vs $H_{1,\beta}$ (uniformly against any $\boldsymbol{\mu}$ in our class having $\|\boldsymbol{\mu}\|_{\infty}=O(1)$) one has for $\beta\leq 1$ that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=1)\rightarrow 0$ uniformly in such $\boldsymbol{\mu}$’s. Hence $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=1,T_{n}^{(3)}=0)\rightarrow 0$ uniformly as well and it is enough to show that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(2)}=0)\rightarrow 0$ uniformly in $\boldsymbol{\mu}$ under $\beta\leq 1$. This once again follows by arguments verbatim to the proof of the Type II errors in Theorem 1i.i._a_) and Theorem 1i.i._b_) since the test $T_{n}^{(2)}$ is free of $\beta$. We next turn to the final case of analyzing the test $T_{n}$ under $\beta>1$. Let us first consider the Type I error of the test $T_{n}$. $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}=1)=\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0,T_{n}^{(2)}=1)+\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=1,T_{n}^{(3)}=1).$ By consistency of the test $T_{n}^{(1)}$ for testing $H_{0,\beta}$ vs $H_{1,\beta}$ one has for $\beta>1$ that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0)\rightarrow 0$. Hence $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(1)}=0,T_{n}^{(2)}=1)\rightarrow 0$ as well. Hence it is enough to show that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(T_{n}^{(3)}=1)\rightarrow 0$ under $\beta>1$. Now note that the only dependence of the test $T_{n}^{(3)}$ on $\hat{\beta}_{\rm working}$ is in its cutoff and the proof of the Type I error in Theorem 1i.i._c_) shows that it is enough to show that $m(\hat{\beta}_{\rm working})\stackrel{{\scriptstyle\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}}}{{\rightarrow}}m(\beta)$ which follows from Lemma 19. In particular, Lemma 19 is applicable since $\liminf_{n\rightarrow\infty}\frac{1}{n}\log\frac{1}{2^{n}}Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})>0,$ (29) using [11, (7.9)]. Hence Lemma 19 indeed applies.
Turning to the Type II error of our test $T_{n}$, once again by consistency of the test $T_{n}^{(1)}$ for testing $H_{0,\beta}$ vs $H_{1,\beta}$ (uniformly against any $\boldsymbol{\mu}$ in our class having $\|\boldsymbol{\mu}\|_{\infty}=O(1)$), one has for $\beta>1$ that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=0)\rightarrow 0$ uniformly in such $\boldsymbol{\mu}$’s. Hence $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(1)}=0,T_{n}^{(2)}=0)\rightarrow 0$ uniformly as well and it is enough to show that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(T_{n}^{(3)}=0)\rightarrow 0$ uniformly in $\boldsymbol{\mu}$ under $\beta>1$. Once again from the proof of the Type II error in Theorem 1i.i._c_) it is enough to show that $m(\hat{\beta}_{\rm working})\stackrel{{\scriptstyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}}{{\rightarrow}}m(\beta)$ uniformly in $\boldsymbol{\mu}$ in our class, which follows from Lemma 19. ### 6.2. Technical Lemmas for Proofs of Theorems in Section 2.1 ###### Lemma 1. Suppose $\mathbf{X}\sim\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}}$ with $\mathbf{Q}^{\rm CW}_{i,j}=\mathbf{1}(i\neq j)/n$ and let $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$. Define $t_{n}(\delta):=\sqrt{2(1+\delta)(1-m^{2})\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$ for some $\delta>0$, where $m:=m(\beta)$ is the unique positive root of $m=\tanh(\beta m)$ when $\beta>1$, and $m:=0$ when $\beta\leq 1$. 1. (a) If $\beta<1,s\ll\frac{n}{\log n}$, $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>t_{n}(\delta))\leq(1+o(1))e^{-t^{2}_{n}(\delta)}$. 2. (b) If $\beta=1,s\ll\frac{\sqrt{n}}{\log n}$, then the same conclusion as in (a) holds. 3. (c) If $\beta>1,s\ll\frac{n}{\log n}$, $P_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>m\sqrt{s}+t_{n}(\delta))\leq(1+o(1))\exp\left(-\frac{t^{2}_{n}(\delta)}{2(1-m^{2})}\right)$ (30) Further, $P_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>-m\sqrt{s}+t_{n}(\delta),W_{n}<0)\leq(1+o(1))\exp\left(-(1+o(1))\frac{t^{2}_{n}(\delta)}{2(1-m^{2})}\right)$ (31) ###### Proof. 1. (a) Pick $\lambda>0$ such that $\tanh\left(\frac{\lambda}{\sqrt{s}}\right)=\frac{t_{n}(\delta)}{\sqrt{s}}$. So, $\lambda\geq t_{n}(\delta)$, and $\cosh\left(\frac{\lambda}{\sqrt{s}}\right)=\left(1-\frac{t^{2}_{n}(\delta)}{s}\right)^{-1/2}.$ Also, $s\tanh\left(\frac{\lambda}{\sqrt{s}}\right)=\sqrt{s}\,t_{n}(\delta)=O(\sqrt{s\log|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|})\ll\sqrt{n}$. Hence, $\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>t_{n}(\delta))$ $\displaystyle\leq e^{-\lambda t_{n}(\delta)}\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(\frac{\lambda}{\sqrt{s}}))}{Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}$ $\displaystyle=(1+o(1))e^{-\lambda t_{n}(\delta)}\cosh^{s}\left(\frac{\lambda}{\sqrt{s}}\right)\leq(1+o(1))e^{-t^{2}_{n}(\delta)},$ where the equality is due to Lemma 2. 2. (b) When $\beta=1$, we pick the same $\lambda$. Now, $s\tanh\left(\frac{\lambda}{\sqrt{s}}\right)\ll n^{1/4}$ and the bound is obtained using Lemma 2 again; moreover, the $o(1)$ term does not depend on the choice of $S$. 3. (c) When $\beta>1$, pick $\lambda=\frac{t_{n}(\delta)\sqrt{s}}{1-m^{2}}$. Here, $s\tanh(\lambda/s)\ll\sqrt{n}$ and $\lambda/s\rightarrow 0$.
$\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>m\sqrt{s}+t_{n}(\delta))$ $\displaystyle\leq\exp\left(-\frac{\lambda t}{\sqrt{s}}-\lambda m\right)\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\exp\left(\frac{\lambda}{s}\sum_{i\in S}X_{i}\right)$ $\displaystyle=\exp\left(-\frac{\lambda t}{\sqrt{s}}-\lambda m\right)\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(\frac{\lambda}{s}))}{Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}$ $\displaystyle\leq(1+o(1))\exp\left(-\frac{\lambda t}{\sqrt{s}}-\lambda m\right)\left(\frac{\cosh(\beta m+\lambda/s)}{\cosh(\beta m)}\right)^{s}$ $\displaystyle=(1+o(1))\exp\left(-\frac{\lambda t}{\sqrt{s}}-\lambda m+s\log\cosh(\beta m+\lambda/s)-s\log\cosh(\beta m)\right)$ $\displaystyle\leq(1+o(1))\exp\left(-\frac{\lambda t}{\sqrt{s}}\underbrace{-\lambda m(\beta)+\frac{s\lambda}{s}\tanh(\beta m(\beta))}_{=0}+\frac{s\lambda^{2}}{2s^{2}}\operatorname{sech}^{2}(\beta m(\beta))\right)$ $\displaystyle=(1+o(1))\exp\left(-\frac{\lambda t}{\sqrt{s}}+\frac{\lambda^{2}}{2s}\operatorname{sech}^{2}(\beta m(\beta))\right)$ $\displaystyle=(1+o(1))\exp\left(-\frac{t^{2}}{2(1-m^{2})}\right),$ where the second inequality is due to Lemma 2 and the third inequality occurs since the third term of the Taylor expansion is negative. When $W_{n}<0$, note that, $\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(Z_{S}>-m\sqrt{s}+t_{n}(\delta),W_{n}<0)$ $\displaystyle\leq\exp\left(-\frac{\lambda t}{\sqrt{s}}+\lambda m\right)\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\exp\left(\frac{\lambda}{s}\sum_{i\in S}X_{i}\mathbbm{1}_{W_{n}\leq 0}\right)$ $\displaystyle\leq(1+o(1))\exp\left(-\frac{\lambda t_{n}(\delta)}{\sqrt{s}}+\lambda m\right)\left(\frac{\cosh(\beta m-\lambda/s)}{\cosh(\beta m)}\right)^{s}$ Previously, the third term in Taylor expansion was $<0$, here we simply bound it by $O\left(\frac{t^{3}_{n}(\delta)}{s}\right)=o(t^{2}_{n}(\delta))$. ∎ The following Lemma provides estimates for ratios of normalizing constants in the Curie-Weiss model. ###### Lemma 2. For $s\in[n]$ and $A,\beta>0$ let $Z_{n}(\beta,s,A)=Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))$ be the partition function of the Curie-Weiss model $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}$ with $\mathbf{Q}^{\rm CW}_{i,j}=\mathbbm{1}(i\neq j)/n$ Then the following conclusions hold for any $C>0$: 1. (a) If $\beta<1,s\ll\frac{n}{\log n}$ and $s\tanh(A)\ll\sqrt{n}$, then we have $\displaystyle\frac{Z_{n}(\beta,s,A)}{Z_{n}(\beta,0,0)}=(1+o(1))\cosh(A)^{s}.$ (32) 2. (b) If $\beta>1,s\ll\frac{n}{\log n}$ and $s\tanh(A)\ll\sqrt{n}$ we have $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\left(\exp(A\sum_{i\in S}X_{i})|W_{n}>0\right)=(1+o(1))\frac{\cosh(\beta m+A)^{s}}{\cosh(\beta m)^{s}},$ $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\left(\exp(A\sum_{i\in S}X_{i})|W_{n}<0\right)=(1+o(1))\frac{\cosh(\beta m-A)^{s}}{\cosh(\beta m)^{s}},$ where $W_{n}$ is the auxilliary variable for the Curie-Weiss model (cf. [42, lemma 3]). Therefore, $\displaystyle\frac{Z_{n}(\beta,s,A)}{Z_{n}(\beta,0,0)}=(1+o(1))\frac{1}{2}\left(\frac{\cosh(\beta m-A)^{s}}{\cosh(\beta m)^{s}}+\frac{\cosh(\beta m+A)^{s}}{\cosh(\beta m)^{s}}\right)$ (33) where $m$ is the unique positive root of the equation $m=\tanh(\beta m)$. 3. (c) If $\beta=1$ and $s\ll\frac{n}{\log n}$, then for $s\tanh(A)\ll n^{1/4}$ we have (32) holds. ###### Proof. We begin with the representation used in [42, lemma 3]. 
Define a random variable $W_{n}$ which given $\mathbf{X}$ has a distribution $N(\bar{\mathbf{X}},1/n)$. Then under $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}$, given $W_{n}=w_{n}$, each $X_{i}$’s are i.i.d with $\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(X_{i}=x_{i}|W_{n}=w_{n})=\frac{e^{\beta w_{n}x_{i}}}{e^{\beta w_{n}}+e^{-\beta w_{n}}}.$ Therefore, for any $\mu_{i}\in\mathbb{R}$ one has $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(\exp(\mu_{i}X_{i})|W_{n})=\frac{\cosh(\mu_{i}+\beta W_{n})}{\cosh(\beta W_{n})}$. Therefore for any $\boldsymbol{\mu}\in\mathbb{R}^{n}$ $\displaystyle\frac{Z_{n}(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu})}{Z_{n}(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}$ $\displaystyle=\frac{\sum_{\mathbf{x}\in\\{\pm 1\\}^{n}}\exp\left(\frac{\beta}{n}\sum_{i<j}x_{i}x_{j}+\sum_{i=1}^{n}\mu_{i}x_{i}\right)}{\sum_{\mathbf{x}\in\\{\pm 1\\}^{n}}\exp\left(\frac{\beta}{n}\sum_{i<j}x_{i}x_{j}\right)}=\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\left(\exp(\sum_{i=1}^{n}\mu_{i}X_{i})\right)$ $\displaystyle=\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}\left[\exp\left\\{\sum_{i=1}^{n}\left(\log\cosh(\mu_{i}+\beta W_{n})-\log\cosh(\beta W_{n})\right)\right\\}\right].$ (34) Consequently we have $\displaystyle\frac{Z_{n}(\beta,s,A)}{Z_{n}(\beta,0,0)}=\frac{Z_{n}(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}{Z_{n}(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}=\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(e^{T_{n}}),\quad T_{n}:=s[\log\cosh(\beta W_{n}+A)-\log\cosh(\beta W_{n})].$ (35) Also use [42, Lemma 3] to note that $W_{n}$ marginally has a density proportional to $e^{-nf(w)}$, with $f(w):=\beta w^{2}/2-\log\cosh(\beta w)$. With this we are ready to prove the lemma. 1. (a) To begin note that $|\log\cosh(\beta W_{n})|\leq\frac{\beta^{2}}{2}W_{n}^{2}$. Also, a two term Taylor’s series expansion in $W_{n}$ gives $\displaystyle\Big{|}\log\cos(\beta W_{n}+A)-\log\cosh(A)-\beta W_{n}\tanh(A)\Big{|}\leq$ $\displaystyle\frac{\beta^{2}}{2}W_{n}^{2}.$ Combining these two and setting $B:=\tanh(A)$ we have $|T_{n}-s\log\cosh(A)|\leq\beta|W_{n}|sB+\beta^{2}sW_{n}^{2},$ the RHS of which converges to $0$ in probability, as $\sqrt{n}W_{n}=O_{\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}}(1)$ by part (a) of Lemma 5. Thus to prove (32), using uniform integrability it suffices to show that $\displaystyle\limsup_{n\rightarrow\infty}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2T_{n}-2s\log\cosh(A)}<\infty.$ (36) To this effect, again using the above Taylor’s expansion gives $T_{n}-s\log\cosh(A)\leq\beta s|W_{n}|B+s\beta^{2}W_{n}^{2},$ using which the RHS (36) can be bounded as follows: $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2T_{n}-2s\log\cosh(A)}\leq\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2\beta sB|W_{n}|+2s\beta^{2}W_{n}^{2}}\leq 2\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2\beta sBW_{n}+2s\beta^{2}W_{n}^{2}}$ Setting $\lambda_{1}:=\beta(1-\beta),\lambda_{2}:=\beta$ we have $\lambda_{1}\frac{w^{2}}{2}\leq f(w)\leq\lambda_{2}\frac{w^{2}}{2},$ which readily gives $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2\beta sW_{n}B+s\beta^{2}W_{n}^{2}}\leq$ $\displaystyle 2\frac{\int_{\mathbb{R}}e^{2\beta sBz+2s\beta^{2}z^{2}-n\lambda_{1}z^{2}/2}dz}{\int_{\mathbb{R}}e^{-n\lambda_{2}z^{2}/2}dz}$ $\displaystyle=$ $\displaystyle\sqrt{\frac{n\lambda_{2}}{n\lambda_{1}-4\beta^{2}s}}\text{exp}\Big{\\{}\frac{2\beta^{2}s^{2}B^{2}}{n\lambda_{1}-4\beta^{2}s}\Big{\\}},$ the RHS of which converges to $1$. 
Thus (36) holds, and so the proof is complete. 2. (b) To begin, setting $C_{n}:=\int_{\mathbb{R}}e^{-nf(w)}dw$ we have $\displaystyle C_{n}\mathbb{E}e^{T_{n}}=$ $\displaystyle\int_{-\infty}^{0}\prod_{i=1}^{n}e^{\log\cosh(\beta w+\mu_{i})-\log\cosh(\beta w)}e^{-nf(w)}dw+\int_{0}^{\infty}\prod_{i=1}^{n}e^{\log\cosh(\beta w+\mu_{i})-\log\cosh(\beta w)}e^{-nf(w)}dw$ $\displaystyle=$ $\displaystyle\int_{0}^{\infty}\prod_{i=1}^{n}e^{\log\cosh(-\beta w+\mu_{i})-\log\cosh(-\beta w)}e^{-nf(w)}dw+\int_{0}^{\infty}\prod_{i=1}^{n}e^{\log\cosh(\beta w+\mu_{i})-\log\cosh(\beta w)}e^{-nf(w)}dw$ $\displaystyle\leq$ $\displaystyle 2\int_{0}^{\infty}\prod_{i=1}^{n}e^{\log\cosh(\beta w+\mu_{i})-\log\cosh(\beta w)}e^{-nf(w)}dw,$ where the last inequality uses the fact that $\displaystyle\log\cosh(\beta w+\mu_{i})-\log\cosh(\beta w)=$ $\displaystyle\int_{0}^{\mu_{i}}\tanh(\beta w+z)dz$ $\displaystyle\leq$ $\displaystyle\int_{0}^{\mu_{i}}\tanh(-\beta w+z)dz=\log\cosh(-\beta w+\mu_{i})-\log\cosh(-\beta w),$ as $\tanh$ is monotone increasing. This shows that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{T_{n}}$ $\displaystyle=\frac{1}{2}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(e^{T_{n}}|W_{n}>0)+\frac{1}{2}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(e^{T_{n}}|W_{n}<0).$ We will now show that, $\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(e^{T_{n}}|W_{n}>0)=(1+o(1))\frac{\cosh(\beta m+A)^{s}}{\cosh(\beta m)^{s}}.$ (37) A similar argument takes care of the second term. For this, use part (b) of Lemma 5 to note that $\Big{(}\sqrt{n}(W_{n}-m)|W_{n}>0\Big{)}=O_{p}(1)$. For $W_{n}>0$, a Taylor’s series expansion gives $\displaystyle\Big{|}\log\cosh(\beta W_{n}+A)-\log\cosh(\beta m+A)-\beta(W_{n}-m)\tanh(\beta m+A)\Big{|}$ $\displaystyle\leq$ $\displaystyle\frac{\beta^{2}}{2}(W_{n}-m)^{2}$ $\displaystyle\Big{|}\log\cosh(\beta W_{n})-\log\cosh(\beta m)-\beta(W_{n}-m)\tanh(\beta m)\Big{|}$ $\displaystyle\leq$ $\displaystyle\frac{\beta^{2}}{2}(W_{n}-m)^{2},$ which on taking a difference gives $\Big{|}T_{n}-s(\log\cosh(\beta m+A)-\log\cosh(\beta m))\Big{|}\leq s\tilde{B}|W_{n}-m|+\beta^{2}s(W_{n}-m)^{2},$ where $\tilde{B}:=|\tanh(\beta m+A)-\tanh(\beta m)|$. The RHS in the display above converges to $0$ in probability, conditioned on the event $T_{n}>0$. As before, (37) will follow from this via uniform integrability if we can show that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2T_{n}-2s\log\cosh(\beta m+A)+2s\log\cosh(\beta m)}<\infty.$ (38) For showing (38), again use the above Taylor’s expansion to note that $T_{n}-s\log\cosh(\beta m+A)+s\log\cosh(\beta m)\leq s\tilde{B}|W_{n}-m|+\beta^{2}s(W_{n}-m)^{2}.$ We now claim that on $[0,\infty)$ the function $f(w)$ satisfies $\displaystyle\frac{\lambda_{2}}{2}(w-m)^{2}\leq f(w)-f(m)\leq\frac{\lambda_{1}}{2}(w-m)^{2}$ (39) for some positive constants $\lambda_{1}>\lambda_{2}$. Given the claim, a similar calculation as in part (a) shows that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(e^{T_{n}}|W_{n}>0)\leq$ $\displaystyle 2\frac{\int_{\mathbb{R}}e^{2\beta s\tilde{B}z+2s\beta^{2}z^{2}-n\lambda_{1}z^{2}/2}dz}{\int_{-m}^{\infty}e^{-n\lambda_{2}z^{2}/2}dz}$ $\displaystyle=$ $\displaystyle(1+o(1))\sqrt{\frac{n\lambda_{2}}{n\lambda_{1}-4\beta^{2}s}}\text{exp}\Big{\\{}\frac{2\beta^{2}s^{2}\tilde{B}^{2}}{n\lambda_{1}-4\beta^{2}s}\Big{\\}},$ which converges to $1$ as before, thus verifying (38) and hence completing the proof of the Lemma. 
It thus remains to prove (39). To this effect, note that the function $f(.)$ on $[0,\infty)$ has a unique global minima at $z=m$, and $f^{\prime\prime}(m)>0$. Thus the function $F:[0,\infty)\mapsto\mathbb{R}$ defined by $F(w)=\frac{f(w)-f(m)}{2(w-m)^{2}},z\neq m,\quad F(m):=f^{\prime\prime}(m)$ is continuous and strictly positive on $[0,\infty)$, and $\lim_{t\rightarrow\infty}F(t)=\beta_{2}>0$. Thus setting $\lambda_{2}:=\inf_{t\in[0,\infty)}F(t)$, $\lambda_{1}:=\sup_{t\in[0,\infty)}F(t)$, it follows that $\lambda_{2}>0$ and $\lambda_{1}<\infty$, and so the proof of (39) is complete. 3. (c) We now use a four term Taylor expansion to get $\displaystyle\Big{|}\log\cosh(W_{n}+A)-\log\cosh(A)-\frac{W_{n}^{2}}{2}\text{sech}^{2}(A)|$ $\displaystyle\leq$ $\displaystyle B|W_{n}|+\frac{1}{2}B|W_{n}|^{3}+\frac{1}{4}|W_{n}|^{4}$ $\displaystyle\Big{|}\log\cosh(W_{n})-\frac{W_{n}^{2}}{2}\Big{|}$ $\displaystyle\leq$ $\displaystyle\frac{1}{4}|W_{n}|^{4}$ This gives $|T_{n}-s\log\cosh(A)|\leq sB|W_{n}|+sB^{2}\frac{W_{n}^{2}}{2}+\frac{1}{2}sB|W_{n}|^{3}+\frac{1}{2}sW_{n}^{4}$ The RHS above converges to $0$ on noting that $sB\ll n^{1/4}$, and $W_{n}=O_{p}(n^{-1/4})$ (which follows by part (c) of Lemma 5). As before, using uniform integrability it suffices to show that $\displaystyle\limsup_{n\rightarrow\infty}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2T_{n}-2s\log\cosh(A)}<\infty.$ (40) To this end, use the bound $x-\log 2\leq\log\cosh(x)\leq x$ for $x\geq 0$ to conclude $T_{n}-s\log\cosh(A)\leq 2s\log 2.$ Also, since $\lim_{z\rightarrow 0}\frac{f(z)}{z^{4}}=\frac{1}{12},$ there exists $\delta\in(0,1)$ and $0<\lambda_{2}\leq\lambda_{1}<\infty$ such that for $|z|\leq\delta$ we have $\lambda_{1}z^{4}\leq f(z)\leq\lambda_{2}z^{4},$ which gives $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}e^{2W_{n}-2s\log\cosh(A)}\leq$ $\displaystyle e^{2s\log 2}\mathbb{P}(|T_{n}|>\delta)+\frac{\int_{-\delta}^{\delta}e^{5sB|z|-n\lambda_{1}z^{4}}dz}{\int_{-\delta}^{\delta}e^{-n\lambda_{2}z^{4}}dz},$ (41) where the last line uses the bound $\max\Big{(}sB^{2}z^{2},sB|z|^{3}+sz^{4}\Big{)}\leq sB|z|$ for all $|z|\leq\delta$. We now bound each of the terms in the RHS of (41). The denominator in the RHS of (41) by a change of variable equals $n^{-1/4}\int_{-\delta n^{1/4}}^{\delta n^{1/4}}e^{-\lambda_{2}t^{4}}dt=(1+o(1))n^{-1/4}\int_{-\infty}^{\infty}e^{-\lambda_{2}t^{4}}dt.$ For estimating the numerator of RHS, set $t_{n}:=\Big{(}\frac{10sB}{n\lambda_{1}}\Big{)}^{1/3}$ and note that for $t>t_{n}$ we have $5sB|z|\leq\frac{1}{2}n\lambda_{1}z^{4}$. This gives $\displaystyle\int_{-\delta}^{\delta}e^{5sB|z|-n\lambda_{1}z^{4}}dz\leq$ $\displaystyle 2\int_{0}^{t_{n}}e^{5sBz}dz+2\int_{t_{n}}^{\delta}e^{-\frac{n\lambda_{1}}{2}t^{4}}dt$ $\displaystyle\leq$ $\displaystyle 2t_{n}e^{5sBt_{n}}+2\int_{0}^{\infty}e^{-\frac{n\lambda_{1}}{2}t^{4}}dt$ $\displaystyle=$ $\displaystyle 2t_{n}e^{5sBt_{n}}+2n^{-1/4}\int_{0}^{\infty}e^{-\lambda_{1}t^{4}/2}dt.$ The first term above is asymptotically $2t_{n}\ll n^{-1/4}$, where we use the bound $sB\ll n^{1/4}$. Thus the numerator in the second term in the RHS of (41) is $O(n^{-1/4})$. Proceeding to estimate the first term, use Lemma 5 to note that $\mathbb{P}(|W_{n}|>\delta)$ decays exponentially in $n$, whereas $e^{2s\log 2}\leq e^{\frac{2n\log 2}{\log n}}$ is sub exponential. It thus follows that the RHS of (41) is bounded, and so the proof of part (c) is complete. ∎ ###### Lemma 3. 
Suppose $\mathbf{X}\sim\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}$ with $\mathbf{Q}^{\rm CW}_{i,j}=\mathbf{1}(i\neq j)/n$. If $\lambda\in\mathbb{R}$, $\beta<1$, $s\ll n/\log n$, $B:=s\max\\{\tanh(\lambda+A),\tanh(A)\\}\ll\sqrt{n}$, then $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}\big{(}\exp{(\lambda\sum\limits_{i\in S}X_{i})}\big{)}=(1+o(1))\Big{(}\frac{\cosh(A+\lambda)}{\cosh(A)}\Big{)}^{s}.$ (42) ###### Proof. Observe that, $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}\big{(}\exp{(\lambda\sum\limits_{i\in S}X_{i})}\big{)}=\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A+\lambda))}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}=\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A+\lambda))/Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))/Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}.$ Now, Lemma 2 completes the proof. ∎ ###### Lemma 4. Suppose $\mathbf{X}\sim\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}}$ with $\mathbf{Q}^{\rm CW}_{i,j}=\mathbf{1}(i\neq j)/n$. When $\beta>1$, let $m$ be the unique positive root of $m=\tanh(\beta m)$. Fix $\boldsymbol{\mu}\in\Xi(s,A)$. Then the following hold uniformly in $\boldsymbol{\mu}\in\mathbb{R}_{+}^{n}$ such that $\sum_{i=1}^{n}\boldsymbol{\mu}_{i}\ll n$. 1. (a) Fix any sequence $\kappa_{n}\rightarrow 0$. Then there exists a sequence $\zeta_{n}\rightarrow 0$ (depending only on the pre-fixed sequence $\kappa_{n}$) such that $\sup_{\boldsymbol{\mu}:\sum_{i=1}^{n}\mu_{i}\leq\kappa_{n}n}\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}}\left(|W_{n}-m|>\zeta_{n},W_{n}\geq 0\right)\rightarrow 0$ 2. (b) Fix any sequence $\kappa_{n}\rightarrow 0$. Then there exists a sequence $\zeta_{n}\rightarrow 0$ (depending only on the pre-fixed sequence $\kappa_{n}$) such that $\sup_{\boldsymbol{\mu}:\sum_{i=1}^{n}\mu_{i}\leq\kappa_{n}n}\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}}\left(|W_{n}+m|>\zeta_{n},W_{n}\leq 0\right)\rightarrow 0$ ###### Proof. For the proof of the case $W_{n}\geq 0$ we first note that by [24] there exists $C>0$ such that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(|W_{n}-m|>{\zeta_{n}},W_{n}\geq 0\right)\leq\exp\left(-Cn\zeta_{n}^{2}\right)$ and hence for $\sum_{i=1}^{n}\mu_{i}\ll n$ we have with $\mathcal{E}_{n}=\\{|W_{n}-m|>\zeta_{n},W_{n}\geq 0\\}$ and any sequence $\kappa_{n}\rightarrow 0$ that $\displaystyle\sup_{\boldsymbol{\mu}:\sum_{i=1}^{n}\mu_{i}\leq\kappa_{n}n}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\mathcal{E}_{n}\right)$ $\displaystyle=\sup_{\boldsymbol{\mu}:\sum_{i=1}^{n}\mu_{i}\leq\kappa_{n}n}\frac{Z(\beta,\mathbf{Q},\mathbf{0})}{Z(\beta,\mathbf{Q},\boldsymbol{\mu})}\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}\left(\mathbbm{1}(\mathcal{E}_{n})e^{\sum_{i=1}^{n}\mu_{i}X_{i}}\right)$ $\displaystyle\leq\sup_{\boldsymbol{\mu}:\sum_{i=1}^{n}\mu_{i}\leq\kappa_{n}n}e^{2\sum_{i=1}^{n}\mu_{i}}\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(\mathcal{E}_{n})\leq e^{2\kappa_{n}n-Cn\zeta_{n}^{2}}\rightarrow 0$ whenever $n\zeta_{n}^{2}\gg\kappa_{n}n$. This completes the proof of part (a). For the case $W_{n}\leq 0$, by [24] there exists $C>0$ such that $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}\left(|W_{n}+m|>{\zeta_{n}},W_{n}\leq 0\right)\leq\exp\left(-Cn\zeta_{n}^{2}\right)$ and the rest of the proof is similar. ∎ The next lemma follows from [25, Theorems 1-3]. ###### Lemma 5.
Suppose $\mathbf{X}\sim\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}}$ with $\mathbf{Q}^{\rm CW}_{i,j}=\mathbbm{1}(i\neq j)/n$. 1. (a) If $\beta\in(0,1)$ then under $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}$ we have $\sqrt{n}\bar{\mathbf{X}}\stackrel{{\scriptstyle d}}{{\rightarrow}}N\left(0,\frac{1}{1-\beta}\right).$ 2. (b) If $\beta>1$ then under $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}$ we have $\sqrt{n}(\bar{\mathbf{X}}-m|\bar{\mathbf{X}}>0)\stackrel{{\scriptstyle d}}{{\rightarrow}}N\left(0,\frac{1-m^{2}}{1-\beta(1-m^{2})}\right),$ where $m$ is the unique positive root of the equation $t=\tanh(\beta t)$. 3. (c) If $\beta=1$ then under $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}$ we have $n^{1/4}\bar{\mathbf{X}}\stackrel{{\scriptstyle d}}{{\rightarrow}}Y,$ where $Y$ is a random variable with density proportional to $e^{-y^{4}/12}$. ### 6.3. Proof of Results in Section 2.2 #### 6.3.1. Proof of Theorem 3 We will divide the proof into three parts based on high $(\beta<1)$, critical $(\beta=1)$, and low $(\beta>1)$ temperature. Throughout we will use $\mathbf{Q}$ for the scaled adjacency matrix of either the graph $G(n,p)$ (with $p=\Theta(1)$) or a random regular graph on $n$ vertices each having degree $d=\Theta(n)$. ##### Proof for upper bound in High Temperature $(0\leq\beta<1)$ Regime: We first focus on the upper bound. As before, the optimal test is given by the scan test which rejects for large values of $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}$ (recall that for $S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ we defined $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$). The cut-off for the test is decided by the moderate deviation behavior of $Z_{S}$’s given in Lemma 10 – which implies that for any $\delta>0$, the test given by $T_{n}(\delta)=\mathbf{1}\left\\{Z_{\max}>\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\right\\}$ has Type I error converging to $0$. Turning to the Type II error, consider any $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\in\Xi(\mathcal{C}_{s},A)$. First note that by the GHS inequality $\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i\in\tilde{S}^{\star}}X_{i})=O(s)$. As a result, $\frac{\sum_{i\in\tilde{S^{\star}}}(X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i}))}{\sqrt{s}}=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)$. Therefore, as usual it is enough to show that there exists $\delta>0$ such that $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. To this end, first let $S^{\star}\in\mathcal{C}_{s}$ be such that the signal lies on $S^{\star}$, i.e. for all $i\in S^{\star}$ one has $\boldsymbol{\mu}_{i}\geq A$. Note that by monotonicity arguments it is enough to consider $\boldsymbol{\mu}_{i}=A$. By definition of covering, we can find a $\tilde{S}^{\star}\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ such that $\gamma\left(\tilde{S}^{\star},S^{\star}\right)\leq\varepsilon_{n}$, i.e. $|\tilde{S}^{\star}\cap S^{\star}|\geq s(1-\varepsilon_{n}/\sqrt{2})$.
Thereafter note that by Lemma 20 we have that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\geq A|\tilde{S}^{\star}\cap S^{\star}|-A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j}),$ (43) However, by [18, Lemma 9 (a) and Theorem 7] we have that $\displaystyle A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j})$ $\displaystyle\lesssim\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{s}\left(\frac{|\tilde{S}^{\star}\cap S^{\star}||S^{\star}|}{n}+|\tilde{S}^{\star}\cap S^{\star}|\right)$ $\displaystyle\leq\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|s}}{{n}}+\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll As,$ since $\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll s\ll n/\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}$. Consequently, we immediately have that there exists $\epsilon>0$ such that $\displaystyle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)$ $\displaystyle\geq As(1+o(1))\geq\sqrt{2(1+\epsilon)s\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$ Therefore, we can conclude that for any $\delta<\epsilon$ we have $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. This completes the proof of the upper bound for $0<\beta<1$. ##### Proof for upper bound at Critical Temperature $(\beta=1)$ Regime: Similar to the $\beta<1$ regime, the optimal test is given by the scan test which rejects for large values of $Z_{\max}:=\max\limits_{S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})}Z_{S}$ (recall that for $S\in\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})$ we defined $Z_{S}=\sum_{i\in S}X_{i}/\sqrt{s}$). The cut-off for the test is decided by the moderate deviation behavior of $Z_{S}$’s given in Lemma 10 – which implies that for any $\delta>0$, the test given by $T_{n}(\delta)=\mathbf{1}\left\\{Z_{\max}>\sqrt{2(1+\delta)\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}\right\\}$ has Type I error converging to $0$. Turning to the Type II error, we again consider any $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\in\Xi(\mathcal{C}_{s},A)$. First note that by the GHS inequality $\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in\tilde{S}^{\star}}X_{i})\leq\mathrm{Var}_{\beta,\mathbf{Q},\mathbf{0}}(\sum_{i\in\tilde{S}^{\star}}X_{i})=O(s)$ even for $\beta=1$ since $s\ll\frac{\sqrt{n}}{\log{n}}$ by appealing to [18, Lemma 9(c)]. Hence, it is enough to show that there exists $\delta>0$ such that $t_{n}(\delta)-\frac{1}{\sqrt{s}}\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\left(\sum_{i\in\tilde{S}^{\star}}X_{i}\right)\rightarrow-\infty$. As before, let $S^{\star}\in\mathcal{C}_{s}$ be such that the signal lies on $S^{\star}$, i.e. for all $i\in S^{\star}$ one has $\boldsymbol{\mu}_{i}\geq A$. By monotonicity arguments it is enough to consider $\boldsymbol{\mu}_{i}=A\mathbbm{1}_{i\in S^{\star}}$. Following (43), it is enough to upper bound $A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j})$.
By [18, Lemma 9 (c) and Theorem 7], we have that $\displaystyle A^{2}\sum_{i\in\tilde{S}^{\star}\cap S^{\star}}\sum_{j\in S}\mathrm{Cov}_{\beta,\mathbf{Q},\tilde{\mathbf{\eta}}_{S^{\star}}(A)}(X_{i},X_{j})$ $\displaystyle\lesssim\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}{s}\left(\frac{|\tilde{S}^{\star}\cap S^{\star}||S^{\star}|}{\sqrt{n}}+|\tilde{S}^{\star}\cap S^{\star}|\right)$ $\displaystyle\leq\frac{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|s}}{\sqrt{n}}+\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll As,$ since $\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}\ll s\ll\frac{n}{\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$. This completes the proof of the upper bound for $\beta=1$. ##### Proof for upper bound in Low Temperature $(\beta>1)$ Regime: To prove the upper bound on detection threshold, we will use the same randomized test as described in Theorem 1i.i._c_). Let the rejection region of the test be denoted by $\Omega_{n}$. Using [17, Theorem 1.6], we obtain $\log\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(\Omega_{n})\leq C_{\delta}\Big{(}\log\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(\Omega_{n})+\|\mathbf{Q}\|^{2}_{F}+\sum_{i=1}^{n}(R_{i}-1)^{2}\Big{)}.$ Now, $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(\Omega_{n})=o(1)$ by Theorem 1i.i._c_). The error term in RHS is $O(1)$ by [17, Section 1.2]. Hence, type-I error of the proposed test converges to $0$. For the type-II error, fix $\eta>0$ and $\tanh(A)=\sqrt{2(1+\delta)^{2}(1-m^{2})^{-1}\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}/s}$. To get the desired cut-off of the test, take $\delta>0$ to be chosen later and set $t_{n}(\delta)=\sqrt{2(1+\delta)(1-m^{2})\log{|\mathcal{N}(\mathcal{C}_{s},\gamma,\varepsilon_{n})|}}$. Assume $\boldsymbol{\mu}$ be the true signal with support $S^{\star}$ and we scan over $S$ such that $\gamma(S,S^{\star})\leq\varepsilon_{n}=o(1)$. We will show that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}>ms+t\sqrt{s}|\bar{\mathbf{X}}\geq 0)\rightarrow 1$. Note that, $\displaystyle\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}>ms+t\sqrt{s}|\bar{\mathbf{X}}\geq 0)$ $\displaystyle=\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\frac{\sum_{i\in S}X_{i}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}|\bar{\mathbf{X}}>0)}{\sqrt{Var_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}|\bar{\mathbf{X}}>0)}}>\frac{ms+t\sqrt{s}-\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}|\bar{\mathbf{X}}>0)}{\sqrt{Var_{\beta,\mathbf{Q},\boldsymbol{\mu}}(\sum_{i\in S}X_{i}|\bar{X}>0)}}|\bar{\mathbf{X}}\geq 0)$ Since LHS is $O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)$, it is enough to show the RHS (henceforth denoted by $T_{n}$) converges to $-\infty$. 
Defining $\alpha_{n}=\sqrt{\frac{\log n}{n}}$, we obtain the following estimates from [18, Lemma 9] for some absolute constants $c_{i}>0$, $i\in[3]$: $\displaystyle\max_{i,j}\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j}|\bar{\mathbf{X}}\geq 0)\leq\frac{c_{1}}{n}$ (44) $\displaystyle\max_{i}|\mathrm{Var}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i}|\bar{X}\geq 0)-\operatorname{sech}^{2}(\beta m+\mu_{i})|\leq c_{2}\alpha_{n}$ (45) $\displaystyle\max_{i}|\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i}|\bar{X}\geq 0)-\tanh(\beta m+\mu_{i})|\leq c_{3}\alpha_{n}.$ (46) All the above statements hold with high probability with respect to the randomness of $\mathbb{G}_{n}$. Using the bounds, we obtain $\displaystyle T_{n}$ $\displaystyle\leq\frac{ms- t_{n}(\delta)\sqrt{s}-\sum_{i\in S}\tanh(\beta m+\mu_{i})+sc_{3}\alpha_{n}}{\sqrt{\sum_{i\in S}\operatorname{sech}^{2}(\beta m+\mu_{i})+sc_{2}\alpha_{n}-\frac{cs^{2}}{n}}}\leq\frac{ms- t_{n}(\delta)\sqrt{s}-s\tanh(\beta m+A)+sc_{3}\alpha_{n}}{\sqrt{s\operatorname{sech}^{2}(\beta m+A)+sc_{2}\alpha_{n}-\frac{cs^{2}}{n}}}$ $\displaystyle\leq\frac{1}{C\sqrt{s}}\left(ms- t_{n}(\delta)\sqrt{s}-s\big{(}m-(1+o(1)A\operatorname{sech}^{2}(\beta m)\big{)}+sc_{3}\alpha_{n}\right)$ $\displaystyle\leq\frac{1}{C}\left(-t_{n}(\delta)+As(1+o(1))(1-m^{2})+o(As)\right),$ since $\alpha_{n}=o(A)$. By choosing $\delta$ based on $\eta$, the final bound converges to $\infty$. The same calculation holds for $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}\big{(}\sum_{i\in S}X_{i}>-ms+t\sqrt{s}|\bar{\mathbf{X}}<0\big{)}$ yielding Type-II error converging to $0$. ##### Proof for lower bound in $\beta>0$ Regime: To prove the lower bound, fix $\beta>$ and $\varepsilon>0$ such that $\sqrt{s}\tanh(A)(\log|\tilde{\mathcal{C}}_{s}|)^{-1/2}=2(1-\varepsilon)c_{\beta}$ where $c_{\beta}=1$ if $\beta\leq 1$, and $(1-m^{2})^{-1}$ otherwise. Suppose there exists a test with rejection region $\Omega_{n}$ which can test $H_{0}$ vs $H_{1}$. Hence, $\mathbb{P}_{\beta,\mathbf{Q},\mathbf{0}}(\Omega_{n})\rightarrow 0$. Using Lemma 6, we have $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(\Omega_{n})\rightarrow 0$. However, since every test not asymptotically powerful for Curie-Weiss model in this regime of signal, implying that, $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(\Omega^{c}_{n})\geq\eta$, for some $\eta>0$ for all large enough $n$. This implies, using Lemma 6, there exists $\nu>0$ such that $\mathbb{P}_{\beta,\mathbf{Q},\mu_{S}(A)}(\Omega_{n})\geq\nu$. Hence, the type-I and type-II errors cannot converge to $0$ simultaneously completing the proof of the lower bound. #### 6.3.2. Proof of Theorem 4 To obtain an adaptive test, we apply the same procedure as described by the proof of Theorem 2. The proof of type-I error convergence when $\boldsymbol{\mu}=0$ follows from concentration of $\bar{\mathbf{X}}$ which is immediate by [17, Theorem 1.1, Theorem 1.2]. For general $\boldsymbol{\mu}$, we only require the fact $\mathrm{Cov}_{\beta,\mathbf{Q},\boldsymbol{\mu}}(X_{i},X_{j})\lesssim 1/n$ which follows from [18, Lemma 9]. Finally we also can obtain consistent estimator of $\beta$ thanks to [11, Corollary 3.1, Corollary 3.2] and Lemma 19. This completes our proof exactly as Theorem 2. ### 6.4. Technical Lemmas for Proofs of Theorems in Section 2.2 ###### Lemma 6. Fix $\beta>0$. 
Let $\mathbf{Q}^{\rm CW}$ be the scaled adjacency matrix of the complete graph and $\mathbf{Q}$ is either scaled adjecency matrix of of the graph $G(n,p)$ (with $p=\Theta(1)$) or random regular graph on $n$ vertices each having degree $d=\Theta(n)$. Fix any $\delta>0$. For any event $\mathcal{E}_{n}$: the following holds with probability $\geq 1-\delta$: If $\exists\eta>0$ such that $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mu_{S}(A)}(\mathcal{E}_{n})\geq\eta$, then $\exists\nu>0$ such that $\mathbb{P}_{\beta,\mathbf{Q},\mu_{S}(A)}(\mathcal{E}_{n})>\nu$. ###### Proof. Defining $\mathcal{A}_{n}:=G_{n}-11^{T}/n+I/n$, note that $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(\mathcal{E}_{n})=\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\frac{\beta}{2}\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{\sigma\in\mathcal{E}_{n}}.$ For the auxiliary variable $W_{n}$, define a vector $\tilde{T}_{n}=\tilde{T}_{n}(W_{n})$ such that $\tilde{T}_{n,i}=\tanh(\beta W_{n}+A\mathbbm{1}_{i\in S})$. Observe that, $\sigma^{\top}\mathcal{A}_{n}\sigma=(\sigma^{\top}-\tilde{T}_{n})\mathcal{A}_{n}(\sigma-\tilde{T}_{n})+2\tilde{T}^{\top}_{n}\mathcal{A}_{n}(\sigma-\tilde{T}_{n})+\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}=:Y_{n}+\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}.$ Define the good set $J_{n,\varepsilon}:=\\{m-\varepsilon\leq|W_{n}|\leq m+\varepsilon\\}$. Then, $\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(\mathcal{E}_{n})\geq\frac{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}\left(\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J_{n,\varepsilon}}\right)$ (47) The ratio of partition function is $\geq 1/C_{u}$ for some $C_{u}>0$, w.p. $\geq 1-\delta$ by Lemma 8. To analyse the expectation above, define $A_{M}=\\{\sigma:|Y_{n}|\leq M\\}$ for some $M>0$ and observe that, $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\frac{\beta}{2}\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J_{n,\varepsilon}}\mathbbm{1}_{\sigma\in\mathcal{E}_{n}}$ $\displaystyle\geq\mathbb{E}\left(\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}\Big{(}e^{\frac{\beta}{2}\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J_{n,\varepsilon}}\mathbbm{1}_{\sigma\in\mathcal{E}_{n}}\mathbbm{1}_{A_{M}}|W_{n}\Big{)}\right)$ $\displaystyle\geq e^{-\frac{\beta}{2}M}\mathbb{E}\left(e^{\frac{\beta}{2}\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(J_{n,\varepsilon}\cap\mathcal{E}_{n}\cap A_{M}|W_{n})\right)$ The proof of Lemma 8 yields $\exists\theta>0$ and $\delta>0$ small enough such that $\inf_{W_{n}}\exp(\frac{\beta}{2}\tilde{T}_{n}(W_{n})^{T}\mathcal{A}_{n}\tilde{T}_{n}(W_{n}))\geq e^{-\theta}$ with probability $\geq 1-\delta$. Therefore, $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\frac{\beta}{2}\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J_{n,\varepsilon}}\mathbbm{1}_{\sigma\in\mathcal{E}_{n}}\geq e^{-\frac{\beta}{2}M-\theta}\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(J_{n,\varepsilon}\cap\mathcal{E}_{n}\cap A_{M}).$ By Hanson-Wright inequality(cf. Lemma 21), pick $M$ large enough such that $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(A_{M}|W_{n})\geq 1-\eta/4$. 
Hence, $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(A_{M})\geq 1-\eta/4$ By picking $\varepsilon>0$ small enough, we can also get $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(J_{n,\varepsilon})\geq 1-\eta/4$. Hence, $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\frac{\beta}{2}\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J_{n,\varepsilon}}\geq\frac{\eta}{2}e^{-\frac{\beta}{2}M-\theta}=:\nu>0.$ This concludes the proof of this lemma. ∎ ###### Lemma 7. There exists a constant $C_{\varepsilon}>0$ such that $\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(J^{c}_{n,\varepsilon})\leq e^{-Cn},$ where $\mathbf{Q}^{\rm CW}$ corresponds to the coupling matrix of a Curie- Weiss model on $n$ vertices. ###### Proof. $\displaystyle\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(J^{c}_{n,\varepsilon})$ $\displaystyle=\frac{Z(\beta,\mathbf{Q}^{\rm CW},\mathbf{0})}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}\mathbb{E}_{\beta,\mathbf{Q},\mathbf{0}}\left(e^{A\sum_{i\in S}{X_{i}}}\mathbbm{1}_{J^{c}_{n,\varepsilon}}\right)$ $\displaystyle\leq e^{As}\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(J^{c}_{n,\varepsilon})\leq e^{As-Cn},$ for some $C>0$ depending on $\varepsilon>0$ (where the proof of $\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\mathbf{0}}(J^{c}_{n,\varepsilon})\leq e^{-Cn}$ follows from [24]). The proof follows from the assumption that $As\ll n$. ∎ ###### Lemma 8. Let $\mathbf{Q}^{\rm CW}_{i,j}$ denote the coupling matrix of Curie-Weiss model and $\mathbf{Q}$ be the scaled adjacency matrix of either the graph $G(n,p)$ (with $p=\Theta(1)$) or random regular graph on $n$ vertices each having degree $d=\Theta(n)$. Then for any $\delta>0$ the following holds with probability $\geq 1-\delta$: for any $\beta>0,\boldsymbol{\mu}_{S}(A)$: $\displaystyle C_{l}\leq\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}\leq C_{u}\quad,$ for constants $C_{l},C_{u}$ depending on $\beta,\delta$. ###### Proof. Define $\mathcal{A}_{n}:=G-\mathbf{1}\mathbf{1}^{T}+I/n$. For the auxiliary variable $W_{n}$, define a vector $\tilde{T}_{n}=\tilde{T}_{n}(W_{n})$ such that $\tilde{T}_{n,i}=\tanh(\beta W_{n}+A\mathbbm{1}_{i\in S})$. Observe that, $\mathbf{X}^{\top}\mathcal{A}_{n}\mathbf{X}=(\mathbf{X}-\tilde{T}_{n})^{\top}\mathcal{A}_{n}(\mathbf{X}-\tilde{T}_{n})+2\tilde{T}^{\top}_{n}\mathcal{A}_{n}(\mathbf{X}-\tilde{W}_{n})+\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}=:Y_{n}+\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}.$ Define the good set $J_{n,\varepsilon}:=\\{m-\varepsilon\leq|W_{n}|\leq m+\varepsilon\\}$, where $m$ is the unique positive root of $m=\tanh(\beta m)$. 
For the upper bound, note that, $\displaystyle\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}$ $\displaystyle=\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}\Big{(}e^{\frac{\beta}{2}\mathbf{X}^{T}\mathcal{A}_{n}\mathbf{X}}\Big{)}$ $\displaystyle=\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\mathbf{X}^{\top}\mathcal{A}_{n}\mathbf{X}}\mathbbm{1}_{J^{c}_{n,\varepsilon}}+\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\mathbf{X}^{\top}\mathcal{A}_{n}\mathbf{X}}\mathbbm{1}_{J_{n,\varepsilon}},$ To bound the first summand, note that for any $\delta>0$ one has that there exists a sequence $\epsilon_{n}(\delta)\rightarrow 0$ such that with probability $1-\epsilon_{n}(\delta)$ one has that $\|\mathcal{A}_{n}\|_{\rm op}\leq n^{-1/2+\delta}$ for either $G(n,p)$ [48, Theorem 1.1] or for random regular graph [47, Theorem A]. As a result, for any $\delta>0$ one has that with probability with probability $1-\epsilon_{n}(\delta)$, $\sup\limits_{\sigma\in\\{\pm\\}^{n}}e^{\mathbf{X}^{T}\mathcal{A}_{n}\mathbf{X}}\leq e^{n^{1/2+\delta}}$. Subsequently, with probability larger than $1-\epsilon_{n}(\delta)$ the following hold $\displaystyle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\sigma^{\top}\mathcal{A}_{n}\sigma}\mathbbm{1}_{J^{c}_{n,\varepsilon}}\leq e^{n^{1/2+\delta}}\mathbb{P}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(J^{c}_{n,\varepsilon}).$ This inequality and Lemma 7 yields that the first summand is $o(1)$. To analyze $\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}e^{\mathbf{X}^{\top}\mathcal{A}_{n}\mathbf{X}}\mathbbm{1}_{J_{n,\varepsilon}}$ we plan to invoke Hanson-Wright inequality (cf. Lemma 21). To this end, we choose $\varepsilon>0$ small enough such that $\lim\sup\max_{i}s_{\tilde{T}_{n,i}}\lambda_{1}(\beta\mathcal{A}_{n})<1,$ where $s$ in Lemma 21. The choice of $\varepsilon$ is possible since $A=o(1)$. Now, we apply 21 on $\mathbb{E}(e^{\frac{\beta}{2}Y_{n}}|W_{n})$ and the error term in Lemma 21 is $O(1)$ as shown in [17, Section 1.2]. Hence, the proof concludes once we observe that $\log\mathbb{E}^{\rm CW}e^{\tilde{T}^{\top}_{n}\mathcal{A}\tilde{T}_{n}}=O(1)$ uniformly over $S$. To see this, note that it is enough to show that for any $\delta>0$ there exists a $C_{\delta}>0$ such that $\sup_{W_{n}}\exp(\tilde{T}_{n}(W_{n})^{T}\mathcal{A}_{n}\tilde{T}_{n}(W_{n}))\leq C_{\delta}$ with probability $\geq 1-\delta$ under either Erdős-Rényi randomness or random regular graph. To that end, note that with $S(\mu)$ denoting the support of $\boldsymbol{\mu}$ one has $\displaystyle\tilde{T}_{n}(W_{n})^{T}\mathcal{A}_{n}\tilde{T}_{n}(W_{n})$ $\displaystyle=T_{1}+2T_{2}+T_{3}$ where $\displaystyle T_{1}$ $\displaystyle=\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})^{c}\times S(\boldsymbol{\mu})^{c}}\tanh^{2}(\beta W_{n})\left(G_{i,j}-\frac{\tilde{d}}{n}\right);$ $\displaystyle T_{2}$ $\displaystyle=\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})\times S(\boldsymbol{\mu})^{c}}\tanh(\beta W_{n}+\mu_{i})\tanh(\beta W_{n})\left(G_{i,j}-\frac{\tilde{d}}{n}\right);$ $\displaystyle T_{3}$ $\displaystyle=\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})\times S(\boldsymbol{\mu})}\tanh(\beta W_{n}+\mu_{i})\tanh(\beta W_{n}+\mu_{j})\left(G_{i,j}-\frac{\tilde{d}}{n}\right).$ Above $\tilde{d}=np$ when $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,p)$ and $\tilde{d}=d$ when $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,d)$. 
In particular, in that case $\displaystyle T_{1}$ $\displaystyle=\tanh^{2}(\beta W_{n})\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})^{c}\times S(\boldsymbol{\mu})^{c}}\left(G_{i,j}-\frac{\tilde{d}}{n}\right)$ $\displaystyle=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)O_{\mathbb{P}_{G_{n}}}\left(\frac{(n-s)}{\tilde{d}}\right)\quad\text{by Lemma \ref{lem:random_regular_covariance}}.$ Similarly, $\displaystyle T_{2}$ $\displaystyle=\tanh(\beta W_{n})\tanh(\beta W_{n}+A)\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})\times s(\boldsymbol{\mu})^{c}}\left(G_{i,j}-\frac{\tilde{d}}{n}\right)$ $\displaystyle=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)O_{\mathbb{P}_{G_{n}}}\left(\frac{\sqrt{(n-s)s}}{\tilde{d}}\right)\quad\text{by Lemma \ref{lem:random_regular_covariance}},$ and $\displaystyle T_{3}$ $\displaystyle=\tanh^{2}(\beta W_{n}+A)\frac{1}{\tilde{d}}\sum_{(i,j)\in S(\boldsymbol{\mu})\times s(\boldsymbol{\mu})}\left(G_{i,j}-\frac{\tilde{d}}{n}\right)$ $\displaystyle=O_{\mathbb{P}_{\beta,\mathbf{Q},\boldsymbol{\mu}}}(1)O_{\mathbb{P}_{G_{n}}}\left(\frac{\sqrt{(n-s)s}}{\tilde{d}}\right)\quad\text{by Lemma \ref{lem:random_regular_covariance}},$ These estimates immediately verifies claim that $T_{j}$’s $O_{\mathbb{P}}(1)$ whenever $\tilde{d}=\Theta(n)$ – which is guaranteed by either $p=\Theta(1)$ when $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,p)$ or $d=\Theta(n)$ when $\mathbb{G}_{n}\sim\mathcal{G}_{n}(n,d)$. For the lower bound on ratio of partition functions we begin by noting that by convexity of $\mathbf{Q}\rightarrow\log{Z(\beta,\mathbf{Q},\boldsymbol{\mu})}$ for every $\boldsymbol{\mu}$,444for any differentiable convex function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}$ one has $f(y)-f(x)\geq\langle\nabla f(x),y-x\rangle$ $\displaystyle\exp\left(\beta\langle\mathbb{E}_{\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A)}(\hat{\Sigma}),-\Delta_{\mathbf{Q}}\rangle_{\rm TR}\right)\leq\frac{Z(\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A))}{Z(\beta,\mathbf{Q}^{\rm CW},\boldsymbol{\mu}_{S}(A))}\leq\exp\left(-\beta\langle\mathbb{E}_{\beta,\mathbf{Q},\boldsymbol{\mu}_{S}(A)}(\hat{\Sigma}),\Delta_{\mathbf{Q}}\rangle_{\rm TR}\right)$ where $\hat{\Sigma}_{i,j}=X_{i}X_{j}$,
# Fingerprinting Robot Movements via Acoustic Side Channel
Ryan Shah, University of Strathclyde, United Kingdom, <EMAIL_ADDRESS>; Mujeeb Ahmed, University of Strathclyde, United Kingdom, <EMAIL_ADDRESS>; and Shishir Nagaraja, Newcastle University, United Kingdom, <EMAIL_ADDRESS> ###### Abstract. In this paper, we present an acoustic side channel attack which uses smartphone microphones to record a robot in operation and exploits acoustic properties of the sound to fingerprint the robot’s movements. In this work we consider the possibility of an insider adversary who is within physical proximity of a robotic system (such as a technician or robot operator), equipped with only their smartphone microphone. Through the acoustic side-channel, we demonstrate that it is indeed possible to fingerprint not only individual robot movements within 3D space, but also patterns of movements which could lead to inferring the purpose of the movements (i.e. surgical procedures which a surgical robot is undertaking) and hence result in potential privacy violations. Upon evaluation, we find that individual robot movements can be fingerprinted with around 75% accuracy, decreasing slightly with more fine-grained movement meta-data such as distance and speed. Furthermore, workflows could be reconstructed with around 62% accuracy as a whole, with more complex movements such as pick-and-place or packing reconstructed with near perfect accuracy. As well as this, in some environments such as surgical settings, audio may be recorded and transmitted over VoIP, such as for education/teaching purposes or in remote telemedicine. The question here is, can the same attack be successful even when VoIP communication is employed, and how does packet loss impact the captured audio and the success of the attack? Using the same acoustic characteristics as for the plain audio captured by the smartphone, the attack was 90% accurate in fingerprinting VoIP samples on average across baseline movements, which is around 15% higher than the baseline without the VoIP codec employed. This is an interesting result as it opens up new research questions regarding anonymous communications to protect robotic systems from acoustic side channel attacks via VoIP communication networks. Keywords: robot, security, privacy, acoustic, side channel, attack, passive, deep learning, neural network, voip. CCS Concepts: Security and privacy → Systems security; Security and privacy → Side-channel analysis and countermeasures. ## 1\. Introduction Teleoperated robotic systems have recently risen to prominence in a variety of application areas, such as industrial (Quarta et al., 2017) and surgical environments (Ahn et al., 2015; Tewari et al., 2002), with promises of higher levels of accuracy and precision. Given that many of these systems are becoming increasingly connected, they are vulnerable to an expanded threat landscape in the cyber domain. Attacks from this angle are primarily active attacks such as tampering with the integrity of messages in-flight or hijacking the robot controller directly (Bonaci et al., 2015). However, little attention has been paid to the capabilities of a passive attacker and the damage potential of stealthier attacks. Specifically, passive attacks such as side channel attacks, which exploit information leakages without the need to change the normal behaviour of the system, can result in huge losses that stem from the compromise of operational confidentiality.
Side channel attacks in the cyber domain have the potential to compromise the operational confidentiality of organisations that own such systems (Shah et al., 2022), yet those targeting robots in the physical domain are still to be explored. In this paper, we aim to investigate whether an adversary can exploit information leakages from the acoustic side channel, by capturing audible emanations from a robotic system during normal operations, to mount an attack that targets operational confidentiality. In this context, we look at two possible threats posed by an insider attacker. First, a malicious robot operator or technician on the ground could use a recording device, such as a smartphone, near the robot to record entire workflows or individual movements. By fingerprinting this leaked information, they could sell it on to competing organisations for a malicious advantage. While it can be argued that an attacker may not be able to get close enough to the robot to place the recording device, many robotic systems now employ sensors to support safety mechanisms that prevent harm to nearby humans or react to environmental changes. This can allow the attacker windows of opportunity to place the recording device near the robot or be near enough to capture meaningful acoustic emanations. A second possible threat comes from a telemonitoring perspective. While telemonitoring is less common in industrial settings, in surgical settings medical recording devices, such as medical data recorders or intraoperative video recorders, are used for post-surgical review or teaching (alongside patient consent) to learn from suboptimal scenarios and improve performance (Saun et al., 2019; Vodafone, 2019; Al-Jabir et al., 2020). While privacy laws and medicolegal requirements govern the use of such devices, data from them is not typically required as evidence in court so long as patient confidentiality is maintained (Dalen et al., 2019). However, acoustic emanations captured by such recordings could reveal the operations the robot is carrying out, and ultimately piece together surgical procedures. In combination with other metadata, such as patient admission and exit times, this could compromise patient confidentiality. In this attack, we recorded the acoustic emanations, through a smartphone recorder, for individual robot movements, as well as recording entire workflows corresponding to typical warehousing operations such as picking and placing objects from one place to another. Using the collected data, we extract a set of acoustic characteristics which are used as input to an artificial neural network (ANN). We found that baseline movements (of minimum speed and distance) can be fingerprinted with at least ~75% accuracy. The speed and distance of movements are not as successfully fingerprinted in this attack, compared to the radio frequency side channel. Entire warehousing workflows can be reconstructed with ~64% accuracy. Ultimately, it is clear that a passive insider adversary has the potential not only to reveal what a robot is doing, but also to create liabilities that extend to the organisations that employ such systems. As well as this, in certain robotics environments, such as in surgical settings, procedures may be streamed and/or recorded for viewing, education or research (Muensterer et al., 2014; Kulkarni et al., 2020; Hosseini et al., 2013).
Therefore, it is important to question how VoIP impacts the audio samples for movements and workflows and, ultimately, the success of the attack. Using the Opus codec – a common choice for most modern VoIP applications – the attack was 90% accurate for computing movement fingerprints for baseline speed and distance, which is nearly 15% more accurate than the baseline without the Opus codec employed, presenting new research questions regarding side channel attacks via VoIP communication networks which target robotic systems. The remainder of this paper is as follows. In Section 2 we provide background on teleoperated robots and acoustic emanations, to which we then describe the threat model. We then outline the attack and our findings in Section 3, and provide an in-depth discussion in Section 5. In Section 6 we discuss related work and conclude in Section 7. ## 2\. Background ### 2.1. Teleoperated Robots The use of robotics has seen an increase in installations in a variety of application areas (Reuters, 2019) and play pivotal roles bringing benefits to quality of service, efficiency and precision, among others. Among them, teleoperated robots are most prominent in many industrial (Aschenbrenner et al., 2015; Avila et al., 2020; Grabowski et al., 2021; Li et al., 2017; Bartoš et al., 2021) and surgical (Sung and Gill, 2001; Tewari et al., 2002; Hannaford et al., 2012) environments, and share a common system architecture. This type of system makes use of a human operator (i.e. a specially-trained surgeon) who operates the controller (i.e. surgeon’s console or teach pendant) which translates human movements or inputs into those which the robot can interpret. These input (console and other sources of information) and output (actuators) devices are linked together via an electronic control system (ECS) and typically connected to the organisation’s network in which the robot operates. An overview of a typical teleoperated robot architecture can be seen in Figure 1. Figure 1. Teleoperated Robot Architecture ### 2.2. Acoustic Characteristics While the robot moves, its electromechanical components will emit (audible) sound, which when captured could be used to mount an information leakage attack. The first step to determining an appropriate attack strategy is to understand the different characteristics of acoustic emanations, and what may be most useful from an attack perspective. #### 2.2.1. Root Mean Square Energy Root Mean Square (RMS) energy (Bachu et al., 2008) is a measure of the amplitude based on all samples in a frame of audio, and can be thought of as an indicator of loudness of the audio signal (Panagiotakis and Tziritas, 2005). This may be useful in the context of this attack given that combinations of movements (i.e. simultaneous movements along 2 or more axes) may emanate a louder sound given the use of more stepper motors, for example. As well as this, as the robot axes pass the microphone, the sound may be louder and thus this feature may help provide further information to the discrimination between movements in different positions. #### 2.2.2. Zero-Crossing Rate Zero-Crossing Rate (ZCR) is a measure of the number of times a signal crosses the horizontal time axis and can help identify pitch variations in monophonic tones (sound emitted from one location) (Panagiotakis and Tziritas, 2005). Given the robot is stationary in this case, the ZCR may be a useful feature candidate. #### 2.2.3. 
Spectral Centroid The spectral centroid provides information about which frequency bands contain most of the energy, wherein lower centroid (energy) values are associated with duller sounds and higher centroid values with brighter sounds (Le et al., 2011). In a robotic system, smaller movement distances and speeds will naturally require less energy and appear more dull sounding to the human ear, whereas faster and longer movements have better tonality, and this may ultimately prove useful for distinguishing between different movements of the same source. #### 2.2.4. Spectral Bandwidth Spectral bandwidth is defined as the full width of the band (wavelength interval) at half the peak maximum (Klapuri and Davy, 2007; Atahan et al., 2021). Acoustic signals oscillate about a point, and the bandwidth for each time interval in a signal is the sum of the maximum deviations on both sides of this point. This point (the centroid of the signal) may vary for different robot movements and thus may be an important feature for fingerprinting. #### 2.2.5. Spectral Rolloff Spectral rolloff is the fraction of frequency bins under a cutoff point where the total energy of the spectrum is contained, and can help distinguish between noisy sounds and more harmonic sounds (below the roll-off point) (Kos et al., 2013). This feature may prove useful to this attack as it can roll off frequencies that fall outside of the useful range where the acoustic energy of movements is contained. #### 2.2.6. Spectral Contrast Spectral contrast is a measure of the energy of frequencies in windows of time (Jiang et al., 2002) and can help identify strong spectral peaks to reflect the distribution between harmonic and non-harmonic components of the acoustic emanations. As a robot moves, the frequency contents may have energy that changes with time, and capturing the spectral contrast can help measure this energy variation. #### 2.2.7. Chroma Feature The chroma feature, sometimes referred to as a chromagram, profiles a sound into 12 pitch class profiles (Müller, 2015). In music analysis, the attempt is to capture the harmonic and melodic characteristics of a song, where pitches can be categorised into one of the notes of the equally-tempered set $\\{C,C\\#,D,D\\#,E,F,F\\#,G,G\\#,A,A\\#,B\\}$ (Cho and Bello, 2013; Paulus et al., 2010). While recorded robot movements are not akin to songs that are analysed in this fashion, the pitch of sound may correlate with the speed and distance of movement and may prove useful as a mid-level feature for fingerprinting movements. #### 2.2.8. Mel-Cepstrum Frequency Coefficients The Mel scale is a scale of pitches judged by listeners to be equally spaced from one another. For example, for audible sound heard by a human, differences in frequency content can be observed if the sources of acoustic emanations are at the same distance and in the same atmosphere (Greenwood, 1997; Martinez et al., 2012). The short-term power spectrum of acoustic emanations can be represented by the Mel frequency cepstrum (MFC), and a combination of coefficients (MFCCs) makes up the MFC. The MFC equally distributes frequency bands to approximate the human auditory response. If variations in robot movements can be inferred from audible sound, then looking at the mel frequency coefficients (the list of amplitudes of the spectrum in the mel scale) will provide useful information to the attack.
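To make the feature set above concrete, the following is a minimal sketch of how a per-recording feature vector could be computed with librosa, the library used for pre-processing in Section 3.3; the Hann-window/8192-point STFT settings and the 14 MFCCs mirror that section, while the file name and helper function are illustrative assumptions rather than the exact extraction code used in this work.

```python
# Illustrative sketch (not the authors' code): computing a 21-value feature vector
# with librosa, using the STFT settings from Section 3.3 (Hann window is librosa's
# default, FFT length 8192) and 14 MFCCs. The file path is a placeholder.
import numpy as np
import librosa

def movement_features(path, n_fft=8192):
    y, sr = librosa.load(path, sr=None)  # keep the recording's native sample rate
    framewise = [
        librosa.feature.rms(y=y),                                    # loudness proxy
        librosa.feature.zero_crossing_rate(y),                       # pitch-variation proxy
        librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=n_fft),  # "brightness"
        librosa.feature.spectral_bandwidth(y=y, sr=sr, n_fft=n_fft),
        librosa.feature.spectral_rolloff(y=y, sr=sr, n_fft=n_fft),
        librosa.feature.spectral_contrast(y=y, sr=sr, n_fft=n_fft),
        librosa.feature.chroma_stft(y=y, sr=sr, n_fft=n_fft),
    ]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=14, n_fft=n_fft)
    # Mean over frames (and bands) for the first seven features, and the mean per
    # coefficient for the 14 MFCCs: 7 + 14 = 21 inputs, matching the network input.
    return np.hstack([np.array([f.mean() for f in framewise]), mfcc.mean(axis=1)])

x = movement_features("movement_x_30cm.wav")  # hypothetical file; shape (21,)
```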
### 2.3. Threat Model Many previous attacks focus on an active attacker, which can involve tampering with messages (Bonaci et al., 2015) or replaying attacks between the robot and controller (McClean et al., 2013). In this work, the primary attacker is a passive insider, such as a malicious technician or operator. Being an insider close to the robot would allow them to record the acoustic emanations during the robot’s normal operations using a smartphone, which they may have on them and use covertly (Peng and Choi, 2013). As well as this, it is also possible that an insider attacker is able to covertly plant a microphone which could transmit recorded audio to the attacker remotely or be retrieved at a later time. In either case, if an attacker is able to mount an information leakage attack to fingerprint robot movement patterns from acoustic emanations, this could lead to the revelation of daily workflows (i.e. in a warehouse) and ultimately compromise the operational confidentiality of the organisation. For example, this information could be given to competitors to gain an advantage or otherwise be used maliciously. A second possible threat comes from a telemonitoring perspective. While telemonitoring is less common in industrial settings, in surgical settings medical recording devices, such as medical data recorders or intraoperative video recorders, are used for post-surgical review or teaching (alongside patient consent) to learn from suboptimal scenarios and improve performance (Saun et al., 2019; Vodafone, 2019; Al-Jabir et al., 2020). While privacy laws and medicolegal requirements govern the use of such devices, data from them is not typically required as evidence in court so long as patient confidentiality is maintained (Dalen et al., 2019). However, acoustic emanations captured by such recordings could reveal the operations the robot is carrying out, and ultimately piece together surgical procedures. In combination with other metadata, such as patient admission and exit times, this could compromise patient confidentiality. Ultimately, reviewing the nature of acoustic emanations in robotic systems, as well as the proposed threat model, the aim is to investigate whether an adversary will be able to record the acoustic emanations from a robot during its normal operation, and make use of distinct features present across the recorded audio to fingerprint robot movements and workflows. Several hypothetical factors will come into play which could influence the potential success of this attack. First, the type of operations being carried out by the robot can vary in terms of speed and distance of movement, and so the attack should be robust enough to fingerprint between these parameters. Second, the distance at which the microphone is situated away from the robot will also have an impact on the success of the attack, since sound intensity naturally attenuates with distance, and should be investigated. Finally, given that in some cases VoIP technology will be employed, such as for recording purposes or to livestream medical procedures with surgical robots, the impact of VoIP on the attack should be evaluated. ### 2.4. Hypotheses and Goals In this work we aim to investigate whether an adversary will be able to effectively record the acoustic emanations from a robot during its normal operation, and make use of distinct features present across the recorded audio to fingerprint robot movements and workflows.
We hypothesise that several factors will come into play which could influence the potential success of this attack. First, the type of operations being carried out by the robot can vary in terms of speed and distance of movement, and so the attack should be robust enough to fingerprint between these parameters. Second, we also hypothesise that the distance at which the microphone is situated away from the robot will have an impact on the success of the attack, since sound intensity naturally attenuates with distance, and should be investigated. Ultimately, the following research questions are proposed: 1. ($R_{1}$) Can an attacker fingerprint individual robot movements on each axis, as well as permutations of them? 2. ($R_{2}$) How is movement fingerprinting affected by: 1. (i) The speed and distance of movements? 2. (ii) The distance the recording device (i.e. smartphone) is away from the robot? 3. ($R_{3}$) Can entire robot workflows be reconstructed from acoustic emanations? 4. ($R_{4}$) How do VoIP codecs influence the success of the attack? ## 3\. Attack Methodology In this paper, we investigate an acoustic side-channel attack which exploits audio emanations from a robot during its operation. Specifically, the aim of this attack is to fingerprint a robot’s movements from acoustic characteristics alone, recorded by smartphone devices in a passive manner. For subsequent discussion, we aim to answer the following questions: 1. ($R_{1}$) Can an attacker fingerprint individual robot movements on each axis, as well as permutations of them? 2. ($R_{2}$) How is movement fingerprinting affected by: 1. (i) The speed and distance of movements? 2. (ii) The distance the smartphone or microphone is away from the robot? 3. ($R_{3}$) Can an attacker recover information about the objects a robot is handling, such as its weight? Figure 2. Robot Environment for Acoustic Side Channel ### 3.1. Robot Environment The context of this study surrounds modern teleoperated surgical robots, whose typical architecture can be viewed (at a high level) as a pairing between the robotic system itself and its controller (surgeon’s console). For this work, we use uFactory’s uARM Swift Pro which runs on an Arduino Mega 2560 with MicroPython installed. The controller is emulated on a Windows 10 laptop which uses the uARM Python (3.8.X) SDK to enable controller instructions to be written in Python, which are then translated into Gcode that is understood by the robot. An overview of the robot environment used in this study is depicted in Figure 2. For capturing the acoustic emanations which arise when the robot operates, we position the robot in the center of a table with the smartphone/microphone placed at several distances (30cm to 1m) away from the robot, as shown in Figure 2. ### 3.2. Experiment Parameters With the robot setup for evaluating our acoustic side-channel attack for fingerprinting the robot’s movements, we now outline the parameters of our study. Specifically, we will discuss the speed and distance of the movement operation being carried out, the type of smartphone/microphone, the distance the smartphone microphone is away from the robot, and finally the use of VoIP.
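Before detailing each parameter in turn, the following minimal sketch illustrates how single-axis movements of a chosen distance and speed could be scripted through the uARM Python SDK mentioned in Section 3.1. It is a hedged illustration only: the SwiftAPI method names, the port handling and the speed-unit conversion are assumptions based on the publicly documented uArm-Python-SDK, not the exact scripts used in this work.

```python
# Hedged sketch: repeating a single-axis movement of a given distance and speed on the
# uARM Swift Pro. Method names follow the public uArm-Python-SDK (SwiftAPI); the SDK is
# assumed to take speed in mm/min, so the mm/s values used in this study are converted.
from uarm.wrap import SwiftAPI

swift = SwiftAPI()                    # assumed to auto-detect the serial port
swift.waiting_ready()
home = {"x": 150, "y": 0, "z": 100}   # illustrative home position in mm

def move_axis(axis, distance_mm, speed_mm_s, repeats=10):
    for _ in range(repeats):
        target = dict(home)
        target[axis] += distance_mm
        swift.set_position(**target, speed=speed_mm_s * 60, wait=True)  # move out ...
        swift.set_position(**home, speed=speed_mm_s * 60, wait=True)    # ... and back

move_axis("x", distance_mm=1, speed_mm_s=12.5)  # baseline: 1 mm at 12.5 mm/s
swift.disconnect()
```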
Speed and Distance. In addition to capturing the acoustic emanations which arise during operation along the X, Y and Z axes, and combined movement operations, it is important to evaluate more fine-grained movements. To this end, we programmed robot movements with varying distances (in millimetres) as well as varying speeds of movement (mm/s). This is because in realistic cases, a surgical robot for example would not move in each direction with constant distance and speed. Therefore, it is vital to understand whether an adversary can also fingerprint meta-information as well as just the movements themselves. Microphone Distance. In terms of recording the acoustic emanations during robot operation, it is important to evaluate the impact of the distance the microphone is away from the robot. In a real situation, it is highly unlikely that an adversary would be very close to or in front of the robot, especially in cases like surgical robots where not only could it be dangerous to stand too close, but being close enough may trigger safety features implemented to prevent injury. For this study, given the size of our uARM robot ($150mm\times 140mm\times 281mm$) and the volume of sound which is given off during its operation, we cannot investigate large distances as would be possible with a large surgical robot, for example. Given this limitation, we recorded sounds at distances ranging from 30cm to 1m. VoIP. The final parameter for this study is to evaluate the impact VoIP has on the success of the attack. For this study, we use the Opus codec (Valin et al., 2012, 2016), which is employed by the majority of VoIP applications. The first step is to observe how the codec performs, but also how packet loss affects audio quality and the success of the attack. Figure 3. Depiction of Common Warehousing Workflows. Our dataset contains common warehousing workflows such as pushing, pulling, packing and moving objects. ### 3.3. Movement Dataset After determining the appropriate acoustic features to extract from the captured sounds, the next step was to create the dataset. In this dataset, there are 2 subsets. Within both subsets, there are samples pertaining to both individual movements and permutations of movements with varying speeds and distances of movement, the microphone distance, and robotic warehousing workflows (Figure 3). These workflows include those such as pick-and-place, push and pull operations, which were replicated from those found in existing industrial robot datasets such as the Forward Dynamics Dataset Using KUKA LWR and Baxter (Polydoros and Nalpantidis, 2016) for pick and place and the Inverse Dynamics Dataset Using KUKA (Rueckert et al., 2017) for push/pull. For these workflows, movements were slightly perturbed to account for a small degree of entropy that may be present in real-world operations (i.e. those that may arise due to drift in equipment calibration or wear-and-tear). In contrast to the first subset, the second subset contains the same samples but passed through the Opus codec to evaluate the impact of VoIP on recorded audio in this attack. Specifically, while all samples are passed through the Opus codec, they are further split by packet loss. Packet loss has been shown to negatively impact call quality in VoIP communications (Ortega et al., 2018; Laghari et al., 2020), as it manifests as dropped calls or parts of speech, a slow rate of speech (latency), or noise/interference. Because of this, these further subsets are divided by packet loss values of 1%, 5%, 10%, 25% and 50%.
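The following sketch shows one way Opus-coded and lossy variants of a recording could be produced; it is illustrative only. It assumes ffmpeg with libopus is available on the system, and it emulates packet loss crudely by silencing random 20 ms frames, whereas a real VoIP path (and the exact pipeline used for this dataset) may instead drop encoded packets and rely on the decoder's loss concealment.

```python
# Illustrative sketch (assumptions: ffmpeg + libopus installed; frame dropping is a
# crude stand-in for real packet loss and does not model Opus loss concealment).
import subprocess
import numpy as np
import soundfile as sf

def opus_roundtrip(wav_in, wav_out, bitrate="32k"):
    # Encode to Opus and decode back to WAV via ffmpeg.
    subprocess.run(["ffmpeg", "-y", "-i", wav_in, "-c:a", "libopus",
                    "-b:a", bitrate, "tmp.opus"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "tmp.opus", wav_out], check=True)

def drop_frames(wav_in, wav_out, loss=0.05, frame_ms=20, seed=0):
    # Zero out random 20 ms frames to emulate a given packet loss rate.
    y, sr = sf.read(wav_in)
    frame = int(sr * frame_ms / 1000)
    rng = np.random.default_rng(seed)
    for start in range(0, len(y), frame):
        if rng.random() < loss:
            y[start:start + frame] = 0.0
    sf.write(wav_out, y, sr)

opus_roundtrip("movement_x.wav", "movement_x_opus.wav")            # hypothetical files
drop_frames("movement_x_opus.wav", "movement_x_opus_loss05.wav", loss=0.05)
```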
As a whole, the first subset contains 27.2K samples for individual movements and 658 samples for warehousing workflows, with each using 20% of the total samples for validation and another 20% for testing. The second contains the same number of samples for each of the packet loss values evaluated. Dataset Pre-Processing. The features in the dataset, as listed above, are computed using the librosa (McFee et al., 2015) Python library. For each feature, the mean value across each signal sample is taken, computed from a Short-Time Fourier Transform (STFT) with a Hann window and an FFT length of 8192. For the MFCCs, 14 coefficients were used. Typically, 8–13 are used with the zeroth excluded, given it only represents the average log-energy of the input signal (Rao and Vuppala, 2014). However, given this is a new problem to be explored, the zeroth coefficient is also kept so that its importance for fingerprinting can be examined later. ### 3.4. Neural Network Before an evaluation can take place, an important step is constructing an appropriate neural network architecture for fingerprinting movements and ensuring a successful attack. To create the neural network, a sequential model was used, in which layers are stacked in a linear fashion. This was created using the Keras API (Chollet et al., 2015). The parameters and structure of the layers in the neural network were evaluated on the dataset using a cross-validated grid search to find the optimal number of neurons and layers, the activation function, and dropout rates where necessary. The maximum number of neurons to be tested was calculated using the formula proposed by Demuth et al. (Demuth et al., 2014) with an alpha branching factor of $2$. Using the grid search with 3-fold cross-validation, the optimal neural network architecture for this feature set consists of 5 layers. First, the input layer contains 21 neurons, one for each of the input features. Next, there are 4 hidden layers. The first is a Dense layer with 290 neurons and uses the ReLU activation function (Eckle and Schmidt-Hieber, 2019). The next hidden layer is a Dropout layer which randomly sets input units to 0 at a rate of $0.05$ at each step during training to prevent overfitting. The next layer is another Dense layer of 350 neurons with ReLU activation, followed by another Dropout with a rate of $0.05$ to prevent overfitting. Finally, the last layer is a Dense output layer of 7 neurons, one for each of the movement classes, and uses the SoftMax activation function (Dunne and Campbell, 1997) to keep the output in the range $[0,1]$ for use as predicted probabilities. Sparse categorical cross-entropy is used as the loss since the labels are integers and not one-hot encoded, for which categorical cross-entropy would be used (Zhang and Sabuncu, 2018). The optimiser used is Adam with a learning rate of $0.001$. This learning rate was chosen as higher learning rates resulted in lowered accuracy scores. The model was fitted with a batch size of 32 and was run for 1000 epochs.
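The architecture and training settings described above can be summarised in the following Keras sketch; it is a reconstruction from the description in this section rather than released code, and the data loading and train/validation/test split are omitted (the placeholder names in the comments are illustrative).

```python
# Sketch of the network described above: 21 inputs, Dense(290)+ReLU, Dropout(0.05),
# Dense(350)+ReLU, Dropout(0.05), Dense(7)+softmax, Adam(lr=0.001),
# sparse categorical cross-entropy, batch size 32, 1000 epochs.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(21,)),              # one input per acoustic feature
    layers.Dense(290, activation="relu"),
    layers.Dropout(0.05),                  # regularisation against overfitting
    layers.Dense(350, activation="relu"),
    layers.Dropout(0.05),
    layers.Dense(7, activation="softmax")  # one output per movement class
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train/y_train, X_val/y_val are placeholders for the 60/20/20 split described above.
# model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=32, epochs=1000)
```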
Choice of Activation and Optimisation Functions. The ReLU activation function was chosen over other activation functions, as the reduced likelihood of vanishing gradients allows for a constant gradient, resulting in faster learning. Further, sparse representations have been shown to be more beneficial than dense representations, as seen with other activations such as sigmoids (Krizhevsky et al., 2012; Li and Yuan, 2017; Agarap, 2018). The softmax activation function, combined with categorical cross-entropy (Zhang and Sabuncu, 2018) for the loss function, was chosen due to the simple fact that this is a multi-class classification problem. Simply, a sample can belong to one of the 7 classes, with each class corresponding to one of the robot movements. As well as this, the Adam optimiser was an ideal candidate. It is an extension to the Stochastic Gradient Descent (SGD) method, based on adaptive estimation of first- and second-order moments (Kingma and Ba, 2014). Specifically, it allows for the updating of network weights iteratively based on the training data, and performed best with the weighted sample sets compared to other methods tried, such as standard SGD, RMSProp and SGD with Nesterov momentum. ## 4\. Evaluation After setting up the robot environment and capturing the acoustic emanations during various stages of operations, the next step is to evaluate the success of the attack. The evaluation of this attack and the related results are set out in the order of the research questions listed above. ### 4.1. Individual Movement Fingerprints The first research question ($R_{1}$) aims to investigate whether an attacker can infer individual movements (on each axis) and permutations of these movements from the recorded audio. To compare this against other parameters, this experiment is considered as a baseline where the speed and distance of movement are the lowest possible values (1mm and 12.5mm/s respectively), and no VoIP codec is used. As seen in Table 1, an average accuracy of around 75% can be observed across all movements, with the YZ movement having the highest precision among the movements. In comparison with the RF side channel, there is a clear drop in accuracy of around 20%, but the acoustic side channel outperforms traffic analysis by around 10%. Interestingly, Y-involved movements are better recovered than other movements overall, which was not the case in the RF side channel (albeit at a higher accuracy). This may be due to the Y-axis moving across the microphone range. Looking at the Z-involved movements, these are among the lowest. This may be due to the Z axis involving a vertical movement only and not moving nearer the microphone for better recording.

Movement | Precision | Recall
---|---|---
X | 76% | 81%
Y | 77% | 78%
Z | 61% | 71%
XY | 78% | 80%
XZ | 68% | 65%
YZ | 85% | 78%
XYZ | 72% | 67%
Accuracy | 75% |

Table 1. Baseline Classification Results. As a whole, the baseline accuracy is 75%, which is fairly good inference accuracy for an attacker. Z-based movements show the lowest precision and recall for fingerprinting, perhaps due to vertical motion and no horizontal spread across the recording device. ### 4.2. Impact of Movement Distance For the next research question ($R_{2}$), the evaluation will look into how the distance and speed ($R_{2(i)}$) of robot movements, and the distance of the recording device ($R_{2(ii)}$), impact the success of fingerprinting movements from the acoustic side channel. First, as a robot moves, there is likely to be more sound that can be recovered as the distance of movement increases. As seen in Table 2, an increase by a single distance unit increases the model accuracy by 1%, improving Y-involved movement precision by around 10%. Furthermore, the Z movement also gains a slight increase in precision. Unfortunately, this results in lowered accuracy for the other movements.
This increase in distance results in the sound of movement being held for longer, which may either prove useful for distinguishing variance between movements or may even reduce this variance. To explore this, larger movement distances are evaluated. At 5mm, there is a drop in accuracy of around 4%, with X-involved movements having much higher accuracy. At 10mm, the accuracy of the model overall decreases significantly to 57%. Y-involved movements in this case are much more poorly fingerprinted, yet X-involved movements have a further increase in precision. For the Z movement at this stage, there is unfortunately a further drop in precision, but the recall remains relatively similar. At 25mm, the accuracy starts to improve by 7%, with the X movement having similar precision and recall to 10mm, and most other movements having an increase in both precision and recall. Finally, at 50mm, the accuracy nears that of the baseline and the 2mm case; however, X-involved movement accuracy is significantly improved.

D = Distance (mm), P = Precision, R = Recall

Movement | D=2 P | D=2 R | D=5 P | D=5 R | D=10 P | D=10 R | D=25 P | D=25 R | D=50 P | D=50 R
---|---|---|---|---|---|---|---|---|---|---
X | 69% | 71% | 77% | 87% | 85% | 58% | 83% | 70% | 86% | 84%
Y | 88% | 77% | 77% | 79% | 80% | 54% | 66% | 43% | 90% | 88%
Z | 65% | 83% | 64% | 79% | 51% | 81% | 64% | 71% | 83% | 66%
XY | 68% | 60% | 67% | 63% | 63% | 53% | 57% | 65% | 79% | 81%
XZ | 62% | 57% | 83% | 47% | 67% | 49% | 60% | 61% | 64% | 79%
YZ | 94% | 94% | 66% | 83% | 37% | 54% | 55% | 57% | 56% | 59%
XYZ | 69% | 81% | 76% | 68% | 45% | 52% | 63% | 84% | 64% | 58%
Accuracy | 76% | | 72% | | 57% | | 64% | | 74% |

Table 2. Classification Results With Distance Parameter. At a slight increase in distance, the accuracy remains similar to the baseline, but further increases in distance lead to a reduction in fingerprinting accuracy. Notably, unlike the baseline, X-involved movements are better fingerprinted at distance.

### 4.3. Impact of Movement Speed

After looking at movement distance, the next parameter for robot movements is the speed at which the robot is moving along each of the axes ($R_{2(i)}$). As seen in Table 3, the speed parameter is less accurately fingerprinted by the attack compared to the distance parameter, by at least 10% on average. Interestingly, a similar pattern is observed regarding X-involved movements, with accuracy increasing with speed, except for the XYZ movement. While there are slight drops in accuracy, the precision and recall across most movements remain similar as speed increases. This is interesting, as the initial hypothesis was that a higher speed would result in higher-pitched acoustic emanations; however, the results seem to contradict this. In any case, while a clear pitch change is present when listening to the robot in the lab, the feature algorithms related to pitch (i.e. the chroma feature), which are designed around the perceptual characteristics of human audio, may not pick up on this for robot sounds.

S = Speed (mm/s), P = Precision, R = Recall

Movement | S=25 P | S=25 R | S=50 P | S=50 R | S=75 P | S=75 R | S=100 P | S=100 R
---|---|---|---|---|---|---|---|---
X | 57% | 81% | 54% | 74% | 78% | 53% | 72% | 81%
Y | 79% | 76% | 61% | 42% | 59% | 45% | 72% | 69%
Z | 50% | 56% | 52% | 58% | 62% | 84% | 77% | 75%
XY | 73% | 72% | 46% | 40% | 67% | 57% | 57% | 70%
XZ | 79% | 57% | 75% | 79% | 57% | 60% | 60% | 56%
YZ | 67% | 59% | 66% | 45% | 53% | 65% | 66% | 69%
XYZ | 65% | 63% | 51% | 66% | 54% | 57% | 62% | 47%
Accuracy | 66% | | 58% | | 60% | | 66% |
Table 3. Classification Results With Speed Parameter. The speed parameter performs worse than the distance parameter in the acoustic side channel, a similar pattern as seen with the radio frequency side channel.

### 4.4. Microphone Distance

While observing more fine-grained information leakage is useful to an attacker, one problem that may impact the success of the attack is the distance the recording device, in this case the smartphone, is away from the robot. Naturally, the intensity of sound (i.e. loudness) decreases over distance, and one would hypothesise that because of this the accuracy may be significantly impacted as the recording distance increases. In this experiment, two other microphone distances (50cm and 100cm) are also tested in addition to the baseline recorded at 30cm. While these are not large recording distances, given the small scale of the robot used for the evaluation of the attack, these are relatively suitable candidates to be tested. As seen in Table 4, as the distance of the microphone away from the robot is increased, the accuracy of the attack compared to the baseline decreases by around 10% at each recording distance step. Notably, this is much more significant for Z-based movements, which were previously described to have poorer fingerprinting accuracy due to the limited range of motion that does not cross the recording device (the arm remains stationary and moves vertically). In this case, a point of future work may be to evaluate the impact of the position of the smartphone around the robot, aside from facing the front. Collectively, inference from multiple angles may provide better fingerprinting accuracy in all cases.

MD = Microphone Distance (cm), P = Precision, R = Recall

Movement | MD=30 P | MD=30 R | MD=50 P | MD=50 R | MD=100 P | MD=100 R
---|---|---|---|---|---|---
X | 76% | 81% | 57% | 79% | 75% | 76%
Y | 77% | 78% | 67% | 74% | 67% | 68%
Z | 61% | 71% | 48% | 66% | 45% | 63%
XY | 78% | 80% | 88% | 91% | 64% | 68%
XZ | 68% | 65% | 61% | 52% | 51% | 40%
YZ | 85% | 78% | 83% | 54% | 47% | 35%
XYZ | 72% | 67% | 52% | 39% | 33% | 33%
Accuracy | 75% | | 65% | | 54% |

Table 4. Classification Results With Microphone Distance. As the microphone distance away from the robot being recorded increases, on average the accuracy decreases by around 10% at each step compared to the baseline, more significantly so for Z-based movements.

### 4.5. Workflow Recovery

The next step in the evaluation looks at whether entire warehousing workflows can be reconstructed through the acoustic side channel attack. While a pattern matching approach can be successful using individual movement fingerprints, the ability to reconstruct entire workflows may be useful from an auditing perspective, for example, where offsets in normal movement signals can be flagged and investigated further. As seen in Table 5, the explored warehousing workflows can be recovered on average with around 62% accuracy. Notably, the pick-and-place and packing workflows are recovered with much higher success than the push and pull workflows. Simply, the former have much more variation in the pattern of movements, and this variance helps with fingerprinting. In the case of push and pull movements, they are highly similar, and it can be hypothesised that only the direction of movement away from the microphone (i.e. pull is the reverse of push) provides at least some degree of separation between the two.

Workflow | Precision | Recall
---|---|---
Push | 37% | 16%
Pull | 31% | 59%
Pick-and-Place | 100% | 96%
Packing | 97% | 100%
Accuracy | 64% |
Table 5. Workflow Reconstruction Results. Common warehousing workflows can be reconstructed in their entirety through the acoustic side channel and are better recovered if they are more complex and varied. Push and pull operations are less accurate due to the fact that they are very similar movements.

### 4.6. Impact of VoIP

In certain robotics environments, such as in surgical settings, procedures may be streamed and/or recorded for viewing, education or research (Muensterer et al., 2014; Kulkarni et al., 2020; Hosseini et al., 2013). Therefore, it is important to question how VoIP impacts the audio samples for movements and workflows and, ultimately, the success of the attack. In many modern VoIP applications, the Opus codec is the preferred choice (Valin et al., 2012, 2016) given its standardisation and its higher quality ranking compared to other audio formats for a variety of bitrates. To explore this, the open-source nature of Opus allows the audio samples to be easily encoded and decoded and, during decoding, various packet losses to be investigated.

In VoIP applications, Packet Loss Concealment (PLC) is used as a decoder feature for receiving data from an unreliable source, which masks the effects of packet loss in VoIP communications. In realistic settings, packets may arrive late, be dropped or be corrupted, which may result not only in lowered audio quality but, in the worst case, in dropped parts of the audio or even the entire audio sample. In VoIP applications, a 1% packet loss is considered an acceptable rate to minimise the impact on call quality (James et al., 2004; Amirzade Dana et al., 2020); however, in the event of network failures or availability attacks this may be higher. For completeness, five packet loss rates of 1%, 5%, 10%, 25% and 50% are evaluated. Furthermore, as it was shown that constant bitrate quality does not perform as well as variable bitrate quality (Rämö and Toukomaa, 2011), samples are encoded and decoded with a variable bitrate. This experiment used the same model as the previous experiments, but with a batch size of 256 and 100 epochs of training.

Table 6 shows the results for the baseline speed and distance of movement (12.5mm/s and 1mm respectively) under various packet losses via the Opus codec. Interestingly, at low packet loss, the classification accuracy is around 90%, an increase of around 15% compared to the baseline without VoIP employed. Further, X movements are more accurately fingerprinted across all packet losses compared to the baseline without VoIP. As the packet loss reaches more undesirable amounts of 25% and 50%, the accuracy slightly decreases but still remains much higher than the baseline without VoIP. This may be due to the PLC algorithm switching between the CELT and SILK modes and the variable bit rate. Specifically, frames that are deemed important are re-encoded at a lower bitrate, allowing for partial recovery of important lost packets. This may be targeting the movement audio within the sample, thus leading to higher variance among classes.
L = Loss (%), P = Precision, R = Recall

Movement | L=1 P | L=1 R | L=5 P | L=5 R | L=10 P | L=10 R | L=25 P | L=25 R | L=50 P | L=50 R
---|---|---|---|---|---|---|---|---|---|---
X | 99% | 99% | 99% | 100% | 100% | 100% | 100% | 100% | 99% | 100%
Y | 90% | 94% | 87% | 96% | 90% | 92% | 82% | 97% | 86% | 97%
Z | 86% | 72% | 88% | 68% | 91% | 73% | 86% | 72% | 90% | 74%
XY | 88% | 91% | 91% | 88% | 88% | 91% | 94% | 81% | 90% | 81%
XZ | 89% | 93% | 82% | 83% | 82% | 82% | 80% | 85% | 80% | 85%
YZ | 86% | 89% | 87% | 88% | 85% | 89% | 91% | 80% | 91% | 85%
XYZ | 93% | 98% | 94% | 97% | 92% | 96% | 89% | 97% | 90% | 97%
Accuracy | 90% | | 90% | | 90% | | 88% | | 89% |

Table 6. Classification Results (Baseline) With Opus Codec and Packet Loss. Interestingly, the precision and recall remain relatively similar across packet losses, with a slight drop in accuracy for undesirably large packet losses. Notably, there is an increase in accuracy of around 15% compared to the baseline without the Opus codec employed.

## 5\. Discussion

The acoustic side channel attack we propose showcases the potential for successfully compromising the operational confidentiality of organisations in which the robotic systems under attack are deployed. While many active attacks have been shown to result in potentially devastating consequences, the capabilities of a passive insider attacker are truly underestimated. In this section, a discussion of the proposed attack is provided.

### 5.1. Influence of Noise

During the recording of acoustic samples for robot movements, there is likely some degree of background noise that should be accounted for. Given that the recordings were made in a computer lab, background noise effects may include the likes of light chatter, keyboard tapping and rolling chairs, among others. While relatively good accuracy is observed even with the background noise, it is important to also look into techniques to eliminate such noise to determine whether this results in better fingerprinting accuracy.

In human audio, sound recordings capture the signal of the oscillations in the density and pressure of the air that reaches the ear. In digital audio, sound waves are encoded in digital form as numerical samples in a continuous sequence. The recordings taken in this attack are recorded at a sampling rate of 44.1kHz with 16-bit depth, and thus there are $65,536$ possible values the signal can take in the sequence.

Figure 4. FFT of Acoustic Signal. Peaks can be observed at 60Hz corresponding to electric hum, with other peaks at 150Hz and 200Hz (among others) which may correlate with robot movement.

Figure 5. FFT of Acoustic Signal (Filtered). The amplitude at points correlating with electric hum, or those outside the human hearing range, is set to 0 (filtered out).

As shown in Figure 4, the amplitude of the frequency content of the acoustic signal can be observed using the Fast Fourier Transform (FFT). In this attack, we make use of techniques originally applied to human acoustics, given that the robot movements produce sound that is audible to the human ear as well. Looking at the frequency content, no notable amplitude is found past 1kHz, so the view is zoomed in further to 250Hz. There is a notable spike around 60Hz, which is the frequency standard common to alternating current; this effect is known as electric hum and is due to electrical noise getting into an acoustic recording medium. The next largest peaks can be observed at around 150Hz and 200Hz, which may correspond with the robot movements.
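As an illustration of this frequency-domain analysis, the following is a minimal numpy sketch of the FFT inspection and of the amplitude filtering discussed next. The file name and the exact width of the band zeroed around the 60Hz hum are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical file name; recordings in the paper are 44.1 kHz, 16-bit
sr, signal = wavfile.read("movement_sample.wav")
signal = signal.astype(np.float64)

# Frequency content of the recording (a Figure 4 style view)
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
low = freqs <= 250          # no notable amplitude was observed past ~1 kHz
# e.g. plot freqs[low] against np.abs(spectrum)[low] to inspect the 60/150/200 Hz peaks

# Amplitude filtering: zero the mains-hum region and everything outside ~20 Hz - 20 kHz
hum = (freqs >= 55) & (freqs <= 65)            # band around the 60 Hz electric hum (assumed width)
out_of_band = (freqs < 20) | (freqs > 20000)   # outside the nominal human hearing range
spectrum[hum | out_of_band] = 0.0

# Recreate the time-domain signal with an inverse FFT
filtered = np.fft.irfft(spectrum, n=len(signal))
```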
As a first step to noise reduction/filtering, one technique is amplitude filtering, where the amplitudes of the FFT values to be filtered are set to 0, after which the original signal can be recreated using an inverse FFT (as in the sketch above). In this experiment, the electric hum, as well as frequencies outside the human hearing range of ~20Hz–20kHz, are filtered by dropping the amplitude of these ranges. A depiction of the amplitude drop can be seen in Figure 5.

Looking at Table 7, the accuracy of the baseline movement fingerprints can be observed with amplitude filtering in place. While the accuracy overall decreases by 1% compared to the baseline without amplitude filtering, the precision for the Y and XY movements increases. This may be due to unfortunate noise events present in these samples that the filter has rectified. However, there is still a reduction in overall accuracy, which may mean that electric hum and other peaks may not be the best indicators of noise to remove when recording a robotics system. In this case, as a point of future work, other noise reduction techniques that have been shown to be successful in other areas are worth exploring in the hope that the attack accuracy may increase; an example is stationary or non-stationary spectral gating (Neumann and Schuller, 1991; Inouye et al., 2014), which reduces noise in time-domain signals by estimating noise thresholds for the frequency bands in a signal and gating (masking) noise below the threshold.

Movement | Precision | Recall
---|---|---
X | 72% | 81%
Y | 75% | 73%
Z | 68% | 72%
XY | 86% | 71%
XZ | 69% | 68%
YZ | 76% | 83%
XYZ | 72% | 70%
Accuracy | 74% |

Table 7. Amplitude Filtering Classification Results. While the accuracy is slightly reduced compared to the baseline with no filtering, the precision for some movements increases further, with better recall seen in most cases.

### 5.2. Other VoIP Codecs

Opus is the primary choice for many VoIP applications due to its royalty-free and open-source nature, alongside the benefits of higher quality and low-bandwidth streaming, in comparison with other codecs such as Speex (Valin, 2016) or SILK (Vos et al., 2010) (Opus’ predecessor). While it may be interesting to evaluate other codecs, Opus is the main choice for the majority of modern applications, such as Zoom, Teams and Discord (Rajaratnam et al., 2018; Castro, 2020), and is taking over from previously dominant codecs.

### 5.3. Defences

While the attack is successful, and even more so when it targets VoIP communications, a natural question pertains to countermeasures and defences against the acoustic side channel attack. In this work, acoustic emanations result in unintentional information leakage about robot behaviours and can ultimately lead to the compromise of operational confidentiality.

One defence that could be considered is to make use of vibration- or sound-reduction mechanisms to hinder the effect of the attack. As seen in Section 4.4, as the microphone distance increases, the accuracy of fingerprinting decreases. While this is due to the natural decrease of sound intensity (i.e. loudness) with distance, a reduction in intensity by other means may result in the same outcome of reduced fingerprinting success. Techniques in this space include the likes of using vibration isolation pads (Desai and Patil, [n.d.]) or damping to reduce vibration (Gravagne et al., 2001; Khan and Li, 2020) for the robot as a whole.
In the case of noise reduction for robot components such as stepper motors, potential defences include using a clean damper (Ma et al., 2019) or higher resolution stepper motors. Another potential defence is to make use of a masking noise to interfere with attack inference by distorting the signal related to information leakage in the acoustic side channel (Backes et al., 2010; Anand and Saxena, 2016; Kim et al., 2015). Adding a masking signal has shown success, but two challenges need to be addressed. First, the mask must be similar to the signal requiring masking to ensure that separation is difficult. Second, the masking noise should not cause any degrading effect on the usability of the robotic system. For example, if the masking noise covers up other sounds, such as those used for emergencies or other operator feedback, then this will be much less than ideal and could potentially lead to catastrophic liabilities.

## 6\. Related Work

While acoustic side channel attacks have not been explored for robotic systems, enhancing the novelty of this work, there has been previous research in the area of acoustic side channels. In a similar respect to robotics, information leakage in the acoustic side channel has been explored for 3D printers (Backes et al., 2010) – some of which make use of smartphones to carry out the attack (Song et al., 2016; Bilal, 2017) – and additive manufacturing systems (Chhetri et al., 2017). However, many of these attacks solely focus on IP theft. The acoustic side channel attack presented in this work focuses solely on the movement of the robot arm and the compromise of operational confidentiality, which, when looking at the bigger picture, is much more valuable to an attacker. Furthermore, the reconstruction of G-code is an unnecessary extra step, as the movements which correspond to it can be inferred from individual movement fingerprinting under the assumption that the robot is operated by an Arduino. In addition, while the robot in this work is operated by an Arduino, the focus is on reconstructing movements from the acoustic emanations irrespective of the microcontroller used, and the attack thus applies to robotic systems in general and not only to those operated by an Arduino.

## 7\. Conclusion

In conclusion, it is clear that even acoustic emanations provide a high level of accuracy for fingerprinting movements and showcase a highly important passive side channel attack in the physical domain, which can be carried out with a fairly cheap smartphone. While more fine-grained movements and entire workflows in warehousing settings can be inferred, our contributions demonstrate that the recent usage of VoIP technologies also leaves potential for information leakage through these communication channels, with the result that movement fingerprints can be reconstructed more accurately. This is an interesting result, as it opens up new research questions regarding anonymous communications to protect robotic systems from acoustic side channel attacks via VoIP communication networks.

###### Acknowledgements. The authors are grateful for the support by the Engineering and Physical Sciences Research Council (11288S170484-102) and the support of the National Measurement System of the UK Department of Business, Energy & Industrial Strategy, which funded this work as part of NPL’s Data Science program.

## References

* Agarap (2018) Abien Fred Agarap. 2018\. Deep learning using rectified linear units (relu). _arXiv preprint arXiv:1803.08375_ (2018). * Ahn et al.
(2015) Ho Seok Ahn, Min Ho Lee, and Bruce A MacDonald. 2015. Healthcare robot systems for a hospital environment: CareBot and ReceptionBot. In _2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)_. IEEE, 571–576. * Al-Jabir et al. (2020) A. Al-Jabir, A. Kerwan, M. Nicola, Z. Alsafi, M. Khan, C. Sohrabi, N. O’Neill, C. Iosifidis, M. Griffin, G. Mathew, and R. Agha. 2020\. Impact of the Coronavirus (COVID-19) pandemic on surgical practice - Part 1. _Int J Surg_ 79 (Jul 2020), 168–179. * Amirzade Dana et al. (2020) Parvaneh Amirzade Dana, Zahra Esmaeilbeig, and Mohammad-Reza Sadeghi. 2020. Reliability enhancement and packet loss recovery of any steganographic method in voice over IP. _Wireless Networks_ 26, 8 (2020), 5817–5823. * Anand and Saxena (2016) S Abhishek Anand and Nitesh Saxena. 2016. A sound for a sound: Mitigating acoustic side channel attacks on password keystrokes with active sounds. In _International Conference on Financial Cryptography and Data Security_. Springer, 346–364. * Aschenbrenner et al. (2015) Doris Aschenbrenner, Michael Fritscher, Felix Sittner, Markus Krauß, and Klaus Schilling. 2015\. Teleoperation of an industrial robot in an active production line. _IFAC-PapersOnLine_ 48, 10 (2015), 159–164. * Atahan et al. (2021) Yunus Atahan, Ahmet Elbir, Abdullah Enes Keskin, Osman Kiraz, Bulent Kirval, and Nizamettin Aydin. 2021. Music Genre Classification Using Acoustic Features and Autoencoders. In _2021 Innovations in Intelligent Systems and Applications Conference (ASYU)_. IEEE, 1–5. * Avila et al. (2020) Jose Luis Ordoñez Avila, Hector Jimenez, Tania Marquez, Carlos Muñoz, Alberto Max Carrazco, Maria Elena Perdomo, David Miselem, and David Nolasco. 2020. Study Case: Teleoperated Voice Picking Robots prototype as a logistic solution in Honduras. In _2020 5th International Conference on Control and Robotics Engineering (ICCRE)_. IEEE, 19–24. * Bachu et al. (2008) RG Bachu, S Kopparthi, B Adapa, and BD Barkana. 2008\. Separation of voiced and unvoiced using zero crossing rate and energy of the speech signal. In _American Society for Engineering Education (ASEE) zone conference proceedings_. American Society for Engineering Education, 1–7. * Backes et al. (2010) Michael Backes, Markus Dürmuth, Sebastian Gerling, Manfred Pinkal, Caroline Sporleder, et al. 2010\. Acoustic $\\{$Side-Channel$\\}$ Attacks on Printers. In _19th USENIX Security Symposium (USENIX Security 10)_. * Bartoš et al. (2021) Michal Bartoš, Vladimír Bulej, Martin Bohušík, Ján Stanček, Vitalii Ivanov, and Peter Macek. 2021\. An overview of robot applications in automotive industry. _Transportation Research Procedia_ 55 (2021), 837–844. * Bilal (2017) Muhammad Bilal. 2017\. A review of internet of things architecture, technologies and analysis smartphone-based attacks against 3D printers. _arXiv preprint arXiv:1708.04560_ (2017). * Bonaci et al. (2015) Tamara Bonaci, Jeffrey Herron, Tariq Yusuf, Junjie Yan, Tadayoshi Kohno, and Howard Jay Chizeck. 2015. To make a robot secure: An experimental analysis of cyber security threats against teleoperated surgical robots. _arXiv preprint arXiv:1504.04339_ (2015). * Castro (2020) Rodolfo Castro. 2020\. Is your company’s network ready for Microsoft teams. * Chhetri et al. (2017) Sujit Rokka Chhetri, Arquimedes Canedo, and Mohammad Abdullah Al Faruque. 2017. Confidentiality breach through acoustic side-channel in cyber-physical additive manufacturing systems. _ACM Transactions on Cyber-Physical Systems_ 2, 1 (2017), 1–25. 
* Cho and Bello (2013) Taemin Cho and Juan P Bello. 2013. On the relative importance of individual components of chord recognition systems. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ 22, 2 (2013), 477–492. * Chollet et al. (2015) François Chollet et al. 2015\. Keras. https://keras.io. * Dalen et al. (2019) ASHM Dalen, J Legemaate, WS Schlack, DA Legemate, and MP Schijven. 2019. Legal perspectives on black box recording devices in the operating environment. _Journal of British Surgery_ 106, 11 (2019), 1433–1441. * Demuth et al. (2014) Howard B Demuth, Mark H Beale, Orlando De Jess, and Martin T Hagan. 2014. _Neural network design_. Martin Hagan. * Desai and Patil ([n.d.]) Tanvi D Desai and SR Patil. [n.d.]. Experimental and Numerical Analysis of Vibration Isolation Materials on Vibration Reduction within Plazma Torch. ([n. d.]). * Doppler (1903) Christian Doppler. 1903\. _Ueber das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels: Versuch einer das Bradley’sche Aberrations-Theorem als integrirenden Theil in sich schliessenden allgemeineren Theorie_. K. Böhm Gesellschaft der Wissenschaften. * Dunne and Campbell (1997) Rob A Dunne and Norm A Campbell. 1997. On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function. In _Proc. 8th Aust. Conf. on the Neural Networks, Melbourne_ , Vol. 181. Citeseer, 185\. * Eckle and Schmidt-Hieber (2019) Konstantin Eckle and Johannes Schmidt-Hieber. 2019. A comparison of deep networks with ReLU activation function and linear spline-type methods. _Neural Networks_ 110 (2019), 232–242. * Grabowski et al. (2021) Andrzej Grabowski, Jarosław Jankowski, and Mieszko Wodzyński. 2021. Teleoperated mobile robot with two arms: the influence of a human-machine interface, VR training and operator age. _International Journal of Human-Computer Studies_ 156 (2021), 102707\. * Gravagne et al. (2001) Ian A Gravagne, Christopher D Rahn, and Ian D Walker. 2001\. Good vibrations: a vibration damping setpoint controller for continuum robots. In _Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164)_ , Vol. 4. IEEE, 3877–3884. * Greenwood (1997) Donald D Greenwood. 1997\. The Mel Scale’s disqualifying bias and a consistency of pitch-difference equisections in 1956 with equal cochlear distances and equal frequency ratios. _Hearing research_ 103, 1-2 (1997), 199–224. * Hannaford et al. (2012) Blake Hannaford, Jacob Rosen, Diana W Friedman, Hawkeye King, Phillip Roan, Lei Cheng, Daniel Glozman, Ji Ma, Sina Nia Kosari, and Lee White. 2012\. Raven-II: an open platform for surgical robotics research. _IEEE Transactions on Biomedical Engineering_ 60, 4 (2012), 954–959. * Hosseini et al. (2013) Azamossadat Hosseini, Hamid Moghaddasi, Samad Sajadi, and Mozhgan Karimi. 2013. Telesurgery information management systems in university hospitals of Tehran. _Archives of Advances in Biosciences_ 4, 4 (2013). * Inouye et al. (2014) Joshua M Inouye, Silvia S Blemker, and David I Inouye. 2014\. Towards undistorted and noise-free speech in an MRI scanner: correlation subtraction followed by spectral noise gating. _The Journal of the Acoustical Society of America_ 135, 3 (2014), 1019–1022. * James et al. (2004) Jim H James, Bing Chen, and Laurie Garrison. 2004. Implementing VoIP: a voice transmission performance progress report. _IEEE Communications Magazine_ 42, 7 (2004), 36–41. * Jiang et al. 
(2002) Dan-Ning Jiang, Lie Lu, Hong-Jiang Zhang, Jian-Hua Tao, and Lian-Hong Cai. 2002. Music type classification by spectral contrast feature. In _Proceedings. IEEE International Conference on Multimedia and Expo_ , Vol. 1. IEEE, 113–116. * Khan and Li (2020) Ameer Hamza Khan and Shuai Li. 2020. Sliding mode control with PID sliding surface for active vibration damping of pneumatically actuated soft robots. _IEEE Access_ 8 (2020), 88793–88800. * Kim et al. (2015) Younghyun Kim, Woo Suk Lee, Vijay Raghunathan, Niraj K Jha, and Anand Raghunathan. 2015. Vibration-based secure side channel for medical devices. In _2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC)_. IEEE, 1–6. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ (2014). * Klapuri and Davy (2007) Anssi Klapuri and Manuel Davy. 2007. Signal processing methods for music transcription. (2007). * Kos et al. (2013) Marko Kos, Zdravko Kačič, and Damjan Vlaj. 2013\. Acoustic classification and segmentation using modified spectral roll-off and variance-based features. _Digital Signal Processing_ 23, 2 (2013), 659–674. * Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012\. Imagenet classification with deep convolutional neural networks. In _Advances in neural information processing systems_. 1097–1105. * Kulkarni et al. (2020) Sushmita Kulkarni, Dattaprasad A Torse, and Deepak Kulkarni. 2020. A Cloud based Medical Transcription using Speech Recognition Technologies. _International Research Journal of Engineering and Technology (IRJET)_ 7, 5 (2020), 6160–6163. * Laghari et al. (2020) Asif Ali Laghari, Rashid Ali Laghari, Asif Ali Wagan, and Aamir Iqbal Umrani. 2020. Effect of packet loss and reorder on quality of audio streaming. _EAI Endorsed Transactions on Scalable Information Systems_ 7, 25 (2020). * Le et al. (2011) Phu Ngoc Le, Eliathamby Ambikairajah, Julien Epps, Vidhyasaharan Sethu, and Eric HC Choi. 2011\. Investigation of spectral centroid features for cognitive load classification. _Speech Communication_ 53, 4 (2011), 540–551. * Li et al. (2017) Chunxu Li, Chenguang Yang, Jian Wan, Andy SK Annamalai, and Angelo Cangelosi. 2017. Teleoperation control of Baxter robot using Kalman filter-based sensor fusion. _Systems Science & Control Engineering_ 5, 1 (2017), 156–167. * Li and Yuan (2017) Yuanzhi Li and Yang Yuan. 2017. Convergence analysis of two-layer neural networks with relu activation. In _Advances in neural information processing systems_. 597–607. * Ma et al. (2019) Xinbo Ma, Pak Kin Wong, and Jing Zhao. 2019. Practical multi-objective control for automotive semi-active suspension system with nonlinear hydraulic adjustable damper. _Mechanical Systems and Signal Processing_ 117 (2019), 667–688. * Martinez et al. (2012) Jorge Martinez, Hector Perez, Enrique Escamilla, and Masahisa Mabo Suzuki. 2012. Speaker recognition using Mel frequency Cepstral Coefficients (MFCC) and Vector quantization (VQ) techniques. In _CONIELECOMP 2012, 22nd International Conference on Electrical Communications and Computers_. IEEE, 248–251. * McClean et al. (2013) Jarrod McClean, Christopher Stull, Charles Farrar, and David Mascarenas. 2013. A preliminary cyber-physical security assessment of the robot operating system (ros). In _Unmanned Systems Technology XV_ , Vol. 8741. International Society for Optics and Photonics, 874110. * McFee et al. 
(2015) Brian McFee, Colin Raffel, Dawen Liang, Daniel P Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in python. In _Proceedings of the 14th python in science conference_ , Vol. 8. Citeseer, 18–25. * Muensterer et al. (2014) Oliver J Muensterer, Martin Lacher, Christoph Zoeller, Matthew Bronstein, and Joachim Kübler. 2014. Google Glass in pediatric surgery: an exploratory study. _International journal of surgery_ 12, 4 (2014), 281–289. * Müller (2015) Meinard Müller. 2015\. Fundamentals of Music Processing. (2015). * Neumann and Schuller (1991) Ingrid Neumann and Gerd Schuller. 1991. Spectral and temporal gating mechanisms enhance the clutter rejection in the echolocating bat, Rhinolophus rouxi. _Journal of comparative physiology A_ 169, 1 (1991), 109–116. * Ortega et al. (2018) Martín Ortega Ortega, Gustavo Chafla Altamirano, and Mara Falconí Abad. 2018. Evaluation of the voice quality and QoS in real calls using different voice over IP codecs. In _2018 IEEE Colombian Conference on Communications and Computing (COLCOM)_. IEEE, 1–6. * Panagiotakis and Tziritas (2005) Costas Panagiotakis and Georgios Tziritas. 2005. A speech/music discriminator based on RMS and zero-crossings. _IEEE Transactions on multimedia_ 7, 1 (2005), 155–166. * Paulus et al. (2010) Jouni Paulus, Meinard Müller, and Anssi Klapuri. 2010\. State of the Art Report: Audio-Based Music Structure Analysis.. In _Ismir_. Utrecht, 625–636. * Peng and Choi (2013) Yinni Peng and Susanne YP Choi. 2013. Mobile phone use among migrant factory workers in south China: Technologies of power and resistance. _The China Quarterly_ 215 (2013), 553–571. * Polydoros and Nalpantidis (2016) Athanasios S Polydoros and Lazaros Nalpantidis. 2016. A reservoir computing approach for learning forward dynamics of industrial manipulators. In _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 612–618. * Quarta et al. (2017) Davide Quarta, Marcello Pogliani, Mario Polino, Federico Maggi, Andrea Maria Zanchettin, and Stefano Zanero. 2017. An experimental security analysis of an industrial robot controller. In _2017 IEEE Symposium on Security and Privacy (SP)_. IEEE, 268–286. * Rajaratnam et al. (2018) Krishan Rajaratnam, Kunal Shah, and Jugal Kalita. 2018\. Isolated and ensemble audio preprocessing methods for detecting adversarial examples against automatic speech recognition. _arXiv preprint arXiv:1809.04397_ (2018). * Rämö and Toukomaa (2011) Anssi Rämö and Henri Toukomaa. 2011. Voice quality characterization of IETF Opus codec. In _Twelfth Annual Conference of the International Speech Communication Association_. * Rao and Vuppala (2014) K Sreenivasa Rao and Anil Kumar Vuppala. 2014. _Speech processing in mobile environments_. Springer. * Reuters (2019) Reuters. 2019. U.S. companies put record number of robots to work in 2018\. https://www.reuters.com/article/us-usa-economy-robots/u-s-companies-put-record-number-of-robots-to-work-in-2018-idUSKCN1QH0K0. * Rueckert et al. (2017) Elmar Rueckert, Moritz Nakatenus, Samuele Tosatto, and Jan Peters. 2017. Learning inverse dynamics models in o (n) time with lstm networks. In _2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)_. IEEE, 811–816. * Saun et al. (2019) Tomas J Saun, Kevin J Zuo, and Teodor P Grantcharov. 2019\. Video technologies for recording open surgery: a systematic review. _Surgical innovation_ 26, 5 (2019), 599–612. * Shah et al. 
(2022) Ryan Shah, Chuadhry Mujeeb Ahmed, and Shishir Nagaraja. 2022\. Can You Still See Me?: Reconstructing Robot Operations Over End-to-End Encrypted Channels. _arXiv preprint arXiv:2205.08426_ (2022). * Song et al. (2016) Chen Song, Feng Lin, Zhongjie Ba, Kui Ren, Chi Zhou, and Wenyao Xu. 2016\. My smartphone knows what you print: Exploring smartphone-based side-channel attacks against 3d printers. In _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_. 895–907. * Sung and Gill (2001) Gyung Tak Sung and Inderbir S Gill. 2001. Robotic laparoscopic surgery: a comparison of the da Vinci and Zeus systems. _Urology_ 58, 6 (2001), 893–898. * Tewari et al. (2002) Ashutosh Tewari, James Peabody, Richard Sarle, Guruswami Balakrishnan, Ashok Hemal, Alok Shrivastava, and Mani Menon. 2002\. Technique of da Vinci robot-assisted anatomic radical prostatectomy. _Urology_ 60, 4 (2002), 569–572. * Valin (2016) Jean-Marc Valin. 2016\. Speex: A free codec for free speech. _arXiv preprint arXiv:1602.08668_ (2016). * Valin et al. (2016) Jean-Marc Valin, Gregory Maxwell, Timothy B Terriberry, and Koen Vos. 2016. High-quality, low-delay music coding in the opus codec. _arXiv preprint arXiv:1602.04845_ (2016). * Valin et al. (2012) Jean-Marc Valin, Koen Vos, and Timothy Terriberry. 2012\. _Definition of the Opus audio codec_. Technical Report. * Vodafone (2019) Vodafone. 2019. 5GDig: The winners – from Skype for surgeons to AR. https://www.vodafone.com/news/technology/5gdig-winners-2019-supponor-ar-surgeonmate. * Vos et al. (2010) Koen Vos, Soeren Jensen, and Karsten Soerensen. 2010. SILK speech codec. _IETF draft_ 30 (2010). * Zhang and Sabuncu (2018) Zhilu Zhang and Mert Sabuncu. 2018. Generalized cross entropy loss for training deep neural networks with noisy labels. In _Advances in neural information processing systems_. 8778–8788.
# The 6Li$(p,\gamma)^{7}$Be reaction rate in the light of the new LUNA data

S. B. Dubovichenko Fesenkov Astrophysical Institute ”NCSRT” ASA MDASI RK, 050020 Almaty, Kazakhstan al-Farabi Kazakh National University, 050040 Almaty, Kazakhstan A. S. Tkachenko Fesenkov Astrophysical Institute ”NCSRT” ASA MDASI RK, 050020 Almaty, Kazakhstan R. Ya. Kezerashvili New York City College of Technology, City University of New York, Brooklyn, 11201 New York, USA Graduate School and University Center, City University of New York, 10016 New York, USA N. A. Burkova al-Farabi Kazakh National University, 050040 Almaty, Kazakhstan A. V. Dzhazairov-Kakhramanov Fesenkov Astrophysical Institute ”NCSRT” ASA MDASI RK, 050020 Almaty, Kazakhstan

###### Abstract

We present new calculations of the astrophysical $S-$factor and reaction rate for the 6Li$(p,\gamma)^{7}$Be reaction at energies of 10 keV to 5 MeV in the framework of a modified potential cluster model with forbidden states, including low-lying resonances. The astrophysical $S(E)-$factor is compared with the available experimental data and with calculations done within different models. The results for the $S-$factor are in good agreement with the data set (for $E<0.3$ MeV) and the calculations (for $E<0.6$ MeV) of the LUNA collaboration (Phys. Rev. C 102, 052802, 2020). The recommended value extrapolated to zero energy is $S(0)=101$ eV $\cdot$ b. Using the theoretical total cross-sections, the 6Li$(p,\gamma)^{7}$Be capture reaction rate is calculated at temperatures ranging from 0.01 to 10 $T_{9}$ and compared with NACRE and NACRE II. Analytical expressions for the $S-$factor and reaction rate are given, and the effect of low-lying resonances on the reaction rate is estimated. We suggest updating the NACRE and NACRE II databases in light of the new LUNA data and the present calculations.

low and astrophysical energies, $p^{6}$Li system, thermonuclear reaction rate, potential cluster model

## I Introduction

The radiative 6Li$(p,\gamma)^{7}$Be capture reaction is of great interest in nuclear astrophysics [1, 2]. Since 1955, the 6Li$(p,\gamma)^{7}$Be reaction at low energies has been studied by several experimental groups [3, 4, 5, 6, 7, 8, 9, 10, 11]. Measurements of the astrophysical $S-$factor of this reaction were limited to the energy range of 35 keV to 1.2 MeV. The astrophysical $S-$factor and reaction rate were studied in the framework of different theoretical approaches and methods [12, 13, 14, 15, 16, 17, 18, 11]. A detailed review of the current theoretical and experimental status is given in Ref. [9]. In 2020, new experimental data were obtained in the Laboratory for Underground Nuclear Astrophysics (LUNA) [9], and they excluded the possibility of the resonance mentioned in [8]. It is therefore worthwhile to reconsider this reaction in the astrophysical energy range for which experimental data are available. In particular, our interest is the re-examination of the $S-$factor, with two foci: i. to consider 6Li$(p,\gamma)^{7}$Be within the framework of the modified potential cluster model (MPCM) with the classification of bound and scattering states according to Young’s orbital diagrams [28]; ii. to describe the $S-$factor using all available experimental data and to obtain the reaction rate. While assessing the reliability of the MPCM, it is reasonable to extend the energy interval up to 5 MeV to estimate the role of resonances in this energy range. Ten years ago, we studied this reaction, but limited ourselves to an energy range up to 1 MeV.
Moreover, we did not take into account resonances, nor did we consider the reaction rate [14, 15]. In this paper, we investigate the energy dependence of the astrophysical $S-$factor of the ${}^{6}$Li$(p,\gamma)^{7}$Be reaction at energies of 10 keV to 5 MeV. By considering several resonances, including a wide resonance at $E_{x}=9.9$ MeV, the reaction rate in the temperature range of 0.01 to 10 $T_{9}$ is calculated. We demonstrate that it is possible to correctly reproduce the available experimental data based on potentials that are consistent with the energies of the bound states and their asymptotic constants. For the scattering potentials, we use parameters consistent with the resonance spectrum of the final nucleus. The results obtained for the reaction rate are approximated by curves of a particular type to simplify their use in applied research. These results apply to problems in nuclear astrophysics related to light atomic nuclei and ultra-low energies.

This work is organized as follows. Secs. II and III present the theoretical framework and fundamentals of the modified potential cluster model (MPCM) and the principles for constructing the potentials of the discrete and scattering states. The cross-section for the radiative capture processes and a discussion of the wave function asymptotics are given in Sec. IV. The classification of cluster states and the bound state potentials are given in Sec. V, and Sec. VI contains the description of the $p^{6}$Li channel in the continuous spectrum. Sec. VII is devoted to the astrophysical $S-$factor and the 6Li$(p,\gamma)^{7}$Be reaction rate. Appendices A and B include the description of the finite-difference method used in the present calculations and a table of numerical values of the $p^{6}$Li reaction rate in the temperature range of 0.001 $T_{9}$ to 10 $T_{9}$, respectively. We outline conclusions in Sec. VIII.

## II Theoretical framework

Charged-particle induced reactions represent one of the main inputs in stellar evolution. There are a number of theoretical methods used for the description of nuclear reactions at stellar energies that are based on fundamental principles of quantum mechanics [20]. Since its first application in 1963 [21], the potential model approach has held a special place among models for the description of low-energy reactions. However, over the course of more than half a century, this model has been significantly modified and improved. Below we present the fundamentals of the modified potential cluster model (MPCM), where the Young diagrams are used for the classification of orbital states and the construction of potentials ([28] and Refs. therein). The basic features of the MPCM approach are as follows:

1. The MPCM is a two-particle model that accounts for the internal characteristics of clusters: their sizes, charges, masses, quadrupole and magnetic momenta, which are used to calculate the reaction total cross-sections or other characteristics of the final nucleus.

2. The classification of cluster states is performed according to Young’s orbital diagrams, leading to the concept of forbidden states in some partial waves [28]. The Pauli principle is implemented via exclusion of the forbidden states (FSs), manifesting in the proper node behavior of the radial wave function (WF). Forbidden states that lead to low-lying bound states are not physically realized due to the orthogonality of the corresponding functions and the allowed-state functions.

3.
The Gaussian-type inter-cluster interaction potentials are constructed taking into account these forbidden states in certain partial waves. For each partial wave with specified quantum numbers, the potential is constructed with two parameters, assuming it depends explicitly on Young’s orbital diagrams.

4. Potentials of the bound states (BSs) are constructed based on asymptotic constants (AC) and binding energies. Potentials of the scattering processes are constructed based on the spectra of the final nucleus or the scattering phase shifts of the particles of the input channel. Parameters of the potentials are either fixed or varied within the AC error intervals and within the energy or width errors of the resonant or excited states.

5. The radial WFs of the allowed states of the continuous and discrete spectra are tailored appropriately using the correct asymptotics.

In light of the new experimental LUNA data [9], we reexamine the 6Li$(p,\gamma)^{7}$Be reaction $S-$factor and reaction rate within the framework of the MPCM.

### Classification of cluster states

The total wave functions (WF) have the form of an antisymmetrized product of the completely antisymmetric internal wave functions of the clusters, $\Psi(1,\dots,A_{1})=\Psi(\mathbf{R_{1}})$ and $\Psi(A_{1}+1,\dots,A)=\Psi(\mathbf{R_{2}})$, multiplied by the corresponding wave function $\Phi(\mathbf{R})$ of relative motion [22, 23, 24]

$\Psi=\hat{A}\\{\Psi(\mathbf{R_{1}})\Psi(\mathbf{R_{2}})\Phi(\mathbf{R})\\}.$ (1)

In Eq. (1) $\hat{A}$ is the antisymmetrization operator that permutes nucleons between the clusters $A_{1}$ and $A_{2}$, $\mathbf{R_{1}}$ and $\mathbf{R_{2}}$ are the center-of-mass radius-vectors of the clusters, and $\mathbf{R=R}_{1}\mathbf{-R}_{2}$ is the relative motion coordinate. The wave functions (1) are characterized by specific quantum numbers, including $JLS$ (the total momentum, orbital angular momentum, and spin, respectively) and Young’s diagrams $\\{f\\}$, which determine the permutation symmetry of the orbital part of the WF of the relative motion of the clusters.

In the general case, the possible Young’s orbital diagram $\\{f\\}_{L}$ of some nucleus $A(\\{f\\})$, consisting of two parts $A_{1}(\\{f_{1}\\})+A_{2}(\\{f_{2}\\})$, is the direct outer product of Young’s orbital diagrams $\\{f\\}_{L}=\\{f_{1}\\}_{L}\times\\{f_{2}\\}_{L}$ and is determined by Littlewood’s theorem [23, 24]. According to Elliot’s theorem, each Young’s diagram is associated with a certain orbital angular momentum or a combination of them. Spin-isospin diagrams are the direct inner product of the spin and isospin Young diagrams of a nucleus consisting of $A$ nucleons, $\\{f\\}_{ST}=\\{f\\}_{S}\otimes\\{f\\}_{T}$. For a system with no more than eight particles such diagrams are provided in Table C of Ref. [25]. A detailed procedure for defining the corresponding momenta can be found in the classical monograph [26]. Let us note that in Ref. [26] the definition of the inner and outer products is reversed. The total Young’s diagram of the nucleus is defined as the direct inner product of the orbital and spin-isospin diagrams, $\\{f\\}=\\{f\\}_{L}\otimes\\{f\\}_{ST}$. The total wave function of the particle system does not vanish identically under antisymmetrization only if it contains an antisymmetric component $\\{1^{N}\\}$, where $N$ is the number of nucleons. In this case the conjugate diagrams $\\{f\\}_{L}$ and $\\{f\\}_{ST}$ are multiplied. Therefore, the diagrams $\\{f\\}_{L}$ conjugated to $\\{f\\}_{ST}$ are allowed in this channel.
All other orbital symmetries are forbidden since they lead to a zero total wave function of the particle system after antisymmetrization.

## III Construction of the potentials within the MPCM

Let us describe in more detail the procedure for constructing the intercluster partial potentials. Below we define the criteria and outline the sequence for finding the parameters of the potentials, indicating their errors as well as their ambiguities.

### III.1 Discrete states

For the bound states of two clusters, the interaction potentials within the framework of the MPCM are constructed based on the requirement that they describe the main observable characteristics of such a nucleus. In this case, the potential parameters are fixed. It should be noted that this requirement is an idealization of the real nucleus, since it assumes that the ground state (GS) is a single two-body channel with probability close to unity.

First, we find the parameters of the bound state potentials. For the GS with a given number of bound allowed and forbidden states in the partial wave, these parameters are fixed unambiguously in terms of the binding energy, the radius of the nucleus, and the AC. When constructing the partial interaction potentials in the MPCM, it is assumed that the interactions depend not only on the orbital angular momentum $L$ but also on the total spin $S$ and the total angular momentum $J$ of the system, and also depend on Young’s orbital diagrams. As in earlier work [28], we use Gaussian interaction potentials, which depend on the quantum numbers $JLS$ and Young’s diagrams $\\{f\\}_{L}$. Therefore, for different $JLS$, we have different values of the parameters of the partial potentials.

The accuracy of determining the parameters of the BS potential is connected directly with the accuracy of the AC. The potential does not contain any other ambiguities, since the classification of states according to Young’s diagrams makes it possible to unambiguously fix the number of bound forbidden and allowed states in a given partial wave. The number of bound states ultimately determines the depth of the potential, while the width depends entirely on the value of the AC. If one fixes the two parameters of the potential using two particular quantities, the binding energy and the AC, the error of the binding energy is much less than that of the AC. It should be noted that any calculation of the charge radius reflects the errors of the underlying model. In any model, the magnitude of such a radius depends on an integral of the model wave functions, thereby compounding sources of error. At the same time, the values of the AC are determined from the asymptotic behavior of the model WFs at one point and contain significantly less error. The potentials of the BSs are constructed to obtain the best agreement with the values of the AC extracted independently from the experimental data. For more details, see Ref. [27].

### III.2 Continuum states

For the potentials of the continuous spectrum, the intercluster potential of a nonresonant scattering process for a given number of allowed and forbidden BSs in the considered partial wave is also constructed quite unambiguously based on the scattering phase shifts. The uncertainty of the potential parameters, sometimes as large as 20–30%, is associated with the precision of the scattering phase shifts extracted from the experimental data.
For the 6Li$(p,\gamma)^{7}$Be reaction, the potential is unambiguous since the classification, according to Young’s diagrams, makes it possible to fix the number of bound states. This completely determines the potential depth, and its width is determined by the shape of the scattering phase shifts. When constructing the nonresonant scattering potential based on the data for the nuclear spectra, it is difficult to estimate the accuracy of the parameters even for a given number of BSs. However, one can expect that it will not exceed the error discussed above. This potential should lead to a scattering phase shift close to zero or rise to a smoothly decreasing phase shift at low energies, since there are no resonance levels in the spectra of the nucleus. In resonance scattering, when a relatively narrow resonance is present in the partial wave at low energies for a given number of BSs, the potential is constructed completely unambiguously. The accuracy of determining the parameters of the interaction potentials is determined by the following factors. The depth of the potential depends on the resonance energy $E_{x}$ and the number of BSs. The width is determined by the accuracy of the experimental values of the level width $\Gamma$. The error of the parameters, approximately 5–10%, usually does not exceed the error of the energy level width. This also applies to the construction of the partial potential from the resonant scattering phase shifts and the determination of its parameters from the spectral resonance of the nucleus [19, 28]. ## IV Cross-section and WFs asymptotics To calculate the total cross-sections of radiative capture processes, we use the well-known formula for the transitions of $NJ$ multipolarity [19, 28] $\begin{gathered}\sigma(NJ,J_{f})=\frac{8\pi Ke^{2}}{\hbar^{2}\ k^{3}}\frac{\mu}{(2S_{1}+1)(2S_{2}+1)}\frac{J+1}{J[(2J+1)!!]^{2}}A_{J}^{2}(NJ,K)\times\\\ \times\sum\limits_{L_{i},J_{i}}P_{J}^{2}(NJ,J_{f},J_{i})I_{J}^{2}(k,J_{f},J_{i})\end{gathered}$ (2) where the matrix elements of orbital $EJ(L)$-transitions have the following form $(S=S_{i}=S_{f})$ $P_{J}^{2}(EJ,J_{f},J_{i})=\delta_{S_{i}S_{f}}[(2J+1)(2L_{i}+1)(2J_{i}+1)(2J_{f}+1)](L_{i}0J0|L_{f}0)^{2}\begin{Bmatrix}L_{i}&S&J_{i}\\\ J_{f}&J&L_{f}\end{Bmatrix},$ (3) $A_{J}(EJ,K)=K^{J}\mu^{J}\left(\frac{Z_{1}}{m_{1}^{J}}+(-1)^{J}\frac{Z_{2}}{m_{2}^{J}}\right),\ \ \ \ I_{J}(k,J_{f},J_{i})=\langle\chi_{f}|r^{J}|\chi_{i}\rangle,$ (4) and the matrix elements of the magnetic $M1(S)$-transition are written as follows $(S=S_{i}=S_{f},L=L_{i}=L_{f})$ $P_{1}^{2}(M1,J_{f},J_{i})=\delta_{S_{i}S_{f}}\delta_{L_{i}L_{f}}[S(S+1)(2S+1)(2J_{i}+1)(2J_{f}+1)]\begin{Bmatrix}S&L&J_{i}\\\ J_{f}&1&S\end{Bmatrix},$ (5) $A_{1}(M1,K)=\frac{\hbar K}{m_{0}c}\sqrt{3}\left(\muup_{1}\frac{m_{2}}{m_{1}+m_{2}}-\muup_{2}\frac{m_{1}}{m_{1}+m_{2}}\right),\ \ \ \ I_{J}(k,J_{f},J_{i})=\langle\chi_{f}|r^{J-1}|\chi_{i}\rangle,\ J=1.$ (6) In Eqs. (2)–(6) $K=E_{\gamma}/\hbar c$ is the wave number of the emitted photon with energy $E_{\gamma}$, $m_{1}$, $m_{2}$ and $\muup_{1}$, $\muup_{2}$ are the masses and magnetic momenta of the clusters, respectively, and $\mu$ is the reduced mass of the system. 
Namely, in the present calculations for the reaction 6Li$(p,\gamma)^{7}$Be, $m_{1}\equiv m_{p}=1.00727646677$ amu, $m_{2}\equiv m_{{}^{6}\text{Li}}=6.0151232$ amu [29] and $\muup_{1}\equiv\muup_{p}=2.792847\muup_{0},$ $\muup_{2}\equiv\muup_{{}^{6}\text{Li}}=0.822\muup_{0}$ [30, 31], where $\muup_{0}$ is nuclear magneton, $\hbar^{2}/m_{0}=41.4686$ MeV$\cdot$fm2, where $m_{0}=931.494$ MeV is the atomic mass unit (amu). The point-like Coulomb potential is of the form $V_{\text{Coul}}\text{(MeV)}=1.439975Z_{1}Z_{2}/r$, where $r$ is the relative distance between the particles of the channel in fm, and $Z_{1}$ and $Z_{2}$ are the charges in units of the elementary charge. The Coulomb parameter $\eta=\mu Z_{1}Z_{2}e^{2}/k\hbar^{2}$ is represented in the form $\eta=3.44476\cdot 10^{-2}\ Z_{1}Z_{2}\mu/k$, where $k$ is the wavenumber in fm-1 and is determined by the energy $E_{c.m.}$ of the interacting particles, $k^{2}=2\mu E_{c.m.}/\hbar^{2}$. In our calculations, we use the dimensionless AC denoted as $C_{\text{w}}$ [32] $\chi_{L}(r)=\sqrt{2k_{0}}\ C_{\text{w}}\ W_{-\eta L+1/2}(2k_{0}r).$ (7) A dimensional asymptotic constant $C$ is related to the asymptotic normalization coefficient (ANC) $A_{NC}$ by the expression [27] $A_{NC}=\sqrt{S_{F}}C,$ (8) where $S_{F}$ is the spectroscopic factor and $C$ is the dimensional AC that can be represented using the asymptotics of the WF $\chi_{L}(r)\xrightarrow[r\rightarrow R]{}CW_{-\eta L+1/2}\left(2k_{0}r\right).$ (9) In Eq. (9) $R$ is the large distance where the nuclear potential vanishes and $\chi_{L}(r)$ is the wave function of the bound state obtained from the solution of the radial Schr$\ddot{\text{o}}$dinger equation and normalized to unity. The Whittaker function $W_{-\eta L+1/2}$ of the bound state determines the asymptotic behaviour of the WF. The wave number $k_{0}$ is related to the channel binding energy $E_{b}$. For a continuous spectrum, the function $\chi_{i}$ found numerically is matched to asymptotics $u_{L}(R)$ of the form $N_{L}u_{L}(r)\xrightarrow[r\rightarrow R]{}F_{L}(kr)+\tan(\deltaup_{S,L}^{J})G_{L}(kr).$ (10) Here $F_{L}$ and $G_{L}$ are Coulomb regular and irregular functions [33]. They are the solutions of the Schr$\ddot{\text{o}}$dinger equation with the Coulomb potential. $\deltaup_{S,L}^{J}$ are the scattering phase shifts depending on the $JLS$ momenta of the system and $N_{L}$ is the normalizing constant of the numerical radial function $u_{L}(R)$ for the continuum. ## V Cluster states classification and the BS potentials Consider the classification of the BSs of the $p^{6}$Li system according to Young’s diagrams. In this case there is only one Young’s orbital diagram for the 6Li nucleus $\\{42\\}$. It is believed that the system’s potentials are dependent on the diagrams or combinations of these diagrams in various states. Thus, if the orbital diagram $\\{42\\}$ allowed in the 2H4He-cluster channel is accepted for the 6Li nucleus, then the $p^{6}$Li system with spin $S=1/2$ contains a forbidden level with diagram $\\{52\\}$ and orbital momenta of $L=0,2$, and the allowed states with configurations $\\{43\\}$ for $L=1,3$ and $\\{421\\}$ for $L=1,2$. Hence, the $p^{6}$Li potentials must have a forbidden state related to $\\{52\\}$ in the $S$ wave. The allowed bound state corresponds to the $P$ wave with the two Young’s diagrams $\\{43\\}$ and $\\{421\\}$. In the quartet spin channel $S=3/2$ of the system, only one diagram $\\{421\\}$ is allowed for $L=1,2$ [28]. 
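As an explicit illustration of Littlewood's rule for this channel: coupling the single-nucleon diagram $\\{1\\}$ to the 6Li orbital diagram $\\{42\\}$ amounts to adding one cell to $\\{42\\}$ in every way that leaves a valid Young diagram, so that

$\\{1\\}\times\\{42\\}=\\{52\\}+\\{43\\}+\\{421\\},$

which is exactly the set of possible orbital symmetries $\\{f\\}_{L}$ listed in Table 1; the comparison with the conjugate spin-isospin diagrams then decides which of them are allowed or forbidden in each spin channel, as summarized above.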
Since there are two allowed diagrams $\\{43\\}$ and $\\{421\\}$ in the doublet spin state of the $p^{6}$Li system, the scattering states turn out to be mixed in orbital symmetries. At the same time, only one allowed diagram $\\{43\\}$ usually corresponds to the doublet ground state of the 7Be nucleus in the $p^{6}$Li channel with $J^{\pi}=3/2^{-}$ and $L=1$. Here, the $p^{6}$Li system is completely analogous to the $p^{2}$H channel in the 3He nucleus. In the latter case the doublet state is also mixed according to Young’s diagrams $\\{3\\}$ and $\\{21\\}$ [24]. Therefore, the potentials constructed based on the elastic scattering phase shifts of the $p^{6}$Li system or the level spectra of the 7Be nucleus cannot be used to describe the GS of the 7Be nucleus in the $p^{6}$Li channel. Pure in orbital symmetry with Young’s diagram $\\{43\\}$, the ${}^{2}P_{3/2}$ potential of the ground state of 7Be reproduces the binding energy of the GS of the nucleus consistent with the $p^{6}$Li system and its asymptotic constant. The scattering potentials are constructed based on the spectra of the 7Be nucleus from Ref. [30]; it has no major difference from the newer compilation [34]. The orbital state’s classification of the $p^{6}$Li system is shown in Table 1. Table 1: The classification of the orbital states of the $p^{6}$Li system [14, 19]. The following notations are used: $S$ and $L$ are spin and orbital angular momentum of the system, respectively, $\\{f\\}_{S}$, $\\{f\\}_{T}$ and $\\{f\\}_{ST}$ for isospin $T=1/2$, and $\\{f\\}_{L}$ are spin, isospin and spin-isospin, and possible orbital Young’s diagrams, and $\\{f\\}_{\text{AS}}$, $\\{f\\}_{\text{FS}}$ are Young’s diagrams of allowed and forbidden orbital states. $S$ | $\\{f\\}_{S}$ | $\\{f\\}_{T}$ | $\\{f\\}_{ST}$ | $\\{f\\}_{L}$ | $L$ | $\\{f\\}_{\text{AS}}$ | $\\{f\\}_{\text{FS}}$ ---|---|---|---|---|---|---|--- $1/2$ | {43} | {43} | {7}+{61}+{52}+{511}+{43}+ | {52} | $0,2$ | — | {52} +{421}+{331}+{4111}+ | {43} | $1,3$ | {43} | — +{322}+{3211}+{2221} | {421} | $1,2$ | {421} | — $3/2$ | {52} | {43} | {61}+{52}+{511}+ | {52} | $0,2$ | — | {52} +{43}+2{431}+{331}+ | {43} | $1,3$ | — | {43} +{322}+{3211} | {421} | $1,2$ | {421} | — We use a Gaussian potential that depends on the momenta of the system [28] and Young’s diagrams $V(r,JLS,\\{f_{L}\\})=-V_{0}(JLS,\\{f_{L}\\})\exp(-\alpha_{JLS,\\{f_{L}\\}}r^{2}).$ (11) In Eq. (11) $V_{0}$ is the potential depth and $\alpha$ is related to the potential width. The choice of parameters for the bound and scattering states is discussed in detail in Sec. VI. A compilation of the AC data for the ground and first excited state in the $p^{6}$Li channel of 7Be is presented in Table 2, with $C_{\text{w}}^{2}=A_{NC}^{2}/2k_{0}S_{F}$. Here $\sqrt{2k_{0}}=0.983$ fm-1/2 for the GS and $\sqrt{2k_{0}}=0.963$ fm-1/2 for the FES. Table 2: AC data for the ground and the first excited states in $p^{6}$Li channel of 7Be. BS | Reference | $A_{NC}$, fm-1/2 | $S_{F}$ | $C_{\text{w}}$ ---|---|---|---|--- GS | Nollett and Wiringa [35], 2011 | 2.85(3) | 1 | 2.90(3) | Huang et al. [13], 2010 | 2.01 | 0.66 – 1 | 2.28(24) | Timofeyuk [36], 2013 | 1.80 | 0.46 – 0.87 | 2.32(37) | Burtebayev et al. [37], 2013 | 1.77(8) | 0.55 – 0.81 | 2.23(31) | Gnech and Marcucci [18], 2019 | 2.654 | 1.003 | 2.65 | Kiss et al. [11], 2021 | 2.19(9) | 0.98(30) | 2.35(43) FES | Huang et al. [13], 2010 | 1.91 | 0.66 – 1.02 | 2.20(24) | Timofeyuk [36], 2013 | 1.91 | 0.62 – 1.21 | 2.17(36) | Burtebayev et al. 
[37], 2013 | 1.95(9) | 0.85 – 1.03 | 2.10(20) | Gnech and Marcucci [18], 2019 | 2.528 | 1.131 | 2.53 | Kiss et al. [11], 2021 | 2.18(6) | 1.08(32) | 2.26(39) Summarizing the data in Table 2, we conclude that all cited data on the AC are overlapped. While constructing the corresponding GS and FES potentials, we used the average values indicated in bold font in Table 2. Just at the end of our calculation story, a publication by Kiss et al. [11] appeared, and it happened that these latest experimental results turned to be within the defined intervals for ANC given above. We use GS and FES in the form of only doublet ${}^{2}P_{3/2}$ and ${}^{2}P_{1/2}$ states, but we take the experimental data on ANC from [37] since it is assumed that these states result in the observed ANC values. We do not consider these states as a mix of doublet and quartet states, for instance, ${}^{2+4}P_{3/2}$, is a prime example, as the quartet channel is not allowed for the orbital Young’s diagram $\\{43\\}$ in Table 1. All AC values are used here as a framework to obtain the parameters of the $p^{6}$Li interaction BSs potentials. These potentials correspond to the lower, upper, and average values of AC and accurately reproduce the binding energies [30] of the bound states. The parameters of the potentials are presented in Table 3. Table 3: Parameters of $p^{6}$Li system bound state potentials and bound states characteristics. $E_{x}$, $E_{b}$, and $V_{0}$ are provided in MeV. The $R_{\text{ch}}$ and $R_{\text{m}}$ are given in fm. $\\#$ | BS | $E_{x}$ | $J^{\pi}$ | $E_{b}$ | ${}^{2S+1}L_{J}$ | $V_{0}$ | $\alpha$, fm-2 | $C_{\text{w}}$ | $R_{\text{ch}}$ | $R_{\text{m}}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | GS | 0 | 3/2- | –5.60580 | ${}^{2}P_{3/2}$ | $100.750920$ | $0.25$ | $1.75(1)$ | $2.49$ | $2.51$ 2 | GS | 0 | 3/2- | –5.60580 | ${}^{2}P_{3/2}$ | $74.504070$ | $0.17$ | $2.26(1)$ | $2.58$ | $2.58$ 3 | GS | 0 | 3/2- | –5.60580 | ${}^{2}P_{3/2}$ | $60.998575$ | $0.13$ | $2.74(1)$ | $2.64$ | $2.61$ 4 | FES | 0.4291 | 1/2- | –5.17670 | ${}^{2}P_{1/2}$ | $99.473500$ | $0.25$ | $1.68(1)$ | $2.52$ | $2.54$ 5 | FES | 0.4291 | 1/2- | –5.17670 | ${}^{2}P_{1/2}$ | $73.333835$ | $0.17$ | $2.16(1)$ | $2.59$ | $2.59$ 6 | FES | 0.4291 | 1/2- | –5.17670 | ${}^{2}P_{1/2}$ | $59.898120$ | $0.13$ | $2.61(1)$ | $2.65$ | $2.62$ ## VI $\boldmath p^{6}\text{Li}$ channel in the continuous spectrum We usually assume that the scattering potentials can lead to the FSs [28], and if the FSs are absent, the potential depth can be set to zero. The present case refers to the scattering $P$ potentials without the FS, while the $S$ and $D$ potentials have the bound forbidden state and, even at zero phase shifts, must have a nonzero depth. In the presence of one BS, the phase shift starts at $180^{\circ}$ [24], as shown in Fig. 1 for the $S$ scattering phase shifts. Furthermore, we consider $E2$ transitions from resonant $F$ scattering states with phase shifts shown in Fig. 2. The parameters of potentials for all scattering processes for transitions to GS and FES are given in Tables 4 and 5, respectively. Table 4: The spectrum of 7Be levels [30] and scattering states in the $p^{6}$Li channel for the capture to the ${}^{2}P_{3/2}$ GS at a binding energy of 5.6058 MeV, along with $P^{2}_{J}$ from expressions (3) and (5). $E_{x}$, $E_{\text{res}}$, $\Gamma_{c.m.}$ and $V_{0}$ are provided in MeV. 
# | $E_{x}$, | $J^{\pi}$ | $E_{\text{res}}$, | $\Gamma_{c.m.}$, | GS transition: | $P^{2}_{J}$ | $V_{0}$ | $\alpha$, | $E_{\text{res}}$ | $\Gamma_{c.m.}$ ---|---|---|---|---|---|---|---|---|---|--- exp. | exp. | exp. | $\left[{}^{2S+1}L_{J}\right]_{i}\rightarrow\left[{}^{2S+1}L_{J}\right]_{f}$ | fm-2 | theory | theory 1 | No res. | $5/2^{+}$ | — | — | $E1:\ ^{2}D_{5/2}\rightarrow\ ^{2}P_{3/2}$ | $36/5$ | $58.0$ | $0.4$ | — | — 2 | No res. | $3/2^{+}$ | — | — | $E1:\ ^{2}D_{3/2}\rightarrow\ ^{2}P_{3/2}$ | $4/5$ | $58.0$ | $0.4$ | — | — 3 | No res. | $1/2^{+}$ | — | — | $E1:\ ^{2}D_{5/2}\rightarrow\ ^{2}P_{3/2}$ | $4$ | $58.0$ | $0.4$ | — | — 4 | No res. | $1/2^{-}$ | — | — | $M1:\ ^{2}P_{1/2}\rightarrow\ ^{2}P_{3/2}$ | $4/3$ | $0.0$ | $1.0$ | — | — 5 | $7.2(1)$ | $5/2^{-}$ | $1.59(10)$ | $0.40(5)$ | $E2:\ ^{2}F_{5/2}\rightarrow\ ^{2}P_{3/2}$ | $12/7$ | $111.60$ | $0.1$ | $1.60(1)$ | $0.62(1)$ 6 | $9.29(31)$ | $7/2^{-}$ | $3.68(31)$ | $1.93(96)$ | $E2:\ ^{2}F_{7/2}\rightarrow\ ^{2}P_{3/2}$ | $72/7$ | $44.34$ | $0.05$ | $3.68(1)$ | $1.50(1)$ 7 | $9.9$ | $3/2^{-}$ | $4.3$ | $1.8$ | $M1:\ ^{2}P_{3/2}\rightarrow\ ^{2}P_{3/2}$ | $5/3$ | $432.0$ | $1.5$ | $4.30(1)$ | $1.80(2)$ Table 5: The spectrum of 7Be energy levels [30] and scattering states in the $p^{6}$Li channel for the proton capture to the ${}^{2}P_{1/2}$ FES at a binding energy of 5.1767 MeV, along with $P^{2}_{J}$ from expressions (3) and (5). $E_{\text{x}}$, $E_{\text{res}}$, $\Gamma_{c.m.}$ and $V_{0}$ are provided in MeV. # | $E_{\text{x}}$, | $J^{\pi}$ | $E_{\text{res}}$, | $\Gamma_{c.m.}$, | FES transition: | $P^{2}_{J}$ | $V_{0}$ | $\alpha$, | $E_{\text{res}}$ | $\Gamma_{c.m.}$ ---|---|---|---|---|---|---|---|---|---|--- exp. | exp. | exp. | $\left[{}^{2S+1}L_{J}\right]_{i}\rightarrow\left[{}^{2S+1}L_{J}\right]_{f}$ | fm-2 | theory | theory 1 | No res. | $3/2^{+}$ | — | — | $E1:\ ^{2}D_{3/2}\rightarrow\ ^{2}P_{1/2}$ | $4$ | $58.0$ | $0.4$ | — | — 2 | No res. | $1/2^{+}$ | — | — | $E1:\ ^{2}S_{1/2}\rightarrow\ ^{2}P_{1/2}$ | $2$ | $58.0$ | $0.4$ | — | — 3 | No res. | $1/2^{-}$ | — | — | $M1:\ ^{2}P_{1/2}\rightarrow\ ^{2}P_{1/2}$ | $1/6$ | $0.0$ | $1.0$ | — | — 4 | $7.2(1)$ | $5/2^{-}$ | $1.59(10)$ | $0.40(5)$ | $E2:\ ^{2}F_{5/2}\rightarrow\ ^{2}P_{1/2}$ | $6$ | $111.6$ | $0.1$ | $1.60(1)$ | $0.62(1)$ 5 | $9.9$ | $3/2^{-}$ | $4.3$ | $1.8$ | $M1:\ ^{2}P_{3/2}\rightarrow\ ^{2}P_{1/2}$ | $4/3$ | $432.0$ | $1.5$ | $4.30(1)$ | $1.80(2)$ Figure 1: Doublet and quartet $S$ phase shifts of elastic $p^{6}$Li scattering at low energies. The ${}^{2}S$ and ${}^{4}S$ phase shifts are taken from Ref. [38] and shown by $\medblackcircle$ and $\medblacktriangleup$, respectively. Results from [14, 15] obtained according to [38] are shown by the dashed- dotted curves. Results of the present work are shown by the solid curve. In addition, we consider the resonance at an excitation energy of 9.9 MeV [30] according to Fig. 3 (4.3 MeV above the threshold) in a ${}^{2}P_{3/2}$ scattering state of width 1.8 MeV in c.m. Considering such an $M1$ transition to the ${}^{2}P_{3/2}$ ground state or an $M1$ transition from the ${}^{2}P_{1/2}$ scattering state to the ${}^{2}P_{1/2}$ FES are possible due to the presence of different Young’s diagrams in the bound and scattering states. Recall that the BSs have the diagram $\\{43\\}$, and the scattering states are mixed according to the two diagrams $\\{43\\}+\\{421\\}$ [15]. Table 4 shows possible transitions to the 7Be nucleus GS from various $p^{6}$Li scattering states with ${}^{2S+1}L_{J}$. 
The possible transitions to the FES from different scattering states with ${}^{2S+1}L_{J}$ are shown in Table 5. The resonance energies and widths are obtained with the corresponding parameters of the scattering potentials. For the $P_{1/2}$ scattering wave, zero-depth potentials are used since the scattering $P$ waves do not contain forbidden BSs. For the ${}^{2}D$ wave potentials, ${}^{2}S$ wave parameters are used for $L=2$. Figure 2: $P$ and $F$ phase shifts of elastic $p+^{6}$Li scattering obtained for scattering potentials with the parameters from Tables 4 and 5. Figure 3: Schematics of the energy spectrum of 7Be. The energies are given in MeV and the figure is not drawn to scale [30]. (a) The above-threshold resonance at 6.73 MeV with a width of 1.2 MeV refers to the 4He3He channel that is not considered in the present work. Resonant phase shifts of elastic scattering $p+^{6}$Li are shown in Fig. 2. The above-threshold resonance at 6.73 MeV with a width of 1.2 MeV indicated in Fig. 3 refers to the 4He3He channel [30] and was not considered in our previous works [14, 15]. Note again that in the MPCM we used, Young’s orbital diagram {43} is forbidden in the quartet state, as shown in Table 1, and this particular diagram corresponds to the GS of the 7Be nucleus. Therefore, in the GS there is only a doublet ${}^{2}P_{3/2}$ state (without impurity of ${}^{4}P_{3/2}$), which is allowed for the diagram {43}. Thus, our model [14, 15] predicted the absence of a resonance at 6.73 MeV with $J^{\pi}=5/2$ in the nucleon channel, or, in other words, the impossibility of the $M1$ transition from this resonance to the GS. This has been confirmed by the new LUNA results [9] and, indirectly, by the data of [30]. The width of the resonance peak at 9.29 MeV is taken from Table 7.10 of [30], although another state, ${}^{2}P_{1/2}$, is indicated therein. At 9.27 MeV, the assigned momentum is $7/2$, as per Table 7.7 in Ref. [30], so we infer the presence of an $F$ state. However, this resonance leads to a minimal increase in cross-sections at the $E2$ transition. It is negligible against the background of the $M1$ resonance at 4.3 MeV with a transition from the ${}^{2}P_{3/2}$ scattering state for the potential parameters #7 (Table 4) or #5 (Table 5), respectively. Fig. 1 shows the doublet and quartet $S$ phase shifts of elastic $p^{6}$Li scattering at low energies. The ${}^{2}S$ potential from Table 4 with a depth of 58 MeV has one FS and allows one to describe the phase shifts of [14] up to 1 MeV, shown in Fig. 1 by solid circles. Moreover, it gives phase shifts below 2 MeV that coincide with the phase shifts obtained with the potential from Ref. [14], with a depth of 126 MeV and a width of 0.15 fm-2. This early potential has two FSs and does not agree with our new classification from Table 1. To compare the results, we construct a new potential (depth 58 MeV) that best reproduces the phase shifts of the prior work [14]. The phase shift of the new potential is shown in Fig. 1 by the solid curve, while the dash-dotted curves refer to the results from Ref. [14].

## VII Astrophysical S-factor and reaction rate

A special feature of the cross-sections of nuclear reactions with charged particles at low and ultra-low energies is an extreme reduction of the cross-section by several orders of magnitude due to the decrease in transmission probability through the Coulomb barrier.
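Before the $S$-factor is introduced below, the size of this suppression can be illustrated numerically. The following sketch (ours) evaluates the Gamow factor $\exp(-2\pi\eta)$ for the $p^{6}$Li system, with the Coulomb parameter defined in Sec. IV; the drop amounts to many orders of magnitude between MeV and keV energies.

```python
# Sketch (ours): order-of-magnitude estimate of the Coulomb suppression factor
# exp(-2*pi*eta) for p + 6Li, using the Coulomb parameter defined in Sec. IV.
import numpy as np

HBAR2_M0 = 41.4686                      # hbar^2/m0, MeV*fm^2
MU = 1.00727646677 * 6.0151232 / (1.00727646677 + 6.0151232)   # reduced mass, amu
Z1, Z2 = 1, 3

def gamow_factor(E_cm):
    """exp(-2*pi*eta) at center-of-mass energy E_cm in MeV."""
    k = np.sqrt(2.0 * MU * E_cm / HBAR2_M0)         # wave number, fm^-1
    eta = 3.44476e-2 * Z1 * Z2 * MU / k
    return np.exp(-2.0 * np.pi * eta)

for E in (0.001, 0.01, 0.06, 0.25, 1.0):            # MeV
    print(f"E = {E:7.3f} MeV   exp(-2*pi*eta) = {gamow_factor(E):.3e}")
```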
For practical purposes, the astrophysical S-factor is introduced as $S(E)=\dfrac{\sigma(E)}{P(E)}E,\ \ \ P(E)=e^{-2\pi\eta},$ (12) where the factor $P(E)$ reflects the permeability of the Coulomb barrier. The advantage of the astrophysical $S-$factor is that it shows a smooth energy dependence at low energies. Following the excellent manuscript by Christian Iliadis [39], we would like to provide a brief discussion of the use of the $S-$factor in conventional calculation schemes in order to clarify the current approach. The definition of the $S-$factor (12) allows to write the following expression for the reaction rate: $N_{A}\langle\sigma\nu\rangle=\left(\dfrac{8}{\pi\mu}\right)^{1/2}N_{A}\left(\kappa T_{9}\right)^{-3/2}\int\limits_{0}^{\infty}e^{-2\pi\eta}S(E)e^{-E/\kappa T_{9}}dE.$ (13) In Eq.(13) $\kappa$ is the Boltzmann constant and $N_{A}$ is Avogadro’s number. A notable effort has been expended to bring the integral in (13) to analytical form. This is possible only if the expansion of the $S(E)-$factor in the $E$ series, given by $S(E)\approx S(0)+S^{\prime}(0)E+S^{\prime\prime}E^{2}.$ (14) is valid at low energies (c.f. Ref. [39], Section 3.2). Expression (14) explains the active interest to determine the value $S(0)$ as a key one for the calculation of the reaction rate in the form of (13) with its further analytical parameterizations. Table 6 presents the available experimental data of the astrophysical $S-$factor for the 6Li$(p,\gamma)^{7}$Be reaction, as well as the extrapolated $S(0)$ values. Results of previous theoretical calculations and the present work are presented in Table 7. Our calculations are analyzed and interpreted based on the experimental data [9] as one of the newest and most accurate. In the present work we introduce some corrections to [14, 15] that allow us to extend the energy interval for the cross-sections and corresponding $S-$factors. It is also worth mentioning that in Ref. [18], the authors used the calculation scheme based on [15]. The value $S(0)=95.0$ eV$\cdot$b is obtained which is consistent with the LUNA experimental data [9]. However, the astrophysical reaction rate is missing in Ref. [18]. Table 6: Experimental data on the astrophysical $S-$factor of6Li$(p,\gamma)^{7}$Be. $S(E)$ and $S(0)$ are given in eV$\cdot$b and $S(0)$ are extrapolated values. $E$, keV | $S(E)$ | $S(0)$ | Reference | Method/project ---|---|---|---|--- $135$ | $51(15)$ | — | Switkowski et al. [4], 1979 | $\gamma$-ray Ge(Li) spectrometers for proton bombarding energies 200 – 1200 keV $340$ | $43(3)$ | — | Ostojic et al. [5], 1983 | Direct radiative capture $40$ | $65$ | $65$ | Cecil et al. [40], 1992 | Thick-target $\gamma$-ray-to-charged-particle branching ratio measurements $35$ | $40(14)$ | — | Bruss [6], 1993 | — | — | $79(18)$ | Prior et al. [10], 2004 | Polarized proton beams, TUNL $250$ | $95(10)$ | — | He et al. [8], 2013 | 320 keV platform with highly charged ions $60$ | $92(6)$ | $95(9)$ | Piatti et al. [9], 2020 | LUNA collaboration Table 7: Theoretical calculations of the astrophysical $S-$factor of 6Li$(p,\gamma)^{7}$Be. $S(0)$, eV$\cdot$b | Reference | Model ---|---|--- $106$ | Barker [41], 1980 | Direct-capture potential model $105$ | Arai et al. [12], 2002 | Four-cluster microscopic model $95,5$ | Huang et al. [13], 2010 | Single-particle model $114$ | Dubovichenko et al. [14, 15], 2010, 2011 | Modified potential cluster model $73^{+56}_{-11}$ | Xu et al. [16], 2013 | Direct-capture potential model $88,34$ | Dong et al. 
[17], 2017 | Gamow shell model $103,9$ | Gnech and Marcucci [18], 2019 | Potential cluster model $96,5\pm 5,7$ | Kiss et al. [11], 2021 | Modified two-body potential method $92\pm 12$ | Kiss et al. [11], 2021 | Modified two-body potential method $98,3$ | Present work | Modified potential cluster model The results of the present calculations of $S-$factors along with available experimental data are shown in Figs. 4, 5, and 6. Fig. 4 shows the astrophysical $S(E)-$factor of 6Li$(p,\gamma_{0})^{7}$Be capture to the GS of the 7Be nucleus in an energy range up to 5 MeV. The solid red curve 2 and the two dashed curves, blue 1 and green 3, show the calculation for all transitions to the GS given in Table 4. Parameters $V_{0}$ (only integer values) and $\alpha$ of the corresponding potentials as well as $C_{\text{w}}$ from Table 3 are indicated in the figures, and the parameters of the GS potentials are taken from Table 4. The solid red curve 2 is the result for the potential with the set of parameters #2 from Table 3, leading to the average value of AC. Figure 4: Astrophysical $S-$factor of the 6Li$(p,\gamma_{0})^{7}$Be capture to GS. Experimental data are taken from $\medblacktriangleup$ – [4], $\medblackcircle$ – [8], $\medblacksquare$ – [9], $\medblackdiamond$ – [7], $\medblackstar$ – [42], $\medsquare$ – [3], $\medtriangledown$ – [5]. Parameters of the continuum potential are given in Table 4. A band shows the sensitivity to changes in $C_{\text{w}}$. In Fig. 5, similar curves show the results for transitions and potentials from Table 5 to FES. FES potentials have three sets of parameters from Table 3. The solid red curve 2 shows the results for capture with set # 5 from Table 3, allowing us to determine the average value of AC. This result is in good agreement with the experimental data [6] presented in Fig. 5. The two dashed curves 1 and 3 almost completely cover the interval or band of cross-section errors of the capture to the FES. Figure 5: Astrophysical $S-$factor of the 6Li$(p,\gamma_{1})^{7}$Be capture to FES. Experimental data $\medcircle$ are taken from [6]. Parameters of the continuum potential are given in Table 5. A band shows the sensitivity to changes in $C_{\text{w}}$. In Fig. 6, similar curves show the astrophysical $S-$factor for the total cross-sections corresponding to the transition to GS and FES. The two dashed curves 1 and 3 show the range of $S-$factor values due to ambiguities in the AC of the GS and FES. For the scattering potentials, the parameters from Tables 4 and 5 are used. Figure 6: Astrophysical $S-$factor of the 6Li$(p,\gamma_{0+1})^{7}$Be capture to GS and FES. Experimental data for capture are from $\medblacktriangleup$ – [4], $\medblackcircle$ – [8], $\medblacksquare$ – [9], $\medblackdiamond$ – [7], $\medsquare$ – [3], $\medtriangledown$ – [5], $\medblackstar$ – [42], $\medcircle$ – [6]. The parameters of GS and FES potentials are listed in the figure. A band shows the sensitivity to changes in $C_{\text{w}}$. The best agreement of the $S-$factor with experimental data is achieved for the values of $C_{\text{w}}=2.74$ for the GS and $C_{\text{w}}=1.68$ for the FES. We recommend these values as the most reliable benchmarks for future experimental studies. Fig. 6 shows that almost all experimental data lie between the solid red curve 2 and the green dashed curve 3. If we use the GS and FES potentials set of parameters #3 and #4 from Table 3, the result is shown in Fig. 6 by the black curve 4. 
In this case the experimental data [9] are reproduced entirely, and the $S-$factor at 10 keV is found to be 101 eV$\cdot$b. For the scattering potentials, the data from Table 4 and Table 5 are used. Due to the uncertainty of the $S-$factor that arises from the uncertainty of the AC, it is desirable to select other options for the potentials of the GS and FES to correctly describe the LUNA data [9]. This can be the subject of future work if more accurate data are compiled for the AC of the ground and first excited states of the 7Be nucleus. The approximation of the $S-$factor shown by the black curve 4 in Fig. 6 has the analytical form $S(E)=S_{0}+S_{1}E+S_{2}E^{2}$ (15) with parameters $S_{0}=98.31$ eV$\cdot$b, $S_{1}=-187.18$ MeV${}^{-1}\cdot$eV$\cdot$b and $S_{2}=442.51$ MeV${}^{-2}\cdot$eV$\cdot$b. This approximation leads to $\chi^{2}=2.4\cdot 10^{-4}$, assuming a 5% error, in the energy range of 30 to 100 keV. It gives $S(0)=98.3$ eV$\cdot$b and $S(30)=93.1$ eV$\cdot$b. The new experimental data from LUNA [9] can be approximated to first order as $S(E)=S_{0}+S_{1}E$ (16) with parameters $S_{0}=91.952$ eV$\cdot$b and $S_{1}=-75.471$ MeV${}^{-1}\cdot$eV$\cdot$b, leading to $\chi^{2}=0.6$ and $S(0)=92$ eV$\cdot$b. To compare the calculated $S-$factor at zero energy (in practice, at 10 keV), we list the known results for the total $S(0)$: $79(18)$ eV$\cdot$b [10], 105 eV$\cdot$b (at 10 keV) [12] and 106 eV$\cdot$b [41]. For transitions to the ground state, an $S-$factor of 39 eV$\cdot$b is specified in [40], and for the transition to the first excited state the value is 26 eV$\cdot$b, so that the total $S-$factor is 65 eV$\cdot$b. In our previous works [14, 15], a value of 114 eV$\cdot$b was obtained. A summary of the experimental and theoretical $S-$factor values is presented in Tables 6 and 7. To sum up, our astrophysical $S-$factor is given in Fig. 7 with a solid red curve, together with experimental data and theoretical calculations. The $R-$matrix fit of the data from the LUNA collaboration [9] and Switkowski et al. [4] is represented with the solid blue curve. A solid green curve was obtained by Kiss et al. [11] using the weighted means of the ANCs from the analysis of the 6Li(3He,$d$)7Be transfer reaction within the modified two-body potential method (MTBPM). In addition, [11] contains the results for the $S-$factor of the 6Li$(p,\gamma)^{7}$Be reaction calculated within the MTBPM, using the values of ANCs obtained from the analysis of the experimental astrophysical $S-$factors of the 6Li$(p,\gamma)^{7}$Be reaction [9]. These results are given in Fig. 7 with the solid black curve. Figure 7: Comparison of 6Li$(p,\gamma)^{7}$Be reaction astrophysical $S-$factors. Experimental data for capture to GS and FES are from $\medblacktriangleup$ – [4], $\medblackcircle$ – [8], $\medblacksquare$ – [9], $\medblackdiamond$ – [7], $\medsquare$ – [3], $\medtriangledown$ – [5], $\medblackstar$ – [42]. Results of calculations: red curve – present work; blue curve – Ref. [9]; black and green curves – Ref. [11]. To calculate the 6Li$(p,\gamma)^{7}$Be capture reaction rate in units of cm3 mol-1 s-1, we used the expression [43] analogous to Eq. (13), but substituting the corresponding constant values $N_{A}\langle\sigma\nu\rangle=3.7313\cdot 10^{4}\mu^{-1/2}T_{9}^{-3/2}\int\limits_{0}^{\infty}\sigma(E)E\exp(-11.605E/T_{9})dE.$ (17a) In Eq. (17a) $E$ is given in MeV, the total cross-section $\sigma(E)$ is taken in $\muup$b, $\mu$ is the reduced mass in amu, and $T_{9}=10^{9}$ K [43].
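As an illustration of the procedure only (the calculation in this work integrates the full computed cross-sections, including the resonance region), the rate integral can be evaluated numerically with the low-energy parameterization of Eq. (15) used as a stand-in for $\sigma(E)$. The unit bookkeeping and function names below are ours.

```python
# Sketch (ours): numerical evaluation of the rate integral of Eq. (17a), truncated
# to finite limits as discussed below, with the quadratic S(E) fit of Eq. (15)
# standing in for the computed cross-section. This parameterization is only a
# low-energy substitute, so the sketch is indicative at small T9 only.
import numpy as np
from scipy.integrate import quad

HBAR2_M0 = 41.4686
MU = 1.00727646677 * 6.0151232 / (1.00727646677 + 6.0151232)   # amu
Z1, Z2 = 1, 3
S0, S1, S2 = 98.31, -187.18, 442.51      # Eq. (15), eV*b, with E in MeV

def sigma_microbarn(E):
    """sigma(E) in microbarn: sigma = S(E) exp(-2*pi*eta) / E, 1 eV*b = 1 MeV*microbarn."""
    k = np.sqrt(2.0 * MU * E / HBAR2_M0)
    eta = 3.44476e-2 * Z1 * Z2 * MU / k
    S = S0 + S1 * E + S2 * E**2          # eV*b
    return S * np.exp(-2.0 * np.pi * eta) / E

def rate(T9, Emin=1e-3, Emax=1.0):
    """N_A<sigma v> in cm^3 mol^-1 s^-1 from the finite-limit version of Eq. (17a)."""
    integrand = lambda E: sigma_microbarn(E) * E * np.exp(-11.605 * E / T9)
    val, _ = quad(integrand, Emin, Emax, limit=200)
    return 3.7313e4 * MU**-0.5 * T9**-1.5 * val

for T9 in (0.06, 0.2, 1.0):
    print(f"T9 = {T9:4.2f}   N_A<sigma v> ~ {rate(T9):.3g} cm^3 mol^-1 s^-1")
```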
Using real integration limits $E_{min}$ and $E_{max}$, Eq. (17a) becomes $N_{A}\langle\sigma\nu\rangle=3.7313\cdot 10^{4}\mu^{-1/2}T_{9}^{-3/2}\int\limits_{E_{min}}^{E_{max}}\sigma(E)E\exp(-11.605E/T_{9})dE.$ (17b) It is important to stress this fact, as the choice of $E_{max}$ in Eq. (17b) may have a significant impact on the final result for the reaction rate. The reaction rate (17b) is calculated based on the cross-sections, displayed in the form of $S-$factors (12) in Figs. 4, 5 and 6, within the energy range $E_{min}=1$ keV to $E_{max}=5$ MeV. The results of these calculations are plotted in Fig. 8. Figure 8: Total 6Li$(p,\gamma)^{7}$Be capture reaction rate. Curves indicate the different sums of the capture rates to the GS and FES. Curves are designated as in Fig. 6. The yellow band is taken from Ref. [16]. As noted above, curve 4 in Fig. 6 is in best agreement with all experimental data of the $S-$factor. Therefore, the corresponding reaction rate, also marked as curve 4 in Fig. 8, is our recommended description of the reaction rate. Curve 4 can be approximated by a function of the form [44] $\begin{gathered}N_{A}\langle\sigma\nu\rangle=a_{1}/T_{9}^{1/3}\exp\left(-a_{2}/T_{9}^{2/3}\right)\left(1+a_{3}T_{9}^{1/3}+a_{4}T_{9}^{2/3}+a_{5}T_{9}+a_{6}T_{9}^{4/3}+a_{7}T_{9}^{5/3}\right)+\\ +a_{8}T_{9}^{2/3}\exp\left(-a_{9}/T_{9}^{1/3}\right).\end{gathered}$ (18) The parameters of approximation (18), obtained with an average value of $\chi^{2}=0.014$ for an assumed 5% error, are given in Table 8. Table 8: The reaction rate approximation parameters. $i$ | $a_{i}$ | $i$ | $a_{i}$ | $i$ | $a_{i}$ ---|---|---|---|---|--- $1$ | $0.00319$ | $4$ | $-544516.9$ | $7$ | $85197.34$ $2$ | $4.16292$ | $5$ | $16401.8$ | $8$ | $924167.3$ $3$ | $3721.884$ | $6$ | $-8044.932$ | $9$ | $8.36494$ Comparing our results with the reaction rates presented in NACRE [43] and NACRE II [16], we follow the format of Fig. 4 from [9] and add our results for the reaction rate, normalized to the reaction rate from NACRE [43]. The comparison is shown in Figs. 9 and 10. Figure 9: Comparison of the astrophysical reaction rates in the range 0.001 to 1 $T_{9}$ from [43, 16, 9] to the present work, normalized to the NACRE rate [43]. Dotted curves represent the uncertainty of the NACRE [43] rate, while shaded areas represent the uncertainties from LUNA [9] (pale red) and NACRE II [16] (yellow). Figure 10: Comparison of the astrophysical reaction rates in the range 0.001 to 10 $T_{9}$ from [43, 16, 9] to the present work, normalized to the NACRE rate [43]. Dotted curves represent the uncertainty of the NACRE [43] rate, while shaded areas represent the uncertainties from LUNA [9] (pale red) and NACRE II [16] (yellow). As stated in Ref. [9], the LUNA “thermonuclear reaction rate is 9% lower than NACRE [43] and 33% higher than reported in NACRE II [16] at 2 MK, and the reaction rate uncertainty has been significantly reduced”. Fig. 9 shows that the deviation between the adopted reaction rate obtained in [9] and the present calculations in the range of 0.01 to 1 $T_{9}$ does not exceed 5%. Therefore, the present calculations confirm the above conclusion by Piatti et al. [9]. Fig. 10 shows two of our results: the blue curve 1 is the reaction rate calculated on the basis of the total cross sections in the energy range from 1 keV to 5 MeV, and the green curve 2 shows the rate obtained with the upper integration limit $E_{max}=0.6$ MeV, which corresponds to the upper limit of the energy range in Ref. [9].
The difference between these curves illustrates the importance of the contribution of the resonance region to the cross sections in the range of 0.6 MeV to 5 MeV at temperatures above 1 $T_{9}$.

## VIII Conclusion

We present the results of calculations and analyses of the $S-$factor and astrophysical reaction rate for the 6Li$(p,\gamma)^{7}$Be reaction in the framework of the MPCM. It is demonstrated that the MPCM approach has only one ambiguity, arising from the accuracy of the experimentally determined asymptotic constants. This effect manifests itself as the bands in Figs. 4 – 6 for the astrophysical $S-$factor. The precise LUNA experimental data played the role of a criterion for reducing the ANC ambiguity in the theoretical simulations. Compared with the $R-$matrix method, which is constrained by the parameterization of the experimental cross-section data, the MPCM makes it possible to carry out calculations over wider energy ranges. We extended the energy interval for the total cross-sections and $S-$factors up to 5 MeV, including resonances in the continuum. The numerical signature of this extension is seen in Fig. 10 for the reaction rate. It was also shown in the present work that the MPCM had predicted the absence of a resonance at 6.73 MeV in the nucleon channel [14, 15], which was confirmed by the LUNA results [9] as well as, indirectly, by the data from [30]. We suggest that the NACRE [43] and NACRE II [16] databases should be updated in light of the LUNA data [9] and the present calculations.

## Dedication

We dedicate this paper to the memory of our colleague, Dr. Albert Dzhazairov-Kakhramanov, who recently passed away from COVID-19.

## Acknowledgments

This work was supported by the grant of the Ministry of Education and Science of the Republic of Kazakhstan #AP08855556 “Study of additional thermonuclear reactions flowing in the process of controlled thermonuclear fusion on lithium isotopes” through the V.G. Fesenkov Astrophysical Institute of the “National Center of Space Research and Technology” of the Aerospace committee of the Ministry of Digital Development, Innovations and Aerospace Industry of the Republic of Kazakhstan.

## Appendix A

A solution of a two-body problem for a discrete energy spectrum with a given potential requires finding the binding energy of the system and the wave function of the state. This problem can be solved using the Variational Method and the Finite Difference Method (FDM) [45]. If both methods are used for the same system of particles, it is possible to control the correctness of the search for the binding energy and the WF of the state. We have already used such an approach for the $p^{2}$H and $p^{3}$H systems in [46, 28] and demonstrated that the FDM provides a more precise description of the systems. Below we present the FDM approach. The calculation of the binding energy of a two-cluster system by the FDM relies on the representation of the Schr$\ddot{\text{o}}$dinger equation in finite differences [47]. The radial equation for the central potential [45] $u^{\prime\prime}_{L}(r)+\left[k^{2}-V(r)\right]u_{L}(r)=0$ (19) with some boundary condition for $k^{2}<0$ ($k^{2}=2\mu E/\hbar^{2}$) takes the form of a Sturm-Liouville type boundary value problem. Recasting the second derivative in finite difference form, we obtain $\begin{gathered}u^{\prime\prime}=\left[u_{n+1}-2u_{n}+u_{n-1}\right]/h^{2},\ \ \ u_{n}=u(r_{n})\end{gathered}$ (20) and (19) becomes a closed system of linear algebraic equations.
Thus, for a certain $k_{0}$, $D_{N}(k)=0$ $D_{N}(k)=\begin{pmatrix}\thetaup_{1}&1&0&.&.&.&0\\\ \alphaup_{2}&\thetaup_{2}&1&0&.&.&0\\\ 0&\alphaup_{3}&\thetaup_{3}&1&0&.&0\\\ .&.&.&.&.&.&.\\\ .&.&.&.&.&.&.\\\ 0&.&0&0&\alphaup_{N-1}&\thetaup_{N-1}&1\\\ 0&.&0&0&0&\alphaup_{N}&\thetaup_{N}\end{pmatrix}=0.$ (21) Eq. (21) allows one to determine the binding energy $E_{b}$ of a system of two particles. The elements of the tridiagonal determinant (21) are defined as follows: $\displaystyle\alphaup_{n}=1,$ $\displaystyle\thetaup_{n}=k^{2}h^{2}-2-V_{n}h^{2},\ \ \ n=1,2,\dots,N-1,$ (22) $\displaystyle\alphaup_{N}=2,$ $\displaystyle\thetaup_{N}=k^{2}h^{2}-2-V_{N}h^{2}+2hf(\eta,L,Z_{N}),$ $\displaystyle Z_{n}=2kr_{n},$ $\displaystyle f(k,\eta,L,Z_{n})=-k-\dfrac{2k\eta}{Z_{n}}-\dfrac{2k(L-\eta)}{Z_{n}^{2}}.$ Here $\eta$ is the Coulomb parameter, $k=|\sqrt{k^{2}}|$ is the wave number expressed in fm-1 and determined by the energy of interacting particles in the input channel, and $V_{n}=V(r_{n})$ is the interaction potential of clusters at the point $r_{n}=nh$ from the interval of zero to $R$. The number of equations $N$ or the dimension of the determinant, which usually turns out to be in the range $100\ 000-1\ 000\ 000$ [45], $h=\Delta r/N$ is the step of the finite difference grid and $\Delta r$ is the solution interval of the system (usually from zero to $r_{N}=R$). By writing $f(k,\eta,L,Z_{n})$ in the form given in Eq. (22) it is possible to take the Coulomb interaction into account [33]. The form of the logarithmic derivative of the WF in the external region can be obtained from the integral representation of the Whittaker function [33] $f(k,\eta,L,Z)=-k-\dfrac{2k\eta}{Z}-\dfrac{2k(L-\eta)}{Z^{2}}S(\eta,L,Z),$ (23) where $S(\eta,L,Z)=\dfrac{\int\limits_{0}^{\infty}t^{L+\eta+1}(1+t/Z)^{L-\eta-1}e^{-t}dt}{\int\limits_{0}^{\infty}t^{L+\eta}(1+t/Z)^{L-\eta}e^{-t}dt}.$ (24) Calculations show that the value $S(\eta,L,Z)$ does not exceed 1.05, and its effect on the binding energy of a two-particle system is negligible [45]. When $f(k,0,0,Z)=-k$ in Eq. (23), the binding energy search process is noticeably accelerated. The calculation of the band determinant $D_{N}(k)$ for a given $k$ is carried out using recurrent formulas of the form [47] $\displaystyle D_{-1}=0,$ $\displaystyle D_{n}=\thetaup_{n}D_{n-1}-\alphaup_{n}D_{n-2},$ (25) $\displaystyle D_{0}=1,$ $\displaystyle n=1,\dots,N.$ Any energy $E$ or wave number $k$ that leads to zero determinant $D_{N}(k_{0})=0$ (26) is an eigenenergy of the system $E_{b}$ or $k_{0}$, and the wave function at this energy, determined by recurrent process below, is an eigenfunction of the problem. Methods for determining the zero of some functional of one variable $k$ are well known [48]. The number $N_{D}$ of determinant values is determined automatically from the accuracy condition of the binding energy value. The latter one is usually set to the level $\epsilon\approx 10^{-5}$–$10^{-9}$MeV, and $r_{N}=R$ is fixed on the range 20–30 fm [45]. After determining the eigenenergy $E_{b}$, the WF of this state is sought. To find the shape of the eigenfunctions of bound states, the recurrent procedure $\displaystyle u_{0}=0,$ $\displaystyle u_{n}=\thetaup_{n-1}u_{n-1}+u_{n-2},$ (27) $\displaystyle u_{1}=\text{const},$ $\displaystyle n=2,\dots,N.$ is carried out, where $u_{1}$ is an arbitrary number, usually fixed on the range 0.01–0.1 [48]. For bound states, the determined WF is normalized to unity. 
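The recurrences above translate directly into a short root-finding routine. The sketch below (ours, not the authors' production code) implements Eqs. (21)–(26) for a Gaussian plus point-Coulomb potential with a centrifugal term, using the simplified boundary term $f=-k$ discussed above; the grid size, the energy bracket and all names are our own choices, and the result can be compared with the binding energies listed in Table 3.

```python
# Sketch (ours): finite-difference determinant method of Eqs. (21)-(26) for a
# two-cluster bound state in a Gaussian + point-Coulomb potential.
import numpy as np

HBAR2_M0 = 41.4686                       # hbar^2/m0, MeV*fm^2
MU = 1.00727646677 * 6.0151232 / (1.00727646677 + 6.0151232)   # amu
Z1Z2 = 3
V0, ALPHA, L = 100.750920, 0.25, 1       # GS potential #1 of Table 3

def V_fm2(r):
    """Nuclear + point Coulomb + centrifugal potential in fm^-2 (units of Eq. (19))."""
    v_mev = -V0 * np.exp(-ALPHA * r**2) + 1.439975 * Z1Z2 / r
    return 2.0 * MU * v_mev / HBAR2_M0 + L * (L + 1) / r**2

def D_N(E, R=25.0, N=20000):
    """Tridiagonal determinant of Eq. (21) at energy E < 0 (MeV), via Eq. (25)."""
    h = R / N
    k2 = 2.0 * MU * E / HBAR2_M0          # negative for a bound state
    kappa = np.sqrt(-k2)
    d_prev2, d_prev1 = 0.0, 1.0           # D_{-1}, D_0
    for n in range(1, N + 1):
        r = n * h
        theta = k2 * h**2 - 2.0 - V_fm2(r) * h**2
        alpha_n = 1.0
        if n == N:                        # last row: log-derivative boundary with f = -kappa
            theta += 2.0 * h * (-kappa)
            alpha_n = 2.0
        d_prev2, d_prev1 = d_prev1, theta * d_prev1 - alpha_n * d_prev2
    return d_prev1

# Coarse scan for sign changes, then bisection on Eq. (26); compare with E_b in Table 3.
grid = np.linspace(-12.0, -0.5, 47)
vals = [D_N(E) for E in grid]
for Elo, Ehi, vlo, vhi in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if vlo * vhi < 0.0:
        for _ in range(40):
            Emid = 0.5 * (Elo + Ehi)
            vmid = D_N(Emid)
            if vmid * vlo < 0.0:
                Ehi = Emid
            else:
                Elo, vlo = Emid, vmid
        print(f"bound state found at E_b = {0.5 * (Elo + Ehi):.4f} MeV")
```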
Comparing it to Whittaker asymptotics, one can find an asymptotic constant denoted by $C_{\text{w}}$ (see Sec. IV). The WF search area $R$ is usually of 20 to 30 fm, and the number of steps $N_{WF}$ for the desired WF is fixed between 10 000 and 50 000. Only in the case of a very low binding energy (0.1–0.2 MeV) the WF search area increased to 100–200 fm or more. The recurrence relation (27) is also used to search for WFs in the case of a continuous spectrum of eigenvalues at predetermined positive energy $(k^{2}>0)$ of interacting particles[45]. However, the WF must now be matched with asymptotics of the form $N_{L}u_{L}(r)\xrightarrow[r\rightarrow R]{}F_{L}(kr)+\tan\left(\deltaup_{S,L}^{J}\right)G_{L}(kr).$ (28) Matching the numerical solution $u_{L}(R)$ of Eq. (19) for two points at large distances ($R$ on the order of 10–20 fm) with asymptotics (28), it is possible to calculate the scattering phase shifts for each value of the momenta $JLS$ for a given energy of interacting particles, as well as the normalization of the WF for scattering processes [45]. To calculate the WF, one can also use the Numerov method [49]. When the number of steps exceeds 10 000, both methods yield the same results within the typical required accuracy. Such results can be compared by calculating the values of AC or charge radii for the BS or the matrix elements for the scattering processes [45]. ## Appendix B Table 9: The astrophysical 6Li$(p,\gamma)^{7}$Be reaction rate in the range of 0.001 to 10 $T_{9}$ $T_{9}$ | Rate | $T_{9}$ | Rate | $T_{9}$ | Rate | $T_{9}$ | Rate ---|---|---|---|---|---|---|--- 0.001 | $3.20\times 10^{-29}$ | 0.035 | $6.90\times 10^{-5}$ | 0.19 | $1.35\times 10^{0}$ | 2.25 | $7.80\times 10^{2}$ 0.002 | $7.07\times 10^{-22}$ | 0.040 | $1.92\times 10^{-4}$ | 0.20 | $1.67\times 10^{0}$ | 2.50 | $9.07\times 10^{2}$ 0.003 | $2.53\times 10^{-18}$ | 0.045 | $4.57\times 10^{-4}$ | 0.25 | $3.98\times 10^{0}$ | 2.75 | $1.03\times 10^{3}$ 0.004 | $4.35\times 10^{-16}$ | 0.050 | $9.60\times 10^{-4}$ | 0.30 | $7.64\times 10^{0}$ | 3.00 | $1.16\times 10^{3}$ 0.005 | $1.68\times 10^{-14}$ | 0.055 | $1.83\times 10^{-3}$ | 0.35 | $1.28\times 10^{1}$ | 3.25 | $1.29\times 10^{3}$ 0.006 | $2.71\times 10^{-13}$ | 0.060 | $3.25\times 10^{-3}$ | 0.40 | $1.94\times 10^{1}$ | 3.50 | $1.42\times 10^{3}$ 0.007 | $2.49\times 10^{-12}$ | 0.065 | $5.41\times 10^{-3}$ | 0.45 | $2.75\times 10^{1}$ | 3.75 | $1.56\times 10^{3}$ 0.008 | $1.54\times 10^{-11}$ | 0.070 | $8.55\times 10^{-3}$ | 0.50 | $3.71\times 10^{1}$ | 4.0 | $1.69\times 10^{3}$ 0.009 | $7.20\times 10^{-11}$ | 0.075 | $1.30\times 10^{-2}$ | 0.55 | $4.81\times 10^{1}$ | 4.5 | $1.95\times 10^{3}$ 0.010 | $2.70\times 10^{-10}$ | 0.080 | $1.89\times 10^{-2}$ | 0.60 | $6.03\times 10^{1}$ | 5.0 | $2.22\times 10^{3}$ 0.011 | $8.58\times 10^{-10}$ | 0.085 | $2.68\times 10^{-2}$ | 0.65 | $7.37\times 10^{1}$ | 5.5 | $2.50\times 10^{3}$ 0.012 | $2.38\times 10^{-9}$ | 0.090 | $3.69\times 10^{-2}$ | 0.70 | $8.83\times 10^{1}$ | 6.0 | $2.77\times 10^{3}$ 0.013 | $5.92\times 10^{-9}$ | 0.095 | $4.97\times 10^{-2}$ | 0.75 | $1.04\times 10^{2}$ | 6.5 | $3.05\times 10^{3}$ 0.014 | $1.35\times 10^{-8}$ | 0.10 | $6.55\times 10^{-2}$ | 0.80 | $1.20\times 10^{2}$ | 7.0 | $3.32\times 10^{3}$ 0.015 | $2.83\times 10^{-8}$ | 0.11 | $1.08\times 10^{-1}$ | 0.85 | $1.38\times 10^{2}$ | 7.5 | $3.59\times 10^{3}$ 0.016 | $5.60\times 10^{-8}$ | 0.12 | $1.67\times 10^{-1}$ | 0.90 | $1.56\times 10^{2}$ | 8.0 | $3.86\times 10^{3}$ 0.017 | $1.05\times 10^{-7}$ | 0.13 | $2.48\times 10^{-1}$ 
| 0.95 | $1.74\times 10^{2}$ | 8.5 | $4.13\times 10^{3}$ 0.018 | $1.86\times 10^{-7}$ | 0.14 | $3.52\times 10^{-1}$ | 1.00 | $1.94\times 10^{2}$ | 9.0 | $4.38\times 10^{3}$ 0.019 | $3.18\times 10^{-7}$ | 0.15 | $4.84\times 10^{-1}$ | 1.25 | $2.99\times 10^{2}$ | 9.5 | $4.63\times 10^{3}$ 0.020 | $5.23\times 10^{-7}$ | 0.16 | $6.48\times 10^{-1}$ | 1.50 | $4.13\times 10^{2}$ | 10 | $4.88\times 10^{3}$ 0.025 | $4.13\times 10^{-6}$ | 0.17 | $8.45\times 10^{-1}$ | 1.75 | $5.32\times 10^{2}$ | | 0.030 | $1.98\times 10^{-5}$ | 0.18 | $1.08\times 10^{0}$ | 2.00 | $6.55\times 10^{2}$ | | ## References * [1] C. A. Barnes, D. D. Clayton, and D. N. Schramm, eds., _Essays in nuclear astrophysics: presented to William A. Fowler, on the occasion of his seventieth birthday_ (Cambridge University Press, New York, 1982) p. 562. * [2] R. N. Boyd, C. R. Brune, G. M. Fuller, and C. J. Smith, Phys. Rev. D 82, 105005 (2010), arXiv:1008.0848. * [3] S. Bashkin and R. R. Carlson, Phys. Rev. 97, 1245 (1955). * [4] Z. E. Switkowski, J. C. Heggie, D. L. Kennedy, D. G. Sargood, F. C. Barker, and R. H. Spear, Nucl. Phys. A 331, 50 (1979). * [5] R. Ostojic, K. Subotic, and B. Stepancic, Il Nuovo Cimento A 76, 73 (1983). * [6] R. Bruss, in _Nuclei in the Cosmos: Proceedings of the Second International Symposium on Nuclear Astrophysics_ , edited by F. Kappeler and K. Wisshak (IOP Publishing Ltd, Karlsruhe, Germany, 1993) p. 648. * [7] T. Paradellis (1999), unpublished, quoted by [12] as Ref. [25]. * [8] J. J. He, S. Z. Chen, C. E. Rolfs, S.W. Xu, J. Hu, X.W. Ma, M.Wiescher, R. J. DeBoer, T. Kajino, M. Kusakabe, L. Y. Zhang, S. Q. Hou, X. Q. Yu, N. T. Zhang, G. Lian, Y. H. Zhang, X. H. Zhou, H. S. Xu, G. Q. Xiao, and W. L. Zhan, Phys. Let. B 725, 287 (2013). * [9] D. Piatti, T. Chillery, R. Depalo, M. Aliotta, D. Bemmerer, A. Best, A. Boeltzig, C. Broggini, C. G. Bruno, A. Caciolli, F. Cavanna, G. F. Ciani, P. Corvisiero, L. Csedreki, T. Davinson, A. Di Leva, Z. Elekes, F. Ferraro, E. M. Fiore, A. Formicola, Z. Fulop, G. Gervino, A. Gnech, A. Guglielmetti, C. Gustavino, G. Gyurky, G. Imbriani, M. Junker, I. Kochanek, M. Lugaro, L. E. Marcucci, P. Marigo, E. Masha, R. Menegazzo, V. Mossa, F. R. Pantaleo, V. Paticchio, R. Perrino, P. Prati, L. Schiavulli, K. Stockel, O. Straniero, T. Szucs, M. P. Takacs, and S. Zavatarelli, Phys. Rev. C 102, 052802 (2020). * [10] R. M. Prior, M. C. Spraker, A. M. Amthor, K. J. Keeter, S. O. Nelson, A. Sabourov, K. Sabourov, A. Tonchev, M. Ahmed, J. H. Kelley, D. R. Tilley, H. R. Weller, and H. M. Hofmann, Phys. Rev. C 70, 10.1103 (2004). * [11] G. G. Kiss, M. La Cognata, R. Yarmukhamedov, K. I. Tursunmakhatov, I. Wiedenhover, L. T. Baby, S. Cherubini, A. Cvetinovic, G. D’Agata, P. Figuera, G. L. Guardo, M. Gulino, S. Hayakawa, I. Indelicato, L. Lamia, M. Lattuada, F. Mudo, S. Palmerini, R. G. Pizzone, G. G. Rapisarda, S. Romano, M. L. Sergi, R. Spart‘a, C. Spitaleri, O. Trippella, A. Tumino, M. Anastasiou, S. A. Kuvin, N. Rijal, B. Schmidt, S. B. Igamov, S. B. Sakuta, Z. Fulop, G. Gyurky, T. Szucs, Z. Halasz, E. Somorjai, Z. Hons, J. Mrazek, R. E. Tribble, and A. M. Mukhamedzhanov, Phys. Rev. C 104, 015807 (2021). * [12] K. Arai, D. Baye, and P. Descouvemont, Nucl. Phys. A 699, 963 (2002). * [13] J. T. Huang, C. A. Bertulani, and V. Guimaraes, Atomic Data and Nuclear Data Tables 96, 824 (2010), arXiv:0810.3867. * [14] S. B. Dubovichenko, N. Burtebaev, D. M. Zazulin, and A. S. Amar, Russian Physics Journal 53, 743 (2010). * [15] S. B. Dubovichenko, N. Burtebaev, D. M. Zazulin, Z. K. 
Kerimkulov, and A. S. Amar, Physics of Atomic Nuclei 74, 984 (2011). * [16] Y. Xu, K. Takahashi, S. Goriely, M. Arnould, M. Ohta, and H. Utsunomiya, Nucl. Phys. A 918, 61 (2013), arXiv:1310.7099. * [17] G. X. Dong, N. Michel, K. Fossez, M. Płoszajczak, Y. Jaganathen, and R. M. Betan, Journal of Physics G 44, 1 (2017), arXiv:arXiv:1601.06660v1. * [18] A. Gnech and L. E. Marcucci, Nucl. Phys. A 987, 1 (2019). * [19] S. B. Dubovichenko and A. V. Dzhazairov-Kakhramanov, Nucl. Phys. A 941, 335 (2015). * [20] P. Descouvemont, Frontiers in Astronomy and Space Sciences 7, 9 (2020). * [21] T. A. Tombrello and P. D. Parker, Phys. Rev. 131, 2582 (1963). * [22] K. Wildermuth and Y. C. Tang, _A Unified Theory of the Nucleus_ (Vieweg+Teubner Verlag, 1977). * [23] V. I. Kukulin, V. G. Neudatchin, I. T. Obukhovski, and Y. F. Smirnov, in _Clusters as Subsystems in Light Nuclei_ (Vieweg+Teubner Verlag, Wiesbaden, 1983) pp. 1–155. * [24] V. G. Neudatchin, V. I. Kukulin, V. N. Pomerantsev, and A. A. Sakharuk, Phys. Rev. C 45, 1512 (1992). * [25] C. Itzykson and M. Nauenberg, Reviews of Modern Physics 38, 95 (1966). * [26] A. Bohr and B. R. Mottelson, in _Nuclear Structure_ (World Scientific Publishing Company, 1998) pp. 137–307. * [27] A. M. Mukhamedzhanov and R. E. Tribble, Phys. Rev. C 59, 3418 (1999). * [28] S. B. Dubovichenko, Thermonuclear processes in Stars and Universe, 2nd ed. (Scholar’s Press, Saarbrucken, 2015) p. 332. * [29] Physical Measurement Laboratory; http://physics.nist.gov/cgi-bin/cuu/Value?mud%7Csearch_for =atomnuc! * [30] D. R. Tilley, C. M. Cheves, J. L. Godwin, G. M. Hale, H. M. Hofmann, J. H. Kelley, C. G. Sheu, and H. R. Weller, Energy levels of light nuclei A = 5, 6, 7 (2002). * [31] V. Varlamov, B. Ishkhanov, and S. Komarov, Atomic Nuclei Map (2015); http://cdfe.sinp.msu.ru/services/ground/NuclChart_release.html * [32] G. R. Plattner and R. D. Viollier, Nucl. Phys. A 365, 8 (1981). * [33] M. Abramowitz and I. A. Stegun, eds., Handbook of mathematical function, 10th ed. (U.S. Government Printing Office, Washington, 1972) p. 1046. * [34] Sukhoruchkin S.I., Supplement to I/25 A-F (Springer Berlin Heidelberg, 2016). * [35] K. M. Nollett and R. B. Wiringa, Phys. Rev. C 83, 10.1103 (2011). * [36] N. K. Timofeyuk, Phys. Rev. C 88, 044315 (2013). * [37] N. Burtebayev, J. Burtebayeva, N. Glushchenko, Z. Kerimkulov, A. Amar, M. Nassurlla, S. Sakuta, S. Artemov, S. Igamov, A. Karakhodzhaev, K. Rusek, and S. Kliczewski, Nucl. Phys. A 909, 20 (2013). * [38] M. Skill, R. Baumann, G. Keil, N. Kniest, E. Pfaff, M. Preiss, G. Reiter, G. Clausnitzer, M. Haller, and W. Kretschmer, Nucl. Phys. A 581, 93 (1995). * [39] C. Iliadis, Nuclear physics of stars, 2nd ed. (Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, 2015) p. 672. * [40] F. E. Cecil, D. Ferg, H. Liu, J. C. Scorby, J. A. McNeil, and P. D. Kunz, Nucl. Phys. A 539, 75 (1992). * [41] F. Barker, Australian Journal of Physics 33, 159 (1980). * [42] A. Amar and N. Burtebayev, Journal of Nuclear Sciences 1, 15 (2014). * [43] C. Angulo, M. Arnould, M. Rayet, P. Descouvemont, D. Baye, C. Leclercq-Willain, A. Coc, S. Barhoumi, P. Aguer, C. Rolfs, R. Kunz, J.W. Hammer, A. Mayer, T. Paradellis, S. Kossionides, C. Chronidou, K. Spyrou, S. Degl’Innocenti, G. Fiorentini, B. Ricci, S. Zavatarelli, C. Providencia, H. Wolters, J. Soares, C. Grama, J. Rahighi, A. Shotter, and M. Lamehi Rachti, Nucl. Phys. A 656, 3 (1999). * [44] G. R. Caughlan and W. A. Fowler, Thermonuclear reaction rates V (1988). * [45] S. B. 
Dubovichenko, Methods for calculating nuclear characteristics. Nuclear and thermonuclear processes., 2nd ed. (Lambert Academic Publishing, Saarbrucken, 2012) p. 425. * [46] S. B. Dubovichenko, A. V. Dzhazairov-Kakhramanov, and N. V. Afanasyeva, Nucl. Phys. A 963, 52 (2017). * [47] G. Marchuk and V. Kolesov, Application of Numerical Methods to Neutron Cross-Section Calculations (Atomizdat, Moscow, 1970) p. 304\. * [48] T. Korn and G. A. Korn, Mathematical Handbook for Scientists and Engineers (McGraw-Hill, New-York, 1968) p. 832. * [49] E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I (Springer-Verlag Berlin Heidelberg, 1993) p. 528.
# Collective dynamics of evanescently coupled excitable lasers with saturable absorber M. Lamperti1 and A. M. Perego2 1,∗ Politecnico di Milano, Department of Physics and IFN-CNR, Via G. Previati 1/C, 23900, Lecco, Italy 2 Aston Institute of Photonic Technologies, Aston University, Birmingham B4 7ET, Aston Street, UK <EMAIL_ADDRESS> ###### Abstract We present a numerical study of the collective dynamics in a population of coupled excitable lasers with saturable absorber. At variance with previous studies where real-valued (lossy) coupling was considered, we focus here on the purely imaginary coupling (evanescent wave coupling). We show that evanescently coupled excitable lasers synchronize in a more efficient way compared to the lossy coupled ones. Furthermore we show that out-of-diagonal disorder-induced localization of excitability takes place for imaginary coupling too, but it can be frustrated by nonvanishing linewidth enhancement factor. ## 1 Introduction The last decade has been characterized by the rise and spread of data storage and analytics over many industrial and commercial fields, due to the decreasing cost of data collection, storage and transmission. While it is clear that large datasets are needed to obtain meaningful statistical information about complex systems, the instruments used for such information distillation have not evolved with the same speed as the data infrastructure, due on one hand to our still rough understanding of large, complex systems, and on the other hand to the exponentially increasing computing resources needed to apply more and more complex models for data interpretation. A promising framework for advanced data processing tasks is constituted by neural networks [1], which currently see widespread usage to perform actions that are traditionally a premise of human beings, such as pattern recognition, natural language processing (information extraction from texts and translation), and identification of objects in images. Although specialized hardware and software has been developed to increment the efficiency of implementing such framework with the currently most advanced computing paradigma, based on a Von Neumann architecture executed on CMOS-based integrated circuits, the level of complexity and efficiency of the human brain is far from reach by many orders of magnitude. The maturity and apparent limits of the current technology have pushed scientists from many disciplines to propose new computing paradigms that could enable a leap towards the higher processing power required for the aforementioned purposes; the most promising results take inspiration from what is considered to be the most advanced naturally evolved computer: the human brain. The resulting field of neuromorphic computing aims at taking advantage from systems that exhibit naturally the characteristics that make biological neural networks so powerful and efficient, mapping the process paradigm to its underlying dynamics rather than abstracting away from it, as it is currently done with software implementation of neural networks on serialized, digital hardware [2]. The two key ingredients for implementing neuromorphic computing are neuron- like behavior and a large network of interconnections [1]. 
The first ingredient is provided by excitable systems: excitability is a ubiquitous process in nature, best known in biology [3, 4] in the context of neuronal cell activity, and can be defined as the generation of spike-like behaviour in one or more of the system's dynamical variables in response to external perturbations whose magnitude exceeds a given threshold. In such a regime the system response does not depend on the perturbation strength, and each spike generation is followed by a refractory time during which the system remains silent; after such refractory time the emission of another spike can take place again. Excitability needs to be provided by a suitable neuron-like system, which in turn will constitute the building block to perform simple computations in a highly parallel fashion, making the system fast and efficient. What is needed to perform complex computations is a large network of interconnects that weight the outputs of previous neurons and route them to other computation elements, i.e. other neuron-like blocks. A network of interconnects is thus the key ingredient to transform an ensemble of excitable systems into a highly efficient computer able to solve complex tasks. Currently, the gold standard for electronics-based computing, CMOS integrated circuits, is unable to provide these two elements while keeping the power efficiency high [5]. As we approach the few-atom transistor, with bigger and bigger challenges and without any significant decrease of power consumption, new directions have been probed in search of another platform that could provide high integration, low power consumption and scalability. Thanks to recent advances in optoelectronics integration, photonics technologies constitute one of the most promising platforms for neuromorphic computing: neuromorphic photonics is gaining momentum as a research field where emerging photonics technologies are used to mimic neuronal dynamics and/or to perform computational tasks based on brain-inspired strategies [6]. Excitability in photonics has been reported in a variety of different laser and amplifier systems [7, 8, 9, 10, 11, 12, 13, 14, 15]. Possibly the first historically studied example of a photonic system exhibiting excitability is the semiconductor laser with saturable absorber [7]. Excitability in such a laser is enabled by a phase space portrait exhibiting a limit cycle close to a saddle-node bifurcation. In the regime where the absorption is the slow dynamical variable, if the below- but close-to-threshold laser is perturbed strongly enough, then the stimulated emission process builds up, producing the emission of a giant light pulse. Such emission depletes the gain and is followed by a refractory time after which the laser is ready to be excited again. Such dynamics can be associated with so-called type III excitability [8]. Engineering the connectivity of optically excitable elements and studying their collective properties and dynamics are both crucial tasks towards achieving a general understanding of neuromorphic photonics systems. Focussing our attention on the excitable laser with saturable absorber, it is important to stress that coupling engineering of excitable semiconductor lasers with saturable absorber has been shown to allow pattern recognition [14], neuronal circuit design [16] and coincidence detection devices [17].
As far as the collective dynamics is concerned, we have shown theoretically in two recent works [18, 19] that temporal and intensity synchronization, array enhanced coherence resonance, and even disorder-induced localization of excitability can take place in arrays of excitable lasers with nearest neighbour real- valued (lossy) coupling. In this work we extend our previous studies on synchronization and disorder-induced localization of excitability to the case of an ensemble of excitable lasers with saturable absorber coupled via a nearest neighbour purely imaginary coupling coefficient. Imaginary valued couplings describe the physical evanescent waves interaction between adjacent laser cavities. Such interaction is most likely the relevant coupling mechanism for micropillars lasers, where collective excitable dynamics could be observed [20]. ## 2 The model We consider here a population of $n$ lasers with nearest neighbour coupling. The following normalized Yamada model describes the $i$-th laser dynamics: $\displaystyle\dot{F}_{i}$ $\displaystyle=$ $\displaystyle\frac{1}{2}[G_{i}(1-i\alpha)-Q_{i}(1-i\beta)-1]F_{i}+\sigma_{i}$ $\displaystyle-$ $\displaystyle i(K_{i,i+1}+K_{i,i-1})F_{i}+iK_{i+1,i}F_{i+1}+iK_{i-1,i}F_{i-1},$ $\displaystyle\dot{G}_{i}$ $\displaystyle=$ $\displaystyle\gamma(A-G_{i}-I_{i}G_{i}),$ $\displaystyle\dot{Q}_{i}$ $\displaystyle=$ $\displaystyle\gamma(B-Q_{i}-aQ_{i}I_{i}).$ (1) Here $F_{i}$ denotes the electric field strength, $I_{i}=|F_{i}|^{2}$ its intensity, $G_{i}$ and $Q_{i}$ gain and absorption respectively. $A$ is the pump parameter, $B$ the background absorption, $a$ the differential absorption relative to the differential gain, $\gamma$ is the absorber and gain decay rate, $\alpha$ and $\beta$ denote the linewidth enhancement factors for the gain and the absorber respectively; these parameters do not have subscript $i$ since they have been considered identical for all lasers. $\sigma_{i}$ describes a delta correlated Gaussian noise term of strength $D$ with $\langle\sigma_{i}(t_{1})\sigma_{j}(t_{2})\rangle=\sqrt{2D}\delta(t_{1}-t_{2})\delta_{ij}$ providing the perturbations needed for excitable behaviour. The dot denotes temporal derivative and the time variable has been normalized to the uncoupled laser photon lifetime. $K_{i,j}$ denotes the nearest neighbour coupling strength describing light coupling from the $i$-th to the $j$-th laser where the reciprocity condition $K_{i,j}=K_{j,i}$ has been imposed. The identical coupling case $K_{i,i+1}=K$ results in an effective discrete Laplace diffraction operator in the array. Periodic conditions at the array boundary have been applied. Across the whole paper we have set $A=6.5$, $B=5.8$, $a=1.8$, $\gamma=10^{-3}$ while for $\alpha$ and $\beta$ different values have been used and specified across the paper. ## 3 Synchronization We have first studied the effect of coupling on the synchronization of the firing events in the lasers array as a function of the number of lasers constituting the array itself and of the values of the linewidth enhancement factors. To this purpose we have considered independent additive noise sources to be present in all lasers and varied their amplitude, $D$, from 0 to 0.15. The coupling strength, $K=K_{0}$, has been considered identical for all lasers and has been varied from 0 to 1. For each pair $(D,K)$ we run one simulation for a time duration $T=100000$. 
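A minimal numerical sketch of Eqs. (1) is given below; it is ours, not the code used for the results of this paper. It integrates a small ring of lasers with the Euler-Maruyama scheme, with identical imaginary coupling and independent complex noise on every laser. The amplitude chosen for the stochastic increment is one common convention and may differ from the paper's noise normalization by a constant factor, so the value of $D$ may need rescaling to obtain comparable firing rates.

```python
# Sketch (ours): Euler-Maruyama integration of the coupled Yamada model, Eqs. (1),
# for a small ring of lasers with identical imaginary (evanescent) coupling K.
# The noise amplitude below is our convention, not necessarily the paper's.
import numpy as np

n, K, D = 5, 0.05, 0.01
A, B, a, gamma = 6.5, 5.8, 1.8, 1e-3
alpha, beta = 0.0, 0.0
dt, steps = 0.05, 400_000
rng = np.random.default_rng(1)

F = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
G = np.full(n, A, dtype=float)
Q = np.full(n, B, dtype=float)
intensity = np.empty((steps, n))

for t in range(steps):
    I = np.abs(F) ** 2
    lap = np.roll(F, -1) + np.roll(F, 1) - 2.0 * F        # periodic boundary conditions
    dF = 0.5 * (G * (1 - 1j * alpha) - Q * (1 - 1j * beta) - 1.0) * F + 1j * K * lap
    dG = gamma * (A - G - I * G)
    dQ = gamma * (B - Q - a * Q * I)
    noise = np.sqrt(2.0 * D * dt) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    F = F + dt * dF + noise
    G = G + dt * dG
    Q = Q + dt * dQ
    intensity[t] = np.abs(F) ** 2

# intensity[:, i] holds the pulse trains; spike times can be obtained by thresholding.
```

Spike (firing) times obtained by thresholding the stored intensity traces are what the synchronization indicators introduced next operate on.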
The pulse synchronization can be understood both in space and time: in the first case it refers to neighbor lasers emitting a pulse at the same time while in the second case it characterizes a regular (i.e. equally spaced) emission of pulses from the single laser. To give a qualitative flavour of temporally synchronous pulses (spikes) emission of coupled lasers we have plotted in Fig. 1 the temporal traces corresponding to coupled and uncoupled laser (left and right column, respectively). Figure 1: In panels a) to e) the temporal traces of five different uncoupled lasers have been plotted showing uncorrelated lasers firing. In panels f) to j) the time traces of five randomly-picked up lasers in a chain of 50 coupled elements have been depicted. Spatial synchronization is clearly evident in presence of non vanishing coupling. Parameters used are $D=0.01$, $\alpha=\beta=0$ for both the coupled and uncoupled case; $K_{0}=0$ in a) to e) and $K=0.05$ in f) to j). We have characterized the spatial synchronization by using an indicator ($S$) that quantifies the phase slip between the firing events of nearest-neighbors lasers [18, 21]. The phase of the $i$-th laser reads: $\displaystyle\phi_{i}(t)=\frac{t-\tau_{k}}{\tau_{k+1}-\tau_{k}}+2k\pi$ (2) where $\tau_{k}$ is the time of the $k$-th firing event, i.e. the position in time of the $k$-th emitted pulse. We define then $\displaystyle s_{i}=\sin\left(\frac{\phi_{i}-\phi_{i+1}}{2}\right)^{2}$ (3) which characterizes the phase synchronization between the $i$-th and $i+1$-th lasers. The average both across all the array elements and along temporal duration of the time trace gives the $S$ indicator, which provides a measure of the spatial synchronization of the whole system: $\displaystyle S=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\frac{1}{n}\sum_{i=1}^{n}s_{i}\right)dt.$ (4) The maximum synchronization occurs for $S=0$ while for the completely non synchronized state $S=0.5$. We have furthermore characterized the regular emission of pulses of the individual laser (temporal synchronization) by means of the single laser pulse jitter defined as $J=\sigma_{T}/\langle T\rangle$, where $\langle T\rangle$ is the average time interval between two consecutive pulses and $\sigma_{T}$ is the standard deviation. A value of $J$ close to 0 indicates regular pulses emission, while values close or greater than 1 indicate poor regularity. Figure 2: The single-laser temporal synchronization index, $J$, is plotted in the $(D,K)$ space for different numbers of lasers (rows - 4, 20 and 150 from top to bottom) and different values of the line enhancement factors (columns - 0, 1, 3 and 5 from left to right). The panels in Fig. 2 and in Fig. 3 show the $J$ and $S$ synchronization indicators, respectively, for different number of lasers constituting the chain and different values of the line enhancement factors, versus the noise and the coupling strength. The behavior observed here is qualitatively different from the case of real (dissipative) coupling, showing no array- enhanced coherence resonance [18]. Instead of enlarging the region of the $(D,K)$ parameter space characterized by high synchronization, an increase in the number of coupled lasers improves the temporal synchronization in a limited region of the parameters space (the lower-left one) while worsening the synchronization in all the remaining region. In other words, enlarging the array creates a sharper separation between the regions of high and low synchronization, without changing its area. 
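For reference, the indicators defined above can be computed directly from the spike times. The sketch below (ours) implements Eqs. (2)–(4) on a ring and the jitter $J$; the spike-detection step and all names are left as the user's choice.

```python
# Sketch (ours): synchronization indicators of Eqs. (2)-(4) and the jitter J,
# computed from per-laser spike (firing) times, e.g. obtained by thresholding
# the intensity traces of the previous sketch.
import numpy as np

def phase(t, spike_times):
    """Eq. (2): fractional position in the current interspike interval plus 2*pi*k.
    The times t should lie between the first and last spike of the laser."""
    k = np.searchsorted(spike_times, t, side="right") - 1
    k = np.clip(k, 0, len(spike_times) - 2)
    tau_k, tau_k1 = spike_times[k], spike_times[k + 1]
    return (t - tau_k) / (tau_k1 - tau_k) + 2.0 * np.pi * k

def S_indicator(all_spikes, t_grid):
    """Array- and time-averaged phase-slip indicator S, Eqs. (3)-(4), ring topology."""
    phases = np.array([phase(t_grid, np.asarray(sp)) for sp in all_spikes])
    s = np.sin(0.5 * (phases - np.roll(phases, -1, axis=0))) ** 2
    return s.mean()

def jitter(spike_times):
    """Single-laser jitter J = std(T)/<T> of the interspike intervals."""
    isi = np.diff(np.asarray(spike_times))
    return isi.std() / isi.mean()

# Example usage, with spikes[i] an array of spike times for laser i:
# print(S_indicator(spikes, np.linspace(1000, 19000, 2000)), [jitter(s) for s in spikes])
```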
On the other hand, increasing the number of lasers in the array has the effect of reducing the region of highest spatial synchronization, i.e. of low values of the $S$ indicator. This effect is clearly visible in the part of the $(D,K)$ space characterized by $D>0.05$, and manifests itself quickly as the number of lasers forming the chain increases (compare the first and second rows of Fig. 3). It is worth noting that even in the worst synchronization cases the $S$ indicator is smaller than $10^{-5}$, indicating good spatial synchronization across the whole $(D,K)$ space. Figure 3: The logarithm of the spatial synchronization indicator, $\log_{10}(S)$, is plotted in the $(D,K)$ space for different numbers of lasers (rows - 4, 20 and 150 from top to bottom) and different values of the linewidth enhancement factors (columns - 0, 1, 3 and 5 from left to right). Unlike the number of lasers, the second control parameter we considered, the linewidth enhancement factor, modifies the shape and size of the synchronization region. For a small number of lasers (4, first row in Figs. 2-3), its progressive increase from 0 to 5 causes an enlargement of the region of high synchronization towards larger values of the noise, both in time and in space. For a longer chain (20 and 150 lasers, two bottom rows), high synchronization occurs in a region that is more and more localized towards small values of $K$ and $D$; even in a strong coupling regime, high temporal synchronization of the spikes emitted by the same laser becomes more difficult to achieve. Compared with the real-coupling case [18], we observe that a much smaller value of the coupling strength is needed to achieve comparable synchronization. Imaginary coupling is hence a much more efficient way to achieve synchronization of excitability. ## 4 Disorder-induced localization In our previous study of real-valued couplings we pointed out that randomness in the coupling strengths of the laser array is, strikingly, able to induce spatial exponential localization of excitable behaviour in a given area of the array [19]. Such disorder-induced localization does not occur because of a trivial breaking of the laser chain but because of a dynamical process entailing both scattering and dissipation. We want to stress that the localization we are discussing here, although mediated by disorder, cannot be directly identified with the celebrated _Anderson localization_ [22]. The latter, although it has inspired our work, occurs in ideally conservative systems, whereas in our system dissipation plays, of course, a crucially important role. Nevertheless, from the extensive literature on _Anderson localization_ we can borrow some useful terminology. In the traditional studies of _Anderson localization_ of electronic transport in solid-state physics, the distinction between diagonal and out-of-diagonal disorder refers to disorder on the individual atomic sites or in the hopping terms between neighbouring atomic sites, respectively [23]. In our system the former clearly corresponds to randomness in some internal laser parameter (e.g. the pump parameter), while the latter corresponds to randomness in the coupling between the lasers. In this paper we will consider out-of-diagonal disorder, defining the coupling as $K_{i,i+1}=K_{0}+\rho_{i,i+1}$, where $K_{0}$ is a value common to all lasers and $\rho_{i,i+1}$ is a random number, constant in time, drawn from a uniform distribution in the interval $[-r,+r]$; $K_{i,i+1}$ describes light coupling from the $i$-th laser to its nearest neighbour on the right. 
We also recall that the reciprocity constraint implies $\rho_{i,i+1}=\rho_{i+1,i}$. Figure 4: In a) the intensity dynamics in the absence of disorder is shown: an excitability wave propagates through the array. In b) an example of the temporal dynamics in the presence of disorder is shown: excitability is spatially localized. Parameters used are: $\alpha=\beta=0$, $K_{0}=0.05$, $r=0.005$. In order to investigate the disorder-induced localization we have assumed, without loss of generality, that the additive noise is present only in the central laser of the array. If all the lasers are coupled with identical coupling strength ($K_{i,i+1}=K_{0}$ $\forall$ $i$), then an excitability wave propagates from the centre of the array towards both the left and the right (see Fig. 4). We have considered a chain of 150 lasers with an average coupling $K_{0}=0.05$, noise on the central laser with strength $D=0.1$, and a random coupling distribution with disorder strength varying in the interval $r\in[0,0.002]$. We have verified that in this configuration, with noise acting only on one laser and $r=0$, pulses are emitted regularly and propagate to all the lasers in the chain. Figure 5: The temporal average of 100 traces characterized by different realizations of disorder with the same value of $r$ is plotted here, together with the fit to Eq. 5 used to extract the value of the localization exponent, $\lambda$. To extract the correct localization exponent we discard the central laser, whose average intensity is systematically too high due to the presence of additive noise, and its nearest neighbours. Parameters used are the same as in Fig. 6, with $\alpha=\beta=0$ and $r=0.002$. For each value of $r$, 100 numerical experiments with a time duration of $T=50000$ have been carried out. For each disordered coupling configuration we have computed the average intensity trace across the array. We have then fitted the intensity distribution averaged over the 100 disorder realizations with the same strength $r$ with the following exponential function: $\displaystyle f=b+\exp\left(-\lambda|i-i_{0}|\right)$ (5) where $i_{0}$ denotes the position of the central laser (see Fig. 5 for an example). This procedure has been repeated 8 times, so that the average value and standard deviation of the fit parameters can be extracted; the average localization exponent $\langle\lambda\rangle$ versus the disorder strength $r$ is plotted in Fig. 6, together with error bars indicating the standard deviation. Figure 6: The average localization exponent $\langle\lambda\rangle$ has been plotted versus the randomness strength $r$ for 3 different values of the linewidth enhancement factors $\alpha$ and $\beta$. For vanishing $\alpha$ and $\beta$ a transition from the ballistic to the localized regime occurs; low values of $\alpha$ and $\beta$ require a higher threshold for localization. At large values of the linewidth enhancement factor, localization is lost. Parameters used are $D=0.1$ and $K_{0}=0.05$. Such disorder-induced localization occurs for coupling values such that, if all the lasers were coupled with the identical value $K_{0}-r$, equal to the minimum possible value generated by the random process, an excitability wave would still spread from the array centre towards both the left and the right; this ensures that the localization is not caused by a trivial breaking of the laser chain but is indeed provoked by a dynamical effect. 
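As a concrete illustration of this fitting procedure, the sketch below generates one realization of the reciprocal disordered couplings and extracts $\lambda$ from an averaged intensity profile with SciPy; the initial guess, the normalization of the profile implied by the unit amplitude in Eq. (5), and the helper names are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# One realization of the out-of-diagonal disorder: K[i] couples laser i to
# laser i+1, so the reciprocity constraint holds by construction.
def disordered_couplings(n, K0=0.05, r=0.002):
    return K0 + np.random.uniform(-r, r, size=n)

# Fit of Eq. (5) to the disorder- and time-averaged intensity profile I_avg
# (assumed normalized so that the excess intensity at the centre is of order 1).
def localization_exponent(I_avg, i0, skip=1):
    idx = np.arange(len(I_avg))
    keep = np.abs(idx - i0) > skip      # discard central laser and nearest neighbours
    model = lambda i, b, lam: b + np.exp(-lam * np.abs(i - i0))
    (b, lam), _ = curve_fit(model, idx[keep], I_avg[keep], p0=(I_avg.min(), 0.1))
    return lam
```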
For vanishing $\alpha$ and $\beta$ we notice that localization occurs for coupling strengths much smaller than those needed in the real-coupling case (about 0.3, see [19]). We have furthermore investigated the impact of the linewidth enhancement factors of both the gain medium ($\alpha$) and the saturable absorber ($\beta$) on the disorder-induced localization of excitability. Physically, $\alpha$ and $\beta$ are responsible for the coupling of amplitude to phase fluctuations. It is interesting to note that a non-zero linewidth enhancement factor ($\alpha$, $\beta\neq 0$) causes a decrease (for $\alpha=\beta=1$) and even a vanishing (for $\alpha=\beta=3$ or $5$) of the localization. ## 5 Conclusions We have demonstrated with the help of numerical simulations that evanescently coupled excitable lasers with a saturable absorber synchronize more efficiently than lossy-coupled ones, but that they do not exhibit array-enhanced coherence resonance. We have furthermore demonstrated that disorder-induced localization of excitability caused by randomness in the coupling strength exists for imaginary coupling too. Taking into account the linewidth enhancement factors of both the gain medium and the saturable absorber, we have shown that localization is reduced and eventually quenched by an increase of the linewidth enhancement factor. This fact may constitute a serious obstacle to the experimental observation of the disorder-induced localization of excitability. Our results shed further light on the collective dynamics of coupled excitable lasers with saturable absorber and suggest interesting directions for experimental studies. ## 6 Acknowledgements The authors gratefully acknowledge useful discussions with Dr. Sylvain Barbay. ## References * [1] S. Furber, Journal of Neural Engineering 13, 051001 (2016) * [2] B.J. Shastri, A.N. Tait, T.F. de Lima, M.A. Nahmias, H.T. Peng, P.R. Prucnal, _Principles of neuromorphic photonics_ (2017), 1801.00016 * [3] J.D. Murray, _Mathematical Biology: I. An Introduction_, Vol. 1 (Springer-Verlag, 3rd Edition Berlin Heidelberg, 2002) * [4] E.M. Izhikevich, International Journal of Bifurcation and Chaos 10, 1171 (2000) * [5] I.K. Schuller, R. Stevens, Tech. rep. (2015), https://science.energy.gov/$\sim$/media/bes/pdf/reports/2016/NCFMtSA_rpt.pdf * [6] P.R. Prucnal, B.V. Shastri, _Neuromorphic Photonics_ (Taylor & Francis, 1st Edition Boca Raton, 2017) * [7] J.L. Dubbeldam, B. Krauskopf, D. Lenstra, Physical Review E 60, 6580 (1999) * [8] S. Barbay, R. Kuszelewicz, A.M. Yacomotti, Optics Letters 36, 4476 (2011) * [9] S. Barland, O. Piro, M. Giudici, J.R. Tredicce, S. Balle, Physical Review E 68, 036209 (2003) * [10] M. Giudici, C. Green, G. Giacomelli, U. Nespolo, J. Tredicce, Physical Review E 55, 6414 (1997) * [11] F. Plaza, M. Velarde, F. Arecchi, S. Boccaletti, M. Ciofini, R. Meucci, EPL (Europhysics Letters) 38, 85 (1997) * [12] M. Brunstein, A.M. Yacomotti, I. Sagnes, F. Raineri, L. Bigot, A. Levenson, Physical Review A 85, 031803 (2012) * [13] M. Turconi, M. Giudici, S. Barland, Physical Review Letters 111, 233901 (2013) * [14] B.J. Shastri, M.A. Nahmias, A.N. Tait, A.W. Rodriguez, B. Wu, P.R. Prucnal, Scientific Reports 6, 19126 (2016) * [15] B. Romeira, R. Avó, J.M. Figueiredo, S. Barland, J. Javaloyes, Scientific Reports 6, 19510 (2016) * [16] B.J. Shastri, M.A. Nahmias, A.N. Tait, B. Wu, P.R. Prucnal, Optics Express 23, 8029 (2015) * [17] B.J. Shastri, A.N. Tait, M. Nahmias, B. Wu, P. 
Prucnal, _Coincidence detection with graphene excitable laser_ , in _CLEO: Science and Innovations_ (Optical Society of America, 2014), pp. STu3I–5 * [18] A.M. Perego, M. Lamperti, Physical Review A 94, 033839 (2016) * [19] M. Lamperti, A.M. Perego, Physical Review A 96, 041803(R) (2017) * [20] F. Selmi, R. Braive, G. Beaudoin, I. Sagnes, R. Kuszelewicz, S. Barbay, Opt. Lett. 40, 5690 (2015) * [21] M.G. Rosenblum, A.S. Pikovsky, J. Kurths, Circuits and Systems I: Fundamental Theory and Applications, IEEE Transactions on 44, 874 (1997) * [22] P.W. Anderson, Physical Review 109, 1492 (1958) * [23] J.B. Pendry, J. Phys. C: Solid State Phys. 15, 5773 (1982)
††thanks: C. X. and H. Y. contributed equally to this manuscript.††thanks: C. X. and H. Y. contributed equally to this manuscript. # Three-nodal surface phonons in solid-state materials: Theory and material realization Chengwu Xie School of Physical Science and Technology, Southwest University, Chongqing 400715, China; Hongkuan Yuan School of Physical Science and Technology, Southwest University, Chongqing 400715, China; Ying Liu ying <EMAIL_ADDRESS>(Y. L.); School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130, China; Xiaotian Wang <EMAIL_ADDRESS>(X. W.); School of Physical Science and Technology, Southwest University, Chongqing 400715, China; Gang Zhang <EMAIL_ADDRESS>(G. Z.); Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), 138632, Singapore ###### Abstract This year, Liu et al. [Phys. Rev. B 104, L041405 (2021)] proposed a new class of topological phonons (TPs; i.e., one-nodal surface (NS) phonons), which provides an effective route for realizing one-NSs in phonon systems. In this work, based on first-principles calculations and symmetry analysis, we extended the types of NS phonons from one- to three-NS phonons. The existence of three-NS phonons (with NS states on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes in the three-dimensional Brillouin zone (BZ)) is enforced by the combination of two-fold screw symmetry and time reversal symmetry. We screened all 230 space groups (SGs) and found nine candidate groups (with the SG numbers (Nos.) 19, 61, 62, 92, 96, 198, 205, 212, and 213) hosting three-NS phonons. Interestingly, with the help of first-principles calculations, we identified $P2_{1}$2121-type YCuS2 (SG No. 19), $Pbca$-type NiAs2 (SG No. 61), $Pnma$-type SrZrO2 (SG No. 62), $P4_{1}$212-type LiAlO2 (SG No. 92), $P4_{3}$212-type ZnP2 (SG No. 96), $P2_{1}$3-type NiSbSe (SG No. 198), $Pa\bar{3}$-type As2Pt (SG No. 205), $P4_{3}$32-type BaSi2 (SG No. 212), and $P4_{1}$32-type CsBe2F5 (SG No. 213) as realistic materials hosting three-NS phonons. The results of our presented study enrich the class of NS states in phonon systems and provide concrete guidance for searching for three-NS phonons and singular Weyl point phonons in realistic materials. ## I Introduction Topological quantum states of matter add1 ; add2 ; add3 are an important topic in the field of modern condensed-matter physics. Over the past 15 years, we have witnessed the emergence of many types of topological electronic materials, such as topological insulators add4 ; add5 ; add6 ; add7 , topological crystalline insulators add8 ; add9 ; add10 , topological Kondo insulators add11 ; add12 ; add13 , higher-order topological insulators add14 ; add15 ; add16 ; add17 , topological semimetals add18 ; add19 ; add20 , and higher-order topological semimetals add21 ; add22 ; add23 ; add24 . In particular, the types and numbers of topological semimetals add20 are rapidly increasing. In contrast to Dirac, Weyl, and Majorana fermions, which are allowed in high-energy physics, the types of quasiparticles in topological semimetals add25 are more diverse owing to fewer constraints imposed by the space group (SG) symmetries of the crystal. 
Based on the dimensionality of the band-crossings in the momentum space, the topological semimetals can be classified into nodal point add26 ; add27 ; add28 ; add29 ; add30 , nodal line add31 ; add32 ; add33 ; add34 ; add35 , and nodal surface (NS) add36 ; add37 ; add38 ; add39 ; add40 semimetals with zero-, one-, and two-dimensional band- crossings, respectively. Three-dimensional topological semimetals with two-dimensional band-crossings can host NS states in the Brillouin zone (BZ). Each point on the NS should be a two-fold degenerate point with linear band dispersion along the surface normal direction. Researchers hope that NS semimetals exhibit exotic physical properties, such as stronger quantum oscillations and peculiar plasmon excitations. Wu et al. add36 summarized an essential NS state dictated by nonsymmorphic symmetry without spin-orbit coupling (SOC). The existence of a series of NS semimetals in realistic electronic systems has been predicted, including BaVS3 add40 , ZrSiS add41 ; add42 , K6YO4 add36 , FeB4 add43 , Ti3Al add37 , and X(MoS)3 (X = K, Rb, and Cs) add44 . However, in general, SOC in electronic materials cannot be ignored; thus, the proposed two-dimensional nonsymmorphic symmetry-enforced NS states in electronic systems will usually be destroyed or reduced to one-dimensional nodal lines when SOC is considered add42 . Moreover, the NS states in some materials are far from the Fermi level and exhibit large energy variations, which hinder their experimental detection. The proposed topological phonons (TPs) add45 ; add46 have renewed the interest in topological quantum states; TPs are a basic kind of boson-type quasiparticles; they are not affected by the Pauli exclusion principle and SOC. Therefore, TPs can normally be observed in spinless phononic systems in all frequency ranges. In addition to the proposed nodal point phonons add47 ; add48 ; add49 ; add50 ; add51 ; add52 ; add53 ; add54 ; add55 and nodal line phonons add56 ; add57 ; add58 ; add59 ; add60 ; add61 ; add62 ; add63 ; add64 , one-NS phonons add65 have been presented by Liu et al. based on symmetry analysis and first-principles calculations. The researchers provided a complete list of the one-NS phonons in the 230 SGs and discovered that RbTeAu family materials with SG number (No.) 51 may contain one-NS states (on the $k_{x}$ = $\pi$ plane). The occurrence of one-NS states is ensured by screw rotation symmetry along the $i$ axis ($i$ = $x$, $y$, or $z$) and time- reversal symmetry $\mathcal{T}$. Fig. 1(a) presents a schematic diagram of one-NS phonons. Moreover, two more types of NS phonons should exist: two- and three-NS phonons, as illustrated in Fig. 1(b) and Fig. 1(c), respectively. In this study, we extended the class of NS phonons from one- to three-NS phonons. For the three-NS phonons, the NS states are localized on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes in the three-dimensional BZ. We screened all 230 SGs; the SGs with Nos. 19, 61, 62, 92, 96, 198, 205, 212, and 213 are candidate groups that can obtain three-NS phonons. Because the three-NS phonons in these SGs are symmetry-enforced, one can easily achieve three-NS phonons in realistic materials with the previously presented SGs. For example, in this work, we identified $P2_{1}$2121-type YCuS2 (SG No. 19), $Pbca$-type NiAs2 (SG No. 61), $Pnma$-type SrZrO2 (SG No. 62), $P4_{1}$212-type LiAlO2 (SG No. 92), $P4_{3}$212-type ZnP2 (SG No. 96), $P2_{1}$3-type NiSbSe (SG No. 198), $Pa\bar{3}$-type As2Pt (SG No. 
205), $P4_{3}$32-type BaSi2 (SG No. 212), and $P4_{1}$32-type CsBe2F5 (SG No. 213) as realistic materials that can host three-NS phonons. Figure 1: Schematic diagrams of (a) one-NS, (b) two-NS, (c) and three-NS phonons, respectively. ## II Symmetry Analysis of Three-NS Phonons. In this part, we searched all essential NSs, which are only dictated by symmetries, in spinless systems add36 . Such an NS is protected by the combination of time-reversal symmetry ($\mathcal{T}$) and two-fold screw rotation symmetry ($S_{2i}$). Without loss of generalization, we take two-fold screw rotation along the z-direction as an example: $S_{2z}$:$(x,y,z)\to(-x,-y,z+\frac{1}{2})$ with a half translation in the lattice constant along its rotation axis. It also affects the momentum space: $S_{2z}$:$(k_{x},k_{y},k_{z})\to(-k_{x},-k_{y},k_{z})$, thereby only preserving the momentum along $k_{z}$. Without SOC, $S_{2z}^{2}$=$T_{100}$=$e^{-ik_{z}}$, where $T_{100}$ is the translation along the z-direction. For time-reversal symmetry, in spinless systems, $\mathcal{T}^{2}$ = 1, which is antiunitary and inverses the momentum k. Consequently, their combination $\mathcal{T}S_{2z}$ is also antiunitary. Remarkably, on planes where $k_{z}$ = $\pm\pi$, $(\mathcal{T}S_{2z})^{2}$=$e^{-ik_{z}}|_{k_{z}=\pm\pi}$ = $-1$, which suggests Kramer-like degeneracy on these planes. Thereby, it leads to Kramer-like degeneracy. Hence, the phonon bands on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes must become two-fold degenerate, thereby forming three-NS phonons. Furthermore, the presence of three two-fold rotation symmetries (i.e., $S_{2x}$, $S_{2y}$, and $S_{2z}$) leads to three NSs on the planes $k_{i}$ = $\pm\pi$ ($i$ = $x$, $y$, $z$). In this study, we proposed all the three-NS phonons by searching all 230 SGs in phonon systems. According to the results, the SGs with Nos. 19, 61, 62, 92, 96, 198, 205, 212, and 213 (see Table 1) can host three-NS phonons. ## III Computational Details First-principles calculations based on density functional theory were performed to study the ground states of $P2_{1}$2121-type YCuS2, $Pbca$-type NiAs2, $Pnma$-type SrZrO2, $P4_{1}$212-type LiAlO2, $P4_{3}$212-type ZnP2, $P2_{1}$3-type NiSbSe, $Pa\bar{3}$-type As2Pt, $P4_{3}$32-type BaSi2, and $P4_{1}$32-type CsBe2F5 materials, as implemented in the Vienna Ab Initio Simulation Package. The projector augmented wave method and generalized gradient approximation add66 with Perdew–Burke–Ernzerhof functions were used for the ionic potential and exchange-correlation interaction. In addition, a plane wave cutoff energy of 500 eV was used for the structural relaxation. The following $k$-mesh samples were used for YCuS2, NiAs2, SrZrO2, LiAlO2, ZnP2, NiSbSe, As2Pt, BaSi2, and CsBe2F5: $9\times 7\times 5$, $5\times 5\times 3$, $5\times 5\times 5$, $7\times 7\times 7$, $7\times 7\times 3$, $7\times 7\times 7$, $7\times 7\times 7$, $9\times 9\times 9$, and $5\times 5\times 5$, respectively. All these materials are experimentally synthesized materials. The phononic dispersions of the $2\times 2\times 1$ YCuS2, $2\times 2\times 1$ NiAs2, $2\times 1\times 2$ SrZrO2, $2\times 2\times 2$ LiAlO2, $2\times 2\times 1$ ZnP2, $2\times 2\times 2$ NiSbSe, $2\times 2\times 2$ As2Pt, $2\times 2\times 2$ BaSi2, and $1\times 1\times 1$ CsBe2F5 cells were examined with density functional perturbation theory and PHONOPY codes add67 . Table 1: A complete list of three-NS phonons in 230 SGs. 
The first and second columns present the SG numbers and SG symbols, the third column lists the three-NSs along the symmetry paths, and the fourth column presents the corresponding realistic materials. Space group | Space group | Three-nodal | Realistic ---|---|---|--- numbers | symbols | surfaces | materials 19 | $P2_{1}$2121 | NSTYS, NSSXU, and NSUZT | YCuS2 61 | $Pbca$ | NSTYS, NSSXU, and NSUZT | NiAs2 62 | $Pnma$ | NSTYS, NSSXU, and NSUZT | SrZrO2 92 | $P4_{1}$212 | NSZRA, and NSMXR | LiAlO2 96 | $P4_{3}$212 | NSZRA, and NSMXR | ZnP2 198 | $P2_{1}$3 | NSRXM | NiSbSe 205 | $Pa\bar{3}$ | NSRXM | As2Pt 212 | $P4_{3}$32 | NSRXM | BaSi2 213 | $P4_{1}$32 | NSRXM | CsBe2F5 ## IV Materials with Three-NS phonons For phonon systems with SG Nos. 19, 61, 62, the three-NSs (i.e., NSTYS, NSSXU, and NSUZT) appear on the planes $k_{y}$ = $\pi$, $k_{x}$ = $\pi$, and $k_{z}$ = $\pi$, respectively. Some realistic materials were selected as examples to demonstrate that they host three-NSs in their phonon dispersions: $P2_{1}$2121-type YCuS2 (SG No. 19) can be prepared add68 by fusing the high- purity elements in evacuated quartz ampoules; Murray and Heyding add69 prepared $Pbca$-type NiAs2 (SG No. 61) by heating the elements in sealed evacuated Vycor tubes; $Pnma$-type SrZrO2 powders (SG No. 62) were prepared with the polymeric precursor method by Cavalcante et al. add70 . The crystal structures of these three materials are shown in Fig. 2. They are completely relaxed; their theoretically determined lattice constants and the previously published experimentally determined data are listed in Table 2. Figure 2: Crystal structures of $P2_{1}$2121-type YCuS2 (SG No. 19), $Pbca$-type NiAs2 (SG No. 61), and $Pnma$-type SrZrO2 (SG No. 62), respectively. Table 2: Theoretically and experimentally determined lattice constants of YCuS2, NiAs2, and SrZrO2. Materials | Theoretical lattice | Experimental lattice ---|---|--- | constants | constants add68 ; add69 ; add70 YCuS2 | a = 3.96 Å, b = 6.26 Å, | a = 3.97 Å, b = 6.27 Å, | c = 13.48 Å | c = 13.38 Å NiAs2 | a = 5.80 Å, b = 5.89 Å, | a = 5.77 Å, b = 5.83 Å, | c = 11.50 Å | c = 11.41 Å SrZrO2 | a = 5.91 Å, b = 8.29 Å, | a = 5.81 Å, b = 8.19 Å, | c = 5.84 Å | c = 5.79 Å Figure 3: (a) Three-dimensional BZ and symmetry points. Three-NS states (red, green, and yellow) are localized on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes in three-dimensional BZ. (b)–(d) Calculated phonon dispersions of YCuS2, NiAs2, and SrZrO2, respectively. Three NS regions (i.e., NSTYS, NSSXU, and NSUZT) are highlighted in green, red, and yellow, respectively. Figure 4: (a), (c), and (e) Enlarged phonon dispersions of three regions (see Fig. 3(b)–(d)) of YCuS2, NiAs2, and SrZrO2, respectively. (b), (d), and (f) phonon dispersions along $a$ (0.70, 0.25, 0.00) -$P$ (0.50, 0.25, 0.00) -$a^{\prime}$ (0.30, 0.25, 0.00), $b$ (0.00, 0.25, 0.7) -$Q$ (0.00, 0.25, 0.5) -$b^{\prime}$ (0.00, 0.25, 0.3), and $c$ (0.00, 0.7, 0.25) -$N$ (0.00, 0.5, 0.25) -$c^{\prime}$ (0.00, 0.3, 0.25), respectively. All points at $P$, $Q$, and $N$ points are two-fold degenerate with linear phonon band dispersions. The phonon dispersions of YCuS2, NiAs2, and SrZrO2 along the $\Gamma-X- S-Y-\Gamma-Z-S-X-U-Z-T-Y-S-R$ paths (see Fig. 3(a)) are shown in Fig. 3(b)–(d), respectively. Three regions (highlighted in red, yellow, and green) are of interests in this study. The enlarged figures of the phonon dispersions of YCuS2, NiAs2, and SrZrO2 in the three regions are shown in Fig. 4(a), (c) and (e), respectively. 
All the phonon bands along the $S$-$X$-$U$, $U$-$Z$-$T$, and $T$-$Y$-$S$ planes have two-fold degeneracy. To explain this in more detail, some symmetry lines (i.e., $a$-$P$-$a^{\prime}$, $b$-$Q$-$b^{\prime}$, and $c$-$N$-$c^{\prime}$; see Fig. 3(a)) are selected, they are perpendicular to the $S$-$X$, $T$-$Z$, and $Y$-$T$ symmetry lines, respectively. Subsequently, we calculate the phonon dispersions along the $a$-$P$-$a^{\prime}$, $b$-$Q$-$b^{\prime}$, and $c$-$N$-$c^{\prime}$ paths; the results are presented in Fig. 4(b), (d), (f), respectively. Evidently, the points (highlighted by red circles) at the $P$, $Q$, and $N$ symmetry points are two-fold degenerate and have linear band dispersions. In addition, two- fold Kramer-like degeneracy occurs at every point on the $S$-$X$-$U$, $U$-$Z$-$T$, and $T$-$Y$-$S$ planes, thereby forming three-NSs on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes. These density functional theory results agree well with the argument (Section II) that the antiunitary symmetry $\mathcal{T}S_{2i}$ ensures the existence of three-NS phonons on the $k_{i}$ = $\pm\pi$ ($i$ = $x$, $y$, $z$) planes. Figure 5: (a) and (b) Crystal structures of $P4_{1}$212-type LiAlO2 (SG No. 92) and $P4_{3}$212-type ZnP2 (SG No. 96), respectively; (c) Three-dimensional BZ and symmetry points. Three-NS states (highlighted in yellow and green) are localized on $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes in three-dimensional BZ. (b)–(d) Calculated phonon dispersions of LiAlO2 and ZnP2, respectively. NS regions (i.e., NSMXR, and NSZRA) are highlighted in green and yellow, respectively. In the next step, some realistic materials with SG Nos. 92 and 96 and three-NS phonons are presented. The first example is $P4_{1}$212-type LiAlO2 (SG No. 92); Remeika and Ballman add71 prepared these single crystals from a flux. The second example is $P4_{3}$212-type ZnP2 (SG No. 96). Researchers add72 have reported that ZnP2 crystals can exist in an enantiomorphic form with SG $P4_{3}$212 = $D_{4}^{8}$. The theoretically determined and previously published experimental lattice constants are shown in Table 3. The crystal structures of the two materials are shown in Fig. 5(a) and (b). The phonon dispersions of the two materials along the $\Gamma-X-M-\Gamma-Z-R-A-M-X-R$ paths (see Fig. 5(c)) are presented in Fig. 5(d) and (e). To examine the three-NS phonons in these two materials, we only focused on two paths: $Z$-$R$-$A$ and $M$-$X$-$R$, which are highlighted in yellow and green in Fig. 5(d) and (e), respectively. The enlarged phonon dispersions of these two regions for LiAlO2 and ZnP2 are shown in Fig. 6(a) and (c), respectively. All the phonon bands along the $Z$-$R$-$A$ and $M$-$X$-$R$ paths are two-fold degenerate. To present examples, we select two symmetry points $N$ and $Q$ on the $k_{y}$ = $\pi$ and $k_{z}$ = $\pi$ planes, respectively. We construct the two paths $b$-$Q$-$b^{\prime}$ and $c$-$N$-$c^{\prime}$, which vertically pass through the $k_{z}$ = $\pi$ and $k_{y}$ = $\pi$ planes, respectively. The obtained phonon dispersions along the $b$-$Q$-$b^{\prime}$ and $c$-$N$-$c^{\prime}$ paths for LiAlO2 and ZnP2 are shown in Fig. 6(b) and (d), respectively. There are two two-fold degenerate points at $Q$ and $N$, which represent the NS states on the $k_{z}$ = $\pi$ and $k_{y}=\pi$ planes. Because LiAlO2 and ZnP2 with SG Nos. 92 and 96 host four-fold screw rotation, $S_{4z}$ = $\\{C_{4z}|00\frac{1}{2}\\}$, there should be NSs on the $k_{x}$ = $\pm\pi$ planes. 
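The symmetry argument above can also be illustrated outside of any first-principles calculation with a minimal toy model. The sketch below (our own illustrative example, not one of the materials studied here) describes a uniform one-dimensional chain with an artificially doubled unit cell; the half-cell translation then plays the role of the two-fold screw, and the two folded phonon branches are forced to stick together at the zone boundary, in direct analogy with the Kramers-like degeneracy that generates the NSs on the $k_{i}$ = $\pi$ planes.

```python
import numpy as np

# Toy 1D analogue: identical masses m and springs k, described with a two-atom
# cell of lattice constant a.  The half-cell translation acts like the two-fold
# screw, and the two folded branches are degenerate at the zone boundary q = pi/a.
m, k, a = 1.0, 1.0, 1.0

def branches(q):
    d = (k / m) * np.array([[2.0, -(1 + np.exp(-1j * q * a))],
                            [-(1 + np.exp(1j * q * a)), 2.0]])
    w2 = np.linalg.eigvalsh(d)               # squared phonon frequencies
    return np.sqrt(np.clip(w2, 0, None))

for q in [0.0, 0.5 * np.pi, np.pi]:
    w = branches(q)
    print(f"q*a = {q:.3f}:  omega = {w[0]:.4f}, {w[1]:.4f}")
# at q*a = pi the two branches coincide (omega = sqrt(2k/m) for both),
# i.e. the degeneracy is enforced by the screw-like symmetry, not by fine tuning
```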
Table 3: Theoretically and experimentally determined lattice constants of LiAlO2 and ZnP2. Materials | Theoretical lattice | Experimental lattice ---|---|--- | constants | constants add71 ; add72 LiAlO2 | a = b = 5.21 Å, | a = b = 5.17 Å, | c = 6.30 Å | c = 6.59 Å ZnP2 | a = b = 5.06 Å, | a = b = 5.10 Å, | c = 18.53 Å | c = 18.62 Å Figure 6: (a), (c) Enlarged phonon dispersions of two regions (see Fig. 5(d) and (e)) of LiAlO2 and ZnP2, respectively. (b), (d) Phonon dispersions along $b$ (0.00, 0.25, 0.7) -$N$ (0.00, 0.25, 0.5) -$b^{\prime}$ (0.00, 0.25, 0.3), and $c$ (0.00, 0.7, 0.25) -$Q$ (0.00, 0.5, 0.25) -$c^{\prime}$ (0.00, 0.3, 0.25), respectively. All the points at the $Q$ and $N$ points are two-fold degenerate points with linear phonon band dispersions. Finally, some realistic materials with SG Nos. 198, 205, 212, and 213 are presented, which host three-NS phonons, i.e., NSRXM. The first example is $P2_{1}$3-type NiSbSe (SG No. 198). It was prepared by letting powders of binary nickel chalcogenides react with the respective pnictogen component in evacuated sealed silica tubes add73 . The second example is $Pa\bar{3}$-type As2Pt (SG No. 205). Ramsdell add74 produced artificial PtAs2, which is identical to natural sperrylite. The third example is $P4_{3}$32-type BaSi2 (SG No. 212). This compound is an interesting material add75 that can host three types of polymorphs (orthorhombic, trigonal, and cubic crystal classes with $Pnma$, $P\bar{3}m1$, and $P4_{3}32$ SGs) at up to 40 kbar and 1000 ${}^{\circ}C$. $P4_{3}$32-type BaSi2 represents one kind of the high-pressure phases. The fourth example is $P4_{1}$32-type CsBe2F5 (SG No. 213). Le Fur and Al$\rm{\acute{e}}$onard add76 dissolved Cs2CO2 carbonate in a hydrofluoric solution containing excess BeF2. A single CsBe2F5 crystal can be obtained via evaporation at 55 ${}^{\circ}C$. The crystal structures of these materials are shown in Fig. 7. We determined their lattice constants with structural- relaxation calculations (see Fig. 7 and Table 4). Figure 7: (a)–(d) Crystal structures of $P2_{1}$3-type NiSbSe (SG No. 198), $Pa\bar{3}$-type As2Pt (SG No. 205), $P4_{3}$32-type BaSi2 (SG No. 212), and $P4_{1}$32-type CsBe2F5 (SG No. 213), respectively. Table 4: Theoretically and experimentally determined lattice constants of NiSbSe, As2Pt, BaSi2, and CsBe2F5 Materials | Theoretical lattice | Experimental lattice ---|---|--- | constants | constants add73 ; add74 ; add75 ; add76 NiSbSe | a = b = c = 6.13 Å | a = b = c = 6.08 Å As2Pt | a = b = c = 6.06 Å | a = b = c = 5.92 Å BaSi2 | a = b = c = 6.77 Å | a = b = c = 6.71 Å CsBe2F5 | a = b = c = 8.06 Å | a = b = c = 7.93 Å Figure 8: (a) Three-dimensional BZ and symmetry points. Three-NS states (green color) are localized on $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes in three- dimensional BZ. (b)–(e) Calculated phonon dispersions of NiSbSe, As2Pt, BaSi2, and CsBe2F5 materials, respectively. NS regions (i.e., NSRXM) are highlighted in green. We calculated the phonon dispersions of NiSbSe, As2Pt, BaSi2, and CsBe2F5 along the symmetry paths $\Gamma-X-M-\Gamma-R-X-M$ (see Fig. 8(a)); the results are shown in Fig. 8(b)–(d). Let us focus on the two-fold degenerate phonon bands along the $R$-$X$-$M$ paths (see Fig. 9(a), (c), (e), and (g)). To prove that these bands are degenerated, we chose the path $a$-$P$-$a^{\prime}$ that vertically passes through the $k_{y}$ = $\pi$ plane. The obtained phonon dispersions along the $a$-$P$-$a^{\prime}$ path for these materials are shown in Fig. 
9(b), (d), (f), and (h), respectively. There are two evident two-fold degenerate points at $P$ with linear band dispersions. We can conclude that an NS phonon exists on the $k_{y}$ = $\pi$ plane on which the two low-energy phonon bands cross linearly. Owing to $C_{3,111}$ symmetry, equivalent NS phonons can be found on the $k_{x}$ = $\pi$ and $k_{y}$ = $\pi$ planes. We would like to point out that although symmetry requires the occurrence of three-NS phonons and limits the possible positions on the $k_{i}$ = $\pi$ ($i$ = $x$, $y$, $z$) planes, it does not limit the frequencies and dispersions of three-NS phonons. Figure 9: (a), (c), (e), (g) Enlarged phonon dispersions of $R$-$X$-$M$ path (see Fig. 8(b)–(e)) of As2Pt, BaSi2, and CsBe2F5 materials, respectively. (b), (d), (f), (h) Phonon dispersions along $a$ (0.00, 0.70, 0.25) -$P$ (0.00, 0.50, 0.25) -$a^{\prime}$ (0.00, 0.30, 0.25) path. All points at the $P$ symmetry point are two-fold degenerate points with linear phonon band dispersions (indicated by red circles in (b), (d), (f), and (h)). ## V Summary and remarks In conclusion, according to the symmetry analysis results, there are three-NS phonons in the SGs with SG Nos. 19, 61, 62, 92, 96, 198, 205, 212, and 213 of the 230 SGs. More interestingly, by performing first-principles calculations, we discovered that the realistic materials $P2_{1}$2121-type YCuS2 (SG No. 19), $Pbca$-type NiAs2 (SG No. 61), $Pnma$-type SrZrO2 (SG No. 62), $P4_{1}$212-type LiAlO2 (SG No. 92), $P4_{3}$212-type ZnP2 (SG No. 96), $P2_{1}$3-type NiSbSe (SG No. 198), $Pa\bar{3}$-type As2Pt (SG No. 205), $P4_{3}$32-type BaSi2 (SG No. 212), and $P4_{1}$32-type CsBe2F5 (SG No. 213) include three-NS phonons in their phonon dispersions. We present the following remarks: (i) Because phonons obey Bose–Einstein statistics and are not limited by the Fermi energy, three-NS in the phonon system may be more common in realistic materials; (ii) Unlike fermions in electronic systems with heavy elements, SOC can be neglected for TPs in phonon systems. Hence, three-NS phonons in phonon systems can be considered real NS states without SOC-induced gaps; (iii) Although three-NS phonons in SGs 19, 61, 62, 92, 96, 198, 205, 212, and 213 can be determined by the combination of two-fold screw symmetry and time reversal symmetry, the frequencies and dispersions of three-NS phonons are not limited; (iv) One may ask what is the difference between three-NS and one-/two-NS states? As we know, constrained by the no-go theorem, among all the Weyl semimetals add77 discovered in experiments before 2019, Weyl points always occur in pairs in the momentum space, without exception. Interestingly, in 2019, as demonstrated by Yu et al. add78 , the three-NS state is a good platform for realizing a singular Weyl point by circumventing the no-go theorem. However, for two- and one-NS states, although Weyl points and NS states can coexist, there must be more than one Weyl point in the BZ. Fortunately, in this year, Ma et al. add77 observed a singular Weyl point surrounded by three-NSs in PtGa with SG No. 198 in an experiment. Acknowledgments X.T.W. thanks Prof. Zhi-Ming Yu for his help regarding to this manuscript. Y.L. is grateful for the support from the Nature Science Foundation of Hebei Province (No. A2021202002). X.T.W. is grateful for the support from the National Natural Science Foundation of China (No. 51801163) and the Natural Science Foundation of Chongqing (No. cstc2018jcyjA0765). ## References * (1) F. D. M. Haldane,?Rev. Mod. Phys. 
89, 040502 (2017). * (2) C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Rev. Mod. Phys. 88, 035005 (2016). * (3) N. Goldman, J. C. Budich, and P. Zoller, Nat. Phys. 12, 639 (2016). * (4) M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010). * (5) X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011). * (6) L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007). * (7) Y. Tokura, K. Yasuda, and A. Tsukazaki, Nat. Rev. Phys. 1, 126 (2019). * (8) L. Fu, Phys. Rev. Lett. 106, 106802 (2011). * (9) Y. Ando and L. Fu, Annu. Rev. Condens. Matter Phys. 6, 361 (2015). * (10) T. H. Hsieh, H. Lin, J. Liu, W. Duan, A. Bansil, and L. Fu, Nat. Commun. 3, 982 (2012). * (11) M. Dzero, K. Sun, V. Galitski, and P. Coleman, Phys. Rev. Lett. 104, 106408 (2010). * (12) M. Dzero, J. Xia, V. Galitski, and P. Coleman, Annu. Rev. Condens. Matter Phys. 7, 249 (2016). * (13) M. Dzero, K. Sun, P. Coleman, and V. Galitski, Phys. Rev. B 85, 045130 (2012). * (14) F. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. P. Parkin, B. A. Bernevig, and T. Neupert, Sci. Adv. 4, eaat0346 (2018). * (15) E. Khalaf, Phys. Rev. B 97, 205136 (2018). * (16) M. Ezawa, Phys. Rev. Lett. 120, 026801 (2018). * (17) R. Chen, C.-Z. Chen, J.-H. Gao, B. Zhou, and D.-H. Xu, Phys. Rev. Lett. 124, 036803 (2020). * (18) A. A. Burkov, Nat. Mater. 15, 1145 (2016). * (19) H. Gao, J. W. F. Venderbos, Y. Kim, and A. M. Rappe, Annu. Rev. Mater. Res. 49, 153 (2019). * (20) B. Q. Lv, T. Qian, and H. Ding, Rev. Mod. Phys. 93, 025002 (2021). * (21) D. C$\rm{\breve{a}lug\breve{a}ru,V.Juri\check{c}i\acute{c}}$, and B. Roy, Phys. Rev. B 99, 041301(R) (2019). * (22) Z. Zhang, Z.-M. Yu, and S. A. Yang, Phys. Rev. B 103, 115112 (2021). * (23) W. Wu, Z.-M. Yu, X. Zhou, Y. X. Zhao, and S. A. Yang, Phys. Rev. B 101, 205314 (2020). * (24) Z.-M. Yu, W. K. Wu, X.-L. Sheng, Y. X. Zhao, and S. Y. A. Yang, Phys. Rev. B 99, 121106(R) (2019). * (25) Z.-M. Yu, Z. Zhang, G.-B. Liu, W. Wu, X.-P. Li, R.-W. Zhang, S. A. Yang, and Y. Yao, arXiv:2102.01517 (2021). * (26) N. P. Armitage, E. J. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018). * (27) Q. D. Gibson, L. M. Schoop, L. Muechler, L. S. Xie, M. Hirschberger, N. P. Ong, R. Car, and R. J. Cava, Phys. Rev. B 91, 205128 (2015). * (28) B.-J. Yang and N. Nagaosa, Nat. Commun. 5, 4898 (2014). * (29) B. Yan and C. Felser, Annu. Rev. Condens. Matter Phys. 8, 337 (2017). * (30) A. A. Soluyanov, D. Gresch, Z. Wang, Q. Wu, M. Troyer, X. Dai, and B.?A. Bernevig, Nature (London) 527, 495 (2015). * (31) C. Fang, Y. Chen, H.-Y. Kee, and L. Fu, Phys. Rev. B 92, 081201 (2015). * (32) Q. Xu, R. Yu, Z. Fang, X. Dai, and H. Weng, Phys. Rev. B 95, 045136 (2017). * (33) T. He, X. Zhang, Y. Liu, X. Dai, G. Liu, Z.-M. Yu, and Y. Yao, Phys. Rev. B 102, 075133 (2020). * (34) L. Jin, X. M. Zhang, Y. Liu, X. F. Dai, X. N. Shen, L. Y. Wang, and G. D. Liu, Phys. Rev. B 102, 125118 (2020). * (35) H. Zhang, X. Zhang, T. He, X. Dai, Y. Liu, G. Liu, L. Wang, and Y. Zhang, Phys. Rev. B 102, 155116 (2020). * (36) W. Wu, Y. Liu, S. Li, C. Zhong, Z.-M. Yu, X.-L. Sheng, Y. X. Zhao, and S. A. Yang, Phys. Rev. B 97, 115125 (2018). * (37) X. M. Zhang, Z.-M. Yu, Z. M. Zhu, W. K. Wu, S.-S. Wang, X.-L. Sheng, and S. A. Yang, Phys. Rev. B 97, 235150 (2018). * (38) B.-B. Fu, C.-J. Yi, T.-T. Zhang, M. Caputo, J.-Z. Ma, X. Gao, B. Q. Lv, L.-Y. Kong, Y.-B. Huang, P. Richard, M. Shi, V. N. Strocov, C. Fang, H.-M. Weng, Y.-G. Shi, T. Qian, and H. Ding, Sci. Adv. 5, eaau6459 (2019). * (39) S. Z. Chen, S. Li, Y. Chen, and W. 
Duan, Nano Lett. 20, 5400 (2020). * (40) Q.-F. Liang, J. Zhou, R. Yu, Z. Wang, and H. Weng, Phys. Rev. B 93, 085427 (2016). * (41) A. Topp, R. Queiroz, A. Grneis, L. M$\rm{\ddot{u}}$chler, A. W. Rost, A. Varykhalov, D. Marchenko, M. Krivenkov, F. Rodolakis, J. L. McChesney, B. V. Lotsch, L. M. Schoop, and C. R. Ast, Phys. Rev. X 7, 041073 (2017). * (42) C. Chen, X. Xu, J. Jiang, S.-C. Wu, Y. P. Qi, L. X. Yang, M. X. Wang, Y. Sun, N. B. M. Schr$\rm{\ddot{o}}$ter, H. F. Yang, L. M. Schoop, Y. Y. Lv, J. Zhou, Y. B. Chen, S. H. Yao, M. H. Lu, Y. F. Chen, C. Felser, B. H. Yan, K. Liu, and Y. L. Chen, Phys. Rev. B 95, 125126 (2017). * (43) F. Zhou, Y. Liu, J. H. Wang, M. Q. Kuang, T. Yang, H. Chen, X. T. Wang, and Z. X. Cheng, Phys. Rev. Materials 5, 074201 (2021). * (44) T. Yang and X. Zhang, J. Mater. Chem. C 8, 9046 (2020). * (45) O. Stenull, C. L. Kane, and T. C. Lubensky, Phys. Rev. Lett. 117, 068001 (2016). * (46) Y. Liu, X. Chen, and Y. Xu, Adv. Funct. Mater. 30, 1904784 (2019). * (47) T. T. Zhang, Z. D. Song, A. Alexandradinata, H. M. Weng, C. Fang, L. Lu, and Z. Fang, Phys. Rev. Lett. 120, 016401 (2018). * (48) J. X. Li, Q. Xie, S. Ullah, R. H. Li, H. Ma, D. Z. Li, Y. Y. Li, and X.-Q. Chen, Phys. Rev. B 97, 054305 (2018). * (49) H. Miao, T. T. Zhang, L. Wang, D. Meyers, A. H. Said, Y. L. Wang, Y. G. Shi, H. M. Weng, Z. Fang, and M. P. M. Dean, Phys. Rev. Lett. 121, 035302 (2018). * (50) B. W. Xia, R. Wang, Z. J. Chen, Y. J. Zhao, and H. Xu, Phys. Rev. Lett. 123, 065501 (2019). * (51) Q.-B. Liu, Y. Qian, H.-H. Fu, and Z. Wang, Npj Comput. Mater. 6, 95 (2020). * (52) Q.-B. Liu, Z. Wang, and H.-H. Fu, Phys. Rev. B 103, L161303 (2021). * (53) J. Liu, W. Hou, E. Wang, S. Zhang, J.-T. Sun, and S. Meng, Phys. Rev. B 100, 081204(R) (2019). * (54) Y. J. Jin, Z. J. Chen, X. L. Xiao, and H. Xu, Phys. Rev. B 103, 104101 (2021). * (55) J.-Y. You, X.-L. Sheng, and G. Su, Phys. Rev. B 103, 165143 (2021). * (56) G. Liu, Y. J. Jin, Z. G. Chen, and H. Xu, Phys. Rev. B 104, 024304 (2021). * (57) B. Zheng, B. Xia, R. Wang, Z. Chen, J. Zhao, Y. Zhao, and H. Xu, Phys. Rev. B 101, 100303(R) (2020). * (58) Y. J. Jin, Z. J. Chen, B. W. Xia, Y. J. Zhao, R. Wang, and H. Xu, Phys. Rev. B 98, 220103(R) (2018). * (59) R. Y. Wang, Z. J. Chen, Z. Q. Huang, B. W. Xia, and H. Xu, Phys. Rev. Materials 5, 084202 (2021). * (60) B. B. Zheng, F. Y. Zhan, X. Z. Wu, R. Wang, and J. Fan, Phys. Rev. B 104, L060301 (2021). * (61) J. J. Zhu, W. K. Wu, J. Z. Zhao, Hao. Chen, L. F. Zhang, S. Y. A. Yang, arXiv:2104.14816 (2021). * (62) Q.-B. Liu, H.-H. Fu, and R. Q. Wu, Phys. Rev. B 104, 045409 (2021). * (63) J. Li, J. Liu, S. A. Baronett, M. Liu, L. Wang, and R. Li, Y. Chen, D. Li, Q. Zhu, and X.-Q. Chen, Nat. Commun. 12, 1204 (2021). * (64) T. T. Zhang, H. Miao, Q. Wang, J. Q. Lin, Y. Cao, G. Fabbris, A. H. Said, X. Liu, H. C. Lei, Z. Fang, H. M. Weng, and M. P. M. Dean, Phys. Rev. Lett. 123, 245302 (2019). * (65) Q.-B. Liu, Z.-Q. Wang, and H.-H. Fu, Phys. Rev. B 104, L041405 (2021). * (66) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). * (67) A. Togo and I. Tanaka, Scr. Mater. 108, 1 (2015). * (68) L. D. Gulay, V. Ya. Shemet, and I. D. Olekseyuk, J. Alloys Compd. 388, 59 (2005). * (69) J. J. Murray and R. D. Heyding, Can. J. Chem. 45, 2675 (1967). * (70) L. S. Cavalcante, A. Z. Simoes, E. Longo, J. C. Sczancoski, R. Erlo, M. T. Escote, V. M. Longo, and J. A. Varela, Solid State Sci. 9, 1020 (2007). * (71) J. P. Remeika and A. A. Ballman, Appl. Phys. Lett. 5, 180 (1964). * (72) K. B. Aleinikova, A. I. 
Kozlov, S. G. Kozlova, and V. V. Sobolev, Phys. Solid State, 44, 1257 (2002). * (73) A. J. Foecker and W. Jeitschko, J. Sol. State. Chem. 162, 69 (2001). * (74) L. S. Ramsdell, Am. Mineral. 10, 281 (1925). * (75) J. Evers, J. Solid State Chem. 32, 77 (1980). * (76) Y. L. Fur and S. Al$\rm{\acute{e}}$onard, Acta Cryst. 28, 2115 (1972). * (77) J.-Z. Ma, Q.-S. Wu, M. Song, S.-N. Zhang, E. B. Guedes, S. A. Ekahana, M. Krivenkov, M. Y. Yao, S.-Y. Gao, W.-H. Fan, T. Qian, H. Ding, N. C. Plumb, M. Radovic, J. H. Dil, Y.-M. Xiong, K. Manna, C. Felser, O. V. Yazyev, and M. Shi, Nat. Commun. 12, 1 (2021). * (78) Z.-M. Yu, W. Wu, Y. X. Zhao, and S. A. Yang, Phys. Rev. B 100, 041118(R) (2019).
# Exemplar-based Pattern Synthesis with Implicit Periodic Field Network Haiwei Chen1,2 Jiayi Liu1,2 Weikai Chen3 Shichen Liu1,2 Yajie Zhao1,2 1University of Southern California 2USC Institute for Creative Technologies 3Tencent Game AI Research Center <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Synthesis of ergodic, stationary visual patterns is widely applicable in texturing, shape modeling, and digital content creation. The wide applicability of this technique thus requires the pattern synthesis approaches to be scalable, diverse, and authentic. In this paper, we propose an exemplar- based visual pattern synthesis framework that aims to model the inner statistics of visual patterns and generate new, versatile patterns that meet the aforementioned requirements. To this end, we propose an implicit network based on generative adversarial network (GAN) and periodic encoding, thus calling our network the Implicit Periodic Field Network (IPFN). The design of IPFN ensures scalability: the implicit formulation directly maps the input coordinates to features, which enables synthesis of arbitrary size and is computationally efficient for 3D shape synthesis. Learning with a periodic encoding scheme encourages diversity: the network is constrained to model the inner statistics of the exemplar based on spatial latent codes in a periodic field. Coupled with continuously designed GAN training procedures, IPFN is shown to synthesize tileable patterns with smooth transitions and local variations. Last but not least, thanks to both the adversarial training technique and the encoded Fourier features, IPFN learns high-frequency functions that produce authentic, high-quality results. To validate our approach, we present novel experimental results on various applications in 2D texture synthesis and 3D shape synthesis. ## 1 Introduction The synthesis of visual patterns, may that be a wooden texture for painting, or a simulation of natural cave systems, is a technique that is applied ubiquitously in computer-aided design and digital content creation. Visual patterns can be understood as arts, shapes, or natural textures following certain geometric structures. In an application context, let us start by defining several characteristics that are desirable for an algorithm that generates visual patterns: * • Authenticity. Probably the most prioritized quality of synthesized visual patterns is its visual quality. When patterns are synthesized from an exemplar, the quality is determined by whether they faithfully recreate the source pattern. * • Diversity. It would be undesirable for a synthesizer to only copy patterns from the source. Diversity is thus an equally important measurement that evaluates whether the synthesized patterns vary from the source and each other. We strive to achieve two different levels of diversity: the patterns should be diversified both within a generated sample and across samples. * • Scalability. As patterns are usually demanded at different and potentially large scales for many practical applications, we want a scalable synthesizer to be able to efficiently generate patterns of arbitrary size. Scalability is particularly valuable when it comes to the synthesis of 3D models, as the extra dimension translates to a much larger amount of computations. A scalable design choice leads us to formulate the synthesis problem as generating patterns from a continuous, real coordinate space. 
This is generally known as the implicit formulation, where a nonlinear function maps points defined in $\mathbb{R}^{2}$ or $\mathbb{R}^{3}$ to features that represent the synthesized subjects. In particular, the implicit function has been shown to be an efficient representation for synthesizing 3D volumes [henzler2020learning, park2019deepsdf, mildenhall2020nerf]. Patterns that scale well to an infinitely large space, in general, possess a stationary property - a shift-invariant structure that can be expanded by tiling or stacking blocks of elements. We therefore develop our method by pivoting on the fact that many types of natural and artistic patterns can be analyzed and recreated in a stationary framework. The goal of synthesizing an authentic and diverse stationary pattern from an exemplar, however, requires careful modeling that is compatible with the underlying structure of the pattern. Generative adversarial networks (GAN) [goodfellow2014generative] is one of the most promising techniques so far to model data distribution in an unsupervised manner and has been frequently adapted to convolutional models that synthesize visually authentic images [shaham2019singan, jetchev2016texture, bergmann2017learning, zhou2018non, xian2018texturegan, shocher2019ingan]. How would a GAN generator be leveraged to modeling stationary pattern? As all stationary patterns contain a repeating structure with local variations that “vivifies” its appearance, in an ideal situation, a stationary pattern can be modeled by a discrete random field, where each random variable is associated with a patch of the basic element. Thus a natural GAN formulation models image patches with a spatially defined latent field [jetchev2016texture, bergmann2017learning]. In a convolutional framework, however, problems arise when fake samples generated from a discrete noise image are discriminated from randomly sampled patches from a real image. The first problem is that the sampled patch does not necessarily agree with the scale of the repeating structure. The second problem is that the sampled patch can be arbitrarily shifted from the center of a stationary element. A typical deconvolutional network [dong2015image, zeiler2010deconvolutional] that upsamples from an evenly-spaced noise image may not sufficiently address the previously mentioned problems. To study their effects we designed a convolutional network following the DCGAN architecture [radford2015unsupervised] to synthesize a honeycomb pattern from a $2\times 2$ noise map, which is trained against random patches sampled from a source image. The comparison between its result and that synthesized by our generator network that is trained with an identical discriminator is shown in Figure. 1. We found that the noise map does not capture well the honeycomb structure as seams and intercepting elements are visible from the synthesized image. Though various techniques have been proposed in the past to address the aforementioned issues (e.g. [jetchev2016texture, bergmann2017learning]), in this paper, we consider a more natural way to model stationary patterns with an implicit periodic generator. The core of the formulation is to match the repeatable structure of a stationary pattern to the period of a learnable continuous periodic function. Instead of modeling the pattern with a discrete noise tensor, we define latent variables in a continuous space, where the extent of each latent factor is learned to match with the extent of the repeating elements. 
The benefits of this design align well with the desirable characteristics for visual pattern synthesis: 1) learned periodicity of the implicit field encourages latent factors to model the stationary variations observed from the exemplar pattern; 2) a continuous representation provides flexibility during training to learn a distribution from randomly shifted patches cropped from the exemplar; 3) a Fourier encoding scheme learns high- frequency details from the exemplar. This allows our model to synthesize visually authentic results, and 4) multilayer perceptron (MLP) that implicitly takes coordinates as input scales well to the generation of 3D shapes when compared to 3D convolution. Based on these design choices, we term our network the Implicit Periodic Field Network (IPFN). We validate our proposed design by showing various applications in texture image synthesis and 3D volume synthesis in Section. 4. Specifically, besides synthesizing stationary patterns, we design a conditional formulation of our model to tackle the synthesis of directional patterns and to provide application in controllable shape generation. An ablation study is also conducted to verify the effectiveness of our design choices. Figure 1: Comparison between synthesized honeycomb from a DCGAN convolution generator and the periodic MLP generator. Seams and intercepting patterns are visible in the former result due to difficulty for the convolution generator to capture the repeating structure. ## 2 Related Work #### Pattern Synthesis 2D visual patterns are generally referred to as “texture” due to their prevalent applications in computer-aided design. Well-known early attempts to synthesize texture derive patterns from smoothly interpolated noise [perlin1985image, perlin2002improving, worley1996cellular] and create aesthetically pleasing materials that display a high level of randomness. [henzler2020learning, portenier2020gramgan] are two recent works related to us that utilize randomness from a continuously-defined Perlin noise [perlin1985image] to synthesize exemplar-based textures in an implicit field. Their works demonstrate the advantage of smooth noise field and implicit formulation in efficiently and diversely generating a 3D texture field. However, just like their procedural precedence, these methods have been shown to be limited in synthesizing patterns that are more complicated in structures (e.g. bricks, rocky surface) [portenier2020gramgan]. Later works in traditional image synthesis are generally divided into pixel- based method (e.g. [efros1999texture, wei2003texture]), patch-based method (e.g. [kopf2007solid, efros2001image, kwatra2003graphcut]) and optimization- based method (e.g. [kopf2007solid, kwatra2005texture]) and have shared with us important considerations for recreating new patterns from an exemplar. For instance, synthesis on a patch level encourages preservation of fine local details, and the efforts are focused on ”quilting” the discontinuity between patches [efros2001image, kwatra2003graphcut] and encouraging global similarity [kopf2007solid]. Early statistical model [efros1999texture, portilla2000parametric] utilizes a random field representation that captures variations in stationary patterns and synthesizes a variety of new patterns under the Julesz’ conjecture. 
Our work is inspired by key ideas from the traditional modeling of stationary textures, while we strive to unify authenticity, diversity, and scalability with a neural representation to overcome limitations that existed in the traditional approaches. Compared to earlier parametric models, the artificial neural network is powerful in its generalizability in pattern recognition [hansen1990neural]. Thus neural synthesis of texture images has marked the advances in recent endeavors. How does a neural network learn the stylish representation of a texture without simply reconstructing it? A milestone that unifies both synthesized quality and sampling power adversarially trains a generator network and a discriminator network to learn the mapping from a latent space to the texture distribution [goodfellow2014generative]. We are particularly interested in the generative adversarial networks (GANs) that models the inner statistics of the exemplars. This is marked by a patch-based approach that represents an image as a collection of smaller images: [li2016precomputed, jetchev2016texture] formalizes patch-based synthesis in the GAN setting with the concept of Markovian and spatial GAN. [bergmann2017learning] motivated us with its periodic design in latent codes, which effectively captures stationary patterns from an input exemplar. [zhou2018non] can be seen as a complement to our design by focusing on addressing non-stationary texture synthesis by learning to expand an image patch. In addition, [shaham2019singan, shocher2019ingan] present multi-scale, global-focused approaches to effectively recreate natural images. While the aforementioned approaches all utilize convolutional designs, our work extends texture synthesis to the continuous domain with an implicit representation of textures, as we argue that such representation provides a more natural and efficient way to synthesize stationary patterns. The synthesis of 3D shapes is of particular interest in computer graphics and thus has a rich history. To name a few, this includes volumetric field design [zhang2006vector, palacios2016tensor, chen2016synthesis], procedural generation [ijiri2008example, merrell2007example] and 3D texturing [zhou2006mesh, bhat2004geometric]. However, very few works have considered the synthesis of 3D patterns with neural networks, with the exception of [henzler2020learning, portenier2020gramgan], which explores the generation of 3D solid textures. #### Implicit Network Implicit network refers to multilayer perceptron that learns a continuous function in real coordinate space. In particular, the implicit network is mainly utilized in the reconstruction of 3D shapes [park2019deepsdf, saito2019pifu, liu2019learning, sitzmann2020implicit], where shapes are represented by a distance function, or a radiance field [mildenhall2020nerf, yu2021pixelnerf]. We are motivated by the signed distance representation of shapes and the Fourier encoding explored in [mildenhall2020nerf, tancik2020fourier] in our design of the implicit networks, and our work adopts these features to a generative setting where novel 3D patterns are synthesized. 
## 3 Method Our method is best introduced by expanding from the Wasserstein GAN value function [gulrajani2017improved] constructed as the Earth-Mover distance between the real data distribution $\mathbb{P}_{data}$ and the generator distribution $\mathbb{P}_{g}$: $\underset{G}{\min}\>\underset{D}{\max}\>\mathbb{E}_{\bm{x}\sim\mathbb{P}_{data}}[\log(D(\bm{x}))]-\mathbb{E}_{\bm{z}\sim\mathbb{P}_{Z}}[\log(D(G(\bm{z}))].$ (1) Our first change of the above objective is to draw real-valued coordinates $\bm{c}\in\mathbb{R}^{k}$ ($k=2$ for image and $k=3$ for volume) from a distribution $\\{s\bm{c}\>|\>\bm{c}\sim\mathbb{P}_{\bm{c}}\\}$ as input to the generator, where $s$ is a constant scalar. This underlies the implicit nature of the generative network, as $G$ learns a mapping from the real coordinate space in a stochastic process. Instead of being sampled from a prior distribution $\mathbb{P}_{Z}$, latent variables are drawn from a random field $f_{z}(\bm{c})$ defined on the real coordinate space. $G$ is therefore a function of both the coordinates $\bm{c}\in\mathbb{R}^{k}$ and the latent variables $f_{z}(\bm{c})\in\mathbb{R}^{d}$. This updates Eq.1 to $\begin{split}\underset{G}{\min}\>\underset{D}{\max}\>&\mathbb{E}_{\bm{x}\sim\mathbb{P}_{data}}[\log(D(\bm{x}))]-\\\ &\mathbb{E}_{\bm{c}\sim\mathbb{P}_{c},\bm{z}\sim\mathbb{P}_{f_{z}(\bm{c})}}[\log(D(G(\bm{z},\bm{c}))].\end{split}$ (2) In the implementation, our goal is to synthesize color, distance function, normal vectors, etc. that are defined in a grid-like structure $X:\mathbb{R}^{H\times W\times C}$ or $X:\mathbb{R}^{H\times W\times D\times C}$ and thus we also sample grid coordinate input $C:\mathbb{R}^{H\times W\times 2}$ or $C:\mathbb{R}^{H\times W\times D\times 3}$ whose center is drawn from a uniform distribution $U(-s,s)$. The randomness to the grid center position is critical in encouraging smooth and seamless transition between blocks of patterns as it models the distribution of randomly sampled patches from the input pattern. In the following sections, key modules are discussed in detail: 1) a deformable periodic encoding of the coordinates to model stationary patterns; 2) implementation of the latent random field; 3) a conditional variance of our objective for the synthesis of directional exemplar and controllability of the synthesized patterns, and 4) the overall network structure that completes our pattern synthesis framework. ### 3.1 Periodic Encoding As discussed in the introduction, it is critical to represent stationary patterns from the input by a repeated structure that avoids simply reconstructing the original exemplar. Simply mapping spatial coordinates to the visual pattern does not satisfy this requirement: since each coordinate is unique in the real-value space, the network would learn to overfit the coordinates to their associated positions in the exemplar and therefore fail to capture a repeatable structure. The benefits of a periodic encoding to the coordinates are two-fold: Firstly, it disentangles patch-level appearance from their specific position in the exemplar, which allows the pattern synthesized by the generator to be shift- invariant. Secondly, recent advances in implicit network [mildenhall2020nerf, tancik2020fourier, sitzmann2020implicit] have found a Fourier feature mapping with a set of sinusoids effective in learning high-frequency signals by addressing the “spectral bias”[basri2020frequency] inherent to MLPs. 
In our work, we use the following periodic mapping for the input coordinates: $\begin{split}\gamma(\bm{c})=[\cos(2^{0}\pi a\bm{c}),\sin(2^{0}\pi a\bm{c}),&\\\ \cdot\cdot\cdot,\cos(2^{i}\pi a\bm{c}),\sin(2^{i}\pi a\bm{c})],\end{split}$ (3) where the learnable parameter $a\in\mathbb{R}^{k}$ determines the period of the encoding. This design allows the network to learn to match the period of the encoding to the repeatable structure of the exemplar. It thus provides robustness to the scale of the patches sampled in training, as this scale no longer dictates the period of the synthesized pattern.

### 3.2 Latent Random Field

In a generative framework, latent noise sampled from a prior distribution models the variation in observations. A noise function that is smoothly defined over the real coordinate space encourages smooth transitions between the synthesized patches. In a 2D example, we start with a discrete random field that maps a uniform grid of coordinates to random variables $\\{f_{z}(\bm{c})\>|\>\bm{c}\in\mathbb{R}^{H\times W\times 2}\\}$. The discrete random field is then smoothly interpolated to form a smooth latent field (see the visualization in Figure 2). In our implementation, we use an exponential interpolation: $f(\bm{x})=\sum_{i=1}^{K}w_{i}(\bm{x})f_{z}(\bm{c}_{i}),\>w_{i}(\bm{x})=\frac{\mathrm{e}^{-\frac{||\bm{x}-\bm{c}_{i}||_{2}}{\sigma}}}{\sum_{j=1}^{K}\mathrm{e}^{-\frac{||\bm{x}-\bm{c}_{j}||_{2}}{\sigma}}},$ (4) where the latent code at spatial position $\bm{x}$ is interpolated from the $K=4$ latent vectors defined at the surrounding grid corners, so that nearer corners receive larger weights. In implementation, the discrete grid used to define the random field has a spacing of 1. To match the extent of a latent factor with the learned period $a$, we simply scale the uniform grid of the discrete random field accordingly.

### 3.3 Conditional IPFN

Extending our GAN objective to be conditional enables many practical applications. Assuming each input patch is paired with a guidance factor $\bm{g}$, the conditional objective is a simple extension: $\begin{split}\underset{G}{\min}\>\underset{D}{\max}\>&\mathbb{E}_{\bm{x}\sim\mathbb{P}_{data}}[\log(D(\bm{x}|\bm{g}))]-\\\ &\mathbb{E}_{\bm{c}\sim\mathbb{P}_{c},\bm{z}\sim\mathbb{P}_{f_{z}(\bm{c})}}[\log(D(G(\bm{z},\bm{c}|\bm{g})))].\end{split}$ (5) Here we outline two applications in pattern synthesis using the conditional formulation:

* • Synthesis of directional patterns: Many natural patterns have a directional distribution that is often considered non-stationary. A typical example is a leaf texture: a midrib defines the major direction that separates the blade into two halves (see Figure 3). A conditional extension of our model is able to model the patch distribution along a specified direction. For simplicity, we present a 2D formulation that can easily be extended to 3D. With a user-defined implicit 2D line equation $ax+by+c=0$, the guidance factor is defined as $\bm{g}(x,y)=ax+by+c$. Pixel coordinates $(p_{x},p_{y})$ from an input texture image with width $w$ and height $h$ are transformed as $(x,y)=(\frac{2p_{x}}{w}-1,\frac{2p_{y}}{h}-1)$ so that they are normalized to the value range $[-1,1]$. In our experiments we have found it sufficient to condition our model on the horizontal ($y=0$) and vertical ($x=0$) directions for the evaluated exemplars.

* • Controlling synthesis of 3D shapes: In the modeling of geometric patterns, it is often desirable for the synthesis algorithm to provide explicit control of certain geometric properties such as density and orientation.
These geometric properties can be calculated from the exemplar shape. Letting $g(x)$ be a shape operator that defines the geometric property of interest, our conditional model trained with the sample pairs $(x,g(x))$ then learns a probabilistic mapping from a guidance vector field to the target 3D shape. An intuitive example of this application can be found in Section 4.5.

### 3.4 Network Structure

Figure 2: Overview of our network architecture discussed in Section 3.4.

The overall structure of IPFN is visualized in Figure 2. The generator network $G$ is a 10-layer MLP with ReLU activations between layers and a sigmoid function at the end. A grid of coordinates is sampled around a randomly shifted center. The coordinates are then passed to two separate branches: 1) the periodic encoder, and 2) a projection onto the latent field to obtain a 5-dimensional latent vector for each coordinate. The latent codes and the periodically encoded coordinates are then concatenated as input to the generator MLP, which outputs the fake sample. The discriminator $D$ discriminates between the generated samples and randomly cropped patches from the real input. We implement $D$ by following the DCGAN architecture [radford2015unsupervised] with a stride of 2. For discriminating 3D volumes, 2D convolution layers are replaced by 3D convolutions.

Figure 3: Main results for 2D texture synthesis with comparisons to Henzler et al. [henzler2020learning], Bergmann et al. [bergmann2017learning], and Zhou et al. [zhou2018non] on synthesizing two stationary patterns (top four rows) and two directional patterns (bottom four rows).

## 4 Experiments

We hypothesize that our approach is most suitable for synthesizing texture patterns and 3D shapes with repeating structures and local variations in appearance. To this end, we demonstrate our main results by applying IPFN to the synthesis of 2D texture images and 3D structured shapes. In addition, IPFN is adapted to two applications in 3D texturing and shape manipulation. To evaluate the effectiveness of the proposed techniques, we have also conducted an ablation study in which several key designs are altered.

Evaluation metric. While the quality of pattern synthesis is not easily quantifiable, human eyes usually provide a reasonable qualitative assessment of whether the synthesized patterns capture the aesthetics and structure of the exemplar. In our evaluation, we present comparisons of visual results that are self-manifesting, since the synthesized patterns bear obvious characteristics of the underlying designs of the synthesizer. In addition, we provide quantitative metrics in terms of Single Image Fréchet Inception Distance, inference time, and memory.

Implementation details. For all of our experiments, the network is optimized under the WGAN loss with gradient penalty [gulrajani2017improved]. The Adam optimizer [kingma2014adam] is used with a learning rate of $1e-4$ for both the discriminator $D$ and the generator $G$. In each iteration, both $D$ and $G$ are updated for 5 steps sequentially. Input images and volumes are randomly cropped into smaller-scale patches. For the positional encoding, we choose a bandwidth $i=5$, as a wider kernel tends to produce sinusoidal artifacts, whereas a narrower kernel produces blurry results. The input coordinates are randomly shifted by an offset in the range $[-4,4]$ to accommodate the chance that the network may learn an increased period for the periodic encoding.
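To make the input pipeline of Sections 3.1–3.4 concrete, the following is a minimal PyTorch-style sketch of the shifted coordinate grid, the deformable periodic encoding of Eq. (3), an exponential latent-field interpolation in the spirit of Eq. (4), and the MLP generator. All module names, tensor shapes, and hyperparameter values (bandwidth, shift range, hidden width) are illustrative assumptions rather than the authors' released implementation.

```python
import math
import torch
import torch.nn as nn


class PeriodicEncoding(nn.Module):
    """Deformable Fourier encoding of coordinates with a learnable period (cf. Eq. 3)."""
    def __init__(self, in_dim=2, bandwidth=5):
        super().__init__()
        self.register_buffer("freqs", (2.0 ** torch.arange(bandwidth)) * math.pi)
        self.a = nn.Parameter(torch.ones(in_dim))          # learnable period parameter a

    def forward(self, c):                                   # c: (N, in_dim) coordinates
        angles = (c * self.a).unsqueeze(-1) * self.freqs    # (N, in_dim, bandwidth)
        return torch.cat([torch.cos(angles), torch.sin(angles)], dim=-1).flatten(1)


def sample_shifted_grid(h, w, shift_range=4.0):
    """h x w coordinate grid whose center is drawn from U(-shift_range, shift_range),
    modelling the randomly cropped patches used for training."""
    center = (torch.rand(2) * 2.0 - 1.0) * shift_range
    ys = torch.arange(h, dtype=torch.float32) - (h - 1) / 2.0
    xs = torch.arange(w, dtype=torch.float32) - (w - 1) / 2.0
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (h, w, 2)
    return (grid / max(h, w) + center).reshape(-1, 2)


def interpolate_latent(x, corners, z_corners, sigma=1.0):
    """Exponential (distance-decaying) interpolation of corner latents, cf. Eq. 4."""
    d = torch.cdist(x, corners)                    # (N, K) distances to the grid corners
    w = torch.softmax(-d / sigma, dim=-1)          # nearer corners receive larger weights
    return w @ z_corners                           # (N, z_dim)


class Generator(nn.Module):
    """10-layer MLP mapping [periodic encoding, latent code] to output channels."""
    def __init__(self, enc_dim, z_dim=5, hidden=256, out_channels=3, layers=10):
        super().__init__()
        dims = [enc_dim + z_dim] + [hidden] * (layers - 1) + [out_channels]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            blocks.append(nn.ReLU() if i < len(dims) - 2 else nn.Sigmoid())
        self.mlp = nn.Sequential(*blocks)

    def forward(self, enc, z):
        return self.mlp(torch.cat([enc, z], dim=-1))


# Toy usage: synthesize one 128 x 128 RGB patch from coordinates and noise alone.
encoder = PeriodicEncoding(in_dim=2, bandwidth=5)
coords = sample_shifted_grid(128, 128)
corners = torch.tensor([[-5.0, -5.0], [-5.0, 5.0], [5.0, -5.0], [5.0, 5.0]])
z = interpolate_latent(coords, corners, torch.randn(4, 5))
G = Generator(enc_dim=2 * 2 * 5, z_dim=5)
patch = G(encoder(coords), z).reshape(128, 128, 3)
```

In the full framework, such a generated patch would be scored by the convolutional discriminator against randomly cropped real patches under the WGAN-GP objective.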
Accordingly, noise vectors are interpolated from a $5\times 5$ grid ($5^{3}$ for 3D volumes) discrete random field, whose point locations range between $[-5,5]$. A single-exemplar experiment is typically trained for 12,500 iterations with a batch size of 8 and runs on a single Nvidia GTX 1080 GPU, which takes about 6-9 hours to complete.

Inference Time: IPFN only requires 24 milliseconds to generate a $1024\times 1024$ image. 3D volumes are generated iteratively, and a large-scale $512^{3}$ volume takes only 22.9 seconds to generate. Source code will be made publicly available upon acceptance.

### 4.1 Texture Pattern Synthesis

Image sources selected from the Oxford Describable Textures Dataset (DTD) [cimpoi2014describing] are shown in Figure 3. Specifically, we selected two exemplars with stationary patterns (top 4 rows in Figure 3) and two exemplars with directional patterns (bottom 4 rows in Figure 3) to demonstrate that IPFN synthesizes visually similar patterns in both cases. During training, images were randomly cropped into patches of size $128\times 128$. During inference, the synthesized images were scaled up four times to a size of $512\times 512$. Our results are compared to the three most relevant baseline generative methods that synthesize texture patterns from a single image:

* • Henzler et al. [henzler2020learning]: A method that similarly utilizes an implicit network and a smooth noise field for texture synthesis. The synthesized results were obtained by running the officially released code.

* • Bergmann et al. [bergmann2017learning]: A convolutional method that combines noise with periodic signals to synthesize stationary patterns. The synthesized results were obtained by running the officially released code.

* • Zhou et al. [zhou2018non]: A convolutional image expansion approach targeted at non-stationary texture synthesis. Since [zhou2018non] expands from an image input deterministically and does not utilize latent codes, only one synthesized result is shown per row. The synthesized results were obtained from the authors.

Visual inspection is sufficient to show that IPFN provides promising results. When compared to [henzler2020learning], IPFN synthesizes results with clearer structure, as noise is not directly mapped to the output. While [bergmann2017learning] synthesizes periodic samples that display diversity across samples and similarity to the stationary exemplars, its synthesized patterns lack variation within each image. In comparison, our synthesized patterns show a higher level of local variation and adapt well to the directional cases. [zhou2018non] provides the most visually authentic results among the baselines. However, in the stationary cases, radial distortion is noticeable near the boundaries of its synthesized images. Moreover, without requiring an image input, IPFN provides a more direct approach to synthesizing diversified samples from random noise.

Method | honey | crosshatch | rock | leaf
---|---|---|---|---
Henzler [henzler2020learning] | 332.66 | 310.49 | 351.23 | 225.11
Bergmann [bergmann2017learning] | 62.75 | 177.88 | 120.64 | 164.37
Zhou [zhou2018non] | 14.54 | 154.63 | 118.29 | 38.13
Ours | 10.15 | 130.83 | 113.81 | 103.6

Table 1: SIFID scores between the exemplars and the generated patterns from ours and different baselines.

Single Image Fréchet Inception Distance (SIFID). SIFID, introduced in [shaham2019singan], is a metric commonly used to assess the realism of generated images.
For the computation of SIFID, we use a patch size of $128\times 128$ in all experiments, where the synthesized patterns have the same resolution as the original exemplars. Table 1 shows the SIFID comparisons between ours and the baselines on various categories of exemplars. For Zhou et al. [zhou2018non], only the generated (expanded) portion of the images was used. The results show that our method generates results that better resemble the distribution of the real texture in the stationary categories (honey, crosshatch, rock), as our generated patterns receive lower SIFID scores. For the leaf category, a typical directional pattern, Zhou et al. [zhou2018non] achieves the best performance, as their method specifically targets non-stationary expansion, while our method still performs better than the other baselines, which do not take the synthesis of non-stationary patterns into consideration.

Time (ms) / memory (GB) | $128^{2}$ | $256^{2}$ | $512^{2}$ | $1024^{2}$
---|---|---|---|---
Henzler [henzler2020learning] | 218/1.38 | 278/1.62 | 328/2.72 | 458/6.45
Bergmann [bergmann2017learning] | 7/2.37 | 13/5.79 | 42/19.68 | 115/31.88
Zhou [zhou2018non] | 356/1.20 | 349/1.34 | 510/2.00 | 612/4.66
Ours | 8/0.76 | 11/0.85 | 15/1.23 | 24/2.81

Table 2: Comparisons of inference time and inference memory consumption, measured in milliseconds (ms) / gigabytes (GB), when patterns of increasing size (top row) are generated.

Inference Time and Memory. Table 2 measures the inference time and memory consumption of our network compared to the baselines when generating images of different sizes. Our implicit formulation is shown to be significantly more efficient in both time and space, without the need to rely on the computation of pseudo-random noise ([henzler2020learning]) or convolutional operations ([bergmann2017learning, zhou2018non]). This validates our claim on the scalability of our design.

### 4.2 Ablation Study

Figure 4: Synthesized honeycomb textures for the ablation study. The blue boxes represent the learned scale of the periodic encoding in ours, whereas in w/o deformation the period defaults to 1, which does not match the repeating structure of the honeycomb pattern and results in visual artifacts.

To validate our design choices, we have conducted an ablation study by removing two designs that we consider critical for our network to be effective. The comparison results are shown in Figure 4. w/o deformation is a network model that encodes input coordinates without the learnable parameter $a$ described in Section 3.1. w/o shift is a model that is trained without randomly shifting the input coordinates. The resulting patterns are indicative of the effects of these designs: when coordinates are encoded at a fixed scale, the w/o deformation model generates hexagons that are seemingly glued together, as the presumed scale does not match the actual period of the repeating structure. The w/o shift model synthesizes distorted patterns; we speculate that, without the random sampling of the input coordinates, the network has difficulty matching the patch-based priors of the image patches.

### 4.3 Volumetric Shape Synthesis

Figure 5: Main results for 3D volume synthesis. a. Exemplar porous structure. b. Synthesized structure models interior tunnels. c. Global views of synthesized porous structures. d. Exemplar foam structure. e. Two scales of noise fields for the foam structure synthesis. f. Synthesized foam structures. A larger scale of the noise field leads to more isotropic foam structures.
Figure 6: IPFN learns multi-channel textures that are applicable to seamless 3D texturing. The original 3D texture in this example is not symmetric and therefore visible seams can be found on the texture-mapped surface and in the closeup view (A in figure). As synthesized patterns learnt from this exemplar can be tiled in any direction, the mapped surface (B in figure) is seamless.

For the evaluation of volumetric shape synthesis, we obtained two 3D models from turbosquid.com: a porous structure (Figure 5.a) and a foam structure (Figure 5.d). The 3D meshes are preprocessed into signed distance fields by uniformly sampling points in a volumetric grid. For the porous structure, we sampled $256\times 256\times 256$ points and extracted $64\times 64\times 64$ patches during training. For the foam structure, we sampled $200\times 200\times 128$ points and extracted $32\times 32\times 32$ patches during training. During inference, porous structures are synthesized at their original resolution, while we scale the synthesized foam structures to be twice as large as the original shape in the XY direction. Figure 5.c and Figure 5.f show the synthesized shapes. For the porous structures, both outer and interior structures are learned (see Figure 5.b for zoomed-in interior views) and the structures are diversified both across and within samples. For the foam structures, we show different results obtained by varying the extent of the latent random field (see Figure 5.e). A larger-scale random field encourages the synthesizer to generate globally varied structures, whereas a smaller scale produces locally anisotropic structures.

### 4.4 Application: Seamless 3D Texturing

Due to the periodic nature of the synthesized patterns, noise manipulation allows IPFN to create textures that are mirror-symmetric. This property provides an immediate application to seamless 3D texture mapping: in Figure 6, the original 9-channel texture, composed of color, normal, and bump maps, is tiled and directly mapped to a planar surface. Because of discrepancies at the edges as the textures are wrapped to create the tiled patterns, the mapped surfaces show visible seams. We recreate this texture through our network under a constant latent vector (Figure 6 B). When repeatedly mapped to the surface, the symmetric texture is seamless while faithfully reflecting the appearance and structure of the original texture.

Figure 7: Synthesized foam structures with controllable density. a. The grey scale bar controls the synthesized structure from the highest density (white) to the lowest (black). b. Smooth interpolation of the guidance factor allows us to synthesize a foam structure with smoothly changing densities.

### 4.5 Application: 3D Foam with controllable density

The original foam shape used in our experiment contains holes of various sizes, which correspond to the density of the foam structure. This geometric property $g$ can easily be approximated with an unsigned distance field representation. For a patch $X^{\prime}$ of size $H^{\prime}\times W^{\prime}\times D^{\prime}$, we estimate its density by: $g(X^{\prime})=\frac{1}{m}\sum_{i=1}^{H^{\prime}}\sum_{j=1}^{W^{\prime}}\sum_{k=1}^{D^{\prime}}|sdf(X^{\prime}_{ijk})|,$ (6) where $m$ is a normalization factor. Figure 7 shows the synthesized foam structures obtained by gradually increasing the density factor (Figure 7.a) and by a linearly interpolated density map (Figure 7.b).
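As a concrete illustration of the guidance factor in Eq. (6), the snippet below computes it for a volumetric patch of signed distance values. Assuming the normalization factor $m$ is the number of voxels, the guidance reduces to the mean absolute signed distance; the array shapes and names are illustrative only.

```python
import torch

def patch_guidance(sdf_patch: torch.Tensor) -> torch.Tensor:
    """Guidance factor g(X') of Eq. (6): normalized sum of absolute signed
    distances over a patch of shape (H', W', D'). Here m is taken to be the
    number of voxels, so the guidance is the mean |sdf| of the patch."""
    return sdf_patch.abs().mean()

# Toy usage: pair a 32^3 training patch with its guidance factor, which then
# conditions both G and D as in Eq. (5).
patch = torch.randn(32, 32, 32)     # placeholder signed distance values
g = patch_guidance(patch)
```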
## 5 Limitations and Discussions

Figure 8: Examples demonstrating limitations of our network.

The main limitation of our method is its emphasis on modeling stationary patterns. While this is based on our observation that a broad range of natural patterns is stationary or directional, our method does not provide a natural way to address patterns that are radial, exemplified by web structures and spiral patterns (see the third column in Figure 8). While our conditional formulation is in theory compatible with the synthesis of landscape images, experiments show that the quality of the synthesized landscapes is subpar: although the synthesized landscape appears globally similar to the exemplar, some local regions contain “fading-out” elements that blend into the background (see the first two columns in Figure 8). We speculate that this phenomenon is due to an under-representation of these elements in the exemplar. These limitations have inspired us to consider several potential improvements of our method in future work. A multi-scale synthesis approach, as demonstrated in [shaham2019singan, shocher2019ingan], strikes a good balance between learning the distribution of the global structure and the local, high-frequency details of an image. Different geometric encoding schemes may also extend our framework to synthesize beyond stationary patterns. We believe there are still ample opportunities to extend our methods to a broader range of 3D applications.

## Acknowledgements

This research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-20-2-0053, and by the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation.
# Is First Person Vision Challenging for Object Tracking?

Matteo Dunnhofer∙ Antonino Furnari⋆ Giovanni Maria Farinella⋆ Christian Micheloni∙

∙Machine Learning and Perception Lab, University of Udine, Udine, Italy

⋆Image Processing Laboratory, University of Catania, Catania, Italy

###### Abstract

Understanding human-object interactions is fundamental in First Person Vision (FPV). Tracking algorithms which follow the objects manipulated by the camera wearer can provide useful cues to effectively model such interactions. Visual tracking solutions available in the computer vision literature have significantly improved their performance in recent years for a large variety of target objects and tracking scenarios. However, despite a few previous attempts to exploit trackers in FPV applications, a methodical analysis of the performance of state-of-the-art trackers in this domain is still missing. In this paper, we fill the gap by presenting the first systematic study of object tracking in FPV. Our study extensively analyses the performance of recent visual trackers and baseline FPV trackers with respect to different aspects and considering a new performance measure. This is achieved through TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV is challenging, which suggests that more research efforts should be devoted to this problem so that tracking could benefit FPV tasks.

## 1 Introduction

Understanding the interactions between a camera wearer and the surrounding objects is a fundamental problem in First Person Vision (FPV) [19, 87, 57, 33, 20, 73, 12, 4, 5, 13]. To model such interactions, continuous knowledge of where an object of interest is located inside the video frame is advantageous. The benefits of tracking in FPV have been explored by a few previous works to predict future active objects [32], analyze social interactions [2], improve the performance of hand detection for rehabilitation purposes [83], and locate hands and capture their movements for action recognition [44] and human-object interaction forecasting [57]. On a more abstract level, the features computed after the frame-by-frame localization of objects have been increasingly used for egocentric action recognition [87, 89, 62] and anticipation [33, 76, 78].

Figure 1: Qualitative examples of some sequences contained in the proposed TREK-150 benchmark dataset. The white rectangle represents the ground-truth bounding box of the target object. Each number in the top left corner identifies the frame index. For each sequence, the action performed by the camera wearer is also reported (verb in orange, noun in blue). As can be noted, objects undergo significant appearance and state changes due to the manipulation by the camera wearer, which makes the proposed setting challenging for current trackers.

Despite the aforementioned attempts to leverage tracking in egocentric vision pipelines, most approaches rely on object detection models that evaluate video frames independently. This paradigm has the drawback of ignoring all the temporal information coming from the object appearance and motion contained in consecutive video frames, and generally requires a higher computational cost due to the repeated detection process on every frame. In contrast, visual object tracking aims to exploit past information about the target to infer its position and shape in the next frames of a video [63].
This process is subject to different challenges including occlusions, appearance changes, illumination variation, fast motion, and motion blur. Additionally, many practical applications pose real-time constraints on the computation, which specifically hold in FPV when the localization of objects is needed by higher-level real-time algorithms. While the use cases of object tracking in egocentric vision are manifold, as previously discussed, it is clear that tracking is still not a dominant technology in the FPV field. We experimentally show that this is mainly due to the limited performance of current trackers in egocentric videos, caused by the involved FPV challenges such as camera motion, persistent occlusion, significant scale and state changes, as well as motion blur (see Figure 1). Due to these challenges, previous works have proposed customized approaches to track specific targets like people [3], people's faces [1], or hands [44, 83, 65, 37, 81] from the FPV perspective. A solution specifically designed to track arbitrary objects in egocentric videos is still missing. Meanwhile, the computer vision community has made significant progress in the visual tracking of generic objects. This has been possible thanks to the development of new and effective tracking principles [11, 40, 7, 21, 8, 16, 98, 18], and to the careful design of benchmark datasets [91, 66, 35, 52, 31, 41] and challenges [49, 48, 50, 47]. Nowadays, state-of-the-art tracking solutions achieve excellent results on a large variety of tracking domains [91, 66, 35, 41, 50, 47]. However, all these research endeavours have mainly taken into account the classic third-person scenario in which objects are observed from an external point of view and are not manipulated by the camera wearer. Additionally, the performance of existing trackers has never been evaluated in the FPV domain, which raises the question of whether current solutions can be used “off-the-shelf” or more domain-specific investigations should be carried out.

To answer the aforementioned questions, in this paper we extensively analyze the problem of visual object tracking in the FPV domain. Given the lack of suitable benchmarks, we follow the standard practice of the visual tracking community, which suggests building an accurate dataset for evaluation [91, 56, 66, 52, 35, 50, 59]. Therefore, we propose a novel visual tracking benchmark, TREK-150 (TRacking-Epic-Kitchens-150), which is obtained from the large and challenging FPV dataset EPIC-KITCHENS-55 (EK-55) [19]. TREK-150 provides 150 video sequences densely annotated with the bounding boxes of a target object the camera wearer interacts with. Additionally, sequences have been labelled with attributes that identify the visual changes the object is undergoing, the class of the target object, and the action the person is performing. Using the dataset, we present an in-depth study of the accuracy and speed of both non-FPV and FPV visual trackers. A new performance measure is also introduced to evaluate trackers with respect to FPV scenarios.
In sum, the contributions of this paper are: (i) the first systematic analysis of visual object tracking in FPV; (ii) the description and release of the new TREK-150 dataset, which offers new challenges and complementary features with respect to existing visual tracking benchmarks; (iii) two FPV baseline trackers combining a state-of-the-art generic object tracker and FPV object detectors; (iv) a new and improved measure to assess the tracker’s ability to maintain temporal reference to targets. Our results show that FPV offers challenging tracking scenarios for the most recent and accurate trackers [18, 22, 80, 21, 9] and even for FPV trackers. Considering the potential impact of tracking on FPV, we suggest that more research efforts should be devoted to the considered task, for which we believe the proposed TREK-150 benchmark will be a key research tool. Annotations, trackers’ results, and code are available at https://machinelearning.uniud.it/datasets/trek150/.

Table 1: Statistics of the proposed TREK-150 benchmark compared with other benchmarks designed for SOT evaluation.

Benchmark | OTB-50 [90] | OTB-100 [91] | TC-128 [56] | UAV123 [66] | NUS-PRO [52] | NfS [35] | VOT2019 [50] | CDTB [59] | TREK-150
---|---|---|---|---|---|---|---|---|---
# videos | 51 | 100 | 128 | 123 | 365 | 100 | 60 | 80 | 150
# frames | 29K | 59K | 55K | 113K | 135K | 383K | 20K | 102K | 97K
Min frames across videos | 71 | 71 | 71 | 109 | 146 | 169 | 41 | 406 | 161
Mean frames across videos | 578 | 590 | 429 | 915 | 371 | 3830 | 332 | 1274 | 649
Median frames across videos | 392 | 393 | 365 | 882 | 300 | 2448 | 258 | 1179 | 484
Max frames across videos | 3872 | 3872 | 3872 | 3085 | 5040 | 20665 | 1500 | 2501 | 4640
Frame rate | 30 FPS | 30 FPS | 30 FPS | 30 FPS | 30 FPS | 240 FPS | 30 FPS | 30 FPS | 60 FPS
# target object classes | 10 | 16 | 27 | 9 | 8 | 17 | 30 | 23 | 34
# sequence attributes | 11 | 11 | 11 | 12 | 12 | 9 | 6 | 13 | 17
FPV | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
# action verbs | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | 20

## 2 Related Work

### Visual Tracking in FPV.

There have been some attempts to tackle visual tracking in FPV. Alletto et al. [3] improved the TLD tracker [43] with a 3D odometry based module to track people. For a similar task, Nigam et al. [70] proposed a combination of the Struck [38] and MEEM [95] trackers with a person re-identification module. Face tracking was tackled by Aghaei et al. [1] through a multi-object tracking approach termed extended-bag-of-tracklets. Hand tracking was studied in several works [44, 83, 65, 37, 81]. Sun et al. [81] developed a particle filter framework for hand pose tracking. Müller et al. [65] proposed a solution based on an RGB camera and a depth sensor. Kapidis et al. [44] and Visée et al. [83] proposed to combine the YOLO [74] detector trained for hand detection with trackers. The former used the multi-object tracker DeepSORT [88], whereas the latter employed the KCF [40] single object tracker. Han et al. [37] exploited a detection-by-tracking approach on video frames acquired with 4 fisheye cameras. All the presented solutions focused on tracking specific targets (i.e., people, faces, or hands), and thus they are likely to fail in generalizing to arbitrary target objects. Moreover, they have been validated on custom designed datasets, which limits their reproducibility and the ability to compare them to other works. In contrast, we focus on the evaluation of algorithms for the generic object tracking task.
We design our evaluation to be reproducible and extendable by releasing TREK-150, a dataset of 150 videos of different objects manipulated by the camera wearer, which we believe will be useful to study object tracking in FPV. To the best of our knowledge, ours is the first attempt to systematically evaluate generic object tracking in the FPV context.

### Visual Tracking for Generic Settings.

In recent years, there has been an increased interest in developing accurate and robust single object tracking (SOT) algorithms for generic targets and domains. Preliminary trackers were based on mean shift algorithms [17], key-point methods [64], part-based methods [14, 69], or SVM learning [38]. Later, solutions based on correlation filters gained popularity thanks to their processing speed [11, 40, 23, 6, 46]. More recently, algorithms based on deep learning have been proposed to extract efficient image and object features. This kind of representation has been used in deep regression networks [39, 30], online tracking-by-detection methods [68, 80], approaches based on reinforcement learning [94, 15, 27, 36], deep discriminative correlation filters [21, 22, 8, 24, 60, 9], and trackers based on siamese networks [7, 53, 86, 16, 98]. All these methods have been designed for tracking arbitrary target objects in unconstrained domains. However, no solution has been studied and validated on a number of diverse FPV sequences as we propose in this paper.

Figure 2: (a) Distribution of the sequences within TREK-150 with respect to the attributes. (b) Comparison of the distributions of common attributes in different benchmarks. Distributions of (c) action verb labels, and (d) target object categories (nouns).

### Visual Tracking Benchmarks.

Various bounding-box-level benchmarks are available today to evaluate the performance of SOT algorithms. The Object Tracking Benchmarks (OTB) OTB-50 [90] and OTB-100 [91] are two of the most popular benchmarks in the visual tracking community. They provide 51 and 100 sequences respectively, including generic targets like vehicles, people, faces, toys, characters, etc. The Temple-Color 128 (TC-128) dataset [56] comprises 128 videos and was designed for the evaluation of color-enhanced trackers. The UAV123 dataset [66] was constructed to benchmark the tracking of 9 classes of targets in 123 videos captured by unmanned aerial vehicle (UAV) cameras. The NUS-PRO dataset [52] contains 365 sequences and aims to benchmark human and rigid object tracking with targets belonging to one of 8 categories. The Need for Speed (NfS) dataset [35] provides 100 sequences with a frame rate of 240 FPS. The aim of the authors was to benchmark the effects of frame rate variations on tracking performance. The VOT2019 benchmark [50] was the last iteration of the annual Visual Object Tracking challenge that required bounding boxes as the target object representation. This dataset contains 60 highly challenging videos, with generic target objects belonging to 30 different categories. The Color and Depth Tracking Benchmark (CDTB) dataset [59] offers 80 RGB sequences paired with a depth channel. This benchmark aims to explore the use of depth information to improve tracking performance. Following the increased development of deep learning based trackers, large-scale generic-domain SOT datasets have recently been released [67, 41, 31]. These include more than a thousand videos, normally split into training and test subsets.
The evaluation protocol associated with these datasets requires the trackers to be evaluated after they have been trained on the provided training set. Despite the fact that all the presented benchmarks offer various tracking scenarios, limited work has focused on FPV, with some studies tackling the problem of tracking pedestrians or cars from a moving camera [77]. Some datasets of egocentric videos such as ADL [72] and EK-55 [19] contain bounding-box object annotations. However, due to the sparse nature of such annotations (typically 1 or 2 FPS), these datasets cannot be used for the accurate evaluation of trackers in the FPV context. To the best of our knowledge, our proposed TREK-150 dataset is the first benchmark for tracking objects which are relevant to (or manipulated by) a camera wearer in egocentric videos. We believe that TREK-150 is appealing for the tracking community because it offers complementary tracking situations (which we characterize with a total of $17$ attributes) and new target object categories (for a total of $34$) that are not present in other tracking benchmarks. Since in this paper we aim to benchmark generic approaches to visual tracking (which do not necessarily rely on deep learning), we follow the practice of previous works [91, 56, 66, 52, 35, 50, 59] and set up a well described dataset for the evaluation of generic SOT algorithms. We believe that TREK-150 can be a useful research tool for both the FPV and visual tracking research communities.

## 3 The TREK-150 Benchmark Dataset

The proposed TREK-150 dataset is composed of 150 video sequences. In each sequence, a single target object is labeled with a bounding box which encloses the visible parts of the object. The bounding boxes are given for each frame in which the object is visible (as a whole or in part). To be compliant with other tracking challenges, every sequence is additionally labeled with one or more of 17 attributes describing the visual variability of the target in the sequence, plus two additional action verb and noun attributes indicating the action performed by the camera wearer and the class of the target. Qualitative examples of the video sequences are shown in Figure 1, whereas Table 1 reports key statistics of our dataset in comparison with existing benchmarks (please see Appendix A of the supplementary material for additional motivations and details).

Table 2: Selected sequence attributes. The first block of rows describes attributes commonly used by the visual tracking community. The last four rows describe additional attributes introduced in this paper to characterize FPV tracking sequences.
Attribute | Meaning
---|---
SC | Scale Change: the ratio of the bounding-box area of the first and the current frame is outside the range [0.5, 2]
ARC | Aspect Ratio Change: the ratio of the bounding-box aspect ratio of the first and the current frame is outside the range [0.5, 2]
IV | Illumination Variation: the area of the target bounding-box is subject to light variation
SOB | Similar Objects: there are objects in the video of the same object category or with similar appearance to the target
RIG | Rigid Object: the target is a rigid object
DEF | Deformable Object: the target is a deformable object
ROT | Rotation: the target rotates in the video
POC | Partial Occlusion: the target is partially occluded in the video
FOC | Full Occlusion: the target is fully occluded in the video
OUT | Out Of View: the target completely leaves the video frame
MB | Motion Blur: the target region is blurred due to target or camera motion
FM | Fast Motion: the target bounding-box has a motion change larger than its size
LR | Low Resolution: the area of the target bounding-box is less than 1000 pixels in at least one frame
HR | High Resolution: the area of the target bounding-box is larger than 250000 pixels in at least one frame
HM | Head Motion: the person moves their head significantly, thus causing camera motion
1H | 1 Hand Interaction: the person interacts with the target object with one hand for consecutive video frames
2H | 2 Hands Interaction: the person interacts with the target object with both hands for consecutive video frames

### Data Collection.

The videos have been sampled from EK-55 [19], which is a public, large-scale, and diverse dataset of egocentric videos focused on human-object interactions in kitchens. EK-55 provides videos annotated with the actions performed by the camera wearer in the form of temporal bounds and verb-noun labels. The dataset also contains sparse bounding-box references of manipulated objects annotated at 2 frames per second in a temporal window around each action. To obtain a suitable pool of video sequences interesting for object tracking, we cross-referenced the original verb-noun temporal annotations of EK-55 with the sparse bounding box labels. This allowed us to select sequences in which the camera wearer manipulates an object. Each sequence is composed of the video frames contained within the temporal bounds of the action, extracted at the original 60 FPS frame rate and at the original full HD frame size [19]. According to the authors of [19], this frame rate is necessary in FPV to counteract the fast motion and motion blur caused by the proximity of the main scene to the camera point of view. From the initial pool, we selected 150 video sequences which were characterized by attributes such as scale changes, partial/full occlusion and fast motion, which are commonly considered in standard tracking benchmarks [91, 66, 67, 31, 50]. The top part of Table 2 reports the $13$ attributes considered for the selection.

### Data Labeling.

After selection, the 150 sequences were associated with only 3000 bounding boxes, due to the sparse nature of the object annotations in EK-55. Since it has been shown that visual tracking benchmarks require dense and accurate annotations [50, 66, 31, 82], we re-annotated the bounding boxes of the target objects in the 150 sequences. Batches of sequences were delivered to annotators who were explicitly instructed on how to perform the labeling. These initial annotations were then carefully checked and refined by a visual tracking expert.
This process produced 97296 frames labeled with bounding boxes related to the position and visual presence of the objects the camera wearer is interacting with. Following the initial annotations, we employed axis-aligned bounding boxes. This kind of representation is widely used in many FPV pipelines [32, 34, 33, 19, 45, 83, 79], and thus it allows us to give immediate results on the impact of trackers in such contexts. Moreover, the recent progress of trackers on various benchmarks that use this state representation [91, 66, 35, 59, 67, 31, 41] demonstrates that it provides sufficient information about the target for consistent and reliable performance evaluation. Along with the bounding boxes, the sequences have been labeled with 17 attributes which define the motion and visual appearance changes the target object is subject to. These include the aforementioned 13 standard tracking attributes, plus 4 additional ones (High Resolution, Head Motion, 1-Hand Interaction, 2-Hands Interaction) which have been introduced to characterize FPV sequences and are summarized in the bottom part of Table 2. Figure 2(a) reports the distributions of the sequences with respect to the 17 attributes. Figure 2(b) compares the distributions of the most common SOT attributes in TREK-150 and in other well-known benchmarks. Our dataset provides a larger number of sequences affected by partial occlusions (POC), changes in scale (SC) and/or aspect ratio (ARC), and motion blur (MB). We claim that these peculiarities, which are complementary to those of existing datasets, are due to the particular first person viewpoint, camera motion, and the human-object interactions contained in the videos. Based on EK-55’s verb-noun labels, sequences were also associated with 20 verb labels (e.g., “wash” - see Figure 1) and 34 noun labels indicating the category of the target object (e.g., “box”). Figures 2(c-d) show the distributions of the videos relative to verbs and target nouns. As can be noted, TREK-150 reflects EK-55’s long-tail distribution of labels.

## 4 Trackers

We considered 33 trackers in our benchmark evaluation. 31 of these trackers have been selected to represent different popular approaches to SOT, for instance with respect to the matching strategy, type of image representation, and learning strategy. Specifically, the analysis includes short-term trackers [50] based on correlation filters with hand-crafted features (MOSSE [11], DSST [23], KCF [40], Staple [6], BACF [46], DCFNet [85], STRCF [54], MCCTH [84]) and with deep features (ECO [21], ATOM [22], DiMP [8], PrDiMP [24], KYS [9]). We also considered deep siamese networks (SiamFC [7], GOTURN [39], DSLT [58], SiamRPN++ [53], SiamDW [97], UpdateNet [96], SiamFC++ [92], SiamBAN [16], Ocean [98]), tracking-by-detection methods (MDNet [68], VITAL [80]), as well as trackers based on target segmentation representations (SiamMask [86], D3S [60]), meta-learning (MetaCrest [71]), and fusion strategies (TRASFUST [29]). The long-term [50] trackers SPLT [93], GlobalTrack [42], and LTMU [18] have also been taken into account in the study. These trackers are designed to address longer target occlusion and out-of-view periods by exploiting object re-detection modules. All of the selected trackers are state-of-the-art approaches published between 2010 and 2020.
In addition to the aforementioned generic object trackers, we developed 2 baseline FPV trackers that combine the LTMU tracker [18] with (i) the EK-55-trained Faster R-CNN [19] and (ii) the Faster-R-CNN-based hand-object detector [79]. We refer to them as LTMU-F and LTMU-H respectively. These baseline trackers exploit the respective detectors as object re-detection modules according to the LTMU scheme [18] (more details are given in Appendix B of the supplementary material). In short, re-detection happens when a verification module notices that the tracker is not following the correct target. In such a case, the module triggers the execution of the respective FPV detector, which proposes candidate locations of the target object. Each of the candidates is evaluated by the verification module, and the location with the highest confidence is used to re-initialize the tracker. The two modules implement conceptually different strategies for FPV-based object localization: the first aims to find objects in the scene, while the second looks for interactions between the camera wearer and objects.

## 5 Evaluation

### Evaluation Protocols.

We employed three standard protocols to perform our analysis (see Appendix C of the supplementary material for further details). The first is the one-pass evaluation (OPE) protocol detailed in [91], which implements the most realistic way to execute trackers. It consists in initializing a tracker with the ground-truth bounding box of the target in the first frame and letting the tracker run on every subsequent frame until the end of the sequence. To obtain a more robust evaluation [51], especially for the analysis over sequence attributes and action verbs, we employ the recent protocol of [47], which defines different points of initialization along a sequence. A tracker is initialized with the ground-truth in each point and let run either forward or backward in time (depending on the longest sub-sequence yielded by the initialization point) until the end of the sub-sequence. This protocol allows a tracker to better cover all the situations happening in the sequences, ultimately leading to more robust evaluation scores. We refer to this setup as the multi-start evaluation (MSE). Since many FPV tasks such as object interaction [20], early action recognition [34], and action anticipation [19] require real-time computation, we also evaluated the ability of trackers to provide their object localization in such a setting. This was achieved by following the details given in [48, 55]. In short, this protocol, which we refer to as RTE, runs an algorithm while accounting for its running time: frames are assumed to arrive at the video frame rate, and all frames that appear during the interval between the start and the end of the algorithm’s execution on the previous frame are skipped.

Figure 3: Performance of the selected trackers on the proposed TREK-150 benchmark under the OPE protocol. The curves in solid colors report the performance of the 33 benchmarked trackers on TREK-150, whereas the curves overlaid in semi-transparent colors outline the performance obtained by the same trackers on the standard OTB-100 [91] dataset. In brackets, next to the trackers’ names, we report the SS, NPS and GSR values achieved on TREK-150 (in black) and on OTB-100 [91] (in gray). As can be noted, all the trackers exhibit a significant performance drop when tested on our challenging FPV benchmark.
LTMU-H and LTMU-F achieve marginally better performance, while we expect significant boosts to be achievable with a careful design of FPV trackers.

Figure 4: SS, NPS, and GSR of 17 of the benchmarked trackers on the sequence attributes of the proposed TREK-150 benchmark under the MSE protocol. The red plain line highlights the average performance. (The results for POC are not reported because this attribute is present in every sequence.)

### Performance Measures.

To quantitatively assess the performance of the trackers on the proposed dataset, we used different measures that compare a tracker’s predicted bounding boxes with the temporally aligned ground-truth bounding boxes. To evaluate the localization accuracy of the trackers, we employ the success plot [91], which shows the percentage of predicted bounding boxes whose intersection-over-union with the ground-truth is larger than a threshold varied from 0 to 1 (Figure 3 (a)). We also use the normalized precision plot [67], which reports, for a variety of distance thresholds, the percentage of bounding boxes whose center points are within a given normalized distance from the ground-truth (Figure 3 (b)). As summary measures, we report the success score (SS) [91] and the normalized precision score (NPS) [67], which are computed as the Area Under the Curve (AUC) of the success plot and the normalized precision plot respectively. Along with these standard metrics, we employ a novel plot which we refer to as the generalized success robustness plot (Figure 3 (c)). We take inspiration from the robustness metric proposed in [47], which measures the normalized extent of a tracking sequence before a failure. However, differently from [47], which uses a fixed overlap threshold to detect a failure, we propose to use different thresholds ranging in [0, 0.5]. This allows us to assess the length of tracking sequences for different application scenarios. We consider 0.5 as the maximum threshold since higher overlaps are usually associated with positive predictions in many computer vision tasks. Similarly to [91, 67], we use the AUC of the generalized success robustness plot to obtain an aggregate score, which we refer to as the generalized success robustness (GSR). This new measure evaluates the trackers’ capability of maintaining long temporal reference to targets. We think this aspect is especially important in FPV, as longer references to the target can lead to a better modeling of the camera wearer’s actions and interactions with objects. Finally, we evaluate the trackers’ processing speed in frames per second (FPS) to quantify their efficiency.

Figure 5: SS, NPS, and GSR performance of 17 among the 33 selected trackers with respect to the action verbs (first row of plots) and target nouns (second row of plots) in TREK-150. The red plain line highlights the average performance.

## 6 Results

Table 3: Performance achieved by 17 of the benchmarked trackers on TREK-150 using the RTE protocol.
Metric | Ocean | SiamBAN | SiamRPN++ | DiMP | KYS | ATOM | LTMU | D3S | ECO | GlobalTrack | Staple | MOSSE | LTMU-H | MetaCrest | LTMU-F | VITAL | KCF
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
FPS | 21 | 24 | 23 | 16 | 12 | 15 | 8 | 16 | 15 | 8 | 13 | 26 | 4 | 8 | 4 | 4 | 6
SS | 0.365 | 0.360 | 0.362 | 0.336 | 0.327 | 0.319 | 0.284 | 0.276 | 0.252 | 0.253 | 0.249 | 0.227 | 0.213 | 0.207 | 0.205 | 0.204 | 0.186
NPS | 0.358 | 0.366 | 0.356 | 0.331 | 0.317 | 0.312 | 0.257 | 0.263 | 0.231 | 0.227 | 0.236 | 0.190 | 0.174 | 0.175 | 0.161 | 0.165 | 0.157
GSR | 0.294 | 0.313 | 0.293 | 0.224 | 0.237 | 0.179 | 0.169 | 0.182 | 0.173 | 0.139 | 0.169 | 0.141 | 0.161 | 0.165 | 0.162 | 0.158 | 0.177

Table 4: Accuracy results on TREK-150 of a video-based hand-object detection solution which uses each of the considered trackers as localization method for the object involved in the interaction. As a baseline, we employ the object detection capabilities of the hand-object interaction solution Hands-in-contact [79].

Hands-in-contact [79] | LTMU-H | LTMU-F | ATOM | LTMU | Ocean | SiamBAN | SiamRPN++ | MetaCrest | D3S | DiMP | KYS | VITAL | GlobalTrack | MOSSE | ECO | Staple | KCF
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.354 | 0.368 | 0.367 | 0.361 | 0.354 | 0.340 | 0.340 | 0.311 | 0.293 | 0.292 | 0.292 | 0.279 | 0.253 | 0.251 | 0.231 | 0.230 | 0.197 | 0.177

### How Do the Trackers Perform in the FPV Scenario?

Figure 3 reports the performance of the selected trackers on TREK-150 using the OPE protocol. For reference, we also report the performance of the trackers on the popular OTB-100 [91] benchmark (semi-transparent curves - gray numbers in brackets). It can be clearly noted that the overall performance of the trackers decreases across all measures when considering the challenging FPV scenario of TREK-150. For example, the SS, NPS, and GSR scores of LTMU on TREK-150 are 43.7%, 44.8%, and 43.1%, which are much lower than the respective 69.6%, 76%, and 78% achieved on OTB-100. With the MSE protocol, LTMU achieves the respective scores of 46.9%, 48.3%, and 38.6% (see Appendix D for the overall MSE results of all trackers). These results show that the particular characteristics of FPV present in TREK-150 introduce challenging scenarios for visual trackers. Some qualitative examples of the trackers’ performance are shown in Figure 11 of the Appendix. Generally speaking, trackers based on deep learning (e.g. LTMU, TRASFUST, ATOM, KYS, Ocean) perform better in SS and NPS than those based on hand-crafted features (e.g. BACF, MCCTH, DSST, KCF). Among the first class of trackers, the ones leveraging online adaptation mechanisms (e.g. LTMU, ATOM, VITAL, ECO, KYS, DiMP) are more accurate than the ones based on single-shot instances (e.g. Ocean, D3S, SiamRPN++). The generalized success robustness plot in Figure 3(c) and the GSR results of Figure 10 of the supplementary material report a different ranking of the trackers, showing that more spatially accurate trackers are not always able to maintain longer reference to targets. Under both the OPE and MSE protocols, the proposed FPV trackers LTMU-H and LTMU-F are largely better in SS and NPS, while they lose some performance in GSR. Such an outcome shows that adapting a state-of-the-art method to FPV allows marginal improvements, while we expect significant performance improvements to be achievable by a tracker accurately designed to tackle the FPV challenges introduced by this benchmark.
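To make the summary scores reported above concrete, the following sketch computes SS, NPS, and GSR for a single sequence from per-frame intersection-over-union values and normalized center distances. The threshold grids, the averaging used to approximate the AUC, and the function names are illustrative assumptions rather than the official evaluation toolkit.

```python
import numpy as np

def success_score(ious, thresholds=np.linspace(0, 1, 101)):
    """SS: AUC of the success plot, approximated as the mean fraction of frames
    whose IoU with the ground truth exceeds each overlap threshold."""
    return float(np.mean([(ious > t).mean() for t in thresholds]))

def normalized_precision_score(norm_dists, thresholds=np.linspace(0, 0.5, 51)):
    """NPS: AUC of the normalized precision plot, where norm_dists are center
    distances normalized by the ground-truth box size."""
    return float(np.mean([(norm_dists < t).mean() for t in thresholds]))

def generalized_success_robustness(ious, thresholds=np.linspace(0, 0.5, 51)):
    """GSR: for each overlap threshold in [0, 0.5], the normalized length of the
    sequence tracked before the IoU first drops below it, averaged over thresholds."""
    n = len(ious)
    extents = []
    for t in thresholds:
        failures = np.flatnonzero(ious < t)
        extents.append((failures[0] if failures.size else n) / n)
    return float(np.mean(extents))

# Toy usage on a synthetic 6-frame sequence.
ious = np.array([0.80, 0.70, 0.60, 0.30, 0.00, 0.00])
norm_dists = np.array([0.05, 0.08, 0.10, 0.30, 0.90, 1.20])
print(success_score(ious),
      normalized_precision_score(norm_dists),
      generalized_success_robustness(ious))
```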
### In Which Conditions Do the Trackers Work Better?

Figure 4 reports the SS, NPS, and GSR scores, computed with the MSE protocol, of 17 trackers with respect to the attributes introduced in Table 2. (The analysis is restricted to 17 trackers for better visualization of the plots and tables; these trackers were selected to represent various methodologies, and the results for all trackers are available in Appendix D.) We do not report results for the POC attribute as it is present in every sequence, as shown in Figure 2 (a). It stands out clearly that full occlusion (FOC), out of view (OUT), and the small size of targets (LR) are the most difficult situations for trackers. The fast motion of targets (FM) and the presence of similar objects (SOB) are also critical factors that cause drops in performance. Trackers turn out to be less vulnerable to rotations (ROT) and to illumination variation (IV). Generally, tracking rigid objects (RIG) proves easier than tracking deformable ones (DEF). With respect to the 4 new sequence attributes related to FPV, it turns out that tracking objects held with two hands (2H) is more difficult than tracking objects held with a single hand (1H). This is probably due to the additional occlusions generated in the 2H scenario. Trackers are instead quite robust to head motion (HM) and seem to cope better with objects appearing in larger size (HR).

### How Do the Trackers Perform With Respect to the Actions?

The first row of plots in Figure 5 reports the MSE protocol results of SS, NPS, and GSR with respect to the associated verb action labels. Actions that mainly cause a spatial displacement of the target (e.g. “move”, “store”, “check”) generally have less impact on the performance. Actions that change the state, shape, or aspect ratio of an object (e.g. “remove”, “squeeze”, “cut”, “attach”) generate harder tracking scenarios. The sequences characterized by the “wash” verb also lead trackers to poor performance. Indeed, the wash action can cause many occlusions and make the object harder to track. The second row of the same figure presents the performance scores of the trackers with respect to the associated noun labels. Rigid, regular-sized objects such as “pan”, “kettle”, “bowl”, “plate”, and “bottle” are among the ones associated with high average scores. On the other hand, some rigid objects such as “knife”, “spoon”, “fork” and “can” are harder to track, probably due to their particularly thin shape and the light reflectance they are easily subject to. Deformable objects such as “sponge”, “onion”, “cloth” and “rubbish” are in general also difficult to track.

### How Fast Are the Trackers?

Table 3 reports the FPS performance of the trackers and the SS, NPS, and GSR scores achieved under the RTE protocol. None of the trackers achieves the 60 FPS frame rate of the videos. We argue that this is due to the full HD resolution of the frames, which requires demanding image crop and resize operations when targets are of considerable size. Thanks to their non-reliance on online adaptation mechanisms, trackers based on siamese networks (e.g. Ocean, SiamBAN, SiamRPN++) emerge as the fastest trackers and exhibit a less significant drop in the proposed scores. Trackers using online learning approaches (e.g. ATOM, DiMP, ECO, KYS) generally run below real-time speed, consequently suffering a major accuracy loss when deployed in real-time scenarios.
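For clarity, the real-time (RTE) protocol underlying Table 3 (Section 5) can be sketched as follows: frames are assumed to arrive at the video frame rate, and any frame that arrives while the tracker is still processing the previous one is skipped and inherits the last available prediction. The `tracker.update` interface and the timing scheme below are illustrative assumptions, not the evaluation code released with the benchmark.

```python
import time

def run_real_time(tracker, frames, fps=60):
    """Real-time evaluation sketch: frames arrive every 1/fps seconds; frames
    that arrive while the tracker is still busy are skipped and reuse the last
    predicted bounding box."""
    period = 1.0 / fps
    next_free = 0.0                              # time at which the tracker becomes idle
    last_box, boxes = None, []
    for i, frame in enumerate(frames):
        arrival = i * period
        if arrival >= next_free:                 # tracker is idle: process this frame
            start = time.perf_counter()
            last_box = tracker.update(frame)     # hypothetical tracker interface
            next_free = arrival + (time.perf_counter() - start)
        boxes.append(last_box)                   # skipped frames keep the old box
    return boxes
```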
In general, we observe that the GSR score is the measure showing the largest drop in the real-time setting, suggesting that particular effort should be spent to better model actions and interactions in such scenarios.

### Do Trackers Already Offer Any Advantage in FPV?

Although we have shown that FPV is challenging for current trackers, we assess whether they already offer an advantage in the FPV domain for obtaining information about the objects’ locations and movements in the scene [87, 32, 33, 78, 79]. To this aim, we performed two experiments (details are given in Appendix C of the supplementary material). First, we evaluated the performance of a Faster R-CNN [75] instance trained on EK-55 [19] when used as a naive tracking baseline. Such a solution achieves an SS, NPS, and GSR of 0.323, 0.369, and 0.044, while running at 1 FPS. Comparing these results with the ones presented in Figure 3, we clearly notice that trackers, if properly initialized by a detection module, can deliver faster, more accurate, and temporally much longer object localization than detectors. As a second experiment, we evaluated the accuracy of a video-based hand-object interaction detection solution [79] whose object localization is given by a tracker rather than a detector. The tracker is initialized with the object detector’s predicted bounding-box at the first detection of the hand-object interaction, and let run until its end. With this setting, we created a ranking of the trackers, which is presented in Table 4. The results demonstrate that stronger trackers can improve the accuracy and efficiency of current detection-based methodologies [79]. Interestingly, the trackers’ ranking differs from the one shown in Figure 3, suggesting that trackers can manifest other capabilities when deployed in application scenarios. Given these preliminary results, we hence expect that trackers will likely gain more importance in FPV as new methodologies explicitly considering the first person point of view are investigated.

## 7 Conclusions

In this paper, we proposed the first systematic evaluation of visual object tracking in FPV. The analysis has been conducted with standard and novel measures on the newly introduced TREK-150 benchmark, which contains 150 video sequences extracted from the EK-55 [19] FPV dataset. TREK-150 has been densely annotated with 97K bounding-boxes, 17 sequence attributes, 20 action verb attributes, and 34 target object attributes. The performance of 31 state-of-the-art visual trackers and two baseline FPV trackers was analysed extensively on the proposed dataset. The results show a general drop in accuracy with respect to the performance achieved on existing tracking benchmarks. Furthermore, our analysis provided insights about which scenarios and actions cause the performance to change. Finally, we have shown that object tracking gives an advantage in terms of object localization accuracy and efficiency over object detection. These results suggest that FPV is a challenging scenario for current trackers and that tracking will likely gain more importance in this domain as new FPV-specific solutions are investigated. Annotations, results, and code are available at https://machinelearning.uniud.it/datasets/trek150/.

Acknowledgements. Research at the University of Udine has been supported by the ACHIEVE-ITN H2020 project. Research at the University of Catania has been supported by MIUR AIM - Attrazione e Mobilita Internazionale Linea 1 - AIM1893589 - CUP: E64118002540007.
## References * [1] Maedeh Aghaei, Mariella Dimiccoli, and Petia Radeva. Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams. Computer Vision and Image Understanding, 149:146–156, aug 2016\. * [2] Maedeh Aghaei, Mariella Dimiccoli, and Petia Radeva. With whom do i interact? Detecting social interactions in egocentric photo-streams. In Proceedings - International Conference on Pattern Recognition, volume 0, pages 2959–2964. Institute of Electrical and Electronics Engineers Inc., jan 2016. * [3] Stefano Alletto, Giuseppe Serra, and Rita Cucchiara. Egocentric object tracking: an odometry-based solution. In International Conference on Image Analysis and Processing, pages 687–696. Springer, 2015. * [4] Gedas Bertasius, Hyun Soo Park, Stella X. Yu, and Jianbo Shi. First-person action-object detection with egonet. In Proceedings of Robotics: Science and Systems, July 2017. * [5] Gedas Bertasius, Hyun Soo Park, Stella X Yu, and Jianbo Shi. Unsupervised learning of important objects from first-person videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 1956–1964, 2017. * [6] Luca Bertinetto, Jack Valmadre, Stuart Golodetz, Ondrej Miksik, and Philip H.S. Torr. Staple: Complementary learners for real-time tracking. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2016-Decem, pages 1401–1409, 2016. * [7] Luca Bertinetto, Jack Valmadre, João F. Henriques, Andrea Vedaldi, and Philip H.S. Torr. Fully-convolutional siamese networks for object tracking. European Conference on Computer Vision, 9914 LNCS:850–865, 2016\. * [8] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning Discriminative Model Prediction for Tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. * [9] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Know Your Surroundings: Exploiting Scene Information for Object Tracking. In European Conference on Computer Vision, mar 2020. * [10] Goutam Bhat, Felix Järemo Lawin, Martin Danelljan, Andreas Robinson, Michael Felsberg, Luc Van Gool, and Radu Timofte. Learning what to learn for video object segmentation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 777–794, Cham, 2020. Springer International Publishing. * [11] David S Bolme, J Ross Beveridge, Bruce A Draper, and Yui Man Lui. Visual object tracking using adaptive correlation filters. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2544–2550. IEEE, 2010. * [12] Minjie Cai, Kris M Kitani, and Yoichi Sato. Understanding hand-object manipulation with grasp types and object attributes. In Robotics: Science and Systems, volume 3. Ann Arbor, Michigan;, 2016. * [13] Zhe Cao, Ilija Radosavovic, Angjoo Kanazawa, and Jitendra Malik. Reconstructing hand-object interactions in the wild. arXiv preprint arXiv:2012.09856, 2020. * [14] Luka Čehovin, Matej Kristan, and Aleš Leonardis. Robust visual tracking using an adaptive coupled-layer visual model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4):941–953, 2013. * [15] Boyu Chen, Dong Wang, Peixia Li, Shuang Wang, and Huchuan Lu. Real-time ’Actor-Critic’ Tracking. In European Conference on Computer Vision, pages 318–334, 2018\. * [16] Zedu Chen, Bineng Zhong, Guorong Li, Shengping Zhang, and Rongrong Ji. Siamese Box Adaptive Network for Visual Tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6667–6676, 2020. 
* [17] Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer. Real-time tracking of non-rigid objects using mean shift. IEEE Conference on Computer Vision and Pattern Recognition, 2:142–149, 2000. * [18] Kenan Dai, Yunhua Zhang, Dong Wang, Jianhua Li, Huchuan Lu, and Xiaoyun Yang. High-Performance Long-Term Tracking With Meta-Updater. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6297–6306. Institute of Electrical and Electronics Engineers (IEEE), aug 2020. * [19] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchens dataset. In European Conference on Computer Vision (ECCV), 2018. * [20] Dima Damen, Teesid Leelasawassuk, and Walterio Mayol-Cuevas. You-do, i-learn: Egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance. Computer Vision and Image Understanding, 149:98–112, 2016. * [21] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. ECO: Efficient Convolution Operators for Tracking. In IEEE Conference on Computer Vision and Pattern Recognition, nov 2017. * [22] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. ATOM: Accurate Tracking by Overlap Maximization. In IEEE Conference on Computer Vision and Pattern Recognition, 2019\. * [23] Martin Danelljan, Gustav Hager, Fahad Shahbaz Khan, and Michael Felsberg. Discriminative Scale Space Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(8):1561–1575, 2017. * [24] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic Regression for Visual Tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7181–7190, 2020. * [25] Patrick Dendorfer, Aljosa Osep, Anton Milan, Konrad Schindler, Daniel Cremers, Ian Reid, Stefan Roth, and Laura Leal-Taixé. Motchallenge: A benchmark for single-camera multiple target tracking. International Journal of Computer Vision, 129(4):845–881, 2021\. * [26] J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009. * [27] Matteo Dunnhofer, Niki Martinel, Gian Luca Foresti, and Christian Micheloni. Visual Tracking by means of Deep Reinforcement Learning and an Expert Demonstrator. In Proceedings of The IEEE/CVF International Conference on Computer Vision Workshops, 2019. * [28] Matteo Dunnhofer, Niki Martinel, and Christian Micheloni. An exploration of target-conditioned segmentation methods for visual object trackers. In Adrien Bartoli and Andrea Fusiello, editors, Computer Vision – ECCV 2020 Workshops, pages 618–636, Cham, 2020. Springer International Publishing. * [29] Matteo Dunnhofer, Niki Martinel, and Christian Micheloni. Tracking-by-Trackers with a Distilled and Reinforced Model. In Asian Conference on Computer Vision, 2020. * [30] Matteo Dunnhofer, Niki Martinel, and Christian Micheloni. Weakly-supervised domain adaptation of deep regression trackers via reinforced knowledge distillation. IEEE Robotics and Automation Letters, 6(3):5016–5023, 2021. * [31] Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking. In IEEE Conference on Computer Vision and Pattern Recognition, sep 2019. 
* [32] Antonino Furnari, Sebastiano Battiato, Kristen Grauman, and Giovanni Maria Farinella. Next-active-object prediction from egocentric videos. Journal of Visual Communication and Image Representation, 49:401–411, nov 2017. * [33] Antonino Furnari and Giovanni Farinella. Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, may 2020. * [34] Antonino Furnari and Giovanni Maria Farinella. What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention. In IEEE/CVF International Conference on Computer Vision, 2019. * [35] Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan, and Simon Lucey. Need for Speed: A Benchmark for Higher Frame Rate Object Tracking. In Proceedings of the IEEE International Conference on Computer Vision, volume 2017-Octob, pages 1134–1143. Institute of Electrical and Electronics Engineers Inc., mar 2017. * [36] Qing Guo, Ruize Han, Wei Feng, Zhihao Chen, and Liang Wan. Selective Spatial Regularization by Reinforcement Learned Decision Making for Object Tracking. IEEE Transactions on Image Processing, 29:2999–3013, 2020. * [37] Shangchen Han, Beibei Liu, Randi Cabezas, Christopher D. Twigg, Peizhao Zhang, Jeff Petkau, Tsz Ho Yu, Chun Jung Tai, Muzaffer Akbay, Zheng Wang, Asaf Nitzan, Gang Dong, Yuting Ye, Lingling Tao, Chengde Wan, and Robert Wang. MEgATrack: Monochrome Egocentric Articulated Hand-Tracking for Virtual Reality. ACM Transactions on Graphics, 39(4):13, jul 2020. * [38] Sam Hare, Stuart Golodetz, Amir Saffari, Vibhav Vineet, Ming Ming Cheng, Stephen L Hicks, and Philip H.S. Torr. Struck: Structured Output Tracking with Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):2096–2109, 2016. * [39] David Held, Sebastian Thrun, and Silvio Savarese. Learning to Track at 100 FPS with Deep Regression Networks. European Conference on Computer Vision, abs/1604.0, 2016. * [40] Joao F. Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):583–596, 2015. * [41] Lianghua Huang, Xin Zhao, and Kaiqi Huang. GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, oct 2019. * [42] Lianghua Huang, Xin Zhao, and Kaiqi Huang. GlobalTrack: A Simple and Strong Baseline for Long-term Tracking. In AAAI Conference on Artificial Intelligence, dec 2020. * [43] Zdenek Kalal, Krystian Mikolajczyk, and Jiri Matas. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1409–1422, 2012. * [44] Georgios Kapidis, Ronald Poppe, Elsbeth Van Dam, Lucas Noldus, and Remco Veltkamp. Egocentric hand track and object-based human action recognition. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pages 922–929. IEEE, 2019. * [45] Georgios Kapidis, Ronald Poppe, Elsbeth Van Dam, Lucas Noldus, and Remco Veltkamp. Egocentric hand track and object-based human action recognition. 
In Proceedings - 2019 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Internet of People and Smart City Innovation, SmartWorld/UIC/ATC/SCALCOM/IOP/SCI 2019, pages 922–929. Institute of Electrical and Electronics Engineers Inc., may 2019. * [46] Hamed Kiani Galoogahi, Ashton Fagg, and Simon Lucey. Learning Background-Aware Correlation Filters for Visual Tracking. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. * [47] Matej Kristan, Aleš Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kämäräinen, Martin Danelljan, Luka Čehovin Zajc, Alan Lukežič, Ondrej Drbohlav, Linbo He, Yushan Zhang, Song Yan, Jinyu Yang, Gustavo Fernández, et al. The eighth visual object tracking vot2020 challenge results. In Adrien Bartoli and Andrea Fusiello, editors, Computer Vision – ECCV 2020 Workshops, pages 547–601, Cham, 2020. Springer International Publishing. * [48] Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman Pflugfelder, Luka Cehovin Zajc, Tomas Vojir, Gustav Hager, Alan Lukezic, Abdelrahman Eldesokey, Gustavo Fernandez, et al. The Visual Object Tracking VOT2017 Challenge Results. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 1949–1972. IEEE, oct 2017. * [49] Matej Kristan, Jiri Matas, Ales Leonardis, Michael Felsberg, Luka Cehovin, Gustavo Fernandez, Tomas Vojir, Gustav Hager, Georg Nebehay, Roman Pflugfelder, et al. The Visual Object Tracking VOT2015 Challenge Results. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 564–586. IEEE, dec 2015. * [50] Matej Kristan, Jiří Matas, Aleš Leonardis, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kämäräinen, Luka Čehovin Zajc, Ondrej Drbohlav, Alan Lukežič, Amanda Berg, Abdelrahman Eldesokey, Jani Käpylä, Gustavo Fernández, et al. The Seventh Visual Object Tracking VOT2019 Challenge Results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019. * [51] M. Kristan, J. Matas, A. Leonardis, T. Vojíř, R. Pflugfelder, G. Fernández, G. Nebehay, F. Porikli, and L. Čehovin. A novel performance evaluation methodology for single-target trackers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11):2137–2155, 2016. * [52] Annan Li, Min Lin, Yi Wu, Ming Hsuan Yang, and Shuicheng Yan. NUS-PRO: A New Visual Tracking Challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):335–349, feb 2016. * [53] Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. SIAMRPN++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2019-June, pages 4277–4286, 2019. * [54] Feng Li, Cheng Tian, Wangmeng Zuo, Lei Zhang, and Ming Hsuan Yang. Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 4904–4913. IEEE Computer Society, mar 2018. * [55] Mengtian Li, Yu-Xiong Wang, and Deva Ramanan. Towards streaming perception. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 473–488, Cham, 2020. Springer International Publishing. * [56] Pengpeng Liang, Erik Blasch, and Haibin Ling. Encoding Color Information for Visual Tracking: Algorithms and Benchmark.
IEEE Transactions on Image Processing, 24(12):5630–5644, dec 2015\. * [57] Miao Liu, Siyu Tang, Yin Li, and James Rehg. Forecasting human object interaction: Joint prediction of motor attention and actions in first person video. In Proceedings of the European Conference on Computer Vision (ECCV), volume 2, 2020. * [58] Xiankai Lu, Chao Ma, Bingbing Ni, Xiaokang Yang, Ian Reid, and Ming Hsuan Yang. Deep regression tracking with shrinkage loss. In European Conference on Computer Vision, volume 11218 LNCS, pages 369–386, 2018. * [59] Alan Lukezic, Ugur Kart, Jani Kapyla, Ahmed Durmush, Joni Kristian Kamarainen, Jiri Matas, and Matej Kristan. CDTB: A color and depth visual object tracking dataset and benchmark. In Proceedings of the IEEE International Conference on Computer Vision, volume 2019-Octob, pages 10012–10021. Microsoft Research Asia, jul 2019\. * [60] Alan Lukežič, Jiří Matas, and Matej Kristan. D3S – A Discriminative Single Shot Segmentation Tracker. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, nov 2020. * [61] Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, and Matej Kristan. Now you see me: evaluating performance in long-term visual tracking, 2018\. * [62] Minghuang Ma, Haoqi Fan, and Kris M Kitani. Going deeper into first-person activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1894–1903, 2016. * [63] Dr Emilio Maggio and Dr Andrea Cavallaro. Video Tracking: Theory and Practice. Wiley Publishing, 1st edition, 2011. * [64] Mario Edoardo Maresca and Alfredo Petrosino. MATRIOSKA: A multi-level approach to fast tracking by learning. In International Conference on Image Analysis and Processing, volume 8157 LNCS, pages 419–428, 2013. * [65] Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas, and Christian Theobalt. Real-Time Hand Tracking Under Occlusion from an Egocentric RGB-D Sensor. In Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017, volume 2018-Janua, pages 1284–1293, 2017. * [66] Matthias Mueller, Neil Smith, and Bernard Ghanem. A Benchmark and Simulator for UAV Tracking. In European Conference on Computer Vision, pages 445–461. Springer, Cham, 2016. * [67] Matthias Müller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, and Bernard Ghanem. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. In European Conference on Computer Vision, volume 11205 LNCS, pages 310–327. Springer Verlag, mar 2018. * [68] Hyeonseob Nam and Bohyung Han. Learning Multi-domain Convolutional Neural Networks for Visual Tracking. IEEE Conference on Computer Vision and Pattern Recognition, 2016-Decem:4293–4302, 2016. * [69] Hyeonseob Nam, Seunghoon Hong, and Bohyung Han. Online graph-based tracking. In European Conference on Computer Vision, volume 8693 LNCS, pages 112–126. Springer Verlag, 2014. * [70] Jyoti Nigam and Renu M Rameshan. EgoTracker: Pedestrian Tracking with Re-identification in Egocentric Videos. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, volume 2017-July, pages 980–987, 2017. * [71] Eunbyung Park and Alexander C. Berg. Meta-tracker: Fast and Robust Online Adaptation for Visual Object Trackers. In European Conference on Computer Vision, volume 11207 LNCS, pages 587–604. Springer Verlag, jan 2018. * [72] Hamed Pirsiavash and Deva Ramanan. Detecting activities of daily living in first-person camera views. 
In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2847–2854. IEEE, 2012. * [73] Francesco Ragusa, Antonino Furnari, Salvatore Livatino, and Giovanni Maria Farinella. The meccano dataset: Understanding human-object interactions from egocentric videos in an industrial-like domain. In IEEE Winter Conference on Application of Computer Vision (WACV), 2020. * [74] Joseph Redmon, Santosh Kumar Divvala, Ross B Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 779–788, 2016. * [75] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015. * [76] Ivan Rodin, Antonino Furnari, Dimitrios Mavroedis, and Giovanni Maria Farinella. Predicting the future from first person (egocentric) vision: A survey. Computer Vision and Image Understanding, 2021. * [77] Ricardo Sanchez-Matilla and Andrea Cavallaro. A predictor of moving objects for first-person vision. In 2019 IEEE International Conference on Image Processing (ICIP), pages 2189–2193. IEEE, 2019. * [78] Fadime Sener, Dipika Singhania, and Angela Yao. Temporal aggregate representations for long-range video understanding. In European Conference on Computer Vision, pages 154–171. Springer, 2020. * [79] Dandan Shan, Jiaqi Geng, Michelle Shu, and David F. Fouhey. Understanding human hands in contact at internet scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. * [80] Yibing Song, Chao Ma, Xiaohe Wu, Lijun Gong, Linchao Bao, Wangmeng Zuo, Chunhua Shen, Rynson W.H. Lau, and Ming Hsuan Yang. VITAL: VIsual Tracking via Adversarial Learning. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 8990–8999. IEEE Computer Society, apr 2018. * [81] Li Sun, Ulrich Klank, and Michael Beetz. EYEWATCHME-3D Hand and object tracking for inside out activity analysis. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 9–16. Institute of Electrical and Electronics Engineers (IEEE), mar 2010. * [82] Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold W.M. Smeulders, Philip H.S. Torr, and Efstratios Gavves. Long-Term Tracking in the Wild: A Benchmark. In European Conference on Computer Vision, volume 11207 LNCS, pages 692–707. Springer Verlag, mar 2018. * [83] Ryan J. Visee, Jirapat Likitlersuang, and Jose Zariffa. An Effective and Efficient Method for Detecting Hands in Egocentric Videos for Rehabilitation Applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(3):748–755, mar 2020. * [84] Ning Wang, Wengang Zhou, Qi Tian, Richang Hong, Meng Wang, and Houqiang Li. Multi-cue Correlation Filters for Robust Visual Tracking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 4844–4853, 2018. * [85] Qiang Wang, Jin Gao, Junliang Xing, Mengdan Zhang, and Weiming Hu. DCFNet: Discriminant Correlation Filters Network for Visual Tracking. apr 2017. * [86] Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, and Philip H S Torr. Fast Online Object Tracking and Segmentation: A Unifying Approach. In IEEE Conference on Computer Vision and Pattern Recognition, 2019\. * [87] Xiaohan Wang, Yu Wu, Linchao Zhu, and Yi Yang. 
Symbiotic attention with privileged information for egocentric action recognition. In AAAI Conference on Artificial Intelligence, 2020. * [88] Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple online and realtime tracking with a deep association metric. In Proceedings - International Conference on Image Processing, ICIP, volume 2017-Septe, pages 3645–3649. IEEE Computer Society, mar 2018. * [89] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 284–293, 2019. * [90] Yi Wu, Jongwoo Lim, and Ming Hsuan Yang. Online object tracking: A benchmark. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2411–2418. IEEE Computer Society, 2013. * [91] Yi Wu, Jongwoo Lim, and Ming Hsuan Yang. Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1834–1848, sep 2015. * [92] Yinda Xu, Zeyu Wang, Zuoxin Li, Ye Yuan, and Gang Yu. SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines. In AAAI Conference on Artificial Intelligence, nov 2020. * [93] Bin Yan, Haojie Zhao, Dong Wang, Huchuan Lu, and Xiaoyun Yang. ’Skimming-perusal’ tracking: A framework for real-time and robust long-term tracking. In Proceedings of the IEEE International Conference on Computer Vision, volume 2019-Octob, pages 2385–2393, 2019. * [94] Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, and Jin Young Choi. Action-decision networks for visual tracking with deep reinforcement learning. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2017-Janua, pages 1349–1358. IEEE, jul 2017. * [95] Jianming Zhang, Shugao Ma, and Stan Sclaroff. MEEM: Robust tracking via multiple experts using entropy minimization. In European Conference on Computer Vision, volume 8694 LNCS, pages 188–203. Springer Verlag, 2014. * [96] Lichao Zhang, Abel Gonzalez-Garcia, Joost Van De Weijer, Martin Danelljan, and Fahad Shahbaz Khan. Learning the model update for siamese trackers. In Proceedings of the IEEE International Conference on Computer Vision, volume 2019-Octob, pages 4009–4018. Institute of Electrical and Electronics Engineers Inc., oct 2019. * [97] Zhipeng Zhang and Houwen Peng. Deeper and Wider Siamese Networks for Real-Time Visual Tracking. IEEE Conference on Computer Vision and Pattern Recognition, jan 2019\. * [98] Zhipeng Zhang, Houwen Peng, Jianlong Fu, Bing Li, and Weiming Hu. Ocean: Object-aware Anchor-free Tracking. In European Conference on Computer Vision, jun 2020. * [99] L. Čehovin, A. Leonardis, and M. Kristan. Visual object tracking performance measures revisited. IEEE Transactions on Image Processing, 25(3):1261–1274, 2016. Figure 6: Examples of target objects contained in TREK-150 that are difficult to represent with more sophisticated representations (e.g. rotated bounding box or segmentation mask). The first two images from the left show objects such as “cheese” and “onion” which prevent the determination of the angle for an oriented bounding box, or an accurate segmentation mask. The last two images present objects which prevent a consistent definition of a segmentation. ## Appendix A Motivations And Details Behind TREK-150 In this section, we provide more motivations and details behind the construction of the TREK-150 benchmark dataset. 
First of all, we remark that TREK-150 has been designed for the _evaluation_ of visual tracking algorithms in FPV regardless of their methodology. Indeed, this paper does not aim to provide a large-scale dataset to improve the performance of deep learning based trackers. Instead, its goal is to assess the impact of the first-person viewpoint on current trackers and, to the best of our knowledge, this analysis had never been done before. Hence, as a _first step_ towards providing an answer to such a point (which is also highlighted in the title of the paper), we focused on benchmarking the tracking progress made by the computer vision community in recent years.

### Video Collection.

The video sequences contained in TREK-150 have been sampled from the EPIC-Kitchens-55 (EK-55) dataset [19]. This has been done because EK-55 is currently the largest dataset for understanding human-object interactions in FPV (it provides up to 55 hours of human-object interaction examples). Thanks to its size, it is the only database that provides a significant amount of diverse interaction situations between various people and several different types of objects. Hence, it allowed us to select suitable, diverse tracking sequences that reflect the common scenarios tackled in FPV tasks.

### Bounding Box Annotations.

To represent the spatial localization of objects, we employed axis-aligned bounding-boxes. This design choice for the TREK-150 benchmark is supported by the fact that this representation is widely used in many FPV pipelines [32, 34, 33, 19, 45, 83, 79]. Therefore, computing performance results based on such a representation allows us to correlate them with the results of other FPV tasks that employ the same object representation. Hence, we can better highlight the impact that trackers would have in such contexts. Moreover, we would like to highlight the difficulty that the FPV setting poses to the development of more sophisticated annotations for object categories that appear commonly in FPV scenarios. Figure 6 shows some examples of these. The first two images from the left show the objects “cheese” and “onion” (these are considered as single objects according to the EK-55 annotations [19]), which prevent the determination of the angle for an oriented bounding-box, or even of an accurate segmentation mask, due to their spatial sparsity. The two images on the right present objects for which providing a segmentation is very ambiguous. Indeed, most of the pixels in the image area of the knife (third image) actually belong to foam, while the heavy motion blur happening on the object of the fourth image (where the target is a bottle) prevents the definition of the actual pixels belonging to the object. In all these scenarios, axis-aligned bounding-boxes result in robust target representations that provide a consistent delineation of the object. For these reasons, and to make representations and annotations consistent across the whole dataset, we employed this annotation representation. Moreover, the latest progress of visual tracking algorithms on various benchmarks that use this state representation [91, 66, 35, 59, 67, 31, 41] demonstrates that it provides sufficient information about the target for consistent and reliable performance evaluation.
Furthermore, using more sophisticated target representations would have restricted our analysis [86, 60, 28, 10], since the majority of state-of-the-art trackers output just axis-aligned bounding boxes [11, 23, 40, 68, 6, 7, 39, 21, 46, 80, 54, 84, 71, 53, 97, 22, 8, 93, 24, 42, 92, 18, 16, 98, 9]. Finally, we point out that the proposed axis-aligned bounding-boxes have been carefully and tightly drawn around the visible parts of the objects. Figure 7 shows some examples of the quality of the bounding box annotations of TREK-150 in contrast to the ones available in the popular OTB-100 tracking benchmark.

Figure 7: Examples of the quality of the bounding box annotations contained in TREK-150 in comparison with the ones available in the popular OTB-100 benchmark. TREK-150 provides careful and high-quality annotations that tightly enclose all the target objects.

### Frame Rate.

The videos contained in TREK-150 have a frame rate of 60 FPS. This is inherited from the EK-55 dataset [19], from which the videos are sampled. According to the authors [19], EK-55 has been acquired with such a setting because of the proximity between the camera point of view and the main scene (i.e. the manipulated objects), which causes very fast motion and heavy motion blur when the camera wearer moves (especially when he/she moves the head). We empirically evaluated the fast motion issue by assessing the average normalized motion happening on the frames that include fast motion (FM) (such frames were identified with the automatic procedure defined in [91, 67] to assign the FM attribute). Such a motion quantity has been computed as the distance between the centers of two consecutive ground-truth bounding boxes normalized by the frame size. Considering TREK-150 with the videos at 30 FPS, such a value reaches 0.075. This is higher than the 0.068 obtained for OTB-100, the 0.033 of UAV123, or the 0.049 of NfS considered at 30 FPS. These comparisons demonstrate that the FPV setting effectively includes challenging scenarios due to the faster motion of the targets/scene. Considering the 60 FPS frame rate, the fast motion quantity of TREK-150 is reduced to 0.062, which is comparable to the values obtained in other third-person tracking benchmarks.
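The following is a minimal sketch of the normalized motion quantity described above, assuming ground-truth boxes in (x, y, w, h) format; the function names, the box format, and the per-axis normalization are illustrative assumptions rather than the exact procedure of [91, 67].

```python
def normalized_motion(prev_box, curr_box, frame_w, frame_h):
    """Distance between the centers of two consecutive ground-truth boxes,
    normalized by the frame size. Boxes are (x, y, w, h); the format is an assumption."""
    cx_prev, cy_prev = prev_box[0] + prev_box[2] / 2, prev_box[1] + prev_box[3] / 2
    cx_curr, cy_curr = curr_box[0] + curr_box[2] / 2, curr_box[1] + curr_box[3] / 2
    # Normalize each displacement component by the corresponding frame dimension.
    dx = (cx_curr - cx_prev) / frame_w
    dy = (cy_curr - cy_prev) / frame_h
    return (dx ** 2 + dy ** 2) ** 0.5


def average_fm_motion(gt_boxes, fm_frames, frame_w, frame_h):
    """Average normalized motion over the frame indices flagged with the FM attribute."""
    values = [normalized_motion(gt_boxes[i - 1], gt_boxes[i], frame_w, frame_h)
              for i in fm_frames if i > 0]
    return sum(values) / len(values) if values else 0.0
```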
### Sequence Labels.

To study the performance of trackers under different aspects, the sequences of TREK-150 have been associated with one or more of 17 attributes that indicate the visual variability of the target in the sequence (see Table 2 of the main paper for the details). The extensive usage of this practice [91, 66, 35, 52, 67, 31, 41] has shown that this kind of labeling is sufficient to estimate the trackers’ performance in particular scenarios. We therefore follow such an approach to associate labels with TREK-150’s videos. However, we argue that, with this labeling setting, attention must be paid to how trackers are evaluated. The standard OPE protocol, which has generally been used to perform such evaluations, could lead to less accurate estimates. For example, it could happen that a tracker fails for some event described by an attribute (e.g. FOC) in the first frames of a video, but that the sequence also contains some other event (e.g. MB) at the end. With the score averaging procedure defined by the OPE protocol, the low results achieved due to the first event would also set low scores for the second event, even though the tracker failed only because of the first one. Therefore, the performance estimate for the second attribute would not be realistic. We believe a reasonable option is to use a more robust evaluation protocol such as the multi-start evaluation (MSE). Thanks to its points of initialization, which generate multiple diverse sub-sequences, this protocol allows a tracker to better cover all the possible situations happening along the videos, both forward and backward in time. All the results achieved on the sub-sequences are then averaged to obtain the overall scores on a sequence. We consider the scores computed in this way to be more robust and accurate estimates of the real performance of the trackers. Hence, in this work, we follow such an approach to evaluate trackers over sequence attributes.

Figure 8: Comparison between TREK-150 (last column of plots) and other popular visual tracking benchmarks on the distributions computed for different bounding box characteristics. Each column of plots reports the distribution of bounding box sizes, scale changes, and aspect ratio changes (the x-axis of each plot reports the range of the bounding box statistic).

### Single Object Tracking.

In this paper, we restricted our analysis to the tracking of a single object per video. This has been done because, in the FPV scenario, a person generally interacts through hands with one or two objects at a time [19] (if a person interacts with two objects, they can still be tracked by two single-object trackers). Moreover, focusing on a single object allows us to better analyze all the challenging and relevant factors that characterize the tracking problem in FPV. We believe that future work could investigate the employment of multiple object tracking (MOT) solutions [25] for a general understanding of the position and movement of all objects visible in the scene. We think that the study presented in this paper will give useful insights even for the development of such methods.

### Differences With Other Tracking Benchmarks.

We believe that the proposed TREK-150 benchmark dataset offers _complementary_ features with respect to existing visual tracking benchmarks. Table 1 and Figure 2(a) and (b) of the main paper show that TREK-150 provides complementary characteristics to what is available today to study the performance of visual trackers. Particularly, our proposed dataset offers different distributions of the common challenging factors encountered in other datasets. For example, TREK-150 includes a larger number of examples with occlusions (POC), fast motion (FM), scale change (SC), aspect ratio change (ARC), illumination variation (IV), and motion blur (MB), while it provides a competitive number of scenarios for low resolution (LR), full occlusion (FOC), deformable objects (DEF), and presence of similar objects (SOB). Additionally, even though the 4 new attributes, high resolution (HR), head motion (HM), one-hand interaction (1H), and two-hands interaction (2H), define particular FPV scenarios, we think that they can be of interest even for the visual tracking community. For example, as shown by the second row of images of Figure 7, 1H and 2H can be considered as attributes that define different levels of occlusion, as objects manipulated with two hands generally cause more extended hiding of the targets. Besides these sequence-level features, TREK-150 offers up to 34 target categories which, to the best of our knowledge, have never been studied. As shown in Figures 6 and 7, these objects have challenging appearances (e.g. transparent or reflective objects like lids, bottles, or food boxes) and shapes (e.g.
knives, spoons, cut food) that change dramatically due to the interaction or to the motion induced by the camera wearer. We additionally computed some statistics on the bounding box ground-truth trajectories contained in the proposed dataset. Figure 8 reports these distributions. For comparison, we report the distributions computed on the popular tracking benchmarks VOT2019, UAV123, and OTB-100. As can be noted, our dataset exhibits different distributions, and thus offers different behaviors of the target appearances and motions. In particular, observing the first plot of the last column, it can be noted that TREK-150 has a wider distribution of bounding box sizes, hence making it suitable for the evaluation of trackers with targets of many different sizes. Notably, TREK-150 has a larger number of bounding boxes of greater dimension. The plot just below the first shows that TREK-150 provides more references to assess the trackers’ capabilities in tracking objects that become smaller. Finally, the last plot shows a wider distribution for the aspect ratio change, showing that TREK-150 offers a large variety of examples to evaluate the capabilities of trackers in predicting the shape change of targets. In addition to these characteristics, we think TREK-150 is interesting because it allows the study of visual object tracking in unconstrained, _every-day_ situations.

Table 5: Details of the trackers involved in our evaluation. In the Image Representation column the acronyms stand for: CNN - Convolutional Neural Network; HOG - Histogram of Oriented Gradients; Pixel - Pixel Intensity; Color - Color Names or Intensity. In the Matching strategy column the acronyms stand for: CF - Correlation Filter; CC - Cross Correlation; T-by-D - Tracking by Detection; Reg - Regression; Had - Hadamard Correlation. The ✓ symbol in the Model Update column indicates that the target model is updated during the tracking procedure. The last column reports the tracking method class according to [61] (ST - Short-Term trackers, LT - Long-Term trackers).
Tracker | Venue | Image Representation | Matching | Model Update | [61] Class
---|---|---|---|---|---
MOSSE [11] | CVPR 2010 | Pixel | CF | ✓ | $\text{ST}_{0}$
DSST [23] | BMVC 2014 | HOG+Pixel | CF | ✓ | $\text{ST}_{0}$
KCF [40] | TPAMI 2015 | HOG | CF | ✓ | $\text{ST}_{0}$
MDNet [68] | CVPR 2016 | CNN | T-by-D | ✓ | $\text{ST}_{1}$
Staple [6] | CVPR 2016 | HOG+Color | CF | ✓ | $\text{ST}_{0}$
SiamFC [7] | ECCVW 2016 | CNN | CC | ✗ | $\text{ST}_{0}$
GOTURN [39] | ECCV 2016 | CNN | Reg | ✗ | $\text{ST}_{0}$
ECO [21] | CVPR 2017 | CNN | CF | ✓ | $\text{ST}_{0}$
BACF [46] | ICCV 2017 | HOG | CF | ✓ | $\text{ST}_{0}$
DCFNet [85] | ArXiv 2017 | CNN | CF | ✓ | $\text{ST}_{0}$
VITAL [80] | CVPR 2018 | CNN | T-by-D | ✓ | $\text{ST}_{1}$
STRCF [54] | CVPR 2018 | HOG | CF | ✓ | $\text{ST}_{0}$
MCCTH [84] | CVPR 2018 | HOG+Color | CF | ✓ | $\text{ST}_{0}$
DSLT [58] | ECCV 2018 | CNN | CC | ✓ | $\text{ST}_{0}$
MetaCrest [71] | ECCV 2018 | CNN | CF | ✓ | $\text{ST}_{1}$
SiamRPN++ [53] | CVPR 2019 | CNN | CC | ✗ | $\text{ST}_{0}$
SiamMask [86] | CVPR 2019 | CNN | CC | ✗ | $\text{ST}_{0}$
SiamDW [97] | CVPR 2019 | CNN | CC | ✗ | $\text{ST}_{0}$
ATOM [22] | CVPR 2019 | CNN | CF | ✓ | $\text{ST}_{1}$
DiMP [8] | ICCV 2019 | CNN | CF | ✓ | $\text{ST}_{1}$
SPLT [93] | ICCV 2019 | CNN | CF | ✓ | $\text{LT}_{1}$
UpdateNet [96] | ICCV 2019 | CNN | CC | ✓ | $\text{ST}_{0}$
SiamFC++ [92] | AAAI 2020 | CNN | CC | ✗ | $\text{ST}_{0}$
GlobalTrack [42] | AAAI 2020 | CNN | Had | ✗ | $\text{LT}_{0}$
PrDiMP [24] | CVPR 2020 | CNN | CF | ✓ | $\text{ST}_{1}$
SiamBAN [16] | CVPR 2020 | CNN | CC | ✗ | $\text{ST}_{0}$
D3S [60] | CVPR 2020 | CNN | CF | ✗ | $\text{ST}_{0}$
LTMU [18] | CVPR 2020 | CNN | CF/CC | ✓ | $\text{LT}_{1}$
Ocean [98] | ECCV 2020 | CNN | CC | ✗ | $\text{ST}_{0}$
KYS [9] | ECCV 2020 | CNN | CF | ✓ | $\text{ST}_{1}$
TRASFUST [29] | ACCV 2020 | CNN | Reg | ✗ | $\text{ST}_{1}$

## Appendix B Tracker Details

### Generic Object Trackers Details.

Table 5 reports some additional information about the 31 considered generic-object trackers such as: venue and year of publication; type of image representation used; type of matching strategy; employment of target model updates; and category of tracker according to the classification of [61]. For each tracker, we used the publicly available code and adopted default parameters for evaluation purposes.

### FPV Trackers Details.

In this section, we provide details on the LTMU-F and LTMU-H FPV trackers considered as baselines in our study. For a better understanding, we briefly recap the processing procedure of the LTMU tracker [18]. After being initialized with the target in the first frame of a sequence, at every other frame LTMU first executes the short-term tracker DiMP [8], which tracks the target in a local area of the frame (based on the target’s last known position). The image patch extracted from the bounding box prediction of DiMP is evaluated by an online-learned verification module, which outputs a probability estimate of the target being contained in the patch. Such an estimate is employed to decide whether the short-term tracker is tracking the target or not. If it is, the box predicted by the short-term tracker is given as output for the current frame. Otherwise, a re-detection module is executed to search for the target in the global frame. The detector returns a set of candidate locations that may contain the target, and each of these is checked by the verification module. The candidate patch with the highest confidence is given as output and used as a new target location to reset the short-term tracker.
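The per-frame procedure just described can be summarized with the sketch below. This is a simplified illustration rather than the actual LTMU implementation: the component interfaces (short_term_tracker, verifier, redetector) and the acceptance threshold are assumptions made for the sake of the example.

```python
def ltmu_style_step(frame, short_term_tracker, verifier, redetector, accept_thresh=0.5):
    """One frame of an LTMU-style scheme: local short-term tracking, verification,
    and global re-detection when the verifier rejects the local prediction.
    Interfaces and threshold are illustrative, not the official implementation."""
    box = short_term_tracker.track(frame)           # local search around the last known position
    if verifier.confidence(frame, box) >= accept_thresh:
        return box                                  # short-term prediction accepted
    # Otherwise, search the whole frame with the (FPV-based) re-detection module.
    candidates = redetector.detect(frame)           # e.g. the top-scoring detections
    if not candidates:
        return box                                  # fall back to the last available position
    best = max(candidates, key=lambda b: verifier.confidence(frame, b))
    short_term_tracker.reset(frame, best)           # re-initialize the short-term tracker
    return best
```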
Figure 9: Visual representation of the LTMU [18] scheme performed at every frame, which has been adapted for the development of the baseline FPV trackers LTMU-F and LTMU-H.

In our setting, we employ FPV-based detectors to implement such a re-detection module. For LTMU-F, we employed the EK-55 trained Faster R-CNN [19]. Among the many detections given as output, this module has been set to retain the top 10, based on a ranking of the scores attributed by the detector to each detection. If no detection is given for a frame, the last available position of the target is considered as candidate location. For LTMU-H, we employ the object localization contained in the hand-object interaction detections given by the FPV version of Hands-in-contact [79] to obtain the target candidate locations. Such a solution [79] is implemented as an improved Faster R-CNN which is trained to jointly provide the localization of hands and objects, and their state of interaction. As before, if no detection is given for a frame, the last available position of the target is considered as candidate location. For both methods, the original pre-trained models (made available by the authors), which consider FPV data, have been used. The described setups, whose common scheme is presented in Figure 9, yield two trackers that implement conceptually different strategies for FPV-based object localization. Indeed, the first solution only reasons about finding objects in the scene, while the second reasons in terms of the interaction happening between the camera wearer (i.e. the hands) and the objects. We would like to remark that other FPV trackers (such as the ones described in Section 2 of the main paper) have not been tested on TREK-150 because their implementations are not available.

### Implementation Details.

The evaluations were performed on a machine with an Intel Xeon E5-2690 v4 @ 2.60GHz CPU, 320 GB of RAM, and an NVIDIA TITAN V GPU. We considered the publicly available Python implementations of each tracker and adopted default parameters. Annotations, results of the trackers, and code are available at https://machinelearning.uniud.it/datasets/trek150/.

## Appendix C Experimental Details

### Details On The Generalized Robustness.

The robustness measure was first introduced in [51]. This metric was defined as the number of drifts (i.e. the complete non-overlap between predictions and ground-truths) performed by a visual tracking algorithm. In the latest iteration of the VOT challenge [47], such a robustness measure was revised and defined as the extent of a tracking sequence before the tracker’s failure. Such an extent is determined as the number of frames positively tracked normalized by the total number of frames in the sequence. The failure event is triggered when the overlap between the predicted and ground-truth bounding-boxes becomes lower than a fixed threshold (the value 0.1 is used in [47]). In simpler terms, this measure expresses the fraction of a tracking sequence that is correctly tracked from its beginning. We think this measure is of special interest to the FPV community. Indeed, it can assess the ability of a tracker to maintain reference to the target objects over time.
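The sketch below illustrates this computation, assuming the per-frame overlaps (IoU) between predictions and ground-truths are already available; the function names and the threshold grid are illustrative. The second function anticipates the generalization over multiple failure thresholds, which yields the GSR score discussed in the following paragraphs.

```python
def robustness_extent(overlaps, failure_threshold=0.1):
    """Fraction of the sequence correctly tracked from its beginning:
    frames tracked before the first overlap below the threshold,
    normalized by the sequence length (threshold 0.1 as in [47])."""
    for i, iou in enumerate(overlaps):
        if iou < failure_threshold:
            return i / len(overlaps)
    return 1.0  # no failure occurred


def generalized_success_robustness(overlaps, thresholds=None):
    """Average of the robustness extent over a grid of failure thresholds
    in [0, 0.5] (the GSR score); the grid resolution is an assumption."""
    if thresholds is None:
        thresholds = [t / 100 for t in range(0, 51)]
    return sum(robustness_extent(overlaps, t) for t in thresholds) / len(thresholds)
```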
Since many FPV tasks are devoted to understanding the action performed by the camera wearer or its interaction with objects [32, 34, 33, 19, 73], having solutions capable of maintaining temporally longer references to target nouns can be advantageous for modeling such events. However, we believe that having a single fixed threshold is restrictive, as different applications can make different assumptions on the concept of tracking failure. Therefore, following [91], which proposed to evaluate trackers with plots computed after thresholding bounding-box overlaps with different values, we propose to build a plot considering different overlap thresholds for the determination of failure in the robustness measure [47]. This leads to the creation of the Generalized Success Robustness Plot (Figure 3(c) of the main paper), which reports the different robustness scores for the different thresholds. The latter have been studied only in the range [0, 0.5] because it is common practice, in the computer vision literature, to consider overlaps greater than 50% as positive predictions. Notice that failures could also be defined in terms of center error. In this paper, we focused on overlap-based failures since bounding-box overlap has been shown to be superior for target localization accuracy [99], but future work will investigate the employment of the center error, as this kind of bounding box distance is used in FPV tasks [79]. Moreover, to compare trackers with a single score, following [91] and [67], we compute the AUC of the Generalized Success Robustness Plot, which we refer to as the generalized success robustness (GSR). This value expresses the average of all the scores obtained with the different thresholds. In other words, the GSR score expresses the average successful extent of the predictions of a tracker.

Figure 10: SS, NPS, and GSR performance of the 33 benchmarked trackers on the proposed TREK-150 benchmark under the MSE protocol. The generally low performance confirms the conclusions achieved with the OPE protocol.

### Details On The Evaluation Protocols.

In this section, we give further details on the experimental protocols used for the execution of the trackers. The one-pass evaluation (OPE) protocol, which is detailed in [91], consists of two main stages: (i) initializing a tracker with the ground-truth bounding box of the target in the first frame; (ii) letting the tracker run on every subsequent frame until the end of the sequence and recording the predictions to be considered for the evaluation. For each sequence, predictions and ground-truth bounding boxes are compared according to the employed measures (only for frames where ground-truths are present) to obtain the performance scores. The overall scores, presented in brackets in Figure 3 of the main paper, are obtained by averaging the scores achieved for every sequence. For the implementation of the multi-start evaluation (MSE) protocol, we followed the details given in [47]. For each sequence, different points of initialization (called anchors) separated by 2 seconds (in our setting every 120 frames) are defined. Anchors are always set in the first and last frame of a sequence. Some anchors are shifted forward for a few frames to obtain a more consistent bounding-box for tracker initialization. A tracker is run on each of the sub-sequences yielded by the anchors (in total 1032 sub-sequences are generated), either forward or backward in time depending on the longest sub-sequence the anchor generates. The tracker is initialized with the ground-truth in the first frame of the sub-sequence and run until its end. Then, as for the OPE, predicted and ground-truth bounding boxes are compared to obtain the performance scores for each sub-sequence. Scores for a single sequence are computed by a weighted average where the scores of each sub-sequence are weighted by its length (in number of frames). Similarly, the overall scores for the whole dataset (which are shown in Figure 10) are obtained by a weighted average where each sequence’s score is weighted by the number of frames in that sequence.
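The following sketch illustrates the MSE anchor placement and the weighted averaging of scores just described, assuming per-sub-sequence scores and lengths have already been computed; the names and data layout are illustrative, and the forward shift applied to some anchors is omitted.

```python
def mse_anchors(num_frames, step=120):
    """Anchor frames placed every 2 seconds (120 frames at 60 FPS), always
    including the first and last frame of the sequence."""
    return sorted(set(range(0, num_frames, step)) | {num_frames - 1})


def mse_sequence_score(sub_scores, sub_lengths):
    """Score of one sequence: sub-sequence scores averaged with weights
    proportional to their length in frames."""
    return sum(s * n for s, n in zip(sub_scores, sub_lengths)) / sum(sub_lengths)


def mse_overall_score(sequence_scores, sequence_lengths):
    """Overall score: sequence scores averaged with weights proportional to
    the number of frames in each sequence."""
    return sum(s * n for s, n in zip(sequence_scores, sequence_lengths)) / sum(sequence_lengths)
```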
The real-time evaluation (RTE) protocol has been implemented following the details in [48, 55]. Similar to the OPE protocol, a tracker is initialized with the ground-truth in the first frame of a sequence. Then the tracker is presented with a new frame only after its execution over the previous frame has finished. The newly presented frame is the last frame available at the time instant in which the tracker becomes ready to be executed, considering that frames occur regularly according to the frame rate of the video. In other words, all the frames occurring in the time interval between the start and end time instants of the tracker’s execution are skipped. For all the skipped frames, the last bounding box given by the tracker is used as the location of the target. The sequence and overall scores are ultimately obtained as for the OPE protocol.

### Experiments On The Impact of Trackers In FPV.

In this section, we report more details on the experiments performed to evaluate the impact of trackers in FPV tasks (presented in the paragraph “Do Trackers Already Offer Any Advantage in FPV?” of Section 6 of the main paper). In the first experiment, we assessed the continuous object localization capabilities of an object detector, as this method is usually exploited in many FPV pipelines. We executed the EK-55 trained Faster R-CNN [19] on all the frames of TREK-150, recording in each frame the bounding box of the detection having the same class as the target and the highest confidence score. Such bounding box predictions were then compared to the ground-truth annotations using the considered tracking evaluation measures. This experimental strategy respects the evaluation procedure of the OPE protocol. In this way, we can compare the performance of the tracking approach and the detection approach in providing localization of and temporal reference to objects. In the second experiment, we evaluated the impact of trackers in a video-based hand-object interaction detection setting. Since this paper is focused on objects (visual object tracking), we restricted our study to evaluating the detection of the objects involved in the interactions. To this aim, we first built tracks of hand-object interactions over the sequences of TREK-150. The hand-object interaction detector Hands-in-contact [79] has been executed to obtain sparse interaction detections involving the object defined by the ground-truths of TREK-150. Detections have then been grouped into separate tracks whenever the interval between two detections was longer than 30 frames. The missing detections within a cluster have been filled with TREK-150’s object ground-truth bounding boxes and the most frequent interaction state (i.e. whether the object was in interaction with the left hand, the right hand, or both) appearing in the cluster. Once these references were obtained, we ran the trackers in an OPE-like fashion. Each tracker was initialized in the first frame of a track with the object detection given by Hands-in-contact [79], and then run for the other frames of the track. We then evaluated the performance in a track by the normalized count of frames having intersection-over-union $\geq 0.5$ with the object’s ground-truth. The overall result is obtained by averaging the outcomes over all tracks. This experimental procedure gives us an estimate of the accuracy of the hand-object interaction detection system had trackers been included in its pipeline. More interestingly, it also allows us to build a ranking of the trackers based on the results of a downstream application.
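The sketch below illustrates this per-track evaluation, assuming per-track lists of predicted and ground-truth boxes in (x, y, w, h) format; the function names and box format are illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes; the format is an assumption."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0


def track_score(pred_boxes, gt_boxes, threshold=0.5):
    """Normalized count of frames of a track with IoU >= 0.5 w.r.t. the ground-truth."""
    hits = sum(1 for p, g in zip(pred_boxes, gt_boxes) if iou(p, g) >= threshold)
    return hits / len(gt_boxes)


def overall_score(tracks):
    """Average of the per-track scores; `tracks` is a list of (pred_boxes, gt_boxes) pairs."""
    scores = [track_score(p, g) for p, g in tracks]
    return sum(scores) / len(scores)
```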
Table 6: Performance achieved by the 33 benchmarked trackers on TREK-150 using the RTE protocol.

Tracker | FPS | SS | NPS | GSR
---|---|---|---|---
Ocean | 21 | 0.365 | 0.358 | 0.294
SiamBAN | 24 | 0.360 | 0.366 | 0.313
SiamRPN++ | 23 | 0.362 | 0.356 | 0.293
PrDiMP | 13 | 0.352 | 0.349 | 0.243
DiMP | 16 | 0.336 | 0.331 | 0.224
SiamMask | 23 | 0.335 | 0.333 | 0.298
SiamFC++ | 45 | 0.330 | 0.331 | 0.308
SiamDW | 32 | 0.327 | 0.334 | 0.317
KYS | 12 | 0.327 | 0.317 | 0.237
ATOM | 15 | 0.319 | 0.312 | 0.179
UpdateNet | 21 | 0.311 | 0.297 | 0.295
DCFNet | 49 | 0.299 | 0.286 | 0.335
TRASFUST | 13 | 0.296 | 0.270 | 0.185
SiamFC | 34 | 0.293 | 0.295 | 0.280
LTMU | 8 | 0.284 | 0.257 | 0.169
D3S | 16 | 0.276 | 0.263 | 0.182
BACF | 9 | 0.276 | 0.262 | 0.234
SPLT | 8 | 0.265 | 0.247 | 0.203
STRCF | 10 | 0.264 | 0.250 | 0.218
DSLT | 7 | 0.260 | 0.234 | 0.211
ECO | 15 | 0.252 | 0.231 | 0.173
GlobalTrack | 8 | 0.253 | 0.227 | 0.139
MCCTH | 8 | 0.251 | 0.231 | 0.232
Staple | 13 | 0.249 | 0.236 | 0.169
GOTURN | 44 | 0.247 | 0.242 | 0.119
MOSSE | 26 | 0.227 | 0.190 | 0.141
LTMU-H | 4 | 0.213 | 0.174 | 0.161
MetaCrest | 8 | 0.207 | 0.175 | 0.165
LTMU-F | 4 | 0.205 | 0.161 | 0.162
VITAL | 4 | 0.204 | 0.165 | 0.158
DSST | 2 | 0.191 | 0.145 | 0.161
KCF | 6 | 0.186 | 0.157 | 0.177
MDNet | 1 | 0.185 | 0.140 | 0.161

## Appendix D Additional Results

### MSE Protocol Results.

Figure 10 reports the overall performance of the 33 benchmarked trackers on TREK-150 using the MSE protocol. The overall low performance of all the trackers confirms the conclusions achieved using the OPE protocol. The FPV setting introduces challenging factors for current visual trackers.

### Qualitative Examples.

The first 7 rows of images of Figure 11 present qualitative results of 10 of the generic-object trackers in comparison with the ground-truth (which is identified by the white rectangles). The action performed by the camera wearer is also reported for each sequence. The remaining 4 rows show the qualitative performance of the FPV baseline trackers LTMU-F and LTMU-H in comparison with LTMU and the ground-truth. For a better visualization, a video can be found at https://youtu.be/oX1nICHgEJM.

Figure 11: Qualitative results of some of the studied trackers on the proposed TREK-150 dataset. The first 7 rows of images show the qualitative performance of 10 of the selected generic-object trackers, while the last 4 rows show the results of the baseline FPV trackers LTMU-F and LTMU-H in comparison with LTMU. For a better visualization, a video can be found at https://youtu.be/oX1nICHgEJM.

### Per Attribute/Action Results.

Figure 12 presents the SS, NPS, and GSR scores achieved by the 33 trackers considering the attributes assigned to the sequences. Similarly, Figures 13 and 14 report the results for the whole batch of trackers with respect to action verbs and target nouns.
### RTE Protocol Results.

Table 6 reports the FPS, SS, NPS, and GSR performance of all 33 benchmarked trackers obtained using the RTE protocol. As stated in the main paper, offline siamese trackers emerge as the best solution in this scenario. Online deep discriminative trackers achieve comparable results in SS and NPS, but demonstrate a larger drop in performance in the GSR score, showing that online learning mechanisms affect this performance in the real-time setting.

Table 7: Performance of the offline trackers SiamFC and SiamRPN++ on a subset of 50 sequences of TREK-150 without and with fine-tuning on the remaining 100 videos.

Tracker | Fine-tuning | OPE SS | OPE NPS | OPE GSR | MSE SS | MSE NPS | MSE GSR
---|---|---|---|---|---|---|---
SiamFC | ✗ | 0.311 | 0.332 | 0.317 | 0.307 | 0.317 | 0.307
SiamFC | ✓ | 0.267 | 0.275 | 0.278 | 0.287 | 0.305 | 0.292
SiamRPN++ | ✗ | 0.384 | 0.395 | 0.377 | 0.367 | 0.385 | 0.333
SiamRPN++ | ✓ | 0.348 | 0.407 | 0.313 | 0.336 | 0.406 | 0.314

### Adaptation Of Offline Trackers.

Many current visual trackers employ deep learning architectures. Among these, trackers based on siamese neural networks have emerged as the most popular approaches nowadays. These trackers are said to be offline (e.g. SiamFC [7], SiamRPN++ [53], SiamMask [86], SiamBAN [16]) because they are trained to track objects on large-scale tracking datasets [26, 67, 41, 31], and do not use online adaptation mechanisms at test time. In our evaluation, such trackers have been employed as they are described and trained in their original papers. Given their generally low performance, one could wonder how these trackers would perform if knowledge about the FPV domain were exploited for learning. Our TREK-150 dataset, which is designed to evaluate the progress of visual tracking solutions in FPV, does not provide a large-scale database of learning examples as needed by these methods. Instead, it aligns well with real-world datasets where millions of frames are not available for training. In such scenarios, the reasonable options suggested by the machine learning community are to use trackers as they are, relying on their general knowledge, or to adapt them through fine-tuning using a smaller training set. We tried the second strategy by randomly splitting TREK-150 into a training and a test set of 100 and 50 videos, respectively. We fine-tuned the popular offline trackers SiamFC and SiamRPN++ on the training set according to their original learning strategy. We then tested the fine-tuned versions on the test set; the results are reported in Table 7. The table shows that fine-tuning leads to substantial overfitting that causes the performance to drop in general. These outcomes confirm that the decision to evaluate offline trackers as they are is the right one given the current lack of large-scale FPV tracking datasets. Moreover, given the overall results presented in this paper, we hypothesize that visual tracking in FPV will require more than just large-scale training. We hope the results presented in this section will encourage the community to work on domain adaptation techniques for offline trackers, which are currently starting to be investigated [30].

Figure 12: SS, NPS, and GSR results per sequence attribute achieved by the 33 benchmarked trackers on the TREK-150 benchmark.

Figure 13: SS, NPS, and GSR results achieved by the 33 benchmarked trackers on the TREK-150 benchmark considering each verb associated with the action performed by the camera wearer.

Figure 14: SS, NPS, and GSR results achieved by the 33 benchmarked trackers on the TREK-150 benchmark considering the different target categories.